Top 34 Kafka Manager Windows Best 144 Answer

You are looking for information, articles, and knowledge about the topic kafka manager windows on Google, but you cannot find the information you need! Here is the best content, compiled by the https://chewathai27.com/to team, along with other related topics such as: kafka manager windows, Kafka manager, Kafka Manager tool, Kafka-manager-docker, Kafka UI, Kafka Manager docker-compose, Slack kafka, Kafka-eagle, Kafka manager install Ubuntu


Part 4 – Install Kafka manager | Kafka for beginners


GitHub – yahoo/CMAK: CMAK is a tool for managing Apache Kafka clusters

  • Article author: github.com
  • Reviews from users: 32584 ⭐ Ratings
  • Top rated: 4.1 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about GitHub – yahoo/CMAK: CMAK is a tool for managing Apache Kafka clusters CMAK is a tool for managing Apache Kafka clusters. Contribute to yahoo/CMAK development by creating an account on GitHub. …
  • Most searched keywords: Whether you are looking for GitHub – yahoo/CMAK: CMAK is a tool for managing Apache Kafka clusters CMAK is a tool for managing Apache Kafka clusters. Contribute to yahoo/CMAK development by creating an account on GitHub. CMAK is a tool for managing Apache Kafka clusters. Contribute to yahoo/CMAK development by creating an account on GitHub.
  • Table of Contents:

Latest commit

Git stats

Files

README.md

Requirements

Configuration

Deployment

Starting the service

Starting the service with Security

Packaging

Credits

Name and Management

License

Consumer/Producer Lag

Migration from Kafka Manager to CMAK

About

Releases 5

Packages 0

Contributors 73

Languages

Footer

GitHub – yahoo/CMAK: CMAK is a tool for managing Apache Kafka clusters

Read More

Setting up Kafka management for kafka cluster | CodeForGeek

  • Article author: codeforgeek.com
  • Reviews from users: 22583 ⭐ Ratings
  • Top rated: 3.8 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Setting up Kafka management for kafka cluster | CodeForGeek You need to have Java v8 installed in your system before installing the Kafka manager. Download the Kafka manager from Github. … If you have downloaded the zip … …
  • Most searched keywords: Whether you are looking for Setting up Kafka management for kafka cluster | CodeForGeek You need to have Java v8 installed in your system before installing the Kafka manager. Download the Kafka manager from Github. … If you have downloaded the zip … So far we have set up the Kafka cluster with multiple broker and multiple zookeepers. We need a way to see our configuration in a presentable manner.
  • Table of Contents:

Installing Kafka manager

Adding a new cluster in Kafka manager

Recent Posts

Favorite Sites

Setting up Kafka management for kafka cluster | CodeForGeek

Read More

Install Kafka manager | Kafka for beginners Part 4

  • Article author: morioh.com
  • Reviews from users: 20227 ⭐ Ratings
  • Top rated: 3.6 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Install Kafka manager | Kafka for beginners Part 4 In this Guide we will learn about installing Kafka manager. We will clone the … blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows … …
  • Most searched keywords: Whether you are looking for Install Kafka manager | Kafka for beginners Part 4 In this Gue we will learn about installing Kafka manager. we will clone the … blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows … In this Guide we will learn about installing Kafka manager. we will clone the repository and then we will compile the repository.
  • Table of Contents:

What is GEEK

Buddha Community

Plpgsql Check Extension That Allows to Check Plpgsql Source Code

plpgsql_check

Arguments

Triggers

Mass check

Configuration

Dynamic SQL

Refcursors

Temporary tables

Coverage metrics

Note

Attention – SECURITY

Supported pragmas

Compilation on Ubuntu

Compilation plpgsql_check on Windows

Checked on

A Pure PHP Implementation Of The MessagePack Serialization Format

Features

Installation

Usage

Exceptions

Tests

License

Install Kafka manager Kafka for beginners Part 4

How to install PgAdmin 4 on CentOS 8

Install pgAdmin4 on CentOS 8

Install Kafka manager | Kafka for beginners Part 4

Read More

How to install Kafka Manager for Managing kafka cluster – devopszones

  • Article author: www.devopszones.com
  • Reviews from users: 16151 ⭐ Ratings
  • Top rated: 3.3 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about
    How to install Kafka Manager for Managing kafka cluster – devopszones
    Install of Yahoo Kafka Manager: · Install java · Unzip kafka-manager: · Build kafka-manager · Copy package and Unzip: · Start the Service using below … …
  • Most searched keywords: Whether you are looking for
    How to install Kafka Manager for Managing kafka cluster – devopszones
    Install of Yahoo Kafka Manager: · Install java · Unzip kafka-manager: · Build kafka-manager · Copy package and Unzip: · Start the Service using below … How to install Kafka Manager for Managing kafka clusterLinux,Solaris,Devops,Monitoring,Kafka and Elasticsearch how-tos tutorials,troubleshooting tips, commands : Devopszones
  • Table of Contents:

Recent Articles

How to install Kafka Manager for Managing kafka cluster

Contributors

Please Support this website

Devopszones Page

Popular

Recent

Comments

Frequent Topics


How to install Kafka Manager for Managing kafka cluster – devopszones

Read More

How to install Kafka Manager for Managing kafka cluster – devopszones

  • Article author: medium.com
  • Reviews from users: 31694 ⭐ Ratings
  • Top rated: 3.1 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about
    How to install Kafka Manager for Managing kafka cluster – devopszones
    Download kafka-manager — https://github.com/yahoo/kafka-manager/releases. “Kafka Manager UI Setup” is published by Bucky Ninja. …
  • Most searched keywords: Whether you are looking for
    How to install Kafka Manager for Managing kafka cluster – devopszones
    Download kafka-manager — https://github.com/yahoo/kafka-manager/releases. “Kafka Manager UI Setup” is published by Bucky Ninja. How to install Kafka Manager for Managing kafka clusterLinux,Solaris,Devops,Monitoring,Kafka and Elasticsearch how-tos tutorials,troubleshooting tips, commands : Devopszones
  • Table of Contents:

Recent Articles

How to install Kafka Manager for Managing kafka cluster

Contributors

Please Support this website

Devopszones Page

Popular

Recent

Comments

Frequent Topics


How to install Kafka Manager for Managing kafka cluster – devopszones

Read More

Set Up and Run Apache Kafka on Windows

  • Article author: www.confluent.io
  • Reviews from users: 22577 ⭐ Ratings
  • Top rated: 3.0 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Set Up and Run Apache Kafka on Windows How to Run Apache Kafka on Windows · Install WSL 2. Enable the Windows Subsystem for Linux; Enable the Virtual Machine feature; Get the Linux … …
  • Most searched keywords: Whether you are looking for Set Up and Run Apache Kafka on Windows How to Run Apache Kafka on Windows · Install WSL 2. Enable the Windows Subsystem for Linux; Enable the Virtual Machine feature; Get the Linux … Is Windows your favorite development environment? Do you want to run Apache Kafka® on Windows? Thanks to the Windows Subsystem for Linux 2 (WSL 2), now you can, and with
  • Table of Contents:

Set up your environment

Kafka and Windows: The good, the bad and the ugly

You’re just getting started!

Kafka on Windows: What made this possible

Did you like this blog post? Share it now

More Articles Like This

Set Up and Run Apache Kafka on Windows

Read More

Set Up and Run Apache Kafka on Windows

  • Article author: towardsdatascience.com
  • Reviews from users: 12354 ⭐ Ratings
  • Top rated: 3.2 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Set Up and Run Apache Kafka on Windows The tool runs on Windows, Linux and Mac OS. Comparison and Conclusions. If you cannot afford commercial licenses then your options are Yahoo Kafka Manager, … …
  • Most searched keywords: Whether you are looking for Set Up and Run Apache Kafka on Windows The tool runs on Windows, Linux and Mac OS. Comparison and Conclusions. If you cannot afford commercial licenses then your options are Yahoo Kafka Manager, … Is Windows your favorite development environment? Do you want to run Apache Kafka® on Windows? Thanks to the Windows Subsystem for Linux 2 (WSL 2), now you can, and with
  • Table of Contents:

Set up your environment

Kafka and Windows: The good, the bad and the ugly

You’re just getting started!

Kafka on Windows: What made this possible

Did you like this blog post? Share it now

More Articles Like This

Set Up and Run Apache Kafka on Windows

Read More

Connecting to Kafka Manager and Viewing Kafka Information_Distributed Message Service for Kafka_User Guide_Accessing a Kafka Instance_HUAWEI CLOUD

  • Article author: support.huaweicloud.com
  • Reviews from users: 6341 ⭐ Ratings
  • Top rated: 4.9 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Connecting to Kafka Manager and Viewing Kafka Information_Distributed Message Service for Kafka_User Guide_Accessing a Kafka Instance_HUAWEI CLOUD Kafka Manager is an open-source tool for managing Kafka. It can be used only through a web browser. In Kafka Manager, you can view the … …
  • Most searched keywords: Whether you are looking for Connecting to Kafka Manager and Viewing Kafka Information_Distributed Message Service for Kafka_User Guide_Accessing a Kafka Instance_HUAWEI CLOUD Kafka Manager is an open-source tool for managing Kafka. It can be used only through a web browser. In Kafka Manager, you can view the … Connecting to Kafka Manager and Viewing Kafka Information,Accessing a Kafka Instance,User GuideKafka Manager is an open-source tool for managing Kafka. It can be used only through a web browser. In Kafka Manager, you can view the monitoring statistics and broker in
  • Table of Contents:
Connecting to Kafka Manager and Viewing Kafka Information_Distributed Message Service for Kafka_User Guide_Accessing a Kafka Instance_HUAWEI CLOUD

Read More

Kafka Manager — Framework Repositories 1.0 documentation

  • Article author: frameworks.readthedocs.io
  • Reviews from users: 13649 ⭐ Ratings
  • Top rated: 4.0 ⭐
  • Lowest rated: 1 ⭐
  • Summary of article content: Articles about Kafka Manager — Framework Repositories 1.0 documentation kafka-manager.zkhosts=”192.168.100.163:2181,192.168.100.164:2181,192.168.100.165:2181″ … Windows¶. bin\kafka-manager.bat. http://localhost:9000/ … …
  • Most searched keywords: Whether you are looking for Kafka Manager — Framework Repositories 1.0 documentation kafka-manager.zkhosts=”192.168.100.163:2181,192.168.100.164:2181,192.168.100.165:2181″ … Windows¶. bin\kafka-manager.bat. http://localhost:9000/ …
  • Table of Contents:

Configuration¶

Starting the service¶

References¶

Kafka Manager — Framework Repositories 1.0 documentation

Read More


See more articles in the same category here: Chewathai27.com/to/blog.

yahoo/CMAK: CMAK is a tool for managing Apache Kafka clusters

CMAK (Cluster Manager for Apache Kafka, previously known as Kafka Manager)

CMAK (previously known as Kafka Manager) is a tool for managing Apache Kafka clusters. See below for details about the name change.

CMAK supports the following:

Manage multiple clusters

Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)

Run preferred replica election

Generate partition assignments with option to select brokers to use

Run reassignment of partition (based on generated assignments)

Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+)

Delete topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker config)

Topic list now indicates topics marked for deletion (only supported on 0.8.2+)

Batch generate partition assignments for multiple topics with option to select brokers to use

Batch run reassignment of partition for multiple topics

Add partitions to existing topic

Update config for existing topic

Optionally enable JMX polling for broker level and topic level metrics.

Optionally filter out consumers that do not have ids/ owners/ & offsets/ directories in zookeeper.

Cluster Management

Topic List

Topic View

Consumer List View

Consumed Topic View

Broker List

Broker View

Requirements

Configuration

The minimum configuration is the zookeeper hosts which are to be used for CMAK (pka kafka manager) state. This can be found in the application.conf file in the conf directory. The same file will be packaged in the distribution zip file; you may modify settings after unzipping the file on the desired server.

cmak.zkhosts="my.zookeeper.host.com:2181"

You can specify multiple zookeeper hosts by comma delimiting them, like so:

cmak.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

Alternatively, use the environment variable ZK_HOSTS if you don’t want to hardcode any values.

ZK_HOSTS="my.zookeeper.host.com:2181"
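
For example, a minimal sketch of starting CMAK with the variable set inline (assuming an already-extracted distribution with the bin/cmak script described below):

ZK_HOSTS="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181" bin/cmak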

You can optionally enable/disable the following functionality by modifying the default list in application.conf :

application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]

KMClusterManagerFeature – allows adding, updating, deleting cluster from CMAK (pka Kafka Manager)

KMTopicManagerFeature – allows adding, updating, deleting topic from a Kafka cluster

KMPreferredReplicaElectionFeature – allows running of preferred replica election for a Kafka cluster

KMReassignPartitionsFeature – allows generating partition assignments and reassigning partitions
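
For example, to run a read-only style instance you could keep only the cluster-management and preferred-replica-election features. This is a sketch of the idea, not a configuration taken from the README:

application.features=["KMClusterManagerFeature","KMPreferredReplicaElectionFeature"]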

Consider setting these parameters for larger clusters with jmx enabled :

cmak.broker-view-thread-pool-size=<3 * number_of_brokers>

cmak.broker-view-max-queue-size=<3 * total # of partitions across all topics>

cmak.broker-view-update-seconds=<cmak.broker-view-max-queue-size / (10 * number_of_brokers)>

Here is an example for a kafka cluster with 10 brokers, 100 topics, with each topic having 10 partitions giving 1000 total partitions with JMX enabled :

cmak.broker-view-thread-pool-size=30

cmak.broker-view-max-queue-size=3000

cmak.broker-view-update-seconds=30

The following control the consumer offset cache's thread pool and queue:

cmak.offset-cache-thread-pool-size=<default is # of processors>

cmak.offset-cache-max-queue-size=<default is 1000>

cmak.kafka-admin-client-thread-pool-size=<default is # of processors>

cmak.kafka-admin-client-max-queue-size=<default is 1000>

You should increase the above for a large number of consumers with consumer polling enabled, though it mainly affects ZK-based consumer polling.

Kafka managed consumer offsets are now consumed by KafkaManagedOffsetCache from the "__consumer_offsets" topic. Note, this has not been tested with a large number of offsets being tracked. There is a single thread per cluster consuming this topic, so it may not be able to keep up with a large number of offsets being pushed to the topic.

Authenticating a User with LDAP

Warning, you need to have SSL configured with CMAK (pka Kafka Manager) to ensure your credentials aren’t passed unencrypted. Authenticating a User with LDAP is possible by passing the user credentials with the Authorization header. LDAP authentication is done on first visit, if successful, a cookie is set. On next request, the cookie value is compared with credentials from Authorization header. LDAP support is through the basic authentication filter.
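
As an illustration of passing credentials with the Authorization header (the hostname and credentials here are hypothetical):

curl -u admin:secret https://cmak.example.com:9000/

# equivalent request with an explicit basic-auth Authorization header
curl -H "Authorization: Basic $(printf 'admin:secret' | base64)" https://cmak.example.com:9000/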

Configure basic authentication

basicAuthentication.enabled=true

basicAuthentication.realm=<basic authentication realm>

Encryption parameters (optional, otherwise randomly generated on startup):

basicAuthentication.salt="some-hex-string-representing-byte-array"

basicAuthentication.iv="some-hex-string-representing-byte-array"

basicAuthentication.secret="my-secret-string"

Configure LDAP/LDAPS authentication

basicAuthentication.ldap.enabled=<Boolean flag to enable/disable LDAP authentication>

basicAuthentication.ldap.server=<fqdn of LDAP server>

basicAuthentication.ldap.port=<port of LDAP server>

basicAuthentication.ldap.username=<LDAP search username>

basicAuthentication.ldap.password=<LDAP search password>

basicAuthentication.ldap.search-base-dn=<LDAP search base>

basicAuthentication.ldap.search-filter=<LDAP search filter>

basicAuthentication.ldap.connection-pool-size=<number of connections to LDAP server>

basicAuthentication.ldap.ssl=<Boolean flag to enable/disable LDAPS>

(Optional) Limit access to a specific LDAP Group

basicAuthentication.ldap.group-filter=<LDAP group filter>

basicAuthentication.ldap.ssl-trust-all=<Boolean flag to allow non-expired invalid certificates>

Example (Online LDAP Test Server):

basicAuthentication.ldap.enabled=true

basicAuthentication.ldap.server="ldap.forumsys.com"

basicAuthentication.ldap.port=389

basicAuthentication.ldap.username="cn=read-only-admin,dc=example,dc=com"

basicAuthentication.ldap.password="password"

basicAuthentication.ldap.search-base-dn="dc=example,dc=com"

basicAuthentication.ldap.search-filter="(uid=$capturedLogin$)"

basicAuthentication.ldap.group-filter="cn=allowed-group,ou=groups,dc=example,dc=com"

basicAuthentication.ldap.connection-pool-size=10

basicAuthentication.ldap.ssl=false

basicAuthentication.ldap.ssl-trust-all=false

Deployment

The command below will create a zip file which can be used to deploy the application.

./sbt clean dist

Please refer to play framework documentation on production deployment/configuration.

If java is not in your path, or you need to build against a specific java version, please use the following (the example assumes zulu java11):

$ PATH=/usr/lib/jvm/zulu-11-amd64/bin:$PATH \
  JAVA_HOME=/usr/lib/jvm/zulu-11-amd64 \
  /path/to/sbt -java-home /usr/lib/jvm/zulu-11-amd64 clean dist

This ensures that the 'java' and 'javac' binaries in your path are first looked up in the correct location. Next, for all downstream tools that only listen to JAVA_HOME, it points them to the java11 location. Lastly, it tells sbt to use the java11 location as well.

Starting the service

After extracting the produced zipfile, and changing the working directory to it, you can run the service like this:

$ bin/cmak

By default, it will choose port 9000. This is overridable, as is the location of the configuration file. For example:

$ bin/cmak -Dconfig.file=/path/to/application.conf -Dhttp.port=8080

Again, if java is not in your path, or you need to run against a different version of java, add the -java-home option as follows:

$ bin/cmak -java-home /usr/lib/jvm/zulu-11-amd64

Starting the service with Security

To add JAAS configuration for SASL, add the config file location at start:

$ bin/cmak -Djava.security.auth.login.config=/path/to/my-jaas.conf

NOTE: Make sure the user running CMAK (pka kafka manager) has read permissions on the jaas config file

Packaging

If you’d like to create a Debian or RPM package instead, you can run one of:

sbt debian:packageBin
sbt rpm:packageBin

Credits

Most of the utils code has been adapted to work with Apache Curator from Apache Kafka.

Name and Management

CMAK was renamed from its previous name due to this issue. CMAK is designed to be used with Apache Kafka and is offered to support the needs of the Kafka community. This project is currently managed by employees at Verizon Media and the community who supports this project.

License

Licensed under the terms of the Apache License 2.0. See accompanying LICENSE file for terms.

Consumer/Producer Lag

Producer offset is polled. Consumer offset is read from the offset topic for Kafka based consumers. This means the reported lag may be negative, since we are consuming the offset from the offset topic faster than polling the producer offset. This is normal and not a problem.

Migration from Kafka Manager to CMAK

Setting up Kafka management for kafka cluster

So far we have set up the Kafka cluster with multiple brokers and multiple ZooKeeper nodes. We need a way to see our configuration in a presentable manner.

In this article, we are going to set up the Kafka management software to manage and overview our cluster.

The software we are going to use is called Kafka manager developed by folks at Yahoo.

Installing Kafka manager

You need to have Java v8 installed in your system before installing the Kafka manager.

Download the Kafka manager from Github.

git clone https://github.com/yahoo/kafka-manager

Or

wget https://github.com/yahoo/kafka-manager/archive/1.3.3.22.zip

If you have downloaded the zip file, unzip it using the unzip utility of Ubuntu.
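
For example (the archive name assumes the 1.3.3.22 release downloaded above; the extracted folder name may differ):

unzip 1.3.3.22.zip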

Once the download is completed, we need to build the Kafka manager.

Switch to the Kafka manager folder and run this command.

$ cd kafka-manager

$ ./sbt clean dist

Then go ahead and grab a coffee. This will take some time.

Once the build is completed, you should be seeing a target directory created inside the Kafka manager folder.

Before we start the Kafka manager, we need to do some configuration.

Open up the application.conf file, which is in the conf directory.

$ vi conf/application.conf

Change the following values:

kafka-manager.zkhosts="localhost:2181"

basicAuthentication.username="admin"

basicAuthentication.password="password"

This is a basic configuration to get you started.
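
Note that in stock kafka-manager builds these credentials usually only take effect when basic authentication is switched on; if no login prompt appears, check for a line like the following (an assumption about your build, not part of the original article):

basicAuthentication.enabled=true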

Now run the Kafka manager. First switch to this directory.

$ cd target/universal

You should see a zip file created, which is our build. Unzip it and then switch to the folder.

$ cd kafka-manager-1.3.3.22

Now run the Kafka manager using the following command.

$ bin/kafka-manager

By default, it runs on port 9000. Open Kafka manager from your local machine by typing

<your-server-ip>:9000

Type in the username and password you have set in the config. You should be seeing a Kafka manager screen.

Adding a new cluster in Kafka manager

From the Cluster menu, click on Add Cluster button. Give it a proper name and type the zookeeper host addresses separated by commas.

In our case, we have 3 zookeepers running on 3 EC2 instances. So add all three addresses like this.

<zookeeper-1-ip>:2181,<zookeeper-2-ip>:2181,<zookeeper-3-ip>:2181

Check the Enable JMX Polling button and click on save. Kafka manager will register the cluster and provide you with all the information about it.
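
JMX polling only reports broker metrics if the brokers themselves expose JMX. A common way to do that (a convention of the Kafka start scripts, not something covered by this article) is to export JMX_PORT before starting each broker:

JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties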

Now you should be seeing cluster information like this.

You can view topics, brokers and their profiling information using Kafka manager.

You can create, list topics too.

Kafka manager is a really simple and easy-to-set-up tool for our Kafka cluster.

This article is a part of a series, check out other articles here:

1: What is Kafka

2: Setting Up Zookeeper Cluster for Kafka in AWS EC2

3: Setting up Multi-Broker Kafka in AWS EC2

4: Setting up Authentication in Multi-broker Kafka cluster in AWS EC2

5: Setting up Kafka management for Kafka cluster

6: Capacity Estimation for Kafka Cluster in production

7: Performance testing Kafka cluster

Kafka for beginners Part 4

plpgsql_check

I founded this project because I wanted to publish the code I wrote over the last two years, when I tried to write enhanced checking for PostgreSQL upstream. It was not fully successful – integration into upstream requires some larger plpgsql refactoring – probably it will not be done in the next years (now is Dec 2013). But the written code is fully functional and can be used in production (and it is used in production). So I created this extension to be available for all plpgsql developers.

If you like it and would like to join the development of this extension, register yourself with the postgresql extension hacking Google group.

Features

check fields of referenced database objects and types inside embedded SQL

using correct types of function parameters

unused variables and function arguments, unmodified OUT arguments

partial detection of dead code (due to a RETURN command)

detection of missing RETURN command in a function

tries to identify unwanted hidden casts, which can be a performance issue (like unused indexes)

possibility to collect relations and functions used by a function

possibility to check EXECUTE statements against SQL injection vulnerability

I invite any ideas, patches, and bug reports.

plpgsql_check is the next generation of plpgsql_lint. It allows checking source code by an explicit call of plpgsql_check_function.

PostgreSQL 10, 11, 12, 13 and 14 are supported.

The SQL statements inside PL/pgSQL functions are checked by a validator for semantic errors. These errors can be found by plpgsql_check_function:

Active mode

postgres=# CREATE EXTENSION plpgsql_check;
LOAD
postgres=# CREATE TABLE t1(a int, b int);
CREATE TABLE

postgres=# CREATE OR REPLACE FUNCTION public.f1()
RETURNS void
LANGUAGE plpgsql
AS $function$
DECLARE r record;
BEGIN
  FOR r IN SELECT * FROM t1
  LOOP
    RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
  END LOOP;
END;
$function$;
CREATE FUNCTION

postgres=# select f1(); -- execution doesn't find a bug due to empty table t1
 f1
────

(1 row)

postgres=# \x
Expanded display is on.
postgres=# select * from plpgsql_check_function_tb('f1()');
─[ RECORD 1 ]───────────────────────────
functionid │ f1
lineno     │ 6
statement  │ RAISE
sqlstate   │ 42703
message    │ record "r" has no field "c"
detail     │ [null]
hint       │ [null]
level      │ error
position   │ 0
query      │ [null]

postgres=# \sf+ f1
CREATE OR REPLACE FUNCTION public.f1()
 RETURNS void
 LANGUAGE plpgsql
1       AS $function$
2       DECLARE r record;
3       BEGIN
4         FOR r IN SELECT * FROM t1
5         LOOP
6           RAISE NOTICE '%', r.c; -- there is bug - table t1 missing "c" column
7         END LOOP;
8       END;
9       $function$

Function plpgsql_check_function() has three possible formats: text, json or xml

select * from plpgsql_check_function('f1()', fatal_errors := false);

                         plpgsql_check_function
------------------------------------------------------------------------
 error:42703:4:SQL statement:column "c" of relation "t1" does not exist
 Query: update t1 set c = 30
 --                   ^
 error:42P01:7:RAISE:missing FROM-clause entry for table "r"
 Query: SELECT r.c
 --            ^
 error:42601:7:RAISE:too few parameters specified for RAISE
(7 rows)

postgres=# select * from plpgsql_check_function('fx()', format:='xml');
                     plpgsql_check_function
────────────────────────────────────────────────────────────────
 <Function oid="16400">
   <Issue>
     <Level>error</Level>
     <Sqlstate>42P01</Sqlstate>
     <Message>relation "foo111" does not exist</Message>
     <Stmt lineno="3">RETURN</Stmt>
     <Query>SELECT (select a from foo111)</Query>
   </Issue>
 </Function>
(1 row)

Arguments

You can set the level of warnings via the function's parameters:

Mandatory arguments

function name or function signature – these functions require a function specification. Any function in PostgreSQL can be specified by Oid, by name, or by signature. When you know the oid or the complete function signature, you can use a regprocedure type parameter like 'fx()'::regprocedure or 16799::regprocedure. A possible alternative is using the name only, when the function's name is unique – like 'fx'. When the name is not unique or the function doesn't exist, an error is raised.
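
For example, all of the following are valid ways to point the checker at the same function (the names and the oid are illustrative only):

select * from plpgsql_check_function('fx()');
select * from plpgsql_check_function('fx()'::regprocedure);
select * from plpgsql_check_function(16799::regprocedure);
select * from plpgsql_check_function('fx'); -- only when the name is unique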

Optional arguments

relid DEFAULT 0 – oid of relation assigned with trigger function. It is necessary for check of any trigger function.

fatal_errors boolean DEFAULT true – stop on first error

other_warnings boolean DEFAULT true – show warnings like a different number of attributes in an assignment on the left and right side, a variable overlapping a function's parameter, unused variables, unwanted casting, ..

extra_warnings boolean DEFAULT true – show warnings like missing RETURN , shadowed variables, dead code, never read (unused) function’s parameter, unmodified variables, modified auto variables, ..

performance_warnings boolean DEFAULT false – performance related warnings like a declared type with a type modifier, casting, implicit casts in a where clause (can be a reason why an index is not used), ..

security_warnings boolean DEFAULT false – security related checks like SQL injection vulnerability detection

anyelementtype regtype DEFAULT 'int' – a real type used instead of the anyelement type

anyenumtype regtype DEFAULT '-' – a real type used instead of the anyenum type

anyrangetype regtype DEFAULT 'int4range' – a real type used instead of the anyrange type

anycompatibletype DEFAULT 'int' – a real type used instead of the anycompatible type

anycompatiblerangetype DEFAULT 'int4range' – a real type used instead of the anycompatible range type

without_warnings DEFAULT false – disable all warnings

all_warnings DEFAULT false – enable all warnings

newtable DEFAULT NULL, oldtable DEFAULT NULL – the names of the NEW or OLD transition tables. These parameters are required when transition tables are used.

Triggers

When you want to check any trigger, you have to enter a relation that will be used together with the trigger function:

CREATE TABLE bar(a int, b int);

postgres=# \sf+ foo_trg
CREATE OR REPLACE FUNCTION public.foo_trg()
 RETURNS trigger
 LANGUAGE plpgsql
1       AS $function$
2       BEGIN
3         NEW.c := NEW.a + NEW.b;
4         RETURN NEW;
5       END;
6       $function$

Missing relation specification

postgres=# select * from plpgsql_check_function('foo_trg()');
ERROR:  missing trigger relation
HINT:  Trigger relation oid must be valid

Correct trigger checking (with specified relation)

postgres=# select * from plpgsql_check_function('foo_trg()', 'bar');
                 plpgsql_check_function
--------------------------------------------------------
 error:42703:3:assignment:record "new" has no field "c"
(1 row)

For triggers with transitive tables you can set a oldtable or newtable parameters:

create or replace function footab_trig_func()
returns trigger as $$
declare x int;
begin
  if false then
    -- should be ok;
    select count(*) from newtab into x;
    -- should fail;
    select count(*) from newtab where d = 10 into x;
  end if;
  return null;
end;
$$ language plpgsql;

select * from plpgsql_check_function('footab_trig_func', 'footab', newtable := 'newtab');

Mass check

You can use plpgsql_check_function for mass checking of functions and mass checking of triggers. Please test the following queries:

-- check all nontrigger plpgsql functions
SELECT p.oid, p.proname, plpgsql_check_function(p.oid)
   FROM pg_catalog.pg_namespace n
   JOIN pg_catalog.pg_proc p ON pronamespace = n.oid
   JOIN pg_catalog.pg_language l ON p.prolang = l.oid
  WHERE l.lanname = 'plpgsql' AND p.prorettype <> 2279;

or

SELECT p.proname, tgrelid::regclass, cf.*
   FROM pg_proc p
   JOIN pg_trigger t ON t.tgfoid = p.oid
   JOIN pg_language l ON p.prolang = l.oid
   JOIN pg_namespace n ON p.pronamespace = n.oid,
   LATERAL plpgsql_check_function(p.oid, t.tgrelid) cf
  WHERE n.nspname = 'public' AND l.lanname = 'plpgsql'

or

-- check all plpgsql functions (functions or trigger functions with defined triggers)
SELECT
    (pcf).functionid::regprocedure, (pcf).lineno, (pcf).statement,
    (pcf).sqlstate, (pcf).message, (pcf).detail, (pcf).hint, (pcf).level,
    (pcf)."position", (pcf).query, (pcf).context
FROM
(
    SELECT plpgsql_check_function_tb(pg_proc.oid, COALESCE(pg_trigger.tgrelid, 0)) AS pcf
    FROM pg_proc
    LEFT JOIN pg_trigger ON (pg_trigger.tgfoid = pg_proc.oid)
    WHERE
        prolang = (SELECT lang.oid FROM pg_language lang WHERE lang.lanname = 'plpgsql') AND
        pronamespace <> (SELECT nsp.oid FROM pg_namespace nsp WHERE nsp.nspname = 'pg_catalog') AND
        -- ignore unused triggers
        (pg_proc.prorettype <> (SELECT typ.oid FROM pg_type typ WHERE typ.typname = 'trigger') OR
         pg_trigger.tgfoid IS NOT NULL)
    OFFSET 0
) ss
ORDER BY (pcf).functionid::regprocedure::text, (pcf).lineno

Passive mode

Functions should be checked on start – the plpgsql_check module must be loaded.

Configuration

plpgsql_check.mode = [ disabled | by_function | fresh_start | every_start ]
plpgsql_check.fatal_errors = [ yes | no ]
plpgsql_check.show_nonperformance_warnings = false
plpgsql_check.show_performance_warnings = false

The default mode is by_function, which means that the enhanced check is done only in active mode – by plpgsql_check_function. fresh_start means cold start.

You can enable passive mode by

load 'plpgsql'; -- 1.1 and higher doesn't need it
load 'plpgsql_check';
set plpgsql_check.mode = 'every_start';

SELECT fx(10); -- run functions - function is checked before runtime starts it

Limits

plpgsql_check should find almost all errors on really static code. When a developer uses some of PLpgSQL's dynamic features like dynamic SQL or the record data type, then false positives are possible. These should be rare – in well written code – and then the affected function should be redesigned or plpgsql_check should be disabled for this function.

CREATE OR REPLACE FUNCTION f1()
RETURNS void AS $$
DECLARE r record;
BEGIN
  FOR r IN EXECUTE 'SELECT * FROM t1'
  LOOP
    RAISE NOTICE '%', r.c;
  END LOOP;
END;
$$ LANGUAGE plpgsql SET plpgsql.enable_check TO false;

Usage of plpgsql_check adds a small overhead (in enabled passive mode), so you should use it only in development or preprod environments.

Dynamic SQL

This module doesn't check queries that are assembled at runtime. It is not possible to identify the results of dynamic queries – so plpgsql_check cannot set the correct type for record variables and cannot check dependent SQLs and expressions.

When the type of a record variable is not known, you can assign it explicitly with the pragma type:

DECLARE r record;
BEGIN
  EXECUTE format('SELECT * FROM %I', _tablename) INTO r;
  PERFORM plpgsql_check_pragma('type: r (id int, processed bool)');
  IF NOT r.processed THEN
  ...

Attention: The SQL injection check can detect only some SQL injection vulnerabilities. This tool cannot be used for a security audit! Some issues may not be detected. This check can raise false alarms too – probably when a variable is sanitized by another command or when the value is of some composite type.

Refcursors

plpgsql_check cannot detect the structure of referenced cursors. A reference to a cursor in PLpgSQL is implemented as the name of a global cursor. At check time, the name is not known (not in all cases), and the global cursor doesn't exist. This is a significant obstacle for any static analysis. PLpgSQL cannot set the correct type for record variables and cannot check dependent SQLs and expressions. The solution is the same as for dynamic SQL: don't use a record variable as a target when you use the refcursor type, or disable plpgsql_check for these functions.

CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
  rec_var record;
BEGIN
  FETCH refcur_var INTO rec_var; -- this is STOP for plpgsql_check
  RAISE NOTICE '%', rec_var; -- record rec_var is not assigned yet error

In this case a record type should not be used (use known rowtype instead):

CREATE OR REPLACE FUNCTION foo(refcur_var refcursor)
RETURNS void AS $$
DECLARE
  rec_var some_rowtype;
BEGIN
  FETCH refcur_var INTO rec_var;
  RAISE NOTICE '%', rec_var;

Temporary tables

plpgsql_check cannot verify queries over temporary tables that are created at plpgsql function runtime. For this use case it is necessary to create a fake temp table or disable plpgsql_check for this function.

In reality, temp tables are stored in their own (per user) schema with higher priority than persistent tables. So you can use the following trick safely:

CREATE OR REPLACE FUNCTION public.disable_dml()
RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
  RAISE EXCEPTION SQLSTATE '42P01'
     USING message = format('this instance of %I table doesn''t allow any DML operation', TG_TABLE_NAME),
           hint = format('you should to run "CREATE TEMP TABLE %1$I(LIKE %1$I INCLUDING ALL);" statement', TG_TABLE_NAME);
  RETURN NULL;
END;
$function$;

CREATE TABLE foo(a int, b int); -- doesn't hold data ever
CREATE TRIGGER foo_disable_dml
   BEFORE INSERT OR UPDATE OR DELETE ON foo
   EXECUTE PROCEDURE disable_dml();

postgres=# INSERT INTO foo VALUES(10,20);
ERROR:  this instance of foo table doesn't allow any DML operation
HINT:  you should to run "CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);" statement
postgres=# CREATE TEMP TABLE foo(LIKE foo INCLUDING ALL);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(10,20);
INSERT 0 1

This trick partially emulates GLOBAL TEMP tables and it allows a static validation. Another possibility is using a template foreign data wrapper (https://github.com/okbob/template_fdw).

You can use the pragma table and create an ephemeral table:

BEGIN
  CREATE TEMP TABLE xxx(a int);
  PERFORM plpgsql_check_pragma('table: xxx(a int)');
  INSERT INTO xxx VALUES(10);

Dependency list

The function plpgsql_show_dependency_tb can show all functions, operators and relations used inside the processed function:

postgres=# select * from plpgsql_show_dependency_tb('testfunc(int,float)');
┌──────────┬───────┬────────┬─────────┬────────────────────────────┐
│ type     │ oid   │ schema │ name    │ params                     │
╞══════════╪═══════╪════════╪═════════╪════════════════════════════╡
│ FUNCTION │ 36008 │ public │ myfunc1 │ (integer,double precision) │
│ FUNCTION │ 35999 │ public │ myfunc2 │ (integer,double precision) │
│ OPERATOR │ 36007 │ public │ **      │ (integer,integer)          │
│ RELATION │ 36005 │ public │ myview  │                            │
│ RELATION │ 36002 │ public │ mytable │                            │
└──────────┴───────┴────────┴─────────┴────────────────────────────┘
(5 rows)

Profiler

plpgsql_check contains a simple profiler of plpgsql functions and procedures. It can work with or without access to shared memory. This depends on the shared_preload_libraries config. When plpgsql_check is initialized by shared_preload_libraries, then it can allocate shared memory, and function profiles are stored there. When plpgsql_check cannot allocate shared memory, the profile is stored in session memory.

Due to dependencies, shared_preload_libraries should contain plpgsql first:

postgres=# show shared_preload_libraries ;
┌──────────────────────────┐
│ shared_preload_libraries │
╞══════════════════════════╡
│ plpgsql,plpgsql_check    │
└──────────────────────────┘
(1 row)

The profiler is active when the GUC plpgsql_check.profiler is on. The profiler doesn't require shared memory, but if there is no shared memory, then the profile is limited to the active session.
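
A minimal session-level sketch, combining this GUC with the reporting function shown further below (the function name fx(int) is assumed):

set plpgsql_check.profiler to on;
select fx(10);
select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');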

When plpgsql_check is initialized by shared_preload_libraries, another GUC is available to configure the amount of shared memory used by the profiler: plpgsql_check.profiler_max_shared_chunks. This defines the maximum number of statement chunks that can be stored in shared memory. For each plpgsql function (or procedure), the whole content is split into chunks of 30 statements. If needed, multiple chunks can be used to store the whole content of a single function. A single chunk is 1704 bytes. The default value for this GUC is 15000, which should be enough for big projects containing hundreds of thousands of statements in plpgsql, and will consume about 24MB of memory. If your project doesn't require that many chunks, you can set this parameter to a smaller number in order to decrease the memory usage. The minimum value is 50 (which should consume about 83kB of memory), and the maximum value is 100000 (which should consume about 163MB of memory). Changing this parameter requires a PostgreSQL restart.

The profiler will also retrieve the query identifier for each instruction that contains an expression or optimizable statement. Note that this requires pg_stat_statements (or another similar third-party extension) to be installed. There are some limitations to the query identifier retrieval:

if a plpgsql expression contains underlying statements, only the top level query identifier will be retrieved

the profiler doesn’t compute query identifier by itself but relies on external extension, such as pg_stat_statements, for that. It means that depending on the external extension behavior, you may not be able to see a query identifier for some statements. That’s for instance the case with DDL statements, as pg_stat_statements doesn’t expose the query identifier for such queries.

a query identifier is retrieved only for instructions containing expressions. This means that the plpgsql_profiler_function_tb() function can report fewer query identifiers than instructions on a single line.

Attention: An update of shared profiles can decrease performance on servers under higher load.

The profile can be displayed by function plpgsql_profiler_function_tb :

postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source                                                             │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════════╡
│      1 │          │                                                                    │
│      2 │          │ declare result int = 0;                                            │
│      3 │    0.075 │ begin                                                              │
│      4 │    0.202 │   for i in 1..$1 loop                                              │
│      5 │    0.005 │     select result + i into result; select result + i into result; │
│      6 │          │   end loop;                                                        │
│      7 │        0 │   return result;                                                   │
│      8 │          │ end;                                                               │
└────────┴──────────┴───────────────────────────────────────────────────────────────────┘
(9 rows)

The profile per statements (not per line) can be displayed by function plpgsql_profiler_function_statements_tb:

CREATE OR REPLACE FUNCTION public.fx1(a integer)
 RETURNS integer
 LANGUAGE plpgsql
1       AS $function$
2       begin
3         if a > 10 then
4           raise notice 'ahoj';
5           return -1;
6         else
7           raise notice 'nazdar';
8           return 1;
9         end if;
10      end;
11      $function$

postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
             from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec_stmts │ stmtname        │
╞════════╪═══════════════╪═════════════╪════════╪════════════╪═════════════════╡
│      0 │             ∅ │ ∅           │      2 │          0 │ statement block │
│      1 │             0 │ body        │      3 │          0 │ IF              │
│      2 │             1 │ then body   │      4 │          0 │ RAISE           │
│      3 │             1 │ then body   │      5 │          0 │ RETURN          │
│      4 │             1 │ else body   │      7 │          0 │ RAISE           │
│      5 │             1 │ else body   │      8 │          0 │ RETURN          │
└────────┴───────────────┴─────────────┴────────┴────────────┴─────────────────┘
(6 rows)

All stored profiles can be displayed by calling function plpgsql_profiler_functions_all :

postgres=# select * from plpgsql_profiler_functions_all();
┌───────────────────────┬────────────┬────────────┬──────────┬─────────────┬──────────┬──────────┐
│ funcoid               │ exec_count │ total_time │ avg_time │ stddev_time │ min_time │ max_time │
╞═══════════════════════╪════════════╪════════════╪══════════╪═════════════╪══════════╪══════════╡
│ fxx(double precision) │          1 │       0.01 │     0.01 │        0.00 │     0.01 │     0.01 │
└───────────────────────┴────────────┴────────────┴──────────┴─────────────┴──────────┴──────────┘
(1 row)

There are two functions for cleaning stored profiles: plpgsql_profiler_reset_all() and plpgsql_profiler_reset(regprocedure) .
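
For example (the function signature is illustrative):

select plpgsql_profiler_reset('fx(int)'::regprocedure); -- drop one stored profile
select plpgsql_profiler_reset_all(); -- drop all stored profiles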

Coverage metrics

plpgsql_check provides two functions:

plpgsql_coverage_statements(name)

plpgsql_coverage_branches(name)
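
A usage sketch, assuming a function fx(int) that has already been executed with the profiler enabled:

select plpgsql_coverage_statements('fx(int)');
select plpgsql_coverage_branches('fx(int)');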

Note

There is another very good PLpgSQL profiler – https://bitbucket.org/openscg/plprofiler

My extension is designed to be simple to use and practical. Nothing more, nothing less.

plprofiler is more complex. It builds call graphs, and from these graphs it can create flame graphs of execution times.

Both extensions can be used together with the built-in PostgreSQL feature – tracking functions.

set track_functions to 'pl';
...
select * from pg_stat_user_functions;

Tracer

plpgsql_check provides a tracing capability – in this mode you can see notices on the start or end of functions (terse and default verbosity) and on the start or end of statements (verbose verbosity). For default and verbose verbosity the content of function arguments is displayed. The content of related variables is displayed when verbosity is verbose.

postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 ->> start of inline_code_block (Oid=0)
NOTICE: #2   ->> start of function fx(integer,integer,date,text) (Oid=16405)
NOTICE: #2       call by inline_code_block line 1 at PERFORM
NOTICE: #2      "a" => '10', "b" => null, "c" => '2020-08-03', "d" => 'stěhule'
NOTICE: #4     ->> start of function fx(integer) (Oid=16404)
NOTICE: #4         call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4        "a" => '10'
NOTICE: #4     <<- end of function fx (elapsed time=0.098 ms)
NOTICE: #2   <<- end of function fx (elapsed time=0.399 ms)
NOTICE: #0 <<- end of block (elapsed time=0.754 ms)

The number after # is an execution frame counter (this number is related to the depth of the error context stack). It allows pairing the start and end of a function. Tracing is enabled by setting plpgsql_check.tracer to on.

Attention – enabling this behaviour has a significant negative impact on performance (unlike the profiler). You can set the level used for the tracer's output with plpgsql_check.tracer_errlevel (default is notice). The output content is limited to the length specified by the plpgsql_check.tracer_variable_max_length configuration variable.

In terse verbosity mode the output is reduced:

postgres=# set plpgsql_check.tracer_verbosity TO terse;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0 start of inline code block (oid=0)
NOTICE: #2 start of fx (oid=16405)
NOTICE: #4 start of fx (oid=16404)
NOTICE: #4 end of fx
NOTICE: #2 end of fx
NOTICE: #0 end of inline code block

In verbose mode the output is extended with statement details:

postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #0   ->> start of block inline_code_block (oid=0)
NOTICE: #0.1     1 --> start of PERFORM
NOTICE: #2     ->> start of function fx(integer,integer,date,text) (oid=16405)
NOTICE: #2         call by inline_code_block line 1 at PERFORM
NOTICE: #2        "a" => '10', "b" => null, "c" => '2020-08-04', "d" => 'stěhule'
NOTICE: #2.1       1 --> start of PERFORM
NOTICE: #2.1      "a" => '10'
NOTICE: #4       ->> start of function fx(integer) (oid=16404)
NOTICE: #4           call by fx(integer,integer,date,text) line 1 at PERFORM
NOTICE: #4          "a" => '10'
NOTICE: #4.1         6 --> start of assignment
NOTICE: #4.1        "a" => '10', "b" => '20'
NOTICE: #4.1        <-- end of assignment (elapsed time=0.076 ms)
NOTICE: #4.1        "res" => '130'
NOTICE: #4.2         7 --> start of RETURN
NOTICE: #4.2        "res" => '130'
NOTICE: #4.2        <-- end of RETURN (elapsed time=0.054 ms)
NOTICE: #4       <<- end of function fx (elapsed time=0.373 ms)
NOTICE: #2.1      <-- end of PERFORM (elapsed time=0.589 ms)
NOTICE: #2     <<- end of function fx (elapsed time=0.727 ms)
NOTICE: #0.1     <-- end of PERFORM (elapsed time=1.147 ms)
NOTICE: #0   <<- end of block (elapsed time=1.286 ms)

A special feature of the tracer is tracing of the ASSERT statement when plpgsql_check.trace_assert is on. When plpgsql_check.trace_assert_verbosity is DEFAULT, then all of the function's or procedure's variables are displayed when the assert expression is false. When this configuration is VERBOSE then all variables from all plpgsql frames are displayed. This behaviour is independent of the plpgsql.check_asserts value, so it can be used although the assertions are disabled in the plpgsql runtime.

postgres=# set plpgsql_check.tracer to off;
postgres=# set plpgsql_check.trace_assert_verbosity TO verbose;
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE:  "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:  "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
ERROR:  assertion failed
CONTEXT:  PL/pgSQL function fx(integer) line 12 at ASSERT
SQL statement "SELECT fx(a)"
PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
SQL statement "SELECT fx(10,null, 'now', e'stěhule')"
PL/pgSQL function inline_code_block line 1 at PERFORM

postgres=# set plpgsql.check_asserts to off;
SET
postgres=# do $$ begin perform fx(10,null, 'now', e'stěhule'); end; $$;
NOTICE: #4 PLpgSQL assert expression (false) on line 12 of fx(integer) is false
NOTICE:  "a" => '10', "res" => null, "b" => '20'
NOTICE: #2 PL/pgSQL function fx(integer,integer,date,text) line 1 at PERFORM
NOTICE:  "a" => '10', "b" => null, "c" => '2020-08-05', "d" => 'stěhule'
NOTICE: #0 PL/pgSQL function inline_code_block line 1 at PERFORM
DO

Attention – SECURITY

The tracer prints the content of variables or function arguments. For security definer functions, this content can hold security sensitive data. This is the reason why the tracer is disabled by default and should be enabled only with super user rights via plpgsql_check.enable_tracer.

Pragma

You can configure plpgsql_check's behaviour inside a checked function with the "pragma" function. This is an analogy of the PRAGMA feature of the PL/SQL or Ada languages. PLpgSQL doesn't support PRAGMA, but plpgsql_check detects a function named plpgsql_check_pragma and reads options from the parameters of this function. These plpgsql_check options are valid until the end of the group of statements.

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
BEGIN
  ...
  -- for following statements disable check
  PERFORM plpgsql_check_pragma('disable:check');
  ...
  -- enable check again
  PERFORM plpgsql_check_pragma('enable:check');
  ...
END;
$$ LANGUAGE plpgsql;

The function plpgsql_check_pragma is an immutable function that returns one. It is defined by the plpgsql_check extension. You can declare an alternative plpgsql_check_pragma function like:

CREATE OR REPLACE FUNCTION plpgsql_check_pragma(VARIADIC args text[])
RETURNS int AS $$
SELECT 1
$$ LANGUAGE sql IMMUTABLE;

Using the pragma function in the declaration part of the top block sets options on the function level too.

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE
  aux int := plpgsql_check_pragma('disable:extra_warnings');
  ...

A shorter syntax for pragmas is supported too:

CREATE OR REPLACE FUNCTION test()
RETURNS void AS $$
DECLARE r record;
BEGIN
  PERFORM 'PRAGMA:TYPE:r (a int, b int)';
  PERFORM 'PRAGMA:TABLE: x (like pg_class)';
  ...

Supported pragmas

echo:str – print string (for testing)

status:check, status:tracer, status:other_warnings, status:performance_warnings, status:extra_warnings, status:security_warnings

enable:check, enable:tracer, enable:other_warnings, enable:performance_warnings, enable:extra_warnings, enable:security_warnings

disable:check, disable:tracer, disable:other_warnings, disable:performance_warnings, disable:extra_warnings, disable:security_warnings

type:varname typename or type:varname (fieldname type, …) – set the type of a variable of record type

table: name (column_name type, …) or table: name (like tablename) – create an ephemeral table

Pragmas enable:tracer and disable:tracer are active for Postgres 12 and higher

Compilation

You need a development environment for PostgreSQL extensions:

make clean
make install

result:

[pavel@localhost plpgsql_check]$ make USE_PGXS=1 clean
rm -f plpgsql_check.so libplpgsql_check.a libplpgsql_check.pc
rm -f plpgsql_check.o
rm -rf results/ regression.diffs regression.out tmp_check/ log/
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 all
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./ -I/usr/local/pgsql/include/server -I/usr/local/pgsql/include/internal -D_GNU_SOURCE -c -o plpgsql_check.o plpgsql_check.c
clang -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -shared -o plpgsql_check.so plpgsql_check.o -L/usr/local/pgsql/lib -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
[pavel@localhost plpgsql_check]$ su root
Password: *******
[root@localhost plpgsql_check]# make USE_PGXS=1 install
/usr/bin/mkdir -p '/usr/local/pgsql/lib'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql/share/extension'
/usr/bin/install -c -m 755 plpgsql_check.so '/usr/local/pgsql/lib/plpgsql_check.so'
/usr/bin/install -c -m 644 plpgsql_check.control '/usr/local/pgsql/share/extension/'
/usr/bin/install -c -m 644 plpgsql_check--0.9.sql '/usr/local/pgsql/share/extension/'
[root@localhost plpgsql_check]# exit
[pavel@localhost plpgsql_check]$ make USE_PGXS=1 installcheck
/usr/local/pgsql/lib/pgxs/src/makefiles/../../src/test/regress/pg_regress --inputdir=./ --psqldir='/usr/local/pgsql/bin' --dbname=pl_regression --load-language=plpgsql --dbname=contrib_regression plpgsql_check_passive plpgsql_check_active plpgsql_check_active-9.5
(using postmaster on Unix socket, default port)
============== dropping database "contrib_regression" ==============
DROP DATABASE
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== installing plpgsql ==============
CREATE LANGUAGE
============== running regression test queries ==============
test plpgsql_check_passive ... ok
test plpgsql_check_active ... ok
test plpgsql_check_active-9.5 ... ok

=====================
 All 3 tests passed.
=====================

Compilation on Ubuntu

Sometimes successful compilation can require libicu-dev package (PostgreSQL 10 and higher – when pg was compiled with ICU support)

sudo apt install libicu-dev

Compilation plpgsql_check on Windows

You can check precompiled dll libraries http://okbob.blogspot.cz/2015/02/plpgsqlcheck-is-available-for-microsoft.html

or compile by self:

Download and install PostgreSQL for Win32 from http://www.enterprisedb.com

Download and install Microsoft Visual C++ Express

Learn the tutorial http://blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows

Build plpgsql_check.dll

Install the plugin: copy plpgsql_check.dll to PostgreSQL\14\lib, and copy plpgsql_check.control and plpgsql_check--2.1.sql to PostgreSQL\14\share\extension

Checked on

gcc on Linux (against all supported PostgreSQL)

clang 3.4 on Linux (against PostgreSQL 10)

for successful regress tests, PostgreSQL 10 or higher is required

Compilation against PostgreSQL 10 requires libICU!

Licence

Copyright (c) Pavel Stehule ([email protected])

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Note

If you like it, send a postcard to address

Pavel Stehule Skalice 12 256 01 Benesov u Prahy Czech Republic

I invite any questions, comments, bug reports, patches on mail address [email protected]

Author: okbob

Source Code: https://github.com/okbob/plpgsql_check

License: View license

#postgresql

So you have finished reading the kafka manager windows topic article. If you found this article useful, please share it. Thank you very much. See more: Kafka manager, Kafka Manager tool, Kafka-manager-docker, Kafka UI, Kafka Manager docker-compose, Slack kafka, Kafka-eagle, Kafka manager install Ubuntu
