v13.1 (Sep 2025)

What’s new in Citus 13.1?

Welcome to the release notes for Citus 13.1. The headline of 13.1 is the addition of several useful views, propagation of more DDL commands to ease role management, performance improvements for certain operations, and various bug fixes. This page dives deep into many of the changes in the Citus 13.1 open source extension to PostgreSQL, including the following:

Citus stat counters

As of Citus 13.1, we have introduced a new view called citus_stat_counters that provides insight into various Citus-specific statistics.

Although most users operate a single distributed database, the view is designed to track statistics for multiple databases separately. The view also includes databases where Citus is not installed, but their statistics will always be zero since they cannot perform any distributed operations.

All in all, this view exposes the following statistics for each database on the local node to track inter-node connection health, focusing on the inter-node connections originating from the local node:

  • connection_establishment_succeeded
  • connection_establishment_failed
  • connection_reused

While the first two are relatively easy to follow, the third one covers the case where a connection is reused. This happens when a connection to the desired node was already established, Citus decided to cache it for some time (see citus.max_cached_conns_per_worker and citus.max_cached_connection_lifetime), and then reused it for a new remote operation.
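
If you want to inspect or tune the caching behavior behind connection_reused, a minimal sketch follows; the values shown are only illustrative, not recommendations:

-- Check how connection caching is currently configured on this node
SHOW citus.max_cached_conns_per_worker;
SHOW citus.max_cached_connection_lifetime;

-- Illustrative values: cache up to 2 idle connections per worker
-- and keep them around for 10 minutes
ALTER SYSTEM SET citus.max_cached_conns_per_worker = 2;
ALTER SYSTEM SET citus.max_cached_connection_lifetime = '10min';
SELECT pg_reload_conf();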

The view also tracks the following statistics to monitor distributed query execution, focusing on the distributed queries originating from the local node:

  • query_execution_single_shard
  • query_execution_multi_shard

Note that query_execution_single_shard and query_execution_multi_shard are tracked not only for top-level queries but also for the subplans of complex queries. The reason is that for some queries, e.g., the ones that go through recursive planning, Citus performs the heavy work during subplan execution, and the work left for the top-level query becomes quite straightforward. For such query types, it would be misleading to increment the query stat counters only for the top-level query. Similarly, for non-pushable INSERT .. SELECT and MERGE queries, we perform separate counter increments for the SELECT / source part of the query in addition to the final INSERT / MERGE. You can also read this blog post to learn more about recursive planning in Citus.
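
As a rough illustration of which counter gets incremented, assume a hypothetical table events that is hash-distributed on tenant_id:

-- Hypothetical schema: a table distributed on tenant_id, e.g. created with
-- SELECT create_distributed_table('events', 'tenant_id');

-- Filters on the distribution column, so it is routed to a single shard
-- and increments query_execution_single_shard
SELECT count(*) FROM events WHERE tenant_id = 42;

-- Touches every shard, so it increments query_execution_multi_shard
SELECT count(*) FROM events;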

And here is an example output of the view:

SELECT * FROM citus_stat_counters;

┌───────┬───────────┬────────────────────────────────────┬─────────────────────────────────┬───────────────────┬──────────────────────────────┬─────────────────────────────┬───────────────────────────────┐
│  oid  │   name    │ connection_establishment_succeeded │ connection_establishment_failed │ connection_reused │ query_execution_single_shard │ query_execution_multi_shard │          stats_reset          │
├───────┼───────────┼────────────────────────────────────┼─────────────────────────────────┼───────────────────┼──────────────────────────────┼─────────────────────────────┼───────────────────────────────┤
│ 16384 │ dist_db_1 │                               1897 │                               2 │              6213 │                         4449 │                         292 │                               │
│ 17216 │ dist_db_2 │                                422 │                               0 │              2060 │                          731 │                         256 │ 2025-09-12 11:44:39.186173+03 │
│     5 │ postgres  │                                  0 │                               0 │                 0 │                            0 │                           0 │                               │
│     4 │ template0 │                                  0 │                               0 │                 0 │                            0 │                           0 │                               │
│     1 │ template1 │                                  0 │                               0 │                 0 │                            0 │                           0 │                               │
└───────┴───────────┴────────────────────────────────────┴─────────────────────────────────┴───────────────────┴──────────────────────────────┴─────────────────────────────┴───────────────────────────────┘
(5 rows)

A few more things to know about these new stat counters:

  • Today, these counters are not persisted on server shutdown. In other words, stat counters are automatically reset whenever the server restarts.
  • The feature can be enabled and disabled via citus.enable_stat_counters. The GUC is on by default and controls statistics collection only for the current session. So, if you want to disable statistics collection for all sessions in a database or for a specific user, set it in postgresql.conf, or use ALTER ROLE ... SET or ALTER DATABASE ... SET for the desired roles or databases (see the sketch after this list).
  • citus_stat_counters() can be used to query the stat counters for a specific database oid.
  • Finally, citus_stat_counters_reset() can be used to reset those statistics for a specific database oid, or for the current database if no argument or 0 is provided.
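
Putting these pieces together, here is a minimal sketch; dist_db_1 and the oid 16384 are simply taken from the example output above:

-- Turn off counter collection for all sessions in one database
ALTER DATABASE dist_db_1 SET citus.enable_stat_counters TO off;

-- Look at the counters of a single database by oid
SELECT * FROM citus_stat_counters(16384);

-- Reset the counters for the current database
SELECT citus_stat_counters_reset();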

Citus nodes view

As of Citus 13.1, we have introduced a new view called citus_nodes that provides information about the nodes in the cluster. The view is created in the citus schema and displays the node name, port, role, and active status of each node in the pg_dist_node table. The view does not require superuser access.

Example 1: Get all nodes

SELECT * from citus_nodes;

 nodename  | nodeport |    role     | active
---------------------------------------------------------------------
 localhost |    57637 | worker      | t
 localhost |    57638 | worker      | f
 localhost |    57636 | coordinator | t
(3 rows)

Example 2: Get all active nodes

SELECT * from citus_nodes where active = 't';
 nodename  | nodeport |    role     | active
---------------------------------------------------------------------
 localhost |    57637 | worker      | t
 localhost |    57636 | coordinator | t
(2 rows)

Example 3: Get coordinator nodes

SELECT * from citus_nodes where role = 'coordinator';
 nodename  | nodeport |    role     | active
---------------------------------------------------------------------
 localhost |    57636 | coordinator | t
(1 row)

Example 4: Get worker nodes

SELECT * from citus_nodes where role = 'worker';
 nodename  | nodeport |  role  | active
---------------------------------------------------------------------
 localhost |    57637 | worker | t
 localhost |    57638 | worker | f
(2 rows)

Propagation of more DDLs

Citus 13.1 adds support for propagating the following DDL commands from the coordinator to ease role and database management in a Citus cluster (see the sketch after this list):

  • ALTER DATABASE .. SET .. commands #7181
  • ALTER USER RENAME commands #7204
  • COMMENT ON <database>/<role> commands #7388
  • CREATE/DROP database commands #7240, #7253
  • GRANT/REVOKE rights on table columns #7918
  • REASSIGN OWNED BY commands #7319
  • SECURITY LABEL on tables and columns #7956
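
For illustration, here is a minimal sketch of such commands run on the coordinator; the role and table names (app_user, reporting_user, orders) are hypothetical:

-- Run on the coordinator; each statement is propagated to the workers
ALTER USER app_user RENAME TO reporting_user;
COMMENT ON ROLE reporting_user IS 'read-only reporting role';
GRANT SELECT (order_id, status) ON orders TO reporting_user;
REASSIGN OWNED BY reporting_user TO postgres;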

This release also adds support for propagating some role and database management commands from any node in the cluster (see the example after this list):

  • CREATE/DROP database commands #7359
  • SECURITY LABEL ON ROLE commands #7508
  • Role management commands in general #7278
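
As a minimal sketch, assuming a hypothetical role named etl_user, a role management command can now be issued from any node:

-- Connected to any node in the cluster, e.g. a worker
CREATE ROLE etl_user WITH LOGIN PASSWORD 'change-me';

-- The command is propagated cluster-wide; verify from any other node
SELECT rolname FROM pg_roles WHERE rolname = 'etl_user';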

Notable fixes

Citus 13.1 also includes several bug fixes and performance improvements, including but not limited to:

  • Fixes a performance issue when creating distributed tables if many already exist #7575
  • Fixes a performance issue when distributing a table that depends on an extension #7574
  • Fixes a performance issue when using \d tablename on a server with many tables #7577
  • Fixes a timeout when the underlying socket is changed for an inter-node connection #7377
  • Avoids incorrectly pushing down outer joins between distributed tables and recurring relations (such as reference tables, local tables, and VALUES(..)) on PostgreSQL versions prior to 17 #7937
  • Prevents INSERT ... SELECT queries involving a subfield or sublink, to avoid crashes #7912
  • Prevents incorrectly pushing a nextval() call down to the workers, which could cause an incorrect sequence value to be used for some types of INSERT .. SELECT queries #7976