Citus 11 for Postgres goes fully open source, with query from any node

Written by Marco Slot
June 17, 2022

Citus 11.0 is here! Citus is a PostgreSQL extension that adds distributed database superpowers to PostgreSQL. With Citus, you can create tables that are transparently distributed or replicated across a cluster of PostgreSQL nodes. Citus 11.0 is a new major release, which means that it comes with some very exciting new features that enable new levels of scalability.

The biggest enhancement in Citus 11.0 is that you can now always run distributed queries from any node in the cluster because the schema & metadata are automatically synchronized. We already shared some of the details in the Citus 11.0 beta blog post, but we also have a big surprise for those of you who use Citus open source that was not part of the initial beta.

When we do a new Citus release, we usually release 2 versions: the open source version and the enterprise release, which includes a few extra features. However, there will be only one version of Citus 11.0, because everything in the Citus extension is now fully open source!

That means that you can now rebalance shards without blocking writes, manage roles across the cluster, isolate tenants to their own shards, and more. All this comes on top of the already massive enhancement in Citus 11.0: You can query your Citus cluster from any node, creating a truly distributed PostgreSQL experience.

In this blog post we will cover the highlights of:

  • Remaining Citus Enterprise features are now open source
  • Querying distributed Postgres tables from any node
  • Upgrading to Citus 11
  • Wait, where are my shards?
  • Hidden preview feature in Citus 11: triggers on distributed tables

If you want to know everything that’s new, you can check out the Updates page for Citus 11.0, which contains a detailed breakdown of all the new features and other improvements.

Remaining Citus Enterprise features are now open source

Long ago, Citus Data was an enterprise software company. Over time our team’s focus shifted towards open source, then towards becoming a cloud vendor, and then towards becoming an integral part of Azure. With that new focus, our team has developed all new features as part of the Citus open source project on GitHub. Making Citus open source lets you interact directly with developers and the community, know the code you run, and avoid lock-in concerns, and it creates a better developer experience for everyone.

Last year as part of the Citus 10 release, we already open sourced the shard rebalancer, an important component of Citus which allows you to easily scale out your cluster by moving data to new nodes. The shard rebalancing feature is also useful for performance reasons, to balance data across all the nodes in your cluster.

Now, as part of Citus 11.0, the remaining enterprise features become open source as well, including the non-blocking shard rebalancer, cross-cluster role management, and tenant isolation.


Perhaps the most exciting of the newly open-sourced features is non-blocking shard moves. We open sourced the shard rebalancer back in Citus 10, but the open source version still blocked writes to a shard while it was being moved. In Citus 11, Citus moves shards using logical replication, so your application will only experience a brief blip in write latencies when scaling out the cluster by moving existing data to new nodes. A prerequisite is that all your Postgres tables have primary keys.

Now that the non-blocking aspect of the shard rebalancer has been open sourced, you get the exact same shard rebalancing functionality when you run Citus locally, on-premises, in your CI environment, or in the managed service in Azure.
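
If you want to try it out, a minimal sketch looks like this, run from the coordinator (the worker-3 hostname and port are placeholders for your own new node):

-- register a new worker node with the cluster
SELECT citus_add_node('worker-3', 5432);

-- move shards onto the new node; in Citus 11 the open source
-- rebalancer uses logical replication, so writes keep flowing
SELECT rebalance_table_shards();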

Querying distributed Postgres tables from any node

Citus 11 also comes with an important new feature: Automatic schema & metadata syncing.

In a typical Citus deployment, your application performs distributed queries via a coordinator. Connecting via the coordinator makes Citus largely indistinguishable from single-node PostgreSQL from your application’s point-of-view.

Figure 1: A Citus cluster in Citus 10.2 or earlier, where users and items are distributed tables—and their metadata is only on the coordinator.

The coordinator can handle high distributed query throughput (100k queries/sec), but some applications need even higher throughput, or have queries that do a relatively large amount of processing on the coordinator (e.g. search queries with large result sets). Fortunately, distributed queries in Citus 11 can be handled by any node, because the distributed table schema & metadata are synchronized from the coordinator to all the nodes. You still run your DDL commands and cluster administration via the coordinator, but can choose to load balance heavy distributed query workloads across the worker nodes.

Figure 2: A Citus 11.0 cluster where users and items are distributed tables—and with the new automated metadata syncing feature, their metadata is synchronized to all nodes.

While metadata syncing already existed before Citus 11 as a special mode with some limitations (we sometimes referred to it as “Citus MX”), it is now universal and automatic. Any Citus cluster will always have distributed table metadata on all the nodes, as well as all your views, functions, etc., meaning any node can perform distributed queries.

The Citus 11 beta blog post gives more details on how to operate your cluster when querying from any node. The blog post describes how you can view the activity across all the nodes and associate internal queries with distributed queries using global process identifiers (GPID). The post also describes how you can load balance connections from your applications across your Citus nodes.
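
As a rough example of the kind of monitoring described there, a query along these lines on any node lists distributed activity cluster-wide together with each backend’s GPID (column names follow the Citus 11 monitoring views; adjust as needed for your version):

-- cluster-wide activity, including the global process identifier (GPID)
SELECT global_pid, state, query
FROM citus_stat_activity
WHERE state <> 'idle';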

Bottom line, what does this new metadata syncing / query-from-any-node feature mean for you and your app?

  • No app changes required: your application can continue to route your Postgres queries to the Citus coordinator like you’ve always done, and let Citus figure out how to distribute the queries.
  • Now the most demanding data-intensive apps can choose to query from any node: if you need to, you can load balance your Postgres queries across the Citus worker nodes, as sketched below. Be sure to follow the instructions on how to configure the cluster in terms of max connections and load balancing.
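
As an illustration (worker-1 is a placeholder hostname, and users is the distributed table from the diagrams above), a worker node now accepts the same distributed queries as the coordinator:

-- connect to any worker, e.g.: psql -h worker-1 -d postgres
-- and run distributed queries exactly as you would on the coordinator
SELECT count(*) FROM users;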

Upgrading to Citus 11

If you are currently running a Citus cluster, upgrading to Citus 11 is straightforward. After installing the new package and restarting PostgreSQL, the 1st step is to run the following command on all the nodes:

ALTER EXTENSION citus UPDATE;

Then when all nodes are upgraded, the 2nd step is to connect to the coordinator and run:

CALL citus_finish_citus_upgrade();

The 2nd step above is new in Citus 11. The citus_finish_citus_upgrade procedure ensures that all the nodes have metadata, such that your existing cluster behaves the same as a brand-new Citus 11 cluster. We recommend also calling citus_finish_citus_upgrade after any future Citus upgrade, since we may add additional steps.
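
If you want to verify that every node ended up with metadata after the upgrade, one way is to check the pg_dist_node catalog on the coordinator:

-- hasmetadata / metadatasynced should be true for all worker nodes
SELECT nodename, nodeport, hasmetadata, metadatasynced
FROM pg_dist_node;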

No application changes are required when switching to Citus 11. You can continue running all your queries via the coordinator, and that remains the simplest approach for most applications. After upgrading, you have the option of running some or all your queries via worker nodes, and of course can use all the new features such as the non-blocking rebalancer.

One thing to consider when upgrading to Citus 11 is that a few seldom-used features have been deprecated:

  • Shard placement invalidation was used to handle write failures to shards that are replicated using statement-based shard replication. When a write failed on a shard placement, the placement would get invalidated, such that the system could continue with the remaining replicas. While this behaviour had some availability benefits, it also had many shortcomings. Citus still supports statement-based shard replication for scaling reads, so existing distributed tables that use shard replication can be upgraded, but shard placements will no longer be invalidated on failure after the upgrade.
  • Append-distributed tables are distributed tables that require frequently creating new shards when loading new data. The downside of this approach is that tables end up with too many shards, and because there is no well-defined distribution column, many relational features are unavailable. Existing append-distributed tables will be read-only from Citus 11.0 onwards. We recommend switching to hash-distributed tables.
  • Distributed cstore_fdw tables are distributed tables whose shards are foreign tables that use the cstore_fdw extension. Since Citus has a built-in columnar access method, the combination of distributed tables with cstore_fdw is now deprecated. We recommend converting to the columnar access method before upgrading to Citus 11.0 (see the sketch after this list).
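
For the cstore_fdw case, one possible migration path is to copy the data into a new table that uses the built-in columnar access method and then distribute it. A rough sketch, where the events table and event_id column are placeholders for your own schema (for very large tables you may prefer a more incremental approach):

-- create a columnar copy of the old cstore_fdw-backed table
CREATE TABLE events_columnar (LIKE events) USING columnar;
INSERT INTO events_columnar SELECT * FROM events;

-- distribute the new columnar table
SELECT create_distributed_table('events_columnar', 'event_id');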

Wait, where are my shards?

If you have used Citus before, you may have occasionally connected to your worker nodes to see the shards that store the data in distributed tables and reference tables. Each worker node will have a different set of shards, for instance:

\d
            List of relations
┌────────┬──────────────┬───────┬───────┐
│ Schema │     Name     │ Type  │ Owner │
├────────┼──────────────┼───────┼───────┤
│ public │ citus_tables │ view  │ marco │
│ public │ ref_102040   │ table │ marco │
│ public │ test_102105  │ table │ marco │
│ public │ test_102107  │ table │ marco │
└────────┴──────────────┴───────┴───────┘

In Citus 11, when you connect to any of the worker nodes, you see distributed tables and reference tables, but not the shards:

\d
            List of relations
┌────────┬──────────────┬───────┬───────┐
│ Schema │     Name     │ Type  │ Owner │
├────────┼──────────────┼───────┼───────┤
│ public │ citus_tables │ view  │ marco │
│ public │ ref          │ table │ marco │
│ public │ test         │ table │ marco │
└────────┴──────────────┴───────┴───────┘
(3 rows)

What’s cool is that every node in your cluster now looks the same, but where are the shards?

We found that users and various tools get confused by seeing a mixture of distributed tables and shards. For instance, pg_dump will try to dump both the shards and the distributed tables. We therefore hide the shards from catalog queries, but they are still there, and you can query them directly if needed.

For cases where you need to see the shards in a specific application, we introduced a new setting:

-- show shards only to pgAdmin and psql (based on their application_name):
set citus.show_shards_for_app_name_prefixes to 'pgAdmin,psql';

-- show shards to all applications:
set citus.show_shards_for_app_name_prefixes to '*';

\d
            List of relations
┌────────┬──────────────┬───────┬───────┐
│ Schema │     Name     │ Type  │ Owner │
├────────┼──────────────┼───────┼───────┤
│ public │ citus_tables │ view  │ marco │
│ public │ ref          │ table │ marco │
│ public │ ref_102040   │ table │ marco │
│ public │ test         │ table │ marco │
│ public │ test_102105  │ table │ marco │
│ public │ test_102107  │ table │ marco │
└────────┴──────────────┴───────┴───────┘
(6 rows)
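
Whether or not they are shown, the shards themselves remain regular Postgres tables that you can query directly by name, for example one of the test shards from the listing above:

-- queries a single shard stored on this worker
SELECT count(*) FROM test_102105;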

Hidden preview feature in Citus 11: Triggers on distributed tables

Triggers are an important Postgres feature for maintaining complex data models—and for relational databases more broadly. When a row is inserted, updated, or deleted, a trigger function can perform additional actions on the database. Since all Citus nodes now have metadata, a trigger on a shard of a distributed table can now perform actions on other distributed tables from the worker node that stores the shard.

The Citus approach to triggers scales very well because the Postgres trigger calls are pushed down to each shard. However, Citus currently has no way of knowing what a trigger function will do, which means it could do things that cause transactional issues. For instance, the trigger function might not see some uncommitted writes if it tries to access other shards. The way to avoid that is to have the trigger function only access rows with the same distribution key in co-located shards. For now, we ask users to explicitly enable triggers using the citus.enable_unsafe_triggers setting:

CREATE TABLE data (key text primary key, value jsonb);
SELECT create_distributed_table('data','key');

CREATE TABLE data_audit (operation text, key text, new_value jsonb, change_time timestamptz default now());
SELECT create_distributed_table('data_audit','key', colocate_with := 'data');

-- we know this function only writes to a co-located table using the same key
CREATE OR REPLACE FUNCTION audit_trigger()
RETURNS trigger
AS $$
DECLARE
BEGIN
    INSERT INTO data_audit VALUES (TG_OP, COALESCE(OLD.key, NEW.key), NEW.value);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- so, it is safe to enable triggers on distributed tables
SET citus.enable_unsafe_triggers TO on;

CREATE TRIGGER data_audit_trigger
AFTER INSERT OR UPDATE OR DELETE ON data
FOR EACH ROW EXECUTE FUNCTION audit_trigger();
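
To see the trigger in action, a quick sanity check using the tables defined above:

-- the insert is routed to the shard for key 'hello', where the trigger
-- writes a matching row into the co-located data_audit shard
INSERT INTO data VALUES ('hello', '{"value": 1}');

SELECT operation, key, new_value FROM data_audit WHERE key = 'hello';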

As long as you are careful to only access co-located keys, using triggers with Citus gives you a great way to take advantage of automatic schema & metadata syncing without necessarily having to load balance queries across nodes. By pushing more work into trigger function(s), fewer distributed queries and network round trips are needed, which improves overall scalability.

New frontiers in Distributed PostgreSQL

With the new Citus 11 release, we are entering new frontiers. Imagine a FOSS tool that turns the latest version of PostgreSQL into a distributed database: one that scales out from a single node, routes or parallelizes queries across the cluster for high performance at any scale, lets you connect your application to any node, scales out without interruption, and can run anywhere, whether self-managed in any environment or as a cluster you create with a few clicks on Azure. Moreover, it can meet the demands of extremely data-intensive workloads. That is what is available with Citus 11.

If you want to learn more about the new Citus 11 features, including how to monitor your cluster and load-balance traffic, check out my talk on Citus 11 at Citus Con that covers many of the details and includes a demo.

Figure 3: Video thumbnail of Marco’s talk at Citus Con: An Event for Postgres, titled “Citus 11: A look at the Elicorn’s Horn”.

If you use and rely on Citus, you might want to check out our Updates page for Citus 11.0 for details on everything that’s new. And if you’re new to Citus, I recommend going to the Getting started page.

For those of you who use Citus in the cloud, we will soon be updating Citus on Azure. That way, you can get all the latest capabilities of the now fully-open-source Citus extension as a managed service, with high availability, backups, major version upgrades, read replicas, and more, in a few clicks.

Watch the recording of the Citus 11 Release Party

Watch the Citus 11 Release Party replay hosted by a few of the engineers who eat, drink, and breathe the Citus extension to Postgres. See demos of user management, non-blocking shard rebalancer, query from any node, and cluster activity views.

Figure 4: Thumbnail from the Citus 11 Release Party Livestream on YouTube that happened on Tue 28 June, 2022. Watch the recording to see all the demos.

Written by Marco Slot

Former lead engineer for the Citus database engine at Microsoft. Speaker at Postgres Conf EU, PostgresOpen, pgDay Paris, Hello World, SIGMOD, & lots of meetups. Talk selection team member for Citus Con: An Event for Postgres. PhD in distributed systems. Loves mountain hiking.

@marcoslot marcocitus