Compression of PostgreSQL WAL Archives

As hardware and software evolve, the bottlenecks in a database system also shift. Many old problems disappear, and new types of problems pop up.

Old Limitations

There were days when CPU and memory were the limitation. More than a decade back, servers with 4 cores were "high end," and as a DBA, my biggest worry was managing the available resources. For an old DBA like me, Oracle's attempt to pool CPU and memory from multiple host machines for a single database using the RAC architecture was a great attempt to solve it.

Then came the days of storage speed limitations, triggered by multi-core, multi-threaded processors becoming common while memory sizes and bus speeds kept increasing. Enterprises tried to solve it with sophisticated SAN drives, specialized storage appliances with caches, etc. But it has remained a problem for many years, even now, as enterprises increasingly shift to NVMe drives.

Recently, we started observing a new bottleneck that is becoming a pain point for many database users. As the capability of a single-host server increased, it started processing a huge number of transactions. There are systems that produce thousands of WAL files in a couple of minutes, and a few cases have been reported where WAL archiving to a cheaper, slower disk system could not keep up with WAL generation. To add more complexity, many organizations prefer to store WAL archives over a low-bandwidth network. (There is an inherent problem in PostgreSQL archiving: once it lags behind, it tends to lag even more, because the archiver process needs to search among the .ready files. That problem won't be discussed here.)

In this blog post, I would like to show how easily WAL compression can be achieved if you are not already doing it, and to share a query for monitoring the archiving gap.

Compressing PostgreSQL WALs

The demands and requirements for compressing WALs before archiving are increasing day by day. Luckily, most PostgreSQL backup tools, like pgBackRest and WAL-G, already take care of it. The archive_command invokes these tools, which archive silently on the user's behalf.

For example, with pgBackRest, we can specify an archive_command that uses gzip behind the scenes.
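A typical setting looks like the following (a sketch; the stanza name "demo" is a placeholder, and gzip is pgBackRest's default compression, tunable with its compress-type and compress-level options):

archive_command = 'pgbackrest --stanza=demo archive-push %p'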

Or in WAL-G, we can specify:
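Something along these lines (a sketch; wal-push is WAL-G's archiving subcommand, and the compression method is chosen via the WALG_COMPRESSION_METHOD environment variable that WAL-G reads its settings from):

# with WALG_COMPRESSION_METHOD=lz4 set in WAL-G's environment
archive_command = 'wal-g wal-push %p'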

This performs lz4 compression of the WAL files.

But what if we are not using any specific backup tool for WAL archiving? We can still compress the WALs using Linux tools like gzip or bzip2. Gzip is available in most Linux installations by default, so configuring it is an easy task.
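For example, an archive_command like this should do (a sketch; the archive directory is a placeholder, while %p and %f are the standard placeholders for the path and file name of the WAL segment being archived):

archive_command = 'gzip < %p > /path/to/archive/%f.gz'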

However, 7za is the most interesting among the compression options for WAL: it gives the highest compression as fast as possible, which is the major criterion in a system with high WAL generation. You may have to explicitly install 7za, which is part of the 7zip (p7zip) package, from an extra repository.

On CentOS 7 it is:
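# assuming the EPEL repository; the p7zip package provides the 7za binary
sudo yum install epel-release
sudo yum install p7zip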

On Ubuntu it is:
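# 7za ships in the p7zip-full package
sudo apt install p7zip-full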

Now we should be able to specify the archive_command like this:
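Something like this (a sketch; the archive location is a placeholder, and "a" tells 7za to add the WAL segment to the named .7z archive):

archive_command = '7za a /path/to/archive/%f.7z %p'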

In my test system, I could see archived WAL files of less than 200KB. The size can vary according to the content of the WALs, which depends on the type of transactions on the database.

Compressing 16MB files down to the kilobyte range is definitely going to save network bandwidth and storage while addressing the problem of archiving falling behind.

Restoring the WALs

Archiving with the highest compression is just one part; we should also be able to restore the WALs when required. The backup tools provide their own restore command options. For example, pgBackRest can use archive-get:
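For example (a sketch; "demo" again stands in for your stanza name):

restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'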

WAL-G provides wal-fetch for the same purpose.
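A sketch of the setting (wal-fetch takes the WAL file name and the destination path):

restore_command = 'wal-g wal-fetch %f %p'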

In case you opted for manual archive compression using gzip, we can use the gunzip utility in restore_command as follows:
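For instance (assuming the WALs were archived as .gz files to the placeholder path used above):

restore_command = 'gunzip < /path/to/archive/%f.gz > %p'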

If you have already started using PostgreSQL 12, this parameter can be set using ALTER SYSTEM:
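A sketch, using the same gunzip command line:

ALTER SYSTEM SET restore_command = 'gunzip < /path/to/archive/%f.gz > %p';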

OR
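with zcat, for example, which decompresses to standard output like gunzip -c does (an assumed equivalent variant):

ALTER SYSTEM SET restore_command = 'zcat /path/to/archive/%f.gz > %p';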

For 7za, as shown above, you may use the following:
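A sketch (7za's "e" extracts, and -so streams the extracted content to standard output, which is then written to %p):

restore_command = '7za e -so /path/to/archive/%f.7z > %p'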

However, unlike archive_command changes, restore_command changes require you to restart the standby database.

Monitoring Archive Progress

The name of the last archived WAL is available from the pg_stat_archiver view, but finding out the gap from the WAL file names is a bit tricky. A sample query I used to find out how far WAL archiving lags is this:
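A minimal sketch, assuming the default 16MB WAL segment size (which gives 256 segments per "log" unit in the file name arithmetic) and a single timeline. Characters 9-16 of a WAL file name are the log number and characters 17-24 the segment number, both hexadecimal; the query converts both names to absolute segment numbers and subtracts:

SELECT pg_walfile_name(pg_current_wal_lsn()) AS current_wal,
       last_archived_wal,
       -- absolute segment number of the current WAL ...
       (('x' || substring(pg_walfile_name(pg_current_wal_lsn()), 9, 8))::bit(32)::int * 256
      + ('x' || substring(pg_walfile_name(pg_current_wal_lsn()), 17, 8))::bit(32)::int)
       -- ... minus that of the last archived WAL
     - (('x' || substring(last_archived_wal, 9, 8))::bit(32)::int * 256
      + ('x' || substring(last_archived_wal, 17, 8))::bit(32)::int) AS archive_gap
FROM pg_stat_archiver;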

The caveat here is that the current WAL and the last archived WAL must be on the same timeline for this query to work, which is the common case; only very rarely will we encounter anything else in production. So this query can be of good help when monitoring the WAL archiving of a PostgreSQL server.

Learn more about the Percona Distribution for PostgreSQL.


Our white paper “Why Choose PostgreSQL?” looks at the features and benefits of PostgreSQL and presents some practical usage examples. We also examine how PostgreSQL can be useful for companies looking to migrate from Oracle.


10 Comments
GreenReaper

Log shipping is one way to do it, but TLS compression can also provide a 70% bonus on replication streaming, which is more of an up-to-the-second production-level replication mechanism.

This means both savings on transfer and an increase in the amount of data that may be streamed (though since replication is done on a single core, CPU usage may be your limit, ~50Mbps – just right for making full use of DSL).

Unfortunately, it’s become increasingly difficult to make this work on current operating systems, after padding-related vulnerabilities relating largely to website use of TLS gave compression within the protocol a bad name.

PostgreSQL hasn’t yet provided a solution for this, though, despite attempts to do so by some developers, so it’s still useful. Here’s what we do to get it to work.

First off, custom-compile OpenSSL to re-enable SSL compression. You will need to redo this every time openssl/libssl1.1 updates (to remain secure and compressed).

On both SENDER and STANDBY:

apt-get source -t buster openssl
cd openssl[tab]

Patch ssl/ssl_lib.c (e.g. with patch -p0) so that SSL_OP_NO_COMPRESSION is no longer set by default:

-    ret->options |= SSL_OP_NO_COMPRESSION | SSL_OP_ENABLE_MIDDLEBOX_COMPAT;
+    ret->options |= SSL_OP_ENABLE_MIDDLEBOX_COMPAT;
     ret->ext.status_type = TLSEXT_STATUSTYPE_nothing;

rm ssl/ssl_lib.c.orig

Edit debian/rules and change ‘no-zlib’ to ‘zlib-dynamic’.
Edit debian/changelog: make a complete copy of the topmost entry and change the version to end in “ib” (don’t use “-ib”, as this breaks backported security fixes).
This will ensure it is viewed as the latest version and not updated over.
CFLAGS='-march=native' DEB_BUILD_OPTIONS='nocheck nodoc' ionice -c3 nice dpkg-buildpackage -b -j4 --no-sign

sudo dpkg -i ../openssl_1.1.1d-0+deb10u1ib_amd64.deb ../libssl1.1_1.1.1d-0+deb10u1ib_amd64.deb ../libssl-dev_1.1.1d-0+deb10u1ib_amd64.deb
[adjust as appropriate]

Add to /etc/postgresql/12/main/environment:

PGSSLCOMPRESSION = 1
OPENSSL_CONF = '/etc/postgresql/12/main/openssl.conf'

Create /etc/postgresql/12/main/openssl.conf:

openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.2
MaxProtocol = TLSv1.2
Compression = On

Note: Setting something like
CipherString = DEFAULT@SECLEVEL=2:+AES256:+CHACHA20
is not effective here. Ideally we’d use something like this on the sender to prefer AES-128 instead:
ssl_ciphers = 'DEFAULT@SECLEVEL=2:+AES256:+CHACHA20' # prioritize the better-performing (lower-CPU) AES-128 while excluding outdated ciphers
However, doing so seems to preclude compression, probably because it is reset at the same time.

If requiring compression over a pgbouncer network connection, use:
server_tls_protocols = tlsv1.2
or, if the pgbouncer is local and the server remote
client_tls_protocols = tlsv1.2

You can test by setting log_connections = on and client_min_messages = notice on the server and checking whether compression=on is included in the log string. If it’s off, you’ve not done something right.

GreenReaper

I’m hopeful, but not patient enough to wait. 🙂

Multi-core replication/compression might be a good idea, as with TLS/zlib compression one Xeon-D 2.7GHz CPU core can only do ~50Mbit or 15-30MByte/sec; enough for us, but perhaps not for others.

Also, forgot to mention: for an initial base backup, the PostgreSQL environment is not included (as I guess is expected, as you’re not running the main binary or specifying a cluster). You seem to need:
OPENSSL_CONF='/etc/postgresql/12/main/openssl.conf' pg_receivewal … [with sslcompression=1 in the connection string, and the openssl configuration file above]

Alex

Hi Jobin!

Thanks for posting!

It would be great to make a performance benchmark of various compression algorithms.

I think it makes sense to mention that gzip supports different methods of compression, see man gzip.

Also, it would be useful to check zstd, the compression algorithm made by Facebook, as a compression algorithm for WAL.
https://facebook.github.io/zstd/

What do you think?

Ugo

How does this compare to using wal_compression? Doesn’t wal_compression help to get a smaller WAL archive footprint?

François

Hi,

We’ve encountered a bottleneck issue with gzip, as WAL archives started to pile up in the pg_data/pg_wal folder.
To solve this, we’ve replaced it with “pigz -p 3 --fast” so it can use 3 CPU cores instead of one.

Best regards.

Jobin Augustine

Thank you, @François.
I hope your valuable comment will be helpful for readers of this post.
Yes, in a system with very high WAL generation, a single CPU core may not be sufficient.

GreenReaper

Just to update this for PostgreSQL 14: it’s still possible to use SSL compression, but since PostgreSQL now tries to force it off, you have to make an additional patch to ssl/ssl_lib.c. Now it’s not enough to just remove the default of SSL_OP_NO_COMPRESSION; you need to add op &= ~SSL_OP_NO_COMPRESSION; to the function SSL_CTX_set_options(). This works when combined with an OPENSSL_CONF that causes compression to default on, as above.

For many this may be too much hassle for too little benefit, but for us it saves monthly transfer and lowers bandwidth use at the cost of a little CPU time.