In one of my previous blog posts, Why PostgreSQL WAL Archival is Slow, I tried to explain three major design limitations of PostgreSQL’s WAL archiver that make it a poor fit for databases with high WAL generation. In this post, I want to discuss how pgBackRest addresses one of those problems (cause number two in the previous post) using its asynchronous WAL archiving feature.

Let me explain the problem in a bit more detail here.

The PostgreSQL archiver process has an internal function pgarch_archiveXlog() which calls the system() library function, which in turn executes the archive_command, a shell command or script.
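For context, the whole mechanism is driven by a single setting. A minimal sketch of the relevant postgresql.conf lines (the archive destination path is illustrative):

```
# postgresql.conf -- minimal WAL archiving setup (destination path is illustrative)
archive_mode = on
archive_command = 'cp %p /mnt/archive/%f'   # %p = path to the WAL file, %f = its file name
```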

So the executions happen strictly one after another, without any parallelism or batching, and each execution starts up a separate process.

Let’s look at a simple WAL archive using the Unix/Linux cp command:

The archiver process (PID 2639) starts a cp process (PID 2954). Then it starts another process for the next WAL segment (PID 2957).

Then another one for the next WAL segment, and so on.
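A process listing at that point would look roughly like this; this is a sketch built around the PIDs mentioned above, and the WAL file names and destination path are illustrative:

```
$ ps -o pid,ppid,cmd -U postgres | grep -E 'archiver|cp'
 2639  2631 postgres: archiver   archiving 00000001000000000000002A
 2954  2639 cp pg_wal/00000001000000000000002A /mnt/archive/00000001000000000000002A
```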

One could argue that starting up a cp command is cheap.

But what if we need to use scp, where SSH key exchange and authentication negotiation must happen for every file? The startup time can easily exceed the time needed to transfer a 16MB file. And what if we want to ship the WAL to cloud storage like Amazon S3 or Azure Storage? The startup time/cost can be far higher than the cost of transferring one small file.
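For example, an archive_command along these lines pays the full connection setup and authentication cost for every single 16MB segment (the user, host, and path are illustrative):

```
# postgresql.conf -- a new ssh session is negotiated for every segment
archive_command = 'scp %p backup@archive-host:/wal_archive/%f'
```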

Typically, a database administrator’s worry is not the overhead itself but WAL archiving falling behind WAL generation, which in turn affects space utilization, database availability, reliability of backups, point-in-time recoverability, and so on.

Even a single-threaded pgbench load can show us the limitations of WAL archiving and how things can fall apart.

Here is a sample test result of a direct WAL push to an S3 bucket, with archive_command = '/usr/bin/aws s3 cp %p s3://pg-jobin'.

In a matter of a couple of minutes, the archiver started lagging by 278 WAL segments.
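One rough way to watch this lag, assuming default layouts, is to count the .ready files piling up in archive_status and to query pg_stat_archiver (the data directory path is illustrative):

```
# how many WAL segments are still waiting to be archived?
$ ls /var/lib/pgsql/13/data/pg_wal/archive_status/*.ready | wc -l

# what has the archiver managed so far?
$ psql -c "SELECT archived_count, last_archived_wal, failed_count FROM pg_stat_archiver;"
```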

With pgBackRest, the compression of WAL segments does help to reduce the impact, but the gap is still significant.

(The archive_command used in this case is pgbackrest --stanza=pg0app archive-push %p)

If access to the cloud bucket is logged/audited, the logs will reveal the activity of each WAL archive push.

In a typical test environment, the best case for a WAL push to S3 was around 500 milliseconds per WAL segment file (sometimes I saw slower performance, taking up to 1400 ms). So effectively, only one or two WAL segments can be archived per second. That won’t be sufficient for a very high-transaction system.

pgBackRest and Asynchronous WAL Archive

pgBackRest addresses the above-mentioned limitation by converting the archive_command, which is inherently a sequential and synchronous operation, into a parallel and asynchronous one. Additionally, it is able to reuse connections.

The asynchronous operation works using a spool directory to exchange status information. The spool-path must be configured if asynchronous archiving is enabled; it is used for storing data for the asynchronous archive-push and archive-get commands. The asynchronous archive-push command writes acknowledgments into the spool path when it has successfully stored WAL in the archive (and errors on failure), so the foreground process can quickly notify PostgreSQL.

Parameters for the Asynchronous and Parallel Operation

archive-async

This parameter enables the asynchronous archive-push and archive-get commands. By default, it is disabled (n); setting archive-async=y enables asynchronous archive mode.
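In pgbackrest.conf, enabling it is a one-line change in the [global] section, together with the spool path it relies on (the path is illustrative):

```
[global]
archive-async=y
spool-path=/var/spool/pgbackrest
```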

archive-push-queue-max

This is a protection mechanism. If the archive process becomes slow, too many WAL segments can accumulate on the system, which can fill up the disk and cause a database outage. DBAs are always looking for a way to protect their databases from this; blog posts like PostgreSQL WAL Retention and Clean Up: pg_archivecleanup get a lot of visits for the same reason.

This parameter specifies the maximum size of the queue of WAL waiting to be archived. If the limit is reached, pgBackRest will simply drop the WAL segments and report back to PostgreSQL that the archiving was successful. Cheating! But it saves the database from a difficult situation. Obviously, point-in-time recovery won’t be possible across the lost WAL segments, so a fresh full backup is required if this happens.
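A hedged example: with a cap like the one below, pgBackRest starts dropping WAL (and warning in the logs) once roughly that much unarchived WAL has accumulated. The value is illustrative and must be sized against your disk space and WAL generation rate:

```
[global]
archive-push-queue-max=4GiB
```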

archive-get-queue-max

This parameter is not important for the WAL archive push, but for restore (archive-get). It lets pgBackRest prefetch WAL segments into the spool-path location to speed up feeding WAL back to PostgreSQL.
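On the restore side, the same asynchronous machinery prefetches WAL when PostgreSQL requests segments through a restore_command of the documented form (stanza name taken from this post):

```
# postgresql.conf / recovery configuration
restore_command = 'pgbackrest --stanza=pg0app archive-get %f "%p"'
```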

process-max

This parameter specifies the degree of parallelism: that many processes will execute in parallel, each performing compression and transfer to make the command run faster. It can also be specified separately for each command, as shown below.
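For example, pgbackrest.conf supports command-specific sections, so archive-push can be given its own parallelism independent of backup (the values are illustrative):

```
[global]
process-max=2              # default for all commands

[global:archive-push]
process-max=4              # used only by archive-push
```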

Configuration for Asynchronous WAL Archive

pgBackRest’s configuration for asynchronous and parallel WAL archiving is just a matter of adding the above-mentioned parameter values to the configuration file.

My test configuration file finally combined all of these settings.
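A minimal sketch of what that pgbackrest.conf could look like, assuming an S3 repository; the repo settings, bucket, region, and paths are illustrative placeholders rather than the exact values from my test (credentials omitted):

```
[global]
repo1-type=s3
repo1-path=/pgbackrest
repo1-s3-bucket=pg-jobin
repo1-s3-endpoint=s3.amazonaws.com
repo1-s3-region=us-east-1

archive-async=y
spool-path=/var/spool/pgbackrest
process-max=4

[pg0app]
pg1-path=/var/lib/pgsql/13/data
```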

As explained above, archive-async, spool-path, and process-max are specified.

On testing with the same load that originally created a gap of up to 278 segments, I couldn’t see any considerable gap anymore. The maximum I could detect was a gap of 10, and it closed almost instantaneously once the load decreased. Many times, the time taken for the WAL push was just 2 milliseconds at the PostgreSQL level.

This is because the WAL has already been pushed by the pgBackRest background processes, and the foreground command just acknowledges the success back to PostgreSQL.

How it Works

If we look at the backend processes, the work of pushing the WAL segments is actually performed by another 4+1 processes, presumably the asynchronous parent process plus the process-max worker processes. See PIDs 8930, 8931, 8932, 8934, and 8935.

They are independent of the process (PID 8928) created by the archive_command of PostgreSQL’s archiver process (PID 21747). The process created by PostgreSQL mainly works as an intermediary.

pgBackRest creates directories and files under the spool-path for processing and for coordination with the asynchronous jobs.

The internal logic can be summarised as:

An .ok file indicates that the corresponding WAL segment has already been processed. So only those .ready files that don’t have corresponding .ok files need to be processed. pgBackRest checks the pg_wal/archive_status directory directly to get the list of ready files (readyList). Any .ok files that do not have a corresponding .ready file in pg_wal/archive_status are removed, as they have already been processed and acknowledged back to PostgreSQL.
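A hedged illustration of the two directories involved, using the stanza name from this post; the segment names and exact paths are illustrative:

```
# PostgreSQL marks segments that are ready for archiving:
$ ls $PGDATA/pg_wal/archive_status/ | tail -3
000000010000000000000052.ready
000000010000000000000053.ready
000000010000000000000054.ready

# pgBackRest writes acknowledgments for completed pushes into the spool path:
$ ls /var/spool/pgbackrest/archive/pg0app/out/ | tail -3
000000010000000000000051.ok
000000010000000000000052.ok
000000010000000000000053.ok
```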

Important: Asynchronous archive push is more complex and consumes more resources than single-threaded archive push. So I suggest using it only if it is really unavoidable. Finding the bare minimum value for the process-max parameter is also important to avoid wasting server resources.

At Percona, we recommend pgBackRest as the backup solution for PostgreSQL, and it is included in the Percona Distribution for PostgreSQL to make it easy for our users.

References

  1. https://pgbackrest.org/configuration.html
  2. https://github.com/pgbackrest/pgbackrest (source code)
  3. https://pgstef.github.io/2019/03/26/pgbackrest_archiving_tricks.html


Comments
David Steele

This is an excellent article — clearly written and very accurate.

I do have one small correction to suggest: “An .ok file indicates that it is already taken for processing” should really be “an .ok file indicates that it has already *been* processed.” Jobs do not need to be “claimed” because the async process holds a lock the entire time it is running, so another async process won’t start and try to run the same jobs.

Markus

Cool. Two questions.

Is there any signal (log entry, …) when pgBackRest drops the archive-push queue once it reaches archive-push-queue-max?

Would a single pgbackrest server be fast enough to serve around 200 postgres instances that way?

Jobin Augustine

Hi Markus,
Yes, pgBackRest spits out a warning message, so there will be a log entry in the PostgreSQL log when pgBackRest drops the archive-push queue after it reaches archive-push-queue-max.

Regarding:
>>Would a single pgbackrest server be fast enough to serve around 200 postgres instances that way?
A dedicated pgBackRest backup server will be constrained by the resources available to it, like CPU, I/O, and network bandwidth. But quantifying that in terms of the number of PostgreSQL instances is difficult because environments vary a lot in size and transaction volume.