Configuring work_mem in Postgres


One of the worst performance hits a Postgres query can take is having to perform a sort or hash operation on disk. When these space-intensive operations require more memory than is available, Postgres uses disk space instead. Disk space is much slower to read and write than RAM, so this generally takes significantly longer.
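
You can spot this in a query plan: EXPLAIN (ANALYZE) reports the sort method and whether it spilled to disk. A minimal sketch, using a hypothetical orders table:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY created_at;

-- A sort that spilled to disk reports something like:
--   Sort Method: external merge  Disk: 102400kB
-- whereas an in-memory sort reports:
--   Sort Method: quicksort  Memory: 3172kB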

The best solution to this problem is to avoid having to perform the operation entirely, for example by adding a judicious index.
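
As a quick sketch of that idea (sticking with the hypothetical orders table): if queries frequently sort by created_at, an index on that column lets Postgres read the rows already in order and drop the sort from the plan entirely:

CREATE INDEX orders_created_at_idx ON orders (created_at);

-- EXPLAIN should now show an Index Scan producing rows in order,
-- with no Sort node at all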

The second best solution is to reduce the amount of space required for the operation, by working on fewer rows or less data per row. If you can reduce the amount of memory needed, the operation can take place in RAM rather than requiring slow disk access.
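
For example (names hypothetical again), selecting only the columns you need makes each row the sort has to hold smaller, so more rows fit within work_mem:

-- Sorting two narrow columns needs far less memory than
-- sorting every column with SELECT *
SELECT id, created_at FROM orders ORDER BY created_at;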

However, these options are not always available. Assuming the server has enough memory, it often makes sense to allow Postgres to use more RAM for these operations before it falls back to disk. This is done by adjusting the work_mem configuration parameter.

The default value for work_mem is 4MB. This is generally acknowledged to be too small for most modern systems. For example, Christophe Pettus suggests that 16MB is a good starting point for most people. So it’s pretty normal to at least consider increasing it. You can check what the current value is with the query:

SHOW work_mem;
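
If you'd like more detail, such as the unit and where the setting came from, the pg_settings view has it:

SELECT name, setting, unit, source
FROM pg_settings
WHERE name = 'work_mem';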

There are dangers in changing the work_mem value. It refers to the memory available to a single operation – one individual hash, bitmap scan or sort – and so even a single query can use several times the defined value. When you consider that servers are often serving many queries simultaneously, you’ll see why setting the value too high can lead to the server running out of memory. This is, to put it mildly, something you probably want to avoid.
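
To make that concrete, here is a hypothetical query whose plan could contain both a hash join and a sort, each entitled to up to work_mem on its own:

-- The hash join and the sort can each use up to work_mem,
-- so this one query may use roughly 2x work_mem at its peak
EXPLAIN
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
ORDER BY o.created_at;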

A value that is too small, on the other hand, will push too many operations onto disk, which, as we’ve seen, is much less efficient than using RAM.

Therefore it’s usually a good idea to experiment with values before changing them for the whole server. You can change work_mem just for your current session by running a query, e.g.:

SET work_mem TO '16MB';
  • The available units are “kB”, “MB”, “GB” and “TB”

  • If you use an integer (with no units), Postgres will interpret this as a value in kilobytes
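
A simple way to experiment, sticking with the hypothetical orders table from earlier, is to compare the same plan before and after raising the session value:

SET work_mem TO '64MB';
EXPLAIN (ANALYZE) SELECT * FROM orders ORDER BY created_at;
-- If 64MB is enough, the Sort Method should switch from
-- "external merge  Disk" to "quicksort  Memory"
RESET work_mem;  -- back to the server default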

Although you can still destroy the server’s performance by setting work_mem for your session this way, it is much harder. Values that are too low will only hurt the queries you run yourself, and a value has to be much too high before it can hog all the available memory with just one query.

Note: as of PostgreSQL 13, there is a new hash_mem_multiplier parameter, which allows you to increase the maximum amount of memory available to hash-based operations, without also doing so for sort-based operations. As the name suggests, this acts as a multiplier of work_mem, and has a default of 1.0. Increasing this can be useful if you want to allow larger hash-based operations, without risking memory pressure from more sort-based operations being done in memory.
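
For example, on PostgreSQL 13 or later you could allow hash-based operations twice the usual budget, while leaving sorts capped at work_mem:

SET hash_mem_multiplier = 2.0;
-- Hash tables may now grow to work_mem * 2.0 before spilling,
-- but sorts still spill once they exceed work_mem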

Once you’re happy that you want to change the work_mem value for your whole server, you can set the value permanently by adding/modifying the work_mem line in your data/postgresql.conf file:

work_mem = 16MB
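
If you’d rather not edit the file by hand, ALTER SYSTEM achieves the same thing by writing the setting to postgresql.auto.conf, which is read on top of postgresql.conf:

ALTER SYSTEM SET work_mem = '16MB';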

Then reload the config using a query:

SELECT pg_reload_conf();

Or run the reload from the command line:

pg_ctl reload -D /path/to/data/directory
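
Either way, you can confirm the new value has been picked up by opening a fresh session and checking:

SHOW work_mem;  -- should now report 16MB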

Hopefully this will help you tweak your work_mem parameter to its optimal value!


Our product, pgMustard, is a query plan visualization tool with a difference – it adds performance tips, scored by their potential, and gives advice in plain English. Find out more, or try it five times for free.


Last updated: 12-Oct-2021