Choice of Table Column Types and Order When Migrating to PostgreSQL

Stephen Frost


Contributing author David Youatt

An underappreciated element of PostgreSQL performance can be the data types chosen and their organization in tables. For sites that are always looking for that incremental performance improvement, managing the exact layout and utilization of every byte of a row (also known as a tuple) can be worthwhile. This is an important consideration for anyone migrating to PostgreSQL from another database, as the data types available in PostgreSQL, and the way they are laid out, differ from many other platforms.

When to use the NUMERIC/DECIMAL Data Type vs. Other Numeric Types in PostgreSQL

What is important, when trying to squeeze out every bit of performance, is optimizing the organization of your data and minimizing overhead. Much of this is handled by the PostgreSQL developers and the compiler, but there are things you can do to improve performance, including choosing the right column types and the order of columns in table definitions.

Variable length

The NUMERIC type (the same as DECIMAL) makes sense for things like money: PostgreSQL stores the value exactly, including the fractional part, and the range of numbers that can be represented precisely is larger than what an integer or bigint can hold. Computations are exact, with no binary rounding. The trade-offs are that the storage format is base-10000 and, importantly, the storage is variable in size. There are several options for storing numbers in most databases, including PostgreSQL; the PostgreSQL documentation (or the source code) is the authoritative reference for the available numeric types.
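If you want to check the exactness claim yourself, here is a quick sketch you can run in any psql session (the output shown is from a recent PostgreSQL; older versions may print fewer float digits):

-- NUMERIC keeps decimal fractions exact; binary floats cannot represent
-- most decimal fractions, so rounding error creeps in.
SELECT 0.1::numeric + 0.2::numeric AS exact_sum,
       0.1::float8  + 0.2::float8  AS float_sum;

--  exact_sum |      float_sum
-- -----------+---------------------
--        0.3 | 0.30000000000000004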

On-disk storage of NUMERIC is actually base-10000, not base-10. This means there are 4 base-10 digits per base-10000 digit, and each base-10000 digit takes up 2 bytes. The variable storage size matters because variable-length data adds an additional header—1 byte when the variable-length data is less than 127 bytes, 4 bytes otherwise—and even just comparing two numeric values against each other is much more expensive than doing the same for integers or bigints.

So, how does this work in practice? Storage of the vast majority of numerics will be less than 127 bytes, in which case you have:

  • 1 byte for the length (assuming less than 127 bytes)
  • 2 bytes for the numeric header
  • 2 bytes for each base-10000 digit (for up to 4 base-10 digits)

and therefore, numerics tend to require between 5 and 11 bytes to store. For example, 1234567890123456 has 16 decimal digits, which is four base-10000 digits, so it needs 1 + 2 + (4 × 2) = 11 bytes, matching the output below.

We can compare that against the storage required for integer and bigint columns using the function pg_catalog.pg_column_size().
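The table t1 used here isn't shown in the original example, so here is a minimal sketch that would reproduce the output below:

CREATE TABLE t1 (c1 numeric, c2 integer, c3 bigint);

-- Values of increasing magnitude; c2 is left NULL once the value
-- exceeds the integer range (about 2.1 billion).
INSERT INTO t1
SELECT v::numeric,
       CASE WHEN v <= 2147483647 THEN v::integer END,
       v::bigint
FROM (VALUES (1), (12), (123), (1234), (12345), (123456), (1234567),
             (12345678), (123456789), (1234567890), (12345678901),
             (123456789012), (1234567890123), (12345678901234),
             (123456789012345), (1234567890123456)) AS t(v);

With that in place, the comparison query looks like this: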

=> select
  c1 as numeric, pg_column_size(c1) as numeric_size,
  c2 as int, pg_column_size(c2) as int_size,
  c3 as bigint, pg_column_size(c3) as bigint_size
from t1;

     numeric      | numeric_size |    int     | int_size |      bigint      | bigint_size
------------------+--------------+------------+----------+------------------+-------------
                1 |            5 |          1 |        4 |                1 |           8
               12 |            5 |         12 |        4 |               12 |           8
              123 |            5 |        123 |        4 |              123 |           8
             1234 |            5 |       1234 |        4 |             1234 |           8
            12345 |            7 |      12345 |        4 |            12345 |           8
           123456 |            7 |     123456 |        4 |           123456 |           8
          1234567 |            7 |    1234567 |        4 |          1234567 |           8
         12345678 |            7 |   12345678 |        4 |         12345678 |           8
        123456789 |            9 |  123456789 |        4 |        123456789 |           8
       1234567890 |            9 | 1234567890 |        4 |       1234567890 |           8
      12345678901 |            9 |            |          |      12345678901 |           8
     123456789012 |            9 |            |          |     123456789012 |           8
    1234567890123 |           11 |            |          |    1234567890123 |           8
   12345678901234 |           11 |            |          |   12345678901234 |           8
  123456789012345 |           11 |            |          |  123456789012345 |           8
 1234567890123456 |           11 |            |          | 1234567890123456 |           8
(16 rows)

Looking at this, we can see that 'integer' is always going to be smaller (and faster!) than 'numeric', and 'bigint' becomes smaller once the values get up into the hundreds of millions (and it'll also be faster, of course).

Alternative Types with Fixed Length

Unfortunately, people often use the NUMERIC/DECIMAL type when migrating from other databases to PostgreSQL, even though the actual values in a given column are integers (because it’s a primary key, for example). You’ll get far better performance and typically less space used by using either INTEGER or BIGINT in those cases.

PG also supports float and double types (REAL and DOUBLE PRECISION, hello FORTRAN), which may be appropriate if exact precision isn't required (such as with measurements). PostgreSQL also supports the standard SQL syntax float(n), where n = 1..24 maps to REAL, n = 25..53 maps to DOUBLE PRECISION, and a bare float means DOUBLE PRECISION. If you've ever had to be aware of the details of IEEE 754, or know about the related exponent and fraction bits, those ranges will look familiar.
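You can see that mapping resolve for yourself with a small illustration (the table name here is made up for the example):

CREATE TABLE float_demo (
    a float(10),  -- n in 1..24  resolves to real
    b float(30),  -- n in 25..53 resolves to double precision
    c float       -- bare float is double precision
);

SELECT attname, format_type(atttypid, atttypmod) AS resolved_type
FROM pg_catalog.pg_attribute
WHERE attrelid = 'float_demo'::regclass AND attnum > 0
ORDER BY attnum;

--  attname |  resolved_type
-- ---------+------------------
--  a       | real
--  b       | double precision
--  c       | double precision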

Using a fixed-width data type will likely be more efficient, and often smaller, than NUMERIC (though it does depend on the exact values being stored).

Type Size, Alignment, and Order

If you’re worried about how much space is needed to store your data on disk, and you probably should be, the column order also matters. When doing binary I/O, PostgreSQL accesses the binary data in the row directly; it doesn’t serialize the data. PostgreSQL doesn’t reorder columns, compress across columns (though a value in a given column may be compressed), or generally make attempts to avoid wasted space. That’s left up to the database designer to consider and decide on.

One consequence of this direct mapping from disk to memory is that memory-access alignment must be respected, at a minimum for performance but sometimes also for correct function, depending on the architecture. Data types must be stored at certain offsets in memory, which can introduce alignment “holes”. To avoid introducing alignment holes, order the columns in your table definition with the largest fixed-width columns first, followed by smaller fixed-width columns, with variable-length fields at the end.

For example, if you have an integer column and a bigint column, you would want to create the table with the columns in this order: bigint, then integer.

If you create it as integer, bigint, then you’ll end up with a 4-byte alignment hole (just completely dead and wasted space) between the integer and the bigint.

For example:

0    4    8   12   16   20   24   28   32
+----+----+----+----+----+----+----+----+
|int4| W  |  bigint |int4|  W |  bigint | ...
+----+----+----+----+----+----+----+----+

Where "W" is wasted space, because the bigint will be naturally aligned on an 8-byte address. By reordering the column definitions, you can avoid the wasted space, in memory and for binary storage:

0    4    8   12   16   20   24   28   32
+----+----+----+----+----+----+----+----+
| bigint  |  bigint |int4|int4| ...
+----+----+----+----+----+----+----+----+

In just this simple example, you have saved 8 bytes of memory and storage: 8 out of 32 bytes, or 25%. Imagine wide rows with many columns in a large table with many rows.
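You can check the arithmetic with pg_column_size() on row values. Both rows below carry the same 24-byte tuple header, so the difference is pure padding (a quick sketch; the exact numbers may vary slightly by version and platform):

SELECT pg_column_size(ROW(1::integer, 1::bigint)) AS int_first,
       pg_column_size(ROW(1::bigint, 1::integer)) AS bigint_first;

--  int_first | bigint_first
-- -----------+--------------
--         40 |           36

-- int_first:    24-byte header + 4 (int) + 4 (padding) + 8 (bigint) = 40
-- bigint_first: 24-byte header + 8 (bigint) + 4 (int)               = 36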

PostgreSQL will tell you what size a type is and how it will be aligned with this query:

SELECT typname, typbyval, typlen, typalign FROM pg_catalog.pg_type ORDER BY 3 DESC, 1;

Here are the first few lines. Note that a typlen of -1 indicates a variable-length type, that several of the wider types are geometric types, and that uuid is as long as two bigints, which you should keep in mind if you're considering UUIDs.

 typname                               | typbyval | typlen | typalign
---------------------------------------+----------+--------+----------
 name                                  | f        |     64 | c
 sql_identifier                        | f        |     64 | c
 box                                   | f        |     32 | d
 lseg                                  | f        |     32 | d
 circle                                | f        |     24 | d
 line                                  | f        |     24 | d
 interval                              | f        |     16 | d
 point                                 | f        |     16 | d
 uuid                                  | f        |     16 | c
 aclitem                               | f        |     12 | i
 timetz                                | f        |     12 | d
 float8                                | t        |      8 | d
 int8                                  | t        |      8 | d
 internal                              | t        |      8 | d
 macaddr8                              | f        |      8 | i
 money                                 | t        |      8 | d

and for NUMERIC:

 typname | typbyval | typlen | typalign
---------+----------+--------+----------
 numeric | f        |     -1 | i
(1 row)
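The same catalog data can be joined against a table's columns to audit its declared order; for example, against the hypothetical t1 table sketched earlier:

SELECT a.attnum, a.attname, t.typname, t.typlen, t.typalign
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_type t ON t.oid = a.atttypid
WHERE a.attrelid = 't1'::regclass AND a.attnum > 0
ORDER BY a.attnum;

--  attnum | attname | typname | typlen | typalign
-- --------+---------+---------+--------+----------
--       1 | c1      | numeric |     -1 | i
--       2 | c2      | int4    |      4 | i
--       3 | c3      | int8    |      8 | d

Note that, by this article's own advice, that t1 would be better declared as c3, c2, c1: the bigint first, then the integer, with the variable-length numeric last.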

Why?

Because when PostgreSQL does binary I/O to storage, it uses the in-memory storage layout directly. It doesn't pack and unpack individual items (columns) to minimize size.

But Why?

CPUs want data to be "naturally aligned", meaning stored at an address that is a multiple of its size. For example, the natural alignment for a byte is any 1-byte address, for a short integer a 2-byte boundary, for a 32-bit integer (and float) a 4-byte boundary, and for a 64-bit integer (and double) an 8-byte boundary. RISC CPUs like ARM and MIPS have traditionally enforced this strictly: if your data is not naturally aligned, you get a runtime fault. Intel architectures will handle misaligned data at runtime, but at a performance cost. Fortunately, this is mostly controlled by the compiler when the PostgreSQL source code is compiled, though overly clever programmers can cause misalignments.

In general, the PostgreSQL source code plus the compiler decide memory alignment for you, but they do not change the order in which you define things. You can help by using types with fixed length (not DECIMAL or NUMERIC) and, if you can, by declaring your table's columns in order from largest fixed size to smallest fixed size, followed by variable-size data like NUMERIC/DECIMAL, as sketched below. Ordering your columns like this not only reduces memory use and storage requirements, it also improves performance through better hardware cache and TLB utilization, but that's getting well into the weeds.
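Here is a sketch of what that ordering looks like in practice (the table and column names are illustrative):

CREATE TABLE sensor_readings (
    recorded_at  timestamptz,       -- 8 bytes, 'd' alignment
    device_id    bigint,            -- 8 bytes, 'd' alignment
    reading      double precision,  -- 8 bytes, 'd' alignment
    site_id      integer,           -- 4 bytes, 'i' alignment
    status       smallint,          -- 2 bytes, 's' alignment
    is_valid     boolean,           -- 1 byte,  'c' alignment
    note         text,              -- variable length, declared last
    exact_value  numeric            -- variable length, declared last
);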

Note that standard fixed-length types like int2 (smallint), integer (int4), bigint (int8), REAL (float4), and DOUBLE PRECISION (float8) use native CPU types and instructions, while operations on arbitrary-precision types like NUMERIC are implemented largely in software.
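If you want to feel that difference yourself, a rough (and unscientific) comparison is to sum a million values both ways; generate_series() produces integers here, and the cast forces NUMERIC's software arithmetic. Exact timings will vary by hardware and version, but the numeric version is typically noticeably slower:

\timing on

-- Native integer arithmetic (sum of integer accumulates in bigint):
SELECT sum(g) FROM generate_series(1, 1000000) AS g;

-- The same sum forced through NUMERIC's software arithmetic:
SELECT sum(g::numeric) FROM generate_series(1, 1000000) AS g;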

Summary

The short story is that you can help your performance cause, storage-wise, memory-wise, and CPU-wise, by:

  • using NUMERIC/DECIMAL only when you really need it, like counting money
  • choosing alternative types like INTEGER, BIGINT, REAL, or DOUBLE PRECISION when you don't
  • declaring your tables' columns from largest fixed size to smallest fixed size, followed by variable-length types like NUMERIC/DECIMAL
Written by Stephen Frost, April 27, 2021