By Kaarel Moppel – Last week the first release candidate of the upcoming Postgres version 11 was released, signalling that everything is going more or less as planned – great news! The 11 release promises a lot of very cool stuff as usual, including better partitioning, more parallelization and JIT capabilities (which are not activated by default though). This blog post is about testing the query performance of v11.

Besides those headline features there are hundreds or rather thousands (`git diff --shortstat REL_10_5..REL_11_RC1` says “3192 files changed, 324206 insertions, 178499 deletions”) of smaller changes, and some of them surely boost performance on typical queries as well. The release notes don’t state any numerical performance indicators though, as usual, so people have to find out on their own – and that’s what this blog post is about. I’m basically running 4 quite simple analytical/aggregate type queries and laying out the numbers for you to evaluate – so jump to the summary table at the end of the post, or get into the details by reading on.

Test query performance

I carried over 3 out of 4 test queries from last year’s similar test for 9.6 vs 10 (check it out here if you’re into archaeology) and added one query to test index traversing performance. The queries are all tied to the schema generated by our good old friend pgbench, with one minor adjustment – creating a copy of the pgbench_accounts table to be able to simulate a join on two 10 million row tables. This time I also didn’t disable the parallel features, as they’re “on” by default for both versions and thus probably won’t be touched for most setups.


/* Sum up 50mio rows */
SELECT sum(abalance) FROM pgbench_accounts CROSS JOIN generate_series(1, 5);

/* Unique columns joining */
SELECT count(*) FROM pgbench_accounts JOIN pgbench_accounts_copy using (aid);

/* Using Grouping Sets analytical feature */
SELECT count(*) FROM (SELECT aid, bid, count(*) FROM pgbench_accounts GROUP BY CUBE (aid, bid)) a;

/* B-tree index traversing */
SELECT COUNT(DISTINCT aid) FROM pgbench_accounts where aid % 2 = 0;

For pgbench “scale” I chose 100, meaning 10 million rows (1.3 GB) will be populated into the pgbench_accounts table. This gives enough rows, but still fits into memory, so that runtimes are more predictable.
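For reference, here’s a minimal sketch of how such a test schema can be set up – the database name and the extra index on the copy are my assumptions, the actual test script may differ slightly:

# initialize the pgbench schema with scale factor 100 (~10 million accounts, ~1.3 GB)
pgbench -i -s 100 pgbench
# create the second 10 million row table for the join test
psql -d pgbench -c "CREATE TABLE pgbench_accounts_copy AS SELECT * FROM pgbench_accounts"
# mirror the primary key of the original table so the join has an index on both sides
psql -d pgbench -c "CREATE UNIQUE INDEX ON pgbench_accounts_copy (aid)"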

Hardware & Server Config

I spun up a lower end node on Linode, labelled “Linode 8GB” (4 cores, 8GB RAM, 160 GB SSD, Ubuntu 18.04), but since we’re not testing hardware here, just comparing 2 Postgres versions, it should be fine. Also, even though we’ve got an SSD, the disk doesn’t really matter, as it’s a read-only test and I’ve chosen the queries and the test data amount in a way that everything fits into RAM – giving us more of an algorithmic comparison. For additional testing fairness my test script also alternated queries between 10.5 and 11 RC1, to suppress the effects of possible background load jitter on a shared virtual machine.
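To illustrate the alternation idea (this is not the actual script – port numbers and file names here are just placeholders):

# run each test query in turns against the two clusters to even out background jitter
for i in $(seq 1 100); do
    psql -p 5432 -d pgbench -f query.sql > /dev/null   # 10.5 cluster
    psql -p 5433 -d pgbench -f query.sql > /dev/null   # 11 RC1 cluster
done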

For running Postgres I used the official Postgres project maintained repo for both 10 and 11. Concerning server settings, I left everything at defaults, except for the changes below, applied on both clusters for the reasons described in the comments.

shared_buffers='2GB'		# to make sure our dataset fits into shared buffers
work_mem='2GB'		# increased work_mem helps a lot for analytical stuff
max_parallel_workers_per_gather=2	# the default actually, so just as a reminder
shared_preload_libraries='pg_stat_statements'	# for storing/analyzing test results
logging_collector=on
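As pg_stat_statements is loaded, the mean runtime and standard deviation per (normalized) query can afterwards be pulled from each cluster with a query along these lines (column names as in Postgres 10/11):

-- per-query timing statistics gathered by pg_stat_statements
SELECT left(query, 40) AS query_start,
       calls,
       round(mean_time::numeric, 1)   AS mean_time_ms,
       round(stddev_time::numeric, 1) AS stddev_time_ms
FROM pg_stat_statements
WHERE query ILIKE '%pgbench_accounts%'
ORDER BY mean_time DESC;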

Results of query performance

After running my test script (6h on both clusters), which basically does all the work out of the box and can be found here, I got the numbers below.

Query    Instance    Mean time (s)    % Change    Stddev time (ms)    % Change
SUM      10.5        1.54                         32.5
SUM      11 RC1      1.39             -9.7        24.2                -25.5
JOIN     10.5        3.75                         74.5
JOIN     11 RC1      3.84             +2.3        72.0                -3.5
CUBE     10.5        15.0                         852
CUBE     11 RC1      9.58             -56.6       542                 -57.2
INDEX    10.5        3.23                         199
INDEX    11 RC1      2.16             -49.5       167                 -19.2

So what do these numbers tell us? 2 queries out of 4 have improved very significantly! A ~50% gain is very good, given how refined Postgres algorithms already are. The INDEX query improvement can be easily explained by the fact that previously index access could not be parallelized. For the CUBE query (the grouping sets feature) I didn’t find anything in the release notes – it would be interesting to know what’s behind it, so please comment if you have some insights. The other 2 queries are “on par”, and the small 2.3% JOIN loss (although latencies are more consistent now) is sufficiently compensated for on other fronts.
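For those who want to dig deeper into the INDEX query gains, comparing the execution plans on the two versions is a good starting point, e.g. with:

-- run on both clusters and compare the resulting plans / number of parallel workers used
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(DISTINCT aid) FROM pgbench_accounts WHERE aid % 2 = 0;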

Concluding remarks

To sum it up – the general impression from 11 RC1 is very positive. The numbers look much better than last year’s improvements, so hopefully it will be an awesome release and a relief for those who have to deal with tons of data – personally I’m itching to upgrade some systems already :)

NB! Not to forget – these are just “some” queries that I conjured up, and it’s also an RC release, so although it looks very promising, I wouldn’t take those numbers as a guarantee for anything outside of the described test set.