#292 — February 13, 2019


Postgres Weekly

Loading Terabytes of Data from Postgres into BigQuery — A quick look at how you can transfer large amounts of data from a Postgres database into Google’s BigQuery, perhaps for analytics work or even opening it up to the public.

Pavel Tiunov

Because Your Data Is Your Business — PGX's open source experts provide the database and application skills necessary to solve database problems, accelerate existing applications, refactor infrastructure, and develop new applications.

PostgreSQL Experts, Inc. sponsor

▶  How Postgres Used fsync Incorrectly for 20 Years — An interesting look into an issue faced not just by Postgres but by many systems that assumed fsync (a system call that flushes file buffers into a final committed/persisted state) works in a way that it actually doesn’t. With “disastrous consequences for data durability/consistency”, this is a rewarding area to dig into if you like going deep. Hate video? There’s more info in this Postgres wiki entry.

Tomas Vondra

SortSupport: How Postgres Sorts at Speed — This is pretty technical, but also a great read if you want to understand SortSupport, one of the systems within Postgres for improving sort performance. “Sorting for types that have had Sortsupport implemented usually gets twice as fast or more, a speedup that transfers directly into common database operations like ORDER BY, DISTINCT, and CREATE INDEX.”

Brandur Leach

An Overview of the Index Changes in Postgres 11 — PostgreSQL 11 introduced several changes to the way it handles database indexes. This post provides an overview of those changes and how you can take advantage of them.

Amit Jain

▶  Breaking Postgres at Scale — A 50 minute talk from FOSDEM digging into some of the scaling “discontinuities” in Postgres and how to mitigate them. If you don’t want your database failing in weird and wonderful ways as you scale, this is a valuable talk.

Christophe Pettus

pgDash Insights to Maximize PostgreSQL Performance — pgDash provides time-series dashboards, diagnostics, analytics and more to help you maximize the performance of your Postgres deployment.

pgDash by RapidLoop sponsor

The Most Useful Postgres Extension? pg_stat_statements — Inspired by a talk in the PostgreSQL devroom at FOSDEM, a post on the usefulness of the pg_stat_statements extension and why every Postgres developer should have it in their toolbox.

Craig Kerstiens

Connection Pooling, A Performance Best Practice for using Azure Database for PostgreSQL — The latest in a series of posts on improving the performance and scale of databases on Azure’s platform.

Parikshit Savjani

Move Fast and Migrate Things: How We Automated Migrations in Postgres — Benchling is a life sciences data management platform, which means they need to work with a lot of data, and, as with Braintree/PayPal last week, it’s neat to see how they approach migrations at that scale.

Vineet Gopal


💡 Tip of the Week

How to see the elapsed time for your queries in psql
by Peter Cooper

For some reason I'm particularly keen to know how long things take. I even have a setting on my shell to include the running time of the last command in my prompt. But how can we get something similar when working with psql?

It's easier than you might think! Just type:

\timing

And psql responds with "Timing is on." You can also add on or off to the end if you want to be explicit.

Now we get basic timings when we run queries:
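For example, a session looks something like this (the table and numbers are purely illustrative; your own queries and data will differ):

postgres=# \timing
Timing is on.
postgres=# SELECT count(*) FROM users;
 count
-------
 50000
(1 row)

Time: 11.583 ms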

If you really like this and want it to be permanent, create a .psqlrc file in your home directory and put \timing inside; it’ll then be run automatically each time you start psql.
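That file can be as minimal as this (assuming the default location; psql also honours the PSQLRC environment variable if you’d rather keep it elsewhere):

$ cat ~/.psqlrc
\timing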

Of course, this tip is solely to improve your day-to-day psql experience. If you want to dig into query times seriously, prepend your queries with EXPLAIN ANALYZE instead and you’ll get more insight into why the timings are as they are. But that’s a tip for another day... :-)
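If you can’t wait, the basic shape is simply this (the query is just an illustration; use one of your own):

EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'someone@example.com';

The output shows the plan Postgres chose, along with estimated and actual row counts and the time spent in each step.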

This week's tip is sponsored by strongDM, the best way to secure access to your servers/databases. Schedule a 10 min demo.

If you have any ideas for your own tips, reach out here - we'll pay $50 for your tip if we use it in an issue.