#313 — July 10, 2019

Read on the Web

Postgres Weekly

Amazon Aurora PostgreSQL Serverless Now Generally Available — A fair bit to unpack here! Amazon Aurora is a performance-oriented AWS-based database that provides MySQL and Postgres compatibility and charges by the hour. However, the serverless variant auto-scales and lets you simply ‘pay as you go’ (to a point). MySQL has supported this for a year already, but now Postgres users get their turn to try out the approach.

Amazon Web Services

Generated Columns in Postgres 12 — “The data warehousing community will be happy to know that PostgreSQL now has a feature to generate columns based on data in other columns.” And here’s how it works.

Kirk Roybal
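
For a quick taste, here’s a minimal sketch of the new syntax (the products table and its columns are made up for illustration). Note that Postgres 12 supports only STORED generated columns, which are computed on write and kept on disk:

CREATE TABLE products (
  price    numeric,
  quantity integer,
  -- total is derived from the other columns and maintained automatically
  total    numeric GENERATED ALWAYS AS (price * quantity) STORED
);

Generated columns can’t be written to directly — Postgres raises an error if an INSERT or UPDATE supplies a value for one.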

In-Memory Computing Adds Speed and Scale to PostgreSQL — Combining in-memory computing and PostgreSQL accelerates application performance, powers faster transactions, enables massive scalability, and allows better business analytics—with no rip-and-replace of an existing architecture. White paper available here.

GridGain Systems, Inc sponsor

E-Maj 3.1.0: A Way to Log and Rollback Table Updates — A Postgres extension which enables fine-grained write logging and ‘time travel’ on subsets of the database. This release now supports Postgres 9.5 through 12. GitHub repo.

Philippe Beaudoin

The CSV Output Format in psql — CSV is a convenient and generic (if slightly flaky) data interchange format that Postgres has supported in various ways for years. Postgres 12 takes things a step further by letting psql output all tabular results in CSV, if you so choose.

Daniel Vérité
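
As a quick illustration (the query here is hypothetical), the new format can be switched on inside psql with \pset, or for a whole session via the new --csv command-line flag:

\pset format csv
SELECT 1 AS id, 'hello' AS greeting;

Or from the shell: psql --csv -c "SELECT 1 AS id, 'hello' AS greeting;"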

The Fastest Way to Load Data into Postgres with Python — If you’ve got a large collection of ‘dirty’ data that needs to be fetched and transformed and then entered into Postgres, this is an extensive tutorial that will guide you on the way.

Haki Benita

Postgres's Interval, Date, Timestamp and Time Data Types — A straightforward introduction to a variety of types that Postgres implements to allow you to represent and work with time durations (also see our Tip of the Week, below, for a related tip).

Elein Mustain
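
As a small hedged example of what these types buy you: subtracting one timestamp from another yields an interval, and intervals can be added to timestamps directly:

-- timestamp arithmetic produces an interval (here, 7 days 10:06:34)
SELECT timestamp '2019-01-08 14:12:07' - timestamp '2019-01-01 04:05:33';

-- and an interval can be added back to a timestamp
SELECT now() + interval '2 hours';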

Free eBook: How to Get a 3x Performance Improvement on Your Postgres Database — Learn our best practices for optimizing Postgres query performance for customers like Atlassian and how to reduce data loaded from disk by 500x.

pganalyze sponsor

Generate Primary Keys (Almost) Automatically — Maybe I live a sheltered existence, but I can’t think of a reason I’d want to do this… still, it’s good to know I can(!)

Luca Ferrari

Serving Dynamic Vector Tiles from PostGIS — PostGIS extends Postgres into a powerful geographic and geospatial data engine which can also produce and work with tiled maps.

Paul Ramsey

Terraform PostgreSQL Provider — Terraform is a popular code-based infrastructure management system.


supported by EnterpriseDB

💡 Tip of the Week

Using date_trunc to reduce time precision

date_trunc is one of many date/time functions built into Postgres, but it certainly scores high on bang for the buck for me.

date_trunc truncates timestamps to a certain level of precision, such as a specific hour, second, day, week, year, and so on.

As an example, let's create a very simple orders table which stores solely the time an order is created:

CREATE TABLE orders (created_at timestamp);

INSERT INTO orders (created_at) VALUES
  ('2019-01-01 04:05:33'),
  ('2019-01-01 09:05:51'),
  ('2019-01-03 02:55:14'),
  ('2019-01-08 14:12:07');

Now, we can use date_trunc upon created_at to, say, group and count orders by day or week:

SELECT date_trunc('day', created_at) AS day,
       COUNT(created_at) FROM orders
       GROUP BY day
       ORDER BY day;
day                 | count
--------------------+-------
2019-01-01T00:0 ... | 2
2019-01-03T00:0 ... | 1
2019-01-08T00:0 ... | 1

This basic example shows that if we group by created_at truncated to the day level, we can count how many orders there were on each unique day.
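
The same query works at any precision date_trunc supports — for instance, grouping the orders above by week instead:

SELECT date_trunc('week', created_at) AS week,
       COUNT(created_at) FROM orders
       GROUP BY week
       ORDER BY week;

One thing to keep in mind: date_trunc('week', ...) truncates to the start of the ISO week, which begins on Monday — so orders from a Tuesday and the following Sunday land in the same bucket.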

This week's tip is sponsored by EnterpriseDB, the most complete open-source database platform. Learn more about our enterprise Postgres solutions.

🗓 Upcoming Postgres Events