#366 — July 29, 2020


Postgres Weekly

What is 'Fill Factor' and How Does It Affect Performance? — The ‘fill factor’ defines just how tightly packed table storage is. For example, when a row is updated, the ‘new’ row might end up in a different page of storage due to size limitations, whereas a less tightly packed table would allow the update to happen almost in situ.

Kaarel Moppel

A SQL Style Guide — We linked to this a couple of years ago but Bruce Momjian has reminded us of this handy SQL style guide to ensure legible and maintainable queries.

Simon Holywell

[Case Study] Successful Migration to PostgreSQL by Global Gaming Company — Learn how 2ndQuadrant helped International Game Technology (IGT) successfully migrate to PostgreSQL from an expensive, proprietary DBMS to deliver high quality gaming experiences worldwide, as well as how they've experienced zero outages since the switch.

2ndQuadrant PostgreSQL Services sponsor

Unicode Normalization in Postgres 13 — If you know nothing about what ‘fully composed’ or ‘fully decomposed’ means when it comes to Unicode or what Postgres does when considering the equality of Unicode strings, this is a short and sweet introduction to some ideas you might not need to know in detail but should probably know about.

Peter Eisentraut

Avoiding Passwords in Log Files — “Because Postgres uses SQL queries to manage user accounts, including password assignment, it is possible for passwords to appear in the server logs.”

Bruce Momjian

A Safer Price Type in Postgres — Is it worth modelling prices by creating a domain/type rather than just sticking to cents in a number column? Vados experiments.


Postgres 13's EXPLAIN Now Includes WAL Information — You’ll be able to see how many WAL records are created and similar info.

Luca Ferrari

Monitor and Visualize Postgres Database Metrics End-To-End — Monitor PostgreSQL performance end-to-end with OOTB, customizable, drag-and-drop dashboards in Datadog. Try it free.

Datadog sponsor

Installing TimescaleDB on macOS with Postgres.app — TimescaleDB is a popular time-series data Postgres extension and Postgres.app is an equally popular way to run Postgres on the Mac. This tutorial brings the two together.

Prathamesh Sonpatki

Using Custom Types with Citus and Postgres, From Popular Hack to Transparent Feature — How custom types work with Citus and how user-defined PostgreSQL types are automatically propagated to all the nodes in a Citus cluster.

Nils Dijk

▶  Postgres Performance Tuning and Optimization — An hour-long talk that focuses on tuning configuration settings (both in Postgres and on Linux generally) and what things they each affect.

Ibrar Ahmed (Percona)

What It Was Like Speaking Online at EuroPython 2020 — A speaker shares a little of the experience of giving a talk at an online conference.

Paolo Melchiorre

GoodJob: A New Postgres-Based, Multithreaded, Background Job System for Ruby on Rails — Ben calls GoodJob a “second-generation” backend because it focuses on compatibility with ActiveJob. It’s suited for use cases queuing fewer than 1 million jobs/day.

Ben Sheldon

supported by YugabyteDB

💡 Tip of the Week

Adding constraints to JSON objects

I find it easy to think of JSONB columns as being open to anything – I'll often create a JSONB 'metadata' column and use it as a sort of generic bucket to throw data into that I may or may not need later. They perform this job well.

While you can use JSON in a very dynamic, 'seat of the pants' fashion, you can also bring it into the structured world with queries, functions, and constraints.

For example, you might have a books table and be storing data about books within it as JSON documents:

create table books(k serial primary key, doc jsonb not null);

insert into books(doc) values
  ('{ "ISBN"    : 4582546494267,
      "title"   : "Macbeth",
      "author"  :
        {"given_name": "William",
         "family_name": "Shakespeare"},
      "year"    : 1623 }');
One type of constraint that might help in a table like this is ensuring that the column actually contains a JSON object at all. You can do this by checking the type of the document and ensuring that it's an object (i.e. that it has keys and values, rather than just being the number 10, say):

alter table books
add constraint books_doc_is_object
check (
  jsonb_typeof(doc) is not null and
  jsonb_typeof(doc) = 'object'
);
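With that constraint in place, Postgres should reject documents that aren't objects. A quick sketch of what that looks like (assuming the books table and constraint above exist):

```sql
-- This insert should now fail with a check constraint violation,
-- because '10' is a valid JSON document but a number, not an object:
insert into books(doc) values ('10');
-- ERROR: new row for relation "books" violates check constraint "books_doc_is_object"
```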

We can go a step further by validating the data within the JSON document. For example, an ISBN is a 13-digit integer that uniquely represents a book (or other forms of written material). We can ensure that the "ISBN" value given in a JSON document is a thirteen-digit integer like so:

alter table books
add constraint books_doc_isbn_ok
check (
  doc->>'ISBN' is not null              and
  jsonb_typeof(doc->'ISBN') = 'number'  and
  (doc->>'ISBN')::bigint > 0            and
  length(doc->>'ISBN') = 13
);

This validation checks that the 'ISBN' value is present and that it's a JSON number, casts it to a bigint to check that it's positive, and checks that its text representation is exactly 13 characters long.
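Once documents are constrained like this, you can also query the JSON fields with more confidence, since every row is guaranteed to be a well-formed object with a numeric ISBN. A sketch of pulling typed values back out (the column aliases are illustrative):

```sql
-- Extract typed columns from the constrained JSONB documents:
select (doc->>'ISBN')::bigint          as isbn,
       doc->>'title'                   as title,
       doc->'author'->>'family_name'   as author_surname
from books
where (doc->>'year')::int < 1700;
```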

You can go a lot further than this, but the newsletter would be too long, so check out Bryn Llewellyn's document data modeling with JSON data types article for more. It's written about YugabyteDB but since its JSON support is built on top of Postgres, most of what's written applies to Postgres too.

This week’s tip is sponsored by YugabyteDB, the high-performance distributed SQL database for internet-scale applications. Serve applications with SQL query flexibility and cloud native agility.