Postgres support for JSON is 10 years old!

JSON is everywhere, even in your SQL database. Since Postgres added JSON support 10 years ago, we've seen widespread adoption. When JSON support first shipped in 2012, the implementation was a fairly flat representation of the JSON data type, with limited querying and indexing capabilities. Craig, who was active in early JSON discussions with Postgres contributors, admits "Postgres cheated" with these early implementations. Now that JSONB has been introduced and adopted, I think you can see that JSON support is one of Postgres' biggest strengths.

Crunchy's JSON Tutorial

If you're new to JSON in Postgres or just want to play around, jump over to our JSON + Postgres tutorial and give it a spin.

JSONB

Today, when creating a column for JSON data in Postgres, nearly all use cases call for the JSONB data type. JSONB stores data in a decomposed binary format (that's the "B"). JSONB has been fundamental in improving the Postgres-JSON experience. With it, Postgres has functions for modifying, extracting, and re-forming JSON data.
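As a quick sketch (the products table and its attrs column here are hypothetical, and reused in later examples), creating and working with a JSONB column looks like this:

    CREATE TABLE products (
        id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        attrs jsonb NOT NULL
    );

    INSERT INTO products (attrs) VALUES
        ('{"name": "widget", "price": 9.99, "tags": ["sale", "new"]}');

    -- Extract a value as text with ->>, and modify a value with jsonb_set
    SELECT attrs ->> 'name' FROM products;
    UPDATE products SET attrs = jsonb_set(attrs, '{price}', '8.99');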

I'm going to go out on a limb and say that Postgres has as much support for JSON as any other database vendor. Yet, accessing that JSONB functionality in Postgres is not as straightforward. In most scenarios, there is more than one method to get the intended result. JSONB queries are a combination of traditional SQL syntax, JSONB operators, and JSONPath queries. Three people may get the same result, yet use completely different sets of functions.

For instance, when querying a nested set of objects, a programmer can (as sketched below):

  1. Turn those objects into a set of records using jsonb_to_recordset, filter the records with conditionals, then rejoin to JSON using to_jsonb
  2. Use a JSONPath expression to query and compare the objects
  3. Use JSON operators to extract and compare values
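A minimal sketch of all three approaches, assuming a hypothetical orders table whose doc column holds a nested items array:

    CREATE TABLE orders (id bigint PRIMARY KEY, doc jsonb);
    INSERT INTO orders VALUES
        (1, '{"items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 5}]}');

    -- 1. Expand to records, filter, then re-form JSON
    SELECT to_jsonb(item)
    FROM orders,
         jsonb_to_recordset(doc -> 'items') AS item(sku text, qty int)
    WHERE item.qty > 3;

    -- 2. Filter with a JSONPath expression
    SELECT jsonb_path_query(doc, '$.items[*] ? (@.qty > 3)') FROM orders;

    -- 3. Extract with JSON operators and compare
    SELECT elem
    FROM orders, jsonb_array_elements(doc -> 'items') AS elem
    WHERE (elem ->> 'qty')::int > 3;

All three return the same object; which one reads best is a matter of taste and team convention.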

So, how can you use Postgres with JSONB while keeping your sanity?

JSON Storage

The good news: With Postgres, JSONB documents can be as large as 255MB. A JSONB column can store any structure, and different records can store different structures. Four-layer-deep arrays and objects? No problem!
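For example (using a hypothetical events table), the same column happily holds differently shaped documents:

    CREATE TABLE events (id bigint PRIMARY KEY, payload jsonb);

    INSERT INTO events VALUES
        (1, '{"type": "click", "coords": {"x": 10, "y": 20}}'),
        (2, '{"type": "purchase", "totals": [[[["deeply", "nested"]]]]}');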

The practical news: Read-only JSONB blobs can be a wild west of size and format. But if you want to query JSONB documents, they should not be that large. A consistent, simple, well-defined JSON schema rewards a developer with performance and readable queries.

JSON Queries & Filtering

The good news: Postgres functions can extract values from inside of a JSON blob. These functions access and transform objects, arrays, and values. Postgres has JSONPath conditions for extracting with precision.

The practical news: A query that filters on a single field is easy to read. A single conditional means the mental buffer does not overflow. Each additional conditional adds verbose function calls, and the statement quickly becomes hard to read.

Once values are extracted from JSONB, they are returned as strings. Because Postgres is strongly typed, those values must be cast for comparison. Thus, JSONB conditionals end up encased in type-casting expressions.
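For instance, against the hypothetical products table from earlier, each extracted value needs a cast (or a JSONB-native operator) before comparison:

    SELECT *
    FROM products
    WHERE (attrs ->> 'price')::numeric < 50   -- ->> returns text, so cast
      AND (attrs -> 'tags') ? 'sale'          -- ? tests key/element existence
      AND (attrs ->> 'name') LIKE 'w%';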

JSON filtering in Postgres can start to feel complex or unreadable, but it's on par with other databases in the space. An expressive query is required for an equally expressive JSON document. Keep documents simple, and the queries will follow.

JSON Query Performance

The good news: Postgres has indexes for all data structures found within JSONB.

The practical news: Simple indexes perform reliably. BTREE indexes are the most predictable and resource-efficient. When using BTREE, the index definition integrates the JSON extraction and transformation, so Postgres optimizes for those explicit values and the human building queries understands what the index covers. BTREE will perform as you would expect for any index on Postgres.
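A sketch of a BTREE expression index on the hypothetical products table; queries must repeat the indexed expression to use it:

    -- The extraction and cast are baked into the index definition
    CREATE INDEX products_price_idx
        ON products (((attrs ->> 'price')::numeric));

    -- This filter matches the indexed expression, so the index can be used
    SELECT * FROM products WHERE (attrs ->> 'price')::numeric < 50;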

For more expansive queries that may touch any of an object's keys, GIN indexes are an option. GIN indexes can be powerful, yet they should not be thrown on every JSONB field (blanket indexing is never a good performance strategy for any type of data, but particularly not for GIN indexes). GIN indexes have caveats about which operators work with which index settings. When using GIN indexes, set aside extensive time for testing each query that needs to use them.
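A sketch of GIN on the same hypothetical table, illustrating the operator-class caveat:

    -- The default jsonb_ops class supports ?, ?|, ?& and containment (@>)
    CREATE INDEX products_attrs_gin ON products USING GIN (attrs);
    SELECT * FROM products WHERE attrs @> '{"name": "widget"}';

    -- Alternatively, jsonb_path_ops is smaller and faster, but supports
    -- fewer operators (containment and JSONPath matches only)
    CREATE INDEX products_attrs_gin_path
        ON products USING GIN (attrs jsonb_path_ops);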

How do Postgres' indexes compare to those of other JSON databases?

Key-value store databases limit the number of indexes, thus focusing performance on a limited number of queries and preventing index sprawl. This strategy prevents database users from sabotaging their own experience. Search-specific databases throw compute resources at the problem, so you'll spend CPU + RAM + money. General application databases trust the programmer to use the provided toolset. Postgres is in this category -- it is up to the user to use the tool wisely.

JSON Modifications & JSON Upserts

The good news: With Postgres, a single update statement can update or upsert values in a JSONB document. Postgres provides the ability to do in-place updates with JSONB.

The practical news: With Postgres, in-place updates look like any other update. The query contains the method of building the new document. As we said about queries, complex JSON documents require complex in-place update queries. Still, it may be worth it: in-place updates avoid a network round trip, which rewards performance.
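A sketch of both forms against the hypothetical products table: jsonb_set targets a specific path, while the || operator upserts top-level keys:

    -- Set a value at a path (jsonb_set creates the final key if missing,
    -- but not missing intermediate objects)
    UPDATE products
    SET attrs = jsonb_set(attrs, '{price}', '12.49')
    WHERE attrs ->> 'name' = 'widget';

    -- Upsert a top-level key by concatenating an object
    UPDATE products
    SET attrs = attrs || '{"on_sale": true}'
    WHERE attrs ->> 'name' = 'widget';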

Using Postgres with JSON Practically

Today, almost anything is possible with JSON in Postgres. Complex situations can create SQL spaghetti, and a developer who chooses complex JSON will pay the price in code complexity. Yet, I would argue this is the case with any other JSON database.

The words of advice for Postgres + JSON in 2022: if you feel like you are struggling to make JSON work with Postgres, consider using a table.

Want to try it out? Check out the JSON tutorial in the Crunchy Data Postgres playground.


Written by

Christopher Winslett

December 9, 2022