Squeeze the hell out of the system you have (danslimmon.com)



The bit on the database performance issues leads me to my hottest, flamiest take for new projects:

- Design your application's hot path to never use joins. Storage is cheap, denormalize everything and update it all in a transaction. It's truly amazing how much faster everything is when you eliminate joins. For your ad-hoc queries you can replicate to another database for analytical purposes.

On this note, I have mixed feelings about Amazon's DynamoDB, but one thing about it is that to use it properly you need to plan your usage first and your schema second. I think there's something you can take from this even with an RDBMS.

In fact, I'd go as far as to say joins are unnecessary for non-analytical purposes these days. Storage is so mind-bogglingly cheap and the major DBs have ACID properties. Just denormalize, forreal (rough sketch at the end of this comment).

- Use something more akin to UUIDs to prevent hot partitions. They're not a silver bullet and have their own downsides, but you'll already be used to the consistently "OK" performance that can be horizontally scaled rather than the great performance of say integers that will fall apart eventually.

/hottakes

My sun-level take would also be to just index all columns, but that'll have to wait for another day.
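
A rough sketch of what I mean by denormalizing and updating it all in a transaction, with hypothetical users/posts tables where the author's display name is copied onto every post so the feed read never joins:

    -- hot read path: no join, one index
    select title, author_display_name
    from posts
    where author_id = 42
    order by created_at desc
    limit 10;

    -- writes keep every denormalized copy in sync inside one transaction
    begin;
    update users set display_name = 'New Name' where id = 42;
    update posts set author_display_name = 'New Name' where author_id = 42;
    commit;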


Honestly I couldn’t disagree more. I built a startup and paid little attention to perf for years 1-5, and finally in year 6 we started to get bitten by some perf issues in specific tables, and spent a few engineer-months optimizing.

In terms of tech debt it would have been way more expensive to make everything perform well from the start, we would have moved much slower and probably failed during a few crunch points.

Instead we paid probably a few $k/mo more than we really needed to on machines, and in return saved man-months of effort at a time when we couldn’t hire enough engineers and the opportunity cost for feature work was huge. (Keep in mind that making everything perform well would have required us to do 10-20x as much work, because we could not know ahead of time where the hot spots would be. Some were surprising.)

Joins may be evil at scale, but most startups don’t have scale problems, at least not at first.

Denormalizing can be a good optimization but you pay a velocity cost in keeping all the copies in sync across changes. Someone will write the bug that misses a denormalized non-canonical field and serves up stale data to a user. It’s usually cheaper (in total cost, ie CapEx+OpEx) to write the join and optimize later with a read-aside cache or whatever, rather than contorting your schema.


> a few $k/mo

Isn’t that the cost of one engineer already?


In straight dollars, perhaps yes. But the new servers don't show up and spend 3 to 6 months before accomplishing anything meaningful, don't require sick time which can cause the optimizations to slip, and don't take 3 months to find the right fit for hire.

Part of the cost consideration is deterministic results. I will pay a premium for near-guaranteed good but probably sub-optimal results and will actively avoid betting on people I haven't met and don't know exist.

In my hiring, I hire now to solve problems we expect to hit after 4 quarters. It almost never makes sense to hire anyone into a full-time role for any project in a shorter timeframe. If you were wrong about the specific problems you expect to have in a year, you have a person who is trained in your development environment, tooling, and projects, and you already budgeted to use them in-depth in a year. There's no emergency. There is time to pivot. But if you're wrong about the need to hire someone now full time, you front load all of the risk and if it doesn't work out, you are stuck with an employee you do not need (and stuck is the right word. Have you ever terminated someone? It is harder than you think it is, and I don't mean just for emotional reasons).

Buy hardware over people. Treat the people you have as if the business depends on them. Let them know that it does. Everyone is happier this way.


> > a few $k/mo

> Isn’t that the cost of one engineer already?

Only for very cheap engineers and very large values of “a few”. $120k/year is pretty low total compensation for an engineer (and the cost of an engineer exceeds their total comp because there is also gear, and the share of management, HR, and other support they consume) and amounts to $10k/month.


In the Bay Area, no, an engineer costs an order of magnitude more. (For a round number, think $15-20k/mo including office space, benefits, etc. for a senior engineer; that's perhaps a bit high for the period I'm discussing but it also isn't attempting to price the cost of equity grants. At that time Google was probably spending something like $35-40k/mo (maybe higher, I don't know their office/perk costs) on equivalent talent at SWE5 including the liquid RSU grants.) But of course run the cost/benefit calc for your own cost of labor.

More importantly, it's critical to think in terms of opportunity cost. Like I said, we couldn't hire engineers fast enough at that time, so if I put someone on this work it would be taking them off some other important project. Plausibly for a fast-growing startup that means eschewing work that's worth $1-2m/eng-yr or more (just looking at concrete increases in company valuation, not present value of future gains). So we're talking on the order of $100k/eng-mo opportunity cost.


> I built a startup and paid little attention to perf for years 1-5, and finally in year 6 we started to get bitten by some perf issues in specific tables, and spent a few engineer-months optimizing.

This screams of an "if I don't see it, the problem doesn't exist" view of the world.

How do you know it's not a problem? Perhaps customers would have signed up if it was faster?

The problem is also treating it purely in terms of business value and/or cost.

A lot of things are "free" and yet they're ignored.

For most people, simple things like turning on HTTP/3, enabling Brotli, or switching to newer instances are quick wins, and I see them ignored 90% of the time.

A good design and some good practices are themselves performance work and don't always cost more.


A denormalized database model is considered bad design to begin with, and has performance costs of its own. This is why the OP says this is a "hot take". :)

Maybe there are situations where this actually helps, although the resulting data structure to me looks more like a multi-key cache.


Thank you. I’ve been reading these comments and thinking I’m losing my mind.


That is a hot take... ;)

But joins should never impact performance in a large way if they're on the same server and properly indexed. "It's truly amazing how much faster everything is when you eliminate joins" is just not true if you're using joins correctly. Sadly, many developers simply never bother to learn.

On the other hand, having to write a piece of data to 20 different spots instead of 1 is going to be dramatically slower performance-wise, not to mention make your queries tremendously more complex and bug-prone, when you remember to update a value in 18 spots but forget about 2 of them.

You mention cheap storage as an advantage for denormalizing, but storage is the least problem here. It's a vastly larger surface area for bugs, and terrible write performance (that can easily chew up read performance).


Storage might be cheap, but memory and bandwidth aren’t.

Memory is the new disk, and disk is the new tape.

You want everything to remain resident in memory, and spool backups and transaction logs to disk.

If you’re joining from disk, you’ve probably done something wrong.

E.g.: compression is often a net win because while it uses more CPU, it allows more data to fit into memory. And if it doesn’t fit, it reduces the disk I/O required.

This is why I look upon JSON-based document databases in horror. They're bloating the data out many times over by denormalizing and then expanding that into a verbose and repetitive text format.

This is why we have insanity like placeholders for text on web apps now — they’re struggling to retrieve a mere kilobyte of data!


Joins are not inherently expensive, but they can lead to expensive queries. For example, say I want to find the 10 most recent users with a phone number as their primary contact method:

    SELECT …
    FROM User
    JOIN ContactMethod ON ContactMethod.userId = User.id
    WHERE ContactMethod.priority = 'primary' AND ContactMethod.type = 'phoneNumber'
    ORDER BY User.createdAt DESC
    LIMIT 10

If there are a very large number of users, and a very large number of phone number primary contacts, you cannot make this query fast/efficient (on most RDBMSes). You CAN make this query fast/efficient by denormalizing, ensuring the user creation date and primary contact method are on the same table, and then creating a compound index. But if they're in separate tables, and you have to join, you can't make it efficient, because you can't create cross-table compound indexes.

This pattern of join, filter by something in table A, sort by something in table B, and query out one page of data, is something that comes up a lot. It's why people think joins are generally expensive, but it's more that they're expensive in specific cases.
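
A sketch of the denormalized shape that makes this indexable, with a hypothetical primaryContactType column pulled up onto User so one compound index serves both the filter and the sort:

    ALTER TABLE User ADD COLUMN primaryContactType varchar;

    CREATE INDEX user_primary_contact_created ON User (primaryContactType, createdAt DESC);

    SELECT …
    FROM User
    WHERE primaryContactType = 'phoneNumber'
    ORDER BY createdAt DESC
    LIMIT 10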


For 10 million users + telephones, this takes 1ms.

    create table users (
        id serial primary key not null,
        created_at timestamp not null default now()
    );

    create table users_telephones (
        user_id int references users(id) not null,
        is_primary boolean not null default true,
        telephone varchar not null
    );

    insert into users
    select i, NOW() + (random() * (interval '90 days')) + '30 days' from generate_series(1, 10000000) i;
    insert into users_telephones select id, true, random() :: text from users limit 10000000; -- all users have a primary telephone
    insert into users_telephones select id, false, random() :: text from users limit 200000; -- some users have a non primary telephone
    create index on users(created_at);
    create index on users_telephones(user_id);
    create index on users_telephones(user_id, is_primary) where is_primary;

    select count(*) from users;
    count   
    ----------
    10000000
    (1 row)

    Time: 160.911 ms


    select count(*) from users_telephones;
    count   
    ----------
    10200000
    (1 row)

    Time: 176.361 ms


    select
        *
    from
        users u
        join users_telephones ut on u.id = ut.user_id
    where
        ut.is_primary
    order by
        created_at
    limit
        10;

    id    |         created_at         | user_id | is_primary |     telephone      
    ---------+----------------------------+---------+------------+--------------------
    9017755 | 2023-09-11 11:45:37.65744  | 9017755 | t          | 0.7182410419408853
    6061687 | 2023-09-11 11:45:39.271054 | 6061687 | t          | 0.3608686654204689
    9823470 | 2023-09-11 11:45:39.284201 | 9823470 | t          | 0.3026398665522869
    2622527 | 2023-09-11 11:45:39.919549 | 2622527 | t          | 0.1929579716250771
    7585920 | 2023-09-11 11:45:40.256742 | 7585920 | t          | 0.3830236472843005
    5077138 | 2023-09-11 11:45:41.076164 | 5077138 | t          | 0.9058939392225689
    1496883 | 2023-09-11 11:45:42.459194 | 1496883 | t          | 0.1519510558344308
    9234364 | 2023-09-11 11:45:42.965896 | 9234364 | t          | 0.8254433522266105
    6988331 | 2023-09-11 11:45:43.130548 | 6988331 | t          | 0.9577098184736457
    7916398 | 2023-09-11 11:45:43.559425 | 7916398 | t          | 0.9681218675498862
    (10 rows)

    Time: 0.973 ms


Ty for benchmarking, but this isn’t a good benchmark for the issue I’m talking about.

This is only fast because 100% of users have a phone number as a primary contact, so the join filter is essentially meaningless. If in the contact table, the filtered number is a small percentage of the total (e.g. most users have an email as their primary contact, not a phone number), but still a good size (e.g. there’s still hundreds of thousands to millions of phone primary contacts), it’s a much harder query.

It’s probably also fast because you have a warm cache - e.g. there’s enough memory for the DB to have the indexes 100% in memory, which is just not feasible with large DBs in the real world, where you can easily have >100GB of indexes + hot data, and the DB can’t keep it all in memory. In most real world scenarios, having to somewhat frequently read pages of indexes off disk, into memory, to satisfy queries, is common.

Try it again, with the exact same data, but:

- Search for users with a non-primary phone contact (you have 200,000 of these, and 10,000,000 users)

- Give the DB say 1/3 the memory of your total index size, so the complete indexes can’t be in memory

- Run the query right after starting PG up, to ensure the cache is cold (with a hot cache, almost everything is fast, but in real world situations with lots of users the cache isn’t consistently hot)


> It’s probably also fast because you have a warm cache - e.g. there’s enough memory for the DB to have the indexes 100% in memory, which is just not feasible with large DBs in the real world, where you can easily have >100GB of indexes + hot data, and the DB can’t keep it all in memory.

That's the point? Sure there is a scale where it's infeasible, but you can quite easily (albeit it's pricey) get DB instances with hundreds or thousands of GiB of RAM. Even if you can't get everything into it, your working set is often not the size of the entire data set. My company's main MySQL primary has around 800 GiB of storage, and only about 1/3 of that in RAM. Disk read IOPS are usually 0.

Nevertheless, I recreated this in Postgres 15.

The total index size of the two tables is 790 MB according to `\di+*`, so I'll set `shared_buffers` to 263 MB, then restart and re-run.

For reference, time with current settings is about 6.8 msec for `is_primary`, and 1395 msec for `NOT is_primary`.

After a restart, I ran the query for `NOT is_primary` first, which took 1740 msec. The first run of the query with `is_primary` took 21 msec.

My DB is hosted on older hardware, and the files themselves live on NVMe drives presented via Ceph over 1 GbE.

EDIT: I forgot that Postgres uses OS cache for a lot of its memory, not just its own. Re-did this, running `sync; sync; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'` in between shutting down/starting up Postgres. 16726 msec and 79 msec, respectively. So yes, a lot slower, but a. I don't think this is realistic for a production database b. I'm still not clear about how you think JOINs enter into this. The slowdown comes entirely from having to run a table scan.


First off, ty for running all these benchmarks, above and beyond!

FWIW, I don’t think joins are bad, I’m 100% for normalized DB schemas with joins. But I’ve done tonnes of performance work over the past ~10 years, and run into a bunch of real world cases where, when caches are cold (which does happen frequently with large datasets and limited budgets), queries similar to the above (join two tables, read a page of data, sorting on one table and filtering on the other) take 10s of seconds, sometimes even minutes with very large datasets. In those cases, the only solution has been to denormalize so we can create a compound index, with filter key(s) as the prefix, sort key as the suffix, which makes the query consistently fast. I am not at all suggesting this as the default, better to default to normalized with joins, and only do this for these specific cases. Generally a company just has one or a few cases where this is necessary, can just do the ugly denormalization + compound indexes for these few hot spots. But when ppl say “joins are slow”, these cases are examples of where it’s true.

Re: your above 17 second query, if you do something like adding fields to the user table for primary and secondary contact method (denormalizing), and then create a compound index with the necessary filter keys first, and the sort key second, I think you’ll find the query (which no longer needs a join) is quite fast even if caches are ice cold.


Created a new table that contains `user_id, created_at, phone_primary, phone_secondary`. Inserted all 10,200,000 rows. Notably (I'll come back to this) due to the generation of the rows, the primary key (`user_id`) is an unsorted integer - this was _not_ done with a serial or identity.

  postgres=# CREATE INDEX sec_phone_created_at ON hn_phone_new (phone_secondary, created_at) WHERE phone_secondary IS NOT NULL;
I reset `shared_buffers` down to the same as before - 263 MB - although the size of this index is tiny, < 10 MB, so realistically I can't shrink buffers down that far anyway. I then did the same `sync/drop cache` as before.

  postgres=# SELECT * FROM hn_phone_new WHERE phone_secondary IS NOT NULL ORDER BY created_at LIMIT 10;
     id   |     created_at      |   phone_primary    |  phone_secondary   
  --------+---------------------+--------------------+--------------------
    58816 | 1995-05-23 03:22:02 | +49 030 522866-87  | +1 159-445-4810
    49964 | 1995-05-23 03:23:00 | +61 02 7440 8606   | +254 20 925 892
   171828 | 1995-05-23 05:06:47 | +380 32 393-35-89  | +49 030 429376-29
    78333 | 1995-05-23 05:31:22 | +380 32 147-11-20  | +52 55 6409 5253
    24264 | 1995-05-23 06:47:21 | +44 0131 6506 1823 | +49 030 610965-83
    96662 | 1995-05-23 06:57:03 | +52 55 1473 0538   | +61 02 5414 8204
    15023 | 1995-05-23 07:55:37 | +44 0131 7959 1581 | +44 0131 8491 6194
    52029 | 1995-05-23 08:59:19 | +380 32 430-77-54  | +254 20 374 856
    20518 | 1995-05-23 09:51:14 | +380 32 264-21-79  | +52 55 7787 0236
    80273 | 1995-05-23 14:59:26 | +61 02 8863 4466   | +33 01 16 10 78 56
  (10 rows)

  Time: 2258.807 ms (00:02.259)
So yes, significant improvement as you'd expect. I then dropped the index and swapped the order:

  postgres=# DROP INDEX sec_phone_created_at;
  postgres=# CREATE INDEX created_at_sec_phone ON hn_phone_new (created_at, phone_secondary) WHERE phone_secondary IS NOT NULL;
Reset everything as before, and re-ran the same query:

  Time: 221.392 ms
Thinking that like MySQL, a portion of the `shared_buffers` had been saved to disk and put back in upon restart (honestly I don't know if Postgres does this), I attempted to flush it by running a few `SELECT COUNT(*)` on other, larger tables, then re-running the query.

  Time: 365.961 ms
This is what `EXPLAIN VERBOSE` looks like for the original index:

   Limit  (cost=8.44..8.45 rows=1 width=61)
     Output: id, created_at, phone_primary, phone_secondary
     ->  Sort  (cost=8.44..8.45 rows=1 width=61)
           Output: id, created_at, phone_primary, phone_secondary
           Sort Key: hn_phone_new.created_at
           ->  Index Scan using sec_phone_created_at on public.hn_phone_new  (cost=0.42..8.43 rows=1 width=61)
                 Output: id, created_at, phone_primary, phone_secondary
  (7 rows)
And this is what it looks like for the second, with the columns swapped:

  Limit  (cost=0.42..8.43 rows=1 width=61)
     Output: id, created_at, phone_primary, phone_secondary
     ->  Index Scan using created_at_sec_phone on public.hn_phone_new  (cost=0.42..8.43 rows=1 width=61)
           Output: id, created_at, phone_primary, phone_secondary
  (4 rows)
So it actually needs to be reversed, so that the query planner doesn't have to add a sort step for the ORDER BY.


Yeah, I believe which order is best (sortKey/filterKey or filterKey/sortKey) really depends on the specific data/queries, best to try both and pick the best one - looks like sortKey/filterKey in this case :)

But I think this does show how, for specific data/queries, sometimes you do have to denormalize, so that you can create the ideal compound index, for specific problem queries. Should still go with normalized schemas and joins as a default, but if problem queries pop up like this that are taking 10, 20, 30 seconds sometimes (when caches are cold), compromising a bit on clean schemas/code for performance makes sense.

I also created a benchmark here, for Postgres: https://gist.github.com/yashap/6d7a34ef37c6b7d3e4fc11b0bece7...


If you’re limited in RAM and can’t upsize, then yes, this does appear to be a good trade off. You can always refactor later and normalize if necessary.

BTW, although it wouldn’t have helped for your specific benchmark schema creation of TYPES, I’ll plug my genSQL tool [0] for generating random data. It’s primarily designed around MySQL, but it can produce CSVs easily, which every DB can load.

Turns out a lot of random() calls in most languages are slow af, so mine avoids that by (mostly) batching them in a C library. Should be able to create a million somethings in under 10 seconds on modern hardware in Python 3.11.

[0]: https://github.com/stephanGarland/genSQL


Looks useful, ty!


Thanks for bothering to work it through, I was too lazy.

But, yeah, exactly. Everyone thinks they need to optimise the life out of this stuff at the beginning but the db can do a lot with normalised data and the appropriate indexes.

Side note - is_primary isn’t required in the partial index itself since they’ll all be “true” due to the where clause.
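
i.e. something like:

    create index on users_telephones(user_id) where is_primary;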


Thanks for the effort.

Probably nitpicking but these types of measures are usually tricky to interpret because there is a high chance your indexes (maybe even rows) are still on PostgreSQL shared buffers and OS cache and might not reflect real usage performance.

To get a more "worst-case" measure, after your inserts and indexes creation, you can restart your database server + flush OS pages cache (e.g. drop_caches for Linux), then do the measure.

Sometimes the difference is huge, although I don't suspect it will be in this case.


imo a properly configured Postgres server should have enough RAM to keep any hot data in cache. The cached path is the accurate measurement.


What proportion of users had a primary telephone contact? I think you'd need to be skipping over a lot of users (those without a primary telephone contact) to hit the pathological case that's implied.


Decided to re-create this in MySQL on fairly old hardware, and with actual phone numbers - the latter shouldn't make a difference since they're still VARCHAR, but I already have a program [0] to generate schema with them, so why not?

I did have to do a few manual updates after data load because the aforementioned program can't make foreign keys yet, and also for bools (which MySQL stores as tinyint(1)) I'm randomly generating them via `id & 1`, which isn't what you had.

Also, I gave `hn_phone` its own auto-increment int as a PK, so I could have a non-unique index on `user_id`. In MySQL, if you create a table without a PK, you get one of these, in descending order of precedence:

* The first indexed UNIQUE NOT NULL column promoted to PK

* An invisible, auto-generated, auto-incrementing integer column called `my_row_id` as PK (MySQL >= 8.0.30, if sql_generate_invisible_primary_key=1)

* A hidden index called `GEN_CLUST_INDEX` created on a super-invisible (i.e. doesn't show up in table definition) column called `ROW_ID`, but that column is shared across the entire DB so please don't do this

It's worth noting that since the first 10,000,000 rows all have `is_primary` set, this can finish extremely quickly. If you invert that match with these tables, you have to do a table scan on `hn_phone`, and the time jumps up to about 5650 msec. If you change the `hn_phone` index to be a composite on (`user_id`, `is_primary`) and then rewrite the query to use a subquery instead of a join, the time drops to around 7 msec. You might see a slight speed-up if you index `created_at` in descending order if that was the normal access pattern.
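
For reference, a sketch of what that composite index and subquery rewrite could look like for the inverted (NOT is_primary) match - illustrative, not the exact statements from the session above:

  ALTER TABLE hn_phone ADD INDEX user_id_is_primary (user_id, is_primary);

  SELECT
    u.id,
    u.created_at
  FROM
    hn_user u
  WHERE
    EXISTS (
      SELECT
        1
      FROM
        hn_phone ut
      WHERE
        ut.user_id = u.id
        AND NOT ut.is_primary
    )
  ORDER BY
    u.created_at DESC
  LIMIT 10;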

Anyway:

  OS: Debian Bullseye 5.10.0-23-amd64
  Virtualized: Yes (Proxmox)
  CPU: E5-2650 v2 @ 2.60GHz
  Allocated Core Count: 16
  Allocated RAM: 64 GiB PC3-12800R
  Disk: Samsung PM983 1.92 TiB via Ceph
  Filesystem: XFS
  Mount Options: defaults,noatime
  MySQL Version: 8.0.34
  MySQL Options (non-default):
    innodb_buffer_pool_instances = 16
    innodb_buffer_pool_chunk_size = 134217728
    innodb_buffer_pool_size = 17179869184
    innodb_numa_interleave = 1
    innodb_sync_array_size = 16 # this shouldn't apply here, but listing anyway
    innodb_flush_method = O_DIRECT
    innodb_read_io_threads = 16
    innodb_write_io_threads = 16 # this shouldn't apply here, but listing anyway

  CREATE TABLE `hn_user` (
    `id` int unsigned NOT NULL AUTO_INCREMENT, 
    `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, 
    PRIMARY KEY (`id`), 
    KEY `user_created_at` (`created_at`)
  );

  CREATE TABLE `hn_phone` (
    `id` int unsigned NOT NULL AUTO_INCREMENT, 
    `user_id` int unsigned NOT NULL, 
    `is_primary` tinyint(1) NOT NULL DEFAULT '1', 
    `phone` varchar(255) NOT NULL, 
    PRIMARY KEY (`id`), 
    KEY `user_id` (`user_id`), 
    CONSTRAINT `hn_phone_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `hn_user` (`id`)
  );

  mysql> SELECT COUNT(*) FROM hn_user UNION SELECT COUNT(*) FROM hn_phone;
  +----------+
  | COUNT(*) |
  +----------+
  | 10000000 |
  | 10200000 |
  +----------+
  2 rows in set (1.20 sec)

  mysql> SELECT 
    u.id, 
    u.created_at, 
    ut.is_primary, 
    ut.phone 
  FROM 
    hn_user u 
    JOIN hn_phone ut ON u.id = ut.user_id 
  WHERE 
    ut.is_primary 
  ORDER BY 
    u.created_at DESC 
  LIMIT 10;

  +---------+---------------------+------------+--------------------+
  | id      | created_at          | is_primary | phone              |
  +---------+---------------------+------------+--------------------+
  | 6906106 | 2023-08-12 06:08:25 |          1 | +61 02 5317 2261   |
  | 6906106 | 2023-08-12 06:08:25 |          1 | +254 20 294 205    |
  | 6738922 | 2023-08-12 06:07:12 |          1 | +61 02 1247 3361   |
  | 6738922 | 2023-08-12 06:07:12 |          1 | +44 0131 8386 4494 |
  | 7449553 | 2023-08-12 06:03:55 |          1 | +61 02 7649 6731   |
  | 7449553 | 2023-08-12 06:03:55 |          1 | +61 02 7893 9835   |
  | 6908862 | 2023-08-12 05:51:52 |          1 | +81 03 6743-6893   |
  | 6908862 | 2023-08-12 05:51:52 |          1 | +44 0131 8414 7888 |
  | 4134961 | 2023-08-12 05:51:42 |          1 | +1 614-908-1719    |
  | 4134961 | 2023-08-12 05:51:42 |          1 | +44 0131 9898 8958 |
  +---------+---------------------+------------+--------------------+
  10 rows in set (0.00 sec)

  mysql> WITH latest_event AS (
    SELECT 
      event_id 
    FROM 
      performance_schema.events_statements_history_long 
    WHERE 
      sql_text LIKE 'SELECT u.id%' 
    ORDER BY 
      event_id DESC 
    LIMIT 1
  ) 
  SELECT 
    event_name, 
    TRUNCATE(
      TIMER_WAIT / POW(10, 9), 
      3
    ) AS 'duration (msec)' 
  FROM 
    performance_schema.events_stages_history_long stg 
    JOIN latest_event ON stg.nesting_event_id = latest_event.event_id 
  UNION 
  SELECT 
    "total", 
    TRUNCATE(
      TIMER_WAIT / POW(10, 9), 
      3
    ) 
  FROM 
    performance_schema.events_statements_history_long stmt 
    JOIN latest_event ON stmt.event_id = latest_event.event_id;

  +------------------------------------------------+-----------------+
  | event_name                                     | duration (msec) |
  +------------------------------------------------+-----------------+
  | stage/sql/starting                             |           0.261 |
  | stage/sql/Executing hook on transaction begin. |           0.003 |
  | stage/sql/starting                             |           0.016 |
  | stage/sql/checking permissions                 |           0.006 |
  | stage/sql/checking permissions                 |           0.005 |
  | stage/sql/Opening tables                       |           0.134 |
  | stage/sql/init                                 |           0.008 |
  | stage/sql/System lock                          |           0.023 |
  | stage/sql/optimizing                           |           0.034 |
  | stage/sql/statistics                           |           0.087 |
  | stage/sql/preparing                            |           0.074 |
  | stage/sql/executing                            |            0.74 |
  | stage/sql/end                                  |           0.003 |
  | stage/sql/query end                            |           0.003 |
  | stage/sql/waiting for handler commit           |           0.025 |
  | stage/sql/closing tables                       |           0.019 |
  | stage/sql/freeing items                        |           0.176 |
  | stage/sql/cleaning up                          |           0.003 |
  | total                                          |           1.654 |
  +------------------------------------------------+-----------------+
  19 rows in set (0.00 sec)

[0]: https://github.com/stephanGarland/genSQL # shameless plug; it's super messy and probably unintuitive, but it's getting better/faster and has been a fun ride learning how fast you can make Python (and when to offload to C).


With an index on User (createdAt, id) and one on ContactMethod (priority, type, userId), it should be fast (check that the execution plan starts with User). Except if a lot of recent users have no phones, but that will not be better in a single table (except with columnar storage).


I imagine so too. You'll be able to iterate over the users in order and hit an instant index on contact method.

    select
      *
    from
      user
    where
      exists (
        select
          true
        from
          contact_method cm
        where
          cm.user_id = user.id
          and cm.method = 'phone'
          and cm.primary
      )
    order by
      created_at desc
    limit 10

    -- partial index is even faster
    create index primary_phone on contact_method (user_id) where method = 'phone' and primary;



In my experience with SQL, a query like that should return in under a second even if you have 100k or more users.

There are some other tricks you can use if you're clever/lucky as well. If you're just using integer IDs (which is reasonable if your system isn't distributed) then you could order by userId on your ContactMethod table and still get the same speed as you would with no join.



> If there are a very large number of users, and a very large number of phone number primary contacts, you cannot make this query fast/efficient

I think specific numbers would help make this point better. With a few hundred thousand to low millions of users this should be plenty fast in Postgres for example. That’s magnitudes more than most startups ever reach anyway.



Lots of replies to this one! I created a little benchmark that you can easily run yourself, as long as you have Docker installed. It shows how, for cases like the one I described above, the only way to have consistently fast queries (i.e. even with a cold cache) is to denormalize, so you can create the ideal compound index. The normalized/join version takes 15x longer, which can be the difference between 1s and 15s queries, 2s and 30s, etc.

The benchmark: https://gist.github.com/yashap/6d7a34ef37c6b7d3e4fc11b0bece7...

Note: I think in almost all cases you should start with a normalized schema and use joins. But when you hit cases like the above, it's fine to denormalize just for these specific cases - often you'll just have one or a few such cases in your entire app, where the combination of data size/shape/queries means you cannot have efficient queries without denormalizing. And when people say "joins are slow", it's often cases like this that they're talking about - it's not the join itself that's slow, but rather that cross-table compound indexes are impossible in most RDBMSes, and without that you just can't create good enough indexes for fast queries with lots of data and cold caches.


Nice benchmark script. With EXPLAIN (ANALYZE, BUFFERS), I see that

* normalized/join version needs to read 5600 pages

* normalized/join version with an additional UNIQUE INDEX .. INCLUDE (type) needs to read 4500 pages

* denormalized version only needs to read 66 pages, almost 100x fewer

Related to this pagination use case, when using mysql, even the denormalized version may take minutes: https://dom.as/2015/07/30/on-order-by-optimization/


Ooh ty, will give that article a read! And yeah, that's really the trick to queries that are consistently fast, even with cold caches - read few pages :)


Yes, this is the exact situation where SQL falls short. You can't make cross-table indexes to serve OLAP-esque queries, and "the most recent X" is the common one for pagination in applications. I prefer to denormalize manually at write time in a transaction, rather than use triggers or materialized views.


Why do you prefer manually doing this rather than using materialized views? Materialized views seem easier to create and maintain?


Because they are O(n) to refresh, so either you settle for eventual consistency or have expensive writes. By forwarding just the index data to the right table you maintain a consistent, idiomatic index at O(1) write cost.
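
A sketch of what I mean by forwarding the index data, using the users/users_telephones tables from the benchmark above and a hypothetical narrow table keyed the way the hot query reads:

    create table user_secondary_phone_idx (
        user_id    int primary key,
        created_at timestamp not null,
        phone      varchar not null
    );
    create index on user_secondary_phone_idx (created_at);

    -- the write forwards the index data in the same transaction
    begin;
    insert into users_telephones (user_id, is_primary, telephone)
    values (42, false, '+1 555-0100');
    insert into user_secondary_phone_idx (user_id, created_at, phone)
    select id, created_at, '+1 555-0100' from users where id = 42
    on conflict (user_id) do update set phone = excluded.phone;
    commit;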


Have you considered implementing this with database triggers instead of in your application logic?

Requires a bit of brainpower to set up a system around it, but it makes your application logic dramatically simpler.

(You don't have to remember to update `foo.bar` every time you write a function that touches `moo.bar`, and if you run migrations in SQL, the updates will also cascade naturally).

It's really high up on my personal wish list for Postgres to support incrementally-materialized views. It doesn't seem like it would be impossible to implement (since, as I suggested, you can implement it on your own with triggers), but IDK, I assume there are higher-priority issues on their docket.
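
For a concrete sketch of the `foo.bar` / `moo.bar` example in Postgres, assuming `foo` references `moo` via a hypothetical `moo_id` column:

    -- copy moo.bar onto foo.bar whenever it changes
    create or replace function sync_foo_bar() returns trigger as $$
    begin
        update foo set bar = new.bar where foo.moo_id = new.id;
        return new;
    end;
    $$ language plpgsql;

    create trigger moo_bar_sync
    after update of bar on moo
    for each row execute function sync_foo_bar();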


Considered and rejected, because triggers complicate other database operations like restores. Triggers to me are like global operations: they make a strong statement about storage which does not tend to stay true over longer time horizons when you consider all the stakeholders of the data storage system. They make sense as an application feature, but they are implemented as globals, and this is where the problems begin.


Agreed. Materialized views will be limited in use until we get nice incrementally updating versions.


If the tables involved in the join have 100M+ records and the join is on varchar columns, what I do to improve performance is add an additional integer column that is a CRC (or hash, if you prefer) of the varchar one, and join on the integer column instead.
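
A sketch in MySQL syntax, with hypothetical big_a/big_b tables joined on a varchar `code` column:

    alter table big_a
        add column code_crc int unsigned generated always as (crc32(code)) stored,
        add index idx_a_code_crc (code_crc);

    alter table big_b
        add column code_crc int unsigned generated always as (crc32(code)) stored,
        add index idx_b_code_crc (code_crc);

    -- join on the integer, keep the varchar comparison to resolve CRC collisions
    select ...
    from big_a a
    join big_b b on a.code_crc = b.code_crc and a.code = b.code;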


That seems weirdly convoluted. So you store two columns to represent the foreign key?


If you use a varchar as a FK then you're definitely doing something wrong from the beginning. The OP was talking about getting the phone number under certain conditions, and a phone number column is a varchar.


I don’t think they ever did a lookup on phone number and even if you needed to, you would just index it and it’d be fast. You can even use things like tri-gram indexes to give you super fast partial phone number matching.


I agree but I’m talking in the context where you can’t vertically scale anymore.

I also don't think it's worth the trouble of "never using joins" for an existing project. Denormalize as necessary. But for a green one, I honestly think that since your access patterns become understood as you go, you can completely get rid of joins.

Again, assuming your new project can't fit on a single machine. If it can, you're best off just following the "traditional" advice, or better yet keeping everything in memory.


Well then you're only really talking about < 0.1% of projects, since a single server and replication and caching will get you as far as you need, even for most social networks.

And if you are needing to massively shard because you're Facebook or Twitter, then it's not much of a hot take at all. But it's also massively oversimplified advice. Because the tradeoffs between joins and denormalization depend entirely on each specific scenario, and have to be analyzed use case by use case. In many cases, joins still win out -- e.g. retrieving the profile name and photo ID for the author of each post being displayed on your screen.

I'm just worried that people without experience will see your advice and think it's a good rule of thumb for their single-server database, because they think joins are scary and you've provided some kind of confirmation.


How is it that your new project can't fit on a single machine?


Could be plenty of reasons. Say you want to support spiky traffic like whatever is related to events, news, realtime information, etc., or highly seasonal stuff like a school portal or selling flowers on Mother's Day. In those cases you want the ability to scale horizontally very quickly to accommodate the spikes.


Everyone thinks they start their project with Facebook scale problems.


Perhaps they're building an internet search engine.


Also, one could just create a materialized view for the join and solve the problem at the proper layer.
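
e.g., against the users/users_telephones tables from the benchmark elsewhere in the thread, roughly:

    create materialized view user_primary_phone as
    select u.id, u.created_at, ut.telephone
    from users u
    join users_telephones ut on ut.user_id = u.id and ut.is_primary;

    create unique index on user_primary_phone (id);
    create index on user_primary_phone (created_at);

    -- re-run on whatever cadence the product tolerates
    refresh materialized view concurrently user_primary_phone;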


9 out of 10 will fail before they need to scale


There are "tall" applications and "wide" applications. Almost all advice you ever read about database design and optimization is for "tall" applications. Basically, it means that your application is only doing one single thing, and everything else is in service of that. Most of the big tech companies you can think of are tall. They have only a handful of really critical, driving concepts in their data model.

Facebook really only has people, posts, and ads.

Netflix really only has accounts and shows.

Amazon (the product) really only has sellers, buyers, and products, with maybe a couple more behind the scene for logistics.

The reason for this is because tall applications are easy. Much, much easier than wide applications, which are often called "enterprise". Enterprise software is bad because it's hard. This is where the most unexplored territory is. This is where untold riches lie. The existing players in this space are abysmally bad at it (Oracle, etc.). You will be too, if you enter it with a tall mindset.

Advice like "never use joins" and "design around a single table" makes a lot of sense for tall applications. It's awful, terrible, very bad, no-good advice for wide applications. You see this occasionally when these very tall companies attempt to do literally anything other than their core competency: they fail miserably, because they're staffed with people who hold sacrosanct this kind of advice that does not translate to the vast space of "wide" applications. Just realize that: your advice is for companies doing easy things who are already successful and have run out of low-hanging fruit. Even tall applications that aren't yet victims of their own success do not need to think about butchering their data model in service of performance. Only those who are already vastly successful and are trying to squeeze out the last juices of performance. But those are the people who least need advice. This kind of tall-centered advice, justified with "FAANG is doing it so you should too" and "but what about when you have a billion users?" is poisoning the minds of people who set off to do something more interesting than serve ads to billions of people.


For Amazon, if we consider everything for website retail purchases, I would estimate tens of thousands of table schemas. This is counting:

- Sellers (Amazon, third party, retail store)
- Inventory (forecasting, recommendation)
- Customers (comments, ratings, returns, preferences)
- Warehouses (5+ distinct types, filled with custom machines)
- Transit Options (long haul, air, vans, cars, bikes, walking, boats)
- Delivery Partners (DSP, Flex, Fedex, UPS - forecasting capacity here)
- Routing (between warehouses, within warehouses, to specific homes)
- skipping AWS
- skipping billing
- skipping advertising on amazon.com (bidding, attribution, etc)

There are optimizations and metrics collected, and packages transitioning, between all these layers. There are hundreds of "neat projects" running to special-case different things; all of them useful but adding complexity.

For example ordering prescriptions off Amazon pharmacy needs effectively its own website and permissions and integrations. Probably distinct sorting machines with supporting databases for them. Do you need to log repairs on those machines? Probably another table schema.

You want to normalize international addresses? And fine tune based on delivery status and logs and customer complaints and map data? Believe it or not, like 20 more tables. Oh, this country has no clear addresses? Need to send experienced drivers to areas they already know. Need to track that in more tables.


But these aren't the website, right? Amazon runs "wide" enterprise systems in the back end for sure.


Oh yeah these are "wide" enterprise systems, and not for the website. I think we read the initial comment differently.


I apologize up front if I completely misunderstand your intent. However ...

> Amazon (the product) really only has sellers, buyers, and products, with maybe a couple more behind the scene for logistics.

Is a comically bad hot take that is so entirely divorced from reality. A full decade ago the item catalog (eg ASINs or items to purchase) alone had closer to 1,000 different subsystems/components/RPCs etc for a single query. I think you'd have to go back to circa 2000 before it could be optimistically described as a couple of databases for the item catalog.

DylanDmitri's sibling comment is a hell of a lot closer to the truth, and I'd hazard it's still orders of magnitude underestimating what it takes to go from viewing an item detail page to completing checkout, let alone picking or delivery. There's a reason the service map diagram, again circa 2010, was called "the deathstar."

> "FAANG is doing it so you should too" and "but what about when you have a billion users?" is poisoning the minds of people

This part I completely agree with. And many individual components in those giant systems are dead simple. I dare say the best ones are simplistic even.


Ex-Amazonian here, and while I agree with the facts you present, I do think the "tall" vs "wide" debate is being misapplied here.

Amazon is extremely and perversely obsessed with, and good at, building decoupled systems at scale, which in essence means lots and lots of individual separate "tall" systems, instead of monolithic "wide" systems.

So IMO, Amazon subscribes to a "forest-of-'tall'-services" philosophy. And even at that meta level, I would say the forests are better off when they grow taller, rather than wider.


> This kind of tall-centered advice, justified with "FAANG is doing it so you should too" and "but what about when you have a billion users?" is poisoning the minds of people

The world runs on success stories, not on technology. I wish "wide" thinking were the default, both for un-delusion and for better development in this area. But everyone is amazed by Facebook (not the site, just the money), so they have to imitate it, like those tribes who build jets out of wood.


I agree with the characterization of applications you've laid out and think everyone should consider whether they're working on a "tall" (most users use a narrow band of functionality) or a "wide" (most users use a mostly non-overlapping band of functionality) application.

I also agree with your take that tall applications are generally easier to build engineering-wise.

Where I disagree is that I think in general wide applications are failures in product design, even if profitable for a period of time. I've worked on a ton of wide applications, and each of them eventually became loathed by users and really hard to design features for. I think my advice would be to strive to build a tall application for as long as you can muster, because it means you understand your customers' problems better than anyone else.


> I've worked on a ton of wide applications, and each of them eventually became loathed by users and really hard to design features for.

Yes, I agree that this is the fate of most. But I refuse to believe it's inevitable; rather, I think it comes from systemic flaws in our design thinking. Most of what we learn in a college database course, most of what we read online, most all ideas in this space, transfer poorly to "wide" design. People don't realize this because those approaches do work well for tall applications, and because they're regarded religiously. This is why I call them so much harder.


> Yes, I agree that this is the fate of most. But I refuse to believe it's inevitable

Yes, exactly. It is not inevitable; I've worked on several "enterprise" software suites that did not suffer from this problem. However! They all had a period in their history where they did, and this is why:

Early on in a company's history there will be a number of "big" customers from whom most of the revenue is coming. To keep those customers and the money flowing, bespoke features are often added for these customers, and these accumulate over time. This is equivalent in character to maintaining several forks of an OSS project. Long term, no forward progress can be made because all the time ends up going to maintenance.

The solution to this sorry state is to transition to an "all features must be general to the product" rule and ruthlessly enforce it. That will also mean freezing customer-specific "branches", and there will be a temporary hit to revenue. Customers need to be conditioned to the "no bespoke features" policy, and they need to be sold on the long-term benefits and be brought along for the ride.

This then enables massive scaling benefits, and the end of all your time in maintenance.


Thanks I think this is a really interesting way to look at things.

What is the market for "wide" applications though? It seems like any particular business can only really support one or two of them, for some that will be SAP and for others it might be Salesforce (if they don't need much ERP), or (as you mentioned) some giant semi homebrewed Oracle thing.

Usually there is a legacy system which is failing but still runs the business, and a "next gen" system which is not ready yet (and might never be, because it only supports a small number of use cases from the old software and even with an army of BAs it's difficult to spec out all the things the old software is actually doing with any accuracy).

Or am I not quite getting the idea?


I think you're getting the idea -- both your points kinda highlight that this is something that companies want, but are not really getting.

As for the market, various sources have the "enterprise software market", whatever that means, at somewhere around $100 billion to $300 billion. We also see companies trying over and over to do this kind of thing. The demand is clearly there.

Certainly the mandate "help run the business" is a wide concern, and that's an OK working definition of "enterprise", and what most existing solutions are trying to do. There are hundreds of interconnected concerns, lots of things to coordinate, etc.

There are other wide concerns, though. Almost anything in engineering and science. Take, for example, the question "how can we reduce our greenhouse gas emissions?" which a lot of companies are asking (or being forced to ask). If you wanted to build a SAAS product for helping companies reduce their GHG, you've got a wide problem, because there are a thousand activities that can emit GHG, and any given company is going to be doing dozens of them at once. But each company is different. Each state and country thinks of things differently. You might not even have the same calculations state-to-state.

Hard problems in science and engineering are just naturally cross-disciplinary, meaning your system has to know a lot of things about a lot of subjects. There are just thousands of little complicating differences and factors. If you're trying to solve a problem like this, absolutely do not de-normalize your database.


Lotus notes is wide… I imagine their scope creep checker was just a sticky note that said Absolutely!!


I miss Notes - it was really a better way to organize companies than anything that came later. Historical valuable data, records of why decisions were made, ephemeral email-like things but for groups, user-programmable if it didn't quite match your needs, robust encryption: it had it all.


Oh, I always just assumed Lotus Notes was just a lesser Outlook. Can you give examples - such as how did it capture why decisions were made? That sounds ... hard, or just "someone wrote it down".


It was a low/no-code environment; anyone (with enough rights) could knock up a simple app with rules/workflows and share it with the company. It made collecting, distributing and organising information easy if you knew what you were doing. It also created complex monsters, as it was both too easy and too hard to use. I liked it a lot; we moved from Notes to Exchange and SharePoint back in the day and it was awful for efficiency. We required so many more people to do the same things. Luckily I left shortly after.


Oh.

I struggle with the value of low/no code vs learning to code and providing common libraries.


In your company you have a lot of smart people other than coders. And Notes had a rich collaborative set of intrinsics, so you could whip out workflow applications the way an accountant does with spreadsheets. And built-in security and auditing and all that. And since you had the ability to craft tools to fit the exact situation, automation of processes went fast and was done by people familiar with the business side of the process. We did have a Notes team that would do apps for teams that couldn't, but there was also a rich ecosystem of line-of-business apps that were so much better than spreadsheet apps or Access apps.


Facebook was built by Zuckerberg without any specialized domain knowledge, and people would like to go that route because it seems easier: to make a Twitter/FB/Instagram clone you don't really have to know anything about insurance or handling industrial waste. Then it is basically people joining because other people joined.

Nowadays there are a bunch of regulations on handling user data that one cannot operate without knowing, but when these companies started that was not an issue.

My point is that the market for "wide" applications is huge, but it is much more fragmented. Of course SAP and Salesforce take a cut of it by having "one app for everything".

To get contracts you have to have specialized knowledge in a specific area, so that your SaaS app provides more value than configuring some crappy version in SAP. So you cannot just make an app in your basement and watch people sign up; you have to spend a lot of legwork getting customers. That is why it is not really a "hot" area for startups: there is a lot of good money there, but not unicorn money, and most likely you won't be able to build 2 or 3 different specialist niche products to diversify your investment. You have to commit to a niche, which also makes it not that interesting for a lot of entrepreneurs, who would most likely like to jump to something more profitable when possible.


> What is the market for "wide" applications though?

Just my experience, but essentially these target industries, not necessarily consumers or singular entities. Hence the term "enterprise". As someone who worked on a fairly reasonable ERP for academic purposes, even just calculating a GPA is extremely complicated in the backend:

    * There are multiple schemes for calculating GPAs
    * Each scheme needs to support multiple grading types (A-F, pass/fail, etc)
    * Each scheme needs to support multiple rounding rules
    * Displays of GPAs will need to be scaled properly based on the output context
    * GPA values will need to be normalized for use in calculations in other parts of the system
    * State legislatures mandate state-specific usages of GPAs which must be honored for legal compliance
    * All GPA calculations must have historical context in case the rules changes so that old transcripts can be revived correctly
    * Institutions themselves will have custom rules (maybe across schools or departments) for calculations which must be incorporated into everything else
    * This pretty much has to work every time
I don't know exactly how many tables GPAs themselves took, but overall the system was over 4,000 tables and 10,000+ stored procedures/functions. Also, I worked in the State of Texas which has its own institution-supported entity performing customizations to this ERP for multiple universities that are installed separately but required for full compliant operation.

I would compare this to most modern "tall" applications which would more-than-likely offer you maybe up to 3 different GPA options with some basic data syncing or something. They might offer multiple rounding types if they thought that far. These apps are generally extremely niche and typically work for very basic workloads. They can capture a lot of easy value for entry-level stuff but immediately fail at everything else.
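
To give a flavor of what even the GPA slice implies, a heavily simplified, hypothetical sketch of a few of the tables involved:

    create table grading_scheme (
        id             int primary key,
        name           varchar not null,
        rounding_rule  varchar not null,
        effective_from date not null, -- historical context for when rules change
        effective_to   date
    );

    create table grade_point_mapping (
        scheme_id  int references grading_scheme(id),
        grade_code varchar not null,  -- 'A'..'F', pass/fail codes, etc.
        points     numeric(4,2) not null,
        primary key (scheme_id, grade_code)
    );

    create table institution_gpa_rule (
        institution_id int not null,
        scheme_id      int references grading_scheme(id),
        scope          varchar not null -- school- or department-level overrides
    );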


Your initial premise is flawed though. For example, as someone who worked on Facebook's database team, I can tell you that Facebook has thousands upon thousands of tables (distinct logical table definitions, i.e. not accounting for duplication from physical sharding or replication).

Some of these store things for user-facing product entities and associations between them -- you missed the vast majority of product functionality in your "people, posts, and ads" claim. Others are for internal purposes. Some workloads use joins, others do not.

Nothing about Facebook's database design is "tall", nor is it "easy". There are a lot of huge misconceptions out there about what Facebook's database architecture actually looks like!

Advice like "never use joins" and "design around a single table" is usually just bad advice for most applications. It has nothing to do with Facebook, and ditto for Amazon based on the sibling replies from Amazon folks.


Doesn't that also say something like, it's an easier road to success if you find the tall way to market and scale, scale, scale once you find it? What is the "wide" success story to take inspiration from?


That’s awesome …. I’ve gotta remember this when our PO’s want to add things that have absolutely no business being in our software…


I agree with your sentiment, but even enterprises work on multiple “tall” features.

If they didn’t then I’d change my advice to be simply multi tenant per customer and replicate into a column store for cross customer analytics.

What advice would you give for a “wide” application?


Very insightful, thank you for writing this.


Over the years I think I've encountered more pain from applications where the devs leaned on denormalization than from those that developed issues with join performance on large tables.

You can mash those big joins into a materialized view or ETL them into a column store or whatever you need to fix performance later on, but once someone has copied the `subtotal_cents` column onto the Order, Invoice, Payment, NotificationEmail, and UserProfileRecentOrders models, and they're referenced and/or updated in 296 different places...it's a long road back to sanity.


Can't say I've ever come across a scenario where a join itself was the performance bottleneck. If there's any single principle I have observed, it's "don't let a table get too big". More often than not it's historical-record type tables that are the issue - the amount of data you need for day-to-day operations is usually a tiny fraction of what's actually in the table, and you're bound to find that operations on massive tables get slow no matter what indexes you have (and even the act of adding more indexes becomes problematic. And just indexing all columns isn't enough for traditional RDBMSes at least - you have to index the right combinations of columns for them to be used. Might be different for DynamoDB).


don't let a table get too big

I'd amend that to "don't let your scan coverage get too big". Understanding how much data must be loaded in memory and compared is essential to writing performant database applications. And yes, those characteristics change over time as the data grows, so there may be a one-size-fits-all solution. But "table too large" can pretty much always be solved by adding better indexes or by partitioning the table, and making sure common queries (query's?) hit only one partition.

As a simple example: a lot of queries can be optimized to include "WHERE fiscal_year = $current". But you need to design your database and application up front to make use of such filtered indexes.
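
A rough sketch of what that looks like in Postgres-flavored SQL (table and column names here are just placeholders for illustration):

    -- composite index that lets the planner narrow by fiscal year first
    CREATE INDEX idx_orders_year_customer
        ON orders (fiscal_year, customer_id);

    -- or pin a partial index to the current year (recreate it each year)
    CREATE INDEX idx_orders_fy2025_customer
        ON orders (customer_id)
        WHERE fiscal_year = 2025;

    -- queries only benefit if they repeat the same predicate
    SELECT *
    FROM orders
    WHERE fiscal_year = 2025
      AND customer_id = 42;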


If your primary issue is read/query performance, then sure, well-designed indexes, partitioning and, as you say, carefully constructed WHERE clauses are often enough to maintain decent performance even with millions of records. But if you then need to do deletions, bulk updates, or schema transformations, you're in for some serious downtime.


In enterprise clients you commonly run into issues where the company thinks you have to save all data forever. Very often this runs in a pattern where the application is lightly used at first, then uptake increases over time. Then you run into the slowness issue. They typically expand DB sizing, but eventually run into the problem where archiving is needed. This can be a huge problem when it's an afterthought instead of part of the primary design. All kinds of fun when you have key relationships between different tables.


It's more that once it gets to a certain size (say, 100s of 1000s of rows), doing anything with the table is painfully slow, often requiring you to take your application offline for considerable periods. Even deleting rows can take 10s of minutes at a time, and it can certainly take a very long time to work out what indexes need to be added and whether they're actually helping.

Yes, sometimes the pressure comes from management etc., but more often than not it would be premature optimisation to add the archiving, so it's a matter of finding a balance and "predicting" at what point the archiving needs to happen.

Table partitioning can help too but only so much.


Dynamo is quick for that, so long as you are picking good partition keys.

Otherwise, it'll throw hot-key throttling at you if you start querying one partition too much.


> Design your application's hot path to never use joins.

Grab (the Uber of Asia) did this religiously and it created a ton of friction within the company due to the way the teams were laid out. It always required one team to add some sort of API that another team could take advantage of. Since the first team was always so busy implementing their own features, it created roadblocks with other teams, and everyone started pointing fingers at each other to the point that nothing ever got done on time.

Law of unintended consequences


Hard to follow the link. How would you join two tables between teams that don't communicate?


You don’t, that’s the problem.


yes, this is a fair point. there's no free lunch after all. without knowing more about what happened with Grab I'd say you could mitigate some of that with good management and access patterns, though.


All in all though, I don't think that 'never use joins' is a good solution either since it does create more developer work almost every way you slice it.

I think the OP's solution of looking more closely at the hot paths and solving for those is far better than re-architecting the application in ways that could create unintended consequences. People don't consider that enough.

Don't forget that hot path resolution is the antithesis of 'premature optimization'.

> you could mitigate some of that with good management and access patterns

the CTO fired me for making those sorts of suggestions about better management, and then got fired himself a couple months later... ¯\_(ツ)_/¯... even with the macro events, their stock is down 72% since it opened, which doesn't surprise me in the least bit having been on the inside...


Grab was the company Steve Yegge left Google to go to. He quit during COVID when he could no longer travel to Asia, and his retrospective is glowing, including of the CTO Mark Porter(I think?): "frankly, the “Grab Way” of collaboration teaches you life skills, such as psychological safety and inclusiveness, which work everywhere else as well. [...] We US Grabbers made many mistakes on the journey towards becoming better Grabbers. Mark Porter led the charge on figuring it out, and we’ve learned a ton from him as he has evolved as a person before our eyes [...] I want to thank Mark especially for helping me grow as a leader and as a person. It takes humility to become a better person, and Grab is a humbling place. The core company value of humility comes straight from the CEO and cofounder Anthony Tan, who is one of the most humble, down-to-earth people you’ll ever meet.".

Is that, uhh, rose-tinted glasses?

2020: https://steve-yegge.medium.com/saying-goodbye-to-the-best-gi...

Previously, 2018: https://steve-yegge.medium.com/why-i-left-google-to-join-gra...

[I don't have any knowledge of Grab or the people involved other than reading these two blog posts, or of the parent commenter].


Sorry, I gave the wrong role, it was this guy... the VP of Eng [0] who fired me and then left for 'personal reasons'... he was fired. Classic incompetent VP of Eng who didn't know anything about computers. Note how he never really found another position of equal stature to the one at Grab...

I came along as part of an acquihire of a Vietnamese team that I just happened to be managing at the time. Great team of people. I negotiated a stupidly high salary, probably because they weren't used to someone negotiating at all (highest-paid eng in all of Singapore), and that was part of why they were upset with me: I was making more than that VP of Eng and stirring the pot with comments about their poor architecture decisions.

Yegge was a good hire, but probably the wrong company for him given the political differences. I think Yegge started a bit after I was fired. I remember thinking to myself that he either wasn't going to be very effective or he wouldn't last long. To his credit, I think he lasted longer than I would have bet. They'd had another ex-FB CTO much earlier, before me, who was a train wreck [1] and ended up suing the company. They were without a CTO for a long time; they probably thought Yegge could fill that role and ended up hiring Mark instead.

There was a definite distinction between the Singapore and US/Seattle teams; at least during the short time I was there, they pretty much didn't talk to or like each other at all. It made getting those API calls almost impossible.

[0] https://www.techinasia.com/kumaravel-leaves-grab

[1] https://www.techinasia.com/wei-zhu-leaves-grabtaxi


My hot take: always use a materialized view or a stored procedure. Hide the actual, physical tables from the Application's account!

The application doesn't need to know how the data is physically stored in the database. They specify the logical view they need of the data. The DBAs create the materialized view/stored procedure that's needed to implement that logical view.

Since the application is never directly accessing the underlying physical data, it can be changed to make the retrieval more efficient without affecting any of the database's users. You're also getting the experts to create the required data access for you in the fastest, most efficient way possible.
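
Roughly, in Postgres-flavored SQL (schema, role, and table names are invented for the sake of the example):

    -- base tables live in a schema the application role can't see
    CREATE SCHEMA core;
    CREATE TABLE core.customers (id bigint PRIMARY KEY, name text, region text);

    -- the app role only gets a curated view in a separate schema
    CREATE SCHEMA api;
    CREATE VIEW api.customers AS
        SELECT id, name FROM core.customers;

    GRANT USAGE ON SCHEMA api TO app_role;
    GRANT SELECT ON api.customers TO app_role;
    -- no grants on schema core, so the physical tables stay hidden

    -- DBAs can now split, partition, or reshape core.customers,
    -- as long as api.customers keeps presenting the same columns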

We've been doing this for years now and it works great. It's alleviated so many headaches we used to have.


"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)

> The application doesn't need to know how the data is physically stored in the database.

In all the applications that I've designed, the application and the database design are in sync. That's not to say that you wouldn't use materialized views to deal with certain broad queries, but I just don't see how this level of abstraction would make a big difference.


There's the physical data model, and there's the logical data model. The application(s) only deal with the logical data model. They don't need to worry about how the data is physically stored.

This allows for the forward-compatible evolution of the logical data model which may necessitate extreme changes to the physical data model to keep everything performant. The client application(s) aren't affected by all the changes.


The application usually has a logical data model in the form of objects representing the data, which, in turn, stores that data to the physical model. This could be a separate middle-tier layer or not. You're proposing another logical model in the database, and I don't see the advantage.

These models shouldn't be significantly different from each other. How you store the data physically should be very much how it's represented logically. And then there's how the UI represents the data model to the user.

If you have 2 tables -- a summary and a detail, for example -- that's going to be similarly represented to the user in the UI. If you break that summary down into another table (because you need to allow, say, more addresses) you're going to bubble that right up to the UI for saving and loading. For querying, you might have a materialized view that summarizes the detail data when viewing a bunch of summaries together, but that's in addition to the rest of the model.

I'm going to need some kind of example to understand the advantage of this.


Interface contracts and indirection FTW.

2011, "Materialized Views" by Rada Chirkova and Jun Yang, https://dsf.berkeley.edu/cs286/papers/mv-fntdb2012.pdf

> We cover three fundamental problems: (1) maintaining materialized views efficiently when the base tables change, (2) using materialized views effectively to improve performance and availability, and (3) selecting which views to materialize. We also point out their connections to a few other areas in database research, illustrate the benefit of cross-pollination of ideas with these areas, and identify several directions for research on materialized views.


That's how it was at AOL. But also, in general, people who wrote C code never designed SQL stuff. We'd come up with some requirements, meet with a DBA (who at the time got paid a lot more money, I always assumed because their work was inherently dull), they'd put together stored procedures for us to call, and then do whatever on the physical table side. They did sometimes change the physical table stuff without us having to change anything (not sharding tho, that was edit a TCL file and restart when they said to restart).


This doesn't work because DBAs are rarely on the dev team's sprint schedule. If the DBAs are blocking them, devs can and will figure out how to route around the gatekeepers. In general, keep the logic in the app, not the db.


Also, many companies don't even have DBAs these days. DBA is, at best, a part time job for a senior+ engineer.


We have sprints. We also have super responsive DBAs. Keeping the logic in the app is the path to unresponsive database calls. Been there, done that. Not going back to that crap.

In all seriousness, I won't work for an organization that works the way you describe. It's a red flag and a sign of organizational issues, personality issues, and ineffective management. Don't need to waste my time at a place like that.


Yeah, I like doing this too. Not _always_, but for a few things. I use it to emulate partial functions.

In a show hosting/ticket booking app for example, I never want in any case user facing search/by-id endpoints to serve a show from 2 months ago. So I create a view `select * from shows where time > now`. I can now use this as a 'table' and apply more filters and joins to this if I wish.
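
Something like this, in Postgres-flavored SQL (column names are guesses, not the real schema):

    CREATE VIEW upcoming_shows AS
        SELECT * FROM shows
        WHERE starts_at > now();

    -- user-facing endpoints query the view and layer their own filters on top
    SELECT * FROM upcoming_shows
    WHERE venue_id = 7
    ORDER BY starts_at
    LIMIT 20;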


But for saves, the underlying structure is visible?


You can update underlying data via a materialized view.


Heck, at least with SQL Server 2017, I've been able to write updates to a thin view, and so long as the columns referenced were not ambiguous, the database would handle the underlying tables and locks required.
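
For a simple single-table view this works on both SQL Server and Postgres; a minimal sketch with invented names:

    -- a thin view exposing a subset of one table's columns
    CREATE VIEW customer_contacts AS
        SELECT id, name, email
        FROM customers;

    -- writes through the view land on the underlying table,
    -- since the view is a plain single-table projection
    UPDATE customer_contacts
    SET email = 'new@example.com'
    WHERE id = 42;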


Premature denormalization is expensive complexity. Denormalization is a great tool, maybe an under-used tool. But you should wait until there are hot paths before using it.


I agree. To be clear, I'm not suggesting anyone start denormalizing everything. I'm saying if you're fortunate enough to be on a greenfield project, you should design the schema around the access patterns, which will surely be "denormalized," as opposed to designing a normalized schema and designing your access patterns around it.


Thanks, I hate it.

This seems close to the territory of "why do I need a database? I just keep a bunch of text files with really long names that described exactly what I did to compute the file. They're all in various directories, so if you need to find one just do some greps and finds on the whole system"

I recognize there's a big gap, but boy howdy does what you're suggesting sound messy.


> Design your application's hot path to never use joins

I had a Chief Architect who decreed this.

So engineers wound up doing joins in application code, with far worse performance, filtering, memory caching, etc.


If you don't use joins, how do you associate records from two different tables when displaying the UI? Do you just join in the application? Or something else?


this has opinionated answers.

if you ask Amazon, they might suggest that you design around a single table (https://aws.amazon.com/blogs/compute/creating-a-single-table...).

in my opinion it's easier to use join tables, which are what's sometimes temporarily created when you do a join anyway. in this case, you permanently create table1, table2, and table1_join_table2, and keep all three in sync transactionally. when you need a join you just select on table1_join_table2. you might think this is a waste of space, but I'd argue storage is too cheap for you to be thinking about that.
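
a rough sketch of what I mean, in postgres-flavored SQL (names and columns are made up; assume table1(id, table2_id, total_cents) and table2(id, name) already exist):

    -- the base tables stay as they are; the "join table" carries the
    -- denormalized columns the hot path needs, so reads never join
    CREATE TABLE table1_join_table2 (
        table1_id          bigint PRIMARY KEY,
        table2_id          bigint NOT NULL,
        table2_name        text   NOT NULL,
        table1_total_cents bigint NOT NULL
    );

    BEGIN;
    INSERT INTO table1 (id, table2_id, total_cents)
        VALUES (1001, 42, 5999);
    INSERT INTO table1_join_table2 (table1_id, table2_id, table2_name, table1_total_cents)
        VALUES (1001, 42, 'Ada Lovelace', 5999);
    COMMIT;

    -- the hot path is then a single-table lookup
    SELECT table2_name, table1_total_cents
    FROM table1_join_table2
    WHERE table1_id = 1001;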

that being said, you really have to design around your access patterns, don't design your application around your schema. most people do the latter because it seems more natural. what this might mean in practice is that you do mockups of all of the expected pages and what data is necessary on each one. then you design a schema that results in you never having to do joins on the majority, if not all, of them.


Would it be possible to simply use a materialized view for table1_join_table2?


The problem there would be that they fall out of date, and refreshing them is a heavy operation.

An alternative I've heard before is using triggers with regular tables, so updating one automatically updates the relevant other ones.
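
A minimal sketch of that trigger approach in Postgres (table and column names are hypothetical):

    -- orders carries a denormalized copy of customers.name
    CREATE OR REPLACE FUNCTION sync_customer_name() RETURNS trigger AS $$
    BEGIN
        UPDATE orders
        SET customer_name = NEW.name
        WHERE customer_id = NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customers_sync_name
        AFTER UPDATE OF name ON customers
        FOR EACH ROW
        EXECUTE FUNCTION sync_customer_name();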


Materialized views allow insert/update mechanics on many platforms. Alternatively you can refresh them.


> what this might mean in practice is that you do mockups of all of the expected pages and what data is necessary on each one. then you design a schema that results in you never having to do joins on the majority, if not all, of them.

Great suggestion! I had a role where I helped a small team develop a full stack, data-heavy application. I felt pretty good about the individual layers but I felt we could have done a better job at achieving cohesion in the big picture. Do you have any resources where people think about these sorts of things deeply?


2001, "Denormalization effects on performance of RDBMS", by G. L. Sanders and Seungkyoon Shin, https://www.semanticscholar.org/paper/Denormalization-effect...

> We have suggested using denormalization as an intermediate step between logical and physical modeling, to be used as an analytic procedure for the design of the applications requirements criteria ... The guidelines and methodology presented are sufficiently general, and they can be applicable to most databases ... denormalization can enhance query performance when it is deployed with a complete understanding of application requirements.

PDF: https://web.archive.org/web/20171201030308/https://pdfs.sema...


yeah, exactly. in my experience the vast majority of access patterns are designed around a normalized schema, when it really should be that the schema is designed around the access patterns and generously "denormalized" (which doesn't quite make sense in the context of a brand-new database) as necessary.


Single Table Design is the way forward here. I can highly recommend The DynamoDB Book [0] and anything (talks, blogs, etc) that Rick Houlihan has put out. In previous discussions the author shared a coupon code ("HACKERNEWS") that will take $20-$50 off the cost depending on the package you buy. It worked earlier this year for me when I bought the book. It was very helpful and I referred back to it a number of times. This github repo [1] is also a wealth of information (maintained by the same guy who wrote the book).

As an added data point, I don't really like programming books but bought this since the data out there on Single Table Design was sparse or not well organized; it was worth every penny for me.

[0] https://www.dynamodbbook.com/

[1] https://github.com/alexdebrie/awesome-dynamodb


And if you don't want to spend money, you can get the idea from this article:

https://www.alexdebrie.com/posts/dynamodb-single-table/

I'm really curious about real-life performance on different databases, especially in situations where RAM is smaller than the database size.


That article didn't appear to be suggesting single-table design was appropriate for general-purpose RDBMSes (or any database other than DynamoDB).


Normalization is not only about data storage but most importantly, data integrity.


Yes, but I assert that it's possible to use transactions to update everything consistently. Serializable transactions weren't really common when MySQL/Postgres first came out, but now that they're common in new ACID DBs, I think it's now possible to do with reasonable difficulty. If you agree with this, then it's easy to prove that the performance increase from denormalized tables is well worth the annoyance of updating all the dependencies transactionally.

I won't say that it's trivial to update all of your business logic to do this, but I think it's definitely worth it for a new project at least.


Denormalized transactions are not trivial unless you are using serializable isolation level which will kill performance. If you don't use serializable isolation level, then you risk either running into deadlocks (which will kill performance) or inconsistency.

Decent SQL databases offer materialized views, which probably give you what you want without all the headache of maintaining denormalized tables yourself.
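
For example, a sketch in Postgres (names invented); the database owns the denormalized copy and you just refresh it:

    -- one query defines the denormalized shape
    CREATE MATERIALIZED VIEW order_summaries AS
        SELECT o.id, c.name AS customer_name, o.total_cents
        FROM orders o
        JOIN customers c ON c.id = o.customer_id;

    -- a unique index lets the refresh run without locking out readers
    CREATE UNIQUE INDEX ON order_summaries (id);

    -- periodically, or after batches of writes:
    REFRESH MATERIALIZED VIEW CONCURRENTLY order_summaries;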


all fair points, but to be fair I don't necessarily think this makes the most sense for an existing project, for the reasons you state. I do think a new project would best be able to design around the access patterns in a way that eliminates most of the downsides.


Transactions are not only (actually mainly not) about atomicity. Of course it's possible to keep data integrity without normalisation, but that means you need to maintain the invariants yourself at the application level, and a bug could result in data inconsistency. Normalisation isn't there to make integrity possible, it's there to make (some) non-integrity impossible.

Nobody says you have to have only one view of your data though. You can have a normalised view of your data to write, and another denormalised for fast reads (you usually have to, at scale). Something like event sourcing is another way (which is actually pushing invariants to application level, in a structured way)


> that means you need to maintain the invariants yourself at application level

Foreign Keys.

Of course, now you have a new set of problems, but referential integrity isn't one of them.


You always need to compare write vs read performance.

Turning a single-table update into a 10-table one could tip your lock contention to the point where you are write-bound or, worse, start hitting retries.

Certainly it makes sense to move rarely updated fields to where they are used.

Similarly "build your table against your queries not your ideal data model" is always sage advice.


Ten years ago when DB engines were not as good and servers were not as large, I did something similar -- set up a trigger on insert/updates to certain relations that auto updates the main record with a cache column. Back then it was comma separated, but today I would obviously use JSONB. Back then it reduced latency significantly. Today, I would probably not attempt it.
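
Something along these lines, in modern Postgres (table and column names are hypothetical; posts.tag_cache is the JSONB cache column):

    CREATE OR REPLACE FUNCTION refresh_tag_cache() RETURNS trigger AS $$
    DECLARE
        pid bigint;
    BEGIN
        -- figure out which parent row to refresh
        IF TG_OP = 'DELETE' THEN
            pid := OLD.post_id;
        ELSE
            pid := NEW.post_id;
        END IF;

        -- rebuild the cached JSONB array of tag names for that post
        UPDATE posts
        SET tag_cache = (
            SELECT COALESCE(jsonb_agg(t.name), '[]'::jsonb)
            FROM post_tags pt
            JOIN tags t ON t.id = pt.tag_id
            WHERE pt.post_id = pid
        )
        WHERE id = pid;

        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER post_tags_refresh_cache
        AFTER INSERT OR UPDATE OR DELETE ON post_tags
        FOR EACH ROW EXECUTE FUNCTION refresh_tag_cache();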


I'd say that probably depends on what your hot path is. If it's write-heavy, then you'll probably end up with performance issues when you need to write the same data to multiple tables in a single transaction. And if all of those columns are indexed, it'll be even worse.


Very hot take indeed. As with all things, it depends; use the query planner to measure what actually makes a difference.

In our application we have one important join that actually makes things a lot faster than the denormalized alternative. The main table has about 8 references to an organization table. To figure out which rows should be selected for a particular organization, you could query on those 8 columns, making a very big WHERE/OR clause. As it turns out, PostgreSQL will usually end up doing a full table scan despite any index you would create.

Instead, there is an auxiliary table with two columns, one for organization and one reference to the main table. Joining on this table simplifies the query and also turns out to be much faster.
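
A rough sketch of that shape (names are simplified, not our actual schema):

    -- instead of: WHERE org_ref_1 = 7 OR org_ref_2 = 7 OR ... across 8 columns
    CREATE TABLE main_table_org (
        org_id  bigint NOT NULL,
        main_id bigint NOT NULL REFERENCES main_table (id),
        PRIMARY KEY (org_id, main_id)
    );

    -- one index-friendly join replaces the big OR clause
    SELECT m.*
    FROM main_table_org mo
    JOIN main_table m ON m.id = mo.main_id
    WHERE mo.org_id = 7;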


IME if a join is the problem, a join is not the problem.


> On this note, I have mixed feelings about Amazon's DynamoDB, but one things about it is to use it properly you need to plan your use first, and schema second. I think there's something you can take from this even with a RDBMS.

This captures the experience I've had with DynamoDB and document databases in general. They appear more flexible at first, but in truth they are much less flexible. You must get them right up front or you're going to be paying thousands of dollars every month in AWS bills just for DynamoDB. The need to get things right up front is the opposite of flexibility.


> - Design your application's hot path to never use joins. Storage is cheap, denormalize everything and update it all in a transaction.

Hard disagree. You just re-implemented a database engine in your application code. Poorly.


In traditional HN fashion, you are giving advice for applications that almost no one will ever need to build, because you will never, ever see the type of traffic/users for it.

Also, doing these things (de-joining, UUIDs, and indexing all columns - really unsure about that last one; why?) might be better later on, but at the start it will be a lot heavier.

Modern hardware and databases can take an incredible amount of traffic if you use them in the right and natural way without artificial tricks.


I’m wondering if indexes and materialized views can be used to do basically the same thing? That is, assuming they contain all the columns you want.


There's always money in the banana sta...materialized views. Materialized views will get you quite a ways on read heavy workloads.


As long as you're okay with the reads being a little out of date after writes occur.


The issue is writes, not reads.


I wish for a DB that lets me write a completely normalized schema, and then lets me specify how it should denormalize the schema for actual storage. There is no reason manual updates to denormalized DBs need to be hand-rolled every time. They are easy to deduce automatically.


These are called materialized views.


*updatable views.

Not every materialized view is updatable, and a view doesn't need to be materialized to be updatable.


>Design your application's hot path to never use joins. Storage is cheap, denormalize everything and update it all in a transaction. It's truly amazing how much faster everything is when you eliminate joins.

Does anybody have documentation about this, with examples?


See "Single Table Design" which I talked about in this comment above: https://news.ycombinator.com/item?id=37093357


And if you don't want to spend money, you can get the basic idea from this article: https://www.alexdebrie.com/posts/dynamodb-single-table/


Duplicate data to avoid joins, use serializable transactions to update all the duplicated data.


> index all columns

check out MariaDB "ColumnStore". it recently got merged into the upstream binary, and i started reading about it. ngl i was salivating a bit.


Yes I like the zero joins on hot paths approach. It can be hard to sell people on it. It’s a great decision for scaling though.


When your queries become slow because of joins, you haven't designed your table structures properly.


Materialized views provide a way to have joins that are accessed like tables!


Performance isn't king. The business is king.


I would rather just use a cache.


Squeeze what you've got, as hard as you can, then realize after squeezing for a while that if you squeezed here, here, and also... here, by changing how you think about a problem, suddenly you've got a lot left to get.

I spent two or so months optimizing the crap out of a majestic monolith. We started at under 2K RPS, when the PM thought, and the team repeatedly reported, that everything had been squeezed as much as it could be. Changing the hardware got us to a little under 3200 RPS, then to 4K RPS after just a few days of tinkering, to 10K RPS with a bit more effort, and to 40K RPS a week or so later. "Oh, that's enough, we don't need to go further." I then changed "quite a bit of stuff", which jumped us to 2M+ RPS, and then, a month later, to a consistent 40M+ RPS with low latency on a single box, and there is still some juice left in the box should we want to go a little harder.

Right now we're not even touching 5% of the capacity of what we can pull from, it was that much of a change, simply by changing how we think about the problems. Moving from the old server to the new server let us jump from around 1800 RPS to a hair over 3000 RPS. Adding more hardware didn't fix our underlying problems. Adding more complexity was just punting the problem down the road. But changing how to think about the problem? _That_ changed the problem. And changed our answer to the problem.


Very curious to learn more about what the monolith was doing so incredibly poorly that you managed to squeeze that much performance out of it. Poorly written queries? Too many queries? Lack of any caching? Doing things synchronously when they could've been done concurrently?


Some of that, some other bad practices. Lots of low-hanging fruit, then more esoteric changes.

https://justinlloyd.li/blog/how-much-cache-you-got-on-you/


He cached everything and delayed writes too. It's easy to make a system fast when it's not realtime.


Incredibly dismissive of somebody's work, aren't ya? I regret breaking my own self-imposed rule of never answering follow-up questions on HN because there's always somebody willing to hand-wave away six months of my life and 40 years of real-world experience with a flippant comment of "oh, but that's easy if you don't have too..."


Fragile ego too.


And you sir, are fucking toxic, just looking at your comment history I can tell that. So you calling me "fragile" carries about as much weight as the next idiot with an opinion.


Toxicity is in the eye of the beholder.


A quick Google yields very good examples of huge improvements at the single-algorithm level: https://youtu.be/c33AZBnRHks Easy to imagine how even smaller improvements across 10-100 steps of data processing can add up to even bigger gains.


+1 I'd like to know as well



Could you share what's the content/protocol of these requests? And what's the average payload size (for req and response each)?


You gotta share more lol. These are insane gains.


I cannot go too deep. It was some client work.

https://justinlloyd.li/blog/how-much-cache-you-got-on-you/


Thank you! Even that was insightful.


> Split up the monolith into multiple interconnected services, each with its own data store that could be scaled on its own terms.

Just to note: you don't have to split out all the possible microservices at this juncture. You can ask, "what split would have the most impact?"

In my case, we split out some timeseries data from Mongo into Cassandra. Cass's table structure was a much better fit — that dataset had a well defined schema, so Cass could pack the data much more efficiently; for that subset, we didn't need the flexibility of JSON docs. And it was the bulk of our data, and so Mongo was quite happy after that. Only a single split was required. (And technically, we were a monolith before and after: the same service just ended up writing to two databases.)

Ironically, later, an armchair architect wanted to merge all the data into a JSON document store, which resulted in numerous "we've been down that road, and we know where it goes" type discussions.


It's interesting that the idea of microservices is thrown around like an obvious "solution".

It's not.

"Scale up" MUST be the "obvious" solution. What is missed by many, and what this article touches on (despite saying that microservices are a "solid" choice), is that "scale up" is "scale out" without breaking the consistency of the DB.

There is a lot you can do to squeeze, and it's rare that you need to give up joins, data validations, and the rest: anti-patterns that get thrown around casually when performance problems happen.


I don't know what to tell you other than I've seen vertical scaling hit its ceiling, several times. The OP lists "scale vertically first" as a given; to an extent, I agree with it, and the comment you're responding to is made with that as a base assumption.

There are sometimes diminishing returns to simple scaling; e.g., in my current job, each new disk we add adds 1/n disks' worth of capacity. Each scaling step happens quicker and quicker (assuming growth of the underlying system). Eventually, you hit the wall in the OP, in that you need design level changes, not just quick fixes.

The situation I mention in my comment was one of those: we'd about reached the limits of what was possible with the setup we had. We were hitting things such as bringing in new nodes was difficult: the time for the replica to replicate was getting too long, and Mongo, at the time, had some bug that caused like a ~30% chance that the replica would SIGSEGV and need to restart the replication from scratch. Operationally, it was a headache, and the split moved a lot of data out that made these cuts not so bad. (Cassandra did bring its own challenges, but the sum of the new state was that it was better than where we were.)

Consistency is something you must pay attention to. In our case, the old foreign key between the two systems was the user ID, and we had specific checks to ensure consistency of it.


In a way, in the article they also did a split: specific heavy select queries were offloaded to a replica.


They could probably squeeze more depending on their workload patterns. RDBMSes typically optimize for fast/convenient writes. If your write load would be fine with a small increase in latency, then you can do a lot of de-normalization so that your reads can avoid tonnes of joins, aggregates, windows, etc. at read time. Update the write path so that you update all of the de-normalized views at write time.

Depending on your read load and application structure you can get a lot more scale with caching.

Decent article.


Funny enough I frequently have the opposite problem, justifying repeatedly why Cassandra is a bad fit for relatively short lived, frequently updated data (tombstones, baby).


I'd agree with you there.

The specific data that went into Cassandra in our case was basically immutable. (And somehow, IIRC, we still had issues around tombstones. I am not a fan of them.) Cassandra's tooling left much to be desired around inspecting the exact state of tombstones within the cluster.


The other thing worth noting is a server can read from 2 datastores - which as a sibling comment says, they ended up doing with a read replica. There's nothing preventing you from reading from Postgres and Redis in the same process!


Loads of over-engineering decisions would be avoided if devs understood how to read EXPLAIN/ANALYZE results and do the proper indexing/query optimization.

Log queries, filter out the ones that are very frequent or take loads of time to execute, cache the frequent ones, optimize the fat ones; do this systematically and your system will be healthier.

Things that help massively, from my experience:

- APM

- slow query log

- DB read/write replicas

- partitioning and sharding


Simply understanding how to read EXPLAIN output can be quite a task in itself, though; databases are a whole other thing, especially if you barely do any SQL yourself.

Tools like https://explainmysql.com that make it clearer what you actually need to optimise are an easier option for devs who have enough database knowledge to set stuff up, but not enough to understand how it's being used.

I assume someone's already working on an AI system that takes schema and logs and returns the SQL needed to magically improve things. Not sure I'd trust that, but I'd bet many companies would rather use that than get a full DBA.


Understanding EXPLAIN output is usually very simple: 1. Look for any occurrence of "table scan". 2. Add an index on those queried fields, or limit the query by filtering on another already-indexed field.

This should unclog most of the low-hanging fruit. Then there are of course more advanced scenarios, especially with joins.

That’s not to say that the UX for explaining (hah) this doesn’t have a lot of room for improvement.
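
A minimal example of that loop in Postgres-flavored SQL (table and column names are made up):

    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;
    -- if the plan shows a Seq Scan over most of the table,
    -- add an index on the filtered column and re-check:

    CREATE INDEX idx_orders_customer_id ON orders (customer_id);

    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;
    -- the plan should now show an Index Scan using idx_orders_customer_id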


Do you know of any good resources for understanding SQL explain plans? In my current project, we are facing a lot of issues related to query performance on MS SQL Server. Do we always need to specify index hints with queries? Sometimes an index exists but the query does not seem to be using it. I am thinking that looking at the SQL execution plan could help us understand this issue better. TIA.


I recommend https://use-the-index-luke.com/ or even better do the training with Markus Winand, it'll change how you view databases.


I was recommended this video once, but I haven't watched it: https://youtu.be/sGkSOvuaPs4

Use the Index Luke (also recommended by cocoflunchy) was one of my go-to resources once.

Also, Tobias Petry does a really good job covering many advanced topics on Twitter and in his books: https://twitter.com/tobias_petry


Can't even count how many "next Gen architecture" sessions I've been at which certainly could've been replaced with due diligence on the current implementation.

You don't fix bad coding with a new architecture. That just puts the problem off by some time.


I’m reminded of one of my favorite sayings:

You go to war with the army you have, not the army you might want or wish to have at a later time.

You may want to ignore that this comes from Donald Rumsfeld (he has some great ones though: “unknown unknowns …”, etc.)

I think about this a lot when working on teams. Not everyone is perfectly agreeable, or has the same understanding or collective goals. Some may be suboptimal or prone to doing things you don’t prefer. But having a team is better than no team, so find the best way to accomplish goals with the one you have.

It applies to systems well too.


Rumsfeld's got some great quotes, most of which were delivered in the context of explaining how the Iraq war turned into such a clusterfuck, and boy could that whole situation have used the kind of leadership Donald Rumsfeld's quotes would lead you to believe the man could've provided.


JFYI, the "Unknown Unknowns" quote is from before the invasion (2002-02-12). It was deflection on whether there was evidence of Iraq building WMDs or of cooperating with e.g Al Qaeda.

> Q: Could I follow up, Mr. Secretary, on what you just said, please? In regard to Iraq weapons of mass destruction and terrorists, is there any evidence to indicate that Iraq has attempted to or is willing to supply terrorists with weapons of mass destruction? Because there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.

> Rumsfeld: Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

> And so people who have the omniscience that they can say with high certainty that something has not happened or is not being tried, have capabilities that are ...

https://archive.ph/20180320091111/http://archive.defense.gov...


I haven't read the transcript of this conversation in a long time, but thank you for sharing it.

The sophistry of his argument is extreme.

Yes, of course there are always "unknown unknowns"-- but the statement "there is no evidence that Iraq is supplying WMDs to terrorists" is not a statement made in a vacuum, in which all permutations of known/unknown are equally likely.


It really is jarring that this is about the actual justification for a war, and his response basically boils down to "well we don't know they haven't done it or if they might in the future".


If I remember correctly (it was a long time ago), he never fully supported the war. It didn't take a genius to notice that the goals set by the presidency were (literally) impossible and not the kind of thing you achieve with a war.

But whatever position he had, Iraq turning into a clusterfuck wasn't a sign of bad leadership on his part. It was a sign of bad ethics, but not leadership. His options were to get out of his position, disobey the people above him, or lead the US into a clusterfuck.


Rumsfeld personally advanced the de-baathification directive - the lynchpin of the clusterfuckery - all on his own, and he certainly would have known to expect the 'unexpected' results to be similar to de-nazification. This was absolutely his choice. Another point you have (unintentionally?) brought up is the dignified resignation option. While it is often a naive, self-serving gesture, we can reasonably imagine that the Defense Secretary publicly resigning over opposition to a war during the public consideration of that war might have had some effect on whether that war was started. I want to like him too, with his grandfatherly demeanor and genuine funniness ("My god, were there so many vases?!") but, come on.


I don't think you remember correctly:

> In the first emergency meeting of the National Security Council on the day of the attacks, Rumsfeld asked, "Why shouldn't we go against Iraq, not just al-Qaeda?" with his deputy Paul Wolfowitz adding that Iraq was a "brittle, oppressive regime that might break easily—it was doable," and, according to John Kampfner, "from that moment on, he and Wolfowitz used every available opportunity to press the case."

https://en.wikipedia.org/wiki/Donald_Rumsfeld#Military_decis...


As history, this is completely incorrect, but beyond that, if you don’t believe in the mission of the President in committing an act of war, you have a responsibility to resign, and it can’t be bracketed as “bad ethics”.

Anyway, another historical point besides what the other commenters have said is that Rumsfeld believed in “transformation” which meant you could do more with less in modern war. He was totally wrong about it.

It wasn’t his fault Turkey didn’t let the US attack from the north, but other than that, the fuck up is his responsibility, among others.


There's a distinction missing here. Rumsfeld's Transformation idea was correct with regard to the invasion, which was one of the most successful invasions in history, period. No one has ever taken over such a large country, so far away, so fast, with so few troops, before or since.

The occupation afterward was where the clusterfuck came in, and (somehow) none of the preparation had been directed toward that.


Funny enough, if you go by what's written in the US constitution, their president can't actually go to war.

But thanks to enough loop holes, they can get into 'special military operations' (to misappropriate a recent term).


At no point in US history was it ever the case that every military operation was considered a war, nor that a war declaration would be necessary and/or appropriate to conduct such operations. What's more, Congress has frequently and explicitly given the President authority to conduct large scale military operations and held oversight hearings, etc. of the execution of those operations - all without formal declarations of war; Vietnam, Iraq, and Afghanistan being prime examples of this.

Calling them "special military operations," poor taste aside, especially with a focus on the formal process used, ignores the fact that all of those conflicts were entered and conducted with the full knowledge and involvement of Congress.


> Rumsfeld's got some great quotes, most of which were delivered in the context of explaining how the Iraq war turned into such a clusterfuck

If by “explaining how” you mean “deflecting (often preemptively) responsibility for”, yes.


> If by “explaining how” you mean “deflecting (often preemptively) responsibility for”, yes.

There's no reason to think you can't do both of those with the same statement.


Abstractly, sure.

I’m characterizing what I recall Rumsfeld concretely doing, not what is abstractly possible for one to do.


> could've

If someone is 83.7% likely to provide good leadership, how would you evaluate the choice to hire that person as a leader in the hindsight that the person failed to provide good leadership -- was it a bad choice, or was it a good choice that was unlucky?

(Likelihood was selected arbitrarily.)


Like everything in politics, I think this is a function of what team you cheer for. If your goal was to come up with an excuse to invade Iraq, that person was an excellent choice. If you’re on the other team, what a clusterfuck.

Then you add in a party system and it gets more complicated. Realistically, you don’t get to be the United States Secretary of Defense (twice) if you’re the kind of person who will ignore the will of the party and whoever is President.


Is that number (publicly) known when you hire the person?

If yes, you just evaluate the choice based on that probability (and other things you knew at the time), not on the actual outcome.

Prediction markets are one way to make these kinds of numbers known.


No, the likelihood is unknown, but the hiring process includes a model to estimate it. Of a sort.


>quotes would lead you to believe


Could've at least given them some motivational quotes.


I like to remind myself that very few people reach positions of great power after mediocre lives. Rather there’s a thread of talent that runs through government.

Once they’re in, the predilections that led to power often rear their dark long tails. But they’re all (even the ones I disagree with) talented.


They're talented at getting into power, and may be talented at any number of other things.

They're not always talented at the things we may want them to be, unfortunately. And that's true of both the ones I agree and disagree with.


>I like to remind myself that very few people reach positions of great power after mediocre lives.

You'd be surprised.

"Reaching positions of great power after mediocre lives" is the very art of career politics.


Politics is fundamentally the art of convincing people of things - usually “vote for me.” That is the only skill that acquisition of high office is evidence of. Many politicians have more skills than just than that, but the mere fact of having acquired high office tells you nothing more about a person than that they’re particularly good at politics.


Every time I hear the name Rumsfeld, I am reminded of the time when, for over 10 minutes, he refused to deny being a lizard:

https://www.youtube.com/watch?v=XH_34tqxAjA


Haha. Thanks for sharing that. Rumsfeld definitely has a sense of humor.

He’s also one of the best candidates for that type of conspiracy theory. His career history is flabbergasting.

Check out https://en.wikipedia.org/wiki/Donald_Rumsfeld#Corporate_conn...

In addition to all the Bohemian Club, RAND corp, defense and government posts, in the 70s the guy was a CEO in the pharmaceuticals and electronics industries, was a director in aerospace, media and tech.

Definitely the type of resume that lets the imagination run wild with, “… wait, was he a lizard person …?”


"No battle plan survives contact with the enemy."

https://www.google.com/search?q=no+battle+plan+survives


https://en.m.wikipedia.org/wiki/Helmuth_von_Moltke_the_Elder

Moltke's thesis was that military strategy had to be understood as a system of options, since it was possible to plan only the beginning of a military operation. As a result, he considered the main task of military leaders to consist in the extensive preparation of all possible outcomes.[3] His thesis can be summed up by two statements, one famous and one less so, translated into English as "No plan of operations extends with certainty beyond the first encounter with the enemy's main strength" (or "no plan survives contact with the enemy") and "Strategy is a system of expedients".[18][8] Right before the Austro-Prussian War, Moltke was promoted to General of the Infantry.[8]


I’m reminded of the Eisenhower line: “plans are worthless, but planning is everything.”


Good one indeed.

Apropos of Eisenhower, there is an incredible (fiction) book by Larry Collins, called Fall from Grace. It is about a brilliant long term plan and actions by British and French secret services to deceive the Germans about where the final Allied invasion would happen on the shores of France near the end of WWII. According to the novel, the ruse helped win the war.

Interwoven with a doomed romance.

https://www.google.com/search?q=fall+from+grace+larry+collin...

I read the full book some years ago, and it was gripping, though slow in parts.


“No battle was ever won according to plan, but no battle was ever won without one”

I heard it in this form.


Also Moltke: “no plan survives contact with the enemy”


Recursion.

   ...

         Recursion


Classic.


Mike Tyson said it more simply: "Everybody has a plan until you get hit in the face."


Nope.

Let's juxtapose them and see:

Von Moltke:

"No battle plan survives contact with the enemy."

Tyson:

"Everybody has a plan until you get hit in the face."

Pretty much the same meaning, and Von Moltke's quote is three words shorter, so no, Tyson's quote is not simpler.

Also, Tyson was ungrammatical, IMO:

"Everybody" vs. "you" in the same sentence, referring to the same entity.

Grammar experts, correct me if I am wrong.


It was midnight and a few beers after celebrating a birthday. I'm sorry I offended your grammatical sensibilities. But you really did go full orange site there, didn't you! I will admit to misquoting Mike Tyson; "Everyone has a plan until they get punched in the mouth.", which I hope goes someway to restoring peace and order over a tiny, drunken grammatical slip-up.


Oh, I wasn't offended at all. I am not one of these Oxford comma type of people (I think that was a trend on Twitter a while ago). I have no idea what that means, except maybe it is about grammatical correctness, and I am not going to google it. :)

I was just being a little pedantic for fun. I don't do that often.

So peace and order was not even disturbed, at least for me.

Enjoy.


BTW, looks like you could deploy my new SaaS at the end of your above comment, for great good:

https://news.ycombinator.com/item?id=37064183


BTW, what is "orange site"?

I googled it and at least the top few links don't seem relevant.


This site's default theme is orange.


Thanks, but I knew that, and it does not seem to answer my question, unless I didn't get what you meant.

My question was in reply to the comment above by sbuk, excerpted below:

>It was midnight and a few beers after celebrating a birthday. I'm sorry I offended your grammatical sensibilities. But you really did go full orange site there, didn't you!


This site is the “orange site”. The comment is saying that the gray text comment behaved like some people on this site behave; a behavior which people associate with this site. I think it can be described as overly nit-picky, but that’s an incomplete description.


Simplicity isn't a function of number of words alone.

> Also, Tyson was ungrammatical, IMO:

> "Everybody" vs. "you" in the same sentence, referring to the same entity.

Seems perfectly understandable to English speakers. (And that's pretty close to how English grammar is defined by descriptive linguists.)

Have a look at http://fine.me.uk/Emonds/ for an exploration of these kinds of concepts.


Oh, it was perfectly understandable to me too, even though I am not a native English speaker (but I have been told by native speakers that my English is quite good).

"Everybody" seems to be in the third person and "you" is in the second person, so I thought it was a mismatch (since in the same sentence, etc.), and so was ungrammatical.

Let anyone tell me if I am wrong, would like to know.

And see my reply to sbuk, it was just in fun.


Real world grammar is a lot more fluid and flexible, than just blindly following a bunch of fixed, mechanical rules.

The mechanical rules are just an imperfect attempt at capturing parts of the richness of real world language, or more precisely: language variants of different dialects and speakers.

Of course, there's a whole world of class markers overlaid here as well. If you want to sound middle-class educated in most of the English speaking world, you have to avoid "ain't" and say things like "It is I" or "Bob and I went shopping.", instead of the more natural "It's me!" or "Bob and me went shopping." That's what Emond calls 'Grammatically Deviant Prestige Constructions'. The whole point is that they aren't part of a naturally learnable variant of English, so they can only be acquired by schooling.

Most people who speak prestige-English over-generalise, and also say things like "She likes Bob and I.".

See fine.me.uk/Emonds/ for details.


>See fine.me.uk/Emonds/ for details.

Nope. Not important to me.


Most places have "they" instead of "you" in the quote.


That works, too.


It's more like that's the right form, not just "works too", for the reason I said about third and second person forms.


And anyway, simplicity, although I favor it a lot in my work, is not a virtue in itself. As for many, if not most issues, the answer is "It all depends.".


[flagged]


If you want to express your disagreement with what fuzztester said, please say so.

There's no need to attempt a mental diagnosis of people over the Internet. (Nor is there any need to equate thing you don't like with certain mental disorders. No need to be rude to autistic people like that.)


Mattis "the enemy gets a vote" is another good reminder of reality, although people get very angry about it. Useful in terms of security, privacy, DRM, etc.


I work in an area with particularly clever and motivated users, and this quote pops to mind now and again when I learn about some of the hacks they’re using to get around some of the more optimistically designed systems they’ve been provided.


Product management outside the box.


I like a similar quote from Steven Pressfield:

“The athlete knows the day will never come when he wakes up pain-free. He has to play hurt.”

This applies to ourselves more than our systems though.


I think it’s as applicable to systems. They are all imperfect, they all have flaws and broken parts that need fixing. And we have to use them.


Great point about working on teams. For the vast majority of tasks, people are only marginally better or worse than each other. A few people with decent communication will outpace a "star" any day of the week.

I try to remind myself of this fact when I'm frustrated with other people. A bit of humility and gratitude go a long way.


Hmmmm, I really don't think this is true all (or even most) of the time. It probably depends on the task at hand, but if leading small teams of all kinds has taught me anything, it's that I'd prefer a tiny team (or even one person) of at least above-average competence, who is reflective about the work they are doing, over several people of average or below-average competence.

It's eye-opening how many people are outright lazy with thought and don't care about the joy of doing something well (apart from whatever extrinsic rewards are attached to the work). Many team members can actually produce negative value.

It seems that people who are really capable of (or care about) conscientious, original thought in problem solving and driving projects forward are few. Count yourself lucky if you get to manage one of these people; they can produce incredible value when well directed.


Btw, excellent communication can also be the skill that makes a 'star'.


> Great point about working on teams. For the vast majority of tasks, people are only marginally better or worse than each other. A few people with decent communication will outpace a "star" any day of the week.

Depends on what you are working on. Btw, good communication can also make someone a 'star' and elevate the whole team.

> I try to remind myself of this fact when I'm frustrated with other people. A bit of humility and gratitude go a long way.

That's good advice for most situations.


I think about this whenever a product lead talks about planning something, dumping it on the dev team, and saying it's the dev team's responsibility to figure out how to implement it. No wonder the part of the company that does this has a very contentious relationship with their product team and is overly oriented around metrics, having to constantly fight for resources and prove they don't have the bandwidth to take on projects.

Meanwhile our side of the org has a much more collaborative relationship with our product team. We have our issues for sure, but our relationships are sound. The feedback loop is tight and product pushes back on things as much as the dev team does. Product works with the dev team to figure out what we can do and stays with us to the end. There's much less tossing things over the fence and everybody seems happier.


I’m not sure who came up with it first but the nautical expression is, “you sail with the crew you have”


Yeah I should have left the Rumsfeld part out because the conversation naturally got distracted. It isn’t accurate to attribute it to him. His was perhaps the most prominent recent version , but he was definitely paraphrasing an existing adage.


I've been thinking about this quote for a while, but have a hard time squeezing the meaning, or really the actionable part, out of it.

The unknown unknowns quote brings the concept that however confident you are in a plan, you absolutely need margin. The other quote though... what do you do differently once you understand that your team is not perfect?

On one side, outside of VC-backed startups I don't see companies trying to reinvent Linux with a team of 4 new graduates. On the other side, companies with really big goals will hire a bunch until they feel comfortable with their talent before "going to war". You'll see recruiting posts seeking specialists in a field before a company bets the farm on that specific field (imagine Facebook renaming itself to Meta before owning Oculus... nobody does that [0])

Edit: sorry, I forgot some guy actually just did that 2 weeks ago with a major social platform. And I kinda wanted to forget about it I think.


This however is a retelling of centuries-old proverbs and quotes (all the way back to Roosevelt's "do what you can, with what you have, where you are"), and "unknown unknowns" was a concept already familiar in epistemology, but also in fields like systems theory, risk management, etc.


That's the right attitude for an employee. If management says something like that, look for a new job. It's not sustainable to compete with fewer resources than your opposition. There's a reason college sports is going through a passionate realignment right now.


The way to win with fewer resources than your competition is to convince them it's not a competition. Or even better, to not let them know you exist.


Sounds like he may have been a Shania Twain fan - "Dance with the One That Brought You" is the similar phrase I've heard



That was Donald Rumsfeld!? I always assumed this came from some techie or agile guru given how much it's used as a concept in project planning.


As a military officer who was watching CNN live from inside an aircraft carrier (moored) when he said that, being in charge of anti-terrorism on the ship at the time, it was absolutely foundational to my approach to so many things after that. Here's the actual footage: https://www.youtube.com/watch?v=REWeBzGuzCc

Rumsfeld was complicated, but there's no doubt he was very effective at leading the Department. I think most people fail to realize how sophisticated the Office of the Secretary of Defense is. Their resources make the mind reel, most of all the human capital: many with PhDs, many very savvy political operators with stunning operational experience. As a small example, as I recall, Google's hallowed SRE system was developed by an engineer who had come up through the ranks of Navy nuclear power. That's but one small component reporting into OSD.

Not a Rumsfeld apologist, by any means. Errol Morris did a good job showing the man for who he is, and it's not pretty (1). But reading HN comments opining about the leadership qualities of a Navy fighter pilot who was both the youngest and oldest SECDEF makes me realize how the Internet lets people indulge in a Dunning-Kruger situation the likes of which humanity has never seen.

https://www.amazon.com/Known-Donald-Rumsfeld/dp/B00JGMJ914


> Google's hallowed SRE system was developed by an engineer who had come up through the ranks of Navy nuclear power

Wait, really? That makes _so much sense._ It also makes me upset that all of my attempts to sway other SRE orgs over to Nuclear Navy practices have been met with doubt.

- ex nuke submariner


Amazing. Just looked it up. This document references the nuclear navy and civil nuclear power multiple times: https://sre.google/sre-book/lessons-learned/


I’ve read much of that book, but apparently not that far. Thanks for linking.


I'll support you there. In any sensible reading of Nuremberg, they all deserve to hang from the neck until dead. But the central moral failure was Bush. Letting Cheney hijack the vp search, and then pairing him up with Rumsfeld was a bad move, and obviously bad at the time. Those two had established themselves as brilliant but paranoid kooks with their Team B fantasies in the 70s, and should never have been allowed free rein.


Indeed, it is very well possible to be both brilliant and an ethically completely unhinged individual.


> [...] the Internet lets people indulge in a Dunning-Kruger situation the likes of which humanity has never seen.

While we are at it, that infamous Dunning-Kruger study didn't even claim what people like to pretend it claimed. In addition, the more nuanced claim the authors did make is not supported by the evidence they collected and presented in their paper. (Their statistics are pretty much useless, and as with any social science study, it has a small 'n' and it doesn't replicate.)

But the mythologized 'Dunning-Kruger effect' is too good to pass up in Internet discussions, so it survives as a meme.


I didn't know the names of Dunning or Kruger. I was a medical student who surveyed my classmates on their study habits and also asked them which quintile of the class they believed they stood in. My response rate was high enough that it was impossible to believe so few people from the bottom quintile had responded, and the upper 2 and 3 quintiles were impossibly overpopulated. That's how I learned about the effect. I didn't learn about Dunning and Kruger for several years after that, but when I did, oh boy, did the lights come on.

So, the current fashion of denouncing Dunning and Kruger doesn't jibe with me. It was too obvious to discount, and I had no idea of the concept when I saw it in my own data. I think the misunderstanding has to do with the idea that it's about dumb people being dumb. It's about all of us. We all get it wrong. Even the smart ones. Paradoxically, the smart ones just get it wrong in the less desirable direction.

I think that academic fashionistas may be too clever by half here. Unless you have original data to back up a claim, the internet points aren't worth it. Focus on getting things right.


It's from a post-WWII psychological theory, the 'Johari Window'. Rumsfeld brought the phrase into wider consciousness.

https://en.m.wikipedia.org/wiki/Johari_window


That it came from Donald Rumsfeld, in the context of what we know now and what he surely knew then, is why it's such a good quote. The words basically say nothing but are also true about everything. So it can implicitly be a warning that there is probably some bullshit going on, or a sign that someone has a sense of humor and is warning people while also avoiding the subject - of course, just my opinion. How people actually use it will depend on what the audience agrees it to mean.


The common use I'm referring to is similar to the OP, which is using it as a framework for assessing risk. In particular, aligning a team on the "known unknowns" is critical to building the confidence and alignment needed as a group to be able to deal with unquantifiable/inestimable risk.


I just took a look at that wiki article on Rumsfeld's usage of "there are unknown unknowns", and I had no idea that he borrowed the phrase to frame his arguments; unfortunately, I was only familiar with that context.


And unknown unknowns is a great way to communicate with stakeholders too


Žižek has a followup to that quote:

"What he forgot to add was the crucial fourth term: the "unknown knowns," the things we don't know that we know."

I've found it's really critical during the project planning phase to get at not just where the boundaries of our knowledge are, but also what we're either tacitly assuming or not even aware that we've assumed. An awful lot of postmortems I've been a part of have come down to "It didn't occur to us that could happen."


I really enjoy the concept of unknown knowns, but I don’t agree with your example, which is an unknown unknown.

To me the corporate version of the unknown known is when a project is certainly doomed, for reasons everyone on the ground knows about, yet nobody wants to say anything and be the messenger that inevitably gets killed, as long as the paycheck keeps clearing. An exec ten thousand feet from the ground sets a “vision” which can’t be blown off course by minor details such as reality, until the day it is.

Theranos is a famous example of this but I’ve had less extreme versions happen to me many times throughout my career.

Another example of unknown knowns might be the conflict between a company's stated values (Focus on the User) and the unstated values that are often much more important (Make Lots of Money)


Yeah, you’re right. I was reaching for an example, but yours are much better and better capture the idea.


In the case of the Iraq war, the unknown knowns were probably key...


I think unknown knowns are easier to spot when teaching newcomers how the system works. Their questions will sometimes be about things we hadn't even considered (at least in some time) to be the case, but that, when you have to spell everything out, are indeed the case. In terms of teaching, unknown knowns are critical to identify and turn into known knowns, so that everyone can end up on a mostly equal playing field.

As an example, there are a lot of unknown knowns that you accumulate over the years in certain lower level languages that need to be spelled out more clearly to someone who is coming at it as a later endeavor. It's entirely possible to spend all your time in a completely managed language nowadays and the concept of the stack, heap, etc., will be largely alien to you. These ideas and their limitations need to be spelled out clearly in order for someone to build the same knowledge base and intuition.
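
A minimal, purely illustrative sketch in C of the stack-vs-heap distinction mentioned above (hypothetical names, just one way a mentor might spell it out):

    #include <stdlib.h>
    #include <string.h>

    void example(void) {
        int on_stack = 42;           /* automatic storage: released when the function returns */
        char *on_heap = malloc(64);  /* heap storage: lives until explicitly freed */
        if (on_heap != NULL) {
            strcpy(on_heap, "needs an explicit free");
            free(on_heap);           /* forgetting this is the classic leak */
        }
        /* on_stack simply goes out of scope here; no cleanup call needed */
    }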

Unknown knowns are essentially endless in nature and extremely hard to find, unless you have someone who simply doesn't know, who will fall into the traps and guide you toward your hidden knowledge.


> An awful lot of postmortems I've been a part of have come down to "It didn't occur to us that could happen."

Would that not be an unknown unknown?


Usually there's a tacit assumption of how the system works, how the users are using the system, or something else about the system or the environment that causes that - it's not that the answer wasn't known, it's that it was assumed to be something it wasn't and nobody realized that was an assumption and not a fact.


That's just an unknown unknown masquerading as a known known.

