I do this about once every six months, and every single time I have to go look up the flags. So I recorded it.

The flow is: create the database, populate it with pgbench -i -s 10 (each scale factor is 100,000 rows in pgbench_accounts, so -s 10 gives you a million rows), then run the built-in TPC-B-ish workload with -c 4 -j 2 -T 30 -P 5. Four clients, two threads, 30 seconds, a progress line every 5 seconds.
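For reference, the whole flow is just three commands. A minimal sketch, assuming a local server you can connect to and `bench` as a throwaway database name (the name is my choice, not anything pgbench requires):

```shell
# Create a throwaway database for the benchmark.
createdb bench

# Initialize: -s 10 means scale factor 10, i.e. 1,000,000 rows
# in pgbench_accounts (100,000 rows per scale factor).
pgbench -i -s 10 bench

# Run the built-in TPC-B-ish script:
#   -c 4   four client connections
#   -j 2   two worker threads
#   -T 30  run for 30 seconds
#   -P 5   print a progress line every 5 seconds
pgbench -c 4 -j 2 -T 30 -P 5 bench
```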

Notice that the init phase is slower than you’d expect for a million rows: that’s the client-side data generator writing every row over the wire. On PostgreSQL 12 and later you can pass -I to pick the init steps yourself. The capital G in -I dtGvp generates the data server-side instead, and dropping the v entirely (-I dtGp) also skips the vacuum if you don’t care.
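A sketch of the faster init, again against my hypothetical `bench` database and assuming PostgreSQL 12+ (where the server-side G step exists):

```shell
# Init steps, in order:
#   d = drop any existing pgbench tables
#   t = create the tables
#   G = generate the data server-side (no client round-trips)
#   p = create primary keys
# No v in the list, so the vacuum step is skipped.
pgbench -i -I dtGp -s 10 bench
```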

Notice also the progress lines during the run. The tps is stable around 1880 on this box, but the latency stddev is 14 ms: that’s the long-tail noise you always get on a laptop. The 2.1 ms average latency is really just the throughput restated (4 clients / 1880 tps ≈ 2.1 ms per transaction), so it’s the number I’d report if someone asked, but stddev is the one I actually care about.

At the end, the summary is what matters. tps = 1881 on a laptop with default settings is fine. The commented-out fsync=off line at the bottom is a note to my future self: yes, that trick makes the number go up, no, you absolutely must not run any production workload with fsync off.
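For completeness, fsync is a server setting, not a pgbench flag, so the trick means starting the server with it off. A config-fragment sketch with a hypothetical scratch data directory, not something to copy into a real setup:

```shell
# DANGER: with fsync=off a crash or power loss can corrupt the
# entire cluster. Only ever on a scratch instance you can delete.
pg_ctl -D /tmp/scratch-pgdata -o "-c fsync=off" start
```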

Pgbench is not a good benchmark for most real workloads — the schema is nothing like yours — but it’s an excellent “does this postgres work and is the storage sane” smoke test. That’s the use I put it to here.