Hi, I was reading Boosting Postgres INSERT Performance by 2x With UNNEST and have a few comments.

You measure time from pg_stat_statements' total_plan_time and total_exec_time, but those columns do not account for the parsing and rewrite phases. The parsing phase is the most significant one here, with thousands of literal values to parse.
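
For reference, the relevant columns can be read with a query like the following (a minimal sketch; the sensors table name and the LIKE filter are assumptions), and neither column includes parse or rewrite time:

```sql
-- Minimal sketch: what pg_stat_statements reports for these inserts.
-- The "sensors" table name and the LIKE filter are assumptions.
SELECT calls,
       round(total_plan_time::numeric, 2) AS total_plan_ms,
       round(total_exec_time::numeric, 2) AS total_exec_ms,
       left(query, 60)                    AS query
FROM   pg_stat_statements
WHERE  query LIKE 'INSERT INTO sensors%'
ORDER  BY total_exec_time DESC;
```
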
You can get the elapsed time from pgbench; the difference is minimal:
```
postgres=# \! cd /data/tmp/test/articles/unnest/ ; pgbench -n --file sensors_insert_values_1000.sql --file sensors_insert_unnest_1000.sql -t 1000
pgbench (16.2, server 17.1 (Debian 17.1-1.pgdg120+1))
transaction type: multiple scripts
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
maximum number of tries: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
number of failed transactions: 0 (0.000%)
latency average = 5.182 ms
initial connection time = 5.459 ms
tps = 192.965259 (without initial connection time)
SQL script 1: sensors_insert_values_1000.sql
- weight: 1 (targets 50.0% of total)
- 496 transactions (49.6% of total, tps = 95.710768)
- number of failed transactions: 0 (0.000%)
- latency average = 5.963 ms
- latency stddev = 0.805 ms
SQL script 2: sensors_insert_unnest_1000.sql
- weight: 1 (targets 50.0% of total)
- 504 transactions (50.4% of total, tps = 97.254490)
- number of failed transactions: 0 (0.000%)
- latency average = 4.413 ms
- latency stddev = 1.103 ms
```
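
The two script files are not reproduced here; a minimal sketch of the shape they would take (the sensors column names and types are assumptions, and the row lists are truncated):

```sql
-- Sketch of sensors_insert_values_1000.sql: one INSERT with 1000 literal rows
-- (column names/types are assumptions, row list truncated).
INSERT INTO sensors (id, ts, value) VALUES
  (1, '2024-01-01 00:00:00', 1.0),
  (2, '2024-01-01 00:00:01', 2.0)
  /* ... up to 1000 rows ... */;

-- Sketch of sensors_insert_unnest_1000.sql: one INSERT over unnested arrays,
-- so the server parses three array literals instead of 1000 row expressions.
INSERT INTO sensors (id, ts, value)
SELECT * FROM unnest(
  ARRAY[1, 2 /* ... */]::int[],
  ARRAY['2024-01-01 00:00:00', '2024-01-01 00:00:01' /* ... */]::timestamptz[],
  ARRAY[1.0, 2.0 /* ... */]::float8[]
);
```
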
The article shows queries using parameters, but the code uses literal values. Passing literals is not recommended for large multi-row INSERTs (use COPY for that).
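
For completeness, a minimal sketch of the COPY alternative (the file path and column list are assumptions):

```sql
-- Sketch: bulk-load with COPY instead of a large literal INSERT.
-- The file path and column list are assumptions; use \copy from psql
-- when the file lives on the client side.
COPY sensors (id, ts, value) FROM '/data/tmp/sensors.csv' WITH (FORMAT csv);
```
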
A pgbench -M prepared run with parameters would be more representative of the statements an application sends. Multi-value inserts could perform similarly here, and they are easier to generate from JDBC batching, for example.
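
As an illustration of what such a run could look like, here is a minimal parameterized pgbench script (the file name, columns, and value ranges are assumptions):

```sql
-- Sketch of a parameterized pgbench script (file name, columns, and value
-- ranges are assumptions). With -M prepared, pgbench binds :id and :value
-- as $1/$2 of a prepared statement instead of interpolating literals.
\set id random(1, 1000000)
\set value random(1, 100)
INSERT INTO sensors (id, ts, value) VALUES (:id, now(), :value);
```

Run with something like `pgbench -n -M prepared --file sensors_insert_param.sql -t 1000`. The single-row insert is shown only to illustrate the parameter mechanics; it is not a like-for-like replacement for the 1000-row batches above.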