The fastest Postgres inserts

58 points by abelanger 4 days ago

9 comments

hinkley 3 days ago
> On my local machine, we saturate the usefulness of more connections at around 20 connections

What is your local machine? Max parallelism is often a function of CPU count, so knowing how many cores the box has is useful, and might explain needing to do more tests between 20 and 30.

There's an old rule of thumb that tasks that alternate between CPU and IO usage, such as compiler jobs, often benefit from running 2 tasks per core, as the second task can make about 50% more progress per second. Your results are awfully similar to that rule of thumb, so I'm wondering: is this an 8-core box? 10? 12?

> If the buffer has reached its maximum size, it blocks writes until the buffer has been flushed, to properly exert backpressure on the application.

I cannot think of a situation where I could convince my team to allow 100k inserts to be pooled up in memory, waiting for a process crash or a deployment to wipe them out, but having a single buffer is also a mistake. io_uring is a refinement of a very old idea: having a read and a write buffer that swap back and forth between two threads or tasks.

You write to one for a while, then hand it off to someone to process and clear it, and write to the other while that is happening. You only block when both are full. I suspect you're seeing some of the fruits of this in the COPY example, if you're creating a new file for each batch.
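A minimal sketch of the swap-buffer idea in Go (the Row type, flush callback, and sizes are invented; blocking when both buffers are full is omitted for brevity):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // Row stands in for whatever the application is inserting.
    type Row struct{ ID int }

    type doubleBuffer struct {
        mu     sync.Mutex
        active []Row // writers append here
        notify chan struct{}
    }

    // Write appends to the active buffer; it never waits on a flush in
    // progress, because the flusher always works on the swapped-out batch.
    func (d *doubleBuffer) Write(r Row) {
        d.mu.Lock()
        d.active = append(d.active, r)
        d.mu.Unlock()
        select {
        case d.notify <- struct{}{}:
        default:
        }
    }

    // flushLoop swaps the active buffer for an empty one, then flushes the
    // swapped-out batch while writers keep filling the new active buffer.
    func (d *doubleBuffer) flushLoop(flush func([]Row)) {
        for range d.notify {
            d.mu.Lock()
            batch := d.active
            d.active = make([]Row, 0, cap(batch))
            d.mu.Unlock()
            if len(batch) > 0 {
                flush(batch) // e.g. COPY or a multi-row INSERT
            }
        }
    }

    func main() {
        db := &doubleBuffer{notify: make(chan struct{}, 1)}
        go db.flushLoop(func(batch []Row) {
            fmt.Printf("flushing %d rows\n", len(batch))
            time.Sleep(10 * time.Millisecond) // stand-in for the actual write
        })
        for i := 0; i < 1000; i++ {
            db.Write(Row{ID: i})
        }
        time.Sleep(100 * time.Millisecond)
    }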
frollogaston 3 days ago
A long time ago, I was working on a team that sorta misused Postgres as a data processing pipeline, so a lot of temp tables were involved. As the article said, COPY was by far the fastest way to get data in from outside. But the next problem after that was the INSERT (SELECT ...) statements going from one stage to another.

These queries had many joins and sometimes aggregates on huge tables, which took hours. Surprisingly, for most queries, I got a nearly linear speedup with core count (on a 32-core server) by running separate queries in parallel for separate ranges of pkeys. It was best to shard on a table that the query planner picked as the innermost or outermost table when you EXPLAIN the original query, because that wouldn't change the query plan.

Postgres at the time had no parallelization for a single query. It gained some later, but limited to seq scans and a few other operations, so this method might still be advantageous today. It's not too useful for typical app backends where you want to query your data at one consistent timestamp, but I wonder if Postgres could build some feature that runs parallel queries at the same MVCC timestamp.
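A rough sketch of that manual range-sharding with database/sql (table names, columns, and the ID bound are all invented; the real queries involved joins and aggregates):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "sync"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        const workers = 32
        const maxID = 32_000_000 // upper bound of the sharded table's pkey
        step := maxID / workers

        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            lo, hi := i*step, (i+1)*step
            wg.Add(1)
            go func(lo, hi int) {
                defer wg.Done()
                // The same INSERT ... SELECT, restricted to one pkey range of
                // the table the planner already scans outermost/innermost, so
                // the plan shape stays the same as the unsharded query.
                _, err := db.Exec(`
                    INSERT INTO stage2 (event_id, user_name, amount)
                    SELECT e.id, u.name, e.amount
                    FROM events e JOIN users u ON u.id = e.user_id
                    WHERE e.id >= $1 AND e.id < $2`, lo, hi)
                if err != nil {
                    log.Printf("range [%d,%d): %v", lo, hi, err)
                }
            }(lo, hi)
        }
        wg.Wait()
        fmt.Println("all ranges done")
    }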
ndriscoll 3 days ago
There's no need to wait for the flush interval to flush a full batch. Just do it. You'll get backpressure when your workers are all busy. Also, set your flush interval based on your throughput and batch size: e.g. if you can do 100k/s, plan to flush a batch of 100 every ~2 ms. Then when you're not at peak load, you don't really pay any extra latency for batching, and as you approach peak load, you'll end up with full batches.

IIRC I got something like 70k rows/second on Reddit dump data for an extended (couple of hours) run using a similar approach in Scala on my i5-6600K (4 cores), with a JSON API that accepted 1 post at a time and batched internally. I think I used something like 2-4 workers, batch size 128, max flush of 5 ms. No COPY (the library I was using didn't have support for it, though I think it's not too bad to do manually with JDBC? I don't remember).
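A sketch of that policy in Go — flush the moment a batch fills, otherwise on a short timer (the batch size, interval, and flush callback are placeholders):

    package main

    import (
        "fmt"
        "time"
    )

    // batchWriter drains a channel of rows, flushing immediately when a batch
    // fills and otherwise every flushEvery, so batching adds almost no latency
    // off-peak and naturally produces full batches at peak load.
    func batchWriter(in <-chan string, batchSize int, flushEvery time.Duration, flush func([]string)) {
        batch := make([]string, 0, batchSize)
        timer := time.NewTimer(flushEvery)
        defer timer.Stop()
        for {
            select {
            case row, ok := <-in:
                if !ok {
                    if len(batch) > 0 {
                        flush(batch)
                    }
                    return
                }
                batch = append(batch, row)
                if len(batch) == batchSize {
                    flush(batch) // full batch: don't wait for the timer
                    batch = batch[:0]
                }
            case <-timer.C:
                if len(batch) > 0 {
                    flush(batch)
                    batch = batch[:0]
                }
                timer.Reset(flushEvery)
            }
        }
    }

    func main() {
        in := make(chan string, 1024)
        go func() {
            for i := 0; i < 500; i++ {
                in <- fmt.Sprintf("row-%d", i)
            }
            close(in)
        }()
        batchWriter(in, 100, 2*time.Millisecond, func(b []string) {
            fmt.Printf("flushed %d rows\n", len(b)) // stand-in for COPY/INSERT
        })
    }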
pwmtr 4 days ago
Really appreciated the author's persistence in sticking with PostgreSQL. There are many specialized solutions out there, but in the end they usually lack PostgreSQL's versatility and battle-testedness.
atombender 3 days ago
I've been optimizing inserts recently, and I've already reached the limit here (i.e. already batching in parallel, etc.).

The final thing I was considering was implementing a dynamic batch/concurrency scheduler, something like hill climbing or AIMD, where the system tries to increase the batch size and concurrency until a sweet spot is found. It seems crazily unscientific for a system to hard-code the concurrency and batch size when that system may be running under different load at any time, not to mention on different hardware in the future.

Has anyone here found a nice algorithm for this? There seem to be a number of options.

Something else not mentioned in the article is that you can significantly increase insert performance by dropping indexes first and creating them afterwards. That's normally not possible, of course, unless you are creating a table/partition just for the one insert job. I wish Postgres had a batch mode where you could say "here are all my rows, insert them and only build indexes when I commit".
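For what it's worth, an AIMD-style controller is only a handful of lines. A sketch, assuming you time each flush and target some per-row latency (all thresholds below are invented):

    package main

    import (
        "fmt"
        "time"
    )

    // aimdTuner adjusts batch size like TCP congestion control: additive
    // increase while per-row latency stays under target, multiplicative
    // decrease when a flush comes back slow.
    type aimdTuner struct {
        batchSize    int
        min, max     int
        targetPerRow time.Duration
    }

    func (t *aimdTuner) observe(batch int, elapsed time.Duration) {
        perRow := elapsed / time.Duration(batch)
        if perRow <= t.targetPerRow {
            t.batchSize += 50 // additive increase
            if t.batchSize > t.max {
                t.batchSize = t.max
            }
        } else {
            t.batchSize /= 2 // multiplicative decrease
            if t.batchSize < t.min {
                t.batchSize = t.min
            }
        }
    }

    func main() {
        t := &aimdTuner{batchSize: 100, min: 50, max: 10000, targetPerRow: 20 * time.Microsecond}
        // Simulated flush timings; in practice you'd time the real INSERT/COPY.
        for _, elapsed := range []time.Duration{1 * time.Millisecond, 2 * time.Millisecond, 40 * time.Millisecond} {
            t.observe(t.batchSize, elapsed)
            fmt.Println("next batch size:", t.batchSize)
        }
    }

The same observe/adjust loop can drive concurrency as well as batch size, one knob at a time.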
xzzx 3 days ago
Would be interesting to see these comparisons with pipeline mode, since it seems like it would address a lot of the issues covered in this post.
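For readers who want to try something adjacent from Go: pgx's batch API queues statements and sends them without waiting on each round trip, which gets you much of what libpq pipeline mode offers (sketch only; the connection string and table are made up):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/jackc/pgx/v5"
    )

    func main() {
        ctx := context.Background()
        conn, err := pgx.Connect(ctx, "postgres://localhost/mydb")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        // Queue many INSERTs and send them in one go; pgx writes the whole
        // batch before reading any results, so you pay roughly one round
        // trip instead of one per statement.
        batch := &pgx.Batch{}
        for i := 0; i < 1000; i++ {
            batch.Queue("INSERT INTO tasks (args) VALUES ($1)", fmt.Sprintf(`{"n": %d}`, i))
        }
        br := conn.SendBatch(ctx, batch)
        if err := br.Close(); err != nil { // Close drains results and reports the first error
            log.Fatal(err)
        }
    }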
natebrennand 3 days ago
None of these tests seem to run for longer than a minute? Drawing conclusions from experiments like this is fraught.
wewewedxfgdf 3 days ago
Nothing impacts Postgres insert performance more than index updates during insert.

If it is possible to take your table offline for the insert, then you'll get a 10 to 100x speedup by deleting all the indexes first and recreating them afterwards.

You should also ensure you set all the Postgres session variables to give the job enough memory and workers.

WAL writes during inserts have a massive impact on insert performance too, so you can look at settings like synchronous_commit and checkpoint_timeout. Creating the table as UNLOGGED will also disable WAL writes for that table, giving a massive speedup.

Also, recreating your indexes sequentially can be faster than doing them in parallel, and look out for CONCURRENTLY in index creation: its upside is that it allows DB operations during index creation, but its downside is that it is much slower and risks failure.

Probably something along the lines of this, which also shows how to set default workers for a table so you don't need your queries to keep setting that Postgres session variable:

    DO $$
    DECLARE
        table_name TEXT := 'your_table_name';          -- Replace with your table name
        schema_name TEXT := 'public';                  -- Replace with your schema
        data_file TEXT := '/path/to/your/data.csv';    -- Replace with your data file path
        index_info RECORD;
        index_sql TEXT;
    BEGIN
        -- 1. Store existing indexes for later recreation
        CREATE TEMP TABLE index_definitions AS
        SELECT indexname, indexdef
        FROM pg_indexes
        WHERE schemaname = schema_name AND tablename = table_name;

        -- 2. Drop all existing indexes (except primary key)
        FOR index_info IN
            SELECT indexname
            FROM pg_indexes
            WHERE schemaname = schema_name
              AND tablename = table_name
              AND indexdef NOT LIKE '%PRIMARY KEY%'
        LOOP
            EXECUTE 'DROP INDEX ' || schema_name || '.' || index_info.indexname;
            RAISE NOTICE 'Dropped index: %', index_info.indexname;
        END LOOP;

        -- 3. Optimize PostgreSQL for bulk loading (non-sysadmin settings only)
        -- Memory settings
        SET maintenance_work_mem = '1GB';      -- Increase for faster index creation
        SET work_mem = '256MB';                -- Increase for better sort performance

        -- WAL and checkpoint settings
        SET synchronous_commit = OFF;          -- Delay WAL writes as requested
        SET checkpoint_timeout = '30min';      -- Less frequent checkpoints during load

        -- Worker/parallel settings
        SET max_parallel_workers_per_gather = 8;   -- Increase parallel workers
        SET max_parallel_workers = 16;             -- Maximum parallel workers
        SET effective_io_concurrency = 200;        -- Better IO performance for SSDs
        SET random_page_cost = 1.1;                -- Optimize for SSD storage

        -- 4. Set parallel workers on the target table
        EXECUTE 'ALTER TABLE ' || schema_name || '.' || table_name
             || ' SET (parallel_workers = 8)';

        -- 5. Perform the COPY operation
        EXECUTE 'COPY ' || schema_name || '.' || table_name
             || ' FROM ''' || data_file || ''' WITH (FORMAT CSV, HEADER true)';

        -- 6. Rebuild all indexes (using the stored definitions)
        FOR index_info IN SELECT * FROM index_definitions LOOP
            index_sql := index_info.indexdef;
            RAISE NOTICE 'Recreating index: %', index_info.indexname;
            EXECUTE index_sql;
        END LOOP;

        -- 7. Drop temporary table
        DROP TABLE index_definitions;

        RAISE NOTICE 'Data loading completed successfully';
    END $$;
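A client-side companion sketch that combines two of these tricks — COPY (via pgx's CopyFrom) into an UNLOGGED staging table. The connection string, table, and row contents are placeholders:

    package main

    import (
        "context"
        "log"

        "github.com/jackc/pgx/v5"
    )

    func main() {
        ctx := context.Background()
        conn, err := pgx.Connect(ctx, "postgres://localhost/mydb")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        // UNLOGGED skips WAL for this table entirely; the data is lost on a
        // crash, so only use it for a staging table you can rebuild.
        if _, err := conn.Exec(ctx, `CREATE UNLOGGED TABLE IF NOT EXISTS tasks_staging (args jsonb)`); err != nil {
            log.Fatal(err)
        }

        rows := make([][]any, 0, 10000)
        for i := 0; i < 10000; i++ {
            rows = append(rows, []any{`{"n": 1}`})
        }

        // CopyFrom issues a COPY ... FROM STDIN under the hood, the protocol
        // path the article found fastest for bulk loads.
        n, err := conn.CopyFrom(ctx, pgx.Identifier{"tasks_staging"}, []string{"args"}, pgx.CopyFromRows(rows))
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("copied %d rows", n)
    }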
SoftTalker 3 days ago
If this is representative of your database:

    CREATE TABLE tasks (
        id BIGINT GENERATED ALWAYS AS IDENTITY,
        created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
        args JSONB,
        PRIMARY KEY (id)
    );

then Postgres (or any relational DB) is the wrong choice.

Is this how developers view a database in 2025? A place to store a JSON blob?