<i>should someone literally pull the plug on your Postgres database server, your data is 100% safe, except for unlogged tables, in which your data is 100% gone - by design!</i><p>This omits a crucial fact: your data is not gone until PostgreSQL has gone through its startup recovery phase. If you really need to recover unlogged table data (to which every proper database administrator will rightly say "then why on earth did you make it unlogged?"), you should capture the table files <i>before</i> starting the database server again. And then pay through the nose for a data recovery specialist.<p><i>However, a crash will truncate it.</i><p>So this isn't exactly true. A crash <i>recovery</i> will truncate it.
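<p>For the curious, the mechanism is visible on a healthy server: every unlogged table has an "init fork" on disk, and crash recovery rewrites the main file from it. A sketch for locating the files you'd need to capture (table name is made up):<p>
    -- Hypothetical unlogged table, on a running server.
    CREATE UNLOGGED TABLE scratch (id int, payload text);

    -- Path of the main data file, relative to the data directory.
    -- The init fork sits next to it with an "_init" suffix; crash
    -- recovery resets the main fork from it, which is why the data
    -- survives the crash itself but not the subsequent startup.
    SELECT pg_relation_filepath('scratch');
    -- e.g. base/16384/24576 (init fork: base/16384/24576_init)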
Very well-written and detailed article, with the caveat that they never mention a use case they consider legitimate. Does anyone here have one? I could imagine some transient ETL-type tasks where it makes sense. Thoughts?
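<p>To make the ETL idea concrete, a rough sketch of unlogged staging (all table/column names and the file path are made up):<p>
    -- Staging area: fast loads, no WAL, acceptable to lose on a crash.
    CREATE UNLOGGED TABLE staging_orders (LIKE orders INCLUDING DEFAULTS);

    COPY staging_orders FROM '/tmp/orders.csv' WITH (FORMAT csv, HEADER);

    -- Validate/transform, then land the result in the durable table.
    INSERT INTO orders
    SELECT * FROM staging_orders
    WHERE amount IS NOT NULL;

    TRUNCATE staging_orders;  -- ready for the next batch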
Does COPY FROM sidestep the WAL? My (perhaps incorrect) understanding is that Postgres writes such data straight to a table file and then uses a small transaction to make it live.
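<p>If I'm thinking of the right thing, that behavior isn't the default: COPY is WAL-logged like any other write unless wal_level = minimal and the table was created or truncated in the same transaction, in which case the table file is fsynced at commit instead of going through WAL. A sketch of that case (names and path made up):<p>
    -- Requires wal_level = minimal (so no archiving/replication).
    BEGIN;
    CREATE TABLE bulk_load (id int, payload text);  -- created in this txn
    COPY bulk_load FROM '/tmp/bulk.csv' WITH (FORMAT csv);
    -- The COPY above can skip WAL entirely; the relation is fsynced
    -- at commit to make it durable.
    COMMIT;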
can we have "best of both worlds" for inserting a lot of data quickly, to have an unlogged table for performance and a trigger to copy data to another regular table for permanence?