I had a case where I needed to build some denormalized, redundant models to handle certain queries. I also had a write-heavy, append-only event log portion of my data model. I figured it would scale very differently, and could be partitioned very differently, from the rest of the data, so I wanted to try putting it in a separate DB and using dblink/foreign data wrappers. I use DB triggers and dblink to produce the denormalized data from the event stream, and I consume the denormalized data from my application using LISTEN/NOTIFY (rough sketch below). So even if I do an insert or update with a SQL statement directly against the DB, the denormalized data still gets produced without relying on application logic/hooks.

It's definitely airtight, and it's nice knowing that no version of my data can be inconsistent, but there are definitely costs. For one, it's just slow. Bear in mind I'm on commodity hardware, like a t2 instance, but it still seems slow: some writes take up to 1-2 seconds. It's also not great that it's basically magic; I can't easily alter or improve what it's doing. Lastly, it's hard to version control. I have the triggers checked in with my application code, but it would be easy to not even be aware of their existence. On application startup, I override existing triggers with the ones from the code, just to make sure they're in sync, and I explicitly log that process, but I could still see all of this happening without it being at all clear to another developer.

Would I use these features again? Probably not.
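For anyone curious what the wiring looks like, here's a rough sketch under made-up names (an events table feeding an order_totals rollup in a separate reporting database over dblink); my real triggers are more involved, but the shape is the same:

    -- Hypothetical schema/connection names; adjust to your own setup.
    CREATE EXTENSION IF NOT EXISTS dblink;

    CREATE OR REPLACE FUNCTION denormalize_event() RETURNS trigger AS $$
    BEGIN
      -- Upsert the denormalized row in the separate reporting database.
      PERFORM dblink_exec(
        'dbname=reporting host=localhost',
        format(
          'INSERT INTO order_totals (order_id, total) VALUES (%L, %L)
           ON CONFLICT (order_id)
           DO UPDATE SET total = order_totals.total + EXCLUDED.total',
          NEW.order_id, NEW.amount
        )
      );
      -- Let listening application processes know new denormalized data exists.
      PERFORM pg_notify('denormalized', NEW.order_id::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_denormalize
      AFTER INSERT ON events
      FOR EACH ROW EXECUTE FUNCTION denormalize_event();

The application side just keeps a connection with LISTEN denormalized; open and refreshes whatever it has cached when a notification arrives, so it never has to know how the rollup was produced.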