Nice work. That said, I'm structurally dubious of putting too much functionality into a classic centralized RDBMS, since it can't be scaled out if performance becomes a problem. It's CPU load, it's tying up a connection (which is a large memory cost; as implemented in Postgres, connections are "expensive"), and since it happens inside a transaction it's holding locks etc. as well. I know it's all compiled native code, so it's about as fast as it can be, but whether that's the right place to do it is the question, as a general concern.

I'd strongly prefer to have the application layer do generic JSON Schema validation, since you can spawn arbitrary containers to spread the load (roughly like the sketch below). Obviously some things are unavoidable if you want to maintain foreign-key constraints or DB-level check constraints etc., though people sometimes frown on check constraints as well. Semantic validity should be checked before the data ever gets to the DB.

I was exploring a project with JSON-generation views inside the database, coupling the DB directly to SOLR for direct data import, and while it worked fine (and performed fine on toy problems), that was always my concern: even there, where it's not holding write locks etc., how much harder are you hitting the DB for work that ultimately can be done slower but more scalably in an application container?

YAGNI, I know, cross the bridge when it comes, but just as a blanket architectural concern, that's not really where it belongs, imo.

In my case at least, it's probably something that could be pushed off to followers in a leader-follower cluster as a kind of read replica, but I dunno if that's how it's implemented or not. Read replicas are a lot more fleshed out in Citus, Enterprise, and the other commercial offerings built on raw Postgres, iirc.
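
To make the app-layer point concrete, here's a minimal sketch of what I mean, assuming a Python service using the `jsonschema` and `psycopg2` packages; the table, schema, and DSN are illustrative, not anything from the article:

```python
# Sketch: do JSON Schema validation in the app tier, before the DB transaction.
# Assumes jsonschema + psycopg2; table/column names and schema are made up.
import json
import jsonschema
import psycopg2

ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "total"],
    "properties": {
        "id": {"type": "integer"},
        "total": {"type": "number", "minimum": 0},
    },
}

def insert_order(conn, payload: dict) -> None:
    # CPU-heavy validation runs here, in a container you can scale horizontally,
    # not inside a transaction on the primary holding a connection and locks.
    jsonschema.validate(instance=payload, schema=ORDER_SCHEMA)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO orders (doc) VALUES (%s)", (json.dumps(payload),))
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app")  # illustrative DSN
    insert_order(conn, {"id": 1, "total": 42.5})
```

The point isn't the specific library, it's that rejecting semantically invalid documents happens before a connection, transaction, or lock on the primary is ever involved.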