Unlike many other "expose an RDBMS schema as an API" solutions, this one is interesting due to its very close tie-in with Postgres. It even uses Postgres users for authorization, and it relies on the Postgres stats collector for caching headers.<p>I also very much liked the idea of using `Range` headers for pagination (which should be out-of-band but rarely is).<p>I'm not convinced that this is the future of web development, but it's a nice, refreshing view that contains a few very practical ideas.<p>Even if you don't care about this at all, spend the 12 minutes to watch the introductory presentation.
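For readers unfamiliar with the pattern: the client asks for a row window with an HTTP `Range` header (e.g. `Range: 0-24`), and the server answers with a `Content-Range` such as `0-24/3573`. Here is a minimal sketch of the client-side bookkeeping in Python; the helper names are my own invention, not part of PostgREST:

```python
from typing import Optional

def parse_content_range(header: str) -> dict:
    """Parse a Content-Range value like '0-24/3573' into its parts."""
    span, _, total = header.partition("/")
    first, _, last = span.partition("-")
    return {
        "first": int(first),
        "last": int(last),
        "total": None if total == "*" else int(total),  # '*' = total unknown
    }

def next_range(cr: dict, page_size: int = 25) -> Optional[str]:
    """Range header value for the following page, or None past the end."""
    start = cr["last"] + 1
    if cr["total"] is not None and start >= cr["total"]:
        return None
    return f"{start}-{start + page_size - 1}"

cr = parse_content_range("0-24/3573")
print(cr["total"], next_range(cr))  # 3573 25-49
```

The nice part is that pagination state never pollutes the URL or the response body; it lives entirely in the headers.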
This is good work, and if I ever did web development, it would be like this. Why people in the web world don't use stored procedures and constraints is a mystery to me. That this approach is seen as novel is in itself fascinating.<p>It's like all those web framework inventors didn't read past chapter 2 of their database manuals. So they wrote a whole pile of code that forces you to add semantics elsewhere in your codebase, in another language that makes the impedance mismatch stark. PostgreSQL is advanced technology. Whatever you might consider doing in your CRUD software, PostgreSQL has a neat solution: you can extend SQL, add new types, write stored procedures in a bunch of different languages (PL/pgSQL, PL/Python, and more), and use background workers, triggers, constraints, and permissions. Obviously there are limits, but you don't reinvent web servers because Apache doesn't transcode video on the fly. Well, you do if you're whoever makes Ruby on Rails.<p>The argument that you don't want to write any code that locks you to a database reflects a stunning lack of awareness, as you decide to lock yourself into the tsunami of unpredictability that is web frameworks to ward off the evil of being locked into a 20-year-old database product built on some pretty sound theoretical foundations.<p>Web developers really took the whole "let's make more work for ourselves" idea and ran with it all the way to the bank.<p>You'd have to pay me a million dollars a year to do web development.
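To make the "put the semantics in the database" point concrete, here is a tiny sketch using Python's stdlib sqlite3 module as a stand-in for Postgres (which offers far richer tools: CHECK constraints, triggers, procedural languages, row-level permissions). The schema is invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- no overdrafts, ever
    );
    -- audit trail maintained by the database itself, not by app code
    CREATE TABLE audit (account_id INTEGER, old INTEGER, new INTEGER);
    CREATE TRIGGER audit_balance AFTER UPDATE OF balance ON accounts
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")

conn.execute("INSERT INTO accounts (balance) VALUES (100)")
conn.execute("UPDATE accounts SET balance = 70 WHERE id = 1")
print(conn.execute("SELECT * FROM audit").fetchall())  # [(1, 100, 70)]

try:
    conn.execute("UPDATE accounts SET balance = -5 WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the constraint stops bad data at the source
```

Every client that touches this database gets the same guarantees for free; no framework can be bypassed, because there is no framework to bypass.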
I'm sorry, but why would I go through HTTP to query data? Why can't I just hit the database directly without the overhead of HTTP? Is a cleaner, more standards-compliant interface worth the overhead of passing everything through HTTP?<p>And what happens when you start applying complex business rules that need to scale? So many questions about this approach...
What is the use case for wrapping Postgres with REST? I can't think of many apps that don't require custom logic between receiving an API request and persisting something to the database. Is PostgREST trying to replace ORMs by wrapping Postgres in REST? Or am I missing something? When would one use this tool? My naive perspective needs some enlightening.
Could somebody with more years of experience comment on whether this is a good idea?<p>I find it intriguing, but maybe I am just one generation behind and you would say:<p>"Been there, done that. This strong dependency on the database was really not a good idea in the long run because..."
How about <a href="http://pgre.st/" rel="nofollow">http://pgre.st/</a>?<p>It does the same kind of stuff, plus it's capable of loading Node.js modules and is compatible with MongoLab's REST API and Firebase's real-time API.
What about when changes are made to the schema? Won't the API just change in that case?<p>Won't this lock you in with very tight coupling between your DB schema and your public REST API?
Looks really cool. At first I thought it saved the JSON using the new Postgres JSON support, but saving it as relational data is even more impressive!<p>I'd say if OPTIONS returned a JSON Schema (plus RAML/Swagger) instead of the JSON-ified DDL, it would be even more awesome. With a bit of code generation, this would then be super quick to integrate in the frontend.
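As a sketch of what that could look like: the snippet below turns a column description into a minimal JSON Schema. The input shape and function name are invented for illustration; this is not the actual payload PostgREST's OPTIONS returns:

```python
# Map a few common SQL types onto JSON Schema primitive types.
SQL_TO_JSON = {
    "integer": "integer",
    "bigint": "integer",
    "numeric": "number",
    "text": "string",
    "character varying": "string",
    "boolean": "boolean",
}

def ddl_to_json_schema(table: str, columns: list) -> dict:
    """Build a minimal JSON Schema object from column metadata."""
    props, required = {}, []
    for col in columns:
        props[col["name"]] = {"type": SQL_TO_JSON.get(col["type"], "string")}
        if not col.get("nullable", True):  # NOT NULL columns are required
            required.append(col["name"])
    return {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": table,
        "type": "object",
        "properties": props,
        "required": required,
    }

schema = ddl_to_json_schema("users", [
    {"name": "id", "type": "integer", "nullable": False},
    {"name": "email", "type": "text"},
])
print(schema["required"])  # ['id']
```

A frontend code generator could consume such a schema directly to build forms and client-side validation.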
"It provides a cleaner, more standards-compliant, faster API than you are likely to write from scratch."<p>If you are using this as a web server's persistence backend, I would agree with the first claim, more or less accept the second, and reject the third: HTTP plus JSON serialisation is way slower for that kind of job.<p>If you are just exposing the database with nothing but Postgres behind it, then it is interesting; however, I have concerns about how more complex business logic would work with such a CRUD view.
APIs require more than database access, security, and nice routes. Those are all necessary but a good API also includes flows linking things together so you can progress through higher order processes and workflows. You need to make sure that you're actually providing user value.<p>CRUD over HTTP (or an "access API") should be a first step, not your end goal.
With Data Virtualization providers like Denodo, you can create a REST web service over any relational database very easily.<p><a href="https://community.denodo.com/tutorials/browse/dataservices/2rest" rel="nofollow">https://community.denodo.com/tutorials/browse/dataservices/2...</a>
Between this (yes, I know it's 3rd party) and the support for JSON, PostgreSQL seems to be eating into the market of the NoSQL databases every day. I like that. I like that because the fewer new things I must learn, the more time I can spend on the things I find interesting.
Wow, there is a lot of contention in this thread. So first off, I want to say congratulations to the author of PostgREST. Getting 2k req/s out of a Heroku free tier is just awesome, on top of all the convenience you provide. Great job, great documentation, all around looking fantastic. You deserve to be on the HN homepage.<p>Second, I'm an author of a distributed database (VC-backed, open source), so I'd like to respond to some of the opinions on databases voiced in this thread - particularly in the branched discussions. If you aren't interested in those responses, you can ignore the rest of my comment.<p>- "You'd have to pay me a million dollars a year to do web development." Don't worry, most webdev jobs are about a tenth of that. If inflation goes up even a little bit...<p>- "The problem is scaling your database." I can confirm that this is my experience as well, but there is a very specific reason for it. Most databases are designed to be strongly consistent (in the CAP theorem sense) and thus use a master-slave architecture. This ultimately requires a centralized server to handle all your writes, which becomes extraordinarily prone to failure. To solve this, I looked into master-master (or peer-to-peer / decentralized) algorithms for my <a href="http://gunDB.io/" rel="nofollow">http://gunDB.io/</a> database. Point being, I'm siding with @3pt14159 in this thread.<p>- "Sorry but databases are just a hole to put your shit in when you want it out of memory." I write a database and... uh, I unfortunately kind of have to agree, probably at the cost of making fun of my own product. You see, the reason is that most databases nowadays are doing the same thing - they keep the active data set in memory, have some fancy flush mechanism to a journal on disk, and then do some cleanup/compression/reorganizing of the disk snapshot with some cool fractal tree or whatever. But it does not matter how well you optimize your Big O queries...
if the data isn't in memory, it is going to be slow (to see why, zoom in on this photo: <a href="http://i.imgur.com/X1Hi1.gif" rel="nofollow">http://i.imgur.com/X1Hi1.gif</a>). You just can't get the performance (or scale) without preloading things into RAM, so if your database doesn't do that... well, what @batou said.<p>Overall, I urge you to listen to @3pt14159 and @batou. PostgreSQL is undeniably awesome, but please don't fanboy yourself into ignorance. Machines and systems have their limitations, and you can't get around them by throwing more black boxes at the problem - your app will still break, and so will your fanboyness.
Our Restya stack (open source) takes a similar, tech-agnostic approach. We used it to build Restyaboard <a href="http://restya.com/board/" rel="nofollow">http://restya.com/board/</a> (an open-source Trello alternative/clone).
I see that currently only "flat" URLs are supported. Are there any plans (and is it even possible in PostgreSQL) to add dynamic views, so that `/users/1/projects` is a dynamic view dependent on the $user_id? That'd be rad.
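For what it's worth, as I understand it PostgREST exposes ordinary views the same way it exposes tables, so a nested URL like `/users/1/projects` can be approximated by a flat view filtered with a query parameter (something like `/user_projects?user_id=eq.1`; the view name here is invented). A sketch of the underlying SQL, using SQLite via Python's stdlib as a stand-in for Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE projects (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    -- a flat view that joins users to their projects
    CREATE VIEW user_projects AS
        SELECT u.id AS user_id, u.name, p.title
        FROM users u JOIN projects p ON p.user_id = u.id;
""")
conn.execute("INSERT INTO users VALUES (1, 'ada'), (2, 'bob')")
conn.execute("INSERT INTO projects VALUES (1, 1, 'engine'), (2, 2, 'notes')")

# /users/1/projects  ->  filter the view on user_id
rows = conn.execute(
    "SELECT title FROM user_projects WHERE user_id = ?", (1,)
).fetchall()
print(rows)  # [('engine',)]
```

The URL stays "flat", but the view plus a filter gives the same result a nested route would.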
Is the JSON output JSON API [1] compliant, perchance?<p>[1]: <a href="http://jsonapi.org/" rel="nofollow">http://jsonapi.org/</a>
The comments are unbelievably negative considering the quality and the range of features this offers. This is extremely useful because I won't have to spend time writing a REST API in order to expose the Postgres data. Often a client just wants to access the data via a REST API, and writing an entire stack just to serve a few endpoints doesn't make sense. There's no expectation that this is going to serve a gazillion requests per minute out of the box, and that's totally fine with me, since you shouldn't rely on off-the-shelf solutions anyway if you were building an architecture of that size - but really question whether you are going to have that many requests per second. It reminds me of the customer who claims "I need this done in Node.js to support 10,000 concurrent users" and, when asked how many users he has now, replies "none, but I hope I can reach that number" - solving problems he doesn't have yet and complaining that "PHP is too slow."<p>Some of the best ideas and tools on HN are met with so much negativity that it reminds me of Reddit, where the small percentage of people who get off on putting others down so they can feel good about themselves dominate the comments.<p>Good on you cdjk, this is exactly what I was looking for. Thank you!
Would be cool to put Kong [1] on top of the API to handle JWT or CORS [2] out of the box.<p>[1] <a href="https://github.com/mashape/kong" rel="nofollow">https://github.com/mashape/kong</a><p>[2] <a href="http://getkong.org/plugins/" rel="nofollow">http://getkong.org/plugins/</a>
You should be aware that this is a _bad_ pattern for anything more serious than university homework. Instead of exposing functionality that you can guarantee and that's required by the clients, you expose your database schema, essentially tightly coupling the DB to the clients.<p>I know it's tempting to do it this way, but spend some time thinking about your data and what you actually want to expose.
The example is broken. It returns a JSON doc, so if you navigate away and then go back, some browsers will just show the cached JSON (as text).<p>It should send a header saying that the response is JSON, or add a .json file extension for the main page data.<p>Very interesting project, though.
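The fix being suggested is just an explicit `Content-Type` on the response. A minimal self-contained sketch in Python (stdlib only; the endpoint and payload are made up for the demo):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class JsonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"hello": "world"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")  # the fix
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to port 0 so the OS picks a free port, then fetch our own response.
server = HTTPServer(("127.0.0.1", 0), JsonHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    content_type = resp.headers["Content-Type"]
    payload = json.loads(resp.read())
server.shutdown()

print(content_type)  # application/json
```

With the header in place, browsers treat the payload as JSON rather than caching and re-displaying it as plain text.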