To answer the title: what we called FaaS back in the day. And back in those days, the suspender-wearing UNIX sysadmins said that it was just a more restricted version of inetd.<p>Wonder what the next generation will call it?<p>And yes, I am being sarcastic. There are differences. But do recall that when AWS Lambda came out, it had the exact same limitations w.r.t. one process per call, needing one fresh connection to a DB per request to handle, etc.
This makes me nostalgic. Spent inordinate amounts of time writing terrible, unmaintainable Perl scripts and FTPing them to cgi-bin on free 5MB hosting plans on F2S and Netfirms. Most hosts wouldn't provide error logs, so there was nothing more than a 500 page from httpd to tell you that something had gone wrong. Debugging anything was an endeavor. Then of course, PHP came along and killed cgi-bin.<p>If I had to pick one thing to represent the CGI era, it would be Matt's FormMail [1].<p>[1] <a href="https://www.scriptarchive.com/formmail.html" rel="nofollow">https://www.scriptarchive.com/formmail.html</a>
CGI is still a reasonable choice for low query rates and simple pages, like an AJAX responder.<p>Especially when there isn't a framework already in use on this particular server.<p>Why add <framework of the day> to your setup when, every 10 minutes or so, it's time to read the temperature sensor and return a string with the reading?<p>Also, since it's served by a transient process, there is no problem of memory leaks, file handle leaks, or other resource capture by long-running processes.<p>Per Murphy's Law, of course, everything is fine until one day the internal app goes public and suddenly there are 500 queries a second.
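The sensor scenario above fits in a handful of lines. A minimal sketch in Python, with a made-up sysfs path (real sensor paths vary by platform) — the CGI part is nothing more than headers, a blank line, and the body written to stdout:

```python
#!/usr/bin/env python3
# Minimal CGI "AJAX responder" sketch. The sensor path below is a
# hypothetical example; everything else is plain CGI: print headers,
# a blank line, then the body, all to stdout.

SENSOR_PATH = "/sys/class/thermal/thermal_zone0/temp"  # hypothetical

def read_temperature(path=SENSOR_PATH):
    try:
        with open(path) as f:
            millideg = int(f.read().strip())  # value in millidegrees C
        return f"{millideg / 1000:.1f} C"
    except OSError:
        return "sensor unavailable"

if __name__ == "__main__":
    print("Content-Type: text/plain")
    print()  # blank line ends the CGI response headers
    print(read_temperature())
```

Dropped into cgi-bin with the executable bit set, any stock Apache with mod_cgi enabled would run this per request, no framework involved.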
One of the bits of CGI (I wrote a lot of perl back in the day) is that parts of it are still there, under the covers.<p>Looking at RFC 3875 <a href="https://tools.ietf.org/html/rfc3875" rel="nofollow">https://tools.ietf.org/html/rfc3875</a> - you see things like PATH_INFO, PATH_TRANSLATED, QUERY_STRING...<p>Pull up Java HttpServletRequest, <a href="https://javaee.github.io/javaee-spec/javadocs/javax/servlet/http/HttpServletRequest.html" rel="nofollow">https://javaee.github.io/javaee-spec/javadocs/javax/servlet/...</a> and there is getPathInfo(), getPathTranslated(), and getQueryString() along with many of the other parameters that would be familiar to someone writing a CGI.<p>You can find them in C# - <a href="https://docs.microsoft.com/en-us/dotnet/api/system.web.httprequest.pathinfo?view=netframework-4.8" rel="nofollow">https://docs.microsoft.com/en-us/dotnet/api/system.web.httpr...</a> - PathInfo, QueryString and the like.<p>You can find them in Haskell - <a href="https://hackage.haskell.org/package/happstack-server-7.5.1.1/docs/Happstack-Server-Routing.html" rel="nofollow">https://hackage.haskell.org/package/happstack-server-7.5.1.1...</a> Route by pathInfo and the QUERY_STRING is clear in <a href="https://hackage.haskell.org/package/happstack-server-7.5.1.1/docs/Happstack-Server-Internal-Types.html#t:Request" rel="nofollow">https://hackage.haskell.org/package/happstack-server-7.5.1.1...</a><p>CGI scripts aren't dead... they just got better plumbing.
OP does a poor job explaining what CGI scripts were.<p>From the RFC linked in the OP: <a href="https://tools.ietf.org/html/rfc3875#page-23" rel="nofollow">https://tools.ietf.org/html/rfc3875#page-23</a><p>CGI defined an interface between a web server and an executable that would provide a response.<p>- Request meta-data i.e. path, query string, and other headers passed as environment variables.<p>- Request body passed via stdin<p>- Response header and body passed via stdout<p>In this way, a webserver like Apache could provide a platform for a wide array of languages. Yes there were security and scaling concerns, but it also was an opportunity to rapidly release and iterate on a product.
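The three-part contract described above can be made concrete with a short sketch (Python, illustrative names only): meta-variables arrive in the environment, the request body on stdin, and the response leaves via stdout.

```python
import os
import sys
from urllib.parse import parse_qs

def handle(environ, body):
    """Build a CGI response from the inputs RFC 3875 describes:
    meta-variables (environment), request body (stdin), and a
    response of headers + blank line + body (stdout)."""
    method = environ.get("REQUEST_METHOD", "GET")
    query = parse_qs(environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]
    text = f"method={method} hello={name} body_bytes={len(body)}"
    return f"Content-Type: text/plain\r\n\r\n{text}"

if __name__ == "__main__":
    # The web server supplies the environment and stdin; the script
    # only ever writes to stdout.
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    sys.stdout.write(handle(os.environ, sys.stdin.read(length)))
```

Because the whole interface is environment variables plus two pipes, the same contract works for any language the server machine can execute.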
I’m really grateful I had to cut my teeth on cgi because it forced me to understand the whole http request/response cycle in detail. There were libraries to help (CGI.pm, anyone?) but they stayed down at a pretty low level (helping parse params from an url or POST, for example). To learn how to implement a web login form, I had to understand how cookies worked, how to set and then read a cookie header, how to send the right headers with the response, how to encode the cookie payload, how to store the password locally.<p>Today, a web framework will do this grunt work for you. Which honestly, if you’re handling passwords, is in many ways a good thing. But people are less likely to learn the basics of how their app talks to a browser.
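For a flavor of that grunt work, here is a hedged sketch of the cookie plumbing a CGI login script had to do by hand (Python; simplified — no expiry, signing, or attributes beyond Path and HttpOnly):

```python
def parse_cookie_header(raw):
    """Parse the HTTP_COOKIE meta-variable ('k=v; k2=v2') by hand,
    the way a CGI script had to before frameworks did it for you."""
    cookies = {}
    for pair in raw.split(";"):
        if "=" in pair:
            k, _, v = pair.strip().partition("=")
            cookies[k] = v
    return cookies

def set_cookie_header(name, value, http_only=True):
    """Build the Set-Cookie response header a CGI script would print
    before the blank line that ends the headers."""
    attrs = [f"{name}={value}", "Path=/"]
    if http_only:
        attrs.append("HttpOnly")
    return "Set-Cookie: " + "; ".join(attrs)
```

The browser's Cookie header reaches the script as the HTTP_COOKIE environment variable; the script's Set-Cookie line goes out with the rest of the response headers on stdout.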
CGI and derived protocols actually got one thing right compared to the reverse-http-proxying of today: they pass request headers in protocol variables, not the other way around. In CGI, no amount of accidental misconfiguration would permit a request to overwrite custom request variables on the way from the server to the script.<p>I mean, it's probably possible to configure a server to escape the original request headers as ‘HTTP-<Header-Name>: value’ and add custom ones on top, but I haven't seen it done, and frameworks depend on the headers being there intact.
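The protection described above comes from the header-to-variable mapping in RFC 3875: every request header is pushed into the HTTP_ namespace, so it can never collide with a variable the server set itself. A small illustration (Python; `build_environ` is a simplified stand-in for what the server does, not real server code):

```python
def header_to_cgi_var(header_name):
    """RFC 3875 maps each request header into the protocol-specific
    HTTP_ namespace: uppercase, with '-' replaced by '_'."""
    return "HTTP_" + header_name.upper().replace("-", "_")

def build_environ(request_headers, server_vars):
    """Sketch of the server side: client-supplied headers land under
    HTTP_*, so by construction they can never clobber server-set
    variables like REMOTE_USER."""
    environ = {header_to_cgi_var(k): v for k, v in request_headers.items()}
    environ.update(server_vars)  # server values always win
    return environ
```

A client sending a forged `Remote-User: evil` header ends up with `HTTP_REMOTE_USER` in the environment — the real `REMOTE_USER` the server set stays untouched.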
Written as if it's a 25 year old tech no one uses now?<p>I still use CGI scripts. Though these aren't "scripts", rather compiled binaries written in C.<p>That made some of the most calculation-heavy pages load in under a second, when they were taking more than 10 seconds in PHP.
I built a billing site for my software business and had to decide which tech to use. SPA? React + Rails? Elixir? Keep in mind we are talking about single digit req/min.<p>I went with CGI. It has some drawbacks but consider the advantages:<p><pre><code> * Requires nothing but Apache running, minimizes attack surface area
* Deploy is a simple `git pull`, no services to restart
* No app server running 24/7 so I don't have to monitor memory usage or anything else.
</code></pre>
I love it. Takes little to no maintenance because it never changes. Runs on a $5/mo droplet.<p><a href="https://www.mikeperham.com/2015/01/05/cgi-rubys-bare-metal/" rel="nofollow">https://www.mikeperham.com/2015/01/05/cgi-rubys-bare-metal/</a>
CGI scripts don't have to run with web server privileges. Nor should they. They should be set-UID to some other user.<p>I still use FCGI with Go programs. FCGI launches a service process when there's a request, but keeps it alive for a while, for later requests. It can fire up multiple copies of the service process if there's sufficient demand. If there are no requests for a while, the service processes are told to exit. Until you get big enough to need multiple machines and load balancers, that's enough.
Python 3.8 is expected to include PEP 594 "Removing dead batteries from the standard library"... one of the modules scheduled to be deprecated is cgi.<p><a href="https://www.python.org/dev/peps/pep-0594/" rel="nofollow">https://www.python.org/dev/peps/pep-0594/</a><p>As out of date as the module is, reading the PEP made me nostalgic for the days of hammering out a quick CGI script, and I've probably got a few of those scripts still chugging away.
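For anyone with such scripts still chugging away: simple urlencoded forms don't actually need the cgi module, since urllib.parse covers them. A sketch under that assumption (GET query strings and POST bodies only — not the multipart uploads that cgi.FieldStorage also handled):

```python
from urllib.parse import parse_qs

def form_fields(environ, body=""):
    """Replace cgi.FieldStorage for simple urlencoded forms:
    POST data comes from the request body, GET data from the
    QUERY_STRING meta-variable."""
    if environ.get("REQUEST_METHOD") == "POST":
        return parse_qs(body)
    return parse_qs(environ.get("QUERY_STRING", ""))
```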
It's worth noting that the RHCE (Red Hat Certified Engineer) exam used to require that you at least be able to write a simple CGI script and make it available via Apache.<p>I haven't checked how this changed in the last RHCE update, but still.<p>I had the opportunity/necessity to write a couple of CGI scripts. While it's not comfortable at all, the nice thing is that the HTTP server makes no assumptions about what language/runtime you're using. Literally, any out-of-the-box Apache will be able to run CGI scripts, no matter what language you used to write them.<p>If for any reason you cannot install other runtimes, you can still use CGI.<p>Whether you should, that's a different matter.
I still do a fair amount of CGI; it's just fine for low traffic simple web services. Startup time for the script isn't awesome, to fix that you want FCGI or SCGI or whatever. Apache's default MPM these days is event (or worker) which scales CGI pretty well. Beware that it's not thread safe, so if your CGI script is doing something multithreaded it will break. (Related: if you're doing something multithreaded, it's time to graduate from CGI).
The author is speaking in the past tense, showing a lack of knowledge. CGI programs are still in use; nothing "passed". All major web servers have CGI support, and it still sees plenty of use in web applications. It is a standard protocol that is not going to go away.
CGI is still "widely" (I don't know how to quantify) used in web interfaces for IoT devices that did not drink the Lua kool-aid. That's <i>a lot</i> of CGI out there. Not dead yet.
One of my most popular sites was CGI until 2014. Now it's just static files, rendered from the same cgi program. And the reason it changed wasn't related to performance.
I cut my web teeth on CGI. I was a TA instructing physics undergrads in C (I think around 2007). I only really knew C and Python myself, and was tasked with building a page to which students could upload programs and results. It was the perfect abstraction given the tools I had and my primitive understanding of HTTP at the time.<p>It took a few years to realise I was hooked, and while I don't use CGI nowadays, I'm really glad I started out with it.
I'd like to point out that CGI (and its descendants) can be used with any language, not just C (e.g. there are various false dichotomies in these comments between PHP vs CGI+C).<p>There are also many fast, compiled languages which, unlike C, are memory-safe, make string handling easier, are higher-level, have stronger types, etc. In particular, ML-family languages like Rust/Ocaml/ReasonML/StandardML/Haskell are really good as meta-languages for safely generating other languages like HTML (that's what the "ML" stands for ;) )<p>It would be a shame if an inexperienced developer got the impression that their pages would load faster if they taught themselves C. Yes, it's possible to write reasonably-safe C; no, an Internet-facing CGI application isn't a good idea for a first C project.
Has anyone else noticed that the page is constantly making requests to log user behavior, e.g., on scroll? Also, some of these requests (always?) fail.<p>It was annoying, as the page jumped on these requests and the icon indicated loading activity. If you have to do it, do it in the background.
> since the server was directly executing the script, security issues could easily creep in (the script shares the permissions of the HTTP server).<p>This is still the case. PHP often runs as www-data; Rails or Django or Node or whatever often run as a normal user (usual guess, Ubuntu user id 1000) with read/write access to all the files in that user's home directory. Running in a container gives some isolation now.<p>Anyway, writing my first CGI script in C back in 1994 was quite hellish (C is not a very convenient language for string processing); then Perl and CGI.pm got the upper hand for a while.
What <i>were</i> CGI scripts? I'm not sure that people have stopped using them. Old things don't always need replacing; sometimes they still work fine (though you have to watch your security).
Acting like CGI is dead was the reason I (feel like I) wasted three years with PHP. It's not, and if the thought of "programming" webpages in C/C++ sounds appealing to you, you should definitely check it out.<p>The obvious advantages: Total (binary) control of data streams and structures and execution as well as (nigh-)zero runtime overhead.<p>The obvious disadvantages: Static and rigid, not well suited for rapidly changing requirements (unless they are coded in), longer development times and increased complexity.
This makes me ask the question: how much knowledge are we going to forget in the next 50 years?<p>---<p>We have forgotten so much. But our mental velocity has been so vast in the last ~100 years... we are going to forget key knowledge soon....
I wonder -- are there benchmarks out there of fastcgi vs reverse proxy via http server for various languages? I'd expect that fastcgi would be faster, but everyone uses reverse proxying nowadays.
What would be the main benefits of today's serverless against classic CGI? I can think of auto scaling, but I'm not sure it wasn't possible (or even used?) in the past.
"Were?"<p>CGI is still the best choice for this. It's easier to guarantee your server's Perl installation has all the latest security patches than it is to guarantee your users' browsers all do. AJAXy nonsense is mostly too error-prone, too insecure, and too slow.
I miss the days when I had shell on a unix box and could just drop a .pl file into my ~public_web directory and have random people on the internet run the script with the permission of the webserver user. Back before there was any concern with doing such a thing.
I wonder if we've reached the point where many of the people who would have been using CGI scripts (if they had not been succeeded by newer technologies to provide dynamic functionality) no longer know what it is.
CGI scripts sound like a very simple idea when described here, but I avoided learning about them at the time because the name was so arcane. Is there a good reason why these weren’t simply called “executable pages”, or “response programs”, etc.?