Communicating Sequential Processes (Hoare),
The Next 700 Programming Languages (Landin),
As We May Think (Bush),
Can Programming Be Liberated from the von Neumann Style? (Backus)<p>And this seems to be a cool course: <a href="https://canvas.harvard.edu/courses/34992/assignments/syllabus" rel="nofollow">https://canvas.harvard.edu/courses/34992/assignments/syllabu...</a>
> This course examines papers every computer scientist should have read, from the 1930s to the present. It is meant to be a synthesizing experience for advanced students in computer science: a way for them to see the field as a whole, not through a survey, but by reliving the experience of its creation. The idea is to create a unified view of the field of computer science, for students who already know something about it, by replaying its entire evolution at an accelerated frame rate.
I actually found this to be an odd mix. Are we selecting papers that had an influence on computer science (as in, the theory of computation), or that had an impact on technology? Or are we just using CS as a catch-all for "all things computer"?<p>The Turing paper is foundational for CS, but without it, would the technology have evolved differently? Probably not. Most software engineers have not read it. Conversely, the IP standard is a technological cornerstone, but there's hardly any science in it. It's just a specification of a fairly simple protocol that you need to know when doing almost anything network-adjacent.
Great list of papers.<p>I've read five of the seven papers on the list. The two I haven't read are Cerf and Kahn's, and Berners-Lee's.<p>Turing's paper on computability was particularly hard to follow, for me, because he used these gothic-font upper-case characters to name all sorts of objects, and all those characters looked kinda the same to me! I had to use auxiliary materials to be able to make my way through the paper. Today, I would recommend reading it with Charles Petzold's easy-to-follow book on the paper: <a href="https://www.amazon.com/Annotated-Turing-Through-Historic-Computability/dp/0470229055" rel="nofollow">https://www.amazon.com/Annotated-Turing-Through-Historic-Com...</a><p>Cook's paper on NP-completeness was also hard to follow (again, for me). As with Turing's paper, I had to use auxiliary materials to make my way. Today, I would recommend reading instead an introductory book on computational complexity that works through Cook's proof.<p>Shannon's paper is a work of art, clearly articulated and beautifully written. It's just not casual reading, to put it mildly.<p>Brin and Page's paper, and Codd's paper, are not hard to follow, at least as I remember them, but understanding Brin and Page's work requires some knowledge of Linear Algebra.<p>Thank you for sharing this on HN.
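On the linear-algebra point: here is a toy sketch of the core of PageRank, a damped power iteration on a hypothetical 3-page link graph. The graph, damping factor, and iteration count are made up for illustration; this is not the exact formulation from Brin and Page's paper.<p>
<pre><code>#include <stdio.h>

/* Toy power iteration on a hypothetical 3-page link graph, illustrating the
   linear-algebra core of PageRank: the rank vector is (approximately) the
   principal eigenvector of a damped, column-stochastic link matrix. */
#define N 3
#define DAMPING 0.85
#define ITERS 50

int main(void)
{
    /* M[i][j] = probability of following a link from page j to page i.
       Columns sum to 1. Hypothetical graph: page 0 links to 1 and 2;
       page 1 links to 2; page 2 links to 0. */
    double M[N][N] = {
        {0.0, 0.0, 1.0},
        {0.5, 0.0, 0.0},
        {0.5, 1.0, 0.0},
    };
    double rank[N] = {1.0 / N, 1.0 / N, 1.0 / N};

    for (int it = 0; it < ITERS; it++) {
        double next[N];
        for (int i = 0; i < N; i++) {
            double sum = 0.0;
            for (int j = 0; j < N; j++)
                sum += M[i][j] * rank[j];
            /* Damping: with probability 1-d, jump to a random page. */
            next[i] = (1.0 - DAMPING) / N + DAMPING * sum;
        }
        for (int i = 0; i < N; i++)
            rank[i] = next[i];
    }

    for (int i = 0; i < N; i++)
        printf("page %d: %.4f\n", i, rank[i]);
    return 0;
}
</code></pre>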
Oh man, if you think Shannon's A Mathematical Theory of Communication is his most foundational contribution to computer science, you haven't seen his master's thesis from a decade prior.<p><a href="https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits" rel="nofollow">https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_a...</a><p>He outlined how you could use switching elements in circuits (read: transistors) to define boolean logic.<p>(That's not to downplay the importance of, well, establishing the foundations of the entire field of information theory.)
> He sketches out a hypothetical “Turing Machine,” proving that, if something is computable at all, a machine (in principle) can handle it.<p>That's not what Turing proved. What he proved in his paper is that there are some problems which aren't solvable by Turing Machines (and therefore, presumably, by any machine). That's the <i>Entscheidungsproblem</i> (decision problem) referenced in the title.<p>What TFA describes is the so-called Church-Turing thesis, which is exactly that: a thesis. It can't really be proven, although we have very strong reason to believe it, given that in almost 100 years nobody has found a system of computation more powerful than Turing Machines.
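For anyone who wants the shape of the undecidability argument, here is a loose C-flavored sketch of the standard diagonal argument, in the modern halting-problem framing rather than Turing's original construction via computable numbers. The function names are made up, and halts() is given a dummy body only so the snippet compiles; the whole point of the argument is that no correct halts() can be written.<p>
<pre><code>/* Assume, for contradiction, a total decider that tells us whether a program
   halts on a given input. The stub body is a placeholder; the argument below
   shows no correct implementation can exist. */
static int halts(const char *program_src, const char *input)
{
    (void)program_src;
    (void)input;
    return 0; /* placeholder only */
}

/* "Diagonal" program: given a program's source, do the opposite of whatever
   halts() predicts for that program run on its own source. */
void diagonal(const char *program_src)
{
    if (halts(program_src, program_src)) {
        for (;;) { /* predicted to halt, so loop forever */ }
    }
    /* predicted to loop forever, so halt immediately */
}

/* Now feed diagonal its own source: if halts() says it halts, it loops; if
   halts() says it loops, it halts. Either way halts() is wrong, so no such
   decider exists. */
</code></pre>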
Nice work!<p>I was actually doing something similar on my own, so I might recommend some papers<p>- RSA: A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (1978)<p>- PageRank: The PageRank Citation Ranking: Bringing Order to the Web (1999)<p>- MapReduce: MapReduce: simplified data processing on large clusters (2008)<p>- Bitcoin: Bitcoin: A Peer-to-Peer Electronic Cash System (2008)<p>- BackProp: Learning representations by back-propagating errors (1986)<p>- Hoare Logic: An Axiomatic Basis for Computer Programming (1969)
Where's the infamous "Evolution of Unix time-sharing systems" by Dennis Ritchie?<p><a href="https://www.bell-labs.com/usr/dmr/www/cacm.pdf" rel="nofollow">https://www.bell-labs.com/usr/dmr/www/cacm.pdf</a>
Since everyone likes chiming in with their own additions to the list, here's mine:<p>While Cook was the first to introduce NP-completeness, Karp's paper presenting 21 problems shown to be NP-complete via polynomial-time reductions from SAT (through 3SAT) was also an enormous cornerstone that helped kick off a more general interest in Cook's theory.<p><a href="https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems" rel="nofollow">https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_proble...</a>
It's not papers but I would give special mention to Why Software Is Eating the World by Marc Andreessen and Amazon's original 1997 letter to shareholders.<p>"Companies in every industry need to assume that a software revolution is coming. This includes even industries that are software-based today."<p><a href="https://a16z.com/why-software-is-eating-the-world/" rel="nofollow">https://a16z.com/why-software-is-eating-the-world/</a><p>"But this is Day 1 for the Internet and, if we execute well, for Amazon.com."<p><a href="https://www.aboutamazon.com/news/company-news/amazons-original-1997-letter-to-shareholders" rel="nofollow">https://www.aboutamazon.com/news/company-news/amazons-origin...</a>
Just wrote a blog post about the explosion of papers titled after Attention Is All You Need [1]. I also found that the name probably didn't originate with one of the authors.<p>[1] <a href="https://open.substack.com/pub/0x7f1/p/is-all-you-need-is-all-you-need?r=h4ewo&utm_medium=ios" rel="nofollow">https://open.substack.com/pub/0x7f1/p/is-all-you-need-is-all...</a>
> "Go To Statement Considered Harmful" (1968) – Edsger Dijkstra<p>This is outdated and does not apply to modern uses of goto.<p>It is often misunderstood, which causes people to avoid goto even in cases where it is perfectly valid, sometimes even better than the alternatives.
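For readers wondering what "valid modern goto" looks like in practice: the usual example is a forward jump to a single cleanup label in C (the pattern used throughout the Linux kernel), which is quite different from the unstructured jumps Dijkstra was writing about. A minimal sketch, with made-up function and parameter names:<p>
<pre><code>#include <stdio.h>
#include <stdlib.h>

/* Forward-jumping goto for centralized error cleanup: every failure path
   jumps to one label, and resources are released in reverse order. */
int load_two_files(const char *path_a, const char *path_b)
{
    int rc = -1;
    FILE *a = NULL, *b = NULL;
    char *buf = NULL;

    a = fopen(path_a, "rb");
    if (!a)
        goto out;

    b = fopen(path_b, "rb");
    if (!b)
        goto out;

    buf = malloc(4096);
    if (!buf)
        goto out;

    /* ... real work would go here ... */
    rc = 0;

out:
    /* Single cleanup path. */
    free(buf);
    if (b) fclose(b);
    if (a) fclose(a);
    return rc;
}
</code></pre>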
Solid list. Two that influenced me personally were:<p>Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE transactions on evolutionary computation, 1(1), 67-82.<p>And the corresponding search paper. Got me started in search and optimization (and Prolog).<p>Licklider, J. C. (1960). Man-computer symbiosis. IRE transactions on human factors in electronics, (1), 4-11.<p>More of a philosophical outlook but the thought of man-computer symbiosis instead of "computer solves it" has stuck with me (and is quite relevant in this day and age).
I would also add J. Ziv and A. Lempel, "A Universal Algorithm for Sequential Data Compression", 1977 [1]. LZ (Lempel-Ziv) is the foundation of many data compression algorithms that are still in use today.<p>[1] <a href="https://courses.cs.duke.edu/spring03/cps296.5/papers/ziv_lempel_1977_universal_algorithm.pdf" rel="nofollow">https://courses.cs.duke.edu/spring03/cps296.5/papers/ziv_lem...</a>
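To give a feel for what LZ77 actually does: at each position it looks back into a sliding window for the longest match with the upcoming bytes and emits a back-reference instead of the raw data (the paper's scheme also carries the next literal symbol). A toy C sketch of that match-finding step; the window size, minimum match length, and output format here are arbitrary choices for illustration, not the parameters from the paper:<p>
<pre><code>#include <stdio.h>
#include <string.h>

/* Core idea of LZ77: at each position, search the previous WINDOW bytes for
   the longest string matching what comes next, and emit (offset, length)
   instead of the raw bytes. Toy match finder, no real encoder or bitstream. */
#define WINDOW 4096
#define MIN_MATCH 3

/* Find the longest match for data[pos..] inside the previous WINDOW bytes.
   Returns the match length (0 if too short); *offset is how far back it starts. */
static size_t longest_match(const unsigned char *data, size_t len,
                            size_t pos, size_t *offset)
{
    size_t best_len = 0, best_off = 0;
    size_t start = pos > WINDOW ? pos - WINDOW : 0;

    for (size_t cand = start; cand < pos; cand++) {
        size_t l = 0;
        while (pos + l < len && data[cand + l] == data[pos + l])
            l++;  /* matches may overlap the current position, as in real LZ77 */
        if (l > best_len) {
            best_len = l;
            best_off = pos - cand;
        }
    }
    *offset = best_off;
    return best_len >= MIN_MATCH ? best_len : 0;
}

int main(void)
{
    const unsigned char *s = (const unsigned char *)"abcabcabcabd";
    size_t len = strlen((const char *)s);

    for (size_t pos = 0; pos < len; ) {
        size_t off, l = longest_match(s, len, pos, &off);
        if (l) {
            printf("(back %zu, copy %zu)\n", off, l);
            pos += l;
        } else {
            printf("literal '%c'\n", s[pos]);
            pos++;
        }
    }
    return 0;
}
</code></pre>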
Love this, agree w/ all.<p>Probably just needs to be a bigger list.<p>Unix paper.<p>Hinton on deep learning (pick one).<p>Map Reduce + GFS from Google.<p>Paxos from dist systems.<p>PGP paper; RSA paper
It would be interesting to see the opposite of this; which papers are really interesting and look useful, but did not end up having a significant impact?
I read Shannon’s paper every year and it gives me new insights every single time. It transcends computer science. And Claude was one of the coolest “nerds” I’ve met.
<a href="http://www-formal.stanford.edu/jmc/recursive.html" rel="nofollow">http://www-formal.stanford.edu/jmc/recursive.html</a><p><a href="https://paulgraham.com/rootsoflisp.html" rel="nofollow">https://paulgraham.com/rootsoflisp.html</a><p>This is the arch-rival of Unix+C, and also one of its best CS friends (GNU Emacs is still around and was widely used on Unixen, alongside the rest of the GNU (Unix clone) tools).
If this is up your alley, Ideas That Created the Future[1] has around 40 more, gives decent introductions to their ideas, and provides a sort of thematic throughline from Aristotle to modern computer science.<p>[1]: <a href="https://mitpress.mit.edu/9780262045308/ideas-that-created-the-future/" rel="nofollow">https://mitpress.mit.edu/9780262045308/ideas-that-created-th...</a>
Ivan Sutherland's 1963 PhD thesis really was seminal for the graphical user interface.<p>"Sketchpad: A Man-machine Graphical Communication System."
I guess if we all add our favourite papers, we’ll end up with a very long list indeed. My own additions:
As we may think by Vannevar Bush, and then something on Project Xanadu. There doesn’t seem to be any foundational paper on the latter, but there is Ted Nelson’s book Literary Machines.
For practical purposes, perhaps this:<p>Andrew Tridgell and Paul Mackerras, <i>The rsync algorithm</i>, June 1996, <a href="https://www.andrew.cmu.edu/course/15-749/READINGS/required/cas/tridgell96.pdf" rel="nofollow">https://www.andrew.cmu.edu/course/15-749/READINGS/required/c...</a>
I'm pretty fond of "Ending the anomaly": <a href="https://www.usenix.org/system/files/conference/atc17/atc17-hoiland-jorgensen.pdf" rel="nofollow">https://www.usenix.org/system/files/conference/atc17/atc17-h...</a>
Ken Thompson's talk (later a paper) "reflections on trusting trust."<p><a href="https://dl.acm.org/doi/pdf/10.1145/358198.358210" rel="nofollow">https://dl.acm.org/doi/pdf/10.1145/358198.358210</a>
Especially the Codd paper (what if Codd had died in World War 2?) makes me wonder: What are the non-obvious (kind of arbitrary even) yet extremely helpful concepts that no one has thought of or that didn't find the resonance that they should have?
Nice list!<p>I would add "On the Criteria to Be Used in Decomposing Systems into Modules" (1972, ~7800 citations) by Parnas, though more software engineering than computer science in a strict sense.
Not a single woman?
No Barbara Liskov - Programming with abstract data types?
No Grace Hopper - The education of a computer?
No Frances Allen - Program optimization?
See also the book or the course by Harry R. Lewis at Harvard:<p>- Ideas That Created the Future <a href="https://www.amazon.com/Ideas-That-Created-Future-Computer/dp/0262045303" rel="nofollow">https://www.amazon.com/Ideas-That-Created-Future-Computer/dp...</a><p>- "Classics of CS" <a href="https://canvas.harvard.edu/courses/34992/assignments/syllabus" rel="nofollow">https://canvas.harvard.edu/courses/34992/assignments/syllabu...</a>