Note that this paper (Knuth's “P69”) was reprinted as Chapter 6 of <i>Selected Papers on Analysis of Algorithms</i>, and the material is covered (in compressed form) in Pre-Fascicle 5B of <i>The Art of Computer Programming</i>.[1] It will be part of Fascicle 5, due to be published later this year, and eventually the first third of Volume 4B (the middle third, namely Fascicle 6, has already been published).

Pre-Fascicle 5B is[2] entirely about (what Knuth considers an “introduction to”) backtrack programming, and Pre-Fascicle 5C follows it up with a special low-level technique for making backtracking more efficient, namely Dancing Links. Fascicle 6 is about Satisfiability (aka SAT), the premier instance of a problem solved by “backtracking”. All of these are already available!

If you're interested in algorithms for combinatorial problems, check out whatever Knuth has published since he started working on Volume 4; there's a lot of fresh material, and he brings a unique perspective.

Edit: What I mean by a unique perspective: unlike mainstream theoretical computer science, Knuth is not primarily interested in the asymptotic complexity of problems (despite being the “father” of the analysis of algorithms and the popularizer of Big-O notation in computer science). He is interested in real programs that can be executed on real computers, and in the mathematical analysis thereof. Given a problem, I can imagine a mainstream theoretical computer scientist coming up with a suitable abstraction that is mathematically elegant, reducing the problem to/from other known problems, getting asymptotic upper and lower bounds on its complexity, and not minding if the result is a “galactic algorithm”. I can imagine Knuth writing an assembly program (perhaps in MIX/MMIX, his best approximation to the wide variety of actual machine languages in existence), then doing a no-holds-barred mathematical analysis of its running time (not just Big-O, but including the constants of the leading terms; here Big-O is used only to suppress less useful information), and then comparing this running time against empirical estimates from writing and running a C (or CWEB) program. See some talks on Sedgewick's site for examples of the difference (sorry, I can't find the exact reference right now).

[1]: https://cs.stanford.edu/~knuth/news19.html#targetText=5b (search for “5b” on the page)

[2]: https://en.wikipedia.org/w/index.php?title=The_Art_of_Computer_Programming&oldid=919657342#Volume_4B,_4C,_4D_–_Combinatorial_Algorithms
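To give a concrete flavor of what “backtrack programming” means here, a minimal sketch in C: counting n-queens solutions by extending a partial placement one row at a time and undoing choices that cannot be completed. This is a generic illustration, not Knuth's code from the fascicles; the names and structure are my own.

    #include <stdio.h>

    /* Count n-queens solutions by backtracking: place one queen per row,
       try each column, and give up ("backtrack") on any partial placement
       that cannot be extended. */
    static int n;
    static int col[32];        /* col[r] = column of the queen in row r */
    static long solutions = 0;

    static int safe(int r, int c) {
        for (int i = 0; i < r; i++)
            if (col[i] == c || col[i] - i == c - r || col[i] + i == c + r)
                return 0;      /* same column or same diagonal as row i */
        return 1;
    }

    static void place(int r) {
        if (r == n) { solutions++; return; }   /* all rows filled */
        for (int c = 0; c < n; c++)
            if (safe(r, c)) {
                col[r] = c;    /* tentatively extend the partial solution */
                place(r + 1);
            }                  /* returning here is the backtrack step */
    }

    int main(void) {
        n = 8;
        place(0);
        printf("%d-queens solutions: %ld\n", n, solutions);  /* prints 92 */
        return 0;
    }

The fascicles are about doing this sort of search far more cleverly (ordering choices, pruning, and, in 5C, Dancing Links for fast undo), and about analyzing the resulting running times exactly rather than only asymptotically.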
Knuth mentions this paper in <i>Things a Computer Scientist Rarely Talks About</i>. He said that the success of randomization for quantitative problems led him to use similar techniques for qualitative problems -- like studying the Bible by studying only a "random" sample of verses. He ended up choosing to focus on verse 3:16 of each book (actually, the 16th verse after the start of the 3rd chapter, provided such a verse exists). Questions about whether this was an appropriate sample, and whether it was a good idea at all, are discussed extensively in TACSRTA.
Perfect timing: I just realized this evening that I might need backtracking. https://gist.github.com/cellularmitosis/aa46e1e88c3bcd2db83c01f9af076270