Just looked at the "Course Description" of the "Syllabus" at

https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2020/pages/syllabus/

The course takes itself very seriously and seems to ask each student to devote a lot of time to it.

My summary reaction is that it would be a shame to devote that much time to what is basically so little material.

Yes, the "Syllabus" mentions *Introduction to Algorithms* by Cormen, Leiserson, Rivest, and Stein (CLRS). Years ago I downloaded a PDF and didn't see much beyond what I'd gotten from Knuth, etc.

I can give a fast overview here. In the syllabus I see their lists of topics:

dynamic arrays, heaps, balanced binary search trees, hash tables

and

sorting, graph searching, dynamic programming

I'm a little surprised at how old these topics are: I first learned several of them almost entirely from

Donald E. Knuth, *The Art of Computer Programming, Volume 3, Sorting and Searching*, 1973.
(1) Dynamic Array

A glance at how it works explains why I never heard of it: see "Dynamic array" at

https://en.wikipedia.org/wiki/Dynamic_array

So, if you have an array A and need more space, you allocate a larger array B, copy the contents of A into B, and continue with B.

A guess is that in nearly all cases a better solution would be a tree where the array subscripts are used as keys and the array elements as leaves.

Maybe the main reason to include dynamic arrays is to do some applied math to analyze how much bigger array B should be than array A.
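That applied math comes out nicely: with a growth factor of 2, the copies over n appends total at most 1 + 2 + 4 + ... + n < 2n elements, so each append is O(1) amortized. In Python the scheme looks about like this; a minimal sketch, with the class name and growth factor just my choices for illustration:

    # A toy dynamic array: append doubles the capacity when full.
    class DynamicArray:
        def __init__(self):
            self.capacity = 1                  # slots allocated
            self.size = 0                      # slots in use
            self.data = [None] * self.capacity

        def append(self, item):
            if self.size == self.capacity:
                # Allocate the larger array B and copy array A into it.
                self.capacity *= 2             # factor 2; 1.5 is also common
                new_data = [None] * self.capacity
                new_data[:self.size] = self.data[:self.size]
                self.data = new_data
            self.data[self.size] = item
            self.size += 1

        def __getitem__(self, i):
            if not 0 <= i < self.size:
                raise IndexError(i)
            return self.data[i]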
(2) Heaps

I like heaps.

At one point in the software of my startup, I have to search through maybe 1 million numbers and end up with, say, the 20 largest. For that I programmed a heap, and it has worked out great.
There are versions of the heap algorithm that are better on locality of reference and when the heap is carried mostly on slow, *secondary* storage.
(3) Balanced Binary Search Trees

AVL (Adelson-Velskii, Landis) trees are in the Knuth reference, and they are terrific. An alternative is red-black trees. One of those two is likely the key to the .NET *collection classes*, and my startup uses two instances for a simple, light, fast *key-value* store instead of Redis.
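The heart of an AVL tree is insertion with rotations that keep the two subtree heights within 1 of each other, so search and insert stay O(log n). A compact sketch of insert and lookup as a small key-value store; my own illustration, not Knuth's presentation or the .NET internals:

    class Node:
        def __init__(self, key, val):
            self.key, self.val = key, val
            self.left = self.right = None
            self.height = 1

    def height(n):
        return n.height if n else 0

    def refresh(n):
        n.height = 1 + max(height(n.left), height(n.right))

    def rotate_right(y):
        x = y.left
        y.left, x.right = x.right, y
        refresh(y)
        refresh(x)
        return x

    def rotate_left(x):
        y = x.right
        x.right, y.left = y.left, x
        refresh(x)
        refresh(y)
        return y

    def insert(n, key, val):
        if n is None:
            return Node(key, val)
        if key < n.key:
            n.left = insert(n.left, key, val)
        elif key > n.key:
            n.right = insert(n.right, key, val)
        else:
            n.val = val                  # overwrite existing key
            return n
        refresh(n)
        balance = height(n.left) - height(n.right)
        if balance > 1:                  # left side too tall
            if key > n.left.key:         # left-right case: two rotations
                n.left = rotate_left(n.left)
            return rotate_right(n)
        if balance < -1:                 # right side too tall
            if key < n.right.key:        # right-left case: two rotations
                n.right = rotate_right(n.right)
            return rotate_left(n)
        return n

    def get(n, key):
        while n:
            if key == n.key:
                return n.val
            n = n.left if key < n.key else n.right
        return None                      # key absent

    root = None
    for k, v in [("b", 2), ("a", 1), ("c", 3), ("d", 4)]:
        root = insert(root, k, v)
    print(get(root, "c"))                # 3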
(4) Hash Tables

Those are also in Knuth. Hashing usually leaves me in doubt due to its various possible problems. But sometimes better still is *perfect hashing*, which as I recall is also in Knuth.
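To make the usual worries concrete, here is a toy table with separate chaining: colliding keys share a bucket, and performance degrades as chains grow, which is exactly the doubt. Perfect hashing removes the doubt when the key set is fixed in advance, by choosing a hash function with no collisions at all. Illustration only:

    # A toy chained hash table; collisions go into per-bucket lists.
    class HashTable:
        def __init__(self, nbuckets=64):
            self.buckets = [[] for _ in range(nbuckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def put(self, key, val):
            b = self._bucket(key)
            for i, (k, _) in enumerate(b):
                if k == key:
                    b[i] = (key, val)    # overwrite existing key
                    return
            b.append((key, val))         # collision: chain in the bucket

        def get(self, key, default=None):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            return default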
For hashing in general, a good step forward is in

Ronald Fagin, Jürg Nievergelt, Nicholas Pippenger, and H. Raymond Strong, *Extendible hashing - a fast access method for dynamic files*, "ACM Transactions on Database Systems", ISSN 0362-5915, Volume 4, Issue 3, September 1979, pages 315-344.
We used that in an AI (artificial intelligence) product we shipped.

(5) Sorting

Knuth covers heap sort and shows that it meets the Gleason bound for sorting by comparing pairs of keys, i.e., the information-theoretic lower bound of about log2(n!), roughly n log2(n), comparisons in the worst case.
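Heapsort is a few lines in Python on top of heapq: build a heap in O(n), then pop the minimum n times at O(log n) each, for O(n log n) total, matching that bound. A sketch; Knuth's version works in place within the one array:

    import heapq

    def heapsort(items):
        heap = list(items)
        heapq.heapify(heap)          # O(n) heap construction
        return [heapq.heappop(heap)  # n pops at O(log n) each
                for _ in range(len(heap))]

    print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]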
(6) Graph Searching

Looking at their lecture notes, it appears that they mean versions of shortest paths on networks.
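For single-source shortest paths with nonnegative arc costs, the standard method is Dijkstra's algorithm, and a heap again does the work. A sketch, with the graph encoding just my choice for illustration:

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, arc_cost), ...]}, costs nonnegative
        dist = {source: 0}
        pq = [(0, source)]                   # (distance, node) min-heap
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue                     # stale entry, already improved
            for v, w in graph.get(u, ()):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd             # found a shorter path to v
                    heapq.heappush(pq, (nd, v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))                  # {'a': 0, 'b': 1, 'c': 3}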
These are all fairly simple except for minimum-cost single-commodity network flows, where each arc has a maximum flow and also a cost per unit of flow. The problem is linear programming, and the classic simplex algorithm applies and takes on an especially simple form: a basic solution corresponds to a spanning tree of arcs.
Some good news is that if the arc capacities are all integers and you start the algorithm with an integer solution, then the simplex algorithm maintains an integer solution and will terminate with one (the network constraint matrix is totally unimodular, which is why integer basic solutions come for free).

It is tempting to see that "good news" as a case of progress in NP-complete integer linear programming.

For such content, I recommend
Mokhtar S. Bazaraa and John J. Jarvis, *Linear Programming and Network Flows*, ISBN 0-471-06015-1, John Wiley and Sons, New York, 1977.
(7) Dynamic Programming

That can be a big subject, but it does not have to be. I got a good introduction from an expert in about 90 seconds while my cab was waiting to take me to the airport, and I ended up writing my Ph.D. dissertation in dynamic programming. For an easy introduction, you can have fun, like eating from the appetizer plate at Thanksgiving, say a half hour at a bite, from

Stuart E. Dreyfus and Averill M. Law, *The Art and Theory of Dynamic Programming*, ISBN 0-12-221860-4, Academic Press, New York, 1977.
One of the amazing advantages of dynamic programming is how well it handles randomness; then you have stochastic optimal control, Markov decision theory, etc.
The flavor of dynamic programming in computer science can be a bit different, e.g., applied to searching for some string A in some string B.
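One classic example of that flavor is edit distance: the least number of single-character inserts, deletes, and substitutions turning string A into string B, computed by the usual dynamic-programming table. A sketch:

    def edit_distance(a, b):
        # dp[i][j] = edits to turn a[:i] into b[:j]
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = i                     # delete all of a[:i]
        for j in range(len(b) + 1):
            dp[0][j] = j                     # insert all of b[:j]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                same = a[i - 1] == b[j - 1]
                dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                               dp[i][j - 1] + 1,        # insert
                               dp[i - 1][j - 1] + (0 if same else 1))
        return dp[-1][-1]

    print(edit_distance("kitten", "sitting"))  # 3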
Maybe another purpose of the course is to get everyone all wound up and fired up about the question of

P versus NP

My recommendation is to try quite hard to ignore that question.