With so many methods, it would be interesting to get some feel for what percentage of random graphs, or of graphs of a particular type, can be optimally colored by each technique.

The subject reminds me of a YACC-like parser generator I wrote back in the early 80s, which used graph coloring to compress the parser tables.

My generator was more general than YACC, accepting LR(1) grammars rather than LALR(1) ones, a feat considered impractical at the time (per Aho & Ullman's "dragon book"). A canonical Knuth LR(1) parser would certainly be huge; what made it feasible was a then-obscure state-combining technique invented by Prof. Pager. Even so, the resulting parse tables (a 2-D array indexed by parser state and current symbol) were still huge, although sparse. One option was to represent the tables as linked lists, but I wanted to compress them instead, to keep the speed of array access.

The solution I used was to convert the sparse 2-D array to a graph, color the graph, then convert it back to a compressed array in which non-conflicting (sparse) rows and columns had been combined.

The idea was to:

1) Convert each row of the 2-D array to a graph node, and connect that node to every node whose corresponding row conflicts with it, i.e. has an entry in some column where this row also has one, so the two rows cannot be overlaid.

2) Color the resulting graph (i.e. assign colors to nodes such that connected nodes get different colors, using as few colors as possible). The simple but effective coloring heuristic I used was just to color the "hardest" (most constrained) nodes first: those with the most connections.

3) Convert the colored graph back to a 2-D array by combining all rows (nodes) of the same color. By construction only rows without conflicting entries get combined, and the array shrinks in proportion to how few colors were needed.

4) Repeat steps 1-3 for the columns.

(A sketch of steps 1-3 is at the end of this comment.)

What I liked about this algorithm was that any future research into better graph coloring could be plugged in to achieve better compression, although this was just a side project and I never did revisit it.

The resulting parsers (I implemented both K&R C and Modula-2) were indeed stupid fast, since a table lookup was just an array access. I don't remember how big the compressed tables were for these languages, but they happily ran on the Acorn 32016 2nd processor I was using for development (I did this while working for Acorn Computers).

Incidentally, "do the hardest/most-constrained bits first" is a useful heuristic for many problems - e.g. I remember also using it to solve the "knight's tour" chessboard problem (have a knight visit each square of a chessboard exactly once). A sketch of that is below as well.
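To make the row compression concrete, here is a minimal Python sketch of steps 1-3 - my reconstruction for illustration, not the original implementation. It assumes the sparse table is a dict of dicts mapping row -> {column: entry}, and it conservatively treats any two rows that occupy the same column as conflicting:

    def compress_rows(table):
        rows = list(table)
        # Step 1: build the conflict graph. Two rows conflict if they both
        # occupy some column (conservative: rows with identical entries in
        # a shared column could in principle still be merged, not checked here).
        conflicts = {r: set() for r in rows}
        for i, a in enumerate(rows):
            for b in rows[i + 1:]:
                if table[a].keys() & table[b].keys():
                    conflicts[a].add(b)
                    conflicts[b].add(a)
        # Step 2: greedy coloring, most-constrained (highest-degree) nodes first.
        color = {}
        for r in sorted(rows, key=lambda x: len(conflicts[x]), reverse=True):
            used = {color[n] for n in conflicts[r] if n in color}
            color[r] = next(c for c in range(len(rows)) if c not in used)
        # Step 3: merge all rows of the same color into one physical row.
        merged = {}
        for r in rows:
            merged.setdefault(color[r], {}).update(table[r])
        return color, merged  # color = row map, merged = compressed table

    # Toy usage: rows 0 and 1 don't conflict, so they share a physical row.
    table = {0: {1: 'S5'}, 1: {2: 'R3'}, 2: {1: 'S7', 3: 'ACC'}}
    row_map, packed = compress_rows(table)
    action = packed[row_map[0]].get(1)  # 'S5'

At parse time you keep the small row-map array alongside the packed table, so a lookup stays a couple of array indexes rather than a list walk.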
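And for the knight's tour, the same most-constrained-first idea is essentially Warnsdorff's rule: always jump to the square that itself has the fewest onward moves. A sketch under my own assumptions (board size, first-minimum tie-breaking); the plain heuristic can occasionally dead-end, in which case you would add backtracking:

    JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def tour(n=8, start=(0, 0)):
        visited = {start}
        path = [start]

        def moves(sq):
            # Unvisited squares a knight can reach from sq.
            x, y = sq
            return [(x + dx, y + dy) for dx, dy in JUMPS
                    if 0 <= x + dx < n and 0 <= y + dy < n
                    and (x + dx, y + dy) not in visited]

        while len(path) < n * n:
            candidates = moves(path[-1])
            if not candidates:
                return None  # heuristic dead-ended
            # Most constrained first: fewest onward moves wins.
            nxt = min(candidates, key=lambda sq: len(moves(sq)))
            visited.add(nxt)
            path.append(nxt)
        return path

    path = tour()
    print(len(path) if path else "dead end")  # 64 if the tour completed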