People play these tricks to publish papers: pick a goal (compiling on GPU) and weaken the hypotheses enough to make it work. In this case I suppose it's the LLP grammar that admits a parallelizable parser.<p>In general you can't compile on a GPU because program analysis is holistic at basically every level after lexing. Of course, if you resign yourself to peephole optimization, then sure, maybe you can build a "compiler" that parallelizes. But that's a crappy compiler that goes really fast, and I have no idea what a compelling use case for such a thing would be.
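To make the contrast concrete, here is a minimal sketch (everything invented for illustration, not from any real compiler) of why peephole-style rewriting parallelizes trivially while whole-program analysis does not: each rewrite looks only at a local instruction window, so independent basic blocks can be mapped across workers with no shared state.

```python
from concurrent.futures import ThreadPoolExecutor

def peephole(block):
    """Locally rewrite one basic block: drop no-op arithmetic.
    Toy rules only; real peepholers pattern-match short windows."""
    out = []
    for ins in block:
        op, *args = ins.split()
        if op == "add" and args[-1] == "0":
            continue  # add r, 0 is a no-op
        if op == "mul" and args[-1] == "1":
            continue  # mul r, 1 is a no-op
        out.append(ins)
    return out

blocks = [
    ["mov r0 r1", "add r0 0", "mul r0 1"],
    ["mul r2 1", "sub r2 r3"],
]

# Each block is rewritten independently, so this map is
# embarrassingly parallel; nothing like this works for, say,
# interprocedural alias analysis, where every block's result
# can depend on every other block.
with ThreadPoolExecutor() as pool:
    optimized = list(pool.map(peephole, blocks))

print(optimized)  # [['mov r0 r1'], ['sub r2 r3']]
```

The catch, as noted above, is that a compiler restricted to such purely local rewrites leaves most optimization (inlining, register allocation, escape analysis) on the table.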