I don't mean to be anti-intellectual, but as someone who doesn't know much Haskell and likes to hand-roll a precedence climber, my reaction to this is "so what?". What are the practical implications? What does this mean for the code that does the actual parsing, and how does it change the way you specify the rules of the grammar in code?
This might not help much with practical compiler writing, but there are some great math-y games here as well: <a href="https://ubavic.rs/snake/snake.html" rel="nofollow noreferrer">https://ubavic.rs/snake/snake.html</a>
As someone who doesn’t stare at parsers on a regular basis, this seems odd to me:<p><pre><code> newtype Parser a = P (String -> Maybe (String, a))
</code></pre>
So, on a successful parse, it returns the result and the rest of the string. That seems error-prone and rather weak: what ensures that the “rest of the string” is, in fact, a suffix of the input? I assume the purpose of this design is compatibility with lazy input sequences, where the “rest of the string” could still be lazy. But surely some Haskell type magic could ensure that the supposed lazy suffix is actually a suffix.<p>If I were writing a serious parser intended for human use with nice error messages, I would want to know the <i>position and length</i> of the parsed output, which I could infer from actual strings with known lengths, but which I may not be able to infer efficiently from lazy sequences.<p>So I’m confused. Why is this a good design?
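For concreteness, here is roughly how that type composes in practice (a minimal sketch; the names <i>runP</i> and <i>item</i> are mine, not from the article). Note that nothing in the type enforces the suffix property — it holds only because each primitive is written to consume from the front of the input, and the combinators merely thread the string through:<p><pre><code> newtype Parser a = P (String -> Maybe (String, a))

 runP :: Parser a -> String -> Maybe (String, a)
 runP (P p) = p

 -- The one primitive that inspects input: consume a single character.
 -- The returned "rest" is a suffix purely by construction.
 item :: Parser Char
 item = P $ \s -> case s of
   []     -> Nothing
   (c:cs) -> Just (cs, c)

 instance Functor Parser where
   fmap f (P p) = P $ \s -> fmap (fmap f) (p s)

 instance Applicative Parser where
   pure a    = P $ \s -> Just (s, a)
   pf <*> pa = P $ \s -> case runP pf s of
     Nothing      -> Nothing
     Just (s', f) -> runP (fmap f pa) s'

 instance Monad Parser where
   P p >>= f = P $ \s -> case p s of
     Nothing      -> Nothing
     Just (s', a) -> runP (f a) s'
</code></pre>Sequencing two parsers just hands the leftover string from the first to the second, which is why the representation stays so small — at the cost, as noted above, of the suffix invariant being a convention rather than a guarantee.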
No, a parser is not a functor, an applicative or, god forbid, a monad.<p>The ratio of the effort academics spend thinking formally about parsers relative to how often you actually need to write one is probably 50:1. Not as bad as sorting algorithms, where they take up 50% of a CS curriculum, but still pretty bad.<p>A parser reads an array of tokens and spits out a tree. You can write one by hand for any relatively sane grammar in two hours if you need to, infix operators & precedence rules included, with the added benefits of decent performance, the ability to produce good errors and the peace of mind that you're not confined to what can be neatly expressed by parser combinators.
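To make that concrete, here is a sketch of classic precedence climbing over a token list (all names invented for illustration; error handling reduced to a bare <i>error</i> call):<p><pre><code> data Tok  = TNum Int | TOp Char        deriving (Eq, Show)
 data Expr = Num Int | Bin Char Expr Expr deriving (Eq, Show)

 prec :: Char -> Int
 prec '+' = 1; prec '-' = 1
 prec '*' = 2; prec '/' = 2
 prec _   = 0

 -- Fold operators onto lhs as long as they bind at least as tightly as minP.
 climb :: Int -> Expr -> [Tok] -> (Expr, [Tok])
 climb minP lhs (TOp op : rest)
   | prec op >= minP =
       let (rhs0, rest0) = atom rest
           -- recurse with a higher threshold: '*' grabs its operands
           -- before control returns to a surrounding '+'
           (rhs, rest1)  = climb (prec op + 1) rhs0 rest0
       in climb minP (Bin op lhs rhs) rest1
 climb _ lhs ts = (lhs, ts)

 atom :: [Tok] -> (Expr, [Tok])
 atom (TNum n : rest) = (Num n, rest)
 atom ts              = error ("expected a number at: " ++ show ts)

 parseExpr :: [Tok] -> Expr
 parseExpr ts = let (lhs, rest) = atom ts in fst (climb 1 lhs rest)
</code></pre>Adding an operator is a one-line change to <i>prec</i>; parentheses and unary operators slot into <i>atom</i>. This handles precedence and left associativity in about twenty lines, which is the point being made above.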