While I have only skimmed the article, I have not noticed:<p>* an explanation of how to test the multiple heterogeneous out-transitions of a state efficiently,<p>* the reason for the silent assumption that at most one out-transition of a state is followed, i.e. that this is not an NFA [0],<p>* most importantly, a comparison with Binary Decision Diagrams [1].<p>[0] <a href="https://en.wikipedia.org/wiki/Nondeterministic_finite_automaton" rel="nofollow">https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...</a><p>[1] <a href="https://en.wikipedia.org/wiki/Binary_decision_diagram" rel="nofollow">https://en.wikipedia.org/wiki/Binary_decision_diagram</a>
Clojure has something like this in clojure.core.match, with a good bibliography at the end of the page. <a href="https://github.com/clojure/core.match/wiki/Overview" rel="nofollow">https://github.com/clojure/core.match/wiki/Overview</a>
In general I notice a trend in cybersecurity: it is very hard to do right, and it is very business-critical.<p>This has led to a cottage industry of bad tech and bad science, whereby people peddle easy security solutions where none exist.<p>On what architecture would evaluating this abstract data structure be faster than a simple opcode?
This approach seems to be simply implementing short-circuit boolean operators: as soon as any AND condition is false, return false; as soon as any OR condition is true, return true. The first example evaluates every condition in a bottom-up fashion. While it's implemented as an FSM, it doesn't need to be.<p>I think using ROBDDs (reduced ordered binary decision diagrams) or unrolling the AST into opcodes would be a more standard way of implementing such optimisations.
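<p>For what it's worth, here is a minimal sketch (in Python) of the short-circuit evaluation I mean, walking a boolean AST directly rather than an FSM. The AST shape and names are illustrative, not taken from the article:

```python
# Short-circuit evaluation of a boolean expression tree.
# Nodes are tuples: ("var", name), ("not", child), ("and", *children), ("or", *children).

def evaluate(node, env):
    """Evaluate a boolean AST against env (name -> bool), short-circuiting."""
    op = node[0]
    if op == "var":
        return env[node[1]]
    if op == "and":
        # all() with a generator stops at the first false operand
        return all(evaluate(child, env) for child in node[1:])
    if op == "or":
        # any() with a generator stops at the first true operand
        return any(evaluate(child, env) for child in node[1:])
    if op == "not":
        return not evaluate(node[1], env)
    raise ValueError(f"unknown op: {op}")

# (A AND B) OR (NOT C)
expr = ("or", ("and", ("var", "A"), ("var", "B")), ("not", ("var", "C")))
```

<p>Once A evaluates to false, B is never looked at, and the OR only inspects its second operand when the first was false — the same behaviour the article encodes as state transitions, without any automaton.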