Löb's theorem says that you can prove a statement S if you can prove that S is implied by its own provability. It's actually a generalization of Gödel's second incompleteness theorem (let S be a false statement like <i>0 = 1</i>). I can see something in the article that looks like Löb's theorem (<i>f (f a -> a) -> f a</i>, reading <i>f</i> as <i>it can be proved that</i>), but I don't know Haskell, so I'm completely lost after that. Can someone explain in non-Haskell terms what's going on?
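To state that precisely in provability-logic notation, with □S read as "it is provable that S":

```latex
% Löb's theorem:
\Box(\Box S \to S) \;\to\; \Box S

% Gödel's second incompleteness theorem as a corollary,
% taking S = (0 = 1): if the system proves its own consistency,
% it proves \neg\Box(0 = 1), hence proves \Box(0 = 1) \to (0 = 1);
% Löb then gives \Box(0 = 1), i.e. the system proves a falsehood.
```

So a consistency proof inside the system would make the system inconsistent, which is exactly Gödel II.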
I wrote an article based on this one: <a href="https://colourcoding.net/2018/03/29/marvellous-moeb/" rel="nofollow">https://colourcoding.net/2018/03/29/marvellous-moeb/</a><p>I think one thing it’s super good for is constructing cyclic graphs from flat representations: you don’t need to do cycle detection. (The example in the code, sadly, does need cycle detection and doesn’t have it. I keep meaning to write a follow-up.)
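To make the cyclic-graph point concrete outside Haskell, here's a rough Python sketch of the idea (all names here are made up for illustration; a memoised getter plus thunks stands in for Haskell's laziness):

```python
# A tiny loeb for dicts: each entry is a function that receives a lazy
# getter into the final, fully built dict. Memoisation ties the knot.
def loeb(fs):
    cache = {}
    def get(key):
        if key not in cache:
            cache[key] = fs[key](get)
        return cache[key]
    return {key: get(key) for key in fs}

class Node:
    def __init__(self, name, neighbors):
        self.name = name
        self._neighbors = neighbors  # a thunk, so cycles don't recurse forever

    @property
    def neighbors(self):
        return self._neighbors()

# Flat representation: neighbors are referred to by name only, no object
# references and no cycle detection needed, even though a <-> b is a cycle.
flat = {
    "a": lambda get: Node("a", lambda: [get("b")]),
    "b": lambda get: Node("b", lambda: [get("a")]),
}

graph = loeb(flat)
assert graph["a"].neighbors[0] is graph["b"]
assert graph["b"].neighbors[0] is graph["a"]
```

The trick is that each node's neighbor list is only forced after every node object exists, so the cycle resolves itself.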
When (from an engineering, not personal-curiosity, perspective) is it worthwhile to use loeb?<p>After reading the article for about 15 minutes, I finally understand loeb, and it's pretty cool! It's an easy way to run a list of lazily evaluated functions so that they all resolve to their eventual values, even when they refer to each other.<p>That said, the function feels both too abstract to understand and not effective as an abstraction I can apply to other problems. For two contrasting examples: reverse_string is not very abstract, as it operates on a concrete operand and produces an effect that's intuitive outside of mathematical theory. At the other end, a hash table is very abstract, but we choose to learn it because it's such a useful solution.<p>As for loeb, I only understand how to use it to create a spreadsheet, yet it's clearly meant for more than that, since it's given such an abstract name. It isn't called evaluate_lazy_functions_on_operands either, so it must be more than that, too. Is there something I'm missing?
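For what it's worth, here's the spreadsheet reading as a Python sketch (my own strict-language stand-in for the lazy Haskell version; the cell getter plus memoisation replaces laziness, and all names are made up):

```python
# loeb for a list: each entry is a function that receives a getter
# for cells of the final list, so cells can refer to one another.
def loeb(fs):
    cache = {}
    def cell(i):
        if i not in cache:
            cache[i] = fs[i](cell)
        return cache[i]
    return [cell(i) for i in range(len(fs))]

# A four-cell "spreadsheet" with formulas referencing other cells.
sheet = [
    lambda c: 1,            # A1 = 1
    lambda c: 2,            # A2 = 2
    lambda c: c(0) + c(1),  # A3 = A1 + A2
    lambda c: c(2) * 10,    # A4 = A3 * 10
]

print(loeb(sheet))  # -> [1, 2, 3, 30]
```

Each formula only ever sees the finished sheet, which is what makes it feel spreadsheet-like rather than like an ordinary fold over the list.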