I see two downsides. Looking at this snippet:<p><pre><code> my_function (): Unit can AllErrors =
x = LibraryA.foo ()
y = LibraryB.bar ()
</code></pre>
The first thing to note is that there is no indication that foo or bar can fail. You have to look up their type signatures (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.<p>The second thing to note is that, once you ascertain that foo and bar can fail, how do you find the code that will run when they <i>do</i> fail? You would have to traverse the call stack upward until you find a 'with' expression, then descend into the handler. And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.<p>I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.
> You can think of algebraic effects essentially as exceptions that you can resume.<p>How is this substantively different from using an ApplicativeError or MonadError[0] type class?<p>> You can “throw” an effect by calling the function, and the function you’re in must declare it can use that effect similar to checked exceptions ...<p>This would be the declared error type in one of the above type classes along with its `raiseError` method.<p>> And you can “catch” effects with a handle expression (think of these as try/catch expressions)<p>That is <i>literally</i> what these type classes provide, with a "handle expression" using `handleError` or `handleErrorWith` (depending on need).<p>> Algebraic effects (a.k.a. effect handlers) are a very useful up-and-coming feature that I personally think will see a huge surge in popularity in the programming languages of tomorrow.<p>Not only will "algebraic effects" have popularity "in the programming languages of tomorrow", they <i>actually</i> enjoy popularity in programming languages today.<p><a href="https://typelevel.org/cats/typeclasses/applicativemonaderror.html" rel="nofollow">https://typelevel.org/cats/typeclasses/applicativemonaderror...</a>
Algebraic effects seem very interesting. I have heard about this idea before, but assumed that it somehow belonged to the territory of static type systems. I am not a fan of static type systems, so I didn't look further into the idea.<p>But I found these two articles [1] about an earlier <i>dynamic</i> version of Eff (the new version is statically typed), which explain the idea nicely without introducing types or categories (well, they use "free algebra" and "unique homomorphism", just think "terms" and "evaluation" instead). I find it particularly intriguing that what Andrej Bauer describes there as "parameterised operation with generalised arity", I would just call an abstraction of shape [0, 1] (see [2]). So this might be helpful for using concepts from algebraic effects to turn abstraction algebra into a programming language.<p>[1] <a href="https://math.andrej.com/2010/09/27/programming-with-effects-i-theory/" rel="nofollow">https://math.andrej.com/2010/09/27/programming-with-effects-...</a><p>[2] <a href="http://abstractionlogic.com" rel="nofollow">http://abstractionlogic.com</a>
AEs (algebraic effects) are very interesting! Great article, thank you.<p>Reading through, I have some concerns about usability in larger projects, mainly because of "jumping around".<p>> Algebraic effects can also make designing cleaner APIs easier.<p>This is debatable. It adds a layer of indirection (which I concede is present in many real non-AE codebases).<p>My main concern is: when I put a breakpoint in code, how do I figure out where the object I work with was created?
With explicit passing, I can go up and down the stack trace, and can find it.
But with AE composition, it can be hard to find the instantiation source -- you have to jump around, leading to the yo-yo problem [1].<p>I don't have personal experience with AE, but I do with Python generators, which the article says are equivalent (resp. AE can be used to implement generators). Working through large, complex generator expressions was very tedious and error-prone in my experience.<p>> And we can use this to help clean up code that uses one or more context objects.<p>The functions involved still need to write `can Use Strings` in their signature. From a practical point of view, I fail to see the difference between explicitly passing strings and adding the `can Use Strings` signature -- when you want to add extra context passing to existing functions, you still need to go to all of them and add the appropriate plumbing.
Given this, it seems almost inevitable that in a codebase with lots of AE composing in various ways, you can end up with a severe yo-yo problem, getting really lost in what the code is doing.
This is probably not so severe on a single-person project, but in larger teams where you don't have the codebase in your head, this can be a huge efficiency problem.<p>Btw. if someone understands how AE deal with memory allocations for resuming, I'd be very interested in a good link for reading, thank you!<p>[1]: <a href="https://en.wikipedia.org/wiki/Yo-yo_problem" rel="nofollow">https://en.wikipedia.org/wiki/Yo-yo_problem</a>
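On the resuming question, a minimal sketch in Python may help build intuition (not how any real AE runtime is implemented, and the `Fail` effect and `handle` driver are made-up names): a generator models the suspended computation, and its stack frame lives on the heap inside the generator object, so "resuming" is just `gen.send(value)`.

```python
# Sketch: a resumable "error effect" modeled with a Python generator.
# The suspended frame is heap-allocated inside the generator object,
# so the handler can resume it at any point with gen.send(value).

class Fail:
    """Hypothetical effect: ask the handler for a replacement value."""
    def __init__(self, message):
        self.message = message

def parse_numbers(tokens):
    results = []
    for token in tokens:
        try:
            results.append(int(token))
        except ValueError:
            # "Perform" the effect: suspend here and wait for the handler.
            replacement = yield Fail(f"bad token: {token!r}")
            results.append(replacement)
    return results

def handle(gen):
    """Drive the computation, resuming every Fail effect with 0."""
    try:
        effect = next(gen)
        while True:
            # Decide how to resume -- here, substitute 0 and continue.
            effect = gen.send(0)
    except StopIteration as done:
        return done.value

print(handle(parse_numbers(["1", "oops", "3"])))  # [1, 0, 3]
```

In this encoding the "memory allocation for resuming" is simply the generator object holding the paused frame; real implementations differ (segmented stacks, CPS, etc.).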
This Ante "pseudocode" is wonderful! It's like Haskell with Elixir's expressiveness, flavor and practicality. A Haskell for developers. Waiting for the compiler to mature. I would love to develop apps in Ante.
I did protohackers in ocaml 5 alpha a couple of years ago with effects. It was fun, but the toolchain was a lil clunky back then. This looks and feels very similar. Looking forward to seeing it progress.
> You can think of algebraic effects essentially as exceptions that you can resume.<p>So conditions in Common Lisp? I do love the endless cycle of renaming old ideas
This doesn't give a focused explanation of why. I don't see how dependency injection is a benefit when languages without algebraic effects also have dependency injection. It doesn't explain whether this dependency injection is faster to execute or compile, or what.
I often see the claim that AE generalizes control flow, so you can (for example) implement coroutines. But the most obvious way I would implement AE in a language runtime is with coroutines, where effects are just syntactic sugar around yield/resume.<p>What am I missing?
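For what it's worth, the "syntactic sugar around yield/resume" intuition can be sketched directly (a toy encoding, not a faithful AE implementation -- the `Ask` effect and `run_with_handler` loop are invented names; real effect systems add typed effect rows and handler stacks on top of this):

```python
# Sketch: "perform" an effect by yielding it; a handler is just the
# driver loop that interprets effects and resumes with an answer.

from dataclasses import dataclass

@dataclass
class Ask:
    """Effect requesting a named value from the ambient handler."""
    key: str

def greet():
    # Perform Ask("name") and resume with whatever the handler sends back.
    name = yield Ask("name")
    return f"Hello, {name}!"

def run_with_handler(gen, env):
    """Interpret Ask effects by looking keys up in env."""
    try:
        effect = next(gen)
        while True:
            effect = gen.send(env[effect.key])
    except StopIteration as done:
        return done.value

print(run_with_handler(greet(), {"name": "world"}))  # Hello, world!
```

What this toy version misses is multi-shot resumption (resuming the same continuation more than once) and handlers that are tracked in the type system -- those are where AE goes beyond plain one-shot coroutines.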
When I see a new (for me) idea coming from (presumably) category theory, I wonder if it really will land in any mainstream language. In my experience, cohesion at the philosophical level of the language is the reason why it is nice to work with it in a team of programmers who are adept in both programming and in the business context. A set of programming patterns to solve a problem usually can be replaced with a possibly disjoint set of patterns where both solutions have all the same ilities in the code and solve the business problem.<p>My question is - can a mainstream language adopt algebraic effects (handlers?) without creating deep confusion, or should a new language be built from the ground up on top of these abstractions in some form?
What once was old is new again.<p><a href="https://lisp-docs.github.io/cl-language-reference/chap-9/j-b-condition-system-concepts" rel="nofollow">https://lisp-docs.github.io/cl-language-reference/chap-9/j-b...</a><p><a href="https://jacek.zlydach.pl/blog/2019-07-24-algebraic-effects-you-can-touch-this.html" rel="nofollow">https://jacek.zlydach.pl/blog/2019-07-24-algebraic-effects-y...</a>
I might be a bit dense but I didn't quite get it and the examples didn't help me.
For instance, take the first example, SayMessage.
Is it supposed to be an effect? Why? From the function signature it could well be a noop and we wouldn't know the difference.
Or is it arbitrarily decided?
Is this all about notations for side-effectful operations?
After spending a lot of time in Prolog, I want a nice way to implement and compose nondeterministic functions and also have a compile-time type check. I’m eyeing all of these languages as a result. I’ll watch Ante as well. (Don’t forget developer tools like an LSP, tree-sitter or other editor plugins.)
Great blog post! I only have a theoretical understanding of algebraic effects so far and have yet to use them in practice, so please excuse this likely dumb question: In functional/declarative programming languages variable declarations can be evaluated in any order, as long as they don't depend on each other. Now if both declarations involve calls to functions with side effects, what determines the order in which these effects will be executed?
I like the idea of algebraic effects but I'm a little skeptical of the amount of extra syntax.<p>Let's say I'm building a web server: my endpoint handler now needs to declare that it can call the database, call S3, throw x, y and z... And same story for most of the functions it calls itself. You solved the "coloration problem" at the cost of adding a thousand colors.<p>Checked exceptions are the ideal error handling (imho) but no one uses them properly because it's a hassle declaring every error type a function may return. And adding an exception to a function means you need to add/handle it in many of its callers, and their callers in turn, etc.
Have you thought of using generators as the closest example to compare effects to? I think they are much closer to effects than exceptions are. Great explainer anyway; it was the first time I had read about this idea and it immediately made sense.
Maybe I'm too archaic but I do not share the author's hope that algebraic effects will ever become prevalently used. They certainly can be useful now and then, but the similitude with dynamic scoping brings too many painful memories.
This is neat, but you don’t need a new language to leverage these concepts. <a href="https://effect.website/" rel="nofollow">https://effect.website/</a> The effect library brings all of this goodness to typescript (and then some) and is robust and production ready. I hate writing typescript without it these days.
The state effects example seems unlike the others - the examples avoid syntax for indentation, omit polymorphic effect mention and use minimal syntax for functions - but for state effects you need to repeat "can Use Strings" each function? Presumably one may want to group those under type Strings or can Use Strings, at which point you have a namespace of sorts...
It feels powerful. I think the effects in return types could be inferred.<p>But I share the concerns of others about the downsides of dependency injection. And this is DI on steroids.<p>For testing, I much prefer to “override” (mock) the single concrete implementation in the test environment, rather than to lose the static caller -> callee relationship in non-test code.
With AE you get for free: generators, stackful coroutines, dependency injection, "dynamic" variables (as in anti-lexical ones), resumable exceptions, advanced error handling and much more, all packaged neatly into ONE concept. I dream of TS and Effekt some day merging :)
What’s the advantage here of using effects over monads? It seems to me that all the proposed benefits of effects are reproducible/reproduced already by monads. Is it simply to get stateful actions while still being pure in a <i>dynamic</i> type system rather than static?
As I understand it, this was the inspiration for React's hooks model. The compiler won't give you the same assurances, but in practice hooks do at least allow you to inject effects into components.
First time in a long while where I’ve read the intro to a piece about new programming languages and not recognized any of the examples given at all even vaguely. How times change!
As a coding abstraction I really like this (not sure I'm completely understanding it, but it sounds handy). I wonder if that's because I spent a couple of years doing kernel programming at Sun? It's nice to be able to write sleep(foo) and know that when your code starts running again after that, it's because foo woke you up. That saves a ton of time wiring up control flow and trying to cover all the edge cases. Caveat the memory locality question, it would be fun to initialize all your functions waiting to catch a unit to work on and then write your algorithm explicitly in unit mutations.