I think monads highlight something underappreciated about programming, which is that different people regard very different things as "intuitive". It almost seems that these modes of thinking about programming are bigger, more external things than programming itself. Certainly they're barriers to learning.<p>Like Lisp, there seems to be about 10% of the programmer population who think "ah, this is obviously the clearest way to do it" and the remaining 90% who go "huh?", and the 10% are really bad at explaining it in a way the others can grasp.<p>The two monad explainers that really resonated with me were:<p>- How do you deal with state in a language that would prefer to be stateless? Answer: wrap <i>the entire external universe</i> and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (the IO monad; see the sketch below).<p>- If you have a set of objects with the same mutually pluggable connectors on both ends, you can daisy-chain them in any order, like extension cables or toy train tracks.<p>(It's a joke, but people need to recognise why "A monad is just a monoid in the category of endofunctors" is a bad explanation 99% of the time and understand how to produce better explanations.)
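To make that first explainer concrete, here's a minimal sketch of the world-passing idea in Haskell. `World`, `MyIO` and `sendBytes` are made-up names for illustration; GHC's real IO isn't literally a visible world value you thread around by hand, but it has roughly this shape.

    -- A toy model of "pass the whole universe down the chain".
    -- `World` and `MyIO` are illustration types, not GHC's real IO.
    newtype World = World [String]            -- e.g. a log of effects so far

    newtype MyIO a = MyIO (World -> (a, World))

    runMyIO :: MyIO a -> World -> (a, World)
    runMyIO (MyIO f) = f

    instance Functor MyIO where
      fmap f (MyIO g) = MyIO (\w -> let (a, w') = g w in (f a, w'))

    instance Applicative MyIO where
      pure a = MyIO (\w -> (a, w))
      MyIO ff <*> MyIO fa = MyIO (\w ->
        let (f, w')  = ff w
            (a, w'') = fa w'
        in (f a, w''))

    instance Monad MyIO where
      -- daisy-chaining: each step receives the (possibly changed) universe
      -- produced by the previous step
      MyIO g >>= k = MyIO (\w -> let (a, w') = g w in runMyIO (k a) w')

    -- "universe, but with some bytes written to the network"
    sendBytes :: String -> MyIO ()
    sendBytes bs = MyIO (\(World es) -> ((), World (("sent: " ++ bs) : es)))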
I think the programming pattern paradigm is the right way to explain monads (as you can tell from my own monad explanation: <a href="https://kybernetikos.com/2012/07/10/design-pattern-wrapper-with-composable-actions/" rel="nofollow">https://kybernetikos.com/2012/07/10/design-pattern-wrapper-w...</a>). The category theory language around it is off-putting to working programmers, and many of the ways people explain it involve introducing yet more terminology rather than just working with the perfectly adequate terminology that working programmers already have.<p>I think part of it is that lots of languages don't have sufficient abstraction ability to encapsulate the monad pattern in their type system, and those that do tend to be academically focused. That doesn't mean you can't (and don't) use monads in the other languages; it's just that you can't describe the whole pattern in their type systems.<p>I was pretty sad that the discussion around JavaScript promises sidelined the monad pattern.
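As a sketch of what "encapsulate the monad pattern in the type system" buys you (Haskell here only because it can name the pattern; `MyMonad`, `wrap`, `chain` and `chainAll` are made-up stand-ins for the standard Monad class and friends):

    -- The whole "wrapper with composable actions" pattern as one interface.
    class MyMonad m where
      wrap  :: a -> m a                    -- put a plain value in the wrapper
      chain :: m a -> (a -> m b) -> m b    -- compose wrapped actions

    -- Written once, works for every instance of the pattern:
    chainAll :: MyMonad m => [a -> m a] -> a -> m a
    chainAll steps x = foldl chain (wrap x) steps

    -- Maybe as one concrete instance of the pattern
    instance MyMonad Maybe where
      wrap = Just
      chain Nothing  _ = Nothing
      chain (Just a) k = k a

    main :: IO ()
    main = print (chainAll [\x -> Just (x + 1),
                            \x -> if x > 0 then Just x else Nothing] 1)

In a language without this kind of abstraction you still write the Maybe instance's logic everywhere by hand; you just can't write something like `chainAll` once and reuse it across every wrapper type.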
This seems a pretty good introduction to monads.<p>There is a cliche that no-one can write a good introduction to monads. I don't think that is true. My opinion is more that monads were so far from the average programmer's experience that they could not grok them. I think as more people experience particular instances of monads (mostly Futures / Promises), the mystique will wear off and eventually they will be a widely known part of the programmer's toolkit. I've seen this happen already with other language constructs such as first-class functions. ("The future is already here, it's just not evenly distributed.")
I think for newbies there are two separate aspects to explain: first an intro to algebraic structures, perhaps using groups as an example, then monads in particular.<p>It’s important to emphasize that algebraic structures are abstractions or “interfaces” that let you reason with a small set of axioms, like proving stuff about all groups and writing functions polymorphic over all monads.<p>With monads in particular I think the pure/map/join presentation is great: first explain taking “a” to “m a” and “a -> b” to “m a -> m b”, and then “m (m a)” to “m a”. The examples of IO, Maybe, and [a] are great.<p>You can also mention how JavaScript promises don’t work as monads because they have implicit join semantics as a practical compromise.
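A quick runnable version of that presentation (a Haskell sketch; `bindViaJoin` is just an illustrative name showing that bind falls out of map + join):

    import Control.Monad (join)

    -- pure :: a -> m a
    -- fmap :: (a -> b) -> m a -> m b
    -- join :: m (m a) -> m a
    -- bind is recoverable from map + join:
    bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
    bindViaJoin ma k = join (fmap k ma)

    -- A JS promise, by contrast, flattens nested promises for you,
    -- so you can never observe the "m (m a)" stage.
    main :: IO ()
    main = do
      print (join [[1, 2], [3]])                         -- [1,2,3]
      print (join (Just (Just 5)))                       -- Just 5
      print (bindViaJoin [1, 2, 3] (\x -> [x, x * 10]))  -- [1,10,2,20,3,30]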
I get the intention, but it's even harder to understand with this Java/C# syntax. I feel like if you're gonna talk about FP you should probably lead with the Haskell or Scala code (or similar) and provide the OOP stuff for reference in case it's not clear. It ends up being so verbose that I don't think many people see the 'point'.
This seems pretty good; the only thing on my mental checklist of "common monad discussion failures" is only half a point off, because for:<p>"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you."<p>I'd suggest adding some emphasis that the monad interface itself provides no way to reach in and get the innards out, but that does not prevent <i>specific</i> implementations of the monad interface from providing ways of getting the insides. Obviously, Maybe X lets you get the value out if there is one, for instance. This can at least be inferred from the rest of the content in the post, since it uses types that can clearly be extracted from. It is not a <i>requirement</i> of implementing the monad interface on a particular type/class/whatever that there be no way to reach inside and manipulate the contents.<p>But otherwise pretty good.<p>(I think this was commonly screwed up in Haskell discussions because the IO monad looms so large, and does have that characteristic where you can never simply extract the inside, give or take unsafe calls. People who come away from these discussions with the impression that monads literally never let you extract the values are left with the question "What's the use of such a thing then?", to which the correct answer is indeed, yes, that's pretty useless. However, specific implementations always have some way of getting values out, be it via IO in the case of IO, direct querying in the case of List/Maybe/Option/Either, or other fancy things in the fancier implementations like STM. Controlling the extraction is generally how they implement their guarantees, if any, like for IO and STM.)
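To illustrate that half-point (a Haskell sketch using only standard-library functions; the hypothetical `extract` in the comment is the thing that cannot exist generically):

    import Data.Maybe (fromMaybe)

    -- There is no method in the Monad interface with this shape,
    -- and no way to write one that works for every m:
    --   extract :: Monad m => m a -> a
    --
    -- But specific instances can offer their own escape hatches:

    fromMaybeExample :: Int
    fromMaybeExample = fromMaybe 0 (Just 41 >>= \x -> Just (x + 1))  -- 42

    fromListExample :: [Int]
    fromListExample = take 2 ([1, 2, 3] >>= \x -> [x, x * 10])       -- [1,10]

    -- IO is the outlier: short of unsafePerformIO, the only way "out"
    -- is to hand the action to main and let the runtime execute it.
    main :: IO ()
    main = print (fromMaybeExample, fromListExample)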
I think it's important to separate the issue of what monads are and how they're used from the question of when they <i>should</i> be used at all. While monads are very useful for working with various streams/sequences even in imperative languages, they are used in Haskell for what amounts to effects in pure FP, and <i>that</i> use ("Promise" in the article) has a much better alternative in imperative languages. Arguably, it has a better alternative even in pure-FP languages (linear types).<p>Here's a recent talk I gave on the subject: <a href="https://youtu.be/r6P0_FDr53Q" rel="nofollow">https://youtu.be/r6P0_FDr53Q</a>
The problem with monads is they are horrible without some form of syntax sugar. I like the metaphor of "programmable semicolon", but in languages without some built-in support, the "semicolon" becomes repetitive boilerplate which is more code than the actual operations happening in the monad.
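For instance, the same little Maybe computation with and without the sugar (a Haskell sketch; `lookupBoth` is a made-up name). The second, desugared shape is all you get in a language with no built-in support, usually with even heavier lambda syntax:

    -- With sugar: do-notation reads like straight-line code,
    -- and the "semicolon" between the lines is really (>>=).
    lookupBoth :: Maybe Int -> Maybe Int -> Maybe Int
    lookupBoth ma mb = do
      a <- ma
      b <- mb
      pure (a + b)

    -- Without sugar: the same thing, with the plumbing spelled out.
    lookupBoth' :: Maybe Int -> Maybe Int -> Maybe Int
    lookupBoth' ma mb =
      ma >>= \a ->
      mb >>= \b ->
      pure (a + b)

    main :: IO ()
    main = print (lookupBoth (Just 1) (Just 2), lookupBoth' (Just 1) Nothing)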
It was hard to explain 20 years ago, but today, if someone has used enough of something like Reactive Extensions, Promises, LINQ, async/await, Optional, etc., there's a great chance they'll start wondering about the similar pattern behind all of these, and then they can understand the abstraction very easily.
I once googled "functional programming for category theorists" and obviously got "category theory for programmers" instead (incidentally, I use Milewski's book in this reverse way).<p>I still have only a rudimentary understanding of functional programming (apart from the canonical "it's just an implementation of lambda calculus"). And I have to say that without exercise and training one grabs at wisps and mist. In mathematics it's also like this: you often have your favorite prototypical monad, adjunction, group, set, etc. (e.g. the adjunction Set->Set^op given by powerset is a strong contender.) And I view axiomatic systems in essence as a sort of list of computational rules (e.g. transitive closure).<p>I haven't found some idiosyncratic project to code in Haskell yet, though...
I guess I don't get all this "monad" stuff. This article talks about three types of monad: an optional, a list, and a future.<p>However, an optional is really just a list constrained to size 0 or 1, and a future is often called "not truly a monad."<p>So I question the value of explaining this abstraction in great detail over so many articles when people struggle to come up with more than one concrete example of it (lists), an example that engineers have already understood since their first month of coding.<p>Maybe somebody can speak to this more.
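For what it's worth, the "optional is a list of size 0 or 1" observation is literally two standard-library functions; a quick Haskell sketch (`toShortList` and `fromShortList` are just illustrative names):

    import Data.Maybe (listToMaybe, maybeToList)

    -- Maybe <-> lists of length 0 or 1; both directions ship in Data.Maybe.
    toShortList :: Maybe a -> [a]
    toShortList = maybeToList      -- Nothing -> [],       Just x -> [x]

    fromShortList :: [a] -> Maybe a
    fromShortList = listToMaybe    -- []      -> Nothing,  (x:_)  -> Just x

    main :: IO ()
    main = print (toShortList (Just 3), fromShortList ([] :: [Int]))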
Another really good functional (as in related to how they work) explanation of monads:<p><a href="http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html" rel="nofollow">http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...</a><p>It's somewhat more than a monad explanation -- it covers functors and applicatives, and is somewhat Haskell-specific, but it was one of the guides that really clicked for me when I was trying to grok monads.
Wow, nice post. I can also recommend Mark Seemann's take on monads: <a href="https://blog.ploeh.dk/2019/02/04/how-to-get-the-value-out-of-the-monad/" rel="nofollow">https://blog.ploeh.dk/2019/02/04/how-to-get-the-value-out-of...</a>
This reminded me of a joke about monads: "A monad is just a monoid in the category of endofunctors, what's the problem?" [0]<p>[0]: <a href="http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html" rel="nofollow">http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...</a>, 1990 Haskell entry