You might have invented monads... over and over.

May 17, 2016

In this article, I hope to show that eventually you will invent several monads yourself. Because monads are so powerful and helpful, they tend to demand support at the language level in the form of syntactic sugar or keywords. Without recognising the similarities at a higher level, you end up with a different construct for each individual monad, an approach that clearly does not scale.

As an example, I will look at Task<T>: the C# equivalent of Futures and a way to handle asynchronous computations. Tasks are monads, but they do not support the standard monadic operations. Since monads are designed to chain computations sequentially, this lack makes it hard to combine several asynchronous computations without additional keywords being added to the language.

Learn to recognize monads. Or you will reinvent them.

One of the better known articles on monads in functional programming languages is You could have invented monads. The author observes that monads are often considered an esoteric topic when they are not. His argument is that programmers, unknowingly, discover monads themselves:

But all of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren’t esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven’t already. It’s then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you’ll recognise everything you see as something you’ve already invented.

The author mentions functional programming languages explicitly. This is a shame, because his observation also applies to imperative languages. In fact, I regularly use monads while working on a large C# codebase. These monads weren’t invented by my predecessors. They are a core part of modern-day C# programming. The problem is that they do not always (directly) implement the canonical monadic operations.

For a proper monad, we need three things. In C# terms these are:

  1. A generic class, e.g. Monad<T>
  2. An operation to generate a Monad<T> from a value of type T
  3. An operation to chain computations, i.e. a function with a signature like:
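For a hypothetical Monad<T>, such a chaining operation would have a signature along these lines (a sketch; SelectMany is the .NET name for monadic bind, and the names here are illustrative):

```csharp
// Takes a Monad<T> and a function T -> Monad<U>, and produces a Monad<U>.
// This is the monadic "bind", which .NET calls SelectMany.
Monad<U> SelectMany<T, U>(Monad<T> source, Func<T, Monad<U>> selector);
```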

.NET is unique in its naming convention, preferring SQL-inspired names such as Select, SelectMany and Where over map, flatMap/bind/concatMap, and filter respectively.

The .NET framework provides implementations of SelectMany for IEnumerable<T> and IQueryable<T>. The latter technically doesn’t adhere to the required signature for a monadic SelectMany. This means there are some caveats to using IQueryable’s SelectMany – but that is something for another blog post.

However, IEnumerable<T> is a perfectly passable monad (and functor). Unfortunately, monads are not always recognized as such, making their use more difficult than necessary.

Futur… I mean tasks.

Let’s take a look at Futures. Suppose you have a method that does a complicated computation eventually resulting in a value, say an integer. Rather than blocking the application, the method immediately returns a Future<int> and starts calculating the value asynchronously. The caller can then continue doing other work, and only when it needs the result of the asynchronous computation, it awaits the future until it’s completed. Futures are known by a different name in the .NET framework, namely tasks.

Let us look at an example program that uses Task’s interface explicitly (rather than implicitly through the relatively new async and await keywords):
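The original listing has been lost; the following sketch, with illustrative names, shows what such a program might look like, using Task.Run, ContinueWith and FromResult directly:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Start an asynchronous computation explicitly.
        Task<int> computation = Task.Run(() => 21 * 2);

        // Chain a follow-up computation with ContinueWith rather than await.
        Task<string> formatted =
            computation.ContinueWith(t => $"The answer is {t.Result}");

        // Wrap a plain value in an already-completed task.
        Task<int> wrapped = Task.FromResult(42);

        Console.WriteLine(formatted.Result); // blocks until the chain completes
        Console.WriteLine(wrapped.Result);
    }
}
```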

So how do Tasks behave as monads? Clearly, Task<T> is a generic class. That’s one tick off our list. Next, the FromResult method takes a value of type T and returns a Task<T>. Check! So that leaves us just with the SelectMany operation. ContinueWith serves this goal, but its signature is not quite right…

Now let’s think of the continuation function as a method. Its declaration would look like:
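The listing is missing from the source; based on the parameter ContinueWith accepts (a Func<Task<TResult>, TNewResult>), the declaration would be roughly:

```csharp
// The shape of a continuation function as ContinueWith expects it:
// it receives the antecedent task itself, not its result.
TNewResult ContinuationFunction(Task<TResult> completedTask);
```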

This reads as if ContinuationFunction accepts a potentially uncompleted asynchronous computation and returns a value… synchronously. Surely, if it were designed to work asynchronously, it would return a Task<TNewResult>. Nothing in its signature suggests ContinuationFunction is designed to be run in an asynchronous context.

I would never write a general purpose method with such a signature. And you won’t find them in the .NET framework either. What you will find are methods of the form:
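That form, sketched with illustrative names, is:

```csharp
// Takes a plain value and performs a computation asynchronously,
// returning a task that will eventually hold the result.
Task<TNewResult> DoSomething(TResult argument);
```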

Now that signature makes sense! DoSomething takes an argument and is designed to perform a computation asynchronously. Methods like this are common in the .NET framework, but the Task class cannot use them directly! You’d have to wrap them.

Now, having to wrap each of these general purpose methods would be clumsy. So Microsoft introduced two new keywords to the language specification: async and await. Methods marked with async are translated into tasks and continuations.

So how do these keywords affect the flow of your program? Well, here is a picture taken straight from MSDN:

Application flow with async and await

Arguably, the author of that article may just have chosen a very convoluted explanation inadvertently, but wow. Notice how the picture has more arrows than lines of code. Microsoft claims that async and await make it easier to write asynchronous code that looks like standard synchronous code. I can’t help but feel that with an implementation of SelectMany and universal syntactic sugar, this goal could have been achieved as well, and in a more straightforward manner.

Also, with await and async it’s hard to determine up front what code will run on the main thread and what will execute somewhere else. Let’s look at the following code:
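The original listing is not shown; the following sketch reproduces the behaviour described below, with hypothetical names (LogIfMainThread, the "DSA" tag). Each logging call prints its step number only when executing on the main thread:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static int mainThreadId;

    // Prints the step number and a tag, but only when on the main thread.
    static void LogIfMainThread(int step, string tag)
    {
        if (Thread.CurrentThread.ManagedThreadId == mainThreadId)
            Console.WriteLine($"{step}: {tag}");
    }

    static async Task<int> GetNumberAsync()
    {
        await Task.Delay(1000); // keeps the task running when it is awaited
        return 42;
    }

    static async Task DoSomethingAsync()
    {
        LogIfMainThread(1, "DSA");
        Task<int> numberTask = GetNumberAsync();
        LogIfMainThread(2, "DSA");
        int number = await numberTask; // may resume on a thread-pool thread
        LogIfMainThread(3, "DSA");
        LogIfMainThread(4, "DSA");
    }

    static void Main()
    {
        mainThreadId = Thread.CurrentThread.ManagedThreadId;
        DoSomethingAsync().Wait();
    }
}
```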

The Delay in GetNumberAsync ensures that the task returned is still running by the time it is awaited in DoSomethingAsync. You’ll get output along the lines of

1: DSA
2: DSA

Change the delay to something like 1 millisecond and you might see the following output:

1: DSA
2: DSA
3: DSA
4: DSA

This means that whether a part of an async method executes on the main thread depends on the behaviour of other async methods. But you should probably never rely on this in any case.

How nice the world could have been

If Task were designed with an idiomatic bind operation, methods such as DoSomething could be chained easily:
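Such a bind operation is not part of the framework, but a sketch of what it could look like, and how methods like the hypothetical DoSomething would then chain, is:

```csharp
using System;
using System.Threading.Tasks;

static class TaskMonad
{
    // A hypothetical monadic bind for Task<T>.
    // ContinueWith yields a Task<Task<TResult>>; Unwrap flattens it.
    public static Task<TResult> SelectMany<T, TResult>(
        this Task<T> task, Func<T, Task<TResult>> selector) =>
        task.ContinueWith(t => selector(t.Result)).Unwrap();
}

// Chaining asynchronous computations (method names are illustrative):
// Task<string> result = GetNumberAsync()
//     .SelectMany(n => DoubleAsync(n))
//     .SelectMany(d => FormatAsync(d));
```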

The program flow in the above listing is immediately apparent.

The presence of the monadic operation SelectMany does not exclude the existence of methods such as ContinueWith. Indeed, the continuationFunction passed to ContinueWith can implement more complicated logic by handling failed tasks. However, one could also argue that it is too much of a burden having to handle potential failures in every continuationFunction. It’s not as if we programmers are always so conscientious as to check for nulls, and this is the same.

It would be better if SelectMany did the checking for us and short-circuited in case of a failure. Those who do need the extra bit of power could still rely on ContinueWith, but for the most part we could happily rely on the logic in the imaginary SelectMany method.

Wrapping up

Rather than adding an additional method to Task<T>’s interface, new keywords were added to the language; they were introduced in C# 5.0. And it wasn’t the first time the language was extended to accommodate a specific monad; C# 3.0 introduced syntactic sugar to work with IEnumerables and IQueryables. The approach of extending the language for every monad clearly does not scale.

Scala solved this problem with a monad-aware for keyword. Typically, for is used for enumerating a sequence. The language designers realised that sequences are a monad, and generalized the for construct to handle any monad in a way that is still familiar to an object-oriented or imperative programmer.

C# is actively being developed and hopefully some day it will receive general support for monads.