# Trampolines and Coroutines

My research has led me to a phenomenal discovery about problems prevalent in systems that use cooperative (non-preemptive) multitasking.

First, cooperative multitasking, as the name states, lets processes and programs voluntarily yield control to one another, enabling them to solve tasks together.

Such as, of course, an event loop. 🍩

Coroutines are software components that generalize this style of multitasking to subroutines. Let's take a look at an example — remember (hopefully not) WinMain and GetMessage in the WinAPI?

Suppose we implemented it like so:

```
function MessageLoop(Window: HWND): void {
  let message: LPMSG = yield GetMessage(Window);
  yield HandleMessage(Window, message);
}

function HandleMessage(Window: HWND, message: LPMSG): void {
  match message {
    W_OPEN  => HandleOpenEvent(Window);
    W_CLICK => HandleClickEvent(Window);
    W_EXIT  => HandleExitEvent(Window);
  }
  yield MessageLoop(Window);
}

function Main(): void {
  MessageLoop(CreateWindow());
}
```


Here, we trivialize a very complex system that now scales concurrently: many windows and widgets may cooperatively pass messages back and forth in an event loop. 😵

Many languages, however, do not give us this luxury, and instead provide a more limited semicoroutine (generator) that does not control where execution continues; that is, it always returns to its caller instead of possibly transferring to another routine.
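To make that limitation concrete, here is a toy semicoroutine of my own in JavaScript — notice that every suspension point hands control back to the caller of next(), and nowhere else:

```javascript
// A semicoroutine (generator): it can suspend itself, but control
// always returns to whoever called next() — it cannot hand control
// to another routine directly.
function* counter() {
  yield 1; // suspend; control goes back to the caller
  yield 2; // suspend again — still the caller, never elsewhere
  return 3;
}

const it = counter();
it.next(); // → { value: 1, done: false }
it.next(); // → { value: 2, done: false }
it.next(); // → { value: 3, done: true }
```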

# So, what can we do?

There is a remarkable way around that. Let me explain.

I want to briefly touch on trampolines. Stay with me, and take a look at this straightforward definition in Haskell:

```haskell
data Trampoline a = More (() -> Trampoline a)
                  | Done a

runT :: Trampoline a -> a
runT (Done a) = a
runT (More k) = runT (k ())
```


That is, trampolines are a way to 'capture' steps of a computation, additionally providing a sort of 'stackless recursion'. In our definition above, running runT on a trampoline will 'bounce' on More's continuation k until it's Done and gives us the final value.

credit: http://bartdesmet.net/

By capturing and solving computation in a continuation-passing style on a trampoline, we can, in fact, interleave steps and thus implement coroutines!
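The Haskell definition transcribes directly into JavaScript — the names More, Done, and runT mirror the Haskell above; the rest is my own sketch of the two properties we care about, stackless recursion and interleaving:

```javascript
// More wraps the next step of a computation; Done carries its result.
const More = (k) => ({ done: false, k });
const Done = (a) => ({ done: true, a });

// runT: bounce on the continuation until we hit Done.
function runT(t) {
  while (!t.done) t = t.k();
  return t.a;
}

// 'Stackless recursion': each call returns immediately with the next
// step, so even a very deep countdown never overflows the stack.
const count = (label, n) =>
  n === 0 ? Done(label) : More(() => count(label, n - 1));

runT(count("deep", 1000000)); // → "deep", no stack overflow

// And because steps are reified, we can interleave computations —
// bouncing several trampolines round-robin, one step at a time.
function interleave(...ts) {
  const results = ts.map(() => undefined);
  let pending = ts.length;
  while (pending > 0) {
    ts = ts.map((t, i) => {
      if (t === null) return null; // already finished
      if (t.done) { results[i] = t.a; pending--; return null; }
      return t.k(); // one bounce
    });
  }
  return results;
}

interleave(count("a", 3), count("b", 5)); // → ["a", "b"]
```

That interleaving is exactly the property the coroutine trick relies on.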

# Neat, so...

Right. So in many ubiquitous languages like Python and JavaScript, we don't have some of these things. But something in them looks very similar to the pattern above, and very similar to 'capturing computation'. Hmm.

🍿

Generators and Promises! Holy crap. Suppose we used a generator as a dispatcher (or, essentially, a trampoline) over a map of child generators that call each other back and forth in continuation? Let's revisit our MessageLoop program above, in JavaScript, assuming it had native async generators:

```javascript
async function* messageLoop(window) {
  const message = await getMessage(window);
  window.message = message; // stash the message for the handler
  yield handleMessage;      // bounce: tell the dispatcher what runs next
}

async function* handleMessage(window) {
  try {
    switch (window.message) {
      case W_CLICK: await handleClickEvent(window);
    }
  } catch (e) {
    throw new MessageException(window.message);
  }
  yield messageLoop; // bounce back to the message loop
}

async function main() {
  const window = createWindow();
  // initialize the iterators using the generator function references
  const map = new Map([
    [messageLoop, messageLoop(window)],
    [handleMessage, handleMessage(window)],
  ]);
  let step = { done: false, value: messageLoop };
  while (!step.done && map.has(step.value)) {
    step = await map.get(step.value).next(); // bounce to the yielded coroutine
  }
}
```


We could go further, as creed has done, into a dispatch factory that generalizes contexts and allows us to trampoline an entire systematic process!
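Such a factory might look something like this — a hypothetical sketch of my own (the names are mine, not creed's): it closes over a shared context object, builds one iterator per generator function, and bounces between them by treating each yielded value as the next coroutine to run.

```javascript
// A hypothetical dispatch factory: share one context across a set of
// async generators and trampoline between them until one finishes or
// yields something outside the map.
function makeDispatcher(context, ...gens) {
  const map = new Map(gens.map((g) => [g, g(context)]));
  return async function run(entry) {
    let step = { done: false, value: entry };
    while (!step.done && map.has(step.value)) {
      step = await map.get(step.value).next(); // one bounce
    }
    return step.value; // final value of whichever coroutine ended
  };
}
```

Any coroutine in the set can then transfer control to any other simply by yielding its function reference.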

This is incredible. By simulating a trampoline over a generator dispatch, we have essentially implemented a system of coroutines, thereby gaining the many extraordinary things coroutines provide.

# Like what?

Since our dispatcher provides multiple entry points into our program for cooperative tasks, we can now build precise stack traces at any point in the event loop. We may also deal with all uncaught exceptions in a single place by yielding to a non-preemptive global handler (in the spirit of the unhandledRejection or rejectionHandled events). Lastly, as above, we may implement our own efficient cooperative event loops and state machines.
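As a tiny illustration of that second point — a minimal, synchronous sketch of my own (all names are mine): wrapping the dispatcher's drive loop in one try/catch gives every coroutine a single, central failure path.

```javascript
// A faulty coroutine: it suspends once, then blows up mid-task.
function* faulty() {
  yield 1;
  throw new Error("boom");
}

// The drive loop is the single place where every exception — thrown
// from any coroutine it runs — can be observed and handled.
function drive(makeGen) {
  const it = makeGen();
  try {
    let step = it.next();
    while (!step.done) step = it.next();
    return { ok: true, value: step.value };
  } catch (e) {
    return { ok: false, error: e.message }; // the one global handler
  }
}

drive(faulty); // → { ok: false, error: "boom" }
```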

This is huge. It solves many of the issues that many single-threaded, low-feature languages have — such as playing popcorn with exceptions and dealing with poor dependency stack traces — and allows large applications to scale with little worry at all.