My research has led me to a phenomenal discovery about problems prevalent in systems that use cooperative (or non-preemptive) multitasking.

First, cooperative multitasking, as the name states, allows processes and programs to voluntarily yield control to one another (that is, suspend and resume), enabling them to solve tasks together.

Such as, of course, an event loop.

Coroutines are software components that generalize this style of multitasking to subroutines. Let's take a look at an example. Remember *(hopefully not)* `WinMain` and `GetMessage` in the WinAPI?

Suppose we implemented it like so:

```
void MessageLoop(HWND Window) {
    let LPMSG message = yield GetMessage(Window);
    yield HandleMessage(Window, message);
}

void HandleMessage(HWND Window, LPMSG message) async* {
    match message {
        W_OPEN  => HandleOpenEvent(Window);
        W_CLICK => HandleClickEvent(Window);
        W_EXIT  => HandleExitEvent(Window);
    }
    yield MessageLoop(Window);
}

// ...

void main() {
    MessageLoop(CreateWindow());
}
```

Here, we have designed a system that now scales concurrently, as many windows and widgets may *cooperatively* work together on messages in an event loop.

Many languages, however, do not give us this luxury, and instead provide a more limited `semicoroutine` (generator) that does not control where it continues execution; that is, it always returns to its caller instead of possibly another routine.

There is a remarkable way around that. Let me explain.

I want to briefly touch on `Trampolines`. Stay with me, and take a look at this straightforward definition in Haskell:

```
data Trampoline a = More (() -> Trampoline a)
                  | Done a

runT :: Trampoline a -> a
runT (Done a) = a
runT (More k) = runT (k ())
```

That is, trampolines are a way to 'capture' steps of computation, additionally providing a sort of 'stackless recursion'. In our definition above, running `runT` on a trampoline will 'bounce' on `More` by calling `k` until it's `Done` and gives us the final value.

credit: http://bartdesmet.net/

By capturing and solving computation in a continuation-passing style on a trampoline, we can, in fact, interleave steps and thus implement coroutines!
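The same idea translates directly to JavaScript. Here's a minimal sketch of my own (with `more`, `done`, and `runT` mirroring the Haskell `More`, `Done`, and `runT` above):

```javascript
// A sketch of the Haskell Trampoline above, in JavaScript.
// 'more' wraps the next step in a thunk; 'done' carries the final value.
const more = (thunk) => ({ done: false, thunk });
const done = (value) => ({ done: true, value });

// runT 'bounces' on the thunk until the computation is done,
// so deeply recursive computations run in constant stack space.
function runT(t) {
  while (!t.done) t = t.thunk();
  return t.value;
}

// Example: a countdown that would overflow the stack as plain recursion.
const countdown = (n) =>
  n === 0 ? done('finished') : more(() => countdown(n - 1));

runT(countdown(1000000)); // → 'finished', no stack overflow
```

Each recursive "call" is deferred into a thunk, so the only stack frame alive at any moment is the loop inside `runT`.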

Right. So many ubiquitous languages, like Python and JavaScript, don't give us some of these things. But what in them looks **very** similar to the pattern above, and **very** similar to 'capturing computation'? Hmm.

🍿

Generators and Promises! Holy crap. Suppose we used a generator as a `dispatcher` (or, essentially, a trampoline) over a map of child generators that call each other back and forth in continuation? Let's revisit our MessageLoop program above, in JavaScript, assuming it had native `async generators`:

```
async function* messageLoop(window) {
  let message = yield* getMessage(window);
  yield* handleMessage(window, message);
}

async function* handleMessage(window, message) {
  try {
    switch (message) {
      case W_CLICK: await handleClickEvent(window);
    }
  } catch (e) {
    throw new MessageException(message);
  }
  yield* messageLoop(window);
}

async function main() {
  const window = createWindow();
  // initialize the iterators using the generator function references
  const map = new Map([
    [messageLoop, messageLoop(window)],
    [getMessage, getMessage(window)],
    [handleMessage, handleMessage(window)],
  ]);

  let current = messageLoop;
  let step = { done: false };
  while (!step.done) {
    step = await map.get(current).next();
    if (!step.done) current = step.value; // each routine yields the next routine
  }
}
```

We could go further, as creed has done, into a dispatch factory that generalizes contexts and allows us to trampoline an entire systematic process!

This is incredible. By simulating a trampoline on a generator dispatch, we have essentially implemented a system of coroutines, thereby gaining the many extraordinary things coroutines provide.
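To see the mechanics without the WinAPI framing, here is a tiny runnable sketch (`ping` and `pong` are hypothetical routines of my own, using plain generators): each routine yields the name of the routine to run next, and the dispatcher loop is the trampoline transferring control between them.

```javascript
// Each generator yields the key of the routine that should run next;
// the dispatcher loop below 'bounces' control between them.
const log = [];

function* ping() {
  for (let i = 0; i < 3; i++) {
    log.push(`ping ${i}`);
    yield 'pong'; // cooperatively hand control to pong
  }
}

function* pong() {
  while (true) {
    log.push('pong');
    yield 'ping'; // hand control back to ping
  }
}

// the dispatcher (trampoline): run until some routine finishes
const routines = { ping: ping(), pong: pong() };
let current = 'ping';
let step = routines[current].next();
while (!step.done) {
  current = step.value;
  step = routines[current].next();
}

console.log(log.join(', '));
// → ping 0, pong, ping 1, pong, ping 2, pong
```

Neither routine ever calls the other directly; control is interleaved entirely by the dispatcher, exactly like the trampoline bouncing on thunks.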

Since our dispatcher provides multiple `entry points` into our program for cooperative tasks, we can *now build precise stack traces at any point in the event loop*. We may now, also, deal with **all uncaught exceptions** in a single place by `yielding` to a non-preemptive global handler (such as the `unhandledRejection` or `rejectionHandled` events). Lastly, as above, we can implement our own **efficient** cooperative event loops and state machines.
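On the exception point specifically, a hedged sketch (the `drive` helper and routines are my own illustration): because every step of every routine passes through the dispatcher's `next()` call, one `try`/`catch` there behaves like a single global, non-preemptive handler.

```javascript
// Every routine's exceptions surface at the dispatcher's next() call,
// so a single try/catch acts as the global handler for all of them.
function drive(routines, start) {
  let current = start;
  let step = { done: false };
  while (!step.done) {
    try {
      step = routines[current].next();
      if (!step.done) current = step.value;
    } catch (e) {
      return { error: e }; // every uncaught exception lands here
    }
  }
  return { value: step.value };
}

function* healthy() { return 'ok'; }
function* faulty() { throw new Error('boom'); }

drive({ main: healthy() }, 'main'); // → { value: 'ok' }
drive({ main: faulty() }, 'main');  // → { error: Error('boom') }
```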

This is huge. It solves many of the issues that many single-threaded, low-feature languages have — such as playing popcorn with exceptions and dealing with poor dependency stack traces — and allows large applications to scale **with little worry at all**.

The last couple of years I have been researching solutions to 'deal-breakers' of engines like `libChromiumContent` as a native platform. The biggest one is the bottleneck of operations on the DOM. Although originally things like *scrolling* caused an **entire** repaint, it doesn't have to anymore.

There are two reasons why I'm strongly behind them as a platform. The first is that WebAssembly is now available (and C/C++ and Rust already compile to it). The second is the vigorous list of APIs readily available for powerhouse applications: WebGL, Geolocation, Threads, Thread-Safe Typed Buffers, ServiceWorkers, Thread-Safe B+ Trees, Thread-Safe Image Processing, Channel Messaging; the list goes on.

No, really. If it weren't for the prospects of WASM, I'd probably be doing Java and/or Swift.

I want to talk about repaint segmentation of algorithms.

Suppose you have a large set of elements in your application. Say, thousands of backlog lines in a chat program. You represent every line as a `<p>`, and every new line is an appended `<p>`. Sounds like it wouldn't work, doesn't it? Any modern engine would freeze, as the expense would be too much, much less changing the color or sorting those lines!

Google actually gives us a couple of solutions for dealing with visual changes. In conjunction with reducing paint complexity, you can solve scenarios like the one above. I took it a step further. Let's take a look at a sample Google gives us for breaking large tasks into smaller, per-frame chunks.
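The core idea of that sample, roughly sketched in my own words (an illustration, not Google's code; `schedule` is injectable so the sketch also runs outside a browser):

```javascript
// Process items until the frame's time budget is spent, then hand the
// remainder to the next animation frame so the browser can paint.
function processInChunks(items, handle, onDone, schedule = requestAnimationFrame) {
  const BUDGET_MS = 8; // leave headroom in a ~16ms frame
  let i = 0;
  function chunk() {
    const start = Date.now();
    // do work only while this frame's budget lasts
    while (i < items.length && Date.now() - start < BUDGET_MS) {
      handle(items[i++]);
    }
    if (i < items.length) schedule(chunk); // resume on the next frame
    else onDone();
  }
  schedule(chunk);
}
```

In the chat-backlog scenario above, `handle` would append one `<p>` at a time, so no single frame ever blows its paint deadline.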

Immediately when I saw this, I thought about divide-and-conquer algorithms and how this could be generalized for a class of problems *(with easing!)*.

Applying this segmentation to algorithms of $\mathcal{O}(n)$ time is trivial. Let's take a look at quadratic, which initially may seem confusing. I took a look at bubble sort, being one of the sorts I know very well.

Here's a Codepen of it working. *(source code)*

Although `swap` here only swaps the height, the result is the same: it causes a repaint. Moreover, some elements' heights may be the same, causing the visual variation illusion.

The code is beautifully straightforward. For $\mathcal{O}(n^{2})$ (and, by extension, $\mathcal{O}(n^{3})$) algorithms, segment the algorithm at the base operation, here `swap`, calling `requestAnimationFrame` at each 'depth' when enough time is available. It scales, and we can add easing or delay.

This approach works for **ANY** `n`, as it always runs the nth-depth operation when enough time in a frame is available. Mindblowing.
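Here is how I'd sketch that segmentation (my own reconstruction, not the Codepen source): write bubble sort as a generator that suspends at each `swap`, and drive one step per `requestAnimationFrame`.

```javascript
// Bubble sort written as a generator: it yields right after every swap,
// the operation that causes a repaint, so the driver can run exactly
// one 'depth' per animation frame.
function* bubbleSort(a) {
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - i - 1; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap (e.g. the elements' heights)
        yield; // suspend until the next frame has time for us
      }
    }
  }
}

// Drive one swap per frame; easing or delay could be layered on here.
function run(steps, schedule = requestAnimationFrame) {
  if (!steps.next().done) schedule(() => run(steps, schedule));
}

// run(bubbleSort(heights)); // in the browser
```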

→🦉

Next, I took a look at `quicksort`, also straightforward to implement. The generalization works for all logarithmic and combinatoric problems.

To demonstrate how powerful this method is, here is the `quicksort` with buffered segmentation. It does not complete, however, as pivot step atomicity is necessary.

Cool. Alright, let's break this down. First off, we notice `quicksort` is async. The operation we segment is the partition, being the base operation for all recursive calls. Because we segment `partition` on available frames, we want it to take its time, so I await a `Promise` to the next `pivot` in the sort, thus scaling the algorithm for all `n`. The segmentation here isn't as fast as our quadratic solution; the atomicity of our `pivot` operation can't be buffered into a time frame. **But the algorithm does complete for all n smoothly**.
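A hedged reconstruction of the shape of that code (mine, not the Codepen source): the partition runs atomically, and each recursive call first awaits a frame, the `Promise` to the next `pivot`.

```javascript
// Quicksort segmented on partition: the pivot step itself is atomic,
// but each recursive call waits for an animation frame first.
const nextFrame = () => new Promise((resolve) => requestAnimationFrame(resolve));

// Lomuto partition: runs to completion in one go (the atomic step).
function partition(a, lo, hi) {
  const pivot = a[hi];
  let i = lo;
  for (let j = lo; j < hi; j++) {
    if (a[j] < pivot) {
      [a[i], a[j]] = [a[j], a[i]]; // each swap causes a repaint
      i++;
    }
  }
  [a[i], a[hi]] = [a[hi], a[i]];
  return i;
}

async function quickSort(a, lo = 0, hi = a.length - 1, frame = nextFrame) {
  if (lo >= hi) return;
  await frame(); // the Promise to the next pivot
  const p = partition(a, lo, hi);
  await quickSort(a, lo, p - 1, frame);
  await quickSort(a, p + 1, hi, frame);
}
```

Injecting `frame` lets the same code run per animation frame in the browser or flat-out in tests.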

If there were many base-case operations (the 0th and 1st depths, say), we'd segment them together, scaling into the recursion.

Some game-changing stuff here. I am completely confident I could build **very large** applications with close-to-native (if not fully native) speed. The research mentioned above has removed many problems regarding DOM applications at scale. If you have questions or thoughts send me a message on Twitter.

— Jahan


This holiday weekend has enlightened me on some important topics in functional programming. It has persuaded me to take a real look at category theory.

It all began with an intuition of Applicative Functors. They really are such exquisite things.

I am by no means authoritative on the topic, but my study has inspired me to try to write about it in a way that is hopefully understandable.

Alright, prologue aside, let's dive in. First off, let's quickly revisit `Functor`, HKTs, their implications, and how they're important. The language I will be using is Haskell *(for now)*, for obvious reasons, but I will try to explain it well.

'Behind free monads, applicatives may be #1 pattern in FP, IMO'

A Functor is defined over types of kind `* -> *`, or *unary type constructors*. You've probably come across them quite a bit, as they include `List` (or `[]`) and `Maybe`. Types of kind `*`, on the other hand, are what we call **concrete types** (like `Int`).

As an informal side note: in other languages they are quite literally similar to generic types such as `List<T>` or `Option<T>`; using `Option<f64>`, for example, in Rust. What we *really* want is higher-kinded polymorphism.

Let's take a look at the Functor typeclass (I'll explain):

```
class Functor f where
  fmap :: (a -> b) -> f a -> f b
```

If you're unfamiliar with Haskell syntax, that's okay. This essentially means every instance of `Functor` must implement a function `fmap` that takes a function of type `(a -> b)` and an `f a` *(where `f` is the type constructor or 'context': a type of kind `* -> *`, such as `Maybe` or `List`)* and returns an `f b`. The best way to understand this concretely, for someone just starting, is to take a look at `List` or `[]`. Remember `map`? Its signature looks like:

```
map :: (a -> b) -> [a] -> [b]
```

Did something just click there?!? What about the `Functor` instance for `List`?

```
instance Functor [] where
  fmap = map
```

I'm sure most of you have mapped over lists. But for completeness, an example: `map (*2) [1..3]` gives us `[2,4,6]`.

That's right! Indeed `List` is a `Functor`, but there are in fact many types which are `Functor`s! We won't go over those here. The point is: functors are things that can be mapped over.

Again, for completeness: Functor (and Applicative, and Monad) implementations must obey laws. I will omit proofs or anything rigorous here, but be wary that instances should abide by the laws.

Time for the juicy insides. Let's take a look at the definition:

```
class (Functor f) => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
```

From our `Functor` recap, some things here should be immediately clear. For one, `Applicatives` are indeed themselves also `Functors` (note the `Functor f` constraint). Cool. So we see `Applicatives` implement a `pure` function and `(<*>)` (an infix function here; it's also called apply in other languages).

Is it obvious what `pure` does here? It should be! If `f` is some functorial context, then `a -> f a` **lifts** `a` into the Applicative Functor.

What about `(<*>)`? It looks quite similar to our `fmap` above, doesn't it? Except we notice the `f (a -> b)`. Hmm. So it takes a `Functor` that has a function in it and applies that function to a second `Functor`. If you 'erase' the `f` from the type signature, you get normal function application. Let's take a look at examples.

Holy moly! See what's happening here? In the first example, `[(2 *)] <*> [5]` gives us `[10]`: `[(2 *)]` is already a function in a `List` *(List is also an Applicative Functor!)*, and `(<*>)` applies the `2 *` function to `[5]`. In the second example, `pure (*) <*> [2] <*> [5]`, we *lift* the binary `(*)` function into the applicative context and then *apply* it to our applicatives.
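For readers coming from JavaScript, here's a rough analogy of the List Applicative (my own sketch, not a standard API): `ap` applies every function in one array to every value in another.

```javascript
// A JavaScript analogy of the List instance: pure wraps a value in a
// singleton array, and ap applies each function to each value.
const pure = (x) => [x];
const ap = (fs, xs) => fs.flatMap((f) => xs.map(f));

ap([(x) => 2 * x], [5]);                     // like [(2 *)] <*> [5] → [10]
ap(ap(pure((x) => (y) => x * y), [2]), [5]); // like pure (*) <*> [2] <*> [5] → [10]
```

The second line mirrors lifting the binary function with `pure` and applying it twice, once per applicative argument.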

First off, suppose `<$>` is the *infix* version of `fmap`. What if we wanted to map a function **over** a collection of Applicative Functors?

Note that `(*)` is a binary infix function that takes two parameters, a la `2 * 2`.

Wait one second here... You're telling me I can take a function like `(*)` and `fmap` it over a collection of Applicative Functors, as in `(*) <$> [2] <*> [5]` giving `[10]`? **Inside their context**?

Note that we only use `(*)` here, which takes 2 parameters. The real magic is that we could use `n-ary` functions and apply them over `n` applicatives!

To understand how powerful this is, think about `Future`s, `Either`, `Maybe`, and other very useful types. Suppose you have defined a `Router` or `Middleware` over a collection of routes that is an `Applicative Functor`. How incredible and effective it would be to `fmap` a function over it **and determine or compose independent effects for all routes**.

When I realized how powerful this truly is, I was moved to order a category theory book straight away. Monads do have their shine, and for good reason! But to touch briefly on the difference: monads *are dependent on the value and context (`a -> f b`)*, while Applicatives are less restrictive, with effects fixed by the `f a` structure, and have many uses. Until next time.

— Jahan
