Calico

Calico is a UI library for the Typelevel.js ecosystem. It builds on the abstractions provided by Cats Effect and FS2 to offer a fluent DSL for building web applications that are composable, reactive, and safe. If you enjoy working with Cats Effect and FS2, then I hope you will like Calico as well.

Acknowledgements

Calico was inspired by Laminar. So far I have only had time to plagiarize the DSL and a few of the examples ;) Thanks to @raquo for sharing their wisdom.

I am very grateful to @SystemFw who gave me a tutorial on all things Functional Reactive Programming shortly before I embarked on this project.

Try it!

With special thanks to @yurique, you can now try Calico right in your browser at scribble.ninja!

libraryDependencies += "com.armanbilge" %%% "calico" % "0.1.2"

Please open issues (and PRs!) for anything and everything :)

Core concepts

Components and resource management

The most important idea behind Calico is that each component of your app (and in fact your app itself) should be expressed as a Resource[IO, HTMLElement].

import cats.effect.*
import org.scalajs.dom.*
// note: no calico import yet!

val component: Resource[IO, HTMLElement] = ???

// or more generally:
def component[F[_]: Async]: Resource[F, HTMLElement] = ???

This Resource completely manages the lifecycle of that element and its children. When the Resource is allocated, it creates an instance of the HTMLElement and any supporting resources, such as background Fibers or WebSocket connections. Likewise, when the Resource is finalized, those Fibers and connections are canceled and released.
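To make this concrete, here is a hand-rolled sketch of what such a Resource might look like, using only Cats Effect, FS2, and scala-js-dom (no Calico). The component and its names are purely illustrative, not part of any API: acquiring the Resource creates the element and starts a supporting fiber scoped to it; releasing the Resource cancels that fiber automatically.

import cats.effect.*
import fs2.Stream
import org.scalajs.dom.*
import scala.concurrent.duration.*

// Hypothetical component: a <span> that counts up once per second.
// The background fiber is scoped to the Resource via `.background`,
// so it is canceled as soon as the Resource is released.
def tickingLabel: Resource[IO, HTMLElement] =
  for
    el <- Resource.eval(IO(document.createElement("span").asInstanceOf[HTMLElement]))
    count <- Resource.eval(IO.ref(0))
    _ <- Stream
      .awakeEvery[IO](1.second)
      .evalMap(_ => count.updateAndGet(_ + 1).flatMap(n => IO(el.textContent = n.toString)))
      .compile
      .drain
      .background
  yield el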

Because Resource[IO, HTMLElement] is referentially transparent, it naturally behaves as a "builder". Your component can be reused in multiple places in your application, and unmounted and remounted, without worrying about crossed wires or leaked resources. This makes it easy to compose components.
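As a tiny, hypothetical illustration of this builder behavior, the same Resource value can be composed into a program more than once, and each allocation produces its own independent element with its own lifecycle:

import cats.syntax.all.*

// Allocating `component` twice yields two separate elements;
// releasing the outer Resource releases both of them.
val twoCopies: Resource[IO, (HTMLElement, HTMLElement)] =
  (component, component).tupled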

So far, none of this is specific to Calico: we get all of this for free from Cats Effect. Calico steps in with a friendly DSL to cut down the boilerplate.

import calico.dsl.io.*
import cats.effect.*
import org.scalajs.dom.*

val component: Resource[IO, HTMLElement] = div(i("hello"), " ", b("world"))

Yes, in this very unexciting example, i("hello") and b("world") are both Resources that monadically compose with div(...) to create yet another Resource! There are no other resources involved in this very simple snippet. Also note that we have not yet created any HTMLElements; we have merely created a Resource that describes how to make one.
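To emphasize that last point, here is a rough sketch (plain Cats Effect, not part of the Calico DSL) that allocates the Resource by hand: only at that moment is the <div> actually created, and the returned finalizer tears it down again.

// `allocated` is a low-level Cats Effect escape hatch, used here purely for
// illustration: it builds the element and hands back a finalizer.
val program: IO[Unit] =
  component.allocated.flatMap { case (element, release) =>
    IO.println(element.outerHTML) *> release
  }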

A more interesting example is this interactive Hello World demo.

import calico.*
import calico.dsl.io.*
import calico.syntax.*
import cats.effect.*
import cats.effect.syntax.all.*
import fs2.*
import fs2.concurrent.*

val component = SignallingRef[IO].of("world").toResource.flatMap { name =>
  div(
    label("Your name: "),
    input(
      placeholder := "Enter your name here",
      // here, input events are run through the given Pipe
      // this starts background fibers within the lifecycle of the <input> element
      onInput --> (_.mapToTargetValue.foreach(name.set))
    ),
    span(
      " Hello, ",
      // here, a Stream is rendered into the HTML
      // this starts background fibers within the life cycle of the <span> element
      name.discrete.map(_.toUpperCase)
    )
  )
}

The ideas are very much the same as the prior example.

  1. input(...) is a Resource that creates an <input> element and also manages Fibers that handle input events.
  2. span(...) is a Resource that creates a <span> element and also manages Fibers that handle rendering of the name.
  3. div(...) is a Resource composed of the input(...) and span(...) Resources, and therefore (indirectly) manages the Fibers of its child components.

And there we have it: a self-contained component consisting of non-trivial resources, that can be safely used, reused, and torn down.
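Calico has its own helpers for mounting components onto the page, but purely as a sketch of the underlying Resource mechanics (using only Cats Effect and scala-js-dom), mounting could look like the following. Canceling the program releases the Resource, unmounting the component and cleaning up all of its fibers.

import cats.effect.*
import org.scalajs.dom.*

// A minimal, hand-rolled mount: build the component, append its root element
// to the document body, and hold it open until the program is canceled.
object Main extends IOApp.Simple:
  def run: IO[Unit] =
    component.use { element =>
      IO(document.body.appendChild(element)) *> IO.never
    }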

Task scheduling and glitch-free rendering

A JavaScript webapp typically has a flow like:

  1. An event fires. Examples:
    • a user event, such as a button click
    • a scheduled timer event
    • an I/O event, such as an HTTP response or WebSocket message
  2. An event handler is triggered, starting (potentially concurrent) tasks to update the application state and the UI. These tasks may also set up new event emitters, for example by scheduling timers or initiating an HTTP request.
  3. The UI re-renders.

Calico is highly optimized for this pattern and by default schedules all tasks as so-called microtasks. These microtasks have very high priority: while there is still work to be done, the UI will not re-render and no further events will be processed. Only once all microtasks are complete will the UI re-render and events start being processed again.

Notice that this scheduling strategy guarantees glitch-free rendering. Because all tasks triggered by an event must complete before the view re-renders, the user will never see inconsistent state in the UI.
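As a hedged illustration of the scenario this protects against, consider two views derived from the same signal, written with the DSL and imports from the example above. Because every microtask triggered by an update runs before the next re-render, the two <span>s can never be observed out of sync:

// Both spans are driven by `count`; glitch-free scheduling means the UI
// never renders one of them updated while the other is still stale.
val counters = SignallingRef[IO].of(0).toResource.flatMap { count =>
  div(
    span(count.discrete.map(n => s"n = $n")),
    span(count.discrete.map(n => s"2n = ${n * 2}"))
  )
}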

However, there are certain situations where running a task at high priority is not desirable and you would prefer that it run in the "background" while your application remains responsive. This only applies to expensive calculations or processing tasks; there is no need, for example, to explicitly background I/O tasks, since they already operate via the event-driven flow described above.

In these cases, you should break that expensive task into smaller steps and schedule it as a so-called macrotask:

import calico.unsafe.MacrotaskExecutor

val expensiveTask = IO(smallStep1()) *> IO(smallStep2()) *> IO(smallStep3()) *> ...
expensiveTask.evalOn(MacrotaskExecutor)

The MacrotaskExecutor schedules macrotasks with equal priority to event processing and UI rendering. Conceptually, it is somewhat analogous to using IO.blocking(...) on the JVM, in that running these tasks on a separate ExecutionContext preserves fairness and enables your application to continue responding to incoming events. Conversely, forgetting to use .evalOn(MacrotaskExecutor) or IO.blocking(...) could cause your application to become unresponsive.

However, I suspect that situations where you need to use the MacrotaskExecutor in a webapp are rare. If you truly have a long-running, compute-intensive task and you do not want it to compromise the responsiveness of your application, you should consider running it on a background thread via a Web Worker instead.

To learn more about microtasks and macrotasks I recommend this article about the JavaScript event loop.