The Hype Around Signals
The groundwork for what we today call “signals” dates back to the 1970s. Building on that early work, the concept spread through different fields of computer science and was defined more precisely during the ’90s and early 2000s.

In web development, signals first gained traction with KnockoutJS; shortly after, they took a backseat in (most of) our minds. Then, some years ago, multiple similar implementations came to be.

With different names and implementation details, those approaches are similar enough to be wrapped in a category we know today as Fine-Grained Reactivity, even if they sit at different points on the “fine” versus “coarse” update spectrum. We’ll get into what this means soon enough.

To summarize the history: even though it’s an older technology, signals started a revolution in how we thought about interactivity and data in our UIs at the time. And since then, every major UI library (SolidJS, Marko, Vue.js, Qwik, Angular, Svelte, Wiz, Preact, etc.) has adopted some kind of implementation of them, except for React.

Typically, a signal is composed of an accessor (getter) and a setter. The setter updates the value held by the signal and triggers all dependent effects, while the accessor pulls the value from the source and is run by effects every time a change happens upstream.

const [signal, setSignal] = createSignal("initial value");

// The setter updates the stored value and notifies all dependent effects.
setSignal("new value");

// The accessor pulls the current value from the source.
console.log(signal()); // "new value"
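To see the reactive half in action, an effect can subscribe to the signal. Here is a minimal sketch using Solid’s createEffect; other libraries expose an equivalent primitive:

import { createSignal, createEffect } from "solid-js";

const [signal, setSignal] = createSignal("initial value");

// The effect runs once immediately and re-runs whenever a signal it reads changes.
createEffect(() => console.log("value is now:", signal()));

setSignal("new value"); // logs: value is now: new value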

To understand why signals are designed this way, we need to dig a little deeper into what API Architectures and Fine-Grained Reactivity actually mean.

API Architectures

There are two basic ways of defining systems based on how they handle their data. Each of these approaches has its pros and cons.

  • Pull: The consumer pings the source for updates.
  • Push: The source sends the updates as soon as they are available.

Pull systems need to handle polling or some other way of keeping their data up to date. They also need to guarantee that all consumers of the data are torn down and recreated once new data arrives in order to avoid state tearing.

State tearing occurs when different parts of the same UI reflect different versions of the state. For example, your header shows 8 posts available, but the list shows 10.

Push systems don’t need to worry about keeping their data up to date. The source, however, is unaware of whether the consumer is ready to receive the updates, and this can cause backpressure: a data source may send more updates in a given window than the consumer is capable of handling. If the flow of updates is too intense for the receiver, updates can get dropped (leading to state tearing once again) and, in more serious cases, the client can even crash.

In pull systems, the accepted tradeoff is that the data is unaware of where it’s being used; the receiving end must therefore take precautions to keep all of its components up to date. That’s how systems like React work, with their teardown/re-render mechanism around updates and reconciliation.

In push systems, the accepted tradeoff is that the receiving end needs to be able to deal with the update stream in a way that won’t cause it to crash while maintaining all consuming nodes in a synchronized state. In web development, RxJS is the most popular example of a push system.
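To make the two shapes concrete, here is a rough sketch in plain JavaScript. The interfaces and the fetchLatestPosts and render functions are illustrative assumptions, not from any specific library:

// Pull: the consumer decides when to ask; the source knows nothing about it.
const pullSource = {
  read: () => fetchLatestPosts(), // the consumer initiates every exchange
};
setInterval(() => render(pullSource.read()), 1000); // polling to stay fresh

// Push: the source keeps a list of consumers and calls them on every update.
function createPushSource() {
  const subscribers = new Set();
  return {
    subscribe: (fn) => subscribers.add(fn),
    emit: (value) => subscribers.forEach((fn) => fn(value)), // source initiates
  };
}

const pushSource = createPushSource();
pushSource.subscribe(render); // render now runs whenever the source emits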

The attentive reader may have noticed that the tradeoffs of each system sit at opposite ends of the spectrum: pull systems are good at scheduling updates efficiently, while in push architectures the data knows where it’s being used, which allows for more granular control. That creates a great opportunity for a hybrid model.

Push-Pull Architectures

In push-pull systems, the state holds a list of subscribers, which it can then trigger to re-fetch data once there is an update. The way this differs from traditional push is that the update itself isn’t sent to the subscribers, just a notification that they’re now stale.

Once the subscriber is aware its current state has become stale, it will then fetch the new data at a proper time, avoiding any kind of backpressure and behaving similarly to the pull mechanism. The difference is that this only happens when the subscriber is certain there is new data to be fetched.

We call this kind of data a signal, and the subscribers that are triggered to update are called effects. Not to be confused with React’s useEffect, which is a similar name for a completely different thing.
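Here is a deliberately simplified push-pull signal in plain JavaScript. It is a sketch of the idea, not any library’s actual implementation; real systems also defer the effect re-runs through a scheduler rather than running them synchronously as done here:

let activeEffect = null;

function createEffect(fn) {
  const effect = () => {
    activeEffect = effect; // so any signal read inside fn can register us
    fn();
    activeEffect = null;
  };
  effect(); // the first run subscribes the effect to every signal it reads
}

function createSignal(value) {
  const subscribers = new Set();

  const read = () => {
    // Pull side: the running effect registers itself, then reads the value.
    if (activeEffect) subscribers.add(activeEffect);
    return value;
  };

  const write = (next) => {
    value = next;
    // Push side: only a notification goes out; each effect re-runs and pulls.
    subscribers.forEach((effect) => effect());
  };

  return [read, write];
}

const [count, setCount] = createSignal(0);
createEffect(() => console.log("count is", count())); // logs: count is 0
setCount(1); // logs: count is 1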

Fine-Grained Reactivity

Once we establish the two-way interaction between the data source and data consumer, we will have a reactive system.

A reactive system only exists when data can notify the consumer it has changed, and the consumer can apply those changes.

Now, to make it fine-grained, two fundamental requirements need to be met:

  1. Efficiency: The system only executes the minimum amount of computations necessary.
  2. Glitch-Free: No intermediary states are shown in the process of updating a state.

Efficiency In UIs

To really understand how signals achieve high levels of efficiency, one needs to understand what it means to have an accessor. In broad strokes, accessors behave as getter functions. Having an accessor means the value does not live within the boundaries of our component: what our templates receive is a getter for that value, and every time their effects run, they pull an up-to-date value. This is why signals are functions and not simple variables. For example, in Solid:

import { createSignal } from 'solid-js'

function ReactiveComponent() {
  const [signal, setSignal] = createSignal('world')

  // Only the text node bound to signal() updates when setSignal is called;
  // the component function itself never re-runs.
  return (
    <h1>Hello, {signal()}</h1>
  )
}

The part that is relevant to performance (and efficiency) in the snippet above is that, because signal() is a getter, there is no need to re-run the whole ReactiveComponent() function to update the rendered output; only the expression that reads the signal re-runs, guaranteeing no extra computation.
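A quick way to observe this (a hypothetical experiment, not from the original demo) is to log inside the component body and click away:

import { createSignal } from 'solid-js'

function Counter() {
  const [count, setCount] = createSignal(0)

  // In Solid, this logs exactly once, no matter how many times count changes.
  console.log('component body executed')

  return (
    <button onClick={() => setCount(count() + 1)}>
      Clicked {count()} times
    </button>
  )
}

In a teardown/re-render system, the equivalent log would fire on every update; here, only the text node holding count() is patched.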

Glitch-Free UIs

Non-reactive systems avoid intermediary states by having a teardown/re-render mechanism. They toss away the artifacts with possibly stale data and recreate everything from scratch. That works well and consistently but at the expense of efficiency.

In order to understand how reactive systems handle this problem, we need to talk about the Diamond Challenge. This is a quick problem to describe but a tough one to solve. Take a look at the diagram below:

[Diagram: a diamond-shaped dependency graph. A is the root; B and C depend on A; D depends on C; and E depends on both B and D.]

Pay attention to the E node. It depends on D and B, but only D depends on C.

It’s easy and intuitive to have A trigger its children for updates as soon as new data arrives and let the update cascade down. But if your reactive system is too eager, E can receive the update from B while D is still stale, and E will show an intermediary state that should never exist.

Each implementation adopts different update strategies to solve this challenge. They can be grouped into two categories:

  1. Lazy Signals
    A scheduler defines the order in which the updates occur (A, then B and C, then D, and finally E).
  2. Eager Signals
    Signals are aware of whether their parents are stale, checking, or clean. In this approach, when E receives the update from B, it triggers a check/update on D, which climbs up the graph until it can guarantee a clean state, allowing E to finally update. The sketch below shows the diamond expressed with signals.
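Whatever the strategy, the observable result should be the same. As a sketch, here is the diamond built with Solid’s createMemo and createEffect (node names match the diagram; createRoot simply gives the computations an owner):

import { createSignal, createMemo, createEffect, createRoot } from 'solid-js'

createRoot(() => {
  const [a, setA] = createSignal(1)

  const b = createMemo(() => a() * 2)  // B depends on A
  const c = createMemo(() => a() + 1)  // C depends on A
  const d = createMemo(() => c() * 10) // D depends on C

  // E depends on B and D. A glitch-free system runs this effect exactly
  // once per update to A, never with a fresh B and a stale D.
  createEffect(() => console.log('E sees:', b(), d()))

  setA(2) // E logs a consistent "4 30", never the mixed "4 20"
})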

Back To Our UIs

After this dive into what fine-grained reactivity means, it’s time to take a step back and look at our websites and apps. Let’s analyze what it means for our daily work.

DX And UX

When the code we write is easier to reason about, we can focus on the things that really matter: the features we deliver to our users. Naturally, tools that require less work to operate also demand less maintenance and overhead from the craftsperson.

A system that is glitch-free and efficient by default gets out of the developer’s way when it’s time to build with it. It also encourages a closer connection to the platform through a thinner abstraction layer.

When it comes to developer experience, there is also something to be said about known territory. People are more productive within the mental models and paradigms they are used to. Naturally, solutions that have been around longer and have solved a larger number of challenges are easier to work with, but that is at odds with innovation. It was a cognitive exercise when JSX came around and replaced imperative, jQuery-style DOM updates. In the same way, a new paradigm for handling rendering will cause a similar discomfort until it becomes common.

Going Deeper

We will talk further about this in the next article, where we’ll look more closely at different implementations of signals (lazy, eager, hybrid), scheduled updates, interacting with the DOM, and debugging your own code!

Meanwhile, you can find me in the comments section below, on 𝕏 (Twitter), LinkedIn, BlueSky, or even YouTube. I’m always happy to chat, and if you tell me what you want to know, I’ll make sure to include it in the next article! See ya!
