Designing Simple Interfaces

My last post presented the merits of simple and portable interfaces. The main takeaway was that simplicity and portability allow artists to focus on what they are making instead of how, where the net effect is that they produce a larger volume of creative work while feeling more confident in the final product.

While simplicity and portability are clearly good attributes for an interface to have, they can be challenging to achieve in practice. This post proposes a few principles for approaching the problem, and introduces a (hopefully novel) method I call “flipped parameter selection.” Throughout the post, I’ll try to view the problem through the lens of math, using concepts like model interpretability, dimensionality reduction, and parameter spaces.

Model Interpretability

…just by judiciously selecting parameters and their corresponding names, the user gets a free lesson in how the model works, taking their mental model one step closer to interpretability.

Model. A model is a representation of a real-world system. Models are useful because they enable us to simulate how the real system would respond to different inputs without having to actually interact with the real system.

An example model is Ohm’s Law, V = IR, which is a representation of a real-world system involving voltage, current, and resistance. The real-world system is a set of atoms, including all their subatomic and quantum phenomena, interacting in an immeasurably complex manner that we don’t fully understand. Ohm’s Law is only a model of this system, and one that greatly simplifies it (sometimes resulting in inaccuracies!), but it still allows us to study how the real-world system might behave with different combinations of the input variables.
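To make the idea concrete, here is a minimal sketch of Ohm’s Law as a model: a plain function that lets us probe the system without touching a real circuit. (The function name is mine, not a standard one.)

```python
# A model is just a function from inputs to a predicted output.
# Ohm's law, V = I * R, models voltage from current and resistance.
def ohms_law(current_amps: float, resistance_ohms: float) -> float:
    """Predict voltage (volts) without interacting with a real circuit."""
    return current_amps * resistance_ohms

# Explore the system under different inputs, purely inside the model:
print(ohms_law(2.0, 10.0))  # 20.0 volts
```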

Another example of a model is a reverb effect. The real-world system requires a hallway or auditorium, and is the result of countless interacting pressure waves. The model of reverb is a simple representation governed by a compact set of parameters, e.g., room size, damping, and more, depending on the maker of the reverb effect. Though simple, modern reverb models are increasingly adept at simulating how sound would behave in the real-world system.

When users interact with interfaces, such as a reverb pedal, they learn a “mental model” of how the system works. Mastering an interface essentially means that the mental model is accurate and reliable. But just because someone has developed a strong intuition for how a set of inputs will result in an output doesn’t mean they can explain how the model goes about producing the output. Just as it’s easy to predict how reverb parameters will result in a final sound but difficult to describe how the underlying reverb process actually works, mental models can be both robust and tacit.

Interpretability. In math, model interpretability refers to the degree to which we can explain how a model works. In addition to understanding how inputs lead to outputs, for a model to be interpretable we must be able to explain why.

Using this concept, we can redefine a simple interface as one that leads towards an interpretable mental model of the underlying system. But how can designers approach such an objective? Obviously we can’t teach physics to a casual synthesizer player, nor should we try. Here, the goal of interpretability is only a north star that leads us towards a better interface.

Even if we shouldn’t try to explain the groundwork required to understand the model, e.g., the math and physics behind how reverb actually works in gory detail, we can still offer some conceptual metaphors that the user can latch onto. These metaphors are most effective when they are rooted in something the user already understands, e.g., room size, which hints to the user that an echo is involved: the user already knows that the bigger the room, the bigger the echo. Thus, just by judiciously selecting parameters and their corresponding names, the user gets a free lesson in how the model works, taking their mental model one step closer to interpretability.

Parameter Selection

…what makes this approach interesting is that it’s data-driven, which means we can benefit from powerful mathematical tools to solve the problem

I’ll now discuss approaches for choosing parameters and introduce a (hopefully novel) method called “flipped” parameter selection.

Classic Approach. The traditional form of the problem is formulated as follows: given a best guess at a set of parameters that explain the model, find a set of examples that illustrate those parameters. The process is:

  1. Choose, via intuition, parameters that explore the full range of the model (e.g., foo, bar, baz, qux)

  2. Curate a set of examples, again by intuition, to show how each set of parameters produces a unique and interesting result (model interpretability)

Advantages. This may be optimal in some cases. After all, part of the joy of experiencing design is appreciating someone else’s good taste. It can be fun to use a distortion pedal with a knob named “capt’n crunch,” but that playfulness is better suited to fun products. For more clinical designs, e.g., photography apps, microwaves, and medical devices, the names should improve the user’s mental model so that users can make more accurate and intentional decisions.

Criticism. The central problem with this approach is that the starting guess is subjective, so there’s no guarantee the parameters will help the user understand why the model works. Since the designer is using their own intuitions to choose the parameters, i.e., knobs with whatever names they like, they are imposing their worldview onto the user. Going deeper, this approach is susceptible to confirmation bias, an idea from cognitive psychology where examples are cherry-picked and interpreted (at risk of distorting the evidence) to support, or confirm, an a priori worldview. This bias is problematic because subsequent decisions are then rooted in beliefs instead of reality.

“Flipped” Approach. Here is my (again, hopefully novel) contribution to the conversation. I’d like to present a principled, objective approach. This is not meant to replace the designer’s intuitions, but instead be rolled into the designer’s toolkit.

Instead of starting with parameters and choosing examples, start with examples and then work your way to the final parameters. I call this the “flipped” approach, and it’s formulated as follows: given a best guess at a set of input-output examples that explain the model, find the optimal set of parameters to illustrate the examples. As a process:

  1. Curate a selection of examples, by intuition or by dimensionality reduction, that efficiently explore the model (e.g., big rooms and small rooms)

  2. Choose the parameters directly from the reduced dimensions of those curated examples (e.g., since “room size” is the main dimension of the examples, it becomes the parameter)
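As a toy illustration of the flipped workflow, the sketch below curates a few examples first and only then asks which dimension should become the knob. Ranking raw dimensions by variance is a crude stand-in for real dimensionality reduction (which would find combinations of dimensions), and every parameter name and number here is invented:

```python
from statistics import variance

# Step 1: curate input-output examples that explore the model.
# Each example is a dict of hypothetical low-level reverb settings.
examples = [
    {"early_reflections": 0.2, "tail_seconds": 0.5, "pre_delay": 0.01},
    {"early_reflections": 0.5, "tail_seconds": 2.0, "pre_delay": 0.02},
    {"early_reflections": 0.9, "tail_seconds": 6.0, "pre_delay": 0.03},
]

# Step 2: find which dimension varies most across the curated examples.
# (A crude stand-in for PCA: real dimensionality reduction would also
# handle scale and correlations between dimensions.)
def dominant_dimension(examples):
    dims = examples[0].keys()
    return max(dims, key=lambda d: variance([e[d] for e in examples]))

# "tail_seconds" dominates, suggesting a "room size" knob and label.
print(dominant_dimension(examples))
```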

Advantages. In this approach, initial choices are grounded in evidence, which is the exact opposite of searching for evidence to support initial choices (confirmation bias). Thus, the flipped parameter selection approach forces the designer to stay rooted in reality (the core concepts of how the model actually works) instead of being biased towards their own mental model. The main benefit is that accurate parameter names will lead to accurate mental models.

There are additional advantages. It’s much easier to find a parameter name to follow room size examples than it is to find examples that follow whatever foo, bar, baz, and qux are. In turn, the designer can feel more confident in the result.

What makes this approach interesting is that it’s data-driven, which means we can benefit from powerful mathematical tools to solve the problem. In particular, we can use “dimensionality reduction” methods, which are explained in the next section.

Criticism. There’s still no guarantee that the outcome will help the user understand the model. Dimensionality reduction can be difficult to run in practice, and it can produce hard-to-interpret dimensions. When it works, which may take considerable effort, the outcome may be better than the designer’s best guess. So in the end, this approach only transports the designer’s decision-making risk into the model’s performance risk. However, because the approach is more objective, you can standardize a process, build tools, share them with the community, and refine them based on feedback, which gives you a shot at whittling down the performance risk over time.

Low-Dimensional Models

…you know you’re doing a good job when you have 1-4 parameters that seem to cover all the possibilities.

This section is devoted to the following problem: How do we summarize a large set of examples into three or four overarching themes? For summarizing text, we’d obviously ask an LLM, but for summarizing numerical examples, we’d need other methods.

Let’s start with a motivating example. Suppose you were tasked with redesigning a synthesizer with 25 knobs (see the Ableton example below). Recall that a knob (an interface in the physical world) adjusts the level of a parameter (an input variable in the model). Instead of trying to keep all the knobs by reorganizing them into logical groups, you could instead ask what “net effect” each group represents, and then try to reduce the group to a single knob that adjusts the level of that entire effect. To toss an example out, say 5 of the 25 knobs were related to properties of reverb. You could obviously combine these into one general “reverb” knob. However, there is a challenge: you still need to decide how best to map the 5 lower-level parameters onto each value of the new general parameter.
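One simple strategy for that mapping challenge, sketched below under stated assumptions, is to hand-tune two endpoint presets and let a single macro knob interpolate between them. This is only one common-sense option, not the definitive method, and all names and values are hypothetical:

```python
# Hypothetical endpoint presets for five low-level reverb parameters.
DRY = {"size": 0.1, "decay": 0.2, "damping": 0.8, "pre_delay": 0.0, "mix": 0.1}
WET = {"size": 0.9, "decay": 0.9, "damping": 0.2, "pre_delay": 0.4, "mix": 0.7}

def macro_reverb(t: float) -> dict:
    """Map one macro value t in [0, 1] onto all five low-level parameters
    by linear interpolation between the two presets."""
    return {k: DRY[k] + t * (WET[k] - DRY[k]) for k in DRY}

print(macro_reverb(0.5))  # halfway between the two presets
```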

In math, this is called a “dimensionality reduction problem.” The goal of the dimensionality reduction problem is to find a lower-dimensional subspace (a few macro parameters) that captures the most important variations in the full input space (all the low-level parameters). The output is known as a low-dimensional representation of the input.
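For readers who want to see the machinery, here is a minimal, dependency-free sketch of finding the first principal component (the single direction of greatest variance) via power iteration. It is a simplified stand-in for a production PCA implementation, which would handle degenerate cases and multiple components:

```python
# Minimal PCA sketch: rows are examples, columns are low-level
# parameters. The first principal component is the single "macro"
# direction that captures the most variance, i.e., a candidate for
# one summarizing knob.
def first_principal_component(data, iters=200):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix of the centered data.
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d  # power iteration converges to the top eigenvector
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Synthetic data that varies along the direction (1, 2):
data = [[t, 2.0 * t] for t in range(10)]
print(first_principal_component(data))  # ≈ [0.447, 0.894], i.e. (1, 2)
```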

Evaluation. Before explaining how to actually perform dimensionality reduction, e.g., using PCA or autoencoders, it makes sense to offer a few thoughts on how to evaluate a low-dimensional representation. For interface design, a good low-dimensional representation should satisfy two key criteria:

  1. Compactness: The number of parameters should be minimized without significant loss of expressiveness.

  2. Orthogonality: Each parameter should affect the output in a distinct way.

Empirically, one can also argue that the selection of parameters has achieved its intended purpose if:

  1. Users spend more time exploring and experimenting.

  2. Users report that their design feels more intentional, i.e., driven by a pre-existing goal rather than a desire to stop tuning parameters and move on.

In other words, you know you’re doing a good job when you have 1-4 parameters that seem to cover all the possibilities, and users are satisfied with both the process and the product.

Connections. Okay. These are some big ideas, and there are a few connections I’d like to draw before diving into some examples.

Portability. The motivation for compactness was outlined in the previous post: fewer parameters means that you can make physically smaller, portable interfaces.

Flipped Parameter Selection. Upon inspection of the compact set of parameters, you might find a theme, e.g., one parameter might seem to correspond to room size and another to crisp vs. washed-out sound. Using the “flipped” approach, you’d come up with names that describe what you’re observing, and those names are what you’d print as labels above each knob.

Human Computer Interaction. Parameters and their labels can be classified as “affordances” and “signifiers,” which help users build a conceptual model (see Norman, The Design of Everyday Things). Fewer knobs reduce the “articulatory distance” and form a more direct user interface (see Hutchins, Hollan, & Norman, 1985, Direct Manipulation Interfaces). More interpretable parameters also prioritize the match “between the system and the real world” (see Nielsen, 1994, 10 Usability Heuristics). More generally, simple interfaces reduce cognitive load on the user.

Numerical Optimization. In classic artificial intelligence literature, depth-first search is not guaranteed to converge to an optimal solution, or to terminate at all, whereas breadth-first search eventually will, though its search time explodes exponentially with depth (see Russell & Norvig, Artificial Intelligence: A Modern Approach). By reducing the parameter space, you’re essentially allowing users to perform a breadth-first grid search over the entire input space in a short amount of time. In other words, fewer parameters help users avoid getting stuck in local minima or lost in a depth-first search.
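The combinatorics behind that claim are easy to check: sweeping k levels on each of d knobs costs k**d trials, so shrinking d is what makes breadth-first exploration feasible. A quick sketch (the knob counts echo the 25-knob synthesizer example above):

```python
from itertools import product

# k = 3 coarse levels per knob.
coarse = [0.0, 0.5, 1.0]

# With 4 knobs, the full grid is small enough to audition by ear.
settings_4_knobs = list(product(coarse, repeat=4))
print(len(settings_4_knobs))  # 81 combinations

# With 25 knobs, the same sweep is hopeless by hand.
print(len(coarse) ** 25)      # 847288609443 combinations
```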

Examples

“When you hear something you like, have a look around. You might find something even better close by.” -XLN Audio

OP-1 by Teenage Engineering. Teenage Engineering’s decision to limit each instrument to four parameters is a great example: each instrument has only four knobs. Unlike the sea of parameters on a classic synth, learning four parameters is a tractable problem that can be solved quickly.

This is the famed OP-1 from Teenage Engineering. The four colored knobs in the upper right adjust the parameters of each instrument. Importantly, there are only four knobs per instrument. (Shot on my kitchen counter with an iPhone.)

Aside. The OP-1 and OP-Z are what sparked this series of posts. It went something like this: I started using the OP-1. I couldn't put it down. I stopped playing my other instruments. I started looking at my own products. I started asking why the OP-1 was so good and how I could make my own products better. I had a Cambrian explosion of epiphanies. I started clustering those epiphanies and distilling them. In a desire to unpack what I learned, I started feverishly writing.

XO by XLN Audio. To reduce the space of drum sounds down to 2 dimensions, XO developed a visualization-based interface. Svante Stadler, the engineer who designed it, writes on his blog that he developed a modified t-SNE to embed the high-dimensional sound space into an easy-to-navigate 2D graph. The net effect is that the embedding becomes the interface itself.

I love this quote from their website: “When you hear something you like, have a look around. You might find something even better close by.”

Learning Synths by Ableton. Ableton did an excellent job explaining how macro-parameters work. To quote: “this box lets you move many knobs at the same time, which allows for very complex changes with a simple control.”

The macro-parameter demo. (Screenshot from Ableton website.)

The full synth demo. (Screenshot from Ableton website.)

To be continued…

If you liked this, drop me a line. I’d love to hear from you.
