Controls are Choices

Dan Saffer
Nov 16, 2016


Every button, slider, switch, knob, or dial represents a choice. Many choices, actually: the choice to have it, the kind of control it is, and where to place it.

Controls and Complexity

A control, in its most basic form, is the visible manifestation of a piece of functionality. It gives a cue to the user: you can do something here.

The physical characteristics of a control, what interaction designers call affordances, dictate what can be done with it. A button affords pressing, for instance. Thus, the choice of control is important. Sliders, for example, are best used for granular control along a continuum, where there are known thresholds at the top and bottom. Switches are good for clear states, e.g. on/off. And so on.

Every decision to add a control, even a simple button, increases the complexity of a device. After all, in this digital age, devices can do almost anything for a user, from adjusting volume to taking pictures to controlling satellites in space. Control of the device can be performed entirely by the device itself; it can be completely automatic. In some cases, pacemakers, say, you want this. You don’t want to have to tell your heart to beat! The fewer controls there are, the simpler the device is for a user (although probably not for those involved in making the device).

Of course, by reducing controls (and thus reducing complexity for the user), you also reduce control over the device. Users can do far less with it, and have fewer options for customization. Again, sometimes this is desirable. But sometimes, it is a disaster. Reducing complexity means reducing control, and some users, particularly those whose skill goes beyond that of amateur/beginner, don’t just want control, they need it to perform their tasks effectively. Thus, it becomes a balancing act, with simplicity and automation on one side, and complexity and control on the other.

How you determine which side of the continuum to fall on depends on a number of factors:

The kind of device it is. Is it a mass-market mobile phone, or an expensive controller for a custom-built home movie theater?

The kinds of activities the device engenders. Is it a kiosk that will be used sporadically, or a wearable that will be used several times a day? Is it a simple activity like listening to music, or a complex one like playing a first-person shooter?

Who the target users are. Is the device for a broad, mass-market audience, or for a group of specialists like heart surgeons?

The positioning of the device in the marketplace. Is the device a tool for power users, or a mass-market consumer device for novices?

The emotional feeling you wish the device to convey. Is it something powerful and important, or simple and elegant?

No matter which side you choose, complexity just doesn’t go away. Larry Tesler, creator of such enduring interaction design paradigms as cut-and-paste, noted this in Tesler’s Law of the Conservation of Complexity: all processes have a core of complexity that cannot be designed away. The only question is who handles it: the system or the user. By creating a control, the designer is making a choice, saying to the user in effect, “You handle this.” But if the system has to handle the complexity, all kinds of decisions have to be made for the user: your defaults have to be smart, and god help you if you get them wrong.

Take, for instance, a digital camera in which all controls except two have been removed: an on/off switch and a button to take a picture. This means the system either has to handle focusing, picture management, exporting, flash, zooming, etc., or else not offer the feature at all. If some of these features are crucial to the success of the camera, the resulting complexity, which might otherwise be handled by the user, has to be built into the hardware and software. Bad choices (i.e. poor defaults) here will ruin the device. Pictures will be overexposed or blurry, users will be frustrated at not being able to zoom, and so on.
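To make that concrete, here is a minimal TypeScript sketch of such a two-control camera. Everything in it (the CameraDefaults shape, the takePicture function) is invented for illustration; the point is that every control removed from the user becomes a decision the system must make on the user’s behalf.

```typescript
// Hypothetical two-control camera: the only user-facing inputs are a
// power switch and a shutter button. Every other setting is a default
// the system must choose, because no control exposes it to the user.
interface CameraDefaults {
  flash: "auto" | "on" | "off"; // no flash toggle exists; the system decides
  focus: "auto";                // no manual focus ring exists
  zoom: number;                 // fixed, since no zoom control was provided
}

const defaults: CameraDefaults = {
  flash: "auto",
  focus: "auto",
  zoom: 1.0, // a poor choice here frustrates every user, every time
};

function takePicture(lowLight: boolean): string {
  // The complexity hasn't vanished (Tesler's Law); it now lives in
  // logic like this instead of in a control the user operates.
  const flashFires =
    defaults.flash === "auto" ? lowLight : defaults.flash === "on";
  return `flash=${flashFires ? "fired" : "off"}, focus=${defaults.focus}, zoom=${defaults.zoom}x`;
}

console.log(takePicture(true)); // the system, not the user, decided the flash fires
```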

Where the Controls Are

It used to be that controls for an object were on the object. To operate a printing press, for instance, you stood next to it and pulled the lever that opened and shut the press.

Later, starting around the 1890s with the introduction of electricity, controls could move away from the object itself to other parts of the room. Everything from lighting to the operation of machinery could be controlled away from the object being affected. Other mechanical processes, such as shutting off the water to a building, could likewise be handled on-site or nearby.

The telegraph, and after it the telephone, radio, and television, allowed people to affect objects from afar, but the controls remained on the object itself (to tune in and adjust the signal).

Networked devices connected via Ethernet extended controls to objects in other rooms, and the internet expanded this range to include the entire wired world. You can now fly robotic drones on the other side of the globe. The controls are nowhere near what is being controlled.

This presents a bit of a problem for designers. In “traditional” interaction design, the rule is straightforward: if you can’t directly control the object itself (via physical controls or touchscreen gestures), you put the controls as close as you can to whatever it is you’re manipulating. The reason for this is simple: feedback. When you turn a dial, you can watch or hear something being affected; when you click an icon to make a word bold, you can see it turn bold. But when I set my DVR with a mobile app, how do I know the DVR itself is executing the command? Sure, I can get feedback from the app, but some trust is definitely involved. If I get home and Oprah hasn’t recorded, what do I blame: the device, the connection, the app?

The farther controls move from the object itself, the more distinct the emotional impact. Do I want to cook on my stove from a control panel in another room? Perhaps there are use cases where that makes sense, but it would certainly change the nature of working with a stove and how users feel about their interactions. Of course, moving controls away from the device isn’t always a negative: the introduction of the remote control in the 1950s only brought users more power and pleasure from their television sets.

So another choice for designers, then, is not just which controls to have, but where to put them: on, near, or remote. And the answer should come from context: how important and immediate is the feedback (especially multi-sense feedback) to the task, and, relatedly, will using the device from afar feel empowering or dehumanizing?

Once a control is in place, the final choice belongs to the user: the decision to use that control. Designers need to help users make that decision by designing the controls and their surrounding environment well. The best controls have three characteristics: an affordance to let the user know the control is there; an indication (often an icon or a label) of what the control controls; and feedback to let the user know the control has been used. If any of these characteristics are missing or done poorly (such as an unintelligible icon), the control will lead to doubt. Helping the user understand what will happen when a control is used (so-called “feedforward”) is essential to building trust in the device.
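As a rough illustration, here is a small TypeScript/DOM sketch of a button that attempts all three characteristics plus feedforward. The label, tooltip copy, and behavior are invented for the example.

```typescript
// Hypothetical "Sync" button showing the three characteristics of a
// good control, plus feedforward.
const button = document.createElement("button"); // affordance: a real,
// pressable button element, not an unmarked region of the screen

button.textContent = "Sync photos"; // indication: what it controls

// Feedforward: tell the user what will happen before they commit.
button.title = "Uploads new photos on this device to your library";

button.addEventListener("click", () => {
  // Feedback: visibly confirm the control has been used.
  button.disabled = true;
  button.textContent = "Syncing…";
  // ...start the actual sync, then restore the label when it finishes.
});

document.body.appendChild(button);
```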

Predictability is also important in building trust. Users want to know that when a control is operated, it will do the same thing each time. The system has to have an inner logic that users understand, even if they cannot articulate it. This is why, unless it undoes an action (e.g. an on/off switch), it’s a good idea to have a control govern only one function. Having the same button do different things can be confusing, which is why soft-keys (where an onscreen label indicates a change in the functionality of a fixed physical button) are problematic.
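A hedged sketch of the soft-key problem, again in TypeScript with invented names: the same physical button dispatches to different functions depending on what is on screen, so the user never forms a stable model of what it does.

```typescript
// A fixed physical soft-key whose meaning changes with the on-screen
// label: pressing "the same" control does different things in
// different modes.
type Screen = "home" | "playback";

function onSoftKeyPress(screen: Screen): string {
  switch (screen) {
    case "home":
      return "open menu";    // here the button means "Menu"...
    case "playback":
      return "delete photo"; // ...and here it means "Delete"
  }
}

// A dedicated control avoids the ambiguity: one button, one function.
function onMenuButtonPress(): string {
  return "open menu"; // the same thing, every time, in every mode
}
```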

On-screen controls (and increasingly physical ones as well) can be disabled or hidden when using them would be dangerous (see: the Poka-Yoke Principle) or ineffective. Hidden controls are often extremely problematic: users become accustomed to objects (even digital objects) being in the same place, and removing them is cognitively jarring. Disabling is usually a much better choice, particularly if, via a tooltip or other means, the user can determine why the control is disabled and how to engage it again.
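Here is a minimal sketch of the disable-rather-than-hide pattern, with invented names and copy: the control stays where users expect it, and a tooltip explains why it is unavailable and how to re-enable it.

```typescript
// A "Print" button that is disabled, not hidden, when no printer is
// connected: it stays in its expected place, and the tooltip explains
// why it is unavailable and how to engage it again.
function updatePrintButton(
  printButton: HTMLButtonElement,
  printerConnected: boolean
): void {
  printButton.disabled = !printerConnected;
  printButton.title = printerConnected
    ? "Print this document"
    : "No printer connected. Connect a printer to enable printing.";
}
```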

Positioning and prominence are other important cues for users. A giant green button with a label reading “PUSH ME!” won’t be ignored, while a small switch in a bank of them will need to be scanned for. The control that will be used most often or that is most important (the so-called Hero Control) should get visual and/or spatial prominence. It should be clear what the most important (or at least the most drastic) action is. A Hang Up button, although used infrequently (once per call, to be exact), should be emphasized.

It can be good practice to cluster similar or related controls into zones on the interface, e.g. controls for printing on the right, controls for exporting on the left. Avoid putting “ejector seat” controls (those that cancel or undo an action) next to positive controls. Although done frequently, placing OK next to Cancel is not good practice.
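One possible sketch of that spatial grouping, with invented class names: related controls share a zone, and the ejector seat sits apart from the positive action.

```typescript
// Cluster related controls into zones, and keep the "ejector seat"
// (Cancel) away from the positive action (Save).
function buildDialogFooter(): HTMLElement {
  const footer = document.createElement("footer");

  const cancelZone = document.createElement("div");
  cancelZone.className = "zone-cancel"; // invented class names
  const cancel = document.createElement("button");
  cancel.textContent = "Cancel";
  cancelZone.appendChild(cancel);

  const actionZone = document.createElement("div");
  actionZone.className = "zone-actions";
  const save = document.createElement("button");
  save.textContent = "Save";
  actionZone.appendChild(save);

  // Spatial separation: the two zones sit at opposite ends of the
  // footer (via CSS), so a slip of the hand can't hit Cancel.
  footer.append(cancelZone, actionZone);
  return footer;
}
```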

Arbitrary choices are anathema to good design. The best technology, the coolest features, can be ruined by poor choices, including your controls. Choose wisely.

This article originally appeared on the (now-defunct) Designing Devices website and in the Designing Devices book.
