Still Life with Algorithms

Thoughts on designing algorithmic systems

Dan Saffer
Aug 16, 2018 · 4 min read

One of the many challenges of living in the 21st century is not being able to trust your own instincts. Think back to the first time you used Google Maps or Waze to navigate to a place you know well. Likely you had this thought: That’s not the way I would go. Maybe, like I frequently did, you rolled your eyes and drove your usual way — often to your detriment. Now, for any trip over a mile, I always use a navigation system because it knows things that I don’t, like traffic and street closures. It’s analyzed the fastest route based on probably thousands of trips, instead of just my personal sample size of one.

However, I do reserve the right to tell Waze to fuck off. Because I also know things it (currently) doesn’t. I can see when a slow-moving truck is in front of me and I need to go a block out of my way to route around it. I can see two drunks fighting in the street, obstructing traffic. (Hey, I live in San Francisco.) The data the system is working with isn’t complete.

Our relationship with navigation systems increasingly mirrors our relationship with the entire world. And I think it tells us something about how to design algorithmic systems.

First: algorithms will do things (show routes, present content, make suggestions, etc.) that seemingly make no sense, that go against what we expect. This will annoy/anger/frighten/confuse people. They (we) won’t understand why this decision was made. So there needs to be some mechanism to show why something was chosen and why/how it is good. Some kind of proof. In navigation systems, this is the estimated time of arrival. You can drive there using your usual route to see if your time is better than the algorithm’s. You can test it — try before you buy. Another means of showing competence is showing alternatives, especially poor alternatives. These other routes (one of which might be your typical one) are worse than the one the algorithm selected because they’d get you there at a later time. (You can still choose to select them, which we’ll get to in a minute.)
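
To make that concrete, here is a minimal sketch of that “show your work” pattern. Everything in it (the Route type, the rankRoutes function, the numbers) is hypothetical, not any real navigation API; the point is only that each alternative is presented as a delta against the winning route, so the choice justifies itself.

```typescript
// Hypothetical types; not from any real navigation API.
interface Route {
  label: string;        // e.g. "via I-280" or "your usual route"
  etaMinutes: number;   // estimated time of arrival, minutes from now
}

interface RankedRoutes {
  best: Route;
  alternatives: { route: Route; minutesSlower: number }[];
}

// Rank candidate routes by ETA and express every alternative as a
// delta against the winner: the "proof" described above.
function rankRoutes(candidates: Route[]): RankedRoutes {
  const sorted = [...candidates].sort((a, b) => a.etaMinutes - b.etaMinutes);
  const [best, ...rest] = sorted;
  return {
    best,
    alternatives: rest.map((route) => ({
      route,
      minutesSlower: route.etaMinutes - best.etaMinutes,
    })),
  };
}

// The UI can now say "via I-280: 24 min" and, crucially,
// "your usual route: +6 min", so you can test the claim yourself.
const ranked = rankRoutes([
  { label: "your usual route", etaMinutes: 30 },
  { label: "via I-280", etaMinutes: 24 },
  { label: "surface streets", etaMinutes: 33 },
]);
console.log(ranked.best.label); // "via I-280"
```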

In general, avoid black-box decisions: no means of understanding the reasoning behind them, no options to select from, no way to influence the outcome by changing a variable, and no means of manually overriding them. Leaving people with no understanding and no control quickly leads to upset people.

Provide some means for humans to work alongside the algorithm. Until AIs are all-knowing (which is probably never), humans will want some measure of control. This could be to tell the algorithm that it guessed wrong about a suggestion, or to adjust the results based on new information, like a street closure blocking a particular route. Algorithms need the flexibility to work with humans when necessary. It might not mean total control, but it should mean options. We live in a world of imperfect algorithms responding to imperfect data, and probably always will. When the algorithm guesses wrong, we need to be able to tell it so, adjust it manually, have it respond, and have it remember the correction for the future.
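
Here is one way that loop might look in code, offered as a sketch only: the Feedback type and HumanInTheLoop class are invented for illustration, not any real product’s interface. The human can reject a guess, override it, or volunteer new information, and the system both responds immediately and remembers.

```typescript
// All names here (Feedback, HumanInTheLoop) are invented for this sketch.

// The three kinds of control described above.
type Feedback =
  | { kind: "wrong-guess"; itemId: string }                   // "you guessed wrong"
  | { kind: "override"; itemId: string; replacement: string } // manual override
  | { kind: "new-info"; note: string };                       // e.g. "street closed"

class HumanInTheLoop {
  // Remembered corrections, so the system doesn't repeat a bad guess.
  private rejected = new Set<string>();
  private overrides = new Map<string, string>();
  private notes: string[] = [];

  constructor(private algorithmicSuggestions: () => string[]) {}

  // The human corrects; the system responds now *and* remembers.
  tell(feedback: Feedback): void {
    switch (feedback.kind) {
      case "wrong-guess":
        this.rejected.add(feedback.itemId);
        break;
      case "override":
        this.overrides.set(feedback.itemId, feedback.replacement);
        break;
      case "new-info":
        this.notes.push(feedback.note); // a real system would re-plan here
        break;
    }
  }

  // The algorithm proposes; human corrections are applied on top.
  suggestions(): string[] {
    return this.algorithmicSuggestions()
      .filter((id) => !this.rejected.has(id))
      .map((id) => this.overrides.get(id) ?? id);
  }
}
```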

In the field I currently work in, social media, there’s a backlash against algorithmic content feeds. People complain about content out of (chronological) order. But what I suspect they’re really complaining about is not seeing the right content at the right time — seeing mostly stale or random or irrelevant content. The ideal content feed (of any kind) is one that shows you exactly the information you need at exactly the right time for your mood/schedule/context/etc. — an algorithmic timeline, in other words. Now, the algorithm could show you a chronological feed when it makes sense to, such as during a breaking news story or when you’ve exhausted all other content. It could even show a feed sorted in reverse chronological order if that makes sense for the story. It could show feeds sorted and filtered in numerous ways depending on your interests and your context. Whichever way gets you the best content.
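
A sketch of what that context-aware sorting could look like, with everything in it (Post, FeedContext, the heuristics) invented for illustration; a real ranking system would be vastly more involved:

```typescript
// Hypothetical types for a context-aware feed sorter.
interface Post {
  id: string;
  timestamp: number;      // Unix time, ms
  relevanceScore: number; // however the system scores "right content"
}

interface FeedContext {
  breakingNews: boolean;      // a major story is unfolding right now
  followingOneStory: boolean; // reading a single story from the start
}

function sortFeed(posts: Post[], ctx: FeedContext): Post[] {
  const copy = [...posts];
  if (ctx.followingOneStory) {
    // Oldest first: read the story in the order it happened.
    return copy.sort((a, b) => a.timestamp - b.timestamp);
  }
  if (ctx.breakingNews) {
    // Newest first: live updates as they arrive.
    return copy.sort((a, b) => b.timestamp - a.timestamp);
  }
  // Default: whatever the model thinks is best right now.
  return copy.sort((a, b) => b.relevanceScore - a.relevanceScore);
}
```

The design point is that chronology is just one strategy among several, chosen when it serves the reader rather than imposed everywhere.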

The algorithms that surround us shape our lives and culture in ways we’re just starting to come to terms with. What we see, how we drive, how financial, health, legal, and all manner of decisions are made can affect…well, everything. It’s the algorithms we don’t see, don’t understand, and can’t control that are the most powerful, most dangerous, and most ethically challenged. Showing how and why decisions are made, providing options to verify and demonstrate the wisdom of those decisions, and allowing people to manually override them are ways to keep humanity designed into our future.

Originally published at The Pastry Box Project. Special thanks to Lisa Ding and Sean Thompson, who substantially contributed to these ideas.
