Designing Ethically

Dan Saffer
Jun 24, 2019 · 7 min read

One of the biggest problems facing designers is how to create a more humane future. This has probably been the case for as long as there’s been a design profession. However, Design’s scale, influence, and reach have never been more profound, or more dangerous. Care has to be taken to keep people respected, dignified, and safe. Our sacred obligation as designers is to create a better world.

This is all broadly understood, if not always followed. Sure, some designers (and our colleagues in other disciplines) clearly don’t care about ethics and will do whatever they want, regardless of the consequences. There’s even an argument to be made that most companies are unethical unless it affects their bottom line. I tend to think that most people are generally moral and want to do what’s right, even if they don’t think of the decisions they have to make as ethics per se.

Often the problems people encounter aren’t simple good-vs.-bad problems, but morally ambiguous ones. This is definitely true for designers, since most of a product’s existential problems come to a head as the design reveals them.

Some ethical dilemmas that designers encounter are systemic, in that they are built into business models, organizational structures, policies, and operations. The ability of an individual designer to affect these systems will vary widely, from not at all to completely, depending on (at least) the size of the organization and the complexity of the system. Many systems are not easily unwound, because many subsequent decisions have likely been made on the assumption that they stay in place. Take changing a business model, for example. If your product is advertising-based, switching to a paid subscription model will have many consequences, internally and externally. You can’t just flip a switch and make it happen. And that power rarely resides with a lone designer, or even with the design department alone.

Additionally, many decisions made outside of the design process will affect the design: business and technical decisions, manufacturing choices, resource allocation, scheduling, policies…the list goes on. Although they may affect the design, these are not design decisions. They are what designers have traditionally called constraints and, despite now having “a seat at the table,” designers might not be able to change them, only work within them until they can be changed, which might be never. Every product ever made has had to deal with constraints. “Design depends largely on constraints,” said Charles Eames in 1972, before most of the designers working today were born. “Constraints of price, of size, of strength, of balance, of surface, of time, and so forth. Each problem has its own peculiar list.”

Some constraints or systems can be so odious as to prevent any just work from being done. In those situations, the only moral action someone in the system can take is to walk away, if they can. (Leaving a job can be a privileged stance that not everyone can take.) Sure, someone else may step in and continue the work, but at least it won’t be you. What you refuse to do, another may find no objection to. This doesn’t necessarily mean the other person is wrong, only that their values differ from yours. As an example, many designers refuse to work on military projects. I have worked on them, although I would refuse to work on systems, like targeting, that are specifically designed to kill people. Others would have no objection even to that. Your values may vary.

There are, however, issues that designers encounter that are not necessarily systemic, nor constraints, and that can be dealt with at the product level. Here’s a list of some of them, compiled with the help of others.

  • When shouldn’t a product be built?
  • When can you go against a user’s (known) wishes?
  • When do you side with the business over an individual? Can the company benefit more than the users?
  • How do you decide what is an edge case and what is essential?
  • How do you decide how much (if any) harm is acceptable?
  • How much user information is required? Is the amount of information being collected commensurate with the value users are getting? How much information about a user can you use without explicit permission? What happens when it (inevitably) is compromised? Are we providing users enough control over their privacy and data?
  • When should an existing ecosystem of products/services be disrupted? Will people lose their jobs? Is there an ok amount? Will it create new jobs? Who will benefit from those?
  • How damaging will it be when the product inevitably fails?
  • How can bad actors exploit this and how might we prevent that?
  • When do you use opt-out vs. opt-in?
  • By solving this problem, what other problems are we creating?
  • What should be automated? What decisions should be made for the user?
  • Who benefits from this? Who doesn’t? Does this benefit the individual or society? What if it benefits most people, but can be used by a small group to harm others?
  • Does this reinforce negative biases? Have we excluded voices from our process, and how would we know? How do we assess the impact this will have?
  • Could this cause self-harm and how can we prevent it?
  • What happens if this is adopted by a large portion of the population? What happens if this becomes essential for participation in important societal functions? What long-term societal behavior or impact might these new interactions create? Does this improve our relationships with one another?
  • Will this harm the planet? Will it create pollution, damage biospheres, displace or kill wild animals, or otherwise tax the planet’s life-force?
  • How long can a product be left in a state known to be dangerous before addressing it becomes a priority?
  • How many unknowns or open questions related to potentially dangerous outcomes must be thoroughly investigated before moving the work forward? When is an open question too substantial to be deferred until v2?
  • What injuries (mental, physical, emotional) could result from repetitive use of this design?
  • When is it ok to oversimplify an explanation to avoid confusion?
  • Is using a “dark” pattern truly dark if it pushes the user in a direction that benefits them (e.g., multi-factor authentication)?
  • What is the addictive potential of this design? When deploying affective influence (e.g., positive or negative reinforcement), where are the boundaries? When does this become abusive?
  • With multiple standards and so much variation in disability, how accessible is “accessible enough?”

How do we make decisions ethically? We’ve been debating that for around 2500 years now, since the time of Confucius and Socrates. Numerous frameworks and models have been developed to help people make ethical decisions, because it’s pretty easy for humans to rationalize any decision. Neuroscience tells us we usually make decisions first, then rationalize them. So the first step is often just to pause and realize you are making a decision that has consequences—an ethical decision in other words. If design had no consequences, there would be no need to think about ethics.

Once we have an awareness of our responsibility, actually making the decision involves some combination of personal, organizational, societal, and cultural values. What you think is unethical or immoral, someone else might find perfectly acceptable, and the same is true for groups. Different groups, be they organizations, governments, or cultures, have different values. Ethical decision making is about navigating values—between users, between users and companies, between companies and cultures.

How can we evaluate whether a decision was made ethically? By looking at its nature, the circumstances, and the motives. Some decisions are, by their nature, immoral. Randomly killing innocent people, to take an extreme example, is always unethical. Most other decisions are less clear. And while killing people is usually wrong, circumstances such as war, or motives such as preventing a larger tragedy, can supersede its immoral nature.

There is a tendency to attribute malice or subterfuge as motives behind decisions we don’t agree with, and that suspicion is certainly well-deserved when it comes to companies. But motives can also be well-meaning and the product can still cause unintentional harm. This does not mean the product was designed unethically. Almost any product can, and does, cause harm; even something as well-regarded as the bicycle has caused accidents and deaths.

Likewise, a flawed product does not necessarily mean the product was made unethically. Certainly, companies that knowingly put out dangerous or unsafe products are operating unethically. But some flaws aren’t discovered until after a product is launched. The response, and the speed of the response, to fixing or addressing these problems reveals the values the company holds.

We can also look to see if the decision violates what Richard Buchanan calls the Core Values of Design, which are:

  • Good: Affirming the proper place of human beings in the spiritual and natural order of the world.
  • Just: Supporting equitable and ethical relationships among human beings.
  • Useful: Supporting human beings in the accomplishment of their intentions.
  • Satisfying: Fulfilling the physical, psychological, and social needs of human beings.

If these values aren’t present, it can be a sign that somewhere, a decision was not made ethically.

Just because you don’t like a design decision doesn’t mean it’s unethical. Someone outside the decision-making process, not knowing or considering all the nuances and consequences, might mistake an ethical choice they don’t like for an unethical one, or might unwittingly push for a decision that is actually unethical. Of course, the opposite can also be true: an organization may think it is making a just decision, but because it hasn’t considered all the nuances and consequences, a seemingly ethical decision really isn’t.

Ultimately, ethics is about how we make decisions, and design is all about making decisions. So in a sense, designing is ethics: decision making in action, solidified into products. Every product is a set of decisions calcified. People then use those products to make their own decisions, for good or ill. How we design products matters, probably more than we know, and we know quite a lot. It changes the world, in ways subtle and profound. It’s a responsibility easily forgotten as we go about our daily work but it’s there nonetheless. Pray let us practice our craft wisely.

I’m greatly indebted to my design professor Richard Buchanan for the many ethics classes he taught us. His fingerprints are all over this essay.
