Written by
James Reith
Art Direction by
Manoel do Amaral
During a press conference on 8 November 2021, US transportation secretary Pete Buttigieg alluded to the story of Robert Moses and his racist overpass.
It goes a little something like this:
Moses, the so-called “master builder” of mid-20th century New York, was very concerned that whites-only spaces in America (like beaches) might eventually be opened to all people. When designing the Grand Central Parkway in the late 1920s — a road intended, in part, to give New Yorkers “an easy way to reach Jones Beach” — Moses purposely made the bridges over the parkway too low for buses to travel under. At the time of the parkway’s construction, Black Americans predominantly relied on public transport. This made it very difficult for them to access Jones Beach.
Source: Bloomberg
In response to Buttigieg’s remarks, “some right-leaning Twitter users immediately cried foul,” as The Washington Post put it, about the Moses story. Originating in Robert Caro’s Pulitzer-winning 1974 Moses biography The Power Broker, this supposed “myth” or “urban legend” has, apparently, since been debunked.
Some pundits, such as Fox News’ Tucker Carlson, took this a step further and responded with the broader, philosophical claim that “inanimate objects, like roads, can’t be racist.” As always, the general and the particular get jumbled: as if debunking this story also debunks structural racism as a concept. The Washington Post article shows that the Moses story hasn’t been debunked outright (it’s just tricky to prove). But sociologist Ruha Benjamin’s Race After Technology is full of examples of structural racism — from AI beauty pageant judges preferring lighter skin to predictive policing algorithms using racial profiling — alongside the Moses story. It is “a narrative tool,” in Benjamin’s words; one that has taken on an almost fabular role within design.
And if you’re thinking “James, this is getting a bit political,” you’re right. Because within design theory that’s exactly what the Moses story has come to illustrate: that designs have ethical and political properties independent of their designers; that designs “materialise morality,” as philosopher Peter-Paul Verbeek puts it.
Interstate 20 in the US state of Georgia was built to be racist, but failed. “In Atlanta, the intent to segregate was crystal clear,” writes historian Kevin M. Kruse. The highway was plotted through 1950s Atlanta as a “boundary between the white and negro communities,” according to then mayor William Hartsfield. It didn’t work. It just created the dreadful traffic jams Atlantans still suffer to this day. Racists, however, are rarely as candid as Hartsfield.
As they stand today, Moses’ bridges are low. Lower than other bridges built in the same era. Why they are low, however, is contested. Some Moses apologists argue that his bridges are low due to road widening and other modifications that were not part of his original design. Others acknowledge the low bridges but, instead, shift his motivation from racism to classism or claim his bridges were low for aesthetic purposes, so that they conformed to “a specific politics of nature.” These critiques, however, focus on Moses’ original intent and vision; it is a discourse based around him and not the effect of his designs.
We will never definitively know Moses’ motivations. As philosopher Kate Manne notes in Down Girl: The Logic of Misogyny, prejudicial attitudes are “very difficult to diagnose” and often “epistemically inaccessible” to other people. Demanding proof of what lies in the hearts and minds of others is a silencing strategy: an endless, unresolvable debate that looks like dialogue while protecting prejudice.
This isn’t to say that intention doesn’t matter at all: so much of modern design is about iterating until execution matches aim. Intending to produce a prejudicial product, such as Interstate 20, dramatically increases the likelihood of it existing. But these aims are the concern — or, rather, responsibility — of the designer. As a user, as a citizen, it doesn’t matter what you meant but what you made.
You are standing by a train track. Five people are tied to it and a train is approaching. There is a switch. You could pull it and divert the train. But a single person is tied to the diversion track. What do you do?
This is the infamous trolley problem from moral philosophy. It typifies how ethics is usually understood: as an answer to the question “what should I do?” But that ‘I’ is doing an awful lot of work in the trolley problem.
For a moment, think about this not as a moral but a design dilemma. Why was there no fail-safe mechanism? What kind of security system lets someone tie six people to diverging train tracks? This sounds like pedantry, and obviously goes against the spirit of the trolley problem. But this framing of ethics as individual choice has real influence on how we think about ethics in design (‘Should a self-driving car kill the baby or the grandma?’ is a very real article from the revered MIT Technology Review): a framing that ignores how the very creation of such a choice is itself a moral failure.
Moral dilemmas like the trolley problem ignore:
• choice architecture
• design histories
• the responsibility of designers
The choice architecture of the trolley problem is a binary choice between the deaths of five people and one. The choice architecture of Moses’ low overpasses is, in fact, a denial of choice in all but name: by making something so inaccessible that few would choose it, you can still blame people for their choices. Social scientist Langdon Winner says that Moses’ bridges embody “a systematic social inequality, a way of engineering relationships among people that, after a time, becomes just another part of the landscape.” That we do not recognise their influence on our choices is part of their power.
It’s a tidy story: racist man builds racist bridge. But most discriminatory design is unintended. The inaccessibility of much architecture and technology, for Winner, “arose more from long-standing neglect than from anyone’s active intention.” It’s doubtful that the Google engineers whose image-recognition AI mislabelled Black people intended it to be racist. But they still made it. They failed in their responsibility to prevent racism from occurring in their work. It is our duty as technologists to take responsibility for the values we inscribe in our designs; to make doing so a routine, boring part of the design process. Like closing Jira tickets.
Debating whether Moses meant his bridges to discriminate is an unhelpful distraction. But can a bridge be discriminatory? Absolutely.
The central thesis of The Power Broker is that Moses, an unelected city planner, wielded more power in New York than democratically elected officials. Our liberties and choices are increasingly mediated through technical systems which we, as designers, help create and maintain. This too is power: we need to wield it knowingly.
But that’s the problem: designs have ethical consequences, but ethics are not a part of the design process. At best, ethics are a responsibility shouldered by individual designers; at worst, they aren’t considered at all. And if designs can have unintended ethical consequences then we need to make ethics an intentional part of our process. Not just to avoid unintended harm, but also to do good. Design has the power to reinforce hierarchies, yes. But that also means, as Ruha Benjamin points out, it has the power to subvert them too.
In his article The Interaction Design Public Intellectual, design theorist and critic Cameron Tonkinwise notes a problem at the heart of design practice. Designers are “reflective practitioners” who critique the work of fellow designers, internalise that critical dialogue and attempt to predict design outcomes before they happen. But they also rely on evidence to validate their designs. A tension occurs, however, when designers do not apply that critical reflection to the very methods they use to validate their designs. “Without higher-level critique of overall directions,” says Tonkinwise, “the reflective practitioner is at risk of validating, through tight action research cycles, a response to a situation that works but is heedless of wider consequences.” As an example, he considers a wearable health device. Conventional UX design might produce a ‘delightful’ product that meets user needs, but never even considers “the ecological impact of the e-waste that all wearables become at the end of their use life.”
Or consider the Decolonise Design movement, which argues that design is not neutral: instead it masks Anglocentric thinking as neutral. This not only silences non-Western ways of thinking but, as design methods spread throughout the world, risks damaging non-Western cultures by importing Western ideas as universal truths; covert colonialism. This very article, with its deference to Western philosophy and the classical essay, risks doing the same. That isn’t to say that Western thought doesn’t have value. But it is a tradition, not the tradition, and working within it is a choice, not a necessity. One with its own benefits and limitations. Acknowledging the limitations of our methods opens us up to new possibilities.
Just because something is a certain way doesn’t mean it ought to be. There is a gap between the two. Philosophers call this the is-ought problem, and sometimes broaden it to the fact-value distinction. It means no decision is made on facts alone. That’s why, when you present the same evidence to two different people, they may make different decisions; they are interpreting evidence through their often unspoken values. In design, we love to make evidence-based decisions. But if the fact-value distinction holds, only half of our decision-making process is ever documented or articulated.
Value Sensitive Design is a complementary method to user-centred design. It makes values a conscious part of the design process and provides a host of techniques to help you do so, from value heuristics to mapping value tensions (like the common conflict between security and privacy). It has been criticised for being “descriptive rather than normative”: as design theorist Sasha Costanza-Chock says, “it urges designers to be intentional about encoding values in designed systems but does not propose a particular set of values.” I too thought of this as a major flaw; a paradoxical symptom of how neutrality is so fetishised within technology that even a design method based around values won’t tell you which values to use. But I also think this criticism harks back to a much older sense of how ethics operates, one at odds with both contemporary philosophy and design.
There is a tension between design and morality, at least in how they are traditionally understood. Design is process-oriented and evidence-based; morality is rule-oriented and value-based. By this logic, morality would simply operate as a set of absolute limits on design. But just as Tonkinwise argues that this simplistic characterisation of design is wrong, many modern thinkers would not recognise this characterisation of ethics. Maria Puig de la Bellacasa, for example, describes ethics as “a hands-on, ongoing process,” a “thick, impure, involvement in a world where the question of how to care needs to be posed.” And Jeroen van den Hoven acknowledges that traditional, rule-based morals haven’t made the world any better. Which is why he and other applied moral philosophers are turning to design: a field comfortable navigating the messy ambiguity that Puig de la Bellacasa describes.
Designer Cennydd Bowles claims “ethical issues are like design briefs,” in that there is rarely a single answer to an ethical question. I prefer to think of them as problem statements, for recognising something as a problem is itself an ethical act. (To dismantle the patriarchy, for example, you first need to see it as a problem.) Ethics then is, perhaps, less about finding the right principle and more about defining the right problems.
In articles like this, designers often want practical tips, models or methods they can apply to their own work. Unfortunately, ethics doesn’t quite work like that. “Ethics and politics,” as Jacques Derrida put it, “start with undecidability.” If moral problems could be solved by following a simple process, they would, by definition, not be moral problems. For Derrida, ethics is not a matter of knowledge but of responsibility. I hope in this article to have outlined ways of recognising the ethical responsibility we all hold as designers, and some methods for considering and articulating it.
As a subject, ethics is as necessary as it is infuriating. The philosopher Ludwig Wittgenstein once claimed that if anyone “could write a book on Ethics which really was a book on Ethics, this book would, with an explosion, destroy all other books in the world.” Critics have variously interpreted this as a comment on the futility of ethics, or on how belief in absolute moral rules can hinder other kinds of knowledge. But too little attention has been paid to the medium in the metaphor: writing. Prescribing moral rules may be a dead-end, but Wittgenstein never said anything about design.