Trauma and technology
Practising trauma-informed design
It was 2022 and I was in a Zoom call. The host, now a friend and a teacher, was a trauma survivor. When it was time to end the call, she shared with us that the red “end” button was activating (sometimes called “triggering”) for her. It made her feel like finishing the call meant she was doing something bad.
At that time, I was working as a mobile and front-end developer. In my free time I was learning about trauma. So my friend’s observation instantly caught my attention. I found myself asking: is there an intersection between trauma and technology? And is there such a thing as trauma-informed (UI/UX) design?
Not long after the call, I searched the Internet for “trauma-informed design.” There weren’t many results back then, but there were some. And those first videos and articles marked the beginning of my journey towards becoming a trauma-informed designer (and developer).
It’s been a while since that time. I joined calls. I read articles, papers, and books. I spoke with amazing people. I wrote a 20-week blog series on the topic, and I took on more software projects related to mental health and trauma. There are a lot of things I learned and a lot of things I had to unlearn. I made mistakes, and I very often still do.
This is the nature of this work. Being a trauma-informed designer isn’t something we become. It isn’t something we can “finish” after taking a course, completing a certification, or reading a book.
It’s a practice.
A practice of leaning towards the people who’ve been harmed and listening to what they need.
A practice of working with our own hurt and of grappling with our power and ability to cause harm.
And, ultimately, a practice of healing and connection.
I invite you to read this chapter with that in mind. The chapter includes principles and patterns that I and others have found useful when designing with and for trauma survivors. But they are not universal truths, nor are they the only ones. Not everything will resonate, and that’s okay.
They are also not a checklist. Dr. Carol Scott, when talking about trauma-informed principles, often tells people that they can think of them as ingredients in a recipe. I intentionally don’t go over the trauma-informed principles in this chapter, but I use the same metaphor. Not every ingredient is used in the same way in every dish. Figuring out how to use each is also a practice. Hopefully, this chapter will help you get started with that.
A note on the words I use
Throughout this chapter I use the word “technology.”
Normally that word refers to any kind of technology, from a computer to a dishwasher, and from a printing press to a steam engine. That’s not how I’ll use it here. Whenever I use the word “technology,” I am referring to information technology – that is, anything that has to do with computers.
I am not touching on all information technology, but I’m exploring the parts of it that most people directly interact with. This excludes things like servers, network switches, and so on. Unless a distinction is made, whenever you see the word “technology,” you can think of your laptop or your phone and of the software that runs on it.
When I use the word “tech,” I'm generally referring to the companies that work in information technology.
The word “Internet” written with a capital “I” refers to the World Wide Web. When written with a lowercase “i” (internet), it refers to any network of interconnected devices. In this chapter I intentionally use the word with a capital “I.”
A note on my positionality
When reading any material, I believe it’s important to stay aware of the positionality of the author. I am a white, cishet, currently able-bodied man who lives in Greece (Europe). I hold a master’s degree in computer engineering and work as a developer. I also identify as a trauma survivor.
Most of these identities come with power. I’ve tried to stay mindful of that as I was writing this chapter. I don’t know if I’ve always succeeded.
A note on why this matters for content designers
Content designers aren’t often developers, so it may seem like there is limited value in reading a chapter about technology. But many elements of software design rely on working with content designers.
The wording of error messages and buttons, for example, is something that should be designed together. Adding components and choosing colours and images are often better done in partnership.
My aim in this chapter is to show you where you might influence the creation of a software product, and the questions and issues you might want to raise with your developers to create more trauma-informed experiences.
Software: interacting with technology
When considering trauma, we need to acknowledge that in every experience there is room for harm, activation, and re-traumatisation. That’s a hard reality to work with. At the same time, in every experience there is room for connection, restoration, and healing. I find hope in knowing that.
Software is what first piqued my interest in trauma-informed design. That’s partly because I design and build software. But it’s also because software is how we usually engage with technology. Software allows for experiences to happen, which means that software can both cause harm and support healing. Intentionality in how software is designed can make all the difference, and it’s at the heart of a trauma-informed practice.
Here are some of the ways in which we can be intentional when designing software.
Reducing cognitive load
Cognitive load refers to the load placed on working memory by a range of cognitive processes. In short, it’s how much “thinking” we need to do. When designing trauma-informed software, it’s important to be extra attentive to it.
Creating software that isn’t cognitively taxing isn’t new. It has always been part of the role of design. But when working with trauma survivors, being brief and clear becomes even more important. Trauma can impair our cognitive abilities, which means that we might have a hard time concentrating, focusing, or making decisions. Emotional activations are also all too common with trauma. When we are activated, the thinking part of our brain shuts down. This can make engaging with information harder.
Some design patterns that can increase cognitive load (and should be avoided) are:
- large blocks of text or long sentences,
- long videos or audio files,
- large collections of items (for example, in a navigation bar),
- asking people to make multiple decisions at the same time,
- long onboarding flows,
- asking people to remember something that’s not right in front of them.
Consider breaking the information down into smaller chunks or reducing the available options to only what’s truly important. Research can be very helpful in this process.
Giving back agency
My friend, who is a sexual assault survivor advocate, told me how she would meet survivors at the hospital right after an assault. She would bring with her different kinds of food and ask them what they would like to have. In that way, she would give them a choice. Trauma strips us of our agency. Having a choice again, no matter how small, can feel empowering.
Our design can also equip survivors with choices, supporting them in recovering a sense of agency. Here are some examples of choice we can include:
- dark or light theme,
- ways of signing up or logging in (email, social sign-in, one-time password, biometric authentication),
- levels of security (enabling or disabling encryption or multi-factor authentication),
- ways to view content,
- ways to ask for help (customer support chats, chats with an AI agent, phone calls),
- the number of notifications,
- the amount of personal information shared publicly,
- the amount and types of data that are collected.
However, it’s important to remember that too many choices can be overwhelming and tiring. Because of that, I like to distinguish between meaningful choices and burdensome choices.
A meaningful choice strengthens a survivor’s agency by letting them decide about something they care about. The choice might be simple and optional, offering a survivor a sense of control without significantly increasing their associated cognitive load.
A burdensome choice is one that a survivor doesn't want to deal with. It’s potentially too unimportant for them, it's asked of them too often, or it requires a lot of thinking. Sometimes it might refer to a situation in which they simply don’t know what’s best (for example, a very technical configuration option). Instead of helping them feel in control, it increases the cognitive load associated with using the design, making it harder to use. It can also cause stress.
The separation between meaningful and burdensome choices isn’t always clear-cut or objective. What’s important for me might be burdensome for you, or the other way around. In order to work with the subjectivity of this matter, you can:
- conduct research,
- gather data on how often people engage with each choice and consider adapting the ones that are rarely used or require unnecessary decision making (always interpret this data in the context of accessibility and safety),
- offer a choice of how much customisation (or, in general, how many choices) people would like to have.
Choices and safety
When there is risk associated with your design, offering choices can be a way to mitigate that. For example, Chayn has a feature that allows survivors to receive resources by email. Chayn also gives them the choice to select a custom email subject. This can help subscribers stay safe in case someone sees their email inbox.
Features like that can give survivors agency over their own safety. For someone who’s experienced trauma, this can feel particularly empowering.
Choices and transparency
Choices can also go beyond direct ways of engaging with a design. Indirect choices can look like:
- picking which files to upload to the cloud,
- choosing which events to add to a calendar,
- deciding what information to disclose when chatting with an AI companion.
As designers, it’s our responsibility to help people make informed decisions in cases like these. Being transparent and explicit about how data is used and stored can give people the information they need.
Intentional friction
When we talk about agency, we also need to talk about friction.
For years, designing digital experiences has meant making things so simple and seamless that people would naturally flow through them. Opening a video streaming platform provides us with the videos we’d want to watch without us having to search for them. Watching a video directly leads us to the next one. And the next one . . . and the next one. With barely a click, we’re able to interact with the platform and consume hours of “relevant” content. It’s seamless, easy, frictionless.
In many cases, we have achieved that. We can now consume all the content we want with minimal action. We can find dates by simply swiping left and right. We can order food with fewer than 10 taps. And we can learn “everything” by talking with an AI companion on our phone.
If we consider where it all started (punched cards and writing commands in a terminal), the evolution of software is impressive. And it’s not only that software has become easier to use. What used to be a specific experience in time (“I will open my computer to do X”) is now a fluid and pervasive part of our lives.
This isn’t always bad. Technology now lets us do more, faster, and better. It’s more convenient to use and, overall, it's often a better experience. But this convenience isn’t always good either. Because when we make something frictionless, we can end up taking away people’s choices in the process. Frictionless technology can imply choiceless technology.
If we can spend hours watching video content without taking any action, how much choice are we actively making over what we watch or for how long?
When the information we consume is filtered through social media algorithms, search engines, and AI models, how much choice do we have over our information sources?
And when popular apps are pre-installed on our phones, how much choice do we have over which ones we are using (and where our data goes)?
I’m usually quite optimistic when thinking about tech, so I’m not trying to paint an image of a dystopian tech world. I am, however, trying to point us to the idea that frictionless technology might not always be what we need. Especially when practising trauma-informed design.
And that’s where intentional friction (or speed bumps) comes in. Friction, because we break the continuous stream of automatic actions to create space for choice. Intentional, because we don’t add it everywhere; we carefully place it where it can equip people with agency, or where its absence could cause harm.
What could this look like? Here are some ideas:
- directing people to another place for content that may be activating, so they have to actively choose to see it,
- social media feeds that stop after a certain number of posts (see the sketch after this list); we can see more, but we’ll have to restart the app first,
- video conferencing software that forces us to pause for a few minutes after several hours of meetings (this can work for games, too),
- streaming platforms that stop autoplaying videos after a while,
- apps and websites that don’t automatically “remember” us (this can also be good for safety),
- notifications that are disabled by default and can be enabled in the settings (also good for safety).
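To make the feed idea concrete, here is a rough sketch in TypeScript of what such a session cap could look like. The Post type, the fetchPosts function, and the cap of 50 posts are all assumptions for illustration; the point is the deliberate stopping point, not the specifics.

```typescript
// A minimal sketch of intentional friction in a feed: after a fixed number of
// posts per session, loading stops until the app is reopened. Everything here
// (the Post type, fetchPosts, the cap of 50) is hypothetical.
interface Post {
  id: string;
  body: string;
}

declare function fetchPosts(count: number): Promise<Post[]>; // assumed API

const SESSION_POST_CAP = 50;
let postsShownThisSession = 0;

async function loadMorePosts(requested: number): Promise<Post[]> {
  const remaining = SESSION_POST_CAP - postsShownThisSession;
  if (remaining <= 0) {
    // The friction point: instead of scrolling forever, the feed pauses here.
    // The UI can say "That's all for now" and leave the next step as a choice.
    return [];
  }
  const posts = await fetchPosts(Math.min(requested, remaining));
  postsShownThisSession += posts.length;
  return posts;
}
```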
In general, intentional friction can help protect us from harm. When an algorithm or a technology becomes so automatic that it strips us of our agency and autonomy, harm can (and almost definitely will) happen. Intentional friction instils moments of pause and space that give us back choice.
But intentional friction is more of a pillow than a wall. It’s extremely helpful, but it requires buy-in from the people developing and, most importantly, funding those technologies. As we’ll see later, the real protection against harm requires a change in the culture of tech.
Ingredient shelf: software
In this section I include some specific practices for working with software. This isn't an exhaustive list nor a checklist. I once again use Dr. Carol Scott's ingredients metaphor here to convey that.
So, welcome to the first ingredient shelf! This one’s for software.
UX fundamentals
Trauma-informed software is built on top of “good” software. The fundamentals of UX design are the basis of a trauma-informed practice.
Move towards
- usability heuristics,
- considering the intersections between accessibility and trauma in your product strategy and decision making.
Move away from
- deceptive patterns.
Communication
Warm, inviting, welcoming, and inclusive content and visuals help survivors stay regulated while engaging with a design. It’s important to avoid content and visuals that can activate or re-traumatise a survivor.
Content
Move towards
- warm, inviting copy with language and tone that is validating, affirming, empathetic, understanding, and non-judgmental,
- inclusive and gender-responsive language. As Chayn recommends, “use gender-neutral language without being gender-ignorant.”
Move away from
- shaming or blaming the person using your design,
- content walls and content that adds a lot of cognitive load. Make sure that this is true in both desktop and mobile designs; content that looks great on a large screen might be too long on a smaller one.
Typefaces
Move towards
- accessible typefaces with adequate letter spacing and minimised “imposter shapes” (letters that are easily mistaken for one another).
Move away from
- hard-to-read decorative or hand-written script font families.
Move with curiosity
- consider picking typefaces created by designers with marginalised identities.
Colours
Move towards
- warm and soothing colours,
- in light themes, softening the white background using your primary colour. In my experience, this feels more soothing than softening with grey. Alternatively, use a very soft non-white background,
- accessible colour contrasts. Use tools that help you check accessibility of colour combinations, like “Who Can Use.”
Move with curiosity
- be mindful of over-emphasising western associations with colours (for example, that red is a bad colour only used for errors and urgent notifications).
Images
Move towards
- using warm and inviting imagery that makes trauma survivors feel calm and welcomed,
- inclusive images or illustrations of people,
- images of nature and animals.
Move away from
- using images that directly depict harm or suffering, which can activate and re-traumatise survivors,
- images of items that are often involved in traumatic events or that many people have negative associations with (for example, guns).
Motion and animation
Move towards
- allowing animations to be disabled, ideally at the operating system level (a small sketch of respecting that setting follows at the end of this section).
Move with curiosity
- use animations, moving images, videos, and motion sparingly. Trauma can make concentrating harder. Animations and motion can be distracting, and that can be emotionally activating,
- animations can cause vomiting and intense discomfort for people with vestibular disorders.
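As a sketch of what respecting that operating-system setting could look like in practice, the snippet below (TypeScript, browser context) checks the “reduce motion” preference before running a non-essential animation. The fade-in itself is just an example of a decorative animation.

```typescript
// A minimal sketch: respect the OS-level "reduce motion" preference
// before starting a purely decorative animation.
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function revealGently(element: HTMLElement): void {
  if (prefersReducedMotion.matches) {
    // The person has asked their operating system to minimise motion:
    // show the element immediately, with no animation at all.
    element.style.opacity = "1";
    return;
  }
  // Otherwise, run a short, subtle fade-in (Web Animations API).
  element.animate([{ opacity: 0 }, { opacity: 1 }], { duration: 300, fill: "forwards" });
}
```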
The “Exit this page” button
The “exit this page” button is a component designed to provide a fast and safe path out of a website. It’s often used in websites that could place people in physical or emotional danger if they are seen browsing them. When the “exit this page” button is clicked, it sends people to another predetermined, “safer” website, like a search engine or the local weather.
The button should work like this:
- I’m in searchEngine.com,
- I go to website.com, which has the “exit this page” button,
- I press the button,
- I am navigated to the local weather website (let’s call it localWeather.org),
- I press the browser’s back button and am navigated back to searchEngine.com (website.com is skipped).
Move towards
- disabling the ability to return to the original website once the “exit this page” button is pressed,
- opening a second tab or using an overlay to quickly hide the open website,
- using a keyboard shortcut (for example, Esc or triple Shift) to activate the button,
- using the “exit this page” pattern in some situations. The “exit this page” pattern includes 2 additional pages: 1 that explains how the “exit this page” button works and 1 that provides more information about safety online. This pattern can be useful when the “exit this page” button only appears in specific parts of the website (for example, a form to report domestic abuse on a government website) or when your research shows that people misunderstand how the “exit this page” button works.
Move away from
- making the button hard to access. Don’t hide it behind dialogs or inside navigation drawers.
Move with curiosity
- be mindful of where you send survivors. News or weather websites often include negative headlines that can be activating. Consider using websites with soothing content, such as images of small animals, instead,
- the most common implementation of the “exit this page” button can fail when someone visits multiple pages of the website that contains it. Using the above example, consider the following flow: searchEngine.com → website.com → website.com/resources → localWeather.org. Because we’re implementing an “exit this page” button, we want to make sure that navigating back from localWeather.org takes us to searchEngine.com. However, this isn’t the case. Instead, we will be navigated to website.com, only skipping the last page viewed (website.com/resources). To fix this behaviour, consider closing the current tab and opening a new one when the button is pressed.
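To ground that last point, here is a rough sketch of an “exit this page” handler in TypeScript. The element id, the destination URL, and the Escape shortcut are all assumptions; treat it as one possible shape, not a definitive implementation.

```typescript
// A minimal sketch of an "exit this page" handler. The element id,
// destination URL, and keyboard shortcut are hypothetical.
const SAFE_URL = "https://www.example.org"; // a neutral destination, chosen with care

function exitThisPage(): void {
  // Open the neutral destination in a fresh tab, without an opener reference.
  window.open(SAFE_URL, "_blank", "noopener");

  // Try to close the current tab so none of this website's pages remain in
  // its history. Browsers only allow scripts to close tabs they opened, so
  // also replace the current history entry as a fallback.
  window.close();
  window.location.replace(SAFE_URL);
}

document.getElementById("exit-this-page")?.addEventListener("click", exitThisPage);
document.addEventListener("keydown", (event) => {
  if (event.key === "Escape") {
    exitThisPage();
  }
});
```

Even with the fallback, earlier pages of the same website can remain reachable through the back button, which is why the new-tab approach described above matters.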
Adaptive and responsive design
Not everyone has safe access to every type of device. Some survivors may only have private access to a phone. Others might not have a phone or may worry that spyware has been installed on it. The only safe option for them could be something like a PC in a public space (like an internet cafe or library).
Designing for all types of screens and devices allows us to meet people where they are.
Move towards
- designing for phones, tablets, and desktops, and testing the final product on multiple devices,
- considering if designing for less common screen types (like smart watches or smart TVs) could also benefit survivors,
- designing for touch screens first (mobile devices lack many input accelerators such as right click, mouse hover, and keyboard shortcuts),
- designing to the strengths of each platform and screen,
- testing accessibility with real people using screen readers on multiple devices and operating systems.
Move away from
- conducting fast, convenient, or “guerrilla” research (research conducted quickly in public spaces). These techniques tend to over-emphasise certain types of devices and often exclude people with disabilities,
- locking orientation when designing for phones and tablets. Some people may not be able to easily rotate their phone between portrait and landscape.
“Stressful” components
There are some UI components which tend to cause more stress or discomfort. Pop-ups and timers are prime examples of these. Designing them with care can result in a smoother experience for survivors. Microsoft, in their resource “Mental Health and Cognition: Design pattern guidance,” provides some useful ways to achieve that.
Pop-ups
Pop-ups often interrupt an individual’s intended action. The resulting distraction and cognitive overload can activate stress and anxiety.
Move towards
- timing pop-ups to appear at appropriate times so that they are aligned with what a person is trying to do,
- providing the option to disable pop-ups,
- including the relevant actions inside the pop-up window; for example, a pop-up warning someone about their battery running low should include a button that activates the battery saver mode.
Timers
Design timers carefully. Engaging with them can contribute to anxiety.
Move towards
- counting up rather than counting down to zero,
- soothing colours that can help minimise feelings of anxiety,
- using calming and figurative imagery for a timer, like a tree that grows over time, rather than a stopwatch,
- providing choices to manage a timer, such as adding more time or pausing it.
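If it helps to see the first and last items above in code, here is a tiny TypeScript sketch of a timer that counts up and can be paused. The one-second tick and the callback shape are assumptions for illustration.

```typescript
// A minimal sketch of a count-up timer that can be paused and resumed.
let elapsedSeconds = 0;
let intervalId: number | undefined;

function startTimer(onTick: (elapsed: number) => void): void {
  if (intervalId !== undefined) return; // already running
  intervalId = window.setInterval(() => {
    elapsedSeconds += 1;
    onTick(elapsedSeconds); // shown as "3 minutes so far", not "2 minutes left"
  }, 1000);
}

function pauseTimer(): void {
  if (intervalId !== undefined) {
    window.clearInterval(intervalId);
    intervalId = undefined; // the elapsed time is kept, so resuming is possible
  }
}
```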
Data: being known by technology
Data has recently come to dominate the conversation around technology. Of course, data existed long before that. But when we use this word now, we mostly associate it with digital information. Very often the conversation is about people’s data and how that information is shared. Or, more precisely: how, with whom, and for what purpose. Questions of ethics, value, trust, and safety arise when we talk about it. And some of these questions are critical to trauma-informed design.
Privacy
One of my favourite things in Chayn’s trauma-informed design principles is the inclusion of privacy. Privacy is a fundamental right. Also, it’s intertwined with safety and trust, which makes it even more important for survivors.
Unfortunately, more often than not technology is used to strip people of their privacy. Our obsession with data and targeted advertisements can lead us to see “users” as a collection of data points to extract instead of people to serve. And when working with trauma survivors, this isn’t only unethical, it’s also unsafe.
For people who are living in situations of crisis, for people who have been (or still are) targeted by abusers, or for people who have had their personal information exposed, a data leak or a misplaced targeted ad could make the difference between life and death.
Of course data will keep existing, and we will keep collecting and using it. But what’s important here is the way we approach data. Are we being extractive or are we being consentful?
The FRIES of consent
In the past few years, consent has taken a more central role in the world of tech. Regulations like the European Union’s General Data Protection Regulation (GDPR) have helped establish basic rules on how consent and data should be managed. But there’s more work to be done.
When we practise trauma-informed design, consent can both support and damage the trust-building process. When done right, people can feel seen and respected. For many, it can be a restorative experience. When done wrong, people can feel frustrated, used, or taken advantage of. This can re-enact the dynamics that left them traumatised in the first place.
But what does right or wrong mean when looking at consent? American sexual health organisation Planned Parenthood has defined 5 characteristics of consent. According to them, consent should be:
- freely given,
- reversible,
- informed,
- enthusiastic,
- specific.
Yes, FRIES!
The Consentful Tech Project has adapted these characteristics for tech. Let’s go over them:
Freely given
This means that a design shouldn’t mislead us into doing something we wouldn’t normally do. An example would be pre-filling a checkbox or using deceptive language (“I do not want to receive marketing communication emails”). These cases are misleading partly because they assume consent before it is given, and partly because they deviate from what people expect.
Reversible
Even if we initially agree to something, we should be able to change our mind about it. A great example here is the unsubscribe button in most email lists. Another great one is “user preferences” that allow for changes in how our data are managed. However, the latter can sometimes be problematic if that option is hard to find.
Reversible consent is also very important when conducting research.
Informed
Clarity is important when it comes to consent. Unfortunately, we rarely see this in tech, where consent is usually built around people agreeing to long legal documents like terms of service or privacy policies.
This practice makes informed consent harder, creates walls of content, and fuels inequity through inaccessible language (“legalese”).
Enthusiastic
This is my favourite one. It refers to our wanting to give consent instead of being forced to do so. If our workplace uses Slack or all of our friends are on Facebook, it’s unlikely that we’ll be able to avoid agreeing to their terms of service. But it’s unfair to assume that we want our data to be used for targeted ads or to train AI models just because we need to use a service.
Specific
Agreeing to our data being used in one way doesn’t mean that we agree to its being used in another way. This is a common problem with most terms of service; they don’t allow for specificity in what we are agreeing to. And when they change, it’s often hard to track what happened. In contrast, many cookie notices ensure this specificity by providing options around which cookies are accepted and which ones aren’t.
Ingredient shelf: Data
Welcome to the second ingredient shelf. This one includes specific practices for working with data. Same as before, this isn’t an exhaustive list, nor a checklist. Take what resonates, and leave the rest.
Access
Not everyone has an email address or a phone number. Some people might be accessing our services from a shared device. The need for authentication creates a barrier for them and risks keeping information from the people who need it the most.
Move towards
- providing access to your content (or part of your content) without the need for authentication,
- allowing people to view the devices that accessed their account, so that they can know if they have been hacked,
- enabling biometric authentication, which can be more secure and can reduce the cognitive load associated with remembering a password. Keep in mind, however, that not everyone is comfortable sharing biometric data, so make that feature optional.
Move away from
- remembering people’s credentials by default if you are working with at-risk populations. Do offer it as an option, however, since it makes the log-in process significantly easier and reduces the cognitive load associated with remembering passwords,
- using only facial recognition for biometric authentication. Facial data is particularly sensitive to share, and there have been reported cases of facial recognition failing for people of colour. Consider including it as an optional feature to make your design more accessible.
Move with curiosity
- modern password requirements can be cognitively overwhelming, so consider also including alternative authentication methods like one-time passwords (OTP) or social sign-in,
- have clarity on the intention behind using authentication: is it truly a needed feature that makes our design work? Or is it a tool that we use to extract people’s email addresses and capture data?
Anonymity and encryption
Both anonymity and encryption can protect survivors. At the same time, they can hide or enable abuse. It’s important to be intentional and transparent in our decisions around them.
Anonymity
Move towards
- encouraging or enforcing anonymity, if protecting the identity of survivors is a priority for your design,
- disabling or discouraging anonymity, if it can be used to hide the identity of abusers.
Move away from
- storing data on people’s devices, as this can put them at risk.
Move with curiosity
- when collecting or publishing data, be aware that anonymisation can be ineffective. For example, using 1990 US census summary data, Dr. Latanya Sweeney has estimated that 87% of the US population can potentially be identified using only their 5-digit ZIP code, their gender, and their date of birth.
Encryption
Move towards
- using encryption when working with sensitive data,
- informing people about the use of encryption, and especially about its absence.
Move with curiosity
- mindfully consider when to use encryption, since it can be used for storing illegal data or abusive content.
Hardware: being in space with technology
When I was first engaging with trauma-informed design, I heard about a small device that can be used to provide location data, similar to how a phone does. At first, it didn’t occur to me that this was problematic. Then, someone pointed out how this device could very easily enable stalking.
When discussing trauma-informed design, we rarely extend the conversation to hardware. Since this is primarily a book on content, I won’t go into details either. But I believe that it’s important to at least point out the need for trauma-informed hardware.
Here are some examples of where trauma and hardware intersect:
- devices that can access people’s location can enable stalking, expose survivors’ data, and put them at risk (this also includes personal cameras that attach the location as metadata to a photo or video),
- devices that include microphones or cameras can be used to spy on people, and the idea of them being used in that way can make many survivors very uncomfortable. Hardware and software that disable those features (like camera covers) can provide survivors with some comfort,
- public cameras and other public sensors collect vast amounts of data without consent.
Artificial intelligence: Exploring the futures of technology
At the time of writing this book (2024), everyone is talking about artificial intelligence (AI). The future of this technology is unclear, and there is so much noise surrounding it. Writing about it risks adding to the noise or creating content that will soon become obsolete or inaccurate. But not including it would be a notable miss.
That being said, it’s important to acknowledge that the underlying principles of designing software and working with data also apply here. AI also has specific nuances that we need to take into consideration when practising trauma-informed design.
Machine learning
Artificial intelligence isn’t a new concept, and neither is the hype around it. Different approaches to developing AI systems have existed since the 1960s, and even though most of them found little to moderate success, there have been periods of high interest in the past.
Currently, the most popular way of developing AI systems is called machine learning. A subcategory of it, deep learning, is also frequently used. Arguably, these 2 approaches perform better and are more successful than anything done before. They have only become feasible recently because they require large amounts of data and a lot of computational power. In many ways, contemporary AI is only possible thanks to modern hardware.
Machine and deep learning are not the only ways of developing AI systems today. But they dominate most of the AI conversations. So I will only be focusing on them in this section.
Inherent issues of machine learning
If one were to oversimplify machine learning, they could use the term “fancy statistics.” In machine learning, a computer program receives a large amount of data and creates a statistical model to represent that data. This process is called training. After the training, when new data is given to the AI system (input), it uses that representation to generate an output. Deep learning is similar, but uses a lot more data and fancier statistical models.
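To make the “fancy statistics” framing tangible, here is a toy TypeScript sketch of the simplest possible kind of training: fitting a straight line to a handful of made-up numbers. Real machine learning models are vastly more complex, but the shape is the same: fit a statistical representation to data, then use it to produce outputs for new inputs.

```typescript
// A toy illustration of "training" and "inference" using least-squares
// linear regression. The data points are made up.
const xs = [1, 2, 3, 4, 5];           // training inputs
const ys = [2.1, 3.9, 6.2, 8.1, 9.8]; // training outputs (roughly y = 2x)

// "Training": find the slope and intercept that best fit the data.
const meanX = xs.reduce((a, b) => a + b, 0) / xs.length;
const meanY = ys.reduce((a, b) => a + b, 0) / ys.length;
const slope =
  xs.reduce((sum, x, i) => sum + (x - meanX) * (ys[i] - meanY), 0) /
  xs.reduce((sum, x) => sum + (x - meanX) ** 2, 0);
const intercept = meanY - slope * meanX;

// "Inference": a new input produces an output from the fitted model.
const predict = (x: number): number => slope * x + intercept;
console.log(predict(6)); // a plausible guess, but only an approximation of the data
```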
This way of working with AI creates 2 fundamental problems which are relevant for trauma-informed design. First, the output of an AI system isn’t always accurate. Second, what is deemed as “correct” by the model (and thus what is produced as an output) is defined by the dataset used to train it.
The first problem is important because it undermines safety and trust. Since machine learning models are using statistics under the hood, their output is always an approximation of the data used to train them. The statistics used now are very sophisticated, but an approximation is never the same as reality. This means that every time we choose to use an AI system, we run the risk of providing inaccurate information to the people using our design.
Knowing how important trust is for survivors, this process has the potential to cause harm. But, even more importantly, if we are reaching out to an AI system in moments of crisis, the accuracy of information could significantly impact our safety.
The second problem is trickier. An AI system’s output is heavily influenced by the data used to train it. If those data are biased, the system will replicate this bias. And because deep learning requires vast amounts of data, uncurated datasets from the Internet are often used. Unfortunately, there is a lot of bias on the Internet.
AI is evolving very fast, and it’s hard to say where we’ll be in 1, 5, or 10 years. Chances are that inaccuracy will still be a part of it, but we’ll probably get better at managing it (as designers and as people). The bias problem is harder to solve. Of course we need better datasets, but this is very hard to achieve since the datasets need to be so large. And this isn’t the only issue.
In their 2021 paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Emily M. Bender and colleagues conclude that language models “trained on large, uncurated, static datasets from the Web encode hegemonic views that are harmful to marginalised populations.”
The bias in our datasets is not random. It has the potential to benefit the dominant social groups, and it points to how systemic issues affect AI.
Systemic issues affecting AI
Capitalism + White supremacy + Patriarchy = harmful AI :(
Everything from how AI is built to how it “acts” and the consequences it has is coloured by the systems it is created in: systems of capitalism, white supremacy, and patriarchy.
AI is built by extracting the data of the people who use it (often without consent) and by extracting the labour of annotation workers in the Global South. It’s used to produce biased and even intentionally abusive content, as in the case of deepfake porn. And it results in the loss of jobs, especially in the content design, graphic design, and customer support fields. All these are both impacts of trauma and sources of it.
And, to some extent, this is true for every technology. In her book The Real World of Technology, Ursula Franklin argues that every technology “ages” in a 3-phase pattern. This includes:
- advocacy (excitement and promises),
- adoption (acceptance, growth, and standardisation),
- institutionalisation (economic consolidation and stagnation).
Unfortunately, as Ethan Marcotte writes in The World-Wide Work, “The promise of liberation that’s made in the first phase is never, ever fulfilled.”
AI has existed for decades, but it has only become commercially mainstream in the last few of them. And yet it is already causing disproportionate harm to the ones who have historically and systemically been excluded and marginalised. No amount of content warnings (“AI can make mistakes”) or output curation will fix that.
With AI dominating the conversation in tech, and with multiple emerging technologies on the horizon, it’s worth asking ourselves: what is our role as trauma-informed designers? Can we advocate for fairer, more ethical, and more trauma-informed technology development? Can we interrupt the way technologies age? Can we work towards a more liberated world?
Ingredient shelf: AI
Trauma-informed practices won’t fix the systemic issues of AI. But sometimes we have to work inside those systems and do the best we can to mitigate harm. Here are some practices that could help with that.
Especially here, because of how novel AI is, it’s important to remember that this isn’t an exhaustive list. I’m expecting things to become clearer and more standardised as the current way of building AI matures, and as the hype and noise go away. Everything is fluid now; we cannot know what will stand the test of time.
Move towards
- making it easy for people to double-check important information,
- including citations in AI’s responses whenever possible,
- continuously testing the results of AI systems,
- including systematically excluded people in the development of AI and the design of AI solutions,
- advocating for AI regulations.
Move away from
- designing AI solutions that directly replace people’s work,
- designing AI solutions that can directly cause harm (for example, deepfake video generators),
- using people’s data to train AI systems unless they have consented to it (FRIES),
- using AI systems for use cases where the AI’s inherent problems or biases can be amplified (for example, automatic profiling),
- using AI systems for surveillance.
Move with curiosity
- consider if AI is needed in the design and avoid using AI for the sake of it.
Creating a more trauma-informed world
Lately, I have been in plenty of calls on trauma-informed design, accessibility, and diversity, equity, and inclusion (DEI). More often than not, someone will ask about how we can push the adoption of those practices in the face of resistance. The question is valid, and also reveals a larger systemic issue.
For years, the culture in tech has been anti-trauma-informed. Maxims such as “design for the 80%” and “move fast and break things” have been used all too often. And countless apps and services have ended up causing more harm than good to both individuals and communities. In the face of such cultural and systemic barriers, we might ask ourselves if practising trauma-informed design is worth it.
I believe that it is. Because I see trauma-informed design as a practice that spans beyond the creation of interfaces and experiences. It’s a practice of pushing against the culture and systems of oppression that dominate tech. Every time we avoid re-traumatising someone, every time we keep a survivor safe by protecting their privacy, and every time we help someone feel in control through meaningful choices, we are moving towards a more just and equitable world.
A practice of trauma-informed design alone might not be enough to get us there. But it is a much-needed start.