
Able, allowed, should: Navigating modern tech ethics

PROCESS · CULTURE · OP-ED · PRODUCT DESIGN
By Margaret S.
21 min read
May 7, 2018

SUMMARY

As technologists, we’re stewards of some of the most powerful communication tools ever created. How can our responsibilities encompass both systems effects and societal impacts?

I want to share a talk I gave on the design track at SXSW in early March of this year. The talk was inspired by my deeply held belief that we are at a critical moment in my company’s and our industry’s history, and by a desire to propose some ways in which we can rise to the challenges that come with these amazing products we’ve built.



As technologists, we are all stewards of some of the most powerful communication tools ever created. These tools have generated a lot of good in the world, but their very power demands a deep sense of responsibility and a commitment to making the most ethically responsible decisions possible, every day.



While this talk was geared toward designers, its core message is for all of us: broaden the aperture with which we view our responsibilities to encompass both systems effects and societal impact. This is hard, and we will need to keep investing time and energy into new ways of thinking, working, and assessing our success to get these tough decisions right.



I have full confidence that the challenges we are currently navigating will make us and the industry as a whole stronger, more rigorous in how we approach our responsibilities, and better equipped to navigate the inevitable challenges that lie ahead.



Note: If you’d prefer to listen instead of read, here’s the audio recording.


This past year has not been the easiest year for Facebook.



We’ve faced a lot of hard questions and a few controversies over issues like election interference, privacy, social media’s effect on well-being, and content policy, among others. When it comes to these issues, there are rarely obvious answers or easy fixes.



We support an incredibly diverse, global community whose members do not all share the same ideas about what is right and wrong, or acceptable and unacceptable, to say nothing of the same laws.



Pop ethics quiz



Let’s try an ethics puzzle, and don’t worry, it doesn’t involve a runaway trolley.



A natural disaster happens. Could be a hurricane, an earthquake, a landslide. Point is: lives are at stake. An international disaster response organization asks Facebook to provide information about people in the affected area: their location and their movements.



What do we do? If we share the data, we might be able to help save some lives. But that’s a lot of data to share about people, their locations, and their movements. They might consider it akin to surveillance and a breach of their privacy.



I could travel the world, and I would hear different responses from just about any group of people.



This probably won’t come as a surprise, but for us, this wasn’t a hypothetical. In March 2017, Peru faced some terrible flooding. Humanitarian organizations engaged with us to see if we could help. We looked at usage trends on our platform and they reflected where people were located, where they were moving and where they were checking in as “safe”.



In June of last year, we announced a partnership with UNICEF, the International Federation of Red Cross and Red Crescent Societies, and the World Food Program, in which we share real-time but anonymized, aggregated data about people during natural disasters. Working with respected organizations certainly helps some people feel better about the data being shared, but it doesn’t make the privacy concerns go away. It’s a complicated decision with valid points on both sides.
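To make “anonymized, aggregated” concrete, here is a minimal sketch of one common approach to this kind of sharing: bucket individual location signals into coarse grid cells and time windows, and suppress any cell with too few people so no individual or small group can be singled out. The cell size, threshold, and data shapes below are assumptions for illustration, not a description of Facebook’s actual pipeline.

```kotlin
import kotlin.math.floor

// Illustrative only: aggregate location pings into coarse cells and
// drop sparsely populated cells before anything is shared externally.
data class Ping(val userId: String, val lat: Double, val lng: Double, val hour: Int)
data class Cell(val latBin: Int, val lngBin: Int, val hour: Int)

// ~0.1 degree cells (roughly 11 km at the equator) and hourly windows.
fun toCell(p: Ping, cellSizeDeg: Double = 0.1): Cell =
    Cell(floor(p.lat / cellSizeDeg).toInt(), floor(p.lng / cellSizeDeg).toInt(), p.hour)

fun aggregate(pings: List<Ping>, minCount: Int = 10): Map<Cell, Int> =
    pings.groupBy { toCell(it) }
        .mapValues { (_, cellPings) -> cellPings.map { it.userId }.distinct().size }
        .filterValues { it >= minCount } // cells below the threshold are never shared
```

The design choice that matters here is the suppression threshold: sharing only coarse counts, never individual rows, is what lets partners see where people are moving without seeing any one person’s movements.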



That’s just one example. Situations like this arise frequently when you work on products used all over the world. Part of the job is dealing with challenging decisions on the best way to design and deploy new technology. This is true at Facebook, and it was also true when I led design at YouTube, as well as Google Search.



I’ve been working in design and technology for over 20 years, leading design for Google Search, then YouTube, and now at Facebook, and I can tell you that many of these problems aren’t new. But as we’ve learned, with the scale at which we are operating, this work requires a heightened sense of awareness and responsibility.



So, I want to talk about that responsibility, and some of the lessons we’ve learned and are still learning. And I also want to talk specifically about how design plays a critical role in all of this. And when I say design, I’m not just talking about design at the surface level, where we make things pleasing and beautiful. I’m talking about the full spectrum of designing useful and usable products end to end, from the initial idea of what we might build to launching it and all the iterations that follow.


The intersection of the humanities and technology



As designers, we sit at the intersection of technology and the humanities. This isn’t a new role for us, even in the modern digital age. Since the beginnings of the study of human-computer interaction back in the late 1970s, designers have been working to understand and ensure that the systems we build are an effective interface for people.





And in a world of global, networked, and highly personalized digital products, the stakes for getting things right as designers have gotten much, much higher. Doing right by the people using our products sometimes means asking ourselves hard questions about what we are building in the first place.


Able, allowed, should



Designers, engineers, and product managers spend a lot of time thinking about what we are able to build. That is, what’s possible, within the limits of our current technology and what we can do with the resources we have.





We also spend time thinking about what we are allowed to build. That is, what ladders up to our company’s business objectives, what’s within our policies, and, of course, what’s legal.

But as an industry, we need to spend just as much time thinking about what we should build, even though we’re not required to; and in some cases, what we shouldn’t build, even though we’re technically allowed to. “Should we” is the essence of ethically responsible design.

As in the case of disaster relief, the answers are rarely obvious. If they were, we wouldn’t call them ethical dilemmas. The question of “should we” also gets to the heart of what motivates us.

Multiple motivations

The fact is, companies like Facebook and other large, global platforms often have multiple motivations that drive decisions about what, how, and why we build.


A major motivation for us, and for most tech companies, is innovation. We always want to push the envelope and explore the ways in which science and technology can solve problems and amplify human abilities.



Another motivation is profit. Facebook is a public company with a legal obligation to care about being profitable for shareholders. Most of us work for businesses. And there’s nothing inherently wrong with being a business.





And there’s often a third motivation: making a positive contribution to society. I work at Facebook because I believe in technology’s power to be in service of human and societal needs. It’s been true of my work at Google, at YouTube and now at Facebook.

Facebook’s mission is to “give people the power to build community and bring the world closer together.” I genuinely believe in that mission. That’s why it’s particularly painful for us to see the products we’ve built with this mission in mind used by others to hurt people or society.



There’s a tension between these motives, to be sure, and if they get out of balance, it can lead to poor decisions and unintended outcomes. But the fact is, a well-capitalized business can do a lot of innovative things that are good for the world.



We just need to make sure that we keep the long view in mind and are vigilant about making ethical, responsible decisions along the way.

Because it’s in the long view of things that these three seemingly competing interests realign.



A history of disruption

Of course, we’re not the first technologists to face these questions.



Throughout history, whenever new, powerful technologies have been invented, they have upset existing social norms and systems, and that can be unsettling to live through. From agriculture to the printing press to the rise of industrialization and digitalization, all of it disrupted existing industries, ideologies, and power structures. All of it had unforeseen consequences and its share of doubters.



Fun fact: did you know that Socrates warned against writing because people would stop using their memories, and that it would “create forgetfulness in the learners’ souls”? Now, as a dyslexic, I kinda wish he’d won that battle. But while even dyslexics like me would probably agree that writing is a technology we should continue to embrace, we only know that in hindsight. When you are living through disruption, it’s hard to separate change aversion from a well-founded fear of things that might be bad for people and society. And even more challenging, these inventions can simultaneously be both good and bad; it’s all about how they are deployed, used, and managed.



What’s new? Speed and scale

And on top of the disruption new technology can cause, there are two things that make Facebook’s current situation materially different from the challenges of the past: scale and speed.



Out of the 7.6 billion people on Earth, and the almost 4 billion people estimated to be on the internet, there are over 2 billion people on Facebook. In 1930, 2 billion was roughly the population of the entire planet. The sheer number of people using our products is astounding. While that’s a wonderful thing in many respects, it means our responsibility is that much greater.



All this is even more challenging to navigate when things are moving so incredibly fast. Every major technological advancement in the past took generations to reach a mass audience.

Understanding the time it took for 100 million people to adopt different products over the course of history.

It took print over a century to reach an audience of 100 million. Radio did it in 45 years, television in 20. Facebook and Snapchat reached their first 100 million in just four years. And Instagram and YouTube? Around 28 and 24 months, respectively.



Relative to other technologies, we’ve had just a fraction of the time to understand how the interfaces we develop impact people and society. This isn’t an excuse. But it puts a fine point on the challenge of keeping pace with the growth of our own products.



But despite the scale and the speed with which these changes are happening, taking into account the impact of new tech on people and society, the good and the bad, must become a more central part of how we as designers approach our work.



The four quadrants of design responsibility

Here’s a framework for how I’m thinking about this problem. I call it the Four Quadrants of Design Responsibility.

The Four Quadrants of Design Responsibility represent a broader view of design’s duties towards people and society.

On the x-axis, from left to right, you have what we build, at increasing scale: from the pixel, to the product, to the whole ecosystem in which your product operates. As you move from pixel to ecosystem, constraints, interdependencies, and complexities grow.



And on the y-axis, same idea, but the scale is the audience you’re designing for, starting with one individual human and working your way up to all of society. As the scale grows, we can assume less and less about who we are designing for: their motivations, their culture, their needs and wants.



As designers, we feel really comfortable in that bottom-left quadrant: it’s where we can push our pixels and polish our work to make something really well crafted. And it’s where we can be human-centered, focusing on a particular audience of people we are intentionally designing for and a set of tasks we understand to be important to them. We feel like we can understand and control things there. You can imagine this as the space of designing a book cover: you need to make it legible and convey the nature of what the book contains, but it’s a relatively finite design space to operate in.



But at the extreme end of the X-axis, we may face unanticipated consequences from network effects. The systems we’re designing are sometimes difficult to model and sometimes we only really see what happens with them when they’re used at scale. Think of a cleaning product that might work exceptionally well for a particular task, but when released into the water system, might have unintended effects on plant and animal life.



When you go far out on the y-axis, designing for the whole world, there’s a whole other set of problems that can arise. For instance, you see how differently Facebook works in different parts of the world, from the way people access the internet and the kinds of devices they have, to their social norms, political contexts, and economic conditions. And as the number of people using your product grows, so does the likelihood that people will use it in unintended ways, which can create both good and bad outcomes.



In the digital space, we see this with a product like YouTube being used not only for you and me to capture and share our own personal life moments on video, but also for the Khan Academy to transform ways in which people learn in a highly personalized way through instructional video. Or the ways in which Facebook has been used to organize social change movements like the Women’s March.



These uses, by large numbers of people, are impacting how our society functions.



Of course, there are less positive examples of these effects: election interference, polarization, or concerns about health and well-being; ways technology might inadvertently cause harm when it scales towards the top of that Y-axis.



These quadrants are not spaces where design traditionally spends a lot of time. They belong to the world of sociology, public health, cultural studies, sustainable design, and economics. In a sense, this is a new kind of digital urban planning.



Designing for all four quadrants, thinking expansively about the impact of our inventions on people and society, is the heart of ethically responsible design. In the digital realm there aren’t a lot of examples of products that do a great job of this…yet. That points to both a challenge and a huge opportunity for all of us.



So where can we look for inspiration? We have to look back in history a bit to see what lessons we can learn.

One helluva fish hook

This is one of my favorite design objects.

Halibut Hook (USA). Collection of John J. McLean, 1881, Baranof Island, Alaska, Department of Anthropology, National Museum of Natural History, Smithsonian Institution, E45990.

It’s a hand-carved fish hook, designed by a member of an indigenous tribe in Alaska over 100 years ago. I first saw it at Cooper Hewitt, Smithsonian Design Museum, in an exhibit about tools crafted by humans throughout history. Whoever made this probably didn’t think of themselves as a designer or inventor, but they were.



It may look archaic, but the design is more sophisticated than many of today’s fish hooks, and here’s why: it was designed to only catch halibut of a certain size. It left the small fish for future seasons, and it avoided the larger fish that were too big to haul into the canoe, which is a very practical design consideration.



Essentially, it allowed the people of that community to practice sustainable fishing, providing them with many seasons of prosperity. That alone makes it a fascinating, inspiring design artifact.



But a second aspect of this halibut hook also captured my imagination: in addition to its ingenious functionality, it has a beautiful carving that depicts the spirit exchange between the people and the fish in the sea. This community believed that if they showed respect to the fish they were looking to catch, more would come back the next season.




An object like this operates in all four quadrants because it’s not just concerned with catching a single fish, it’s designed to protect and sustain a broader ecosystem. It’s not designed to just benefit one fisherman, but an entire society of people over time, their spiritual life included.



All in all, it’s one hell of a fish hook.



So what lessons can we apply from this to something like Facebook? We need to excel at that bottom-left quadrant, to be sure, but we also need to get even more skilled at anticipating systems effects, protecting the health and well-being of our community, and understanding the impact on society when a much larger and more diverse population is using our products.



We haven’t always gotten this right at Facebook. And to be fair, this is really hard to do well consistently. But the fact that something is hard is not an excuse. We, as a company and as an industry, have to always do our best and always strive to get better at it.



While we may be one of the most visible examples of a tech company grappling with these issues, we are not alone.



So let’s examine a few themes that have emerged for us and ways we are working through them, in case they might be helpful to you in your own organizations.

Designing for misuse cases

As designers, we spend a lot of time thinking about the “use cases” we want to support. But as we’ve learned through some very hard lessons, we need to spend more time planning for “misuse cases.” That is, when people take tools that are meant to do good and do bad things with them.



This has been a challenge with every global platform I’ve ever worked on, from Facebook to YouTube to Google, and if you are old enough to remember, Tripod and Angelfire.



Our industry — the tech industry — is very optimistic. Optimism is a good thing. Without optimism, most of our products never would have been built in the first place. But optimism at the scale we’re talking about needs to be tempered a bit at times. Ethical design demands we ensure the safety of everyone on our platform and the integrity of the platform itself.



Here’s an example involving online abuse.



In the last few years, we observed, through qualitative research in India, that some Indian women wanted to upload profile photos but did not feel safe doing so. They were concerned that strangers — bad actors — would download their photos, use the photos to stalk or harass them, even threaten their personal safety and do harm to their family’s reputation.

Optional profile picture guard on Android.

So, in partnership with safety organizations, we released a set of safety features to address these concerns. People in India were given the option to add a profile picture guard. When people added just that visual cue, we found that other people were 75% less likely to copy the picture. And to truly block the bad actors, we made it so that, on the Android platform, no one can screenshot your profile picture.
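For readers curious about the mechanics, Android exposes a window-level flag that prevents a screen’s contents from being captured. The sketch below shows that general platform mechanism; the activity name is hypothetical, and this is not Facebook’s actual implementation.

```kotlin
import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

// Illustrative only: FLAG_SECURE prevents a window's contents from appearing
// in screenshots or on non-secure displays. A screen that renders a guarded
// profile photo could set the flag before drawing it.
class GuardedPhotoActivity : AppCompatActivity() { // hypothetical screen
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(...) for the guarded profile view would follow here.
    }
}
```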



This is an example of where we were able to see a misuse case in action on our platform: people using our product to hurt other people. We’ve reacted to this problem effectively, but in the future we need to get better at anticipating these kinds of issues in advance of launch and having protections in place from the get-go. This is very hard to do at a global scale, and we won’t always get it right, but it is our responsibility to always get better at it.

Outside expertise

Anticipating bad actors and, more generally, bad outcomes is complicated; sometimes it requires the help of outside expertise. When we’re operating at the scale of billions and engaging in those complex systemic and societal quadrants, we have to confer with external experts who will give us a fresh, valuable critique of our work and a perspective beyond Silicon Valley.

Asking for help is not a sign of weakness; it’s a sign of maturity.

And it is another foundational tool in ethically responsible design.



In our early days, we didn’t always seek outside help to the extent that we could or should have. We’ve learned a lot since then and are increasingly collaborating with outside experts to get things right.



Here’s a concrete example: Because people connect with friends and family through our products, Facebook is sometimes in a position to recognize and help people experiencing distress and suicidal ideation. When we were trying to figure out if and how we should engage in these situations, we sought outside help, because while we care a lot about the health and well-being of our community, we are not experts in suicide prevention.

Using pattern recognition AI to identify suicidal ideation.

What resulted was a set of tools co-developed with leading experts in mental health. Now, when we recognize that someone is expressing thoughts of suicide, we provide resources and offer help in connecting them with loved ones and mental health professionals. And we’ve recently started to roll out the use of Artificial Intelligence to help identify suicidal ideation on Facebook Live and connect people to resources to get the help they need in real time. In just the last month, we’ve worked with first responders on over 100 cases based on reports we received from our proactive detection efforts.



We also use pattern recognition AI to help accelerate the most concerning reports. We’ve found these accelerated reports are escalated to local authorities twice as quickly as other reports. We are committed to continuing to invest in AI technology like this to better serve our community.
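To make the idea of “accelerating the most concerning reports” concrete, here is a minimal, hypothetical sketch of using a classifier’s severity score to reorder a human review queue. The data shape, threshold, and function names are assumptions for illustration; this is not Facebook’s actual system.

```kotlin
// Hypothetical sketch: reports above a severity threshold jump the queue
// for human review; everything else stays in arrival order.
data class Report(val id: String, val text: String, val receivedAt: Long)

fun prioritize(
    queue: List<Report>,
    severityScore: (Report) -> Double, // trained classifier, injected; returns 0.0..1.0
    escalateAbove: Double = 0.9
): List<Report> {
    val scored = queue.map { it to severityScore(it) }
    val (urgent, routine) = scored.partition { (_, s) -> s >= escalateAbove }
    return urgent.sortedByDescending { (_, s) -> s }.map { (r, _) -> r } +
        routine.map { (r, _) -> r }.sortedBy { it.receivedAt }
}
```

The substance, of course, lies in the classifier and in the human review that follows; the queueing logic only determines who gets seen first.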



As someone who has lost two family members to suicide, these efforts are very important to me personally. These relatives didn’t live close to me, and I wish so much that I might have had a chance to understand sooner what they were going through and to change the outcome.



Of course, this isn’t the only area where we can learn from experts. We’re partnering with journalists in the Facebook Journalism Project to help ensure high-quality news on our platform and more sustainable business models for news organizations; we’re engaging with psychologists and other health experts to inform our research about concerns over the overuse of social media; and we’re engaging with external experts to inform our approach to the responsible development of AI and machine learning.



I’m grateful to these experts for helping us build products in a way that’s designed to create the best possible outcomes for people and society.

Assessing success

But let’s say you get better at anticipating and designing for misuse, and you are consistently seeking the help of outside experts to create better outcomes. You may still not make the right decisions if you don’t have a good way to understand what success looks like.



The tech industry uses a lot of metrics: MAU, DAU, ARPU (monthly active users, daily active users, average revenue per user), and more. Facebook is no exception. But it’s easy to forget, when we are looking at dashboards and numbers, that metrics are just a proxy for something that’s usually much more complicated than a single number can describe.



Figuring out whether your product is good for people and the world is a very complex, nuanced thing. It’s easy to fall into the trap of valuing what we can measure instead of measuring what we truly value. When not properly contextualized, metrics can serve as horse blinders, limiting your field of vision and causing you to miss important signals about how your work may be impacting people.



Instead of focusing solely on measuring success, we should focus on assessing success, because not everything important can be measured. By the way, this problem is not new and it’s not unique to the tech industry.



In 1968, Robert Kennedy gave a campaign speech on the topic of our Gross National Product, or GNP, a metric of a country’s economic output. And, after describing some awful poverty conditions he had witnessed in America, he said:

“Our Gross National Product, now, is over $800 billion dollars a year. But that counts air pollution and cigarette advertising. It counts the destruction of the redwood. It counts napalm and counts nuclear warheads and armored cars. Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play. It measures neither our wit nor our courage, nor our devotion to our country, it measures everything in short, except that which makes life worthwhile.”

Nearly 50 years later, in 2015, the UN adopted a set of 17 sustainable development goals to clarify in a much more nuanced way what it means to have a thriving society. While there are still major challenges with operationalizing and achieving these goals, the UN is no longer trying to assess a highly complex issue like societal health with a single number.



Figuring out what success means for Facebook has its own challenges, given the size, diversity, and complexity of our community. For a long time, we focused on time spent as one of our key measures of success. And especially when you are first starting out building your product, time spent can serve as a reasonable measure of whether or not you have created something of value. If people spend a lot of time using your product, it is reasonable to assume it’s meeting some kind of need.



The problem lies in looking at time spent in isolation from the bigger picture. Maximizing for that goal alone could create unintended bad effects for people and society. Facebook was created to be, first and foremost, a forum for friends and family to connect with one another. Through research and product changes, we’re working to create a service that supports meaningful relationships in the long term, both online and offline, not just passive scrolling.



Recently, we announced that we are revising our approach to measuring our success and changing one of our core metrics from time spent to Meaningful Social Interactions, prioritizing the kinds of interactions that create the fabric of our social graph: people talking to each other and sharing with each other about the things that matter most to them. And as we learn, our metrics will continue to evolve.
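Purely as an illustration of what it means to change the metric rather than just the target number, here is a hypothetical sketch of a metric that weights active, person-to-person interactions above passive consumption. The event types and weights are invented for the example; they are not Facebook’s actual formula.

```kotlin
// Hypothetical illustration: a metric where active, person-to-person
// interactions count for more than passive consumption.
enum class Interaction { COMMENT, REPLY, SHARE_TO_FRIEND, REACTION, PASSIVE_VIEW }

val weights = mapOf(                    // invented weights, for illustration only
    Interaction.COMMENT to 4.0,
    Interaction.REPLY to 5.0,
    Interaction.SHARE_TO_FRIEND to 3.0,
    Interaction.REACTION to 1.0,
    Interaction.PASSIVE_VIEW to 0.0     // time spent scrolling adds nothing
)

fun meaningfulInteractionScore(events: List<Interaction>): Double =
    events.sumOf { weights.getValue(it) }
```

The point of a sketch like this is that optimizing the score pushes the product toward conversations between people rather than toward minutes on screen.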



But beyond improving the quantitative metrics, another key way to get a richer picture of how things are working, and to capture important things that are not measurable, is to counterbalance the powerful quantitative data we have with equally compelling qualitative research.


Usage data may tell you what people are doing, but it doesn’t tell you why.

It’s critical that we get beyond our office walls and listen to real people, from all walks of life and every corner of the world, about how our products affect their lives.



There’s danger in assuming that wisdom only comes from large numbers. Sometimes, the most powerful learnings come from a single person. Here’s a painful example of something incredibly important we learned that never would have come up in our usage logs: Three years ago, we created a “Year in Review” product that used an algorithm to create a video summarizing each person’s year on Facebook.



Eric Meyer, a member of the Facebook community, told the story on his blog in a post he titled “Inadvertent Algorithmic Cruelty.” He had seen his friends’ videos online but avoided creating one of his own, knowing the kind of year he had just had: a year in which he lost his young daughter. But then, in his News Feed, he saw a “suggested post” of a video that we had created for him. And there it was, staring back at him: a photo of his recently deceased daughter.



After reading his story, we invited him to Facebook to talk about his experience. His talk deeply influenced the design team, and one thing he said was turned into a memorable poster by our Analog Research Lab: “When you say ‘edge case,’ you’re really just defining the limits of what you care about.”


Since then, we have changed our approach to these kinds of experiences, and avoid making assumptions about people, their feelings, or their lives. Just listening and being empathetic to people’s experiences is a critical part of building and designing in an ethically responsible way. It’s harder to scale but that’s no excuse.

We aren’t designing for numbers, we are designing for people.

With over 2 billion people using our products, it would be irresponsible NOT to use quantitative data to influence our decisions. But ethically responsible design requires us to look more broadly and more deeply than just at spreadsheets. Qualitative research, and indeed just listening to people with powerful stories about how our technology is impacting them, are critical sources of wisdom, and they help us get beyond measuring to truly assessing our impact on people and the world.



I’ve shared some of the challenges that our industry and my own company are facing. And while I and many others are carefully considering the impact of what we have created and will create, I want to make one thing very clear: even though we have a lot of work to do in facing these challenges, I don’t regret my role in creating these products.




I’ve spent my whole career building tools that democratize systems, tools that have brought the world unprecedented opportunity to access information, to express ourselves creatively, and to get and stay connected to the people we care about most.

These are enduring human needs that tech can and should help address. And by doing it at a global scale, we’ve given voice to the most diverse group of people the world has ever heard from. So despite all of these significant challenges, I would not want to go back to a world before these inventions. It’s because I believe they are so valuable that I am committed to making sure we design and deploy them in the most responsible way possible.



So how do we do that? We need to design for all four quadrants by:



  1. Designing just as much to combat misuse cases as we design for use cases.

  2. Seeking outside expertise to handle complex systems and societal effects.

  3. And being very deliberate and nuanced in how we assess success.


And above all, we must remember that all of us, as designers, as businesses, as an industry, we all have a broad responsibility to ensure that technology is built and deployed in service of humanity and not the other way around.

At Facebook, we like to say that this journey is 1% finished. But I am excited and energized by the challenge, and I am hard pressed to think of a more important thing to be focused on as a designer today.



There’s a saying: nothing worth doing is easy. And this is definitely, most assuredly, worth doing.


