
Design for safety and integrity in social technologies

By David G., Sara G., Hailey C.
7 min read
February 28, 2022
Illustrated construction cranes drop a giant lock and safety check symbol into an abstract construction scene composed of digital interfaces.

SUMMARY

Get an inside look at the integrity design discipline and how Meta uses it to combat hate speech, misinformation and more.



How can social media promote greater safety, dignity, and authenticity, all while striving to create a place for expression and give people a voice? Much of the conversation around these questions has rightly focused on policy, algorithms, and operational aspects of global content moderation. Here, we’ll explore the emerging discipline of integrity design — and how we can fold it into our core product-making practice as builders of social media.



Even for people who’ve spent years at the intersection of design, technology and social issues, there is no easy answer when it comes to addressing problems like misinformation, hate speech, bullying and harassment. The gravity of this space looms large, complex dilemmas often sit at the heart of our work, and we know there is so much more to figure out. At the same time, we’ve learned valuable lessons and are establishing principles, patterns and playbooks that point a way forward.



In this piece — the first in a series from integrity designers across Facebook, Instagram, Messenger, WhatsApp, and Reality Labs — we’ll walk through concrete examples of how design can prevent misuse of our technologies, reduce integrity risk, and promote effective and fair enforcement. We hope to get constructive feedback on our work, spark new ideas to explore, and begin to build a larger community of practice around integrity design.

Misuse prevention

By carefully crafting the core mechanics of how actions like sharing content or connecting with other people fundamentally work, design can help discourage and even prevent certain types of bad experiences. Something you’ll see in these examples is that designing to prevent misuse isn’t just about building in constraints; it can also be about empowering people with greater context and control.



Either way, we can turn repeatable solutions to common integrity problems into design patterns for broader use. Some patterns include:



Limits and restrictions


Adversarial actors look for ways to systematically abuse social tools, so a big part of integrity design is identifying and closing down vulnerabilities. For example, we want to disincentivize behaviors like blasting out hundreds of friend requests or spamming the same comment into multiple groups. An effective solution is to make this too difficult to do at scale by enforcing “rate limits.” In our messaging products, adding restrictions to prevent unsupervised interactions between adults and minors or unwanted contact with strangers can help protect people from conversations that may be unsafe.


Rate limits on Facebook; messaging restrictions for adults and minors on Instagram; spam folder in Messenger
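
As a rough illustration, here is what a sliding-window rate limiter can look like in code. This is a minimal sketch in Python, not Meta’s implementation: the RateLimiter name, the limit values, and the in-memory storage are all illustrative assumptions (a production system would use distributed counters and carefully tuned thresholds).

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter sketch; values are illustrative."""

    def __init__(self, max_actions: int = 20, window_seconds: int = 3600):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.history: dict[str, deque] = defaultdict(deque)

    def allow(self, user_id: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        events = self.history[user_id]
        # Drop events that have fallen out of the window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_actions:
            return False  # Over the limit: block or queue the action.
        events.append(now)
        return True

limiter = RateLimiter()
if limiter.allow("user_123"):
    pass  # e.g., proceed with sending a friend request
```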

Account transparency


Adding context about certain accounts can make it difficult for inauthentic actors to hide abusive or misrepresentative behavior. For example, on Messenger we’ve found that highlighting recently created accounts, along with general location information, makes potential spam or impersonation threads easier to spot and avoid. On Facebook, we’ve begun to apply labels in News Feed that help people identify authentic civic posts from official officeholders and provide more context about Fan and Satire Pages. Finding a way to introduce additional transparency while preserving privacy for authentic accounts is often a big part of the design challenge.


Context lines in messaging; labels on Facebook
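
To make the idea concrete, here is a minimal sketch of how context lines like these might be derived from basic account signals. The field names (created_at, country, mutual_friends), the 30-day threshold, and the context_lines helper are hypothetical, for illustration only.

```python
from datetime import datetime, timezone

def context_lines(account: dict, viewer_country: str) -> list[str]:
    """Hypothetical helper; field names and thresholds are assumptions."""
    lines = []
    # "created_at" is assumed to be a timezone-aware datetime.
    age_days = (datetime.now(timezone.utc) - account["created_at"]).days
    if age_days < 30:
        lines.append("Account created recently")
    country = account.get("country")
    if country and country != viewer_country:
        # Show general location only, preserving privacy for
        # authentic accounts.
        lines.append(f"Account based in {country}")
    if not account.get("mutual_friends", 0):
        lines.append("No mutual friends")
    return lines
```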

Safety controls


How accessible you are on social media should be up to you. Safety controls can give people greater agency to block unwanted contact or other potentially bad experiences. For example, we’ve seen that comment moderation tools on Facebook and Instagram can effectively close off vectors for harassment. Due to the immersive nature of VR experiences, we want to make it easy for people to take action when they need to. In Horizon Worlds, we offer a feature called Safe Zone, which lets people take a break from their surroundings and then block, mute or report.


Comment controls on Facebook; Hidden Words on Instagram; Safe Zone in Horizon Worlds
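
As a sketch of the underlying mechanic, a Hidden Words-style control can be modeled as a user-defined phrase filter applied before a comment is displayed. The function below is a simplified assumption; real matching also handles misspellings, emoji and language variants.

```python
def should_hide_comment(text: str, hidden_phrases: set[str]) -> bool:
    """Hide a comment if it contains any phrase the account owner chose."""
    normalized = text.casefold()
    return any(phrase in normalized for phrase in hidden_phrases)

hidden = {"spam phrase", "offensive word"}
comments = ["Great post!", "Click here: SPAM PHRASE"]
visible = [c for c in comments if not should_hide_comment(c, hidden)]
# visible == ["Great post!"]
```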

Risk reduction

There are no singular solutions in integrity design. While we aim to prevent as much risk as possible on our platforms, the reality is that the problems we design for are complex and require a nuanced, multifaceted approach. We need to account for diverse cultural norms and varying personal, societal and situational contexts.



While one solution may not work in isolation, progress can happen when multiple efforts — some of them seemingly small on their own — start to work together in systematic, sustainable ways. Some patterns that can reduce the potential reach and intensity of integrity risks include:



Contextual friction


We’ve found that purposeful friction — additional gut-check steps triggered in specific contexts — can help people be more intentional about the content they click, read and share online. This is a generative, scalable pattern, as there are many signals and situations where it can be used: highly forwarded messages, dated or fact-checked content, unread articles, and information about public-interest topics like election results or COVID-19 updates. That said, it’s important that these friction experiences feel valuable and accurate, so there’s a balance to strike.

A device screen in WhatsApp displays a message sending limit; another, a Messenger notice that asks, "Do you know this person?"; the last, a Facebook message that reads: "Make sure you're sharing reliable information."

Forwarding limits in WhatsApp; safety notices in Messenger; reshare friction for dated or sensitive content on Facebook
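
One way to picture this pattern is as a single decision function that maps contextual signals to an optional interstitial. Everything below (the ShareContext fields, thresholds and messages) is a hypothetical sketch of the triggering logic, not product code.

```python
from dataclasses import dataclass

@dataclass
class ShareContext:
    """Hypothetical signals; names and thresholds are illustrative only."""
    forward_count: int      # how many times the message has been forwarded
    article_age_days: int   # age of the linked article, in days
    opened_by_sharer: bool  # whether the sharer actually opened the link
    fact_checked: bool      # flagged by a fact-checking partner

def friction_prompt(ctx: ShareContext) -> str | None:
    """Return interstitial text, or None to let the share proceed freely."""
    if ctx.fact_checked:
        return "Independent fact-checkers reviewed this information."
    if not ctx.opened_by_sharer:
        return "You haven't opened this article yet. Read it before sharing?"
    if ctx.article_age_days > 90:
        return "This article is over 3 months old. Still want to share it?"
    if ctx.forward_count >= 5:
        return "This message has been forwarded many times."
    return None  # No friction needed.

prompt = friction_prompt(ShareContext(6, 10, True, False))
# prompt == "This message has been forwarded many times."
```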

Informative treatments


Many topics rightly invite robust public discourse and debate online. Informative overlays and labels use annotative UI to help reduce risks associated with misinformation or potentially sensitive (e.g. graphic) content. In even more nuanced conversations, it can be hard to disentangle facts from opinions, or to know when important context is missing. So, we’re also working to understand how we can better highlight reliable information to make comments on Facebook and Instagram posts more helpful and informative for people.


Informative overlays, labels, and highlighted comments on Instagram and Facebook

Positive community norms


By helping people understand that there are rules and norms for expression and interaction with others, our platforms can foster more positive community experiences. These can be organic and user-driven. For example, on Instagram, pinnable comments allow account owners to set a more constructive or uplifting tone for larger threads. Proactive nudges — reminders that appear in the user interface, e.g. when someone composes a comment — are a slightly more assertive direction. These encourage people to pause and consider the appropriateness or accuracy of comments before they hit send or post. On Facebook Groups, we’ve been beta testing norm-setting experiences such as new member greetings that include house rules, and community awards that focus on uplifting, positive contributions.


Positive nudges on Instagram; community norm setting in Facebook Groups
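
A proactive nudge can be framed as a threshold check at compose time. In the sketch below, score_fn stands in for an assumed model signal estimating that a draft comment may be hurtful or inaccurate; the threshold and copy are illustrative, not Meta’s actual system.

```python
NUDGE_THRESHOLD = 0.7  # illustrative value

def maybe_nudge(draft: str, score_fn) -> str | None:
    """Return nudge text if the draft looks potentially problematic."""
    if score_fn(draft) >= NUDGE_THRESHOLD:
        return ("Are you sure you want to post this? "
                "Take a moment to review before sending.")
    return None

# Usage with a stand-in scoring function:
print(maybe_nudge("example draft", lambda text: 0.9))
```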

Effective and fair enforcement

Most integrity-related decisions involve hard tradeoffs between competing interests. We must work to promote safety in our products while also protecting voice, due process, privacy, and accountability.



It’s important to respect and find creative ways to balance these considerations. For example, from co-design (e.g. with civil society organizations) and community feedback, we’ve learned that sweeping enforcement can sometimes disproportionately impact vulnerable populations, including people who seek to raise awareness about legitimate but sensitive or even safety-related issues.



Design has an important role to play in bringing greater equity, accuracy, proportionality, and procedural fairness to our enforcement systems. For example:



Community feedback


To increase the precision of the content we demote, especially in instances where automated detection has lower confidence, we can leverage input from the community. Reporting is one important mechanism for this, but to gather even more signal, we introduced the ability to hide posts right at the top level of News Feed. Lightweight feedback like this also creates an entry point to surface more user controls.

A device screen shows a Facebook feed and an overlay that reads "Tap to hide this post;" another, a Facebook overlay that reads "Post hidden;" the last, a Facebook overlay that reads "Report: please select a problem," with a list of issues.

Negative feedback and reporting flows on Facebook
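
Conceptually, lightweight feedback like hides can be blended with automated detection into a single demotion signal. The weights and threshold below are invented for this sketch; they are not Meta’s actual ranking formula.

```python
def demotion_score(hides: int, impressions: int,
                   classifier_confidence: float) -> float:
    """Combine the hide rate with (possibly low-confidence) detection."""
    if impressions == 0:
        return 0.0
    hide_rate = hides / impressions
    # Community signal matters more when automated confidence is low.
    return 0.6 * hide_rate + 0.4 * classifier_confidence

score = demotion_score(hides=42, impressions=1000, classifier_confidence=0.3)
should_demote = score > 0.1  # illustrative threshold
```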

Education and recourse


We’ve come to understand that the experiences around our enforcement process are often just as important, if not more so, than any action we might take. People need to know what the rules and penalties are, be understood and heard as individuals, and have meaningful pathways to appeal. Experiences like Account Status strive to address these needs and empower the community at large. Beyond basic transparency and appeals, we’ve been working on more ways for people to influence policy formation and enforcement product development, e.g. via the Oversight Board, which deliberates openly, makes binding final decisions on content, and issues recommendations on our policies and processes that we must publicly respond to.


Account Status flows on Facebook

Integrity design tomorrow

Working on safety and integrity issues is hard and humbling, but we truly believe design can have a big impact. We hope these patterns for preventing misuse, reducing risk, and enforcing effectively and fairly help illustrate areas of both progress and opportunity.



This is by no means an exhaustive toolkit or taxonomy; in fact, you’ve probably noticed that many of the elements and intents mesh together across product examples. As we continue to pursue progress on the most challenging issues facing the internet and society at large, we know we must proceed with care and work together with the broader design community to chart meaningful pathways forward.

