SUMMARY
How can social media promote greater safety, dignity, and authenticity, all while striving to create a place for expression and give people a voice? Much of the conversation around these questions has rightly focused on policy, algorithms, and operational aspects of global content moderation. Here, we’ll explore the emerging discipline of integrity design — and how we can fold it into our core product-making practice as builders of social media.
Even for people who've spent years at the intersection of design, technology and social issues — when it comes to addressing problems like misinformation, hate speech, bullying and harassment — there is no easy answer. The weight and gravity of this space loom large, complex dilemmas often sit at the heart of our work, and we know there is so much more to figure out. At the same time, we’ve learned valuable lessons and are establishing principles, patterns and playbooks that point a way forward.
In this piece — the first in a series from integrity designers across Facebook, Instagram, Messenger, WhatsApp, and Reality Labs — we’ll walk through concrete examples of how design can prevent misuse of our technologies, reduce integrity risk, and promote effective and fair enforcement. We hope to get constructive feedback on our work, spark new ideas to explore, and begin to build a larger community of practice around integrity design.
By carefully crafting the core mechanics of actions like sharing content or connecting with other people, design can help discourage and even prevent certain types of bad experiences. Something you’ll see in these examples is that designing with the goal of preventing misuse isn’t just about building in constraints; it can also be about empowering people with greater context and control.
Either way, we can turn repeatable solutions to common integrity problems into design patterns for broader use. Some patterns include:
Adversarial actors look for ways to systematically abuse social tools, so a big part of integrity design is identifying and closing down vulnerabilities. For example, we want to disincentivize behaviors like blasting out hundreds of friend requests or spamming the same comment into multiple groups. An effective solution is to make this too difficult to do at scale by enforcing “rate limits.” In our messaging products, adding restrictions to prevent unsupervised interactions between adults and minors or unwanted contact with strangers can help protect people from conversations that may be unsafe.
Rate limits on Facebook; messaging restrictions for adults and minors in Instagram; spam folder in Messenger
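To make the rate-limit pattern concrete, here is a minimal sketch of a sliding-window limiter, assuming hypothetical thresholds and a simple (user, action) key; the real systems tune limits per surface and behavior.

```python
import time
from collections import deque


class SlidingWindowRateLimit:
    """Allow at most `max_actions` per `window_seconds` for each (user, action) pair."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self._events: dict[tuple[str, str], deque] = {}

    def allow(self, user_id: str, action: str) -> bool:
        now = time.monotonic()
        events = self._events.setdefault((user_id, action), deque())
        # Drop timestamps that have aged out of the window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_actions:
            return False  # Over the limit: block, delay, or add friction instead.
        events.append(now)
        return True


# Illustrative use: cap friend requests at 20 per hour per account.
limiter = SlidingWindowRateLimit(max_actions=20, window_seconds=3600)
if not limiter.allow("user_123", "friend_request"):
    print("You're sending friend requests too quickly. Try again later.")
```

Blast-out behavior fails this check long before it reaches hundreds of requests, which is what makes the abuse uneconomical at scale.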
Adding context about certain accounts can make it difficult for inauthentic actors to hide abusive or misrepresentative behavior. For example, on Messenger we’ve found that highlighting accounts that were recently created, along with general location information, makes potential spam or impersonation threads easier to spot and avoid. On Facebook, we’ve begun to apply labels in News Feed that help people identify authentic civic posts from official officeholders and provide more context about Fan and Satire Pages. Finding a way to introduce additional transparency while preserving privacy for authentic accounts is often a big part of the design challenge.
Context lines in messaging; labels on Facebook
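As a rough sketch of the context-line pattern, the rule below surfaces account age and coarse location only when the sender isn’t already a connection; the threshold and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

NEW_ACCOUNT_DAYS = 30  # Hypothetical cutoff for "recently created".


@dataclass
class Sender:
    created_at: datetime
    country: str          # Coarse, privacy-preserving location only.
    is_connection: bool   # Already a friend or saved contact?


def context_line(sender: Sender) -> str | None:
    """Return a short context line to show at the top of a new thread, or None."""
    if sender.is_connection:
        return None  # People you already know don't need the extra context.
    age_days = (datetime.now(timezone.utc) - sender.created_at).days
    hints = []
    if age_days < NEW_ACCOUNT_DAYS:
        hints.append(f"Account created {age_days} days ago")
    hints.append(f"Based in {sender.country}")
    return " · ".join(hints)
```

Keeping the location coarse is one way the pattern balances transparency for recipients with privacy for authentic senders.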
How accessible you are on social media should be up to you. Safety controls can give people greater agency to block unwanted contact or other potentially bad experiences. For example, we’ve seen that comment moderation tools on Facebook and Instagram can effectively close off vectors for harassment. Due to the immersive nature of VR experiences, we want to make it easy for people to take action when they need to. In Horizon Worlds, we offer a feature called Safe Zone, which lets people take a break from their surroundings and then block, mute or report.
Comment controls on Facebook; Hidden Words on Instagram; Safe Zone in Horizon Worlds
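A minimal sketch of the viewer-controlled filtering behind tools like Hidden Words might look like the check below; the real features also handle misspellings, emoji and default offensive-term lists, so treat this as the shape of the idea rather than the implementation.

```python
import re


def should_hide_comment(comment: str, author: str,
                        hidden_words: set[str], blocked_authors: set[str]) -> bool:
    """Hide a comment if its author is blocked or it contains a hidden word."""
    if author in blocked_authors:
        return True
    tokens = set(re.findall(r"[\w']+", comment.lower()))
    return any(word.lower() in tokens for word in hidden_words)


# The viewer, not the platform, decides what gets filtered here.
print(should_hide_comment("total loser", "stranger_42", {"loser"}, set()))  # True
```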
There are no singular solutions in integrity design. While we aim to prevent as much risk as possible on our platforms, the reality is that the problems we design for are complex and require a nuanced, multifaceted approach. We need to account for diverse cultural norms and varying personal, societal and situational contexts.
While one solution may not work in isolation, progress can happen when multiple efforts — some of them seemingly small on their own — start to work together in systematic, sustainable ways. Some patterns that can reduce the potential reach and intensity of integrity risks include:
We’ve found that purposeful friction — additional gut-check steps triggered in specific contexts — can help people be more intentional about the content they click, read and share online. This is a generative, scalable pattern, as there are many different signals and situations where it can be used: highly forwarded messages, dated or fact-checked content, unread articles, information about public-interest topics like election results or COVID-19 updates. However, these friction experiences need to feel valuable and accurate, so there’s a balance to strike.
Forwarding limits in WhatsApp; safety notices in Messenger; reshare friction for dated or sensitive content on Facebook
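Here is one way the gut-check logic could be sketched, assuming a handful of illustrative signals; the actual triggers, copy and thresholds differ by product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ShareContext:
    published_at: datetime
    fact_check_rating: str | None   # e.g. "false", "partly false", or None
    viewer_opened_link: bool
    forward_count: int              # Times a message has already been forwarded.


def friction_prompt(ctx: ShareContext) -> str | None:
    """Return the gut-check message to show before sharing, or None to share directly."""
    if ctx.fact_check_rating is not None:
        return "Independent fact-checkers reviewed this. See their rating before sharing?"
    if ctx.forward_count >= 5:
        return "This message has been forwarded many times."
    if datetime.now(timezone.utc) - ctx.published_at > timedelta(days=365):
        return "This article is over a year old and may not reflect current events."
    if not ctx.viewer_opened_link:
        return "You haven't opened this article yet. Read it before sharing?"
    return None
```

Because each rule adds a pause rather than a block, the balance described above shows up directly in the design: the prompt has to be accurate enough that people find it worth the extra tap.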
Many topics rightly invite robust public discourse and debate online. Informative overlays and labels help reduce risks associated with misinformation or potentially sensitive (e.g. graphic) content through annotative UI. In even more nuanced conversations, it can be hard to disentangle facts from opinions, or know when important context is missing. So, we’re also working to understand how we can better highlight reliable information to make comments on Facebook and Instagram posts more helpful and informative for people.
Informative overlays, labels, and highlighted comments on Instagram and Facebook
By helping people understand that there are rules and norms for expression and interaction with others, our platforms can foster more positive community experiences. These can be organic and user-driven. For example, on Instagram, pinnable comments allow account owners to set a more constructive or uplifting tone for larger threads. Proactive nudges — reminders that appear in the user interface, e.g. when someone composes a comment — are a slightly more assertive direction. These encourage people to pause and consider the appropriateness or accuracy of comments before they hit send or post. On Facebook Groups, we’ve been beta testing norm-setting experiences such as new member greetings that include house rules, and community awards that focus on uplifting, positive contributions.
Positive nudges on Instagram; community norm setting in Facebook Groups
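The nudge pattern can be sketched in a few lines, assuming a hypothetical classifier score for the draft; the reminder asks for a pause but never blocks posting.

```python
def compose_nudge(hurtful_score: float, threshold: float = 0.8) -> str | None:
    """Return a reminder to show before a comment is posted, or None.

    `hurtful_score` stands in for a (hypothetical) classifier's estimate that
    the draft may break community norms; the threshold is illustrative.
    """
    if hurtful_score >= threshold:
        return ("Are you sure you want to post this? "
                "Take a moment to review this community's rules before sending.")
    return None
```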
Most integrity-related decisions involve hard tradeoffs between competing interests and equities. We must work to promote safety in our products, while also protecting voice, due process, privacy, and accountability.
It’s important to respect and find creative ways to balance these considerations. For example, from co-design (e.g. with civil society organizations) and community feedback, we’ve learned that sweeping enforcement can sometimes disproportionately impact vulnerable populations, including people who seek to raise awareness about legitimate but sensitive or even safety-related issues.
Design has an important role to play in bringing greater equity, accuracy, proportionality, and procedural fairness to our enforcement systems. For example:
To increase the precision of the content we demote, especially in instances where automated detection has lower confidence, we can draw on input from the community. Reporting is one important mechanism for this, but to gather even more signal we introduced the ability to hide posts right at the top level of News Feed. Lightweight feedback like this also creates an entry point to surface more user controls.
Negative feedback and reporting flows on Facebook
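As a sketch of how community feedback could sharpen demotion decisions, imagine combining a classifier’s confidence with aggregated hide and report signals; every number below is a placeholder, not a real threshold.

```python
def should_demote(classifier_confidence: float, hide_rate: float, report_count: int) -> bool:
    """Decide whether to demote a post in ranking.

    High-confidence detections stand on their own; borderline ones are
    demoted only when lightweight community feedback corroborates them.
    """
    if classifier_confidence >= 0.95:
        return True
    if classifier_confidence >= 0.70 and (hide_rate > 0.02 or report_count >= 10):
        return True
    return False


# A borderline detection with corroborating hides gets demoted; without them it doesn't.
print(should_demote(0.75, hide_rate=0.03, report_count=2))   # True
print(should_demote(0.75, hide_rate=0.001, report_count=0))  # False
```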
We’ve come to understand that the experiences around our enforcement process are often just as important as, if not more important than, any action we might take. People need to know what the rules and penalties are, be understood and heard as individuals, and have meaningful pathways to appeal. Experiences like Account Status strive to address these needs and empower the community at large. Beyond basic transparency and appeals, we’ve been working on more ways for people to influence policy formation and enforcement product development. One example is the Oversight Board, which deliberates openly, makes binding decisions on content, and issues recommendations on our policies and processes that we must publicly respond to.
Account Status flows on Facebook
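A transparency surface like Account Status implies a record behind each enforcement action; the sketch below shows the kind of fields such a record might carry, with hypothetical names and values.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class AppealState(Enum):
    NOT_REQUESTED = "not_requested"
    PENDING = "pending"
    UPHELD = "upheld"
    REVERSED = "reversed"


@dataclass
class EnforcementRecord:
    """One entry a person might see in an Account Status-style surface."""
    content_id: str
    policy_cited: str          # e.g. "Bullying and Harassment"
    penalty: str               # e.g. "Post removed", "7-day limit on posting"
    decided_at: datetime
    appeal: AppealState = AppealState.NOT_REQUESTED

    def request_appeal(self) -> None:
        # A meaningful pathway to appeal is part of the record, not an afterthought.
        if self.appeal is AppealState.NOT_REQUESTED:
            self.appeal = AppealState.PENDING
```

Making the cited rule and the penalty explicit fields is what lets the UI answer “what happened and why” before a person ever files an appeal.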
Working on safety and integrity issues is hard and humbling, but we truly believe design can have a big impact. We hope these patterns to prevent, reduce, and responsibly enforce on integrity risks help illustrate areas of both progress and opportunity.
This is by no means an exhaustive toolkit or taxonomy; in fact, you’ve probably noticed that many of the elements and intents mesh together across product examples. As we continue to pursue progress on the most challenging issues facing the internet and society at large, we know we must proceed with care and work together with the broader design community to chart meaningful pathways forward.