Here are some of the factors we Content Designers consider while crafting guidance for suggested user responses.
We consider two main audiences:
The commenter is the person to whom we’re suggesting the content. Before a commenter selects a suggestion, it embodies the product’s voice. After a commenter selects it, it represents the commenter’s voice.
The recipient is the person who receives the suggestion once the commenter selects it. After the commenter sends a suggested response, the recipient can’t tell whether the words were the commenter’s own or something the product suggested they say.
Art vs. text
Art expressions, such as an illustration of a cartoon animal or person, are obviously not created by the commenter, so they carry less weight than a text or emoji response.
In some cases, an illustrated or photographic version of a text response may be more appropriate to suggest than the text-only version. For example, it might feel more appropriate to suggest “I love you” as a sticker or as a caption on a GIF than “I love you” as text.
Slang and informal language
Slang can be tricky: it risks alienating or offending recipients, and it’s challenging to get right across locales and demographics. We avoid misspellings (for example, “woah!”) even when they’re commonly used, but we allow acronyms and phrases that have been well adopted into languages across the internet, such as “LOL.”
Suggestions that assume race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, or medical or genetic condition are likely to misaddress, offend, or discriminate, so we generally avoid any suggestion that requires these assumptions to be correct in order to be relevant.
Inappropriate language, profanity, or vulgarity
Would certain imagery or wording feel inappropriate, creepy, odd, or otherwise risky? If so, we generally recommend that when in doubt, take it out.
It’s important that suggestion sets don’t alienate people. To prevent that, we consider many different facets of diversity, including asking ourselves questions such as:
Are suggestions biased toward stereotypes of gender or race?
Do they communicate a Western bias?
Are we suggesting references that only younger generations would be likely to understand?
Would people with lower digital and reading literacy be able to understand the context?
Are we assuming or promoting negative dialogue with our suggested responses? People may interpret illustrations of negative emotions as more appropriate than text suggesting, for example, sadness or surprise. Facebook suggesting you say, “What on earth?!” could feel quite alarming; suggesting you use the 😯 emoji might feel less so.
We’re mindful of timeliness and of how repeated use could feel dated on the platform over time, and strive to consider which locales get these suggestions to ensure they’re relevant and understandable.
The process of crafting guidance is never fully done; human expression evolves every day, and so, too, must our suggested-response content. For instance, an email platform’s auto-fill feature recently suggested the phrase “climate change,” but it’s increasingly accepted (and accurate) to say “climate crisis” instead. And phrases such as “Hey guys” or “That’s crazy” that could have felt innocuous years ago may no longer be colloquial or culturally appropriate today.