When Is Digital Censorship Permissible? A Conversation Norms Account

Tami Kim

Every day, millions of photos, videos, and comments are posted online. Social media platforms like Facebook, Instagram, and X must decide which of these posts are appropriate and which should be taken down. But how do users feel about these decisions, and what factors do they believe should matter?

Understanding users’ views on content moderation is critical. Platforms rely on users not only to generate and engage with content, but also to report posts they believe violate community standards. If users perceive moderation decisions as inconsistent or unfair, they may lose trust in the platform or disengage altogether. In addition, as lawmakers and regulators pay closer attention to how platforms govern speech, public attitudes can help shape policy grounded in everyday norms and expectations.

In this research, Tami Kim introduces a three-factor framework to understand how users think about censorship decisions. The framework identifies three dimensions that users believe should guide content moderation: the content of the post, the intent behind it, and the outcome it might produce. Content refers to whether a post contains clearly objectionable material, such as nudity, profanity, or violence. Intent captures the motivation of the person posting—for example, whether they intended to cause harm or simply to raise awareness or be humorous. Outcome refers to what could happen if the post remains online, including its potential to mislead, offend, or inspire action.

Although censorship decisions have traditionally relied on content-based rules—for instance, automatically removing posts with certain words or images—this research finds that users expect platforms to consider intent as well. This is because social media platforms are widely used for interpersonal connection and conversation, and users apply the same norms to these digital spaces that they would in face-to-face interactions. For example, users may be more tolerant of posts containing graphic imagery or strong language when they believe the poster’s intent is positive—such as advocating for a cause, raising awareness, or making a joke.

The research also explores when users are more or less likely to expect platforms to consider intent. Generally, users support intent-based moderation more on platforms that foster personal interaction, like Facebook, than on professional platforms like LinkedIn. On social platforms, posts are often viewed as part of casual, everyday conversation, so users expect moderators to take the content creator’s intent into account. On professional platforms, more formal norms tend to apply, and users are more likely to accept content-based rules that prioritize clarity and decorum. Another factor that shapes user expectations is the post’s level of visibility. When a post is shared only with close friends—mimicking a private, interpersonal exchange—users are more likely to expect the platform to consider intent and move beyond relying solely on content-based censorship. But when a post is shared publicly, it takes on a less conversational tone, and users are more inclined to support content-based moderation.

As social media platforms, regulators, and lawmakers work toward consensus on how to govern user-generated content, this research highlights the importance of aligning moderation policies with users’ conversational expectations. Because people use these platforms as spaces for dialogue, they expect censorship decisions to mirror the norms of everyday social interaction—especially by taking intent into account. For managers, this means that rigid, one-size-fits-all rules based solely on content may feel misaligned with user expectations and risk backlash. More broadly, the research reminds us that moderation decisions are no longer just internal policy calls—they increasingly shape how we understand and define free speech in the digital public sphere.

Read full paper:

Tami Kim, "When Is Digital Censorship Permissible? A Conversation Norms Account," Journal of Consumer Research, Volume 52, Issue 1, June 2025, Pages 49–69.