Strategic Research Enterprise

Understanding User Trust and Misinformation Reporting on LinkedIn

Research Abstract

As misinformation spreads faster than the truth, trust on platforms like LinkedIn is increasingly fragile. Through mixed-methods research with 200+ LinkedIn users, this study identifies where users struggle most, and where the product can realistically intervene.

Role

UX Researcher

Timeline

3 Months

Company

LinkedIn (Research Project)

Research Ownership

  • Designed and executed a mixed-methods study using survey and interview data.
  • Led qualitative interviews to explore reporting behaviors and perceived misinformation markers.
  • Synthesized behavioral findings into 3 actionable design concepts.
  • Partnered with engineering to validate research implications against real-time technical constraints.
  • Produced a strategic white paper connecting research insights to long-term user trust and safety planning.

Imagine you are scrolling on LinkedIn…

[Storyboard panels: Amy's scenario and quote]

Research Context

The 23-Minute Gap

Currently, LinkedIn requires approximately 23 minutes to process a reported post. Within this window, the reported content remains fully visible and can continue to spread.
With over 60 countries holding elections in 2024, heightened political debate could accelerate the spread of false information, potentially misleading thousands of users.

[Image: example of fake news on LinkedIn]

Research Question

“How might we address the spread of misinformation during the 23-minute moderation window to maintain user trust?”

Methods

Survey Participants: 196

To identify macro patterns in reporting behavior and trust levels across different content types.

In-depth Interviews: 4

To explore how users judge credibility and react to flagged content.

Journey Mapping

To visualize the lifecycle of a reported post and identify where users experience friction and loss of trust.

A Dual Perspective

To address misinformation on LinkedIn, we engaged with both the users who encounter content and the creators who publish it.

Key Findings

Finding 1

Users experience a general lack of trust

“With all the misinformation and fake news out there now, I just can’t trust anything I see online anymore.”

Finding 2

Users had difficulty identifying misinformation independently

because social validation from their network outweighed credibility cues.

“It's tough to spot misinformation, when everyone in my network shares the same views.”

Finding 3

Users struggled to trust the flagging system

because the platform does not explain why content was labeled as false.

“Even when misinformation is flagged, I’m frustrated because I often don’t even understand why it’s considered false.”

Product Implications

Enhanced Report Context

Structured reporting builds credibility and efficiency in moderation workflows.
[Design mockup: structured reporting flow]

Intervention Over Absolute Removal

Immediate signals preserve trust and slow down virality.
[Design mockups: intervention concepts]

Transparency as Education

The platform should surface the specific reasons content was reported rather than a generic label, helping users understand why it was flagged.

Measured Impact

68%

Increase in platform trust

75%

Say it improves their ability to spot misinformation

75%

Find the explanations useful

In those crucial 23 minutes, users can now become more aware, spot inaccuracies, and think more critically about the content they consume.

Learnings & Reflection

Managing Conflicting Feedback

When two user groups presented conflicting opinions, I prioritized quantitative patterns from our primary target users to ensure the final research implications aligned with overarching business goals.

Learning from the User Journey

Following users through the reporting journey taught me that decision-making is rarely linear and is highly context-dependent, and it showed me how valuable journey mapping is for uncovering nuanced behaviors.