Leveraging Human-AI Collaboration to Combat Misinformation on Social Media
UX/UI Design
AI/ML
Systems design
With the rise of social media, misinformation now has the capacity to spread and cause harm at greater magnitudes than ever before. To combat this, we partnered with the Pacific Northwest National Laboratory (PNNL) to design Veracity: an AI system that supports users in evaluating and taking action against misinformation on social media.
Timeline
February - August 2021
I was responsible for...
UX/UI
Prototyping
Design modeling
Content strategy
User research
Meet the team
Team
Alexandra Hopping, Janelle Wen, Kylon Chiang, Rachel Arredondo
PNNL Clients
Dustin Arendt, Maria Glenski
Advisors
Anna Abovyan, Raelin Musaraca
The Challenge
Users & Platforms vs. Misinformation
On social media, users are subject to psychological and situational factors that hinder their ability to discern fact from fiction—the high speed at which information travels on platforms only makes it more difficult for them to keep up.
Most people don’t want to spread or consume misinformation, but they currently lack the tools to evaluate it or take action against it. These are the people who make up our target audience: users who unintentionally consume and spread misinformation.
The Solution
Veracity: A System of AI Interventions
Veracity is a two-part, platform-agnostic system of AI interventions: a fact-checker and a set of supporting tools that help users evaluate and take action against misinformation on social media.
Fact-checker
Instant Info Evaluation
The Fact-checker’s main purpose is to help social media users quickly discern fact from fiction as they scroll through their feeds. Though many platforms have already implemented fact-checking features, Veracity leverages AI to make fact-checking faster and more holistic.
Engagement hub
View Trends & Take Action
The Engagement Hub tracks user data to surface trends in each user’s engagement with misinformation. In addition, the Hub houses features and settings that support users in taking action against misinformation.
Understanding User Behavior & Pain Points on Social Media
Think-alouds
Expert interviews
Literature review
How do users currently engage with misinformation on social media? What are the primary factors that contribute to the consumption and spread of misinformation?
To answer these questions, we conducted a literature review and other secondary research, think-alouds with users, and interviews with experts in AI, social computing, and psychology. Through this, we surfaced several major pain points that hinder users’ ability to recognize and evaluate misinformation.
Pain point #1
Human Factors
Most people don't actually want to spread or consume misinformation, but psychological and situational factors leave them vulnerable.
Pain point #2
Platforms
Platforms’ objectives of maximizing engagement and fast content consumption inhibit users’ ability to critically evaluate information.
Pain point #3
Information
Information is generated and travels on social platforms at a speed that is impossible for users to keep up with.
In addition, we validated two core user goals—to be able to quickly evaluate and take action against misinformation:
User Goal #1
Quick Info Evaluation
Users want to be able to quickly evaluate information they see in their feeds.
User Goal #2
Take Action
Users want tools that enable them to take informed actions against misinformation they engage with.
Testing for Human-AI Interaction
Exploring User Touchpoints & AI Interventions
Storyboarding
Speed dating
The next step was to test for the qualities that made an AI intervention effective in helping social media users achieve their goals—in particular, how users could build trust in AI.
Storyboarding & Speed Dating
We began by storyboarding different AI interventions for misinformation on social media, categorizing them within a framework by AI involvement level, or how directly involved an intervention is in user interactions on social media. Speed dating then revealed what users need for AI interventions to be trusted and effective in our problem space.
[Framework diagram: storyboarded interventions plotted by AI involvement level, from low to high]
Design Process
Building AI Tools to Help Users Fight Misinformation
Lo-fi prototyping
Determining the Best Features to Combat Misinformation
Using insights from speed dating as a basis, I worked with my team to prototype and test the interventions with the most potential. We then scored the testing data along with other criteria, such as likelihood of user adoption and technical feasibility, to narrow down the features to move forward with: the Fact-checker and the Engagement Hub.
Fact-checker
Helping Users Evaluate Info At the Speed of Their Scrolling
The Fact-checker helps users quickly evaluate information as they scroll, offering several levels of depth: in-feed indicators, fact-check details, and additional credible sources to learn more.
Final design
Exploration of different fact-check indicators
The key design considerations for the Fact-checker revolved around three main areas: how the visual fact-check indicators would manifest in feeds, what content would be most helpful to users in evaluating information credibility, and—most importantly—what elements users needed to build trust in fact-check verdicts.
Designing for trust
Explainability & AI Errors
Through testing, we identified the elements crucial to helping users build trust. Integrating user feedback, we incorporated explanations of how the Fact-checker determines credibility, as well as avenues for action in the case of AI errors.
Fact-checker AI
1. Identify Claims
The AI identifies verifiable claims in social media posts and determines whether they are valid for fact-checking
2. Evaluate Claims
The AI evaluates each claim and generates a new fact-check verdict, or reevaluates the verdict if the claim has been fact-checked before
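A minimal sketch of how this two-step pipeline could be structured is below. Every name here (`extract_claims`, `evaluate_claim`, the `Credibility` labels) is an illustrative assumption rather than the actual PNNL implementation; in a real system, both steps would be model-backed.

```python
from dataclasses import dataclass
from enum import Enum


class Credibility(Enum):
    # Illustrative verdict labels; the deployed taxonomy may differ.
    CREDIBLE = "credible"
    MISLEADING = "misleading"
    FALSE = "false"
    UNVERIFIABLE = "unverifiable"


@dataclass
class Verdict:
    claim: str
    label: Credibility
    explanation: str    # surfaced to users for explainability
    sources: list[str]  # credible sources for further reading


def extract_claims(post_text: str) -> list[str]:
    # Step 1: identify verifiable claims. A real system would use a
    # claim-detection model; naive sentence splitting stands in here.
    return [s.strip() for s in post_text.split(".") if s.strip()]


def evaluate_claim(claim: str) -> Verdict:
    # Step 2: evaluate the claim against retrieved evidence. A placeholder
    # verdict stands in for the model's output here.
    return Verdict(claim, Credibility.UNVERIFIABLE,
                   explanation="No matching evidence found.", sources=[])


def fact_check_post(post_text: str, cache: dict[str, Verdict]) -> list[Verdict]:
    # Produce one verdict per claim. Claims seen before are reevaluated so
    # their cached verdicts can be updated (feeding the Corrections feature).
    verdicts = []
    for claim in extract_claims(post_text):
        verdict = evaluate_claim(claim)
        cache[claim] = verdict
        verdicts.append(verdict)
    return verdicts
```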
Engagement Hub
Empowering Users with Insights & Action Features
The Engagement Hub provides users with insights on their engagement with misinformation, as well as features that support them in taking both preemptive and retrospective actions against misinformation.
Designing for Actionability
Different Levels of Action
The Hub is built around the fact that different users want to take different levels of action against misinformation, from quick batch actions to in-depth investigations of misinformation sources.
Corrections
The Corrections feature notifies users of updates to fact-check verdicts, serving two main purposes: helping users stay aware of changing information on emerging topics, and accounting for errors in fact-check verdicts.
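As a rough illustration, a correction could be modeled as a diff between the verdict a user originally saw and the current verdict. The `Correction` record and its field names below are assumptions for the sketch, not the production data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Correction:
    claim: str
    old_label: str  # the verdict the user originally saw
    new_label: str  # the updated verdict after reevaluation
    updated_at: datetime


def corrections_for_user(seen: dict[str, str],
                         current: dict[str, str]) -> list[Correction]:
    # Surface only claims the user actually engaged with whose verdict has
    # since changed, whether from new information or from an AI error.
    now = datetime.now(timezone.utc)
    return [Correction(claim, old, current[claim], now)
            for claim, old in seen.items()
            if claim in current and current[claim] != old]
```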
Accounts to Review
The Accounts to Review feature surfaces accounts that are responsible for, or are likely to be responsible for, a high proportion of the misinformation in users’ feeds. The feature is powered by AI that evaluates risk in two ways: by evaluating the user’s past engagement with social media content, and by predictively analyzing accounts the user is likely to engage with.
Engagement Hub AI
1. Identify High-Risk Accounts
The AI identifies social media accounts that have a high risk of posting misinformation
2. Evaluate user engagement
The AI surfaces high-risk accounts that users have engaged with or are likely to engage with
3. Recommend Alternatives
The AI recommends similar credible accounts to replace those that the user unfollows
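A hedged sketch of how these three steps might fit together is below. The `Account` fields, risk scores, and thresholds are illustrative assumptions; the real system would derive risk and predicted engagement from trained models.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Account:
    handle: str
    misinfo_risk: float    # 0-1 estimated likelihood of posting misinformation
    topics: frozenset[str]


def accounts_to_review(followed: list[Account],
                       engagement: dict[str, float],
                       risk_threshold: float = 0.7) -> list[Account]:
    # Steps 1-2: flag followed accounts with high misinformation risk, then
    # rank them by how much the user engages (or is predicted to engage).
    flagged = [a for a in followed if a.misinfo_risk >= risk_threshold]
    return sorted(flagged, key=lambda a: engagement.get(a.handle, 0.0),
                  reverse=True)


def recommend_alternatives(unfollowed: Account, candidates: list[Account],
                           max_risk: float = 0.2) -> list[Account]:
    # Step 3: suggest credible accounts covering similar topics to replace
    # an account the user unfollows.
    similar = [a for a in candidates
               if a.misinfo_risk <= max_risk and a.topics & unfollowed.topics]
    return sorted(similar, key=lambda a: len(a.topics & unfollowed.topics),
                  reverse=True)
```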
Future impact
Measuring Success for Social Platforms & Users
Metrics for success
Reducing Misinformation & Rebuilding Trust in Social Media Platforms
Ultimately, the goal of this project is to reduce the spread and consumption of misinformation and, in turn, rebuild user trust in social media platforms. To measure Veracity’s effectiveness in achieving this goal, I worked with my team to establish the following set of success metrics:
If Veracity is successful...
1. Misinformation
The quantity of social media posts containing misinformation should decrease
2. Engagement
The rate at which users engage with misinformation should decrease
3. Satisfaction
Platform feedback should show a high rate of user satisfaction with platform actions against misinformation
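As one example of how such a metric might be operationalized, the engagement metric could be computed as the share of a user’s engagement events that involve flagged posts. The event schema and `flagged_misinfo` field below are assumptions for illustration, not an agreed measurement spec.

```python
def misinfo_engagement_rate(events: list[dict]) -> float:
    # Metric 2: the share of engagement events (likes, shares, replies)
    # that involve posts flagged as misinformation. Success means this
    # rate trends downward after Veracity is introduced.
    if not events:
        return 0.0
    flagged = sum(1 for e in events if e.get("flagged_misinfo", False))
    return flagged / len(events)
```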
Other projects
An accessible autonomous ridesharing app with an integrated VUI
A platform that optimizes team game booking with AI matching
Anomaly alert interfaces for GVSC robots & operators