Red Giant
Many democracies around the globe are facing challenges related to election integrity, democratic governance, public health, and climate change. These challenges are exacerbated by social media platforms, which facilitate the spread of toxic, clickbaity, hyper-partisan, and sometimes misinformative content. In addition, the online environment and social media offer access to virtually unlimited sources and information, leading the majority of users to consume entertainment and low-quality content rather than credible news and public affairs. This further increases public susceptibility to misinformative or hyper-partisan rhetoric.
The Red Giant Team is working on several research projects that aim to tackle these challenges from different angles.
First, the team is examining whether Large Language Models (LLMs) can be used to reliably identify and downrank harmful content on social media platforms. We focus on YouTube: given its rising importance as a vital information source, we aim to identify harmful content that users may encounter there. We define harmful content as YouTube videos promoting information harms, hate and harassment harms, ideological harms, addictive harms, clickbait harms, exploitative harms, and physical harms, and we examine whether LLMs or human coders identify such content more accurately. We use GPT-4, Amazon Mechanical Turk (MTurk) workers, and trained domain experts to label approximately 70,000 YouTube videos.
ChatGPT vs. Humans Identifying “Harmful” YouTube Videos: Whom Should We Trust in the Difference of the Content-Annotation Tasks?
Claire Wonjeong Jo, Miki Wesolowska, Magdalena Wojcieszak. Paper presented at the 74th Annual Conference of the International Communication Association, Gold Coast, Australia.
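As an illustration of the annotation task described above, below is a minimal sketch of LLM-based harm labeling, assuming the OpenAI Python client. The prompt wording, the single-label scheme, and the label_video helper are hypothetical simplifications for exposition, not the team's actual pipeline.

```python
# Minimal sketch of LLM-based harm labeling (illustrative only, not the
# team's actual pipeline). Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; prompt and helper are hypothetical.
from openai import OpenAI

client = OpenAI()

# Harm taxonomy from the project description, plus a "no harm" fallback.
HARM_CATEGORIES = [
    "information harms", "hate and harassment harms", "ideological harms",
    "addictive harms", "clickbait harms", "exploitative harms",
    "physical harms", "no harm",
]

def label_video(title: str, description: str) -> str:
    """Ask GPT-4 to assign exactly one harm category to a video's metadata."""
    prompt = (
        "Classify the following YouTube video into exactly one of these "
        "categories: " + ", ".join(HARM_CATEGORIES) + ".\n\n"
        f"Title: {title}\nDescription: {description}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output eases comparison with human coders
    )
    return response.choices[0].message.content.strip()

print(label_video("You WON'T BELIEVE this cure!", "Doctors hate this trick..."))
```

Because the same videos also receive MTurk and expert labels, agreement between LLM and human annotations can then be computed directly.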
In an extension of this project, the team, jointly with computer science students and professors from UC Davis, is examining whether LLMs can minimize recommendations of, and exposure to, harmful content. We introduce a novel re-ranking approach for harm mitigation via LLMs in zero-shot and few-shot settings. We also propose three novel metrics that measure whether recommended content aligns with user preferences or exposes users to harmful content. Through experiments on simulated data from YouTube, we demonstrate the effectiveness of our LLM re-ranking approach and empirically show how it can mitigate harm on social media platforms.
Preference Re-ranking Using Large Language Models for Mitigating Exposure to Harmful Content on Social Media Platforms
Rajvardhan Oak, Magdalena Wojcieszak, Anshuman Chhabra. Paper submitted to the Annual Meeting of the Association for Computational Linguistics (ACL).
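A rough sketch of what zero-shot LLM re-ranking can look like follows; the prompt, the index-parsing logic, and the harm_exposure_at_k measure are illustrative assumptions, not the paper's exact method or one of its three proposed metrics.

```python
# Illustrative zero-shot LLM re-ranking sketch (hypothetical prompt and
# helpers; not the paper's exact method). Assumes the OpenAI Python client.
from openai import OpenAI

client = OpenAI()

def rerank(candidates: list[str], user_history: list[str]) -> list[str]:
    """Ask an LLM to reorder recommended videos, demoting likely harmful items
    while keeping content that matches the user's interests on top."""
    numbered = "\n".join(f"{i}. {title}" for i, title in enumerate(candidates))
    prompt = (
        "A user recently watched:\n" + "\n".join(user_history) + "\n\n"
        "Re-rank the candidate videos below so that videos matching the user's "
        "interests come first and likely harmful videos (misinformation, hate, "
        "clickbait) move to the bottom. Reply with the indices only, "
        "comma-separated.\n\n" + numbered
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content
    order = [int(tok) for tok in reply.replace(" ", "").split(",") if tok.isdigit()]
    return [candidates[i] for i in order if i < len(candidates)]

def harm_exposure_at_k(ranking: list[str], harmful: set[str], k: int = 5) -> float:
    """Share of the top-k slots occupied by known-harmful items: a simple
    exposure measure in the spirit of, but not identical to, the paper's metrics."""
    return sum(item in harmful for item in ranking[:k]) / k
```

Comparing harm_exposure_at_k before and after re-ranking on simulated feeds gives a quick read on whether the re-ranker reduces exposure without discarding preferred content.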
We also use LLMs to identify polarization in language used on social media platforms.
Measuring Language Polarization Among Politicians and News Media With Large Language Models. Bartek Balcerzak, Magdalena Wojcieszak, Anshuman Chhabra. Paper presented at the 74th Annual Conference of the International Communication Association, Gold Coast, Australia.
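As a sketch of how an LLM can serve as a measurement instrument for language polarization, the snippet below asks the model for a partisan-slant score per statement; the scale, prompt, and slant_score helper are hypothetical and not the paper's actual operationalization.

```python
# Sketch: using an LLM to score the partisan slant of a statement
# (hypothetical scale and prompt; not the paper's operationalization).
from openai import OpenAI

client = OpenAI()

def slant_score(text: str) -> float:
    """Return a score in [-1, 1]; the absolute value can serve as a rough
    polarization proxy when aggregated over a speaker's or outlet's posts."""
    prompt = (
        "Rate the partisan slant of this statement on a scale from -1 "
        "(strongly left-leaning) through 0 (neutral) to 1 (strongly "
        "right-leaning). Reply with a single number only.\n\n" + text
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.strip()
    try:
        return max(-1.0, min(1.0, float(reply)))
    except ValueError:
        return 0.0  # fall back to neutral if the model returns non-numeric text
```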
The second strand of research examines whether these challenges could be partly overcome by shifting the proportion of “good” (i.e., verifiable and democratically useful) to “bad” (e.g., clickbaity, misinformative, radical) information in people’s social media feeds. The Red Giant Team is collaborating with Reality Team, a US-based non-profit working to place credible information in users’ social media feeds in naturalistic settings (see https://realityteam.org/). As part of this collaboration, the Team is analyzing completed field experiments and will run new ones that use targeted Instagram ads to put factual, verified information on topics of civic importance in users’ feeds, with the aim of enhancing belief accuracy, pro-social attitudes, and behavioral intentions. Just as corporations use advertisements to sell products, we target users with short video-based ads that make them better informed and more resilient to various democratic threats, focusing specifically on the large population of users who do not consume news and public affairs information on these platforms. The field experiments analyzed so far focused on four topics: climate change, COVID-19 vaccines, digital literacy, and election integrity.
Ad-Based Social Media Interventions Increase Belief Accuracy and Generate Pro-Social Opinions Among Non-News Readers
Magdalena Wojcieszak, Maria Babinska, Dominik Batorski, Reality Team Members. Paper submitted to Nature Human Behaviour.
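To illustrate the kind of analysis such a field experiment supports, here is a minimal sketch on simulated data: users are split into an ad-exposed and a control group, and the difference in mean belief accuracy is tested. The effect size, scale, and sample sizes are made up for the illustration and do not reflect the study's results.

```python
# Illustrative treatment-effect analysis for an ad-based field experiment.
# All numbers are simulated; this is not the study's data or full model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical belief-accuracy scores (0-10 scale) for control vs. ad-exposed users.
control = rng.normal(loc=5.0, scale=1.5, size=500)
treated = rng.normal(loc=5.4, scale=1.5, size=500)  # simulated 0.4-point lift

ate = treated.mean() - control.mean()            # average treatment effect
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"ATE = {ate:.2f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

In the actual experiments, randomization happens through the ad-delivery system, so the analysis would additionally account for exposure rates and topic-level heterogeneity.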
Prof. Magdalena Wojcieszak
Magdalena Wojcieszak (Ph.D., University of Pennsylvania) is a Professor of Communication at UC Davis, an Associate Researcher at the University of Warsaw, Poland (PI of the ERC Consolidator Grant NEWSUSE), an Affiliate Faculty in Computational Social Science, and a Member of the Graduate Group in Computer Science at UC Davis. Previously, she directed the ERC Starting Grant EXPO at the University of Amsterdam (2018-2023). Prof. Wojcieszak examines how people select (political) information online, the effects of digital media on extremity, polarization, and (mis)perceptions, and interventions that incentivize platform algorithms to promote quality and diverse political content. She has (co-)authored ~90 articles (incl. Science, Nature, Science Advances, PNAS) and is an Associate Editor of Political Communication. She is part of the U.S. 2020 Facebook & Instagram Election Study, an independent partnership with Meta to study the impact of Facebook and Instagram on the 2020 U.S. elections, and of the Misinformation Committee at Social Science One. She has received awards for her teaching and research, including being elected a Fellow of the International Communication Association.
Dr. Dominik Batorski
A sociologist and data scientist who combines academic work at the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at the University of Warsaw with consulting, public outreach, and business activities. His academic research focuses on the social and economic transformations driven by the proliferation of information and communication technologies and the development of artificial intelligence. He teaches courses on computational social sciences, data science, and value creation based on data and AI at the University of Warsaw, as well as in postgraduate programs at Kozminski University and Warsaw University of Technology.
As an expert, he has repeatedly advised governmental and local administration units and has led the preparation of numerous analytical reports and expert opinions. Currently, he supports NASK-PIB in developing analytical solutions that will help public administration craft evidence-based public policies. He is a member of the Council of the Polish Economic Institute and the chair of the Council of the Public Opinion Research Center (CBOS).
He co-founded Sotrender, a company that develops analytical tools and machine learning solutions for social media marketing. He is an active leader in the data science and machine learning community: since 2014, he has organized the Data Science Warsaw meetups, and he chairs the Program Council of the Data Science Summit conferences.
Dr. Paweł Matuszewski
Paweł Matuszewski holds a habilitation in sociology and is a university professor at Collegium Civitas.
His research focuses on identifying causal relationships between individual-level actions and broader social phenomena. He is particularly interested in the mechanisms behind the formation and spread of political beliefs and behaviors in cyberspace. His latest research focuses on politically oriented conspiracy theories, as well as methods of countering the spread of false information through digital media.
Prof. Matuszewski is the author of the monographs “Cyberplemiona. Analiza zachowań użytkowników Facebooka w trakcie kampanii parlamentarnej” (“Cybertribes: An Analysis of Facebook Users’ Behavior During the Parliamentary Campaign”; 2018, PWN Scientific Publishers) and “Logika przekonań społecznych” (“The Logic of Social Beliefs”; 2017, UKSW), as well as dozens of articles on the sociology of politics, the sociology of the Internet, the sociology of public opinion, and social research methodology. He has participated in academic and commercial research projects, including those for consulting firms, research institutions, academic institutions, businesses, trade unions, and political parties.
Paweł Matuszewski is an active member of the Polish Sociological Society (Chairman of the Board of the Warsaw branch, 2018-2022) and the European Sociological Association.
Claire Wonjeong Jo
Claire Wonjeong Jo is a PhD student in Communication at UC Davis, advised by Prof. Magdalena Wojcieszak. She earned a B.A. and an M.A. in Journalism and Communication at Kyung Hee University in South Korea. Her research lies in computational social science and political communication, with a focus on mitigating online negativity. On the one hand, she develops methods to identify harmful content; on the other, she explores partisan news consumption environments that can minimize uncivil behavioral reactions. She primarily uses computational methods, such as large language models, network analysis, deep learning, and natural language processing, to analyze digital trace data.
Dr. Anshuman Chhabra
Dr. Anshuman Chhabra is an Assistant Professor of Computer Science and Engineering at the University of South Florida, working on improving next-generation AI models. He obtained his PhD in Computer Science at the University of California, Davis. His research seeks to safeguard users from harm by curbing the negative behavior of foundational ML/AI models as well as real-world systems employing these models. He received the UC Davis Graduate Student Fellowship in 2018 and has held research positions at Lawrence Berkeley National Laboratory (2017), the Max Planck Institute for Software Systems, Germany (2020), and the University of Amsterdam, Netherlands (2022). His research has been internationally recognized with an oral presentation at ICLR 2024 and a spotlight presentation at AAAI 2020.