Arjun Arunasalam (Purdue University), Habiba Farrukh (University of California, Irvine), Eliz Tekcan (Purdue University), Z. Berkay Celik (Purdue University)

Refugees form a vulnerable population due to their forced displacement, facing many challenges in the process, such as language barriers and financial hardship. Recent world events, such as the Ukrainian and Afghan refugee crises, have centered this population in online discourse, especially on social media platforms such as TikTok and Twitter. Although much of this discourse is benign, hateful and malicious discourse also emerges, and refugees often become targets of toxic content posted by malicious attackers. Such online toxicity can vary in nature; e.g., it can differ in scale (individual vs. group) and intent (embarrassment vs. harm), and the varying types of toxicity targeting refugees remain largely unexplored. We seek to understand the types of toxic content targeting refugees in online spaces. To do so, we carefully curate seed queries to collect a corpus of ∼3M Twitter posts targeting refugees. We semantically sample this corpus to produce an annotated dataset of 1,400 posts against refugees in seven different languages. We additionally use a deductive approach to qualitatively analyze the motivating sentiments (reasons) behind toxic posts. We discover that trolling and hate speech are the predominant types of toxic content targeting refugees. Furthermore, we uncover four main motivating sentiments (e.g., perceived ungratefulness, perceived fear of safety). Our findings synthesize important lessons for moderating toxic content, especially for vulnerable communities.
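The abstract does not specify how the semantic sampling step is implemented. As one plausible illustration only, not the authors' method, the sketch below embeds posts with a multilingual sentence-transformer model and draws a cluster-stratified sample so that a small annotation set covers the semantic variety of a much larger corpus. The model name, cluster count, and sample size are assumptions made for this example.

```python
# Hypothetical sketch of cluster-stratified "semantic sampling" of a tweet corpus.
# Assumptions: sentence-transformers and scikit-learn are installed; the model,
# number of clusters, and target sample size are illustrative, not from the paper.
import random
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def semantic_sample(posts: list[str], target_size: int = 1400,
                    n_clusters: int = 50, seed: int = 0) -> list[str]:
    """Embed posts, cluster them, then sample evenly across clusters."""
    # Multilingual model, since the posts span several languages.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    embeddings = model.encode(posts, show_progress_bar=True)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init="auto").fit_predict(embeddings)

    rng = random.Random(seed)
    per_cluster = target_size // n_clusters
    sample: list[str] = []
    for cluster_id in range(n_clusters):
        members = [p for p, lbl in zip(posts, labels) if lbl == cluster_id]
        rng.shuffle(members)
        sample.extend(members[:per_cluster])
    return sample
```

A stratified draw of this kind keeps rare but semantically distinct post types in the annotated subset, which a uniform random sample of 1,400 out of ∼3M posts would likely miss.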
