Zhuo Chen, Jiawei Liu, Haotan Liu (Wuhan University)

Neural network models have been widely applied in information retrieval, but their vulnerability remains a significant concern. When retrieving opinions on public topics, this vulnerability can lead not only to inaccurate or irrelevant results but also to manipulated opinions: an adversary can distort the original ranking order according to the stance of the retrieved opinions, potentially influencing searchers' perception of the topic, weakening the reliability of retrieval results, and undermining the fairness of opinion ranking. Motivated by these challenges, we combine stance detection with existing text ranking manipulation methods to experimentally demonstrate the feasibility and threat of opinion manipulation. We then design a user experiment in which each participant independently rates the credibility of the target topic based on either unmanipulated or manipulated retrieval results. The results indicate that opinion manipulation can effectively influence people's perceptions of the target topic. Finally, we propose preliminary countermeasures to mitigate opinion manipulation and to build more reliable and fairer retrieval ranking systems.
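To make the threat concrete, the sketch below illustrates the general idea of stance-aware ranking manipulation described in the abstract: score the stance of each retrieved opinion and re-rank so that documents agreeing with an attacker-chosen stance rise above more relevant but opposing ones. This is not the authors' implementation; the `toy_stance_scorer`, the mixing weight, and all names are hypothetical stand-ins (a real attack would use a trained stance detection model and an existing ranking manipulation method).

```python
# Illustrative sketch only (assumptions, not the paper's code): stance-aware
# re-ranking of retrieved opinions. Stance scores lie in [-1, 1], from
# "against" (-1) to "in favor" (+1) of the target topic.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RetrievedDoc:
    doc_id: str
    text: str
    relevance: float  # original ranking score from the retrieval model


def toy_stance_scorer(text: str) -> float:
    """Hypothetical stance detector: counts pro/con terms as a stand-in for a real model."""
    pro_terms = {"safe", "effective", "beneficial"}
    con_terms = {"dangerous", "harmful", "ineffective"}
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    raw = sum(t in pro_terms for t in tokens) - sum(t in con_terms for t in tokens)
    return max(-1.0, min(1.0, raw / 3.0))


def manipulate_ranking(
    docs: List[RetrievedDoc],
    stance_scorer: Callable[[str], float],
    target_stance: float = 1.0,   # +1.0 promotes supportive opinions, -1.0 opposing ones
    stance_weight: float = 0.7,   # how strongly stance overrides the original relevance
) -> List[RetrievedDoc]:
    """Re-rank documents by mixing retrieval relevance with agreement to the target stance."""
    def manipulated_score(d: RetrievedDoc) -> float:
        # Agreement is 1.0 when the document's stance matches the target, 0.0 when opposite.
        agreement = 1.0 - abs(stance_scorer(d.text) - target_stance) / 2.0
        return (1.0 - stance_weight) * d.relevance + stance_weight * agreement
    return sorted(docs, key=manipulated_score, reverse=True)


if __name__ == "__main__":
    docs = [
        RetrievedDoc("d1", "The treatment is dangerous and harmful.", 0.92),
        RetrievedDoc("d2", "Studies suggest the treatment is safe and effective.", 0.85),
        RetrievedDoc("d3", "Results on the treatment are mixed.", 0.80),
    ]
    # The supportive document d2 is pushed above the more relevant opposing document d1.
    for d in manipulate_ranking(docs, toy_stance_scorer, target_stance=1.0):
        print(d.doc_id, d.text)
```

In this toy setup the most relevant opposing document is demoted below a supportive one, which is the kind of stance-driven reordering the user experiment evaluates for its effect on perceived credibility.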
