Keika Mori (Deloitte Tohmatsu Cyber LLC, Waseda University), Daiki Ito (Deloitte Tohmatsu Cyber LLC), Takumi Fukunaga (Deloitte Tohmatsu Cyber LLC), Takuya Watanabe (Deloitte Tohmatsu Cyber LLC), Yuta Takata (Deloitte Tohmatsu Cyber LLC), Masaki Kamizono (Deloitte Tohmatsu Cyber LLC), Tatsuya Mori (Waseda University, NICT, RIKEN AIP)

Companies publish privacy policies to improve transparency regarding the handling of personal information. A discrepancy between a privacy policy's descriptions and users' understanding of them risks eroding trust; therefore, when creating a privacy policy, user comprehension should be evaluated. However, periodically evaluating privacy policies through user studies is time-consuming and costly. In this study, as a first step toward replacing user studies with LLM-based evaluation, we investigated how well large language models (LLMs) understand privacy policies and the gaps between their understanding and that of users. We prepared obfuscated privacy policies along with questions to measure the comprehension of LLMs and users. Comparing the two, the average correct-answer rates were 85.2% for LLMs and 63.0% for users. Questions that LLMs answered incorrectly were also answered incorrectly by users, indicating that LLMs can detect descriptions that users tend to misunderstand. By contrast, LLMs understood the technical terms used in privacy policies, whereas users did not. The identified comprehension gaps between LLMs and users provide insights into the potential of automating privacy policy evaluations with LLMs.
