Marian Harbach (Google), Igor Bilogrevic (Google), Enrico Bacis (Google), Serena Chen (Google), Ravjit Uppal (Google), Andy Paicu (Google), Elias Klim (Google), Meggyn Watkins (Google), Balazs Engedy (Google)

A recent large-scale experiment conducted by Chrome has demonstrated that a "quieter" web permission prompt can reduce unwanted interruptions while only marginally affecting grant rates. However, the experiment and the partial roll-out were missing two important elements: (1) an effective and context-aware activation mechanism for such a quieter prompt, and (2) an analysis of user attitudes and sentiment towards such an intervention. In this paper, we address these two limitations by means of a novel ML-based activation mechanism (and its real-world on-device deployment in Chrome) and a large-scale user study with 13.1k participants from 156 countries. First, the telemetry-based results, computed on more than 20 million samples from Chrome users in the wild, indicate that the novel on-device ML-based approach is both extremely precise (>99% post-hoc precision) and has very high coverage (96% recall for the notifications permission). Second, our large-scale, in-context user study shows that quieting is often perceived as helpful and does not cause high levels of unease for most respondents.
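To make the activation mechanism concrete, the sketch below illustrates one way an on-device decision of this kind could look: a model scores the prompt context and the prompt is "quieted" when a grant is predicted to be unlikely. This is a hypothetical illustration, not Chrome's actual implementation; the feature names, scoring function, and threshold are assumptions.

```python
# Hypothetical sketch (not Chrome's actual code): deciding whether to show a
# "quieter" notification permission prompt based on an on-device prediction.
from dataclasses import dataclass


@dataclass
class PromptContext:
    """Illustrative per-site signals available before showing a prompt."""
    site_engagement_score: float   # how much the user interacts with the site
    prior_prompt_dismissals: int   # past dismissals of prompts on this site
    avg_site_grant_rate: float     # aggregate grant likelihood for the site (0..1)


def predicted_grant_likelihood(ctx: PromptContext) -> float:
    """Stand-in for an on-device ML model scoring the prompt context."""
    score = (0.5 * ctx.avg_site_grant_rate
             + 0.4 * min(ctx.site_engagement_score / 100.0, 1.0)
             - 0.2 * min(ctx.prior_prompt_dismissals, 3) / 3.0)
    return max(0.0, min(1.0, score))


QUIET_THRESHOLD = 0.15  # assumed cut-off below which a grant is unlikely


def should_quiet_prompt(ctx: PromptContext) -> bool:
    """Quiet the prompt when the model predicts the user is unlikely to grant."""
    return predicted_grant_likelihood(ctx) < QUIET_THRESHOLD


if __name__ == "__main__":
    ctx = PromptContext(site_engagement_score=2.0,
                        prior_prompt_dismissals=2,
                        avg_site_grant_rate=0.05)
    print("quiet prompt:", should_quiet_prompt(ctx))
```

In a real deployment the score would come from a trained model evaluated entirely on-device, and the threshold would be tuned to trade off precision (not quieting prompts the user would have granted) against recall (quieting as many unwanted prompts as possible), mirroring the precision and recall figures reported in the abstract.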
