Sebastian Zimmeck (Wesleyan University), Rafael Goldstein (Wesleyan University), David Baraka (Wesleyan University)

Various privacy laws require mobile apps to have privacy policies. Questionnaire-based policy generators are intended to help developers with the task of policy creation. However, generated policies depend on the generators' designs as well as on developers' abilities to correctly answer privacy questions about their apps. In this study, we show that policies generated with popular policy generators often do not reflect apps' privacy practices. We believe that policy generation can be improved by supplementing the questionnaire-based approach with code analysis. We design and implement PrivacyFlash Pro, a privacy policy generator for iOS apps that leverages static analysis. PrivacyFlash Pro identifies code signatures (composed of Plist permission strings, framework imports, class instantiations, authorization methods, and other evidence) and maps them to privacy practices expressed in privacy policies. Resources from package managers are used to identify libraries.
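As an illustration of what such a code signature might look like in practice, the Swift sketch below shows an app component that accesses location data: it imports CoreLocation, instantiates CLLocationManager, and calls an authorization method, and the app would also need the NSLocationWhenInUseUsageDescription permission string in its Info.plist. The class name and structure are hypothetical and only meant to show the kind of evidence that maps to a location disclosure in a privacy policy.

```swift
import CoreLocation  // framework import: one piece of evidence

// Hypothetical component whose code signature indicates location access;
// the app's Info.plist would additionally have to declare the
// NSLocationWhenInUseUsageDescription permission string.
final class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()  // class instantiation

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()  // authorization method
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        // Receiving location updates here is the privacy practice that a
        // generated policy would need to disclose.
    }
}
```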

We tested PrivacyFlash Pro in a usability study with 40 iOS app developers and received promising results, both in terms of reliably identifying apps' privacy practices and in terms of usability. We measured an F-1 score of 0.95 for identifying permission uses. 24 of 40 developers rated PrivacyFlash Pro with at least 9 points on a scale of 0 to 10, yielding a Net Promoter Score of 42.5. The mean System Usability Score of 83.4 is close to excellent. We provide PrivacyFlash Pro as an open source project to the iOS developer community. In principle, our approach is platform-agnostic and adaptable to the Android and web platforms as well. To increase privacy transparency and reduce compliance issues, we make the case for privacy policies as software development artifacts. Privacy policy creation should become a native extension of the software development process and adhere to the mental model of software developers.
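For reference, the Net Promoter Score is the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 through 6). The abstract does not break down the remaining ratings, but a score of 42.5 with 24 promoters out of 40 respondents is consistent with 7 detractors:

\[
\mathrm{NPS} = \frac{24}{40} \times 100 \;-\; \frac{7}{40} \times 100 = 60 - 17.5 = 42.5 .
\]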
