Usable Security and Privacy (USEC) Symposium 2022
Note: All times are in PDT (UTC-7) and all sessions are held in Kon Tiki Ballroom.
Thursday, 28 April
-
Gokul CJ (TCS Research, Tata Consultancy Services Ltd., Pune), Vijayanand Banahatti (TCS Research, Tata Consultancy Services Ltd., Pune), Sachin Lodha (TCS Research, Tata Consultancy Services Ltd., Pune)
Phishing threats are on the rise, especially through Business Email Compromise (BEC). Despite the availability of several tools for phishing email detection, attacks are becoming smarter and more personal, targeting individuals to gain access to personal and organizational information. Game-based cybersecurity training methods have been found to produce positive results in educating users. Along this line, we introduce PickMail, an anti-phishing awareness game that simulates typical real-life email scenarios to train an organization’s employees. In PickMail, we train participants to judge the legitimacy of an email by inspecting its various parts, such as the sender’s email domain, hyperlinks, attachments, and forms. The game also records participants’ decision-making steps that lead to their final judgment. Our study with 478 participants shows how the serious game-based training helped the participants make better judgments on emails, with correctness in identifying email legitimacy reaching 92.62%. The study also provided us with insights that could help develop better training methods and user interfaces.
-
Benjamin Maximilian Berens (SECUSO, Karlsruhe Institute of Technology), Katerina Dimitrova, Mattia Mossano (SECUSO, Karlsruhe Institute of Technology), Melanie Volkamer (SECUSO, Karlsruhe Institute of Technology)
The use of security awareness and education programmes is very common in organisations. But how effective are they over time? Some initial research on this question exists, most notably the extensive study by Reinheimer et al. [74], which measured effectiveness at several time intervals. They found results that were still significantly better than before the awareness programme after four months, but no longer after six months. This left a two-month window open for scheduling a reminder. The contribution of our paper is to study whether the reminder should be closer to four or to six months; thus, we measured effectiveness after five months. Since results were still significantly better than before the programme after five months, we conclude that reminders should be scheduled closer to six months rather than already after five. However, we kindly invite the community to conduct more long-term studies, in different contexts, to confirm these findings.
-
June De La Cruz (INSPIRIT Lab, University of Denver), Sanchari Das (INSPIRIT Lab, University of Denver)
Gamification is an interactive technology that enhances the user experience by designing modular objectives into game-design elements. In the same manner, gamification has the potential to enhance cybersecurity awareness for neurodiverse individuals and people with disabilities by using Assistive Technology (AT) to achieve reward-system objectives. To understand this further, we conducted a detailed systematization of knowledge (SoK) on 71 peer-reviewed publications that concentrate research efforts on increasing cybersecurity awareness through accessible gamification. The findings of this SoK establish the fundamental components required to address the inclusive nature of gamification in cybersecurity, and thereby identify requirements-gathering objectives for improving outcomes in raising cybersecurity awareness. After a methodical process of iterative screening and manual analysis in this targeted subject matter, we found that only 9 of the 71 gamified cybersecurity research initiatives directly address “accessibility” and the implementation methods for game-design elements that would facilitate an accessible user experience. Moreover, a cross-functional Learning Management System (LMS) and Modular Reward System can be optimized by data formulated through a Technology Acceptance Model (TAM) for people with disabilities using AT. Lastly, we propose that a modular training format should effectively engage users and facilitate the user interface and user experience despite context-oriented limitations on physical ability.
-
Vincent Drury (IT-Security Research Group, RWTH Aachen University), Rene Roepke (Learning Technologies Research Group, RWTH Aachen University), Ulrik Schroeder (Learning Technologies Research Group, RWTH Aachen University), Ulrike Meyer (IT-Security Research Group, RWTH Aachen University)
Anti-phishing learning games are a promising approach to educating the general population about phishing, as they offer a scalable, motivational, and engaging environment for active learning. Existing games have been criticized for their limited game mechanics, which mostly require binary decisions to advance in the games, and for failing to consider the users’ familiarity with online services presented in the game. In this paper, we present the evaluation of two novel game prototypes that incorporate more complex game mechanics. The first game requires the classification of URLs into several different categories, thus giving additional insights into the player’s decision, while the second game addresses a different cognitive process by requiring the creation of new URLs. We compare the games with each other and with a baseline game that uses binary decisions similar to existing games. A user study with 133 participants shows that while all three games lead to performance increases, none of the proposed game mechanics offer significant improvements over the baseline. However, we show that the analysis of the new games offers valuable insights into the players’ behavior and problems while playing the games, in particular with regard to different categories of phishing URLs. Furthermore, the user study shows that the participants were significantly better at classifying URLs of services they know than those they do not know. These results indicate that the distinction between known and unknown services in phishing tests is important for gaining a better understanding of the test results, and should be considered when designing and reproducing studies.
-
Megan Nyre-Yu (Sandia National Laboratories), Elizabeth S. Morris (Sandia National Laboratories), Blake Moss (Sandia National Laboratories), Charles Smutz (Sandia National Laboratories), Michael R. Smith (Sandia National Laboratories)
Technological advances relating to artificial intelligence (AI) and explainable AI (xAI) techniques are at a stage of development that requires a better understanding of operational context. AI tools are primarily viewed as black boxes, and some hesitation exists in employing them due to a lack of trust and transparency. xAI technologies largely aim to overcome these issues to improve the operational efficiency and effectiveness of operators, speeding up the process and allowing for more consistent and informed decision making from AI outputs. Such efforts require not only robust and reliable models but also relevant and understandable explanations for end users to successfully assist in achieving user goals, reducing bias, and improving trust in AI models. Cybersecurity operations settings represent one such context, in which automation is vital for maintaining cyber defenses. AI models and xAI techniques were developed to aid analysts in identifying events and making decisions about flagged events (e.g., a network attack). We instrumented the tools used for cybersecurity operations to unobtrusively collect data and evaluate the effectiveness of xAI tools. During a pilot study for deployment, we found that the xAI tools, while intended to increase trust and improve efficiency, were not utilized heavily, nor did they improve analyst decision accuracy. Critical lessons were learned that impact the utility and adoptability of the technology, including consideration of end users, their workflows, their environments, and their propensity to trust xAI outputs.
-
Zekun Cai (Penn State University), Aiping Xiong (Penn State University)
To enhance the acceptance of connected autonomous vehicles (CAVs) and facilitate designs that protect people’s privacy, it is essential to evaluate how people perceive data collection and use inside and outside CAVs and to investigate effective ways to help them make informed privacy decisions. We conducted an online survey (N = 381) examining participants’ utility-privacy tradeoffs and data-sharing decisions in different CAV scenarios. Interventions that may encourage safer data-sharing decisions were also evaluated relative to a control. Results showed that the feedback intervention was effective in enhancing participants’ knowledge of possible inferences of personal information in the CAV scenarios. Consequently, it helped participants make more conservative data-sharing decisions. We also measured participants’ prior experience with connectivity and driver-assistance technologies and examined its influence on their privacy decisions. We discuss the implications of the results for usable privacy design for CAVs.
-
Simin Ghesmati (Uni Wien, SBA Research), Walid Fdhila (Uni Wien, SBA Research), Edgar Weippl (Uni Wien, SBA Research)
Over the past years, interest in blockchain technology and its applications has increased tremendously. This increased interest was, however, accompanied by serious threats that raised concerns over user data privacy. Prominent examples include transaction traceability and the identification of senders, receivers, and transaction amounts. This resulted in a multitude of privacy-preserving techniques that offer different guarantees in terms of trust, decentralization, and traceability. CoinJoin [22] is one of the promising techniques that adopts a decentralized approach to achieve privacy on Unspent Transaction Output (UTXO) based blockchains. Despite the advantages of such a technique in obfuscating user transaction data, making it usable for everyday users requires considerable development and integration effort. This paper provides a comprehensive usability study of three main Bitcoin wallets that integrate the CoinJoin technique, i.e., Joinmarket, Wasabi, and Samourai. A cognitive walkthrough based on usability and design criteria was conducted in order to evaluate the ease of use of these wallets. The study findings will enable privacy wallet developers to gain valuable insights into providing a better user experience.
-
Habiba Farzand (University of Glasgow), Florian Mathis (University of Glasgow), Karola Marky (University of Glasgow), Mohamed Khamis (University of Glasgow)
Contact Tracing Apps (CTAs) have been developed and deployed in various parts of the world to track the spread of COVID-19. However, low social acceptance and lack of adoption can impact CTA effectiveness. Prior work primarily focused on the privacy and security of CTAs, compared different models, and studied their app design. However, it remains unclear (1) how CTA privacy is perceived by end-users, (2) what the reasons behind low adoption rates are, and (3) what the situation around the social acceptability of CTAs is. In this paper, we investigate these aspects by surveying 80 participants (40 from Australia, 40 from France). Our study reveals interesting results on CTA usage, experiences, and user perceptions. We found that privacy concerns, tech unawareness, app requisites, and mistrust can reduce users’ willingness to use CTAs. We conclude by presenting ways to foster public trust and meet users’ privacy expectations that in turn support CTA adoption.
-
“So I Sold My Soul”: Effects of Dark Patterns in Cookie Notices on End-User Behavior and Perceptions
Oksana Kulyk (ITU Copenhagen), Willard Rafnsson (IT University of Copenhagen), Ida Marie Borberg, Rene Hougard Pedersen
Cookies are widely acknowledged as a potential privacy issue, due to their prevalence and use for tracking users across the web. To address this issue, multiple regulations have been enacted which mandate informing users about data collection via so-called cookie notices. Unfortunately, these notices have been shown to be ineffective; they are largely ignored and generally not understood by end-users. One main source of this ineffectiveness is the presence of dark patterns in notice designs, i.e., user interface design elements that nudge users into performing an action they may not otherwise take, e.g., consenting to data collection.
In this paper, we investigate the mental models and behavior of users when confronted with dark patterns in cookie notices. We do this by performing a mixed-method study (on Danes in their late 20s) which integrates quantitative and qualitative insights. Our quantitative findings confirm that the design of a cookie notice does influence users’ decisions on whether or not to consent to data collection, as well as whether they recall seeing the notice at all. Our qualitative findings reveal that users do in fact recognize the presence of dark patterns in cookie notice designs, and that they are very uncomfortable with standard practices in data collection. However, they seldom take action to protect their privacy, being overall resigned due to decision fatigue. We conclude that website maintainers need to reconsider how they request consent lest they alienate their users, and that end-users need better solutions that alleviate their burden with respect to protecting their privacy whilst visiting websites that collect data.
-
Asmita Dalela (IT University of Copenhagen), Saverio Giallorenzo (Department of Computer Science and Engineering - University of Bologna), Oksana Kulyk (ITU Copenhagen), Jacopo Mauro (University of Southern Denmark), Elda Paja (IT University of Copenhagen)
Increased levels of digitalization in society expose companies to new security threats, requiring them to establish adequate security and privacy measures. Additionally, the presence of exogenous forces, such as new regulations (e.g., the GDPR) and the global COVID-19 pandemic, poses new challenges for companies, which must preserve an adequate level of security while adapting to change. In this paper, we investigate such challenges through a two-phase study of companies located in Denmark—a country characterized by a high level of digitalization—focusing on software development and tech-related companies. Our results show a number of issues, most notably i) a misalignment between software developers and management when it comes to the implementation of security and privacy measures, ii) difficulties in adapting company practices in light of implementing GDPR compliance, and iii) different views on the need to adapt security measures to cope with the COVID-19 pandemic.