We sincerely acknowledge the generous support of Indiana University Bloomington – Security and Privacy in Informatics, Computing, and Engineering (SPICE) for sponsoring the awards and certificates. Their commitment to advancing security and privacy research is greatly appreciated.

Monday, 24 February

  • 07:00 - 17:30
    Registration
    Foyer
  • 07:30 - 09:00
    Breakfast
    Pacific Ballroom D
  • 09:00 - 09:10
    Opening Remarks by the Chairs
    Coast Ballroom
  • 09:10 - 10:00
    Opening Keynote: Dr. Vaibhav Garg
    Coast Ballroom
    • Dr. Vaibhav Garg is an award-winning executive who currently serves as the Executive Director of Cybersecurity & Privacy Research and Public Policy Research at Comcast Cable. He has a PhD in Security Informatics from Indiana University and an M.S. in Information Security from Purdue University. He has more than 15 years of industry experience in domains spanning tech policy, cybersecurity, privacy, AI, and economics. He has co-authored over thirty peer-reviewed publications and received the Best Paper Award at the 2011 eCrime Researchers Summit for his work on the economics of cybercrime. He previously served as the Editor-in-Chief of ACM Computers & Society, where he received the ACM SIGCAS Outstanding Service Award.

      Dr. Garg currently serves as the Working Group Lead for the President’s National Security Telecommunications Advisory Committee’s workstream on Post-Quantum Cryptography. He served as the Vice Chair of the Consumer Technology Association’s Working Group on Cybersecurity and Privacy and is the Co-Chair of the Communications Sector Coordinating Council’s Emerging Technology Committee. His projects and papers have been referenced by the National Institute of Standards and Technology, the Organisation for Economic Co-operation and Development, the National Security Telecommunications Advisory Committee, and the Financial Services Information Sharing and Analysis Center. He has previously presented at events such as Broadband Breakfast, USENIX Enigma, USENIX PEPR, the Telecommunications Policy Research Conference, Hack the Capitol, and State of the Net.

  • 10:00 - 10:20
    Morning Break
    Pacific Ballroom D
  • 10:20 - 12:00
    Paper Session 1
    Coast Ballroom
    • Youngwook Do (JPMorganChase and Georgia Institute of Technology), Tingyu Cheng (Georgia Institute of Technology and University of Notre Dame), Yuxi Wu (Georgia Institute of Technology and Northeastern University), HyunJoo Oh (Georgia Institute of Technology), Daniel J. Wilson (Northeastern University), Gregory D. Abowd (Northeastern University), Sauvik Das (Carnegie Mellon University)

      Passive RFID is ubiquitous for key use cases that include authentication, contactless payment, and location tracking. Yet, RFID chips can be read without users’ knowledge and consent, causing security and privacy concerns that reduce trust. To improve trust, we employed physically intuitive design principles to create On-demand RFID (ORFID). ORFID’s antenna, disconnected by default, can only be re-connected by a user pressing and holding the tag. When the user lets go, the antenna automatically disconnects. ORFID helps users visibly examine the antenna’s connection: by pressing a liquid well, users can observe themselves pushing out a dyed, conductive liquid to fill the void between the antenna’s two bisected ends; by releasing their hold, they can see the liquid recede. A controlled evaluation with 17 participants showed that users trusted ORFID significantly more than a commodity RFID tag, both with and without an RFID-blocking wallet. Users attributed this increased trust to visible state inspection and intentional activation.

    • Kaiming Cheng (University of Washington), Mattea Sim (Indiana University), Tadayoshi Kohno (University of Washington), Franziska Roesner (University of Washington)

      Augmented reality (AR) headsets are now commercially available, including major platforms like Microsoft’s HoloLens 2, Meta’s Quest Pro, and Apple’s Vision Pro. Compared to currently widely deployed smartphone or web platforms, emerging AR headsets introduce new sensors that capture substantial and potentially privacy-invasive data about the users, including eye-tracking and hand-tracking sensors. As millions of users begin to explore AR for the very first time with the release of these headsets, it is crucial to understand the current technical landscape of these new sensing technologies and how end-users perceive and understand their associated privacy and utility implications. In this work, we investigate the current eye-tracking and hand-tracking permission models for three major platforms (HoloLens 2, Quest Pro, and Vision Pro): what is the granularity of eye-tracking and hand-tracking data made available to applications on these platforms, and what information is provided to users asked to grant these permissions (if at all)? We conducted a survey with 280 participants with no prior AR experience on Prolific to investigate (1) people’s comfort with the idea of granting eye- and hand-tracking permissions on these platforms, (2) their perceived and actual comprehension of the privacy and utility implications of granting these permissions, and (3) the self-reported factors that impact their willingness to try eye-tracking and hand-tracking enabled AR technologies in the future. Based on (mis)alignments we identify between comfort, perceived and actual comprehension, and decision factors, we discuss how future AR platforms can better communicate existing privacy protections, improve privacy-preserving designs, or better communicate risks.

    • Keika Mori (Deloitte Tohmatsu Cyber LLC, Waseda University), Daiki Ito (Deloitte Tohmatsu Cyber LLC), Takumi Fukunaga (Deloitte Tohmatsu Cyber LLC), Takuya Watanabe (Deloitte Tohmatsu Cyber LLC), Yuta Takata (Deloitte Tohmatsu Cyber LLC), Masaki Kamizono (Deloitte Tohmatsu Cyber LLC), Tatsuya Mori (Waseda University, NICT, RIKEN AIP)

      Companies publish privacy policies to improve transparency regarding the handling of personal information. A discrepancy between the description in the privacy policy and the user’s understanding can lead to a risk of decreased trust. Therefore, in creating a privacy policy, the user’s understanding of the privacy policy should be evaluated. However, the periodic evaluation of privacy policies through user studies takes time and incurs financial costs. In this study, we investigated the understandability of privacy policies by large language models (LLMs) and the gaps between their understanding and that of users, as a first step towards replacing user studies with evaluation using LLMs. Obfuscated privacy policies were prepared along with questions to measure the comprehension of LLMs and users. In comparing the comprehension levels of LLMs and users, the average correct answer rates were 85.2% and 63.0%, respectively. The questions that LLMs answered incorrectly were also answered incorrectly by users, indicating that LLMs can detect descriptions that users tend to misunderstand. By contrast, LLMs understood the technical terms used in privacy policies, whereas users did not. The identified gaps in comprehension between LLMs and users provide insights into the potential of automating privacy policy evaluations using LLMs.

    • Jasmin Schwab (German Aerospace Center (DLR)), Alexander Nussbaum (University of the Bundeswehr Munich), Anastasia Sergeeva (University of Luxembourg), Florian Alt (University of the Bundeswehr Munich and Ludwig Maximilian University of Munich), and Verena Distler (Aalto University)

      Organizations depend on their employees’ long-term cooperation to help protect the organization from cybersecurity threats. Phishing attacks are the entry point for harmful follow-up attacks, and the acceptance of training measures is thus crucial. Many organizations use simulated phishing campaigns to train employees to adopt secure behaviors. We conducted a preregistered vignette experiment (N=793), investigating the factors that make a simulated phishing campaign seem (un)acceptable and their influence on employees’ intention to manipulate the campaign. In the experiment, we varied whether employees gave prior consent, whether the phishing email promised a financial incentive, and the consequences for employees who clicked on the phishing link. We found that employees’ prior consent positively affected the acceptance of a simulated phishing campaign. The consequences of “employee interview” and “termination of the work contract” negatively affected acceptance. We found no statistically significant effects of consent, monetary incentive, or consequences on manipulation probability. Our results shed light on the factors influencing the acceptance of simulated phishing campaigns. Based on our findings, we recommend that organizations prioritize obtaining informed consent from employees before including them in simulated phishing campaigns and that they clearly describe the consequences. Organizations should carefully evaluate the acceptance of simulated phishing campaigns and consider alternative anti-phishing measures.

    • Zaid Hakami (Florida International University and Jazan University), Ashfaq Ali Shafin (Florida International University), Peter J. Clarke (Florida International University), Niki Pissinou (Florida International University), and Bogdan Carbunar (Florida International University)

      Online abuse, a persistent aspect of social platform interactions, impacts user well-being and exposes flaws in platform designs that include insufficient detection efforts and inadequate victim protection measures. Ensuring safety in platform interactions requires the integration of victim perspectives in the design of abuse detection and response systems. In this paper, we conduct surveys (n = 230) and semi-structured interviews (n = 15) with students at a minority-serving institution in the US, to explore their experiences with abuse on a variety of social platforms, their defense strategies, and their recommendations for social platforms to improve abuse responses. We build on study findings to propose design requirements for abuse defense systems and discuss the role of privacy, anonymity, and abuse attribution requirements in their implementation. We introduce ARI, a blueprint for a unified, transparent, and personalized abuse response system for social platforms that sustainably detects abuse by leveraging the expertise of platform users, incentivized with proceeds obtained from abusers.

    • Lea Duesterwald (Carnegie Mellon University), Ian Yang (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University)

      Human actions, or lack thereof, contribute to a large majority of cybersecurity incidents. Traditionally, when looking for advice on cybersecurity questions, people have turned to search engines or social sites like Reddit. The rapid adoption of chatbot technologies offers a potentially more direct way of getting similar advice. Initial research suggests, however, that while chatbot answers to common cybersecurity questions tend to be fairly accurate, they may not be very effective, as they often fall short on other desired qualities such as understandability, actionability, or motivational power. Research in this area has thus far been limited to evaluations by researchers themselves on a small number of synthetic questions. This article reports on what we believe to be the first in situ evaluation of a cybersecurity Question Answering (QA) assistant. We also evaluate a prompt engineered to help the cybersecurity QA assistant generate more effective answers. The study involved a 10-day deployment of a cybersecurity QA assistant in the form of a Chrome extension. Collectively, participants (N=51) evaluated answers generated by the assistant to over 1,000 cybersecurity questions they submitted as part of their regular day-to-day activities. The results suggest that a majority of participants found the assistant useful and often took actions based on the answers they received. Notably, the study indicates that prompting successfully improved the effectiveness of answers and, in particular, the likelihood that users follow their recommendations (the fraction of participants who actually followed the advice was 0.514 with prompting vs. 0.402 without, p=4.61E-04), indicating an impact on people’s actual behavior. We provide a detailed analysis of the data collected in this study, discuss its implications, and outline next steps in the development and deployment of effective cybersecurity QA assistants that offer the promise of changing actual user behavior and of reducing human-related security incidents.

    • Ashley Sheil (Munster Technological University), Jacob Camilleri (Munster Technological University), Michelle O Keeffe (Munster Technological University), Melanie Gruben (Munster Technological University), Moya Cronin (Munster Technological University) and Hazel Murray (Munster Technological University)

      Based on Irish older adults’ perceptions, practices, and challenges regarding password management, the goal of this study was to compile suitable advice that can benefit this demographic. To achieve this, we first conducted semi-structured interviews (n=37) and then collated advice based on best practice and what we learned from these interviews. We facilitated two independent focus groups (n=31) to evaluate and adjust this advice and tested the finalized advice through an observational study (n=15). The participants were aged between 59 and 86 and came from various counties in Ireland, both rural and urban. The findings revealed that managing multiple passwords was a significant source of frustration, leading some participants to adopt novel and informal strategies for storing them. A notable hesitation to adopt digital password managers and passphrases was also observed. Participants appreciated guidance on improving their password practices, with many affirming that securely writing down passwords was a practical strategy. Irish older adults demonstrated strong intuition regarding cybersecurity, notably expressing concerns over knowledge-based security checks used by banks and government institutions. This study aims to contribute to the aggregation of practical password advice suited to older adults, making password security more manageable and less burdensome for this demographic.

  • 12:00 - 13:30
    Lunch
    Loma Vista Terrace and Harborside
  • 13:30 - 15:10
    Paper Session 2
    Coast Ballroom
    • Andrew Searles (University of California Irvine), Renascence Tarafder Prapty (University of California Irvine), Gene Tsudik (University of California Irvine)

      Since 2003, CAPTCHAs have been widely used as a barrier against bots, while simultaneously annoying great multitudes of users worldwide. As the use of CAPTCHAs grew, techniques to defeat or bypass them kept improving. In response, CAPTCHAs themselves evolved in terms of sophistication and diversity, becoming increasingly difficult to solve for both bots and humans. Given this long-standing and still-ongoing arms race, it is important to investigate the usability, solving performance, and user perceptions of modern CAPTCHAs. In this work, we do so via a large-scale (over 3,600 distinct users), 13-month real-world user study and post-study survey. The study, conducted at a large public university, is based on a live account creation and password recovery service with a currently prevalent CAPTCHA type: reCAPTCHAv2.

      Results show that, with more attempts, users improve in solving checkbox CAPTCHAs. For website developers and user study designers, results indicate that the website context, i.e., whether the service is password recovery or account creation, directly influences (with statistically significant differences) CAPTCHA solving times. We consider the impact of participants’ major and education level, showing that certain majors exhibit better performance, while, in general, education level has a direct impact on solving time. Unsurprisingly, we discover that participants find image CAPTCHAs to be annoying, while checkbox CAPTCHAs are perceived as easy. We also show that, rated via the System Usability Scale (SUS), image CAPTCHAs are viewed as “OK”, while checkbox CAPTCHAs are viewed as “good”.

      Finally, we also explore the cost and security of reCAPTCHAv2 and conclude that it comes at an immense cost and offers practically no security. Overall, we believe that this study’s results prompt a natural conclusion: reCAPTCHAv2 and similar reCAPTCHA technology should be deprecated.

    • Yuxi Wu (Georgia Institute of Technology and Northeastern University), Jacob Logas (Georgia Institute of Technology), Devansh Ponda (Georgia Institute of Technology), Julia Haines (Google), Jiaming Li (Google), Jeffrey Nichols (Apple), W. Keith Edwards (Georgia Institute of Technology), Sauvik Das (Carnegie Mellon University)

      Users make hundreds of transactional permission decisions for smartphone applications, but these decisions persist beyond the context in which they were made. We hypothesized that user concern over permissions varies by context, e.g., that users might be more concerned about location permissions at home than work. To test our hypothesis, we ran a 44-participant, 4-week experience sampling study, asking users about their concern over specific application-permission pairs, plus their physical environment and context. We found distinguishable differences in participants’ concern about permissions across locations and activities, suggesting that users might benefit from more dynamic and contextually-aware approaches to permission decision-making. However, attempts to assist users in configuring these more complex permissions should be made with the aim to reduce concern and affective discomfort—not to normalize and perpetuate this discomfort by replicating prior decisions alone.

    • Daniel Timko (California State University San Marcos), Daniel Hernandez Castillo (California State University San Marcos), Muhammad Lutfor Rahman (California State University San Marcos)

      With the booming popularity of smartphones, threats related to these devices are increasingly on the rise. Smishing, a combination of SMS (Short Message Service) and phishing, has emerged as a treacherous cyber threat used by malicious actors to deceive users, aiming to steal sensitive information or money, or to install malware on their mobile devices. Despite the increase in smishing attacks in recent years, there are very few studies aimed at understanding the factors that contribute to a user’s ability to differentiate real from fake messages. To address this gap in knowledge, we conducted an online survey on smishing detection with 187 participants. In this study, we presented them with 16 SMS screenshots and evaluated how different factors affect their decision-making process in smishing detection. Next, we conducted a post-survey to garner information on the participants’ security attitudes, behavior, and knowledge. Our results highlighted that attention and Revised Security Behavior Intentions Scale (RSeBIS) scores had a significant impact on participants’ accuracy in identifying smishing messages. We found that participants had more difficulty identifying real messages than fake ones, with an accuracy of 67.1% on fake messages and 43.6% on real messages. Our study is crucial in developing proactive strategies to counter and mitigate smishing attacks. By understanding what factors influence smishing detection, we aim to bolster users’ resilience against such threats and create a safer digital environment for all.

    • Cheng Guo (Clemson University), Kelly Caine (Clemson University)

      Social media platforms (SMPs) facilitate information sharing across varying levels of sensitivity. A crucial design decision for SMP administrators is the platform’s identity policy, with some opting for real-name systems while others allow anonymous participation. Content moderation on these platforms is conducted by both humans and automated bots. This paper examines the relationship between anonymity, specifically through the use of “throwaway” accounts, and the extent and nature of content moderation on Reddit. Our findings indicate that content originating from anonymous throwaway accounts is more likely to violate rules on Reddit and is thus more likely to be removed by moderation than content from standard pseudonymous accounts. However, the moderation actions applied to throwaway accounts are consistent with those applied to ordinary accounts, suggesting that the use of anonymous accounts does not necessarily require increased human moderation. We conclude by discussing the implications of these findings for identity policies and content moderation strategies on SMPs.

    • Leon Kersten (Eindhoven University of Technology), Kim Beelen (Eindhoven University of Technology), Emmanuele Zambon (Eindhoven University of Technology), Chris Snijders (Eindhoven University of Technology), Luca Allodi (Eindhoven University of Technology)

      The alert investigation processes that junior (Tier-1) analysts follow are critical to attack detection and communication in Security Operation Centers (SOCs). Yet little is known about how analysts conduct alert investigations, which information they consider, and when. In this work, we collaborate with a commercial SOC and employ two think-aloud experiments. The first evaluates the alert investigation process followed by professional T1 analysts and identifies criticalities within it. For the second experiment, we develop an alert investigation support system (AISS), integrate it into the SOC environment, and evaluate its effect on alert investigations with another cohort of T1 analysts. The experiments observe five and four analysts conducting 400 and 36 investigations, respectively. Our results show that the analysts’ natural analysis process differs between analysts and types of alerts, and that the AISS helps analysts gather more relevant information while performing fewer actions for critical security alerts.

    • Ran Elgedawy (The University of Tennessee, Knoxville), John Sadik (The University of Tennessee, Knoxville), Anuj Gautam (The University of Tennessee, Knoxville), Trinity Bissahoyo (The University of Tennessee, Knoxville), Christopher Childress (The University of Tennessee, Knoxville), Jacob Leonard (The University of Tennessee, Knoxville), Clay Shubert (The University of Tennessee, Knoxville), Scott Ruoti (The University of Tennessee, Knoxville)

      In this digital age, parents and children may turn to online security advice to determine how to proceed. In this paper, we examine the advice available to parents and children regarding content filtering and circumvention as found on YouTube and TikTok. In an analysis of 839 videos returned from queries on these topics, we found that half (n=399) provide relevant advice to the target demographic. Our results show that of these videos, roughly three-quarters are accurate, with the remaining one-quarter containing incorrect advice. We find that videos targeting children are both more likely to be incorrect and more likely to be actionable than videos targeting parents, leaving children at increased risk of taking harmful action. Moreover, we find that while advice videos targeting parents occasionally discuss the ethics of content filtering and device monitoring (including recommendations to respect children’s autonomy), no such discussion of the ethics or risks of circumventing content filtering is given to children, leaving them unaware of any risks that may be involved in doing so. Our findings suggest that video-based social media has the potential to be an effective medium for propagating security advice and that the public would benefit from security researchers and practitioners engaging more with these platforms, both for the creation of content and of tools designed to help with more effective filtering.

    • Alexandra Klymenko (Technical University of Munich), Stephen Meisenbacher (Technical University of Munich), Luca Favaro (Technical University of Munich), and Florian Matthes (Technical University of Munich)

      Privacy-Enhancing Technologies (PETs) have gained considerable attention in the past decades, particularly in academia but also in practical settings. The proliferation of promising technologies from research presents only one perspective, and the true success of PETs should also be measured in their adoption in the industry. Yet, a potential issue arises with the very terminology of Privacy-Enhancing Technology: what exactly is a PET, and what is not? To tackle this question, we begin with the academic side, investigating various definitions of PETs proposed in the literature over the past 30 years. Next, we compare our findings with the awareness and understanding of PETs in practice by conducting 20 semi-structured interviews with privacy professionals. Additionally, we conduct two surveys with 67 total participants, quantifying which of the technologies from the literature practitioners consider to be PETs, while also evaluating new definitions that we propose. Our results show that there is little agreement in academia and practice on how the term Privacy-Enhancing Technologies is understood. We conclude that there is much work to be done towards facilitating a common understanding of PETs and their transition from research to practice.

  • 15:10 - 15:40
    Afternoon Break
    Pacific Ballroom D
  • 15:40 - 16:30
    Paper Session 3 (Vision Track)
    Coast Ballroom
    • Oliver D. Reithmaier (Leibniz University Hannover), Thorsten Thiel (Atmina Solutions), Anne Vonderheide (Leibniz University Hannover), Markus Dürmuth (Leibniz University Hannover)

      Email phishing remains the most common attack on IT systems. While early research focused on collective, large-scale phishing campaign studies to enquire why people fall for phishing, such studies are limited in their inference regarding individual or contextual influences on user phishing detection. Researchers have tried to address this limitation using scenario-based or role-play experiments to uncover individual factors influencing user phishing detection. Studies using these methods, unfortunately, are also limited in their ability to generate inference due to their lack of ecological validity and their experimental setups. We tackle this problem by introducing PhishyMailbox, a free and open-source research software designed to deploy mail sorting tasks in a simulated email environment. By detailing the features of our app for researchers and discussing its security and ethical implications, we demonstrate the advantages it provides over previously used paradigms for scenario-based research, especially regarding ecological validity as well as generalizability through larger possible sample sizes. We report excellent usability statistics from a preliminary sample of usable security scientists and discuss the ethical implications of the app. Finally, we discuss future implementation opportunities of PhishyMailbox in research designs leveraging signal detection theory, item response theory, and eye-tracking applications.

    • Yorick Last (Paderborn University), Patricia Arias Cabarcos (Paderborn University)

      To meet the growing demand for a universal means of digital identification across services, while preserving user control and privacy, multiple digital identity implementations have emerged. From a technical perspective, many of these rely on established concepts within cryptography, allowing them to provide benefits in terms of security and privacy. Recent legislation also promises broader recognition and acceptance of digital identities, both in the digital world and beyond. However, research into the usability, accessibility, and user understanding of digital identities is rare. We argue that the development of usable digital identity wallets is vital to the successful and inclusive application of digital identities in society. In this vision paper, we describe our research plans for obtaining a better understanding of how to develop these usable digital identity wallets.

    • Jacob Hopkins (Texas A&M University - Corpus Christi), Carlos Rubio-Medrano (Texas A&M University - Corpus Christi), Cori Faklaris (University of North Carolina at Charlotte)

      Data is a critical resource for technologies such as Large Language Models (LLMs) that are driving significant economic gains. Due to its importance, many different organizations are collecting and analyzing as much data as possible to secure their growth and relevance, leading to non-trivial privacy risks. Among the areas with potential for increased privacy risks are voluntary data-sharing events, when individuals willingly exchange their personal data for some service or item. This often places them in positions where they have inadequate control over what data should be exchanged and how it should be used. To address this power imbalance, we aim to obtain, analyze, and dissect the many different behaviors and needs of both parties involved in such negotiations, namely, the data subjects, i.e., the individuals whose data is being exchanged, and the data requesters, i.e., those who want to acquire the data. As an initial step, we are developing a multi-stage user study to better understand the factors that govern the behavior of both data subjects and requesters while interacting in data exchange negotiations. In addition, we aim to identify the design elements that both parties require so that future privacy-enhancing technologies (PETs) prioritizing privacy negotiation algorithms can be further developed and deployed in practice.

    • Rishika Thorat (Purdue University), Tatiana Ringenberg (Purdue University)

      AI-assisted cybersecurity policy development has the potential to reduce organizational burdens while improving compliance. This study examines how cybersecurity students and professionals develop ISO 29147-aligned vulnerability disclosure policies (VDPs) with and without AI. In this project, we will evaluate the compliance, ethical accountability, and transparency of the resulting policies through the lens of Kaspersky’s ethical principles.

      Both students and professionals will produce policies manually and with AI, reflecting on utility and reliability. We will analyze resulting policies, prompts, and reflections through regulatory mapping, rubric-based evaluations, and thematic analysis. This project aims to inform educational strategies and industry best practices for integrating AI in cybersecurity policy development, focusing on expertise, collaboration, and ethical considerations.

      We invite feedback from the Usable Security and Privacy community on participant recruitment, evaluation criteria, ethical frameworks, and ways to maximize the study’s impact on academia and industry.

  • 16:30 - 17:20
    Invited Talk – Test of Time with Closing Keynote
    Coast Ballroom
    • Dr. Patrick Gage Kelley is the Head of Research Strategy for Trust & Safety at Google. He has worked on projects that help us better understand how people think about their data and safety online. These include projects on the use and design of user-friendly privacy displays, passwords, location-sharing, mobile apps, encryption, technology ethics, designing products for people with the most significant digital safety risks, and, most recently, people’s relationship with and understanding of AI. Patrick’s work on redesigning privacy policies in the style of nutrition labels was included in the 2009 Annual Privacy Papers for Policymakers event on Capitol Hill.

      Previously, he was a professor of Computer Science at the University of New Mexico and faculty at the UNM ARTSLab and received his Ph.D. from Carnegie Mellon University working with the Mobile Commerce Lab and the CyLab Usable Privacy and Security (CUPS) Lab. He was an early researcher at Wombat Security Technologies, now a part of Proofpoint, and has also been at NYU, Intel Labs, and the National Security Agency.

  • 17:20 - 17:30
    Closing Remarks
    Coast Ballroom
  • 18:00 -
    Reception
    Coast Ballroom