Friday, 28 February

  • 08:00 - 12:00
    Registration
    Foyer
  • 08:00 - 09:00
    Breakfast
    Pacific Ballroom D
  • 09:00 - 09:05
    Welcome and Opening Remarks
    Pacific Ballroom
  • 09:05 - 10:05
    Keynote talk by Nick Nikiforakis (Stony Brook University)
    Pacific Ballroom
    • In this talk, we take a step back and argue that many varied and seemingly unrelated attacks on the web are actually symptoms of one deeper problem that has existed since the web's inception. Attacks due to expired domain names, cloaking by malicious websites, malvertising, and even our growing distrust of the news can all be largely attributed to the issue of stateless linking. Stateless linking refers to the absence of any integrity guarantees between the time that a link to a remote resource is created and a future time when that link is resolved by web clients. We draw on 10+ years of research to demonstrate how stateless linking, and the resulting lack of content integrity, is the true culprit behind many of our past, current, and likely future web problems. Successfully tackling this one challenging problem has the potential to solve many of our web woes.

      Speaker's Biography: Nick Nikiforakis (PhD'13) is an Associate Professor in the Department of Computer Science at Stony Brook University. He leads the PragSec Lab, where his students conduct research in cyber security, with a focus on web and network security. He is the author of more than 90 peer-reviewed academic publications and his work is often discussed in the popular press. He is the recipient of the National Science Foundation CAREER award (2020), the Office of Naval Research Young Investigator Award (2020), as well as a range of other security- and privacy-related awards from federal funding agencies. In addition to multiple best-paper awards, the National Security Agency awarded him the "Best Scientific Cybersecurity Paper" award for his research on certificate transparency abuse in 2023.

  • 10:05 - 10:20
    Morning Break
    Pacific Ballroom D
  • 10:20 - 12:00
    Session 1: Network Meets the Web
    Pacific Ballroom
    • Tomer Schwartz (Data and Security Laboratory Fujitsu Research of Europe Ltd), Ofir Manor (Data and Security Laboratory Fujitsu Research of Europe Ltd), Andikan Otung (Data and Security Laboratory Fujitsu Research of Europe Ltd)

      Cyber attacks and fraud pose significant risks to online platforms, with malicious actors often employing VPN servers to conceal their identities and bypass geolocation-based security measures. Current passive VPN detection methods identify VPN connections with more than 95% accuracy but depend on prior knowledge, such as known VPN-to-IP mappings and predefined communication patterns. This makes them ineffective against sophisticated attackers who leverage compromised machines as VPN servers. Current active detection methods, on the other hand, are effective at detecting proxy usage but are mostly ineffective at VPN detection.

      This paper introduces SNITCH (Server-side Non-intrusive Identification of Tunneled CHaracteristics), a novel approach designed to enhance web security by identifying VPN use without prior data collection on known VPN servers or intrusive client-side software. SNITCH combines IP geolocation, ground-truth landmarks, and communication delay measurements to detect VPN activity in real time, and it integrates seamlessly into the authentication process, preserving user experience while enhancing security. We measured 130 thousand connections from over 24 thousand globally distributed VPN servers and client nodes to validate the feasibility of our solution in the real world. Our experiments revealed that in scenarios where state-of-the-art methods fail to detect VPN use, SNITCH achieves a detection accuracy of up to 93%, depending on the geographical region.

    • Simone Cossaro (University of Trieste), Damiano Ravalico (University of Trieste), Rodolfo Vieira Valentim (University of Turin), Martino Trevisan (University of Trieste), Idilio Drago (University of Turin)

      Network telescopes (IP addresses hosting no services) are valuable for observing unsolicited Internet traffic from scanners, crawlers, botnets, and misconfigured hosts. This traffic is known as Internet radiation, and its monitoring with telescopes helps in identifying malicious activities. Yet, the deployment of telescopes is expensive. Meanwhile, numerous public blocklists aggregate data from various sources to track IP addresses involved in malicious activity. This raises the question of whether public blocklists already provide sufficient coverage of these actors, thus rendering new network telescopes unnecessary. We address this question by analyzing traffic from four geographically distributed telescopes and dozens of public blocklists over a two-month period. Our findings show that public blocklists include approximately 71% of IP addresses observed in the telescopes. Moreover, telescopes typically observe scanning activities days before they appear in blocklists. We also find that only 4 out of 50 lists contribute the majority of the coverage, while the addresses evading blocklists present more sporadic activity. Our results demonstrate that distributed telescopes remain valuable assets for network security, providing early detection of threats and complementary coverage to public blocklists. These results call for more coordination among telescope operators and blocklist providers to enhance the defense against emerging threats.

    • Christoph Kerschbaumer (Mozilla Corporation), Frederik Braun (Mozilla Corporation), Simon Friedberger (Mozilla Corporation), Malte Jürgens (Mozilla Corporation)

      The web was originally developed in an attempt to allow scientists from around the world to share information efficiently. As the web evolved, the threat model for the web evolved as well. While it was probably acceptable for research to be freely shared with the world, current use cases like online shopping, media consumption, or private messaging require stronger security safeguards which ensure that network attackers are not able to view, steal, or even tamper with the transmitted data. Unfortunately, the Hypertext Transfer Protocol (http) does not provide any of these required security guarantees.

      The Hypertext Transfer Protocol Secure (https) on the other hand allows carrying http over the Transport Layer Security (TLS) protocol and in turn fixes these security shortcomings of http by creating a secure and encrypted connection between the browser and the website. While the majority of websites support https nowadays, https remains an opt-in mechanism that not everyone perceives as necessary or affordable.

      In this paper we evaluate the state of https adoption on the web. We survey different mechanisms which allow upgrading connections from http to https, and provide real world browsing data from over 140 million Firefox release users. We provide numbers showcasing https adoption in different geographical regions as well as on different operating systems and highlight the effectiveness of the different upgrading mechanisms. In the end, we can use this analysis to make actionable suggestions to further improve https adoption on the web.

    • Michele Spagnuolo (Google), David Dworken (Google), Artur Janc (Google), Santiago Díaz (Google), Lukas Weichselbaum (Google)

      The area of security measurability is gaining increased attention, with a wide range of organizations calling for the development of scalable approaches for assessing the security of software systems and infrastructure. In this paper, we present our experience developing Security Signals, a comprehensive system providing security measurability for web services, deployed in a complex application ecosystem of thousands of web services handling traffic from billions of users. The system collects security-relevant information from production HTTP traffic at the reverse proxy layer, utilizing novel concepts such as synthetic signals augmented with additional risk information to provide a holistic view of the security posture of individual services and the broader application ecosystem. This approach to measurability has enabled large-scale security improvements to our services, including prioritized rollouts of security enhancements and the implementation of automated regression monitoring. Furthermore, it has proven valuable for security research and prioritization of defensive work. Security Signals addresses shortcomings of prior web measurability proposals by tracking a comprehensive set of security properties relevant to web applications, and by extracting insights from collected data for use by both security experts and non-experts. We believe the lessons learned from the implementation and use of Security Signals offer valuable insights for practitioners responsible for web service security, potentially inspiring new approaches to web security measurability.

  • 12:00 - 13:30
    Lunch
    Loma Vista Terrace and Harborside
  • 13:30 - 15:10
    Session 2: Authentication and Browser Security
    Chair: Jianjia Yu (Johns Hopkins University)
    Pacific Ballroom
    • Sirvan Almasi (Imperial College London), William J. Knottenbelt (Imperial College London)

      Password composition policies (PCPs) are critical security rules that govern how users create passwords for online authentication. Despite passwords remaining the primary authentication method online, there is significant disagreement among experts, regulatory bodies, and researchers about what constitutes effective password policies. This lack of consensus has led to high variance in PCP implementations across websites, leaving both developers and users uncertain. Current approaches lack a theoretical foundation for evaluating and comparing different password composition policies. We show that a structure-based policy, such as the three random words recommended by the UK's National Cyber Security Centre (NCSC), can improve password security. We demonstrate this using an empirical evaluation of labelled password datasets and a new theoretical framework. Using these methods, we demonstrate the feasibility and security of multi-word password policies and extend the NCSC's recommendation to five words to account for non-uniform word selection. These findings provide an evidence-based framework for password policy development and suggest that current web authentication systems should adjust their minimum word requirements upward while maintaining usability.

    • Chi-en Amy Tai (University of Waterloo), Urs Hengartner (University of Waterloo), Alexander Wong (University of Waterloo)

      Passwords are a ubiquitous form of authentication that is still used by many online services and platforms. Researchers have measured password creation policies for a multitude of websites and studied password creation behaviour for users who speak various languages. Evidence shows that limiting all users to alphanumeric characters and select special characters results in weaker passwords for certain demographics. However, password creation policies still concentrate on only alphanumeric characters and focus on increasing the length of passwords rather than the diversity of potential characters in the password. With the recent recommendation towards passphrases, further concerns arise pertaining to the potential consequences of not being inclusive in password creation. Previous work studying multilingual passphrase policies that combined English and African languages showed that multilingual passphrases are more user-friendly and also more difficult to guess than a passphrase based on a single language. However, that work only studied passphrases based on standard alphanumeric characters. In this paper, we measure the password strength of multilingual passphrases that contain characters outside of the standard alphanumeric characters and assess the availability of such multilingual passphrases on websites with free account creation in the Tranco top 50 list and the Semrush top 20 websites in China list. We find that password strength meters like zxcvbn and MultiPSM surprisingly struggle with correctly assessing the strength of non-English-only passphrases, with MultiPSM encountering an encoding issue with non-alphanumeric characters. In addition, we find that half of all tested valid websites accept multilingual passphrases, while three websites failed in general because they impose a maximum password length.

    • Maxime Huyghe (Univ. Lille, Inria, CNRS, UMR 9189 CRIStAL), Clément Quinton (Univ. Lille, Inria, CNRS, UMR 9189 CRIStAL), Walter Rudametkin (Univ. Rennes, Inria, CNRS, UMR 6074 IRISA)

      Web browsers have become complex tools used by billions of people. This complexity is in large part due to the browser's adaptability and variability as a deployment platform for modern applications, with features continuously being added. This also has the side effect of exposing configuration and hardware properties that are exploited by browser fingerprinting techniques.

      In this paper, we generate a large dataset of browser fingerprints using multiple browser versions, system and hardware configurations, and describe a tool that allows reasoning over the links between configuration parameters and browser fingerprints. We argue that using generated datasets that exhaustively explore configurations provides developers, and attackers, with important information related to the links between configuration parameters (i.e., browser, system and hardware configurations) and their exhibited browser fingerprints. We also exploit Browser Object Model (BOM) enumeration to obtain exhaustive browser fingerprints composed of up to 16,000 attributes.

      We propose representing browser fingerprints and their configurations with feature models, a tree-based representation commonly used in Software Product Line Engineering (SPLE) to address the challenges of variability, providing a better abstraction for representing browser fingerprints and configurations. We translate 89,486 browser fingerprints from 1,748 configurations into a feature model with 35,857 nodes. We show the advantages of this approach, a more elegant tree-based solution, and propose an API to query the dataset. With these tools and our exhaustive configuration exploration, we present multiple use cases, including differences between headless and headful browsers and the selection of a minimal set of attributes from browser fingerprints to re-identify a configuration parameter of the browser.

    • Dzung Pham, Jade Sheffey, Chau Minh Pham, and Amir Houmansadr (University of Massachusetts Amherst)
  • 15:10 - 15:40
    Afternoon Break
    Pacific Ballroom D
  • 15:40 - 16:40
    Keynote talk by Frederik Braun (Mozilla)
    Pacific Ballroom
    • In this talk, we will examine web security through the browser's perspective. Various browser features have helped fix transport security issues and increase HTTPS adoption: encouragements in the form of providing more exciting APIs exclusively to Secure Contexts, and deprecations of features (as with Mixed Content Blocking), have brought HTTPS adoption to over 90% in ten years.

      With these successful interventions as the browser's carrots and sticks - rewards for secure practices and penalties for insecure ones - we will then identify what academia and industry can do to further apply security improvements. In particular, we will look at highly prevalent security issues in client code, like XSS and CSRF. In the end, we will see how the browser can play an instrumental role in web security improvements and what common tactics and potential issues exist.

      Speaker's Biography: Frederik Braun builds security for the web and Mozilla Firefox in Berlin. As a contributor to standards, Frederik is also improving the web platform by bringing security into the defaults with specifications like the Sanitizer API and Subresource Integrity. Before Mozilla, Frederik studied IT-Security at the Ruhr-University in Bochum where he taught web security and co-founded the CTF team fluxfingers.

  • 16:40 - 17:50
    Session 3: Web3 and Work in Progress
    Pacific Ballroom
    • Iori Suzuki (Graduate School of Environment and Information Sciences, Yokohama National University), Yin Minn Pa Pa (Institute of Advanced Sciences, Yokohama National University), Nguyen Thi Van Anh (Institute of Advanced Sciences, Yokohama National University), Katsunari Yoshioka (Graduate School of Environment and Information Sciences, Yokohama National University)

      Decentralized Finance (DeFi) token scams have become one of the most prevalent forms of fraud in Web3 technology, generating approximately $241.6 million in illicit revenue in 2023 [1]. Detecting these scams requires analyzing both on-chain data, such as transaction records on the blockchain, and off-chain data, such as websites related to the DeFi token project and associated social media accounts. Relying solely on one type of data may fail to capture the full context of fraudulent activity. Moreover, while on-chain data persists thanks to the transparency inherent in blockchain technology, off-chain data often disappears alongside DeFi scam campaigns, making it difficult for the security community to study these scams. To address this challenge, we propose a dataset comprising more than 550 thousand archived web and social media records as off-chain data, in addition to on-chain data related to 32,144 DeFi tokens deployed on the Ethereum blockchain from September 24, 2024 to January 14, 2025. This dataset aims to support the security community in studying and detecting DeFi token scams. To illustrate its utility, our case studies demonstrate the potential of the dataset in identifying patterns and behaviors associated with scam tokens. These findings highlight the dataset's capability to provide insights into fraudulent activities and support further research in developing effective detection mechanisms.

    • Ryusei Ishikawa, Soramichi Akiyama, and Tetsutaro Uehara (Ritsumeikan University)
    • Gayatri Priyadarsini Kancherla and Abhishek Bichhawat (Indian Institute of Technology Gandhinagar)
    • Zihan Qu (Johns Hopkins University), Xinyi Qu (University College London), Xin Shen, Zhen Liang, and Jianjia Yu (Johns Hopkins University)
  • 17:50 - 18:00
    Awards and Closing Remarks
    Pacific Ballroom