Cormac Herley (Microsoft), Stuart Schechter (Unaffiliated)

Online guessing attacks against password servers can be hard to address. Approaches that throttle or block repeated guesses on an account (e.g., three-strikes lockout rules)
can be effective against depth-first attacks, but are of little help against breadth-first attacks that spread guesses very widely. At large providers with tens or hundreds of millions
of accounts, breadth-first attacks offer a way to send millions or even billions of guesses without ever triggering the depth-first defenses.
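To make the scale concrete (a back-of-envelope illustration with numbers we have chosen, not figures from the paper): if a provider has $N$ accounts and locks an account after $k$ failed attempts, a breadth-first attacker who stays below the threshold on every account can still send on the order of
$$(k-1)\,N \;=\; 2 \times 10^{8} \quad \text{guesses for } k = 3,\ N = 10^{8},$$
without triggering a single lockout.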
The absence of labeled data and the non-stationarity of attack traffic make it challenging to apply machine learning techniques.

We show how to accurately estimate the odds that an observation $x$ associated with a request is malicious. Our main assumptions are that successful malicious logins are a small
fraction of the total and that the distribution of $x$ in the legitimate traffic is stationary, or very slowly varying.
From these assumptions we show how to estimate the ratio of bad-to-good traffic among any set of requests; how to identify subsets of the request data that contain the least (or even no) attack traffic; and how
these least-attacked subsets allow us to estimate the distribution of values of $x$ over the legitimate data, and hence to calculate the odds ratio.
A sensitivity analysis shows that, even when we fail to identify a subset with little attack traffic, our odds-ratio estimates are very robust.
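To illustrate the flavor of this estimation (a minimal sketch under our own assumptions, not the paper's estimator): suppose $x$ is a discrete feature and we already have a subset of requests believed to carry (almost) no attack traffic. Comparing the empirical distribution of $x$ in the full stream against that subset gives a rough per-value odds ratio. The function names, toy feature values, and smoothing constant below are ours.

```python
# Hypothetical sketch of the odds-ratio idea described above (not the paper's method).
# Assumptions:
#   - x is a discrete feature observed with each login request
#     (e.g., a coarse client-fingerprint bucket).
#   - `clean` is a subset of requests believed to contain (almost) no attack
#     traffic, so its empirical distribution of x approximates the legitimate one.
#   - `all_requests` is the full request stream (legitimate + attack).
from collections import Counter

def empirical_dist(values):
    """Empirical probability of each observed value of x."""
    counts = Counter(values)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def odds_ratio_estimates(all_requests, clean, eps=1e-9):
    """For each value of x, estimate how over-represented it is in the full
    traffic relative to the (approximately) legitimate subset.
    Values much greater than 1 suggest the value is favored by attackers."""
    p_all = empirical_dist(all_requests)
    p_good = empirical_dist(clean)
    return {v: p_all[v] / max(p_good.get(v, 0.0), eps) for v in p_all}

# Toy usage: 'rare_ua' is heavily over-represented in the full stream,
# so its estimated odds ratio comes out far above 1.
clean = ["common_ua"] * 95 + ["rare_ua"] * 5
all_requests = ["common_ua"] * 100 + ["rare_ua"] * 60
print(odds_ratio_estimates(all_requests, clean))
```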
