Virgil D. Gligor (Carnegie Mellon University), Maverick S. L. Woo (Carnegie Mellon University)

Root-of-Trust (RoT) establishment ensures either that the state of an untrusted system contains all and only content chosen by a trusted local verifier and that the system code begins execution in that state, or that the verifier discovers the existence of unaccounted-for content. Hence, RoT establishment assures booting into system states that are free of persistent malware, and an adversary can no longer retain undetected control of one's local system.

We establish RoT unconditionally; i.e., without secrets, trusted hardware modules and instructions, or bounds on the adversary's computational power. The specification of a system's chipset and device controllers, together with an external source of true random numbers such as a commercially available quantum RNG, is all that is needed. Our system specifications are those of a concrete Word Random Access Machine (cWRAM) model -- the closest computation model to a real system with a large instruction set.

We define the requirements for RoT establishment and explain how they differ from those of past attestation protocols. Then we introduce a RoT establishment protocol based on a new computation primitive with concrete (non-asymptotic) optimal space-time bounds in adversarial evaluation on the cWRAM. The new primitive is a randomized polynomial, which has $k$-independent uniform coefficients in a prime-order field. Its collision properties are stronger than those of a $k$-independent (almost) universal hash function in cWRAM evaluations, and are sufficient to prove the existence of malware-free states before RoT is established. Preliminary measurements show that randomized-polynomial performance is practical on commodity hardware even for very large $k$.
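To make the primitive concrete, the sketch below (in Python) evaluates such a randomized polynomial over a prime-order field: coefficient randomness comes from a standard $k$-wise independent construction (a degree-$(k-1)$ polynomial in the index with uniform seed words), and the evaluation uses Horner's rule. The field prime, the independence parameter, and the way memory words are folded into the coefficients are illustrative assumptions, not the paper's exact construction.

```python
import secrets

PRIME = (1 << 61) - 1            # Mersenne prime; arithmetic is over GF(PRIME) (illustrative)
K = 64                           # independence parameter k (illustrative)

def k_independent_value(seed, i, p=PRIME):
    """Classic k-wise independent family: a degree-(k-1) polynomial in i
    whose coefficients are the k uniform seed words, evaluated over GF(p)."""
    v = 0
    for s in reversed(seed):     # Horner's rule over the seed words
        v = (v * i + s) % p
    return v

def randomized_poly(words, seed, x, p=PRIME):
    """Evaluate sum_j c_j * x^j over GF(p) by Horner's rule, where each
    coefficient c_j folds memory word j with a k-independent uniform value
    (this folding is an illustrative choice, not the authors' construction)."""
    acc = 0
    for j in range(len(words) - 1, -1, -1):   # highest-degree term first
        c = (words[j] + k_independent_value(seed, j)) % p
        acc = (acc * x + c) % p
    return acc

# Verifier side: fresh true-random seed words and evaluation point per challenge.
seed = [secrets.randbelow(PRIME) for _ in range(K)]
x = secrets.randbelow(PRIME)
memory_words = [0] * 1024        # stand-in for the untrusted device's memory content
print(randomized_poly(memory_words, seed, x))
```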

To prove the concrete optimality of randomized polynomials, we present a result of independent complexity interest: a Horner-rule program is uniquely optimal whenever the cWRAM execution space and time are simultaneously minimized.
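For intuition about the space-time claim, a bare Horner-rule loop looks like the sketch below: one pass over the coefficients (one multiply-add per coefficient) with a single live accumulator, i.e., simultaneously minimal time and space, which is the regime the uniqueness result concerns. The modulus and example values are illustrative.

```python
def horner(coeffs, x, p):
    """Evaluate c_0 + c_1*x + ... + c_d*x^d over GF(p) by Horner's rule:
    d+1 multiply-adds and a single live accumulator."""
    acc = 0
    for c in reversed(coeffs):   # highest-degree coefficient first
        acc = (acc * x + c) % p
    return acc

# Example: 3 + 2x + x^2 at x = 5 over GF(97)  ->  3 + 10 + 25 = 38
assert horner([3, 2, 1], 5, 97) == 38
```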
