Alexandra Weber (Telespazio Germany GmbH), Peter Franke (Telespazio Germany GmbH)

Space missions increasingly rely on Artificial Intelligence (AI) for a variety of tasks, ranging from planning and monitoring of mission operations, to processing and analysis of mission data, to assistant systems such as a bot that interactively supports astronauts on the International Space Station. In general, the use of AI introduces a multitude of security threats. In the space domain, initial attacks have already been demonstrated, including the Firefly attack, which manipulates automatic forest-fire detection through sensor spoofing. In this article, we provide an initial analysis of security risks that are specifically critical to the use of AI in space, and we discuss corresponding security controls and mitigations. We argue that rigorous risk analyses with a focus on AI-specific threats will be needed to ensure the reliability of future AI applications in the space domain.
