Call for Papers: Workshop on Artificial Intelligence System with Confidential Computing (AISCC 2024)
The call for papers is now closed. The list of accepted papers is available.
The transformative power of Artificial Intelligence (AI) is reshaping industries, from healthcare and finance to transportation and entertainment, ushering in a new era of innovation and efficiency. However, these rapid advancements raise escalating concerns about the security, privacy, and safety of AI. In particular, they expose critical vulnerabilities and risks, including the leakage of training data, adversarial examples against machine learning models, and backdoor attacks on AI systems.
Meanwhile, Confidential Computing has grown significantly in recent years. This technology protects data while it is in use by establishing an isolated, encrypted execution environment. Such a design is particularly beneficial for AI applications in which the model, the training data, or the inference inputs contain sensitive or private information.
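As a concrete illustration of this data-in-use protection, the minimal sketch below, written in Python and assuming only the third-party cryptography package, uses hypothetical helper names rather than any particular vendor's enclave SDK. A model owner releases the decryption key for encrypted model weights only after verifying an attestation measurement reported by the trusted execution environment (TEE), so the plaintext model is handled only inside the isolated environment.

    # Minimal illustrative sketch: attestation-gated key release for
    # confidential AI inference. All helper names are hypothetical; a real
    # deployment would rely on a vendor enclave SDK and attestation service.
    import hashlib
    from cryptography.fernet import Fernet  # third-party "cryptography" package

    # Measurement (hash) of the enclave image the model owner trusts.
    EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image").hexdigest()

    def encrypt_model(weights):
        """Model owner: encrypt weights before handing them to an untrusted host."""
        key = Fernet.generate_key()
        return key, Fernet(key).encrypt(weights)

    def release_key_if_attested(quote, key):
        """Model owner: release the key only to an enclave with the expected measurement."""
        if quote.get("measurement") == EXPECTED_MEASUREMENT:
            return key  # in practice, wrapped for a key established during attestation
        return None

    def enclave_inference(encrypted_weights, key, x):
        """Inside the TEE: decrypt and use the model; plaintext never leaves the enclave."""
        weights = Fernet(key).decrypt(encrypted_weights)
        scale = len(weights) % 7 + 1  # stand-in for a real model computation
        return scale * x

    if __name__ == "__main__":
        key, blob = encrypt_model(b"proprietary model weights")
        quote = {"measurement": EXPECTED_MEASUREMENT}  # reported by TEE hardware
        released = release_key_if_attested(quote, key)
        if released is not None:
            print("prediction:", enclave_inference(blob, released, 2.0))

In an actual deployment, the attestation quote would be signed by the hardware and verified against a vendor attestation service, and the key would be delivered over a channel bound to that attestation rather than returned in the clear.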
Following the release of NVIDIA’s H100 this year, deploying ML systems that require real-world GPU support inside TEEs has become more practical. Identifying ways to use TEEs to address concerns in AI security, privacy, and safety has emerged as a hot topic; it not only calls urgently for in-depth discussion but also represents a promising new research direction. However, there is limited interaction between practitioners and researchers in Confidential Computing and AI security, and the two fields currently operate largely in isolation. Through this workshop, we aim to bridge this gap by bringing together practitioners and researchers from both domains. With this call for papers, we hope to advance the use of TEEs in addressing AI security challenges and to explore new issues arising at the convergence of these two fields.
Submission Guidelines for Papers/Posters
We accept (1) regular papers of up to 8 pages, (2) short papers or work-in-progress (WIP) papers of up to 5 pages, and (3) poster papers of up to 1 page; these page limits include both references and appendices, and all submissions must use the double-column NDSS format. Additionally, we welcome Systematization of Knowledge (SoK) papers of up to 12 pages, excluding references and clearly marked appendices. Please note that reviewers are not obligated to read the appendices or any supplementary material. Authors must adhere to the NDSS format without altering the font size or margins. For regular papers, concise submissions will not be at a disadvantage; we encourage authors to submit papers that reflect the depth and breadth of their research contribution without undue length.
Papers must be formatted for US letter size (not A4), using a two-column layout in which each column does not exceed 9.25 in. in height and 3.5 in. in width. The text must be in Times font, with a font size of 10 points or larger and line spacing of 11 points or larger. Authors are required to use the NDSS templates for their submissions; the templates for NDSS 2024 are available at https://www.ndss-symposium.org/ndss2024/submissions/templates/.
All submissions must be in Portable Document Format (.pdf). Ensure that any special fonts, images, and figures render correctly and that all components remain clear and legible when printed in black and white using Adobe Reader. All submissions must be anonymized for the review process.
Special Categories: If your paper falls under the Short/WIP/SoK/Poster category, please prefix your title with “Short:”, “WIP:”, “SoK:”, or “Poster:”, respectively.
- All accepted submissions will be presented at the workshop and included in the NDSS workshop proceedings.
- One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
- For any questions, please contact one of the workshop organizers at [email protected]
The submission portal for papers/posters is: https://ndss24aiscc.hotcrp.com/
Areas of Interest
The primary themes for the workshop’s call for papers include, but are not limited to, the following:
- Basics of Confidential Computing in the AI Context: An overview of confidential computing principles tailored to AI applications, emphasizing their significance in safeguarding AI models and data.
- Protecting AI Models with Confidential Computing: Demonstrating how confidential computing can shield AI models against threats, from the training phase through deployment.
- Challenges and Limitations of Confidential Computing for AI: Discussing the specific challenges of integrating confidential computing into AI systems, such as computational overhead and potential scalability concerns.
- Holistic Approaches to AI Security: Going beyond confidential computing alone to explore comprehensive strategies for AI protection, including hardware, software, and human-centric measures.
- Combining Confidential Computing with Other AI Defense Strategies: Investigating how confidential computing can synergize with other defense techniques, such as differential privacy, for robust AI protection (a minimal illustrative sketch follows this list).
- Integration of Confidential Computing into the AI Lifecycle: Addressing the role of confidential computing throughout the AI model lifecycle, ensuring security from development and training to deployment and updates.
- Data Privacy Assurance via Confidential Computing: Evaluating the effectiveness of confidential computing in maintaining data privacy, especially in sectors with highly sensitive data.
- Enforcing Data Policies with Confidential Computing: Demonstrating the capability of confidential computing to uphold strict data policies, ensuring that AI processes data in line with organizational and legal standards.
- Confidential Computing in the AI Supply Chain: Emphasizing the integration of confidential computing principles across the broader AI supply chain, from data acquisition to model deployment.
- AI-Driven Confidential Computing: Highlighting strategies for ensuring trustworthy computing of AI models and the middleware tools that can facilitate this.
- AI-Centric Trustworthiness Metrics: Proposing and refining metrics to evaluate the trustworthiness and reliability of AI systems running within TEEs.
- Side-Channel Attacks in AI Systems: Delving into the vulnerabilities, implications, and countermeasures of side-channel attacks, particularly when AI systems execute within TEEs.
- Ethics and Usability in AI Confidential Computing.
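As referenced in the combined-defenses topic above, the following minimal sketch (Python with NumPy only; all names are illustrative) shows the differential-privacy half of such a pairing: a DP-SGD-style update that clips per-example gradients and adds Gaussian noise before applying them. In a combined design, this routine would run inside an attested TEE, indicated here only in comments, so that raw examples and gradients never leave protected memory.

    # Illustrative DP-SGD-style update for a linear model. In a combined
    # TEE + differential-privacy design, this code would execute inside the
    # enclave; attestation and sealing plumbing are omitted for brevity.
    import numpy as np

    def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        per_example_grads = []
        for xi, yi in zip(X, y):
            # Gradient of the squared error for one example.
            g = 2.0 * (xi @ w - yi) * xi
            # Clip each per-example gradient to bound its influence.
            norm = np.linalg.norm(g)
            per_example_grads.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        grad = np.mean(per_example_grads, axis=0)
        # Gaussian noise calibrated to the clipping bound (the DP ingredient).
        noise = rng.normal(0.0, noise_mult * clip_norm / len(X), size=grad.shape)
        return w - lr * (grad + noise)

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        X = rng.normal(size=(32, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=32)
        w = np.zeros(3)
        for _ in range(200):
            w = dp_sgd_step(w, X, y, rng=rng)
        print("learned weights:", np.round(w, 2))

The clipping bound and noise multiplier together determine the strength of the privacy guarantee; accounting for the resulting privacy budget is beyond the scope of this sketch.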
Important Dates
- Paper Submission Deadline: 13 January 2024 (all deadlines are Anywhere on Earth (AoE), UTC-12)
- Reviews Due: 27 January 2024
- Reviews Released and Acceptance Notification: 31 January 2024
- Camera Ready Due: 12 February 2024
- Workshop: 26 February 2024
Double and Concurrent Submissions
Technical papers must not substantially overlap with papers that have already been published or that are concurrently submitted to another journal or to a conference or workshop with proceedings. Any instance of double submission will lead to immediate rejection. To identify such cases, the Program Committee reserves the right to exchange information with the chairs of other conferences and the editors of journals.
Ethical Considerations
Human Subjects Research: For papers that involve human subjects, analyze data derived from such subjects, potentially endanger humans, or raise other ethical or legal concerns that might affect the AISCC community, authors should indicate whether an ethical review (e.g., IRB approval) took place. Additionally, the paper should elaborate on how ethical and legal issues were addressed.
Vulnerability Disclosure: When a paper uncovers a potentially high-impact vulnerability, authors are expected to outline their strategy for responsible disclosure. Should there be any concerns, the chairs will reach out to the authors. Please note that the Program Committee retains the discretion to reject submissions that do not adequately demonstrate the proper handling of ethical or relevant legal matters.
Conflicts of Interest
Authors and Program Committee (PC) members must declare any conflicts of interest and specify their nature. Conflicts of interest exist in the following cases:
- Between advisors and their advisees.
- Between authors and PC members who share an institutional affiliation.
- Between individuals who have collaborated professionally within the past two years, regardless of whether the collaboration resulted in a publication or funding.
- Between individuals with close personal relationships.
PC members, including chairs, with a conflict of interest concerning a particular paper will be completely excluded from evaluating that paper.
Special Note on “Fake Conflicts”: It is prohibited to declare conflicts of interest merely to evade certain PC members who would otherwise have no conflict. Engaging in such behavior may result in paper rejection. The PC Chairs hold the authority to inquire further about any declared conflict. If authors feel uncertain about the impartiality of their paper’s treatment, they should directly communicate with the chairs, outlining substantial reasons for any special consideration they seek.