NDSS Symposium 2025 Call for Artifacts

NDSS Symposium 2025 adopts an Artifact Evaluation (AE) process, allowing authors to submit an artifact alongside accepted papers. The artifact may include source code, scripts, datasets, models, test suites, benchmarks, and/or any other material underlying the paper’s contributions. Each submitted artifact will be reviewed by the NDSS Artifact Evaluation Committee (AEC).

The AE process promotes the reproducibility of experimental results and the dissemination of artifacts to benefit our community as a whole. Publishing an artifact benefits the community in several ways: peers can more easily build on it, use it as a comparison point, or answer questions about cases not considered by the original authors.

Authors of NDSS papers have the option of submitting their artifacts shortly after the notification of the (conditional) acceptance of their papers. Papers that pass artifact evaluation will include a 2-page appendix detailing the artifact and have evaluation badges on their first page.

The AE process recognizes authors who devote effort to make their work reusable and reproducible by others. This includes making artifacts publicly available, documenting and packaging their work in a way that facilitates reuse, and structuring experiments such that they can be repeated and the results reproduced by other researchers. The AEC will consider outstanding artifacts for Distinguished Artifact Awards.

Call for Artifacts

Before submitting your artifact, please check the information and submission guidelines provided on the Artifact Evaluation website.

Important Dates

Summer Deadline

  • Thu, 20 June 2024: Paper notification to authors
  • Thu, 27 June 2024: Artifact registration deadline
  • Thu, 4 July 2024: Artifact submission deadline
  • Mon, 8 July to Mon, 15 July 2024: Kick-the-tires stage (answering AEC preliminary questions)
  • Mon, 9 September 2024: Artifact decisions
  • Thu, 12 September 2024: Camera-ready deadline for papers

Fall Deadline

  • Thu, 19 September 2024: Paper notification to authors
  • Thu, 26 September 2024: Artifact registration deadline
  • Thu, 3 October 2024: Artifact submission deadline
  • Mon, 7 October to Mon, 14 October 2024: Kick-the-tires stage (answering AEC reviewer questions)
  • Mon, 2 December 2024: Artifact decisions
  • Thu, 5 December 2024: Camera-ready deadline for papers

Evaluation Process

Authors are invited to submit artifacts soon after receiving the paper notification. At least one contact author must be reachable and respond to questions in a timely manner during the entire evaluation period to allow round-trip communication between the AEC and the authors. Artifacts can be submitted only in the AE time frame associated with the paper submission round.

In addition to accepted papers, papers that receive a major or minor revision decision are eligible for AE: at artifact submission time, their authors should describe the changes they intend to make to the initially submitted paper and explain how those changes relate to the submitted artifact.

At submission time, authors choose which badges (see below) they want to be evaluated for. Members of the AEC will evaluate each artifact using the artifact appendix and instructions as guides, as detailed later on this page. Evaluators will communicate anonymously with authors through HotCRP to resolve minor issues and ask clarifying questions.

Evaluation starts with a kick-the-tires period during which evaluators ensure they can access their assigned artifacts and perform basic operations such as building and running a minimal working example. During this stage, evaluators provide feedback about the artifact, giving authors the opportunity to address any significant issues that block the evaluation. Communication after the kick-the-tires stage ends can address concerns about interpreting the produced results or minor syntactic issues in the submitted materials.

For prospective authors: Your goal should be to present and document your artifact so that AEC members can use it and complete the evaluation successfully with minimal (and ideally no) interaction. To ensure that your instructions are complete, we suggest that you run through them on a fresh setup prior to submission, following exactly the instructions you have provided.

Badges

Authors can request their artifact to be evaluated towards one, two, or all of the following badges:

Available. To earn this badge, the AEC must judge that the artifact associated with the paper has been made available for retrieval permanently and publicly. As an artifact undergoing AE often evolves as a consequence of AEC feedback, authors can use mutable storage for the initial submission, but must commit to uploading their materials to public services (e.g., Zenodo, FigShare, Dryad) for permanent storage backed by a Digital Object Identifier (DOI) if the badge is awarded. The artifact appendix prepared for publication will have to mention the artifact DOI. Authors are welcome to report additional sources, like GitHub and GitLab, that may ease the dissemination of the artifact and possible future updates. Furthermore, for this badge, authors should provide a README file referencing the paper and a LICENSE file for the materials.
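As a sketch of the Available-badge file requirements, a pre-submission check like the following can confirm that an artifact directory ships both a README referencing the paper and a LICENSE file (the directory name and file contents below are hypothetical placeholders):

```shell
# Hypothetical pre-submission check for the Available badge:
# the artifact directory must contain a README referencing the paper
# and a LICENSE file for the materials.
set -eu
ARTIFACT_DIR="${1:-demo-artifact}"

# Placeholder artifact contents, for illustration only.
mkdir -p "$ARTIFACT_DIR"
printf 'Artifact for "Paper Title", NDSS Symposium 2025.\n' > "$ARTIFACT_DIR/README"
printf 'MIT License (placeholder)\n' > "$ARTIFACT_DIR/LICENSE"

# Verify that both required files are present.
for f in README LICENSE; do
  if [ -f "$ARTIFACT_DIR/$f" ]; then
    echo "found $f"
  else
    echo "missing $f" >&2
    exit 1
  fi
done
```

A similar check can be extended to whatever layout your artifact uses; the point is to verify the required files mechanically rather than by eye before uploading to the permanent storage service.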

Functional. To earn this badge, the AEC must judge that the artifact conforms to the expectations set by the paper for functionality, usability, and relevance. Also, an artifact must be usable on machines other than the authors’, including cases where specialized hardware is required (for example, paths, addresses, and identifiers must not be hardcoded). The AEC will particularly consider three aspects:

  • Documentation: is the artifact sufficiently documented to be exercised by readers of the paper?
  • Completeness: does the submitted artifact include all of the key components described in the paper?
  • Exercisability: does the submitted artifact include the scripts and data needed to run the experiments described in the paper, and can the software be successfully executed?

Reproduced. To earn this badge, the AEC must judge that they can use the submitted artifact to obtain the main results presented in the paper. In short, is it possible for the AEC to independently repeat the experiments and obtain results that support the main claims made by the paper? The goal of this effort is not to reproduce the results exactly, but instead to generate results independently within an allowed tolerance such that the main claims of the paper are validated. For example, in the case of lengthy experiments, scaled-down versions can be proposed if their significance is clearly and convincingly explained.

Artifact Evaluation Committee

Artifact Evaluation Co-chairs

Daniele Cono D’Elia (Sapienza University of Rome) and Mathy Vanhoef (KU Leuven)

See the list of committee members.