NSPW 2020

Virtual Conference
October 26-29, 2020

Due to COVID-19, NSPW 2020 will be a virtual conference, taking place October 26-29, 2020, as planned. See below for an example of a virtual session.

The New Security Paradigms Workshop (NSPW) seeks embryonic, disruptive, and unconventional ideas on information and cyber security that benefit from early, in-depth, and constructive feedback. Submissions typically address current limitations of information security, directly challenge long-held beliefs or the very foundations of security, or discuss problems from an entirely novel angle, leading to new solutions. We welcome papers from computer science, from other disciplines that study adversarial relationships, and from practice. The workshop is invitation-only; every accepted paper receives a one-hour plenary slot for presentation and discussion. To maximize diversity of perspectives, we particularly encourage submissions from new NSPW authors, from Ph.D. students, and from non-obvious disciplines and institutions.

In 2020, NSPW invites theme submissions relating to “Automated Reasoning for Security” in addition to regular submissions. Computers make ever more decisions on behalf of humans, and this growth in deployed automated reasoning has led to the development and application of automated-reasoning technologies within security itself. At NSPW 2020, we invite authors to consider how the cybersecurity community should deal with the rise of automation: how do we secure automated reasoning, and how should we use it to secure other systems?

NSPW is interested in methods of securing automated reasoning; applications of automated-reasoning technologies (e.g., machine learning (ML)) to security; the implications of such applications; and how such automation might create or affect new security paradigms, including how we understand human reasoning before automating it. Attack papers should follow guidelines for writing up case studies and should clearly explain why the understanding gained from the particular attack is transferable and trustworthy, and how it contributes to a more general understanding of automated reasoning and security.

Possible topics include, but are not limited to:

  • prevention (e.g., program verification to provide improved security)
  • protection (e.g., anti-virus or intrusion detection systems containing ML)
  • attacker use of automation
  • adversarial ML (the many ways that machine learning can be attacked and defended)
  • vulnerabilities in ML or other automated reasoning systems, and how they are coordinated, disclosed, and remediated
  • understanding human reasoning