Security and Privacy of Machine Learning
Date: June 2019
Location: Long Beach, CA, USA
As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become matters of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries wishing to manipulate or mislead models to their malicious ends.
This workshop will focus on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to reach a consensus on a rigorous framework for formulating adversarial attacks that target machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.
Call For Papers
Submission deadline: May 10, 2019 Anywhere on Earth (AoE)
Notification sent to authors: June 1, 2019 Anywhere on Earth (AoE)
Submission server: https://easychair.org/conferences/?conf=spml19
Submissions to this track should introduce novel ideas or results. Submissions should follow the ICML format and not exceed 4 pages (excluding references, appendices, and large figures).
The workshop will include contributed papers. Based on the PC’s recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.
We invite submissions on any aspect of machine learning that relates to computer security and privacy (and vice versa). This includes, but is not limited to:
- Test-time (exploratory) attacks, e.g., adversarial examples (see the sketch after this list)
- Training-time (causative) attacks, e.g., data poisoning attacks
- Differential privacy
- Privacy-preserving generative models
- Game-theoretic analysis of machine learning models
- Manipulation of crowd-sourcing systems
- Sybil detection
- Exploitable bugs in ML systems
- Formal verification of ML systems
- Model stealing
- Misuse of AI and deep learning
- Interpretable machine learning
(Listed in no particular order)
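To make the first topic above concrete, here is a minimal sketch of a test-time attack: the fast gradient sign method (FGSM) of Goodfellow et al. (2015). It is illustrative only; the toy model, input shapes, and perturbation budget `eps` are assumptions made for this example, not part of the workshop materials.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Craft adversarial examples with the fast gradient sign method.

    x: input batch, assumed to take values in [0, 1]
    y: true labels
    eps: L-infinity perturbation budget (assumed value)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss the attacker wants to increase
    loss.backward()
    # Take one signed-gradient step, then clamp back to the valid input range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage on a toy linear classifier.
model = nn.Linear(784, 10)
x = torch.rand(8, 784)            # batch of flattened "images" in [0, 1]
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)  # perturbed inputs that raise the model's loss
```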