Security and Privacy of Machine Learning
Date: June 14, 2019
Location: Long Beach Convention Center, Room 104B, Long Beach, CA, USA
As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries who wish to manipulate or mislead models to their malicious ends.
This workshop will focus on recent research and future directions on security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to come to a consensus on a rigorous framework for formulating adversarial attacks targeting machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.
Schedule
Poster Session #1 (2:00pm-2:45pm)
- Shiqi Wang, Yizheng Chen, Ahmed Abdou and Suman Jana. Enhancing Gradient-based Attacks with Symbolic Intervals
- Bo Zhang, Boxiang Dong, Hui Wendy Wang and Hui Xiong. Integrity Verification for Federated Machine Learning in the Presence of Byzantine Faults
- Xinyun Chen, Wenxiao Wang, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li and Dawn Song. Leveraging Unlabeled Data for Watermark Removal of Deep Neural Networks
- Qian Lou and Lei Jiang. SHE: A Fast and Accurate Deep Neural Network for Encrypted Data
- Matt Jordan, Justin Lewis and Alexandros G. Dimakis. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
- Aria Rezaei, Chaowei Xiao, Bo Li and Jie Gao. Protecting Sensitive Attributes via Generative Adversarial Networks
- Saeed Mahloujifar, Mohammad Mahmoody and Ameer Mohammed. Universal Multi-Party Poisoning Attacks
- Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning and Cho-Jui Hsieh. Verifying the Robustness of Tree-based Models
- Congyue Deng and Yi Tian. Towards Understanding the Trade-off Between Accuracy and Adversarial Robustness
- Zhi Xu, Chengtao Li and Stefanie Jegelka. Exploring the Robustness of GANs to Internal Perturbations
- Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent Ghaoui and Michael Jordan. Theoretically Principled Trade-off between Robustness and Accuracy
- Pang Wei Koh, Jacob Steinhardt and Percy Liang. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
- Bokun Wang and Ian Davidson. Improve Fairness of Deep Clustering to Prevent Misuse in Segregation
- Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine and Stuart Russell. Adversarial Policies: Attacking Deep Reinforcement Learning
- Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong and Tao Wei. Attacking Multiple Object Tracking using Adversarial Examples
- Joseph Szurley and Zico Kolter. Perceptual Based Adversarial Audio Attacks
Poster Session #2 (5:15pm-6:00pm)
- Felix Michels, Tobias Uelwer, Eric Upschulte and Stefan Harmeling. On the Vulnerability of Capsule Networks to Adversarial Attacks
- Zhaoyang Lyu, Ching-Yun Ko, Tsui-Wei Weng, Luca Daniel, Ngai Wong and Dahua Lin. POPQORN: Quantifying Robustness of Recurrent Neural Networks
- Avishek Ghosh, Justin Hong, Dong Yin and Kannan Ramchandran. Robust Heterogeneous Federated Learning
- Ruoxi Jia, Bo Li, Chaowei Xiao and Dawn Song. Delving into Bootstrapping for Differential Privacy
- Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu and Bin Dong. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
- Mark Lee and Zico Kolter. On Physical Adversarial Patches for Object Detection
- Venkata Gandikota, Raj Kumar Maity and Arya Mazumdar. Private vqSGD: Vector-Quantized Stochastic Gradient Descent
- Dimitrios Diochnos, Saeed Mahloujifar and Mohammad Mahmoody. Lower Bounds for Adversarially Robust PAC Learning
- Nicholas Roberts and Vinay Prabhu. Model weight theft with just noise inputs: The curious case of the petulant attacker
- Ryan Webster, Julien Rabin, Frederic Jurie and Loic Simon. Generating Private Data Surrogates for Vision Related Tasks
- Joyce Xu, Dian Ang Yap and Vinay Prabhu. Understanding Adversarial Robustness Through Loss Landscape Geometries
- Kevin Shi, Daniel Hsu and Allison Bishop. A cryptographic approach to black-box adversarial machine learning
- Haizhong Zheng, Earlence Fernandes and Atul Prakash. Analyzing the Interpretability Robustness of Self-Explaining Models
- Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Sicun Gao, Dean Tullsen and Hadi Esmaeilzadeh. Shredder: Learning Noise for Privacy with Partial DNN Inference on the Edge
- Chaowei Xiao, Xinlei Pan, Warren He, Bo Li, Jian Peng, Mingjie Sun, Jinfeng Yi, Mingyan Liu, Dawn Song. Characterizing Attacks on Deep Reinforcement Learning
- Sanjam Garg, Somesh Jha, Saeed Mahloujifar and Mohammad Mahmoody. Adversarially Robust Learning Could Leverage Computational Hardness
- Horace He, Aaron Lou, Qingxuan Jiang, Isay Katsman, Serge Belongie, Ser-Nam Lim. Adversarial Example Decomposition
Poster Size
ICML workshop posters should be roughly 24" x 36" in portrait orientation. There will be no poster boards; you will tape your poster directly to the wall, so please use lightweight paper. We will provide the tape.
Organizing Committee
(Listed in alphabetical order)
Program Committee
Call For Papers
Submission deadline: May 20, 2019 Anywhere on Earth (AoE)
Notification sent to authors: June 3, 2019 Anywhere on Earth (AoE)
Submission server: https://easychair.org/conferences/?conf=spml19
Submissions to this track should introduce novel ideas or results. Submissions should follow the ICML format and not exceed 4 pages (excluding references, appendices, and large figures).
The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.
Submissions need to be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. The workshop will not have any official proceedings.
We invite submissions on any aspect of machine learning that relates to computer security and privacy (and vice versa). This includes, but is not limited to:
- Test-time (exploratory) attacks, e.g., adversarial examples (see the sketch after this list)
- Training-time (causative) attacks, e.g., data poisoning attacks
- Differential privacy
- Privacy-preserving generative models
- Game-theoretic analysis of machine learning models
- Manipulation of crowd-sourcing systems
- Sybil detection
- Exploitable bugs in ML systems
- Formal verification of ML systems
- Model stealing
- Misuse of AI and deep learning
- Interpretable machine learning
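As a concrete illustration of the first topic above, the following is a minimal sketch of a test-time (evasion) attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. The model, weights, and epsilon value are illustrative assumptions, not part of any workshop submission or a prescribed implementation.

```python
# Minimal FGSM sketch on a toy logistic-regression classifier (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: sigmoid(w . x + b) with fixed (pretend-trained) weights.
w = rng.normal(size=10)
b = 0.1

def predict_proba(x):
    # Probability of the positive class under the logistic model.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def input_gradient(x, y):
    # Gradient of the logistic (cross-entropy) loss w.r.t. the *input* x,
    # for true label y in {0, 1}: dL/dx = (p - y) * w.
    return (predict_proba(x) - y) * w

def fgsm(x, y, epsilon=0.3):
    # Perturb the input by epsilon in the sign direction of the loss gradient,
    # which increases the loss on the true label.
    return x + epsilon * np.sign(input_gradient(x, y))

x = rng.normal(size=10)
y = 1
x_adv = fgsm(x, y)
print("clean prediction:      ", predict_proba(x))
print("adversarial prediction:", predict_proba(x_adv))
```

The same idea carries over to deep networks, where the input gradient is obtained by backpropagation rather than in closed form.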