Security and Privacy of Machine Learning

Date: June 14, 2019

Location: Long Beach Convention Center, Room 104B, Long Beach, CA, USA

As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries wishing to manipulate or mislead models to their malicious ends.

This workshop will focus on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to reach a consensus on a rigorous framework for formulating adversarial attacks against machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.


8:40am-9:00am Opening Remarks (Dawn Song)
Session 1: Security Vulnerabilities of Machine Learning Systems
9:00am-9:30am Invited Talk #1: Patrick McDaniel. A systems security perspective on adversarial machine learning
9:30am-10:00am Invited Talk #2: Abdullah Al-Dujaili. Flipping Sign Bits is All You Need to Craft Black-Box Adversarial Examples
10:00am-10:20am Contributed Talk #1: Enhancing Gradient-based Attacks with Symbolic Intervals
10:20am-10:30am Spotlight Presentation #1: Adversarial Policies: Attacking Deep Reinforcement Learning
10:30am-10:45am Coffee Break
Session 2: Secure and Private Machine Learning in Practice
10:45am-11:15am Invited Talk #3: Le Song. Adversarial Attack on Graph Structured Data
11:15am-11:45am Invited Talk #4: Sergey Levine. Robust Perception, Imitation, and Reinforcement Learning for Embodied Learning Machines.
11:45am-12:05pm Contributed Talk #2: Private vqSGD: Vector-Quantized Stochastic Gradient Descent
12:05pm-1:15pm Lunch
Session 3: Provable Robustness and Verifiable Machine Learning Approaches
1:15pm-1:45pm Invited Talk #5: Ziko Kolter. Provable Robustness beyond Region Propagation: Randomization and Stronger Threat Models
1:45pm-2:05pm Contributed Talk #3: Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
2:05pm-2:45pm Poster Session #1
Session 4: Towards Trustworthy and Interpretable Machine Learning
2:45pm-3:15pm Invited Talk #6: Alexander Madry. Robustness beyond Security
3:15pm-3:45pm Invited Talk #7: Been Kim. Towards interpretability for everyone: Testing with Concept Activation Vectors
3:45pm-4:05pm Contributed Talk #4: Theoretically Principled Trade-off between Robustness and Accuracy
4:05pm-4:15pm Spotlight Presentation #2: Model Weight Theft with just Noise Inputs: The Curious Case of the Petulant Attacker
4:15pm-5:15pm Panel discussion
5:15pm-6:00pm Poster Session #2


Poster Session #1 (2:05pm-2:45pm)

Poster Session #2 (5:15pm-6:00pm)

Poster Size

ICML workshop posters should be roughly 24" x 36" in portrait orientation. There will be no poster boards; you will tape your poster directly to the wall. Use lightweight paper. We provide the tape.

Organizing Committee

(Listed in alphabetical order)

Program Committee

  • Chaowei Xiao (University of Michigan)
  • Xinyun Chen (University of California, Berkeley)
  • Mingjie Sun (Tsinghua University)
  • Linyi Li (University of Illinois at Urbana-Champaign)
  • Ruoxi Jia (University of California, Berkeley)
  • Kimin Lee (Korea Advanced Institute of Science and Technology)
  • Yunhan Jia (Baidu X-Lab)
  • Dimitris Tsipras (Massachusetts Institute of Technology)
  • Qi Alfred Chen (University of California, Irvine)
  • Huan Zhang (University of California, Los Angeles)
  • Yizheng Chen (Georgia Institute of Technology)
  • Hadi Abdullah (UF-FICS)
  • Octavian Suciu (University of Maryland)
  • Sixie Yu (Washington University in St. Louis)
  • Jonathan Uesato (Deepmind)
  • Liang Tong (Vanderbilt University)
  • Yigitcan Kaya (University of Maryland)
  • Kaizhao Liang (University of Illinois, Urbana Champaign)
  • Matthew Wicker (University of Georgia)
  • Anand Bhattad (University of Illinois at Urbana-Champaign)
  • Xinchen Yan (University of Michigan, Ann Arbor)
  • Karl Ni (Google LLC)
  • Huichen Li (Shanghai Jiao Tong University)
  • Li Erran Li ( and Columbia University)
  • Lin Zhuo (Shanghai Jiao Tong University)
  • Hadi Salman (Microsoft Research AI)
  • Pengchuan Zhang (Microsoft)
  • Jerry Li (Microsoft)
  • Chen Qian (Tencent)
  • Weilin Xu (University of Virginia)
  • Warren He (University of California, Berkeley)
  • Varun Chandrasekaran (University of Wisconsin-Madison)
  • Yuxin Wu (Tsinghua University)
  • Xingxing Wei (Tsinghua University)
  • Kathrin Grosse (CISPA, Saarland University)
  • Min Jin Chong (University of Illinois at Urbana-Champaign)
  • Chao Yan (Chinese Academy of Sciences)
  • Shreya Shankar (Stanford University)
  • Eric Wong (Carnegie Mellon University)
  • Mantas Mazeika (University of Chicago)
  • Fartash Faghri (University of Toronto)
  • Pin-Yu Chen (IBM)
  • Yulong Cao (University of Michigan)
  • Xiaowei Huang (University of Liverpool)
  • Hongge Chen (Massachusetts Institute of Technology)
  • Greg Yang (Microsoft)

Call For Papers

    Submission deadline: May 20, 2019 Anywhere on Earth (AoE)

    Notification sent to authors: June 3, 2019 Anywhere on Earth (AoE)

    Submission server:

    Submissions to this track will introduce novel ideas or results. Submissions should follow the ICML format and not exceed 4 pages (excluding references, appendices or large figures).

    The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.

    Submissions need to be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. The workshop will not have any official proceedings.

    We invite submissions on any aspect of machine learning that relates to computer security and privacy (and vice versa). This includes, but is not limited to: