Adversarial Machine Learning in Relational Domains

Prof. Daniel Lowd, University of Oregon

12:00-1:00 Tuesday, 22 March 2016, ITE 325b, UMBC

Many real-world domains, such as web spam, auction fraud, and counter-terrorism, are both adversarial and relational. In adversarial domains, a model that performs well on training data may do poorly in practice as adversaries modify their behavior to avoid detection. Previous work in adversarial machine learning has assumed that instances are independent of each other, both when manipulated by an adversary and when labeled by a classifier. Relational domains violate this assumption, since an object's label depends on the labels of related objects as well as on its own attributes.

In this talk, I will present two different methods for learning relational classifiers that are robust to adversarial noise. Our first approach assumes that related objects have correlated labels and that the adversary can modify a certain fraction of the attributes. In this case, we can incorporate the adversary's worst-case manipulation directly into the learning problem and find optimal weights in polynomial time. Our second method generalizes to any relational learning problem where the perturbations in feature space are bounded by an ellipsoid or a polyhedron. In this case, we show that adversarial robustness can be achieved by a simple regularization term or a linear transformation of the feature space. These results form a promising foundation for building robust relational models for adversarial domains.
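For intuition on the second result, the following is a minimal sketch of the standard robustness-as-regularization equivalence in the linear, norm-bounded case; it is a generic illustration, not necessarily the exact formulation from the talk. If an adversary may add any perturbation delta with ||delta||_2 <= epsilon to a feature vector x with label y in {-1, +1}, the worst-case hinge loss of a linear scorer w has a closed form, since the hinge is monotone in its argument and the inner maximization of -y w^T delta over the ball equals epsilon ||w||_2:

\[
\max_{\|\delta\|_2 \le \varepsilon} \bigl[\, 1 - y\, w^\top (x + \delta) \,\bigr]_+
\;=\;
\bigl[\, 1 - y\, w^\top x + \varepsilon\, \|w\|_2 \,\bigr]_+
\]

Training against this worst-case adversary is therefore equivalent to ordinary hinge-loss training with an added epsilon ||w||_2 penalty; for polyhedral or other norm-bounded perturbation sets, the same argument yields the corresponding dual-norm regularizer.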


Daniel Lowd is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. His research interests include learning and inference with probabilistic graphical models, adversarial machine learning, and statistical relational machine learning. He received his Ph.D. in 2010 from the University of Washington. He has received a Google Faculty Award, an ARO Young Investigator Award, and the best paper award at DEXA 2015.

Host: Cynthia Matuszek