What is adversarial machine learning, and how will it revolutionize cybersecurity?

We are pleased to introduce Prof. Patrick McDaniel (Pennsylvania State University) as one of the keynote speakers at SecureComm 2017, the 13th EAI International Conference on Security and Privacy in Communication Networks, which will take place October 22-24, 2017 in Niagara Falls, Canada. We talked about privacy and security, adversarial machine learning, and why people need to be involved in these evolving technologies.

Could you give a summary of your current work and some insight into what you will be sharing at SecureComm 2017?

One of the most important trends in technology, certainly in computer science, has been the rapid evolution of machine learning over the past five years. It can process massive amounts of information, infer facts, and learn correlations and situational awareness in ways that we didn't think were possible. This has really opened the door to new applications, not just simple operations like analytics but also more complex things like autonomous vehicles. Advances in analytics, for example, are changing working practices in healthcare and the financial industry.
However, there is a dark side, which is the uncertainty and vulnerability of these learning systems to manipulation by an adversary. For this reason, over the last two years I have been working with organizations like Google, OpenAI, and various parts of the U.S. government on worldwide research into something called adversarial machine learning.

Adversarial machine learning analyzes attacks and defences for machine learning systems. Specifically, it asks: what kinds of inputs can I provide to a machine learning system that would cause it to react in the wrong way? A good example is a stop sign that we as humans would all agree is an actual stop sign. But I can manipulate the image of that stop sign in very subtle ways so that a machine system, which learned what a stop sign is from numerous images, misclassifies it as, say, a yield sign. In my conference talk I will go through examples of the different kinds of manipulation and the kinds of damage this misuse can cause.
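The stop-sign example above can be sketched with the fast gradient sign method (FGSM), one well-known way to craft such perturbations. The sketch below uses a toy logistic-regression "classifier" standing in for a real image model; its weights, input, and epsilon are illustrative assumptions, not a real traffic-sign system.

```python
import numpy as np

# A minimal FGSM sketch: perturb each "pixel" of an input in the
# direction that increases the model's loss for the true label.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these weights were learned from many labeled images.
w = rng.normal(size=64)   # weights for a flattened 8x8 "image"
b = 0.0

def predict(x):
    """Model's probability that x is a 'stop sign' (class 1)."""
    return sigmoid(w @ x + b)

# A clean input the model scores confidently as a stop sign.
x = np.clip(0.5 + 0.4 * np.sign(w), 0.0, 1.0)

# FGSM step: for logistic loss with true label y = 1, the gradient of
# the loss with respect to the input is (p - y) * w. Moving each pixel
# by epsilon in the sign of that gradient pushes the score toward the
# other class, while keeping the per-pixel change small and bounded.
y = 1.0
epsilon = 0.25
grad = (predict(x) - y) * w
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

print(predict(x))      # confidence on the clean input
print(predict(x_adv))  # confidence drops after the perturbation
```

With a real convolutional classifier the gradient comes from backpropagation rather than a closed form, but the attack has the same shape: a small, bounded per-pixel change chosen to move the model's output, not a human's perception.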

What are the biggest issues concerning security and privacy for the future?

The reality is that the economics of computation and intelligent computer systems are going to drive more and more machine learning into our physical systems. We see it nowadays with things like face recognition used for authentication, or autonomous vehicles used for delivery. These technologies are much more cost effective, which means we will see machine learning move further into the physical domain, with more intelligent systems used for services and authentication.
The issue is that there are going to be people who act as adversaries, aiming to manipulate these systems to their advantage: deliver packages to the wrong place, gain access to facilities they should not have access to, and so on. So, I think what we are going to see, as machine learning moves into the physical domain, is groups of people trying to manipulate or hack these learning systems, using some fairly sophisticated techniques to do so.

How would the automation processes impact the workforce? Would this be a major concern?

In some cases it is more economical to have intelligent systems driven by machine learning than to have a human. Business organizations will do it simply because the economics mandate it, and that is going to change the workforce significantly. One of the limitations is that we can make systems such as autonomous delivery drones work, but we don't know how they would respond to the presence of an adversary. Can we make them work consistently in those situations? So, I think people are a little naive when they say everything is going to be automated immediately just because they can make a system work.
The problem is that once you add an adversary that wants to manipulate the system, it becomes a lot more challenging. I think the presence of adversaries is going to be a limiting factor in the integration of these systems and their impact on the workforce. We will still need humans because, frankly, humans are in many ways harder to 'hack', at least in a trivial sense, than this emerging intelligence is now. Part of the science will be evolving the new technology to protect itself from adversarial manipulation.

What would you say are the main trends in this area that are showing promise?

There is a really interesting dialogue going on. As recently as ten years ago, people in this field didn't think this sort of adversarial manipulation was possible. It is only within the last couple of years that it has emerged as a science. We are really at the very early stage of that science, and as often happens when a new technology comes forth, we are seeing a tug of war between stronger and stronger attacks and stronger and stronger defences. I think the reality is that we are beginning to understand the emerging trends, what the attack algorithms are, and how good we can make them. Conversely, once we understand what the algorithms are and how to generate adversarial samples more quickly and more efficiently, we will understand why those adversarial samples exist.
Therefore, we can start designing defences that ameliorate these limitations. What is really important is to continue the dialogue that goes back and forth, because that is what is really driving this field. One thing that sets it apart, perhaps more than any other technology I have observed in my career, is the breadth of involvement: industry is deeply involved, government is deeply involved, everyone is extraordinarily involved in the evolution of the science, because everyone can see, just on the face of it, how important this technology is going to be.
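The attack-defence tug of war described above can be illustrated with adversarial training, one widely studied family of defences: at each training step, the model also sees perturbed copies of its inputs, so it learns to resist them. The toy linear model, data, and step sizes below are illustrative assumptions, a sketch rather than a production defence.

```python
import numpy as np

# Sketch of adversarial training on a toy logistic-regression model:
# augment every batch with FGSM-style perturbed inputs so the learned
# decision boundary keeps a margin against small perturbations.
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.3
for _ in range(200):
    p = sigmoid(X @ w + b)
    # FGSM perturbation of each input against the *current* model:
    # d(loss)/dx = (p - y) * w for logistic loss.
    grad_x = (p - y)[:, None] * w
    X_adv = X + eps * np.sign(grad_x)
    # Train on clean and adversarial examples together.
    Xb = np.vstack([X, X_adv])
    yb = np.concatenate([y, y])
    pb = sigmoid(Xb @ w + b)
    w -= lr * (Xb.T @ (pb - yb)) / len(yb)
    b -= lr * np.mean(pb - yb)

# Evaluate on freshly perturbed inputs: the robustly trained model
# should still classify most of them correctly at this epsilon.
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w
p_adv = sigmoid((X + eps * np.sign(grad_x)) @ w + b)
acc = np.mean((p_adv > 0.5) == (y == 1))
print(acc)
```

The design choice mirrors the dialogue in the interview: the defence is built directly from the attack, which is why stronger attack algorithms feed back into stronger defences.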

Learn more and register for SecureComm 2017.