Machine learning: Chapter 2

By Kellie Ottoboni

November 25, 2018

You leave your phone unlocked on the table and someone nearby plays a stream of white noise, causing your phone to open a website with malware. A malicious person has tricked the phone by strategically embedding a hidden message in the white noise. It is possible to manipulate machines by figuring out how they make decisions, without any knowledge of the inner workings of the software.

An attacker can make small changes to data inputs that users cannot detect with the naked eye, but that cause electronic devices to act differently than expected. The study of these kinds of hacks is called adversarial machine learning. Algorithms make critical decisions in our everyday technology, like targeted advertisements, fingerprint recognition, and voice command software, so it is imperative to understand their security risks.

Researchers in adversarial machine learning approach the problem from two angles: devising attacks and building defenses against them. Attack research reveals the security vulnerabilities of algorithms by producing adversarial examples: inputs that a human cannot distinguish from ordinary data but that the algorithm misclassifies. Defense research attempts to protect against adversarial examples by making existing machine learning algorithms more robust to small data perturbations. The two subfields build on each other; when a researcher publishes an algorithmic defense, another researcher devises an adversarial attack to bypass it.
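To make the idea concrete, here is a minimal sketch, in Python, of the kind of perturbation that attack research studies. Everything in it is an illustrative assumption rather than anything from the papers discussed below: the toy logistic classifier, its random weights, and the step size stand in for far more complex real systems, and the perturbation uses the well-known fast-gradient-sign idea, not Carlini’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy logistic "classifier" with made-up weights (an illustrative assumption;
# real attacks target far more complex models).
w = rng.normal(size=100)

def predict_prob(x):
    """Probability the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A clean input the model places in class 1 with modest confidence.
x = rng.normal(size=100)
x = x - ((w @ x - 2.0) / (w @ w)) * w      # shift so the model's score is exactly 2.0
y = 1.0                                     # the true label

# Gradient of the classification loss with respect to the *input*.
grad_x = (predict_prob(x) - y) * w

# Fast-gradient-sign-style perturbation: a tiny, equal-sized nudge to every
# feature, pointed in the direction that hurts the model the most.
epsilon = 0.05
x_adv = x + epsilon * np.sign(grad_x)

print("largest per-feature change:", np.max(np.abs(x_adv - x)))       # 0.05
print("prediction on the clean input:    ", round(predict_prob(x), 3))
print("prediction on the perturbed input:", round(predict_prob(x_adv), 3))
```

No single feature moves by more than 0.05, a change far too small for a person to notice, yet the toy model’s prediction flips from one class to the other.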

Attack research gives insight into the ways that algorithms can be manipulated. Recently, researchers have shown that the voice-recognition algorithms in our devices are vulnerable to outside influence. Nicholas Carlini, a recent graduate of UC Berkeley, is lead author on several papers that have revealed ways to create adversarial attacks on speech-to-text software. The initial study embedded hidden messages in white noise that would cause phones to open apps or dial 911. More recently, Carlini and his co-authors have embedded hidden messages in ordinary spoken phrases and in music. By producing adversarial audio that is 99.9 percent similar to the original recording, the authors tricked voice command software into transcribing arbitrary phrases.

A hacker usually doesn’t know how an algorithm works, so they launch a “black box” attack: repeatedly probing the algorithm until they find an input that produces the output they want. Carlini has developed both black box and white box attacks on voice command software, but he is more interested in “white box” attacks, in which the hacker has access to information about how the model works. Studying white box attacks gives insight into a model’s vulnerabilities and how it can be made more secure.
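The difference between the two settings is easier to see in code. The sketch below is again built on a toy linear classifier and made-up numbers, not real voice command software: the black-box attacker can only call the classify function and watch the label it returns, while the white-box attacker reads the weight vector directly and succeeds in a single step.

```python
import numpy as np

rng = np.random.default_rng(1)

# The model's internals: hidden from a black-box attacker, visible to a white-box one.
w = rng.normal(size=50)

def classify(x):
    """All a black-box attacker can do is call this and observe the label."""
    return int(w @ x > 0)

# A clean input sitting a small margin inside class 1.
x = rng.normal(size=50)
x = x - ((w @ x - 0.5) / (w @ w)) * w       # shift so the model's score is exactly 0.5
target = 0                                   # the label the attacker wants to force

# Black-box attack: no knowledge of w, just repeated queries with small random
# perturbations until the observed label flips.
adv_black, queries = None, 0
for _ in range(100_000):
    queries += 1
    candidate = x + rng.uniform(-0.1, 0.1, size=x.shape)
    if classify(candidate) == target:
        adv_black = candidate
        break

# White-box attack: knowing w, a single step in the most damaging direction is
# enough -- and it shows exactly why the model fails.
adv_white = x - 0.1 * np.sign(w)

print("black-box attack succeeded:", adv_black is not None, "after", queries, "queries")
print("white-box attack succeeded in one step:", classify(adv_white) == target)
```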

An unlocked phone might be vulnerable to hackers hiding commands in white noise.

For a hacker to launch a voice command attack, they must be in close proximity to a device with the voice feature enabled. Physically securing the device, for example with password protection, can help prevent such an attack. But it’s unclear where to draw the line between physical security and algorithmic security: this is where defense research comes in.

Carlini says that defense research is growing quickly; hundreds of defense papers have been posted to the open-access preprint repository arXiv in the past two years. Many of these defenses use the same general strategy: they modify existing machine learning algorithms to make it more difficult for attackers to identify which data perturbations can be used to create adversarial examples. Other papers provide “certificates of robustness”: mathematical guarantees on how many errors an attack could cause in the long run. Even if a malicious person accessed an unlocked phone, these algorithmic defenses could make it harder for them to launch an undetected adversarial attack.
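One widely studied defense in this family is adversarial training: generating perturbed copies of the training data and teaching the model to classify them correctly, which blunts the perturbations an attacker would otherwise exploit. The sketch below illustrates that idea on the same kind of toy NumPy model as before; it is a simplified stand-in under those assumptions, not any specific published defense.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny synthetic two-class problem: the real signal lives in feature 0.
n, d = 200, 20
y = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * y - 1.0

w = np.zeros(d)
epsilon, lr = 0.1, 0.5

for epoch in range(200):
    # Craft fast-gradient-sign-style perturbations of the training data
    # against the *current* model ...
    p = sigmoid(X @ w)
    grad_X = (p - y)[:, None] * w[None, :]       # d(loss)/d(input)
    X_adv = X + epsilon * np.sign(grad_X)

    # ... and train on both the clean and the perturbed copies, so the model
    # learns to give the same answer on inputs an attacker has nudged.
    X_train = np.vstack([X, X_adv])
    y_train = np.concatenate([y, y])
    p_train = sigmoid(X_train @ w)
    w -= lr * X_train.T @ (p_train - y_train) / len(y_train)

# Check robustness: accuracy on freshly perturbed copies of the data.
p = sigmoid(X @ w)
X_test_adv = X + epsilon * np.sign((p - y)[:, None] * w[None, :])
accuracy = np.mean((sigmoid(X_test_adv @ w) > 0.5) == (y == 1))
print("accuracy on adversarially perturbed inputs:", round(accuracy, 3))
```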

Unfortunately, defenses and certificates have their limitations. Defenses are usually designed to block well-studied types of attacks. Jacob Steinhardt, an assistant professor of statistics who studies certificates, says that the “families [of attacks being studied] are too restrictive to give you the security guarantees you want.” For instance, if cell phone companies created defenses against Carlini’s voice command attacks, their phones could still be vulnerable to a different kind of attack not yet seen in published research.

How will the arms race between attacks and defenses end? Carlini says that before proposed defenses can be credible, they must be evaluated against a broader range of attacks. Eventually, he imagines that things will “stabilize to a point where...[models are] not perfectly secure but we know what level of security they’ve reached.” Until then, keep your phone locked.


Kellie Ottoboni is a graduate student in statistics.

This article is part of the Fall 2018 issue.
