Defaced street signs

A team of eight researchers has discovered that by altering street signs, an adversary could confuse self-driving cars, causing their machine-learning systems to misclassify signs and make wrong decisions, potentially putting passengers' lives in danger.

The idea behind this research is that an attacker could (1) print an entirely new poster and overlay it on an existing sign, or (2) attach smaller stickers to a legitimate sign, in order to fool the self-driving car into thinking it is looking at a different type of street sign.

While scenario (1) can trick even human observers and there is little chance of stopping it, scenario (2) looks like ordinary street sign defacement and will likely affect only self-driving vehicles.
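To make the sticker scenario concrete, here is a minimal Python sketch of how such an attack could be simulated offline. Everything here is illustrative: the `classify` stub is a hypothetical stand-in for the car's real neural-network classifier, and the researchers used carefully optimized patch placements, not the arbitrary black squares shown below.

```python
import numpy as np

def apply_stickers(sign, stickers):
    """Paste small rectangular patches onto a sign image (H x W x 3)."""
    defaced = sign.copy()
    for row, col, patch in stickers:
        h, w, _ = patch.shape
        defaced[row:row + h, col:col + w] = patch
    return defaced

def classify(image):
    """Hypothetical stand-in for the car's traffic-sign classifier."""
    # A real system would run a trained CNN here; this toy brightness rule
    # just gives the sketch something to flip when a few pixels change.
    return "Stop" if image.mean() > 130 else "Speed Limit 45"

# Toy 32x32 "Stop" sign stand-in, plus two black stickers.
sign = np.full((32, 32, 3), 150, dtype=np.uint8)
stickers = [(4, 4, np.zeros((8, 8, 3), dtype=np.uint8)),
            (20, 18, np.zeros((10, 10, 3), dtype=np.uint8))]

print(classify(sign))                            # "Stop"
print(classify(apply_stickers(sign, stickers)))  # "Speed Limit 45"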

Street sign defacements fool cars in 67% to 100% of cases

The images above show the various types of street sign vandalism the researchers devised to fool self-driving cars.

Researchers say that the first image on the left, the one with the words "love" and "hate," fooled a self-driving car's machine learning system into misclassifying the classic "Stop" sign as a "Speed Limit 45" sign in 100% of cases.

In the second and third images, stickers or graffiti led to the same result — a "Speed Limit 45" classification — but with a 67% success rate.

Poster-printed camouflage graffiti, as seen in the fourth image, caused the self-driving car's machine learning system to misclassify a "Right Turn" sign as a "Stop" sign in 100% of cases.

Some countermeasures exist

As self-driving car technologies become more prevalent, keeping street signs clear of visual clutter will become a mandatory task for smart city administrations across the globe.

Researchers say that authorities can fight such threats to self-driving car passengers by using anti-stick materials for street signs. Car vendors should also take contextual information into account in their machine learning systems: there is no reason for certain signs to appear on certain roads (a Stop sign on an interstate highway, for example).
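As a rough illustration of that contextual check, the following Python sketch shows how a vendor might gate classifier output on road context. The sign labels and road categories here are invented for illustration, not taken from any vendor's actual system.

```python
# Hypothetical whitelist of sign labels expected on each road type.
ALLOWED_SIGNS = {
    "interstate": {"Speed Limit 45", "Speed Limit 65", "Exit"},
    "residential": {"Stop", "Yield", "Speed Limit 25"},
}

def plausible(sign_label, road_type):
    """Return True only if this sign label is expected on this road type."""
    return sign_label in ALLOWED_SIGNS.get(road_type, set())

# A "Stop" detection on an interstate gets flagged as suspect
# instead of triggering an emergency stop.
print(plausible("Stop", "interstate"))   # False
print(plausible("Stop", "residential"))  # True
```

In a real vehicle, a failed plausibility check would not simply discard the detection; it would lower the classifier's confidence and defer to other sensors or map data.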

More details are available in the research team's paper, titled "Robust Physical-World Attacks on Machine Learning Models," authored by eight researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California, Berkeley.

This is not the first research to show that self-driving cars can be hacked, or at least disturbed from their normal mode of operation. In September 2015, Jonathan Petit, a security researcher at Security Innovation, Inc., revealed that he could easily fool the LiDAR sensors on a self-driving car into slowing down or abruptly stopping by targeting them with laser pulses sent from a simple homemade electronics kit.