
A team of eight researchers has discovered that by altering street signs, an adversary could confuse self-driving cars, causing their machine-learning systems to misclassify signs and make wrong decisions, potentially putting the lives of passengers in danger.
The idea behind this research is that an attacker could (1) print an entirely new poster and lay it over an existing sign, or (2) attach smaller stickers to a legitimate sign in order to fool the self-driving car into thinking it's looking at another type of street sign.
While scenario (1) would trick even human observers, leaving little chance of stopping it, scenario (2) looks like ordinary street sign defacement and would likely affect only self-driving vehicles.
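To make the general idea concrete, here is a minimal sketch of how a sticker-style perturbation could be optimized against an image classifier. This is not the researchers' actual algorithm (their paper handles printing and real-world viewing conditions); the `classifier`, `sign_image`, and `mask` names are hypothetical placeholders, and PyTorch is assumed.

```python
# Minimal sketch of a masked adversarial perturbation (general idea only; the
# researchers' actual method is more sophisticated). All names are illustrative.
import torch
import torch.nn.functional as F

def sticker_attack(classifier, sign_image, target_class, mask, steps=200, lr=0.01):
    """Optimize a perturbation restricted to `mask` (the sticker area) so that
    `classifier` predicts `target_class` instead of the sign's true class."""
    delta = torch.zeros_like(sign_image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Apply the perturbation only inside the sticker mask, keep pixels valid.
        adv = torch.clamp(sign_image + delta * mask, 0.0, 1.0)
        logits = classifier(adv.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(sign_image + delta.detach() * mask, 0.0, 1.0)
```

The mask is what makes the result look like ordinary stickers or graffiti to a human, while the optimization makes it decisive for the classifier.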
Street sign defacements fool cars in 67% to 100% of cases
For example, the images above show various street sign vandalism types that researchers devised to fool self-driving cars.
Researchers say that the first image on the left, the one with the words "love" and "hate," fooled a self-driving car's machine-learning system into misclassifying the classic "Stop" sign as a "Speed Limit 45" sign in 100% of cases.
In the second and third images, stickers or graffiti led to the same result — a "Speed Limit 45" classification — but with a 67% success rate.
Poster-printed camouflage graffiti as seen in the fourth image caused the self-driving car's machine learning system to misclassify a "Right Turn" as a "Stop" sign in 100% of cases.
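Success rates like these are presumably measured by photographing the same physical sign from many distances and angles and counting how often the classifier gets it wrong. A hypothetical sketch of such a measurement, with an assumed `classifier` and preprocessed `photos`:

```python
# Hypothetical sketch of how a figure like "67% of cases" could be computed:
# run the classifier over photos of one defaced sign and count targeted misses.
def attack_success_rate(classifier, photos, true_class, target_class):
    """photos: list of preprocessed image tensors of the same physical sign."""
    hits = 0
    for photo in photos:
        predicted = classifier(photo.unsqueeze(0)).argmax(dim=1).item()
        if predicted == target_class and predicted != true_class:
            hits += 1
    return hits / len(photos)
```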
Some countermeasures exist
As self-driving car technologies become more prevalent, keeping street signs clear of visual clutter will become a mandatory task for smart city administrations across the globe.
Researchers say that authorities can counter such potential threats to self-driving car passengers by using anti-stick materials for street signs. In addition, car vendors should take contextual information into account in their machine-learning systems. For example, there is no reason for certain signs to appear on certain roads (a Stop sign on an interstate highway, for instance).
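A contextual check of this kind could be as simple as cross-referencing the classifier's output against map data before acting on it. The sketch below assumes a `road_type` supplied by the vehicle's map/GPS stack; the rule table and names are illustrative, not taken from any vendor's system.

```python
# Sketch of the "contextual information" countermeasure: reject sign
# classifications that make no sense for the road the car is currently on.
IMPLAUSIBLE = {
    ("interstate", "stop"),            # no Stop signs on interstate highways
    ("interstate", "speed_limit_25"),  # unrealistically low limit for a highway
}

def plausible(road_type: str, predicted_sign: str) -> bool:
    """Return False for classifications that contradict the current road type."""
    return (road_type, predicted_sign) not in IMPLAUSIBLE

# Example: a "stop" prediction on an interstate would be flagged for extra scrutiny.
assert not plausible("interstate", "stop")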
More details are available in the research team's paper entitled Robust Physical-World Attacks on Machine Learning Models, authored by eight researchers from the University of Washington, University of Michigan, Stony Brook University, and the University of California, Berkeley.
This is not the first research to show that self-driving cars can be hacked or at least disturbed from their normal mode of operation. In September 2015, Jonathan Petit, a security researcher at Security Innovation, Inc., revealed that he could easily fool the LiDAR sensors on a self-driving car into slowing down or abruptly stopping by targeting them with laser pulses sent from a simple homemade electronics kit.
Comments
cdcjb - 6 years ago
Easy solution: smart vehicles upload data about the signs that they encounter to a database and your database should know where all the various signs are supposed to be.
If a sign that is considered important (like a stop sign) suddenly changes or disappears then you can alert local authorities to check it out.
Also, if the database proves reliable, you can have a backup system: if the car is expecting a stop sign that turns out to be missing, it could slow down and make sure it is safe to continue, at least until the sign is fixed or it is determined that there should no longer be a sign there.
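The commenter's idea boils down to a lookup-and-reconcile loop. A minimal sketch under assumed names (the `db` mapping, `report_to_authorities` hook, and the action strings are all hypothetical):

```python
# Sketch of the sign-database idea: compare each observed sign against a shared
# map of known sign locations, flag discrepancies, and fall back to caution.
def report_to_authorities(location, expected, observed):
    # Hypothetical reporting hook; a real system would call a fleet backend.
    print(f"Sign mismatch at {location}: expected {expected}, saw {observed}")

def handle_sign_observation(db, location, observed_sign):
    expected = db.get(location)          # sign the database says should be here
    if expected is None:
        db[location] = observed_sign     # new sign: record it for other cars
        return "proceed"
    if expected == observed_sign:
        return "proceed"
    # Mismatch: report it and behave conservatively until it is resolved.
    report_to_authorities(location, expected, observed_sign)
    if expected == "stop":
        return "slow_down_and_check"     # expected stop sign is missing or altered
    return "proceed_with_caution"
```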
Wolfger - 6 years ago
Just eliminate human drivers, and you will eliminate the need for human-visible street signs.
Ybother - 6 years ago
This only proves again that the biggest issue is programmer failure.
Stop signs are octagonal in shape all over the world, and are the ONLY octagonal-shaped road signs - for a reason. It ensures that the driver will immediately recognize the stop sign, no matter what.
If the software wasn't programmed to recognize the octagon shape of the sign, then whoever fed the program the sign parameters should have lost his/her job right there and then. It's such a basic descriptor!
cdcjb - 6 years ago
@Ybother, if somebody followed your simple rules -- every stop sign is an octagon or any octagon is a stop sign -- then the AI would be even easier to trick.
Ybother - 6 years ago
Well, actually, yes, every stop sign IS an octagon, but you misunderstood what I meant.
In most of the world, traffic signs use symbols, not words, to convey their intent. The shape and color of the sign defines its usage. Thus, warning signs are triangular with a red border, prohibitory signs are round with a red border or completely red, informational signs are square, etc. The Stop sign is almost universally octagonal, red, and has the word STOP (or an upraised palm) on it. A Yield sign is the only triangular sign positioned with the base up.
Proper programming must include error checking and handling. Obviously, the software did not have even basic error checking built into it. If I see a red triangle, but can't make out what's inside, I'll still know it's a warning of some sort, and not think it's a speed limit sign. This software saw a sign, ignored the fact that it was octagonal in shape, and decided it was a speed limit sign - and a very specific one at that.
So obviously the shape of the sign can't be used, by itself, to determine what the sign is, but taking the shape into consideration when trying to figure out the sign would have prevented this error. Of course, we can continue tricking the program by adding parts to the sign to alter its shape, but that can also confuse a human. And if the AI finds that it can't reconcile the discrepancies, it should go into error handling mode, which should be to bring the vehicle safely to a stop and give the driver control back.
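For what it's worth, a shape cross-check of the kind described here could be sketched with standard image-processing tools. This is only an illustration of the commenter's suggestion, not anything from the paper; the thresholds and label-to-shape table are assumptions, and OpenCV is assumed to be available.

```python
# Sketch of a shape cross-check: estimate the sign's outline and verify it is
# consistent with the classifier's label before acting on the classification.
import cv2

EXPECTED_VERTICES = {"stop": 8, "yield": 3, "speed_limit_45": 4}  # illustrative

def shape_matches(sign_crop_gray, label) -> bool:
    """Return True if the dominant contour's vertex count matches the label."""
    _, binary = cv2.threshold(sign_crop_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    outline = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    return len(approx) == EXPECTED_VERTICES.get(label, len(approx))
```

If `shape_matches` disagrees with the classifier's label, the fallback would be exactly the error handling described above: slow the vehicle and hand control back to the driver.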
Asimov should have added a 0'th Law of Robotics: Recognize that as an AI you are only an ARTIFICIAL intelligence, meant to mimic humans, but are never smarter than them...
WaggsWolf - 6 years ago
This is bull, look at the white paper behind this. The algorithm they are using is a research version and has nothing to do with the actual version used in self-driving cars. Those will be far more advanced, far better trained. They didn't even test with any brand's car. They have no idea if this problem affects any company with cars out there in development or otherwise. Very misleading.