Google's recently launched video classification API is not as smart as people expected, according to new research published by a three-person team from the University of Washington.

In a paper published last Friday, researchers presented a method that successfully fools Google's new Cloud Video Intelligence API, a machine learning system the company launched exactly a month ago.

This new API, currently in beta testing, uses powerful deep-learning models, built using frameworks like TensorFlow, to analyze videos and classify them based on their content.

Normal video classification

The trick, according to the researchers, was to insert an unrelated image into the video every two seconds.

These images were enough to fool Google's new API, which detected them as dominant among the rest of the video's frames and used them to classify the video into the wrong categories.
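The attack boils down to simple frame arithmetic: given the video's frame rate, overwrite one frame at the start of every two-second window with the adversarial image. The sketch below illustrates only that index calculation; the function names, the 25 fps figure, and the in-memory frame list are assumptions for illustration, since the paper's actual tooling (e.g. ffmpeg or OpenCV for video I/O) is not described in the article.

```python
def insertion_indices(total_frames: int, fps: float, interval_s: float = 2.0) -> list:
    """Frame indices to overwrite with the adversarial image:
    one frame at the start of each `interval_s`-second window."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))


def apply_attack(frames: list, adversarial_frame, fps: float = 25.0) -> list:
    """Return a copy of `frames` with the adversarial image
    substituted every two seconds of playback."""
    out = list(frames)
    for i in insertion_indices(len(out), fps=fps):
        out[i] = adversarial_frame
    return out
```

Because only a tiny fraction of frames is touched (one in fifty at 25 fps), the video looks unchanged to a human viewer, yet the classifier's output shifts toward the inserted image.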

Researchers used the following images during their tests.

Images inserted in test videos

The results speak for themselves: the Google video classification AI tagged the videos based primarily on the fake images secretly inserted into the video feed.

Results for manipulated videos

Currently, even though still in beta, this new AI-based video classification system is being tested by companies such as Disney (entertainment), Airbus (avionics), and Ocado (supermarket chain).

Flaws have real world impact if left unfixed

Researchers say they carried out this experiment because this flaw, if left inside the Google API, would allow an adversary to bypass the video classification system.

For example, this flaw could be used to mask ISIS propaganda videos uploaded on YouTube. Misclassifying these videos would result in the videos reaching a wider audience when they're presented to users as related video suggestions.

"Note that we could deceive the Google’s Cloud Video Intelligence API, without having any knowledge about the learning algorithms, video annotation algorithms or the cloud computing architecture used by the API," researchers said. "The success of the image insertion attack shows the importance of designing the system to work equally well in adversarial environments."

Bleeping Computer readers interested in the researchers' work can read their paper in full here. The paper is entitled "Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos," and is authored by Hossein Hosseini, Baicen Xiao, and Radha Poovendran.
