Researchers at the University of Washington have demonstrated how easily Google's new AI tool for video search can be deceived. The tool uses machine learning to automatically analyze and label video content.
Google recently released the Cloud Video Intelligence API, a tool that helps developers build apps that can automatically recognize objects and search for content within videos. With it, users can search every moment of every video file in their catalog.
The tool can quickly annotate videos stored in Google Cloud Storage, helping users identify key entities in their videos and when they occur in the footage. It can also separate signal from noise by extracting relevant information at the video, shot, or frame level.
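As a rough illustration of that workflow, the sketch below requests label detection on a video in Cloud Storage using the Python client for the Video Intelligence API. The bucket path and timeout are placeholder values, and the exact client surface may differ between library versions.

```python
# Minimal sketch: ask the Video Intelligence API to label a video
# stored in Cloud Storage, then print shot-level labels.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/animals.mp4",  # hypothetical path
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)  # annotation runs asynchronously

# Labels come back at video-, shot-, and frame-level granularity;
# here we only walk the shot-level annotations.
annotation = result.annotation_results[0]
for label in annotation.shot_label_annotations:
    for segment in label.segments:
        print(label.entity.description, f"confidence={segment.confidence:.2f}")
```

Each returned label carries a confidence score and the time segments where the entity appears, which is what makes per-moment video search possible.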
TechXplore reported that UW electrical engineers and security researchers showed how the API can be deceived by slightly manipulating the videos. An attacker can subtly modify a video by periodically inserting an image into it, so that the system returns only labels related to the inserted image.
In the study, they inserted an image of a car once every 50 frames into a video about animals. The system's returned labels suggested the video was about an Audi rather than about animals.
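The researchers' own tooling is not reproduced in the article, but the perturbation it describes is simple to sketch: overwrite every 50th frame of a source video with a target image. The file names below are hypothetical, and OpenCV is assumed for frame I/O.

```python
# Sketch of the frame-insertion perturbation described in the study:
# replace every 50th frame of a video with a target image.
import cv2

INTERVAL = 50  # one inserted frame per 50 original frames, as reported

cap = cv2.VideoCapture("animals.mp4")  # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

car = cv2.imread("car.jpg")            # hypothetical target image
car = cv2.resize(car, (width, height))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("adversarial.mp4", fourcc, fps, (width, height))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Every INTERVAL-th frame is swapped for the target image; all
    # other frames pass through untouched, so to a human viewer the
    # video still looks like ordinary animal footage.
    out.write(car if frame_idx % INTERVAL == 0 else frame)
    frame_idx += 1

cap.release()
out.release()
```

Because only about two percent of the frames are altered, the change is barely noticeable to a person watching the video, yet it was enough to dominate the API's label output.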
This was the same team that found Google's machine-learning-based Perspective platform, designed to identify and weed out comments from Internet trolls, can be easily deceived by typos. Misspelling offensive words or adding unnecessary punctuation can lead the system to ignore the offensive content.
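The article does not spell out the team's exact perturbations, but the two tricks it names are easy to illustrate with a toy function; the scoring system itself is the target of the attack and is not reproduced here.

```python
# Toy illustration of the perturbations mentioned: misspell a flagged
# word or pad it with punctuation. Not the researchers' actual code.
def perturb(word: str) -> list[str]:
    """Generate simple adversarial variants of a single word."""
    variants = []
    if len(word) > 2:
        # Swap two interior characters, e.g. "idiot" -> "iidot".
        chars = list(word)
        chars[1], chars[2] = chars[2], chars[1]
        variants.append("".join(chars))
    # Insert a stray dot, e.g. "idiot" -> "i.diot".
    variants.append(word[:1] + "." + word[1:])
    return variants

print(perturb("idiot"))  # ['iidot', 'i.diot']
```

A human reader still recognizes the insult, but a classifier trained on correctly spelled text may score the variant as inoffensive.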
Radha Poovendran, chair of the UW electrical engineering department, director of the Network Security Lab, and senior author of the study, said that machine learning systems are generally designed to deliver their best performance under ideal conditions. In the real world, however, these systems are susceptible to intelligent subversion or attacks. He added that systems need to be robust and resilient to adversaries as AI products make their way into everyday applications.