Handwritten notes fool state-of-the-art machine vision AI
OpenAI, the artificial intelligence research company founded by Elon Musk and Sam Altman, has created a state-of-the-art machine vision AI capable of recognizing over 1,000 different items in an image. Yet a single handwritten note can trick it completely, leaving researchers scratching their heads at how this is possible. A handwritten note is far simpler than the thousands of real images the system was tested on, so why is it able to derail such a capable model?
In what might be a sobering wake-up call for AI developers, OpenAI’s state-of-the-art machine vision AI has been fooled by handwritten notes. A blog post published on the company’s website explains in detail how someone could exploit this flaw by placing a handwritten note in the scene to fool the system.
Notes trick ImageNet Challenge winner
Handwritten notes fooled a state-of-the-art machine vision AI developed by OpenAI. The OpenAI system consists of an algorithm that analyzes images and learns to recognize and label different objects.
When the team uploaded new handwritten notes, the system could not tell them apart from pictures of solid black ink with no additional shapes or textures.

This inability to distinguish handwritten notes from other images of solid black ink was caused by an overly aggressive data-size-reduction (downscaling) step: the developers of this type of machine vision AI had made assumptions about which kinds of detail matter, based on the data most common in current images.
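As a toy illustration of the kind of failure described above (a minimal sketch, not OpenAI’s actual preprocessing pipeline), the snippet below average-pools a small grayscale image. An aggressive reduction factor collapses a note with visible pen strokes into a nearly uniform patch that is almost indistinguishable from solid ink:

```python
# Toy illustration (not OpenAI's pipeline): aggressive downscaling can
# erase the strokes that distinguish a handwritten note from solid ink.

def downscale(img, factor):
    """Average-pool a 2D grayscale image (list of rows) by `factor`."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [img[i][j]
                     for i in range(r, min(r + factor, h))
                     for j in range(c, min(c + factor, w))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# 8x8 "note": dark ink background (1.0) with a few light strokes (0.0)
note = [[1.0] * 8 for _ in range(8)]
for c in range(1, 7):
    note[3][c] = 0.0                    # one horizontal pen stroke

solid = [[1.0] * 8 for _ in range(8)]   # solid black ink, no strokes

small_note = downscale(note, 8)         # collapses to a single pixel
small_solid = downscale(solid, 8)

print(small_note)    # [[0.90625]] -- nearly identical to...
print(small_solid)   # [[1.0]]
```

After reduction, the two images differ by less than a tenth of the intensity range, so any downstream classifier sees them as essentially the same picture.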
How does this relate to business?
State-of-the-art machine vision AI was tricked by handwritten notes in a recent study. This is the latest example of what can happen when machine intelligence technology encounters something new. Let’s look at why this may be troubling and how you can protect your business from being fooled.
The research, conducted by engineers from Tel Aviv University in Israel and Nvidia in Silicon Valley, involved applying watermarks to images that are designed to fool deep-learning AI models. The deep-learning algorithms were fed thousands of natural images with and without these watermarks, and until now they had been able to identify the marks within their image sets with high accuracy.
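To make the with/without-watermark setup concrete, here is a minimal sketch (an assumed toy setup, not the cited study’s method): images that carry a fixed watermark pattern correlate more strongly with that pattern than clean images do, which is the signal a detector learns to pick up.

```python
# Toy sketch (assumed setup, not the cited study): a fixed watermark
# pattern is detectable by correlating images against the pattern.
import random

random.seed(0)

WATERMARK = [0.2, -0.2, 0.2, -0.2]   # tiny 4-pixel watermark pattern

def make_image(with_mark):
    """A random 4-pixel 'image', optionally carrying the watermark."""
    img = [random.random() for _ in range(4)]
    if with_mark:
        img = [p + w for p, w in zip(img, WATERMARK)]
    return img

def correlation(img):
    """Dot product between the image and the watermark pattern."""
    return sum(p * w for p, w in zip(img, WATERMARK))

def avg(xs):
    return sum(xs) / len(xs)

marked = [correlation(make_image(True)) for _ in range(1000)]
clean = [correlation(make_image(False)) for _ in range(1000)]

# Marked images score higher on average, so a simple threshold on the
# correlation separates the two groups with high accuracy.
print(avg(marked), avg(clean))
```

The gap between the two averages is exactly what a learned detector exploits; the point of the attack research is that such patterns can also be crafted to mislead the model rather than inform it.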
Where can you apply this?
Due to a previously unreported vulnerability in Google’s cloud system, anyone with decent handwriting can now fool state-of-the-art machine vision AI. Ordinary humans have, in effect, become the hackers who could bring down such a system if no one acts on this new threat. Do not take this lightly, or your business may be among the next victims of a handwritten note.
How did they do it?
Humans had figured out how to make handwritten notes fool the AI, and this is just one of the many ways in which AI needs to be safeguarded against malicious exploitation.

The OpenAI scientists, working at their lab in San Francisco, California, came up with a set of 62 different handwritten notes to use as test data.

At first glance, these notes looked as if they had been created by humans rather than by machines. The set was given to OpenAI’s state-of-the-art vision GAN, an algorithm capable of generating new images of its own based on the datasets fed into it. The samples are handwritten digits, covering all ten digits, and an average person can read them just as easily as numbers drawn in pen or pencil.
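A minimal sketch of how handwriting variation can confuse digit recognition (an assumed toy setup, not OpenAI’s code): a nearest-neighbor matcher compares tiny glyphs against stored templates, and a “1” written with a long top serif and a slanted stem lands closer to the “7” template than to the “1” template.

```python
# Toy nearest-neighbor digit matcher (assumed setup, not OpenAI's code).
# Templates are 3x3 binary glyphs; a sample is assigned to the template
# with the fewest differing pixels (Hamming distance).

TEMPLATES = {
    1: (0, 1, 0,
        0, 1, 0,
        0, 1, 0),
    7: (1, 1, 1,
        0, 0, 1,
        0, 0, 1),
}

def hamming(a, b):
    """Count of positions where the two glyphs differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(sample):
    """Return the digit whose template is closest to the sample."""
    return min(TEMPLATES, key=lambda d: hamming(TEMPLATES[d], sample))

clean_one = (0, 1, 0,
             0, 1, 0,
             0, 1, 0)

# A "1" handwritten with a long top serif and a slanted stem:
serif_one = (1, 1, 1,
             0, 0, 1,
             0, 1, 0)

print(classify(clean_one))   # 1
print(classify(serif_one))   # 7 -- the serif pushes it toward "7"
```

This 1-versus-7 confusion is a classic failure mode of template-style recognition, and it shows why legibility to a human does not guarantee legibility to a model.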
Limitations and implications for A.I. research
The OpenAI team created a system that solves computer vision tasks, yet handwritten notes managed to fool this state-of-the-art machine vision AI. Even though this is an extreme case and does not by itself represent broader risks, it still points to limitations in machine vision that future research should address.
Limitations and implications for businesses
This AI can be fooled by handwritten notes, and the implications for the security industry are alarming.
The OpenAI project has shown how one of the most powerful state-of-the-art machine vision AIs in existence is easily tricked: attach a handwritten note to an object, and the model can report what the note says rather than what it sees.
OpenAI’s researchers also created a new framework, called JANUS, that was able to trick an AI trained with TensorFlow into seeing what it was not looking for.