Nvidia scientists have created what they call a "first of its kind" AI system that can learn by observing the actions of a human, and then complete tasks based on the observed patterns.
Researchers said they developed their deep learning system to make it easier to pass instructions to smart systems and reduce the time needed to train and program other AIs or automated bots.
"The key to achieving this capability is leveraging the power of synthetic data to train the neural networks," researchers say.
"Current approaches to training neural networks require large amounts of labeled training data, which is a serious bottleneck in these systems. With synthetic data generation, an almost infinite amount of labeled training data can be produced with very little effort."
The system relies heavily on visual observation. In the proof-of-concept described in their research paper, the Nvidia team filmed a human arranging differently colored cubes and then had the AI arrange cubes in the same order.
Only one demo from the human operator was needed for the robot to carry out the ordered task.
The Nvidia AI was able to perform the task because the researchers deliberately avoided training it on images of real-world objects, which would have made it overly sensitive to camera quality and environmental noise.
Instead, they trained the AI on generic geometrical objects. When fed real-life imagery, the AI would break the video feed down into basic geometrical objects, record and learn from the human's actions, and then perform the ordered task on those abstractions. This way, the system could adapt to any video feed, image quality, or environmental noise.
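The perceive, abstract, and execute loop described above can be sketched in a few lines. This is a toy model under my own assumptions (slot-indexed cubes, a plan expressed as a color ordering); the real system operates on camera frames and robot motions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cube:
    color: str     # abstract attribute kept after perception
    position: int  # slot index, not raw pixels

def abstract_scene(detections):
    """Reduce raw detections (color + pixel x) to generic cubes in slots,
    discarding camera-specific detail such as exact pixel coordinates."""
    ordered = sorted(detections, key=lambda d: d["pixel_x"])
    return [Cube(d["color"], slot) for slot, d in enumerate(ordered)]

def infer_plan(demo_scene):
    """The 'plan' read off the demonstration is just the color ordering
    the human produced."""
    return [c.color for c in sorted(demo_scene, key=lambda c: c.position)]

def execute(plan, scene):
    """Rearrange the current scene's cubes to match the demonstrated order."""
    by_color = {c.color: c for c in scene}
    return [Cube(by_color[color].color, slot) for slot, color in enumerate(plan)]
```

Because the plan is stated over abstract cubes rather than pixels, the same plan transfers to any camera feed from which the cubes can be detected, which is the robustness property the article describes.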
Researchers say this is only the beginning of this type of work, which they detailed in a research paper entitled "Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations."
This is not Nvidia's first rodeo in the world of AI. Its researchers previously created an AI that can generate random lifelike human faces, and another that can fill in holes or reconstruct intentionally removed content in images.