Researchers Develop Robot That Can Learn From Scratch, Like A Child

The machine picks up (hears) sound stimuli and associates them with a simultaneous video feed of the speaker.

Two researchers from the Norwegian University of Science and Technology (NTNU) have developed a robot that can learn from scratch. The algorithms used to handle aspects of image and sound processing have biological roots.

Øyvind Brandtsegg, a music professor at NTNU, said that the project was still in its preliminary stages, and that accurately modeling all parts of a living human brain would take a while.

The machine is known as [self]. It interprets sound cues via a system modeled on the human ear, and recognizes images using a digital system based on the nerve cells that manage sensory impressions in the human brain. The machine is designed to learn solely from sensory stimuli and has no pre-programmed knowledge or information – replicating the learning process of a child in its early years.
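The article does not specify which algorithms the researchers used, but a common biologically inspired front end for machine hearing is a bank of filters spaced on a perceptual frequency scale, roughly mimicking the cochlea's frequency analysis. The Python sketch below is purely illustrative of that general idea – the names (mel_filterbank, ear_features) and parameters are our own, not taken from the NTNU system.

```python
import numpy as np

def mel(f_hz):
    """Hz -> mel, a perceptual pitch scale (rough model of the ear)."""
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def inv_mel(m):
    """Mel -> Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=20, n_fft=512, sample_rate=16000):
    """Triangular filters spaced evenly on the mel scale: a crude
    stand-in for the cochlea's frequency analysis."""
    edges_hz = inv_mel(np.linspace(0.0, mel(sample_rate / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges_hz / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        if mid > lo:  # rising edge of the triangle
            fb[i, lo:mid] = np.linspace(0.0, 1.0, mid - lo, endpoint=False)
        if hi > mid:  # falling edge of the triangle
            fb[i, mid:hi] = np.linspace(1.0, 0.0, hi - mid)
    return fb

def ear_features(frame, fb):
    """Log filterbank energies of one audio frame: the 'ear' output."""
    n_fft = (fb.shape[1] - 1) * 2
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft)) ** 2
    return np.log(fb @ spectrum + 1e-10)

# One 32 ms frame of fake audio at 16 kHz
frame = np.random.randn(512)
features = ear_features(frame, mel_filterbank())
print(features.shape)  # (20,)
```

Each incoming frame is thus reduced to a compact feature vector, which is the kind of representation a learning system could then associate with features extracted from the video feed.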

The project is a collaborative effort between Brandtsegg and postdoc Axel Tidemann, who works at the Department of Computer and Information Science. Since the machine requires a complex integrative system, Tidemann's expertise as a software programmer was an important contribution.

“We understand just enough of each other’s fields of study to see what is difficult, and why,” remarked Brandtsegg.

Establishing Associations Between Sound And Image

At the start of the project, the robot knew nothing. It picks up (hears) sound stimuli and associates them with a simultaneous video feed of the speaker.

So, if a person is speaking, the robot picks up the sounds that he or she emphasizes. In response, it plays back other sounds it has associated with them – a neural projection of how sound is linked to images in our brain. It doesn’t display images – it demonstrates how its ‘brain’ connects sounds to certain images.
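One simple, hypothetical way to realize such a sound-to-image projection is a Hebbian associative memory: features that occur together are wired together, so a heard sound later evokes the image features it co-occurred with. The sketch below shows the idea in Python; the class name CrossModalMemory and all its parameters are assumptions for illustration, not the NTNU design.

```python
import numpy as np

class CrossModalMemory:
    """Toy Hebbian store linking audio features to image features.
    Hearing a sound recalls the image features it most often co-occurred
    with -- a stand-in for the 'neural projection' described above.
    (A hypothetical illustration, not the NTNU implementation.)"""

    def __init__(self, n_audio, n_image, rate=0.1):
        self.W = np.zeros((n_image, n_audio))  # association weights
        self.rate = rate

    def observe(self, audio_vec, image_vec):
        """Hebbian update: strengthen links between features that
        occur together across the two modalities."""
        self.W += self.rate * np.outer(image_vec, audio_vec)

    def recall(self, audio_vec):
        """Project a heard sound into image-feature space."""
        return self.W @ audio_vec

# A speaker's voice and face are presented simultaneously...
mem = CrossModalMemory(n_audio=20, n_image=64)
voice, face = np.random.rand(20), np.random.rand(64)
mem.observe(voice, face)
# ...so a similar sound later evokes the associated image features.
evoked = mem.recall(voice)
```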

How The Robot Learns

The robot has been exhibited in Trondheim and Arendal, where interaction with visitors allowed the researchers to see how it learned new information.

Initially, similar sound cues were mixed up and misinterpreted, but the robot got better as its learning progressed. It was able to grasp more varied impressions from different people, and could even filter incoming stimuli. Common inputs were remembered and processed during the robot’s downtime, which Brandtsegg refers to as ‘dreaming’.
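The article doesn’t explain how this ‘dreaming’ works internally; one plausible reading is an experience-replay scheme, where frequent inputs are stored and replayed through the associative memory while the robot is idle. The sketch below, which reuses the hypothetical CrossModalMemory from the earlier example, is an assumption-laden illustration of that idea, not the actual mechanism.

```python
from collections import Counter

class DreamBuffer:
    """Hypothetical sketch of the 'dreaming' phase: frequently seen
    inputs are kept and replayed while the robot is idle, reinforcing
    the most common sound-image associations."""

    def __init__(self, capacity=100):
        self.counts = Counter()  # how often each experience occurred
        self.store = {}          # key -> (audio_vec, image_vec)
        self.capacity = capacity

    def record(self, key, audio_vec, image_vec):
        """Note one sound-image experience during interaction."""
        self.counts[key] += 1
        self.store[key] = (audio_vec, image_vec)
        if len(self.store) > self.capacity:
            # Forget the rarest experience to make room
            rare, _ = self.counts.most_common()[-1]
            del self.store[rare]
            del self.counts[rare]

    def dream(self, memory, replays=10):
        """Replay the most common experiences through an associative
        memory (e.g. the CrossModalMemory sketched earlier)."""
        for key, _ in self.counts.most_common(replays):
            memory.observe(*self.store[key])

# buf = DreamBuffer()
# buf.record("hello-face", voice, face)  # during interaction
# buf.dream(mem)                         # during downtime
```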

Gradually, the robot also learned to connect images and sounds in more complex ways, making appropriate connections on its own.

Development And Improvement

The robot requires constant development and maintenance. Between the two exhibitions in Trondheim and Arendal, the researchers improved how it organizes memories and information, making it possible for the robot to form more instructive associations.

“Every little change we make takes a lot of time, at least if we want to make sure that we don’t destroy any of the things it already has learned,” explained Brandtsegg.

Mind Of Its Own

[self] raises the question of whether it can be called a ‘living’ machine capable of thinking on its own. The robot is different from machines such as IBM’s Deep Blue or those used in industrial work because it is capable of ‘learning’ from external cues. Unlike those machines, [self] isn’t just a product of symbolic reasoning: it uses real-world input to create valuable associations that are learned and remembered.

“Many artificial intelligence (AI) researchers, me included, believe that true intelligence can’t occur in a vacuum – it is a consequence of adapting and living in a dynamic environment,” explained Tidemann. “You could see our intelligence as a by-product of our adaptability. But we believe that the right way to reach for the ‘holy grail’ of AI is to implement biologically inspired models in a machine, let it operate in a physical environment and see if we can observe intelligent behaviour.”

The concept of ‘technological singularity’ describes a point where machines surpass human intellect. Even though this is a far-fetched possibility, Brandtsegg and Tidemann’s aim with [self] is to create a machine that can learn enough to interact with humans as capably and naturally as possible.