Did a self-learning AI just turn the Turing test on its head?

Original news release was issued by The University of Sheffield.

A fair warning is due: the latest development in Artificial Intelligence research is a tad eerie. Computer AI is now capable of learning by simple observation, with no specification of “what to observe” necessary. We are officially a step closer to machines passing the Turing test.

The Turing test is a well-known experiment proposed by Alan Turing in 1950 to determine whether a computer has achieved intelligence indistinguishable from a human's. In a Turing test, an interrogator converses with two subjects – one a person, the other a computer. If the interrogator consistently fails to correctly determine which of the two is the computer, the computer has passed the test and is considered to have human-level intelligence.
Researchers at the University of Sheffield have turned the Turing test into an inspiration for AI development, making it possible for machines to learn how natural or artificial systems work simply by observing them, without being told what to look for. This could mean advances in technology, with machines able to predict, among other things, human behaviour.
Dr Roderich Gross from the Department of Automatic Control and Systems Engineering at the University of Sheffield explained that the advantage of the approach, called ‘Turing Learning’, is that humans no longer need to tell machines what to look for:

“Our study uses the Turing test to reveal how a given system – not necessarily a human – works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm – made of learning robots – under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators.”

He added: “Unlike in the original Turing test, however, our interrogators are not human but rather computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. They are rewarded for correctly categorising the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator – making it believe their motion data were genuine – receive a reward.”
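The reward structure described above can be sketched as a toy coevolutionary loop. Everything below is an illustrative assumption, not the researchers' actual implementation: the “swarm” is reduced to a single agent whose hidden rule is a fixed step size, models are candidate step sizes, and interrogators are simple classifiers that judge a motion trace genuine if its mean step falls inside a band.

```python
import random

random.seed(1)

TRUE_STEP = 0.8  # hidden movement rule of the "original swarm" (unknown to the learner)

def motion_trace(step, n=20):
    """Toy motion data: positions of an agent taking noisy steps of a fixed size."""
    pos, trace = 0.0, []
    for _ in range(n):
        pos += step + random.gauss(0, 0.05)
        trace.append(pos)
    return trace

def judge(classifier, trace):
    """A toy 'interrogator': judge a trace genuine if its mean step lies in a band."""
    center, width = classifier
    mean_step = trace[-1] / len(trace)
    return abs(mean_step - center) < width

def evolve(pop, reward, sigma=0.05):
    """Keep the better half of the population; refill with mutated copies."""
    ranked = sorted(range(len(pop)), key=lambda k: reward[k], reverse=True)
    best = [pop[k] for k in ranked[:len(pop) // 2]]
    def mutate(ind):
        if isinstance(ind, tuple):
            return tuple(x + random.gauss(0, sigma) for x in ind)
        return ind + random.gauss(0, sigma)
    return best + [mutate(b) for b in best]

models = [random.uniform(0.0, 2.0) for _ in range(10)]          # candidate step sizes
classifiers = [(random.uniform(0.0, 2.0), random.uniform(0.1, 1.0))
               for _ in range(10)]                              # (center, width) bands

for generation in range(40):
    m_reward = [0] * len(models)
    c_reward = [0] * len(classifiers)
    for j, clf in enumerate(classifiers):
        if judge(clf, motion_trace(TRUE_STEP)):
            c_reward[j] += 1               # genuine data judged genuine: reward interrogator
        for i, step in enumerate(models):
            if judge(clf, motion_trace(step)):
                m_reward[i] += 1           # model fooled this interrogator: reward model
            else:
                c_reward[j] += 1           # counterfeit judged counterfeit: reward interrogator
    models = evolve(models, m_reward)
    classifiers = evolve(classifiers, c_reward)

print(f"inferred step size: {models[0]:.2f} (hidden rule: {TRUE_STEP})")
```

As in the robot experiment, neither population is told what “genuine” motion looks like: interrogators earn reward only for sorting genuine from counterfeit traces, models only for being mistaken for the original, and the pressure between the two is what drives the models toward the hidden rule.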
“Imagine you want a robot to paint like Picasso. Conventional machine learning algorithms would rate the robot’s paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with. Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint.”
“Scientists could use it to discover the rules governing natural or artificial systems, especially where behaviour cannot be easily characterised using similarity metrics,” he said.
“Computer games, for example, could gain in realism as virtual players could observe and assume characteristic traits of their human counterparts. They would not simply copy the observed behaviour, but rather reveal what makes human players distinctive from the rest.”
The discovery could also be used to create algorithms that detect abnormalities in behaviour. This could prove useful for the health monitoring of livestock and for the preventive maintenance of machines, cars and airplanes.
Turing Learning could also be used in security applications, such as for lie detection or online identity verification.