The original news release was published by KU Leuven.
It wouldn’t be an overstatement to say that the development of autonomous vehicles is all the rage. The Rand Corporation recently reported that it is nigh impossible to reliably prove the complete safety of self-driving cars through test drives alone, given the huge number of test-driven miles required for statistically valid results. Rand has suggested the need for better methods of demonstrating the safety of these vehicles, and KU Leuven researchers may just be able to contribute. A new study by Jonas Kubilius and Hans Op de Beeck shows that, by using deep neural networks (DNNs), machines with image recognition technology can learn to respond to unfamiliar objects as humans would, showing elementary traits of what we know as intuition.
A self-driving car may thus be able to make human-like decisions under poor visibility, such as fog or heavy rain, when confronted with a distorted or unfamiliar obstacle. Current image recognition technology, which is trained to recognize a fixed set of objects, would struggle under such conditions, because it cannot assess what the unfamiliar object looks like and then act accordingly, as a live person would.
“We found that deep neural networks are not only good at making objective decisions (‘this is a car’), but also develop human-level sensitivities to object shape (‘this looks like …’),” Jonas Kubilius explains. “In other words, machines can learn to tell us what a new shape – say, a letter from a novel alphabet or a blurred object on the road – reminds them of. This means we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours.”
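To make the idea concrete, here is a minimal, hypothetical sketch (not the authors’ own code) of how one might ask a pre-trained network what an unfamiliar image “reminds it of”: feed the image through a DNN trained on everyday photographs and read out the familiar categories it activates most strongly. The model choice and the image file name below are illustrative assumptions.

```python
# Illustrative sketch only, not the study's pipeline: ask a network trained on
# everyday photographs which familiar categories an unfamiliar shape resembles.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2   # assumed off-the-shelf model
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # preprocessing these weights expect

# "unfamiliar_shape.png" is a placeholder for any novel or distorted object image.
img = preprocess(Image.open("unfamiliar_shape.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(img).softmax(dim=1).squeeze(0)

# The five familiar categories the network says the shape looks most like.
values, indices = probs.topk(5)
for p, idx in zip(values.tolist(), indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {p:.2f}")
```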
Kubilius and Op de Beeck have demonstrated that sensitivity to shape features, characteristic of human and primate vision, emerges in DNNs trained for generic object recognition on natural photographs. They have shown that these models explain human judgements of shape for several behavioral and neural benchmark stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained on such stimuli, the DNNs develop an acute sensitivity to minute variations in shape and to non-accidental properties that have long been thought to form the basis of object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgements.
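For readers who want a feel for how such a comparison can be set up, the sketch below, a simplified assumption rather than the study’s actual analysis, extracts features from a pre-trained DNN for a small stimulus set and checks how well the model’s pairwise similarities track human shape-similarity judgements; the image paths and human ratings are placeholders.

```python
# Simplified, hypothetical comparison of DNN feature similarity with human
# shape judgements; not the authors' analysis code.
import itertools
import torch
from torchvision import models
from PIL import Image
from scipy.stats import spearmanr

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.fc = torch.nn.Identity()   # keep penultimate-layer features, drop the classifier
model.eval()
preprocess = weights.transforms()

def feature_vector(path):
    """Return the network's feature vector for one stimulus image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

# Placeholder stimuli and one human similarity rating per image pair (6 pairs for 4 images).
stimuli = ["stim_01.png", "stim_02.png", "stim_03.png", "stim_04.png"]
human_similarity = [0.9, 0.2, 0.4, 0.3, 0.7, 0.1]

feats = [feature_vector(p) for p in stimuli]
model_similarity = [
    torch.nn.functional.cosine_similarity(feats[i], feats[j], dim=0).item()
    for i, j in itertools.combinations(range(len(stimuli)), 2)
]

# Rank correlation: do the network's similarities order the pairs as humans do?
rho, _ = spearmanr(model_similarity, human_similarity)
print(f"Spearman correlation with human judgements: {rho:.2f}")
```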
Does that mean we may soon be able to safely hand over the wheel? “Not quite,” says Kubilius. “We’re not there just yet. And even if machines are at some point equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes, although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars did make a mistake, their decisions would be at least as reasonable as ours.”