The Best Paper Award from Intetain 2015

‘A new way to interact with cultural heritage sites’, that’s what a team of researchers from Italy (Università degli Studi di Modena e Reggio Emilia) proposed at Intetain 2015, the 7th International Conference on Intelligent Technologies for Interactive Entertainment, which took place in Torino, Italy, on June 10–12, 2015.

The research, entitled ‘Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences’ and led by Stefano Alletto, Davide Abati, Giuseppe Serra and Rita Cucchiara, won the Best Paper Award of the conference.

Observing the constantly growing interest in cultural cities, the researchers identified a demand for new multimedia tools and applications that could enrich the tourist experience. In the paper, they propose an egocentric vision system to enhance the tourist’s cultural heritage experience.

The main idea behind this framework is that a tourist may not be able to immediately identify all the details in an artwork and may have to rely on a guide to do so. For this reason, the researchers’ solution presents the user with a detailed view of a historical building and allows the visitor to browse through its architectural details. The system the researchers designed consists of two main components: the retrieval of similar images and the user’s absolute localization.

To be effective, the system requires the ability to see what the user sees, from a perspective that closely resembles his own. To achieve this, the researchers turned to wearable devices and head-mounted cameras, systems dealing with an egocentric perspective, which are attracting growing interest in the research community. A camera follows the user’s path, effectively recording the objects he interacts with, the people and events he focuses his attention on and, in short, everything that is relevant to him. Seeing in real time what the user sees allows the method to provide information that directly relates to his focus of attention, effectively guiding the visit in a natural and intuitive way.

Using a wearable computing system and a glass-mounted camera, the user can ask the system for additional information about the scene he is looking at. This requires almost no effort from the tourist, since the system is already looking at what he is. Furthermore, using the visitor’s smartphone as a screen, the method can display a view of the captured artwork with the noteworthy details highlighted.
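The paper itself does not include code, but to give a flavour of what the image-retrieval component involves, here is a minimal sketch in Python using OpenCV: an egocentric frame is matched against a small database of reference photos of architectural details via local features and a ratio test. The feature choice (ORB), file names, and thresholds are illustrative assumptions, not the authors’ actual pipeline.

```python
# Minimal sketch of an image-retrieval step: match the wearer's current
# frame against a database of landmark photos using local features.
# All names, paths and thresholds below are hypothetical.
import cv2

orb = cv2.ORB_create(nfeatures=1000)       # local feature detector/descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # brute-force matcher for binary descriptors

def describe(path):
    """Load an image in grayscale and compute its ORB descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc

def count_good_matches(query_desc, db_desc, ratio=0.75):
    """Lowe's ratio test: keep matches clearly better than the runner-up."""
    matches = matcher.knnMatch(query_desc, db_desc, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

# Hypothetical database of reference photos of architectural details.
database = {
    "rose_window.jpg": describe("rose_window.jpg"),
    "portal.jpg": describe("portal.jpg"),
}

query = describe("egocentric_frame.jpg")   # frame from the glass-mounted camera
best = max(database, key=lambda name: count_good_matches(query, database[name]))
print("Most similar detail:", best)
```

In a real deployment, the matching would run on the processing center against a much larger indexed collection, and the best-matching detail would be sent back to the visitor’s smartphone for display.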

To sum up, the researchers propose a system that can retrieve architectural details from images and provide tourists with an augmented experience. They aimed to design a system capable of assisting the visitor in an unconstrained outdoor tour by using a collection of wearable egocentric vision devices and a processing center. User evaluation tests confirm that the proposed system reached its goal: it proved to be an effective and enjoyable assistive technology.

The research will be published in the IEEE Xplore Digital Library.