One smart display is not enough

Today we revisit a very innovative text chosen as the Best Paper of MobiCASE 2015. The event, the 7th EAI International Conference on Mobile Computing, Applications and Services, took place in Berlin, Germany, on 12–13 November 2015. The paper, titled Interactively Set up a Multi-display of Mobile Devices, proposes a truly innovative display-interaction technique. Its authors, Peter Barth and Manuel Pras, touch upon a general trend in the smartphone industry: the quest for a bigger display. Tablets became popular on the market partly because of their larger screens. There is, however, another way of getting a ‘wider’ experience.

Multiple devices can become a part of one joint display.

Smartphones are widely available and affordable, so the paper assumes either a group of people, each with a device, or one person with multiple smart devices. The vital piece of information for creating a joint multi-display is the precise position of each device. The two researchers from RheinMain University of Applied Sciences propose a method that relies on the error-detection capabilities of the human visual system rather than on computer vision, which keeps the interaction between devices simple. Communication between devices is based on Blaubot, so it could run over Wi-Fi or Bluetooth; because of the high bandwidth requirements, Wi-Fi was chosen as the primary connection type.

For the system to work, the physical width and height of each device, as well as its pixel resolution (in the x and y directions in portrait mode), need to be known. The devices are then placed in one or more rows. The user swipes across the devices (left to right) for the initial position calculation, and vertically to detect additional rows. Devices can be at an angle to each other and separated by gaps of up to 7 centimeters. Afterwards, colored grids appear across the devices on the multi-display, and the user fine-tunes the alignment with simple translation and rotation gestures. The changes are shown in real time on the touched device and its neighbors, which makes the interface easy to use. Once the user is finished, images, video, and games can span the multi-display.
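To make the setup step concrete, here is a minimal sketch of how a swipe could yield one device's position relative to another. It is illustrative only: it assumes the finger crosses the bezel gap at a roughly constant speed, which is a simplification and not the authors' exact model, and all names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Per-device parameters the system needs up front:
    physical size and pixel resolution in portrait mode."""
    width_cm: float
    height_cm: float
    px_x: int
    px_y: int

    def px_to_cm(self, x_px: float, y_px: float) -> tuple:
        # Map a touch coordinate (pixels) onto physical centimeters.
        return (x_px * self.width_cm / self.px_x,
                y_px * self.height_cm / self.px_y)

def estimate_origin_cm(left: Device, exit_px, t_exit,
                       right: Device, entry_px, t_entry,
                       swipe_speed_cm_s: float) -> tuple:
    """Estimate where the right device's top-left corner sits in the
    left device's coordinate frame, assuming a horizontal swipe that
    keeps a constant speed while crossing the gap between screens."""
    ex, ey = left.px_to_cm(*exit_px)      # where the finger left screen A
    nx, ny = right.px_to_cm(*entry_px)    # where it reappeared on screen B
    gap = swipe_speed_cm_s * (t_entry - t_exit)  # distance covered off-screen
    return (ex + gap - nx, ey - ny)

# Hypothetical example: two identical phones, a 1 cm gap,
# finger moving at 20 cm/s.
a = Device(6.2, 11.0, 720, 1280)
b = Device(6.2, 11.0, 720, 1280)
origin = estimate_origin_cm(a, (720, 640), 0.50, b, (0, 640), 0.55, 20.0)
print(origin)  # device B sits about 7.2 cm right of A's origin, vertically aligned
```

The subsequent fine-tuning gestures would then only need to nudge this estimate by small translations and rotations, which is why the visible grid and the human eye suffice as the error detector.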

For the evaluation of this design, tests were conducted with 6 participants. Low-end devices were used to demonstrate the modest hardware requirements of a multi-display. The results were promising, showing relatively small offsets. Finally, Peter Barth and Manuel Pras suggest that further research could use additional sensors to speed up the setup procedure.

The full text of the paper is available on Springer.