How Can Blockchain Prevent Fraud in Payment-Processing Services?

What Is Payment Fraud?

With the growing popularity of online business and marketing, we are unfortunately facing new types of fraud. Fraud in payment-processing services is one of the most significant threats to e-commerce, which is built upon online transactions. It involves identity theft or the illegal takeover of an individual’s payment information to make purchases or withdraw funds. To counter it, companies are building fraud detection on blockchain technologies.
In 2017, the global fraud detection and prevention market was valued at US $16.8 billion. Areas in which fraud detection and prevention are applied include insurance claims, money laundering, electronic payments, and banking transactions, both online and offline.
When discussing deceitful schemes in payment processing, the most common type of scam involves credit cards. As stated above, criminals use a stolen card or card details to make fraudulent purchases or transfer money. The customer whose data is stolen can file a report and, after the disputed transactions are reviewed, receive the money back. In the case of a fraudulent purchase, the retailer or business is penalized and absorbs the loss. It is therefore crucial to take action to protect your commerce from these types of losses.

Use Blockchain to Prevent and Detect Fraud

The principles of blockchain technology allow people to keep an open, transparent, cryptographically secured record of all kinds of transactions committed between two pseudonymous parties. As this record is maintained in a fully decentralized manner, it is independent of local authorities and banks, and is therefore difficult to tamper with. Actions like double spending, a common problem in digital money transfers, are hard to commit because a consensus protocol provides trust. Because blockchain stores information permanently and shares it only between the transacting parties, it provides better security.
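To make the tamper-evidence idea concrete, here is a minimal Python sketch (a toy, not any production blockchain) of a hash-linked ledger: each block commits to the previous block’s hash, so editing any past transaction invalidates every later link.

```python
import hashlib
import json

def block_hash(index, prev_hash, transactions):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "prev": prev_hash, "tx": transactions},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(tx_batches):
    """Build a list of blocks, each linked to its predecessor by hash."""
    chain, prev = [], "0" * 64  # all-zero hash stands in for the genesis link
    for i, txs in enumerate(tx_batches):
        h = block_hash(i, prev, txs)
        chain.append({"index": i, "prev": prev, "hash": h, "tx": txs})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash; any edit to an earlier block breaks later links."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["index"], prev, blk["tx"]) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True
```

Changing a single recorded transaction changes its block’s hash, so `verify_chain` fails from that point onward.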
Blockchain offers a wide range of opportunities, and a great number of companies use it to gain financial security. According to Statista surveys, 23% of companies use the technology for fraud prevention and security clearance; only international money transfers rank higher among its uses.

Advantages of Blockchain Technology in the Prevention of Payment Fraud

The following features of blockchain make it possible to detect and prevent illegal activities in payment processing without human involvement:

  • Permanence. It is practically impossible to disable the system, as it runs on many devices worldwide at the same time; the devices storing the complete transaction history cannot all be compromised at once.
  • Transparency. As a chain of distinct blocks, the system keeps a record of all transactions in each block. Any correction or addition to these records must be verified by the whole network of block validators, machines that follow strict consensus rules. Any illegal interference is noticed promptly, and the parties involved are blocked from making such transactions.
  • Immutability. Blockchain provides significant benefits for fraud detection. As soon as a record is entered into the system, it cannot be deleted or forged.
  • Cryptography. Blockchain technology employs widely adopted cryptographic protocols that protect users’ identities. Validation and confirmation are possible only with unique digital signatures. This information cannot be tampered with or recreated by anyone, due to the randomness involved in its creation.
  • Postponed payments and multisignatures. If you need to pay for a product but do not trust the seller, blockchain allows you to use multi-signature transactions for postponed payments. In this case, the seller receives the money only when the buyer gets the goods. A delivery service (or any other trusted party) can act as an additional level of arbitration, assuring that the buyer has the funds and the seller ships the goods.
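As a rough illustration of the multisignature escrow described above, the toy class below releases funds only once two of the three parties — buyer, seller, and an arbiter such as a delivery service — have signed. The class, its names, and the 2-of-3 threshold are illustrative assumptions, not a real blockchain API.

```python
class EscrowPayment:
    """Toy 2-of-3 multisignature escrow: buyer, seller, and a trusted
    arbiter (e.g. a delivery service); funds release once any two sign."""

    def __init__(self, buyer, seller, arbiter, amount):
        self.parties = {buyer, seller, arbiter}
        self.amount = amount
        self.signatures = set()
        self.released = False

    def sign(self, party):
        """Record a party's signature; return whether funds are released."""
        if party not in self.parties:
            raise ValueError(f"{party} is not a participant")
        self.signatures.add(party)
        if len(self.signatures) >= 2:   # 2-of-3 threshold reached
            self.released = True
        return self.released
```

For example, the buyer’s signature alone does nothing; once the courier confirms delivery and signs as well, the payment is released to the seller.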

Though blockchain technology provides better security, it cannot on its own protect against hacking of your digital wallet or identity theft. To increase protection, blockchain systems are often combined with machine-learning capabilities. This works as an additional layer that analyzes models of users’ behaviour. For instance, personal data might be stolen and reused, but a person’s behaviour pattern is far harder to imitate, as it is essentially unique.
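The behavioural layer can be as simple as flagging transactions that deviate sharply from a user’s own history. The sketch below is a deliberately simplified stand-in for the machine-learning models mentioned above, using a z-score rule rather than any particular product’s algorithm.

```python
from statistics import mean, stdev

def is_anomalous(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from the user's own spending history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold
```

A user who usually spends around 20–25 per purchase would not be flagged for another such purchase, but a sudden 500 transfer would be.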
So, if you want to secure your digital identity and keep it from being tampered with, blockchain technology can protect against fraud cases like these. To sum up, your personal information should be placed in a blockchain framework accessed only by authorized participants who can verify and ensure its validity. Though thefts in payment processing still occur, it is very important to use purpose-built blockchains for businesses, combined with machine-learning software. Such technologies are designed to be resistant to vulnerabilities and grant you greater security.

Insights Serious Games

Serious games in sustainable urban development – Part 2

Missed Part 1? Catch up here.
Coniuncta is an interdisciplinary research team founded at the Warsaw University of Technology by Professor Robert Olszewski. The team’s main goal is to research the application of gamification and design-thinking mechanisms in promoting civic engagement.
Our team sees gamification as an answer to a problem observed in the implementation of participatory mechanisms (such as participatory budgeting) in Polish cities. The Coniuncta team’s research focuses on identifying key gamification and serious-game mechanisms that can be used to foster civic engagement, as well as to provide geo-referenced data on public opinion about various topics and projects.
In 2015 and 2016 our team conducted a series of experiments in the form of workshops held in Warsaw (the capital of Poland) and Płock (a city in central Poland of ca. 120,000 inhabitants). The workshops were organized by the Municipality of Płock for local high school students; in April and June, around 250 students took part in total. The workshop scenario was based on the game City Hall 2.0, using a map of the Płock city centre and a gamified model of urban-planning problems as the main narrative axis.
Analysis of the in-game decisions was the basis for several research papers on the effects of gamification in the urban-planning consultation process [1]. The first conclusions from this project informed the design of another serious game, Spot On: a mobile, browser-based game using real-time GPS location data and real maps (based on the OpenStreetMap API), designed for both workshop use and outdoor gameplay.
The outdoor version of the application uses gamification mechanisms to motivate players to visit various points in the city and leave their opinions on the optimal changes for those places. Other players can see those opinions and give feedback on whether they like or dislike the solutions proposed by the game community.
The Spot On game is aimed mainly at younger players (primary school, high school and university students) as well as other people who enjoy mobile games and travel through the city on various occasions. Its main educational aim is to address the first barrier mentioned in our previous post: lack of civic engagement, due to bad experiences in the past or a generally low level of civic activity.
Spot On promotes being active and sharing opinions in the urban context, presenting them as a fun and easy activity embedded in the game’s engaging narrative. In consequence, the game is meant to build a positive attitude toward social participation in the target age group, which in a few years will actively take part, as urban citizens, in various consultations and participatory projects.
Spot On is currently in final testing and the last phase of development, and will be launched in January in a series of workshops in Warsaw and Dublin.
[1] Łączyński, M., Olszewski, R., Turek, A. (2016) Urban Gamification as a Source of Information for Spatial Data Analysis and Predictive Participatory Modelling of a City’s Development, in: DATA 2016 – Proceedings of the 5th International Conference on Data Management Technologies and Applications, Science and Technology Publications, Lda., Lisbon, pp. 176–182.

Insights Radio Spectrum

Physical Layer of Wireless IoT: Enablers and Issues

The current trend in the telecommunications market is toward connecting all useful everyday objects to the Internet. In this direction, the Internet of Things (IoT) is receiving significant attention from industry and research communities as a key enabler for the Fifth Generation (5G) of wireless communications. It is about connecting all types of physical things/objects/devices to the Internet. IoT is also referred to as the Internet of Everything (IoE), which brings people, data, things and processes together in order to fulfil people’s everyday needs, thus enabling a smart global community. Among the numerous application areas of IoT, some of the important ones are: (i) smart homes, (ii) smart cities, (iii) smart wearables, (iv) smart grids, (v) smart healthcare, (vi) connected cars, (vii) remote industrial process control, (viii) smart retail and supply chains, (ix) smart farming, and (x) smart energy management.
According to CISCO, IoT began sometime between 2008 and 2009, when the number of connected devices exceeded the number of people. Several new devices with different form factors and enhanced capabilities/intelligence emerge in the market each year. It has been forecast that there will be around 8.2 billion handheld or personal mobile-ready devices and 3.2 billion Machine to Machine (M2M) connections by 2020. Based on a 2016 CISCO whitepaper, another important evolution is the massive emergence of smart wearable devices, which may reach around 601 million globally by 2020, growing at a compound annual growth rate of 44 percent. Furthermore, the ongoing migration to IPv6, with its 340 undecillion addresses, will facilitate the integration of smart devices into future wireless networks, thus making the marketplace and the concept of IoE feasible. Although there are other possibilities for communication between IoT devices, such as Ethernet connectivity, Fieldbus and power line communication, this blog focuses on physical layer enablers and issues for wireless connectivity among IoT devices.
IoT will potentially drive the integration of different wireless technologies, and subsequently create a market for new services. Some of the existing PHY layer protocols relevant to wireless IoT are IEEE 802.15.4, IEEE 802.15.6, Bluetooth Low Energy (BLE), EPCglobal, LTE-A, Z-Wave, 6LoWPAN, and Near Field Communication (NFC). Future 5G networks will need to meet the rapidly emerging requirements of IoT applications. Relevant Quality of Service (QoS) requirements include spectral efficiency, energy efficiency, connectivity and latency. To meet these diverse requirements, an efficient, scalable and flexible air interface is required; therefore, the different modules of the Physical (PHY) and Medium Access Control (MAC) layers should be optimized so that they can be configured flexibly according to the technical specifications of each use case. One important aspect in this regard is the design of the PHY layer for IoT-based wireless systems under the practical constraints of energy efficiency, spectral efficiency, cost-effectiveness, and quality of experience.
However, the design of IoT-enabled wireless networks that can deliver a variety of services with desirable quality of experience under energy- and resource-constrained practical wireless scenarios is challenging. In contrast to other wireless communication paradigms, IoT has its own unique features and diverse requirements, such as group-based communication, delay tolerance, small data transmissions, secure connections, monitoring of the surrounding environment/parameters, low cost and low energy consumption. Besides, requirements such as bandwidth, reliability and latency differ widely across existing services. In terms of connectivity, it is challenging to determine which devices need to be connected and which communication technology is suitable to connect them. Furthermore, several other issues, such as dynamic resource allocation, harmful interference mitigation and interoperability of different technologies, have to be investigated while devising communication technologies for IoT.
The PHY layer parameters should be effectively utilized in devising MAC layer and network layer protocols in order to design end-to-end reliable communication systems. The key enabling PHY layer techniques for wireless IoT are dynamic resource allocation (carrier and power), distributed beamforming/space-time block codes, opportunistic/cognitive techniques, orthogonal/non-orthogonal multiple access, low-complexity cooperative techniques, compressive signal processing, spectrum sensing techniques, energy-efficient modulation design, RF energy harvesting techniques, adaptive waveforms, and mmWave technologies. Furthermore, there are several emerging application areas of wireless IoT such as Wireless Body Area Networks (WBANs), Wireless Sensor Networks (WSNs), Device to Device (D2D), Machine to Machine (M2M), Vehicle to Vehicle (V2V), Vehicular Ad Hoc Networks (VANETs), satellite communications, LTE-Advanced and 5G networks. These wireless systems have their own specific characteristics, and it is crucial to understand their PHY layer characteristics in order to deploy a reliable end-to-end system.
A massive number of IoT devices may need to be fabricated in a cost-effective manner. Furthermore, these devices are likely to be battery operated and located in remote areas where charging may be economically infeasible. At the same time, IoT devices are likely to be miniaturized and non-replaceable. This implies that cost, energy, network lifetime and space efficiency will be critical challenges for future IoT devices. In this regard, suitable signal processing tools from areas such as WSNs and radar can be adopted for IoT-based wireless systems.
Future IoT-enabled wireless systems require highly scalable, reliable and available radio spectrum. The existing static spectrum allocation mechanisms, mainly based on orthogonalization of spectrum resources, may not be viable solutions. In this regard, dynamic and non-orthogonal spectrum allocation policies are promising. One possible direction is to allow IoT devices to simultaneously utilize both microwave and mmWave carrier frequency bands (i.e., dual-band connectivity). Additionally, in order to support wideband IoT applications, both contiguous and non-contiguous carrier aggregation may be employed, especially in the microwave frequency bands. In this context, the main challenge is how to efficiently realize wideband IoT capable of simultaneously exploiting the benefits of microwave and mmWave frequency bands.
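A back-of-the-envelope way to see the appeal of dual-band connectivity is to add up the Shannon capacities of the aggregated carriers. The figures below (a 20 MHz microwave carrier and a 400 MHz mmWave carrier, with assumed SNRs) are illustrative numbers, not measurements from any deployed system.

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon bound C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

def aggregate_capacity(carriers):
    """Total capacity over a set of aggregated carriers,
    each given as (bandwidth in Hz, linear SNR)."""
    return sum(shannon_capacity(b, snr) for b, snr in carriers)

# Hypothetical dual-band link: 20 MHz microwave carrier at 20 dB SNR
# plus a 400 MHz mmWave carrier at 10 dB SNR.
microwave = (20e6, 100.0)
mmwave = (400e6, 10.0)
```

Even with its lower SNR, the wide mmWave carrier dominates the total, while the microwave carrier contributes robust baseline capacity; aggregating the two always beats either band alone.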
When considering wideband spectrum utilization for IoT applications, conventional Nyquist-based sampling is not feasible, as it would require very high sampling rates and expensive analog-to-digital converters (ADCs). In this regard, it would be interesting to exploit the inherent time and frequency sparsity caused by the sporadic traffic of IoT-based systems, as well as the spatial sparsity of the multipath environment, and subsequently to apply compressive signal processing in order to devise efficient techniques such as wideband sensing and channel estimation.
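As a toy illustration of compressive recovery, the following pure-Python matching pursuit reconstructs a sparse vector from fewer measurements than unknowns. Real wideband-sensing pipelines use far more sophisticated solvers; the matrix and signal here are made-up examples.

```python
def matching_pursuit(A, y, sparsity):
    """Greedy matching pursuit: recover a sparse x from y = A x.
    A is a list of rows (m x n with m < n); suitable for toy problems."""
    m, n = len(A), len(A[0])
    residual = list(y)
    x = [0.0] * n
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        best_j, best_score = 0, -1.0
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            norm = sum(c * c for c in col) ** 0.5
            score = abs(sum(col[i] * residual[i] for i in range(m))) / norm
            if score > best_score:
                best_j, best_score = j, score
        # project the residual onto that column and subtract
        col = [A[i][best_j] for i in range(m)]
        coeff = sum(col[i] * residual[i] for i in range(m)) / sum(c * c for c in col)
        x[best_j] += coeff
        residual = [residual[i] - coeff * col[i] for i in range(m)]
    return x
```

With 3 measurements of a 1-sparse length-5 vector, the greedy step locks onto the correct column and recovers the coefficient exactly, which is the essence of exploiting sparsity to undersample.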
IoT being a complex paradigm, it faces several technical challenges in wireless communications that need to be addressed through further research and development. More specifically, future research may need to focus on designing low-cost and energy-efficient transceivers, and on incorporating PHY layer parameters into the design of the MAC and network layers, to realize reliable IoT-based wireless systems.

Computational Attention Insights

Applications of Saliency Models – Part Three

Catch up on Parts One and Two.
Applications based on abnormality processing

The third category of attention-based applications concerns abnormality processing. These applications go further than simply detecting areas of interest: they use comparisons between the areas on the saliency maps. Application domains such as robotics and advertising benefit greatly from this category.

Robotics is a very large application domain with varied needs. There are three research axes where robots can take advantage of saliency models: 1) image registration and landmark extraction, 2) object recognition, and 3) robot action guidance.

An important need of a robot is to know where it is located. For this, the robot can use data from its sensors to find landmarks (salient feature extraction) and register images taken at different times (salient feature comparison) to build a model of the scene. The general process of building a view of the scene in real time is called Simultaneous Localization and Mapping (SLAM). Saliency models can greatly help in extracting more stable landmarks that can be compared more robustly [25]. These techniques first compute saliency maps, but the results are not used directly: they need further processing (especially comparison of salient areas).

Another important need of robots, after they establish the scene, is to recognize the objects present in it which might be interesting to interact with. Two steps are needed. First, the robot must detect the object in the scene; here saliency models can help a lot, as they provide information about proto-objects [26] or area objectness [27]. Once objects are detected, they need to be recognized. The main approach is to 1) extract features (SIFT, SURF or others) from the object, 2) filter the features based on a saliency map, and 3) perform the recognition with a classifier (such as an SVM). Papers like [28] and [29] apply this technique, which drastically decreases the number of keypoints needed to perform object recognition. Another approach was used in [30] and [31]: the features mostly present in the searched object, and absent from its surroundings, are learned, and this learning phase provides a new set of weights for bottom-up attention models. In this way, the features that are most discriminative of the searched object get the highest response in the final saliency map. A third approach can be found in [32], where the relative positions of salient points (grouped into cliques) are used for image recognition.
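Step 2 of the pipeline above (filtering features by saliency) can be sketched in a few lines. The keypoint format and threshold below are assumptions for illustration, not the exact scheme of [28] or [29].

```python
def filter_keypoints(keypoints, saliency_map, threshold=0.5):
    """Keep only the keypoints that fall on sufficiently salient pixels.
    Each keypoint is (x, y, descriptor); saliency_map[y][x] is in [0, 1]."""
    return [kp for kp in keypoints
            if saliency_map[kp[1]][kp[0]] >= threshold]
```

Only the keypoints landing on salient pixels survive, so the classifier in step 3 sees a much smaller candidate set.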

Once robots know where they are (attentive visual SLAM) and recognize the objects around them (attentive object recognition), they need to decide what to do next. One of the decisions they must make is where to look next, and this decision is naturally taken based on visual attention. Several robots, such as the iCub robot, implement multi-modal attention: they combine visual and audio saliency in an egosphere, which is used to point the gaze at the next location. An interesting survey on attention for interactive robots can be found in [33].

Another domain also belongs to this abnormal-region-processing category of applications: visual communication optimization. Marketing optimization applies to many practical cases, such as websites, advertising, product placement in supermarkets, signage, and 2D and 3D object placement in galleries.

Among the different applications of automatic saliency computation, marketing and communication optimization is probably one of the closest to market. As it is possible to predict an image’s attention map, which gives the probability that people will attend to each pixel of the image, it is possible to predict where people are likely to look at marketing material such as an advertisement or a website. Attracting customer attention is the first step in the process of attracting people’s interest, inducing desire and need for the product, and finally pushing the client to buy it.

Feng-GUI [34] is an Israeli company focusing mainly on web page and advertising optimization, though its algorithm can also analyze video sequences. AttentionWizzard [35] is a US company focusing mainly on web pages. There are only a few hints about the algorithm used, but it relies on bottom-up features such as colour differences, contrast, density, brightness and intensity, edges and intersections, length and width, and curve and line orientations. Top-down features include face detection, skin colour, and text detection (especially large text). 3M VAS [36] is the only big international player in this field. Very few details are given about the algorithm used, but it can also provide video saliency. They provide attention maps for web page optimization, but also for advertisements with static images or videos, packaging, and in-store merchandising. Eyequant [37] is a German company specialized in website optimization; their algorithm is trained on extensive eye-tracking tests to bring it closer to real eye-tracking for a given task. All these companies claim around 90% accuracy for the first 3 to 5 viewing seconds [38]. They base this claim on comparisons between their algorithms and several existing databases using ROC metrics, always comparing the results with the maximum ROC score obtained by human observers. Nevertheless, for real-life images and for given tasks and emotion-based communication, this accuracy drops dramatically, though it remains usable.
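A simplified version of the ROC-style evaluation these vendors rely on can be written as follows. This AUC variant (saliency at fixated pixels versus saliency at baseline pixels) is one common formulation in the saliency literature, not necessarily the exact metric any particular company uses.

```python
def fixation_auc(scores_at_fixations, scores_at_baseline):
    """AUC-style score: the probability that a fixated pixel receives a
    higher saliency value than a baseline (non-fixated) pixel.
    Ties count as half; 1.0 is perfect, 0.5 is chance."""
    pairs = [(f, b) for f in scores_at_fixations for b in scores_at_baseline]
    wins = sum(1.0 if f > b else 0.5 if f == b else 0.0 for f, b in pairs)
    return wins / len(pairs)
```

A model whose map is always higher on fixated pixels scores 1.0; a map that cannot separate them at all scores 0.5.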

With more and more 3D objects being created, manipulated, sold, or even printed, 3D saliency is a very promising research direction. The main idea is to compute the saliency score of each view of a 3D model: the best viewpoint is the one where the total object saliency is maximized [39]. Mesh saliency was introduced by adapting 2D saliency concepts to the mesh structure [40]. The notions of viewpoint selection and mesh simplification are also related through the use of mesh saliency [41]. While the best-viewpoint application can be used in computer graphics or even 3D mesh compression, marketing is one of the targets of this research topic: more and more 3D objects are shown on the internet, and the question of how to display them optimally is very interesting for marketing.
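The best-viewpoint idea of [39] reduces, in its simplest form, to picking the view with the largest summed saliency. The sketch below assumes the per-view saliency values are already computed; it is an illustration of the selection rule, not the paper’s algorithm.

```python
def best_viewpoint(view_saliency):
    """Return the viewpoint whose rendered view has the largest total
    saliency; `view_saliency` maps viewpoint ids to per-pixel values."""
    return max(view_saliency, key=lambda v: sum(view_saliency[v]))
```

For a product shot, this would automatically prefer the view that exposes the most visually distinctive parts of the object.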


During the last two decades, significant progress has been made in the area of visual attention.
Regarding applications, a three-category taxonomy is proposed here:

  • Abnormality detection: uses detection of the most salient areas.
  • Normality detection: uses detection of the least salient areas.
  • Abnormality processing: compares and further processes the most salient areas.

These categories let us simplify and classify a very long list of applications that can benefit from attention models. We are just at the early stages of using saliency maps in computer vision applications. Nevertheless, the number of already existing applications shows a promising avenue for saliency models, both in improving existing applications and in creating new ones. Indeed, several factors are nowadays moving saliency computation from the lab to industry:

  • Model accuracy has increased drastically over two decades, both for bottom-up saliency and for top-down information and learning.
  • Models working on both videos and images are more and more numerous and provide increasingly realistic results. New models including audio signals and 3D data are being released and are expected to provide convincing results in the near future.
  • The combined improvement of computing hardware and algorithm optimization has led to good-quality saliency computation in real time or near real time.

25. Frintrop, S. and Jensfelt, P. (2008) Attentional landmarks and active gaze control for visual slam. Robotics, IEEE Transactions on, 24 (5), 1054–1065.
26. Walther, D. and Koch, C. (2006) Modeling attention to salient proto-objects. Neural networks, 19 (9), 1395–1407.
27. Alexe, B., Deselaers, T., and Ferrari, V. (2010) What is an object?, in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE, pp. 73–80.
28. Zdziarski, Z. and Dahyot, R. (2012) Feature selection using visual saliency for content-based image retrieval, in Signals and Systems Conference (ISSC 2012), IET Irish, IET, pp. 1–6.
29. Awad, D., Courboulay, V., and Revel, A. (2012) Saliency filtering of sift detectors: application to cbir, in Advanced Concepts for Intelligent Vision Systems, Springer, pp. 290–300.
30. Navalpakkam, V. and Itti, L. (2006) An integrated model of top-down and bottom-up attention for optimizing detection speed, in Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, vol. 2, IEEE, pp. 2049–2056.
31. Frintrop, S., Backer, G., and Rome, E. (2005) Goal-directed search with a top-down modulated computational attention system, in Pattern Recognition, Springer, pp. 117–124.
32. Stentiford, F. and Bamidele, A. (2010) Image recognition using maximal cliques of interest points, in Image Processing (ICIP), 2010 17th IEEE International Conference on, IEEE, pp. 1121–1124.
33. Ferreira, J.F. and Dias, J. (2014) Attentional mechanisms for socially interactive robots–a survey. Autonomous Mental Development, IEEE Transactions on, 6 (2), 110–125.
34. Feng-GUI website, offering automatic saliency maps for marketing material.
35. AttentionWizzard website, offering automatic saliency maps for marketing material.
36. 3M VAS website, offering automatic saliency maps for marketing material.
37. Eyequant website, offering automatic saliency maps for marketing material.
38. Page containing the 3M VAS studies showing algorithm accuracy in general and in a marketing framework.
39. Takahashi, S., Fujishiro, I., Takeshima, Y., and Nishita, T. (2005) A feature-driven approach to locating optimal viewpoints for volume visualization, in Visualization, 2005. VIS 05. IEEE, IEEE, pp. 495–502.
40. Lee, C.H., Varshney, A., and Jacobs, D.W. (2005) Mesh saliency, in ACM Transactions on Graphics (TOG), vol. 24, ACM, pp. 659–666.
41. Castelló, P., Chover, M., Sbert, M., and Feixas, M. (2014) Reducing complexity in polygonal meshes with view-based saliency. Computer Aided Geometric Design, 31 (6), 279–293.

Insights Radio Spectrum

Fifth Generation of Cellular Communications (5G): A Mixture of Technologies!

Cellular technologies have evolved over time starting from the first generation (1G) to the current fourth generation (4G) with the objective of improving several factors such as spectral efficiency, capacity, coverage, power consumption, and user experience. This has been possible with the continuous advances in electronics and signal processing technologies in different segments of the cellular system architecture. Currently, we are in the stage of conceptualizing the next generation of cellular communications, i.e., 5G.

In all previous cellular generations, there was clearly a single dominant technology: Frequency Division Multiple Access (FDMA) for 1G, Time Division Multiple Access (TDMA) for 2G, Code Division Multiple Access (CDMA) for 3G, and Orthogonal Frequency Division Multiple Access (OFDMA) for 4G. For the upcoming 5G, however, no clearly dominant technology has been foreseen yet. Based on current activities in industry and academia, 5G seems set to be a mixture of technologies addressing the main emerging requirements, such as high data rates, low energy consumption, low latency, and the support and integration of heterogeneous devices and networks. In this regard, this blog sheds light on some of the key technologies along with their potential advantages and challenges.

The potential techniques to meet the aforementioned requirements are ultra-densification, millimetre wave (mmWave) communications, massive Multiple Input Multiple Output (MIMO), full duplex technology, adaptive three dimensional (3D) beamforming, dynamic spectrum access and advanced multiple access schemes. Besides these techniques, several aspects such as software defined radio/networking, Internet of Things (IoT), intelligent caching, cloud computing and big data are being considered as important enablers for 5G wireless. In addition, advanced Wireless Fidelity (WiFi) networks, infrastructure sharing, integration of heterogeneous networks such as cellular networks, public switched telephone network, power line communication, electricity distribution network and satellite networks in a single platform, machine type communication, body area networks, and vehicle to vehicle communications are also emerging in the wireless community.

One potential way of meeting the complicated requirements of 5G communications is to maximize network densification via massive deployment of small cells of different types, including licensed small cells and unlicensed WiFi access points. This densification approach has already been adopted in existing wireless cellular networks, and essentially results in a multi-tier Heterogeneous Network (HetNet). One of the most critical issues is the investigation of suitable resource allocation algorithms that efficiently utilize radio resources such as bandwidth, transmission power and antennas while mitigating inter-cell and inter-user interference and guaranteeing acceptable Quality of Service (QoS) for active users. In addition, the design and deployment of reliable backhaul networks that enable efficient resource management and coordination under practical energy efficiency constraints are important aspects to be studied.

Due to the huge amount of network data traffic caused by the popularity of video, internet gaming and social media across a range of new devices such as tablets and smartphones, it is almost certain that this explosive traffic growth cannot be addressed just by upgrading existing networks. Besides, several studies have shown that more than 70% of current traffic originates indoors. In most metropolitan indoor environments, where traffic congestion is most critical, WiFi Access Points (APs) are already available. It has also been reported that WiFi systems consume significantly less energy than existing 2G and 3G systems, and that deploying more WiFi hotspots is significantly cheaper than upgrading 3G or Long Term Evolution (LTE) networks. In this regard, advanced WiFi networks are promising candidates for meeting the data rate requirements of next-generation 5G wireless systems. However, existing WiFi APs are mostly equipped with a single antenna whose radiation pattern is omnidirectional. Recently, the deployment of multiple antennas on WiFi APs has received considerable attention. This will enable APs to adaptively control the radiation pattern of transmitted and received radio signals, which will consequently help improve users’ QoS experience and meet the capacity requirements of future 5G wireless networks.

Dynamic spectrum access has been considered as one of the enablers to address the spectrum scarcity problem in future wireless networks. In this context, investigation of suitable techniques in order to foster the implementation of cognitive radio systems in practical scenarios is crucial. In this direction, future works are needed in order to understand the performance of cognitive radio systems in the presence of imperfect channel knowledge, asynchronous primary user traffic, and various practical inevitable imperfections such as noise uncertainty, channel uncertainty, noise/channel correlation, hardware impairments such as phase noise, frequency offset, amplifier nonlinearity, analog to digital converter inaccuracies, calibration issues, etc. Another important issue is how to tackle their harmful effects such as interference to the licensed (primary) system and the performance degradation of the unlicensed (secondary) system.

Another way of enhancing the utilization of available spectrum resources is to enable full duplex operation on a radio node so that it can transmit and receive on the same radio channel. In a wireless system, full duplex operation can provide several benefits, such as increased link capacity, wireless virtualization, improved physical layer security, reduced end-to-end and feedback delays, and improved spectrum utilization efficiency by allowing simultaneous sensing and transmission as well as simultaneous transmission and reception. However, several research problems remain in realizing full duplex operation in heterogeneous wireless networks, such as strong Self-Interference (SI), imperfect cancellation of SI due to residual hardware impairments, increased aggregate interference, and high power consumption. In this regard, it is crucial to investigate advanced multi-antenna signal processing techniques such as adaptive beamforming and antenna selection/switching, self-interference estimation/detection techniques, and innovative power control strategies in order to handle residual SI.
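The SI estimation mentioned above can be illustrated with a small digital-cancellation sketch: a minimal LMS adaptive filter identifying an invented 3-tap SI channel (not any specific published canceller; all parameters here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Known transmitted signal (the full-duplex node hears its own transmission).
n = 5000
tx = rng.standard_normal(n)

# Hypothetical self-interference channel: a short FIR response, plus a weak
# signal of interest (SOI) at the receiver.
si_channel = np.array([0.9, 0.3, -0.1])
soi = 0.01 * rng.standard_normal(n)
rx = np.convolve(tx, si_channel)[:n] + soi

# LMS adaptive filter: estimate the SI channel from the known tx samples
# and subtract the reconstructed self-interference.
taps = 3
w = np.zeros(taps)
mu = 0.01
residual = np.zeros(n)
for i in range(taps, n):
    x = tx[i - taps + 1:i + 1][::-1]  # most recent `taps` samples
    e = rx[i] - w @ x                 # residual after cancellation
    w += mu * e * x                   # LMS weight update
    residual[i] = e

# After convergence the residual power approaches the SOI floor, far below
# the raw received power, which is dominated by self-interference.
raw_power = np.mean(rx[-1000:] ** 2)
res_power = np.mean(residual[-1000:] ** 2)
```

In practice the SI channel is frequency-selective and time-varying, and analog-domain cancellation is needed first so that the strong SI does not saturate the receiver front end; digital cancellation like the above only mops up the residue.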

Furthermore, another key enabling technique is three-dimensional (3D) beamforming, which has recently received considerable attention as a means of enhancing the capacity of future wireless networks. In contrast to conventional 2D beamforming, 3D beamforming controls the radiation pattern in both the elevation and azimuth planes, thus providing additional degrees of freedom when planning a cellular network. The main research challenge here is the investigation of low-complexity hybrid beamforming solutions that can control the radiation pattern in both planes.
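The extra degree of freedom can be seen in a toy conjugate-beamforming sketch for a uniform planar array, using a common textbook steering-vector convention; the array size and angles are illustrative, not drawn from any particular system.

```python
import numpy as np

def planar_steering_vector(m, n, az, el, d=0.5):
    """Steering vector of an m x n uniform planar array with element
    spacing d (in wavelengths), for azimuth az and elevation el in
    radians (one common textbook convention)."""
    u = np.sin(el) * np.cos(az)   # directional cosine along array x-axis
    v = np.sin(el) * np.sin(az)   # directional cosine along array y-axis
    ax = np.exp(2j * np.pi * d * np.arange(m) * u)
    ay = np.exp(2j * np.pi * d * np.arange(n) * v)
    return np.kron(ax, ay)        # length m * n

# Steer an 8x8 array towards (azimuth 30 deg, elevation 45 deg): conjugate
# beamforming applies the conjugate of the steering vector as weights,
# shaping the pattern in both planes at once.
m = n = 8
az0, el0 = np.deg2rad(30), np.deg2rad(45)
w = planar_steering_vector(m, n, az0, el0).conj() / (m * n)

gain_target = abs(w @ planar_steering_vector(m, n, az0, el0))
gain_off = abs(w @ planar_steering_vector(m, n, np.deg2rad(-60), np.deg2rad(10)))
```

The normalized gain is 1 in the steered direction and drops sharply elsewhere; a 2D (azimuth-only) array would have no way to separate users stacked at different elevations, e.g. on different floors of a high-rise.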

Massive MIMO and mmWave technologies provide vital means to resolve many technical challenges of future 5G wireless networks, and they can be seamlessly integrated with current networks and access technologies. In a rich scattering environment, the massive Multiple Input Multiple Output (MIMO) technique can deliver significant performance gains with simple beamforming strategies such as maximum ratio transmission or zero forcing. This technology uses a very large number of service antennas at the base station, whose very sharp beams help to eliminate multiuser interference. Despite its other benefits, such as improved system throughput, higher energy efficiency, reduced latency, and a simplified medium access layer, several challenges such as pilot contamination, hardware impairments, correlation, and synchronization issues need to be addressed in future research.
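The interference-elimination property of zero forcing is easy to verify numerically. The sketch below assumes an idealized i.i.d. Rayleigh channel with perfect channel knowledge; dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical downlink: a base station with M antennas serves K
# single-antenna users over an i.i.d. Rayleigh channel H (K x M).
M, K = 64, 4
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: the right pseudo-inverse of H, so each user's
# beam lies in the null space of every other user's channel.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# Effective channel after precoding: ideally the identity, i.e. no
# multiuser interference -- the "very sharp beams" described above.
eff = H @ W
interference = np.max(np.abs(eff - np.eye(K)))
```

With M much larger than K the channel matrix is well conditioned with high probability, which is why such simple linear precoding suffices in the massive MIMO regime; pilot contamination corrupts the estimate of H and is what breaks this picture in practice.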

Besides, another promising way of solving the spectrum scarcity problem and meeting the capacity demand of future wireless systems is to enable mobile communications at millimetre wave (mmWave) frequencies. The capacity requirements of next-generation wireless networks will inevitably demand exploitation of the mmWave frequencies ranging from 30 GHz to 300 GHz, a band that is still under-utilized but offers huge amounts of spectrum. Most importantly, since mmWaves have extremely short wavelengths, it becomes possible to pack a large number of antenna elements into a small form factor, which helps to realize massive MIMO at base stations and user terminals. Furthermore, mmWave frequencies can be used for outdoor point-to-point backhaul links or for supporting indoor high-speed wireless applications (e.g., high-resolution multimedia streaming). However, several challenges remain to be solved, including propagation issues, mobility aspects, and hardware imperfections such as power amplifier non-linearity and the low efficiency of radio frequency components at these frequencies.
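The form-factor argument can be made concrete with a quick back-of-the-envelope calculation; the 10 cm panel size and the two carrier frequencies are illustrative choices, not values from any standard.

```python
# How many half-wavelength-spaced antenna elements fit on a square
# 10 cm x 10 cm panel at different carrier frequencies?
c = 3e8  # speed of light, m/s

def elements_per_panel(freq_hz, side_m=0.10):
    wavelength = c / freq_hz
    spacing = wavelength / 2                   # classical half-wavelength spacing
    per_side = int(side_m / spacing + 1e-9) + 1  # epsilon guards float round-off
    return per_side ** 2

n_2ghz = elements_per_panel(2e9)    # sub-6 GHz cellular band
n_60ghz = elements_per_panel(60e9)  # mmWave band
```

At 2 GHz the wavelength is 15 cm, so only a 2 x 2 grid fits; at 60 GHz the wavelength shrinks to 5 mm and the same panel holds a 41 x 41 grid, which is exactly why mmWave and massive MIMO are natural companions.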

In addition to existing multiple access schemes such as TDMA, FDMA, CDMA, OFDMA, and Space Division Multiple Access (SDMA), several newer schemes, including Polarization Division Multiple Access (PDMA), Interleave Division Multiple Access (IDMA), Universal Filtered Multi-Carrier (UFMC), Sparse Code Multiple Access (SCMA), Generalized Frequency Division Multiple Access (GFDMA), and Non-Orthogonal Multiple Access (NOMA), are being investigated as promising multiple access techniques for 5G wireless networks.
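The last of these can be illustrated with a minimal power-domain NOMA sketch: two users share the same time-frequency resource, separated by power level, and the stronger-channel user applies successive interference cancellation (SIC). All powers, the BPSK modulation, and the noise level are illustrative assumptions, not any standardized configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

bits_near = rng.integers(0, 2, 1000)  # user with the strong channel
bits_far = rng.integers(0, 2, 1000)   # user with the weak channel

# BPSK symbols superposed on one resource, with more power for the far user.
p_far, p_near = 0.8, 0.2
x = (np.sqrt(p_far) * (2 * bits_far - 1)
     + np.sqrt(p_near) * (2 * bits_near - 1))

y = x + 0.05 * rng.standard_normal(1000)  # near user's received signal

# Successive interference cancellation at the near user:
# 1) decode the far user's (stronger) signal,
# 2) subtract its reconstruction,
# 3) decode the near user's own signal from the residual.
far_hat = (y > 0).astype(int)
residual = y - np.sqrt(p_far) * (2 * far_hat - 1)
near_hat = (residual > 0).astype(int)
```

At this noise level both bit streams are recovered exactly, showing how two users can occupy one orthogonal resource; the price is SIC complexity and sensitivity to error propagation when the power levels are poorly separated.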

Although the aforementioned techniques are considered promising for 5G, it is not yet clear which combination of them will define the so-called fifth generation (5G) of cellular communications, since 5G standardization is still in its infancy. It is clear, however, that not all of the techniques and network architectures being investigated in the community will be mature by the time 5G is deployed, and many of them will eventually carry over to the generations beyond 5G.

Computational Attention Insights

Applications of Saliency Models – Part Two

Missed Part One? We've got you covered.

Applications based on normality detection

In this section, we focus on a second category of applications, based on the locations having the lowest saliency scores. Those areas correspond to repetitive, less informative regions, which can easily be compressed.

Compression is the process of converting a signal into a format that takes up less storage space or transmission bandwidth. Classical compression methods tend to distribute the coding resources evenly across an image. In contrast, attention-based methods encode visually salient regions with high priority while treating less interesting regions with low priority. The aim of these methods is to achieve compression without significant degradation of perceived quality.

In [1], a saliency map is computed for each frame of a video sequence and a smoothing filter is applied to all non-salient regions. Smoothing leads to higher spatial correlation, better prediction efficiency in the encoder, and therefore a reduced bitrate for the encoded video. An extension of [1] uses a similar neurobiological model of visual attention to generate a saliency map [2]. The most salient locations are used to generate a so-called guidance map, which guides the bit allocation. Using the bit allocation model of [2], a scheme for attention-based video compression has been suggested in [3]. This method is based on visual saliency propagation (using motion vectors) to save computational time. More recently, attention-based image compression patents such as [4] have been granted, which shows that compression algorithms are becoming efficient enough for real-life applications and are close to reaching the market.
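The pre-filtering idea behind [1] can be sketched in a few lines; this is a toy box-blur version with a hand-made saliency mask, not the authors' actual filter or attention model.

```python
import numpy as np

def attention_smooth(image, saliency, threshold=0.5, k=5):
    """Toy saliency-guided pre-filtering: box-blur pixels whose saliency
    falls below `threshold`, leave salient pixels untouched. Smoother
    regions correlate better spatially and compress better downstream."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    return np.where(saliency >= threshold, image.astype(float), blurred)

# Tiny demo: a flat background with one "salient" bright patch.
img = np.zeros((16, 16))
img[6:10, 6:10] = 255.0
sal = np.zeros((16, 16))
sal[6:10, 6:10] = 1.0
out = attention_smooth(img, sal)
```

The salient patch survives untouched while the background is low-pass filtered; feeding `out` to any standard encoder would then spend fewer bits on the background without the viewer noticing.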

Compression aims at reducing the amount of data in a signal. A usual approach consists in modifying the coding rate, but other approaches can reduce the amount of data by cropping or resizing the signal. An obvious way to drastically compress an image is, of course, to decrease its size. This size reduction can be brutal (zooming in on a region and discarding the rest of the image) or softer (the resolution of the context around the region of interest is decreased but not fully discarded).

The authors in [5] use Itti's algorithm [6] to compute the saliency map, which serves as a basis for automatically delineating a rectangular cropping window. The self-adaptive image cropping for small displays of [7] is based on the Itti and Koch bottom-up attention algorithm, but also on top-down cues such as face detection and skin color. According to a given threshold, each region is either kept or eliminated. A completely automatic solution for creating thumbnails according to the saliency distribution or the cover rate is presented in [8]. The algorithm proposed in [9] starts by adaptively partitioning the input image into a number of strips according to a combined map containing both gradient information and visual saliency. Methods of intelligent perceptual zooming based on saliency algorithms become more and more attractive with the advances in saliency map computation, in terms of both real-time performance and spatio-temporal cue integration. Even big companies such as Google [10] are becoming increasingly involved in developing applications based on perceptual zooms. The idea is to generalize the perceptual zoom to images and videos and to keep the temporal coherence of the zoomed image, even when objects of interest abruptly appear far from the previous zoom area.
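A minimal saliency-driven cropping rule in the spirit of [5] and the cover-rate criterion of [8] can be sketched as follows; the thresholding scheme is a simplified stand-in for the methods above.

```python
import numpy as np

def saliency_crop(saliency, cover=0.9):
    """Illustrative auto-cropping: return the axis-aligned box
    (top, bottom, left, right) around the thresholded saliency mask
    that keeps at least `cover` of the total saliency mass."""
    total = saliency.sum()
    # Sort saliency values in decreasing order and find the value at
    # which the accumulated mass first reaches the target cover rate.
    flat = np.sort(saliency.ravel())[::-1]
    cum = np.cumsum(flat)
    thr = flat[np.searchsorted(cum, cover * total)]
    ys, xs = np.where(saliency >= thr)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

# Demo: a single salient blob yields a tight crop window around it.
sal = np.zeros((20, 20))
sal[5:9, 12:16] = 1.0
box = saliency_crop(sal)
```

Real thumbnailing pipelines add constraints (aspect ratio, minimum size, face boxes), but the core step, picking the smallest window that retains a given fraction of the saliency mass, is exactly this.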

Perceptual zoom does not always preserve the image structure. To keep the image structure intact, several methods exist: warping and seam carving. These methods are also used to provide data "summarization".

Warping is an operation that maps a position in a source image to a position in a target image by a spatial transformation. This transformation can be a simple scaling transformation [11]. A retargeting method based on global energy optimization is detailed in [12] and extended to combine uniform sampling with a structure-aware image representation [13]. A warping method that uses a grid mesh of quads to retarget images is defined in [14]. The method determines an optimal scaling factor for regions with high content importance as well as for regions with homogeneous content, which will be distorted. A significance map is computed as the product of the gradient and the saliency map. [15] proposes an extended significance measure to preserve the shapes of both visually salient objects and structure lines while minimizing visual distortions.
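The significance map described for [14] is straightforward to sketch; the normalization below is an illustrative choice, not necessarily the authors' exact formulation.

```python
import numpy as np

def significance_map(image, saliency):
    """Illustrative significance map in the spirit of [14]: the product
    of a normalized gradient-magnitude map and a saliency map, so that
    regions which are both salient and structured resist distortion
    during warping."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    if grad.max() > 0:
        grad = grad / grad.max()  # normalize gradient magnitude to [0, 1]
    return grad * saliency

# Toy example: a vertical edge under uniform saliency -- only the edge
# itself receives a high significance score.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
sal = np.ones((10, 10))
sig = significance_map(img, sal)
```

During mesh optimization, quads with high significance are forced towards pure (shape-preserving) scaling, while low-significance quads absorb the distortion.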

The other main method for image retargeting is seam carving. Seam carving [16] retargets the image using an energy function that defines the importance of each pixel. The most classical energy function is the gradient map, but other functions can be used, such as entropy, histograms of oriented gradients, or saliency maps [17]. For spatio-temporal images, [18] proposes to remove 2D seam manifolds from 3D space-time volumes by replacing the dynamic programming method with graph cuts optimization to find the optimal seams. A saliency-based spatio-temporal seam-carving approach with much better spatio-temporal continuity than [18] is proposed in [19]. In [20], the authors describe a saliency map that takes more account of context and propose to apply it to seam carving. Interestingly, recent papers such as [21] propose to mix seam carving and warping techniques.
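The dynamic-programming core of seam carving [16] fits in a few lines. This sketch accepts any energy map, gradient-based or saliency-based, and finds one minimum-energy vertical seam.

```python
import numpy as np

def min_vertical_seam(energy):
    """Classic seam-carving step: dynamic programming over an energy
    map, returning the column index of the minimum-energy vertical
    seam in each row."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    # Forward pass: accumulate the cheapest path cost into each cell
    # from its three upper neighbours.
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

# The seam avoids the expensive middle column of this toy energy map.
energy = np.ones((4, 5))
energy[:, 2] = 10.0
seam = min_vertical_seam(energy)
```

Removing one such seam per iteration shrinks the image width by one pixel while preserving high-energy (important) content; using a saliency map as `energy` is precisely the attention-based variant discussed in [17].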

Summarization of images or videos is a term similar to retargeting. It may be based on cropping [22], or on carving as in [23]. The main purpose is to provide a relevant summary of a video or an image. In [24], the authors use video summarization to provide a mashup of several videos into a single pleasant video containing the important sequences of all the concatenated videos.


1. Itti, L. (2004) Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Transactions on Image Processing, 13 (10), 1304–1318.

2. Li, Z., Qin, S., and Itti, L. (2011) Visual attention guided bit allocation in video compression. Image and Vision Computing, 29 (1), 1–14, doi:10.1016/j.imavis.2010.07.001.

3. Gupta, R. and Chaudhury, S. (2011) A scheme for attentional video compression. Pattern Recognition and Machine Intelligence, 6744, 458–465.

4. Zund, F., Pritch, Y., Hornung, A.S., and Gross, T. (2013), Content-aware image compression method. US Patent App. 13/802,165.

5. Suh, B., Ling, H., Bederson, B.B., and Jacobs, D.W. (2003) Automatic thumbnail cropping and its effectiveness, in Proceedings of the 16th annual ACM symposium on User interface software and technology (UIST), pp. 95–104.

6. Itti, L. and Koch, C. (2001) Computational modelling of visual attention. Nature Reviews Neuroscience, 2 (3), 194–203.

7. Ciocca, G., Cusano, C., Gasparini, F., and Schettini, R. (2007) Self-adaptive image cropping for small displays. IEEE Transactions on Consumer Electronics, 53 (4), 1622–1627.

8. Le Meur, O., Le Callet, P., and Barba, D. (2007) Construction d’images miniatures avec recadrage automatique basé sur un modèle perceptuel bio-inspiré, in Traitement du signal, vol. 24(5), pp. 323–335.

9. Zhu, T., Wang, W., Liu, P., and Xie, Y. (2011) Saliency-based adaptive scaling for image retargeting, in Computational Intelligence and Security (CIS), 2011 Seventh International Conference on, pp. 1201–1205, doi:10.1109/CIS.2011.266.

10. Grundmann, M. and Kwatra, V. (2014) Methods and systems for video retargeting using motion saliency. US Patent App. 14/058,411.

11. Liu, F. and Gleicher, M. (2005) Automatic image retargeting with fisheye-view warping, in Proceedings of User Interface Software Technologies (UIST).

12. Ren, T., Liu, Y., and Wu, G. (2009) Image retargeting using multi-map constrained region warping, in ACM Multimedia, pp. 853–856.

13. Ren, T., Liu, Y., and Wu, G. (2010) Rapid image retargeting based on curve-edge grid representation, in ICIP, pp. 869–872.

14. Wang, Y.S., Tai, C.L., Sorkine, O., and Lee, T.Y. (2008) Optimized scale-and-stretch for image resizing. ACM Trans. Graph. (Proceedings of ACM SIGGRAPH ASIA), 27 (5).

15. Lin, S.S., Yeh, I.C., Lin, C.H., and Lee, T.Y. (2013) Patch-based image warping for content-aware retargeting. Multimedia, IEEE Transactions on, 15 (2), 359–368, doi:10.1109/TMM.2012.2228475.

16. Avidan, S. and Shamir, A. (2007) Seam carving for content-aware image resizing. ACM Trans. Graph., 26 (3), 10.

17. Vaquero, D., Turk, M., Pulli, K., Tico, M., and Gelf, N. (2010) A survey of image retargeting techniques, in SPIE Applications of Digital Image Processing.

18. Rubinstein, M., Shamir, A., and Avidan, S. (2008) Improved seam carving for video retargeting. ACM Transactions on Graphics (SIGGRAPH), 27 (3), 1–9.

19. Grundmann, M., Kwatra, V., Han, M., and Essa, I. (2010) Discontinuous seam-carving for video retargeting, in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 569–576, doi:10.1109/CVPR.2010.5540165.

20. Goferman, S., Zelnik-Manor, L., and Tal, A. (2012) Context-aware saliency detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 34 (10), 1915–1926.

21. Wu, L., Cao, L., Xu, M., and Wang, J. (2014) A hybrid image retargeting approach via combining seam carving and grid warping. Journal of Multimedia, 9 (4).

22. Ejaz, N., Mehmood, I., Sajjad, M., and Baik, S.W. (2014) Video summarization by employing visual saliency in a sufficient content change method. International Journal of Computer Theory and Engineering, 6 (1), 26.

23. Dong, W., Zhou, N., Lee, T.Y., Wu, F., Kong, Y., and Zhang, X. (2014) Summarization-based image resizing by intelligent object carving. Visualization and Computer Graphics, IEEE Transactions on, 20 (1), 1–1.

24. Zhang, L., Xia, Y., Mao, K., Ma, H., and Shan, Z. (2015) An effective video summarization framework toward handheld devices. Industrial Electronics, IEEE Transactions on, 62 (2), 1309–1316.

Insights Serious Games

Serious games in sustainable urban development – Part 1

Concepts of sustainable urban development, the “smart city”, and civic engagement are becoming more and more popular among researchers and people in charge of municipal planning. With their growing popularity come ideas to include gamification mechanisms and serious games in the course of their implementation.

A key reason for this growing attention to serious gaming lies in the main barriers preventing wide acceptance of social innovations such as participatory budgeting, civic consultations, or various technologies used in sustainable urban development. These two main barriers are:

  • Lack of civic engagement, due to bad experiences from the past or general low level of civic activity;
  • Beliefs and attitudes which inhibit the acceptance of social or technological innovations crucial to the development of a sustainable urban community.

Gamification and serious games are perceived as a possible remedy for these two problems because of their potential to increase people’s engagement and activity, and to use that engagement to educate them and change attitudes towards various new behaviors.

Since around 2010, many ideas have emerged on how to gamify civic engagement in modern urban communities. Some representative cases of these games and game-related projects are:

  • Trash Tycoon – a social network game by Guerillapps, running from 2011 to 2012, which focused on issues like recycling and upcycling in modern cities (1);
  • Invisible playground – a series of urban games held initially in Berlin and now all across Europe, aimed as a form of leisure activity, but also as a medium for increasing social engagement across urban areas (2);
  • Community PlanIt – created by Emerson College’s Engagement Lab, a serious game and platform that enables municipal authorities to communicate with citizens. The aim of the game is to gather opinions and feedback from community dwellers and foster their engagement in social consultations (3);
  • Gamefull Urban Mobility – a research project held at the Games & Experimental Entertainment Laboratory of RMIT University, whose aim is to assess the potential of gamification applied to urban mobility (4).

Furthermore, Community PlanIt is an example of a recently emerging approach to gamification that merges the concepts of social engagement and sustainable urban development with the data-oriented focus typical of a “smart city”. In this approach, a game not only educates, engages, and promotes certain attitudes and behaviors, but also serves as a source of data on citizens’ opinions and activity, as well as feedback on various projects planned by municipal authorities.

In the second part of this post, we would like to present the results of the “CONIUNCTA” project held at the Warsaw University of Technology. Our project is based on a similar approach to Community PlanIt, but it also integrates several layers of geospatial data used to gather feedback from city dwellers during the course of the dedicated serious games City Shaper and City Hall 1.0.


Insights Radio Spectrum

How can 5G wireless benefit from Cognitive Radio principles?

Several enabling technologies, such as ultra-densification, millimetre wave communications, massive Multiple Input Multiple Output (MIMO), full duplex technology, and dynamic spectrum access, are being investigated in industrial and academic communities in order to foster the deployment of the fifth generation (5G) of wireless communications. In this regard, the time has come to think about how Cognitive Radio (CR) principles, which have been investigated in the community for almost a decade and a half, can be incorporated into 5G wireless communications.

CR technology, which can address the spectrum scarcity problem by means of dynamic spectrum access and spectrum sharing, is motivated by the fact that a significant amount of the wireless spectrum remains under-utilized over a wide range of radio frequencies in the temporal and spatial domains. In addition, this solution does not require the acquisition of additional, expensive radio frequency resources, hence reducing the overall capital and operational expenditure for a wireless operator.

Although recent technical advances in the areas of Software Defined Radio (SDR) and wideband transceivers have made it possible to utilize the available spectrum in a dynamic manner, several challenges remain from the deployment perspective. On the one hand, there are technical issues in dealing with practical imperfections such as noise uncertainty, channel/interference uncertainty, signal uncertainty, transceiver hardware imperfections, and synchronization issues. On the other hand, there are several regulatory and business challenges to realizing dynamic spectrum access in future wireless networks. In this context, this blog provides a framework for how CR principles can be incorporated into 5G wireless networks without significant upgrades to the existing network architecture.

One way of incorporating CR principles into 5G wireless networks is to enable the spectral coexistence of two or more heterogeneous wireless networks in different dimensions, such as time, frequency, space, polarization, and geographical location, by utilizing advanced interference mitigation and dynamic resource allocation techniques such as cognitive beamforming, cognitive interference alignment, adaptive power control, carrier aggregation, and dynamic carrier/bandwidth allocation.

Various practical coexistence scenarios can be considered under this application category:

  • Coexistence of small cells and macrocells;
  • Coexistence of unlicensed WiFi and small cells;
  • Coexistence of a C-band satellite system with LTE/WiMax networks;
  • Coexistence of future cellular networks with the Ka-band Fixed Satellite Service (FSS) system;
  • Coexistence of terrestrial microwave backhaul links with FSS satellites;
  • Coexistence of satellite backhaul links with terrestrial backhauls;
  • Coexistence of COMPASS (radio determination satellite service + radio navigation satellite service) and TD-LTE;
  • Coexistence of TVWS (Digital Terrestrial Television (DTT) + Program Making and Special Events (PMSE) services) with different terrestrial services;
  • Coexistence of radar and communication systems;
  • Coexistence of geostationary and non-geostationary satellite systems.

Another promising way to benefit from CR principles is to incorporate intelligence into different segments of future wireless networks, such as relay nodes and base stations. Future small/micro/pico/femto-cell base stations can be made intelligent by introducing spectrum awareness capability, which will enhance overall system capacity by reducing the effect of interference and noise. Furthermore, smart antenna capabilities such as source localization and adaptive three-dimensional beamforming will not only boost system capacity, but also help to enhance the energy efficiency of future wireless networks.

The widely discussed Licensed Shared Access (LSA) can be implemented in a dynamic manner by leveraging recent advances in CR techniques, subsequently allowing spectrum sharing on a frequency, location, and time basis. CR principles can also be utilized in incorporating full-duplex capability into a wireless node, for example for self-backhauling in cellular networks. Moreover, self-organizing small cells (wireless nodes), capable of carrying out self-configuration, self-optimization, and self-resilience, can be considered important enablers for future intelligent wireless systems.

Insights Radio Spectrum

Listen-and-Talk: Enhancing Spectrum Usage by Full-Duplex Cognitive Radio

Existing and new wireless technologies, such as smartphones, tablets, and IoT applications, are rapidly consuming radio spectrum. The traditional regulation of spectrum requires fundamental reform in order to allow more efficient and creative use of these precious airwave resources. Cognitive radio (CR) has been widely recognized as a promising technique for increasing the efficiency of spectrum utilization. It allows unlicensed secondary users (SUs) to coexist with primary users (PUs) in licensed bands. The SUs are allowed to utilize only the unoccupied spectrum resources and must vacate them whenever the incumbent PUs are ready to transmit. Thus, reliable identification of the spectral holes in particular licensed frequency bands is required.

Current cognitive communication systems deploy half-duplex (HD) radios that transmit and receive signals on orthogonal resources. SU communication is usually realized through the popular “Listen-before-Talk” (LBT) protocol, in which the SUs sense the target channel before transmission. Although the LBT protocol has proven effective, it dissipates precious resources by employing time-division duplexing, and thus unavoidably suffers from two major problems: 1) reduced transmit time due to sensing, and 2) impaired sensing accuracy due to data transmission.
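The sensing step of LBT is often realized as an energy detector. A toy sketch, assuming a known noise floor and a conservative hand-picked threshold (real detectors derive the threshold from a target false-alarm probability and must cope with noise uncertainty):

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(3)

def energy_detect(samples, noise_power, n_sigma=5.0):
    """Toy energy detector: declare the channel busy when the average
    sample energy exceeds the noise floor by n_sigma standard deviations
    of the noise-only test statistic (Gaussian approximation)."""
    n = len(samples)
    stat = np.mean(np.abs(samples) ** 2)
    thr = noise_power * (1 + n_sigma * sqrt(2.0 / n))
    return stat > thr

n = 2000
noise = rng.standard_normal(n)              # channel idle: noise only
pu_signal = noise + rng.standard_normal(n)  # PU present: extra received power

busy_idle = energy_detect(noise, 1.0)       # channel declared free
busy_occ = energy_detect(pu_signal, 1.0)    # channel declared busy
```

The trade-off described above is visible here: the longer the SU spends collecting `samples`, the sharper the statistic (and the more reliable the decision), but the less time remains in the slot for its own transmission.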

It would be desirable if the SUs could continuously sense the spectrum and, at the same time, transmit when a spectrum hole is detected. This, however, seems impossible with conventional half-duplex systems. A full-duplex (FD) system, in which a node can send and receive signals using the same time and frequency resources, offers the potential to achieve simultaneous sensing and transmission in CR systems. Specifically, an SU can sense the target spectrum band in each time slot, judge whether the band is occupied, and decide whether to transmit data in the adjacent slot on the basis of the sensing result and the access mechanism. As FD technology opens another dimension of network resources in CR systems, it requires new designs for network protocols, signal processing, and resource allocation algorithms.

For example, one of the major challenges faced by FD-CR is how to deal with residual self-interference in the sensing process, beneath which lies a secondary transmit power optimization problem for maximizing system throughput. Another challenge is how to manage resources across the space, frequency, and device dimensions to improve the spectrum efficiency of the secondary network.

Further applications of FD-CR comprise many important scenarios, such as FD cognitive MIMO, FD cognitive relaying, and FD cognitive access points. All of these present a new design paradigm for enhancing spectrum usage in future wireless communications and networks.

Computational Attention Insights

Applications of Saliency Models – Part One

Attention modeling: a huge range of applications.

The applications of saliency maps are numerous and occur in many domains. For some applications, the saliency maps and their analysis are the final goal, while for others, saliency maps are only an intermediate step. We propose a classification into three categories of applications.

The first category of applications directly takes advantage of the detection of surprising, and thus abnormal, areas in the signal. We can call this class of applications “abnormality detection”. Surveillance and event/defect detection are examples of application domains in this category.

The second category focuses on the opposite of the first one: as attention maps provide an idea of the surprising parts of the signal, one can deduce where the normal (homogeneous, repetitive, usual) signal is. We call this category “normality modeling”. Its main application domains are signal compression and retargeting.

Finally, the third application category is related to the surprising parts of the signal, but goes further than simple detection. This application family, called “abnormality processing”, requires comparing and further processing the most salient regions. Domains such as robotics, object retrieval, and interface optimization fall into this category.

Applications based on abnormality detection.

In this section, applications are related to surveillance or defect detection. Some authors have taken into account the concept of “usual motion”, either by accumulating motion features from videos in given regions, which provides a “normality” model of the motion in those regions [3], or by using more complex systems such as Hidden Markov Models (HMMs) to predict future normal motion [4].
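The motion-accumulation idea of [3] can be sketched as follows, with invented binary motion masks standing in for the authors' actual motion features.

```python
import numpy as np

rng = np.random.default_rng(4)

# Accumulate per-pixel motion activity over many frames to build a
# "normality" map; motion in a historically quiet region is flagged.
h, w, frames = 8, 8, 200
history = np.zeros((h, w))

# Simulated history: motion almost always occurs in the left half
# (think of a busy corridor next to a restricted area).
for _ in range(frames):
    motion = np.zeros((h, w))
    motion[:, :4] = rng.random((h, 4)) > 0.3
    history += motion

normality = history / frames  # per-pixel frequency of observed motion

# New frame with motion in a normally quiet area (right half).
new_motion = np.zeros((h, w), dtype=bool)
new_motion[2, 6] = True
surprise = new_motion & (normality < 0.05)  # abnormal-motion mask
n_alerts = int(surprise.sum())
```

The same motion that would be ignored in the busy corridor raises an alert in the quiet zone; this is the essence of saliency as "what deviates from the learned normal", without any object-level understanding.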

While abnormal motion detection has mostly been used for crowd scenes, some authors, as in [9], provide models that work on any general scene containing motion. Some saliency models have been used with audio data [10][11] to spot unusual sounds within ordinary contextual sounds, such as a gunshot in the middle of a metro station’s audio ambiance.

In [13], saliency models are used for defect detection, applied first to automatic fruit grading. In [9], in addition to video surveillance, the model can also be applied to static images to find generic defects in them. Saliency models have been applied to defect detection in a wide variety of domains, such as semiconductor manufacturing and electronics production [14], metallic surfaces [15], and wafer defects [16].

To this category, we could add the use of saliency in computer graphics [37] or quality metrics [49], where the abnormal regions of the image are used to optimize the graphical representation or to weight the quality metric differently depending on the pixels. In the next chapter, we will see the two other categories of applications of saliency models in engineering: normality detection and abnormality processing.

3. Mancas, M. and Gosselin, B. (2010) Dense crowd analysis through bottom-up and top-down attention. Proc. of the Brain Inspired Cognitive Systems (BICS).
4. Jouneau, E. and Carincotte, C. (2011) Particle-based tracking model for automatic anomaly detection, in Image Processing (ICIP), 2011 18th IEEE International Conference on, IEEE, pp. 513–516.
9. Boiman, O. and Irani, M. (2007) Detecting irregularities in images and in video. International Journal of Computer Vision, 74 (1), 17–31.
10. Couvreur, L., Bettens, F., Hancq, J., and Mancas, M. (2007) Normalized auditory attention levels for automatic audio surveillance, in Int. Conf. on Safety and Security Engineering.
11. Mancas, M., Couvreur, L., Gosselin, B., Macq, B. et al. (2007) Computational attention for event detection, in Proc. Fifth International Conf. Computer Vision Systems.
13. Mancas, M., Unay, B., Gosselin, B., and Macq, D. (2007) Computational attention for defect localisation, in Proceedings of ICVS Workshop on Computational Attention & Applications.
14. Bai, X., Fang, Y., Lin, W., Wang, L., and Ju, B.F. (2014) Saliency-based defect detection in industrial images by using phase spectrum. Industrial Informatics, IEEE Transactions on, 10 (4), 2135–2145.
15. Bonnin-Pascual, F. and Ortiz, A. (2014) A probabilistic approach for defect detection based on saliency mechanisms, in Emerging Technology and Factory Automation (ETFA), 2014 IEEE, IEEE, pp. 1–4.
16. Mishne, G. and Cohen, I. (2014) Multi-channel wafer defect detection using diffusion maps, in Electrical & Electronics Engineers in Israel (IEEEI), 2014 IEEE 28th Convention of, IEEE, pp. 1–5.
37. Longhurst, P., Debattista, K., and Chalmers, A. (2006) A gpu based saliency map for high-fidelity selective rendering, in Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa, ACM, pp. 21–29.
49. Ninassi, A., Le Meur, O., Le Callet, P., and Barba, D. (2007) Does where you gaze on an image affect your perception of quality? Applying visual attention to image quality metric, in Image Processing, 2007. ICIP 2007. IEEE International Conference on, vol. 2, pp. II-169–II-172, doi:10.1109/ICIP.2007.4379119.