Categories
Interviews

Machine learning is making strides towards autonomous mobile communications

We gladly took the opportunity to talk with Gongliang Liu (Harbin Institute of Technology, Weihai, China), one of the general chairs of MLICOM 2017, the 2nd EAI International Conference on Machine Learning and Intelligent Communications. MLICOM 2017 will take place in Weihai, China on August 5-6, 2017. We talked about machine learning, the future of mobile communications, and how the two can work together for better QoS.
What is the central topic of MLICOM 2017 and why is it important? What is this event’s vision?
The central topic of MLICOM 2017 is up-to-date research in machine learning and its applications in communication systems. As mobile communications technologies develop rapidly, the demand for high-quality wireless services is increasing exponentially. According to the Cisco VNI Mobile Forecast 2016, global mobile data traffic will increase nearly eightfold between 2015 and 2020, and mobile network connection speeds will more than triple by 2020. Hence, there are still big gaps between future requirements and current communications technologies, even with 4G/5G. How to manage limited wireless resources with intelligent algorithms and schemes, and how to unlock their potential benefits, are the central interests of the conference.
As an emerging discipline, machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence; it explores the study and construction of algorithms that can learn from and make predictions about complicated scenarios. In communication systems, previous and current radio conditions and communication paradigms, such as available spectrum, limited energy, antenna configurations, and heterogeneous properties, must be well considered to obtain a high quality of service (QoS). Machine learning algorithms facilitate the analysis and prediction of complicated scenarios, and thus enable optimal actions across all seven OSI layers.
What have been the recent developments in machine learning and intelligent communications?
Recently, research hot spots in the communications area have extended from traditional human-to-human communication systems to human-to-machine and machine-to-machine systems. With the advent of the Internet, people have become interconnected at an unprecedented scale. Moreover, with the proliferation of short-range networks and the prevalence of devices connected to these networks, a seamless interconnection between devices is gradually being created. Next-generation communication systems are expected to learn the diverse characteristics of both the users' surroundings and human behavior in order to autonomously determine optimal system configurations. Such smart mobile terminals have to rely on sophisticated learning and decision-making. Machine learning, as one of the most powerful artificial intelligence tools, constitutes a promising solution.
What does the future of this research domain look like? What topics can we expect to see covered at this year’s MLICOM?
In our view, the combination of machine learning and intelligent communications is a well-reasoned and promising area. Integrating machine learning algorithms into communication systems will improve QoS and make the systems smart, intelligent, and efficient. The topics of interest for the conference include, but are not limited to:

  • Intelligent cloud-support communications
  • Intelligent spectrum (or resource block) allocation schemes
  • Intelligent energy-aware/green communications
  • Intelligent software defined flexible radios
  • Intelligent cooperative networks
  • Intelligent antenna design and dynamic configuration
  • Intelligent Massive MIMO communication systems
  • Intelligent positioning and navigation systems
  • Intelligent cooperative/distributed coding
  • Intelligent wireless communications
  • Intelligent wireless sensor networks
  • Intelligent underwater sensor networks
  • Intelligent satellite communications
  • Machine learning algorithm & cognitive radio networks
  • Machine learning and information processing in wireless sensor networks
  • Data mining in heterogeneous networks
  • Machine learning for multimedia
  • Machine learning for IoT
  • Decentralized learning for wireless communication systems

What are your expectations for MLICOM 2017?
We hope to attract attention to research on applying machine learning algorithms in intelligent communication systems, greatly improving the QoS and adaptability of such systems. Meanwhile, we expect to enrich the theory of machine learning itself.

MLICOM 2017 is still accepting papers, workshops, and special sessions! Learn more here.

Categories
Call for papers Conferences

MLICOM 2017 is calling for papers

MLICOM 2017, the 2nd EAI International Conference on Machine Learning and Intelligent Communications will take place in Weihai, People’s Republic of China on 5-6 August, 2017.

The Conference invites high quality original research papers describing recent and expected challenges or discoveries along with potential intelligent solutions for future mobile communications and networks. We welcome both theoretical and experimental papers. We expect the papers of the conference to serve as valuable references for a large audience from both academia and industry. Both original, unpublished contributions and survey/tutorial types of articles are encouraged.

This conference focuses on applying machine learning algorithms in communication systems, in order to improve the quality of service and make the systems smart, intelligent, and efficient. The topics of interest for the conference include, but are not limited to:

  • Intelligent cloud-support communications
  • Intelligent spectrum (or resource block) allocation schemes
  • Intelligent energy-aware/green communications
  • Intelligent software defined flexible radios
  • Intelligent cooperative networks
  • Intelligent antenna design and dynamic configuration
  • Intelligent Massive MIMO communication systems
  • Intelligent positioning and navigation systems
  • Intelligent cooperative/distributed coding
  • Intelligent wireless communications
  • Intelligent wireless sensor networks
  • Intelligent underwater sensor networks
  • Intelligent satellite communications
  • Machine learning algorithm & cognitive radio networks
  • Machine learning and information processing in wireless sensor networks
  • Data mining in heterogeneous networks
  • Machine learning for multimedia
  • Machine learning for IoT
  • Decentralized learning for wireless communication systems

All accepted papers will be published by Springer and made available through SpringerLink Digital Library, one of the world’s largest scientific libraries. Proceedings are submitted for inclusion to the leading indexing services: Elsevier (EI), Thomson Scientific (ISI), Scopus, Crossref, Google Scholar, DBLP.

The authors of the best papers will be invited to submit an extended version of their work through a special issue of Mobile Networks and Applications.

Important dates

Full Paper Submission deadline: 20 March 2017
Notification deadline: 1 June 2017
Camera-ready deadline: 1 July 2017

For further information about MLICOM 2017, visit the official website.

Categories
News

Quantifying street safety opens avenues for AI-assisted urban planning

Urban revitalization is getting an update that combines crowdsourcing and machine learning.

Theories in urban planning that infer a neighborhood’s safety based on certain visual characteristics have just received support from a research team from MIT, University of Trento and the Bruno Kessler Foundation, who have developed a system that assigns safety scores to images of city streets.
The work stems from a database of images of several major cities that the MIT Media Lab had been gathering for years. These images have now been scored based on how safe they look, how affluent, how lively, and so on.
Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, and particularly strong for women and people over 50. For people under 30 the correlation was negative, meaning that men in their 20s were actually more likely to visit neighborhoods generally perceived as unsafe than neighborhoods perceived as safe.
César Hidalgo, one of the senior authors of the paper, has noted that their work is connected to two urban planning theories – the defensible-space theory of Oscar Newman, and the eyes-on-the-street theory of Jane Jacobs.
Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny.
Researchers have identified features that align with these theories, confirming that buildings with street-facing windows appear to increase people’s sense of safety, and that in general, upkeep seems to matter more than distinctive architectural features.
Hidalgo’s group launched its project to quantify the emotional effects of urban images in 2011, with a website that presents volunteers with pairs of images and asks them to select the one that ranks higher according to some criterion, such as safety or liveliness. On the basis of these comparisons, the researchers’ system assigns each image a score on each criterion.
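The comparison-to-score step can be sketched with a simple rating scheme. This is a hedged illustration, not the project's actual model: the Elo-style update below, the `k` and `base` constants, and the image IDs are all illustrative stand-ins for however the researchers aggregate their pairwise votes.

```python
from collections import defaultdict

def elo_scores(comparisons, k=32, base=1500.0):
    """comparisons: iterable of (winner_id, loser_id) votes from "which
    image looks safer?" questions; returns a score per image."""
    scores = defaultdict(lambda: base)
    for winner, loser in comparisons:
        # expected probability that `winner` beats `loser` given current scores
        expected = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400.0))
        delta = k * (1.0 - expected)
        scores[winner] += delta
        scores[loser] -= delta
    return dict(scores)

# toy votes: img_a wins most of its matchups, img_c loses all of them
votes = [("img_a", "img_b"), ("img_a", "img_c"),
         ("img_b", "img_c"), ("img_a", "img_b")]
scores = elo_scores(votes)
```

A rating scheme like this lets every image accumulate a comparable score even though no volunteer ever sees more than two images at a time.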
So far, volunteers have performed more than 1.4 million comparisons, but that’s still not nearly enough to provide scores for all the images in the researchers’ database. For instance, the images in the data sets for Rome and Milan were captured every 100 meters or so. And the database includes images from 53 cities.
So three years ago, the researchers began using the scores generated by human comparisons to train a machine-learning system that would assign scores to the remaining images. “That’s ultimately how you’re able to take this type of research to scale,” Hidalgo says. “You can never scale by crowdsourcing, simply because you’d have to have all of the Internet clicking on images for you.”
To determine which features of visual scenes correlated with perceptions of safety, the researchers designed an algorithm that selectively blocked out apparently continuous sections of images (sections that appear to have clear boundaries), then recorded the changes in the scores the machine-learning system assigned to the images.
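The masking-and-rescoring idea can be sketched as occlusion-based attribution. This is a simplified illustration: the paper masks perceptually continuous regions, whereas the grid of fixed patches and the `score_image` callback below are assumptions for the sake of a runnable example.

```python
import numpy as np

def occlusion_map(image, score_image, patch=8):
    """Impact of each patch on the model's score: mask it and measure the drop."""
    h, w = image.shape[:2]
    baseline = score_image(image)
    impact = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            masked = image.copy()
            masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
            # a large drop means the masked region contributed positively
            impact[i, j] = baseline - score_image(masked)
    return impact
```

Running this over a street image with a trained safety-scoring model would highlight which regions (windows, greenery, upkeep) drive the perceived-safety score up or down.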

Categories
News

Did a self-learning AI just turn the Turing test on its head?

Original news release was issued by The University of Sheffield.

A fair warning is due: the latest development in artificial intelligence research is a tad eerie. Computer AI is now capable of learning by simple observation, with no specification of "what to observe" necessary. We are officially a step closer to a successful Turing test.

The Turing test is a famous experiment proposed by Alan Turing in 1950 to determine whether a computer has achieved intelligence indistinguishable from a human's. In a Turing test, an interrogator converses with two subjects: one of them is a person, the other a computer. If the interrogator consistently fails to determine which of the two is the computer, the computer has passed the test and is considered to have human-level intelligence.
Researchers at the University of Sheffield have turned the Turing test into an inspiration for AI development, making it possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for. This could mean advances in the world of technology with machines able to predict, among other things, human behavior.
Dr Roderich Gross from the Department of Automatic Control and Systems Engineering at the University of Sheffield explained the advantage of the approach, called ‘Turing Learning’, is that humans no longer need to tell machines what to look for:

“Our study uses the Turing test to reveal how a given system – not necessarily a human – works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm – made of learning robots – under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators.”

He added: “Unlike in the original Turing test, however, our interrogators are not human but rather computer programs that learn by themselves. Their task is to distinguish between robots from either swarm. They are rewarded for correctly categorising the motion data from the original swarm as genuine, and those from the other swarm as counterfeit. The learning robots that succeed in fooling an interrogator – making it believe their motion data were genuine – receive a reward.”
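The setup Dr Gross describes can be sketched as a minimal coevolutionary loop: models are rewarded for fooling classifiers, classifiers for separating genuine from counterfeit data. Everything below is an illustrative simplification, assuming a one-dimensional "behaviour" parameter and hill-climbing updates in place of the evolutionary algorithms used in the actual study.

```python
import random

random.seed(1)
TRUE_PARAM = 0.7                      # hidden rule governing the observed swarm

def sample(param):
    """One noisy 'motion data' observation from a system with this parameter."""
    return param + random.gauss(0, 0.05)

def reward(models, classifiers, trials=30):
    """Score both populations, mirroring the rewards described above."""
    m_fit = [0.0] * len(models)
    c_fit = [0.0] * len(classifiers)
    for ci, c in enumerate(classifiers):
        for _ in range(trials):
            # a classifier labels a sample genuine if it falls near its centre c
            c_fit[ci] += abs(sample(TRUE_PARAM) - c) < 0.1   # genuine as genuine
            mi = random.randrange(len(models))
            fooled = abs(sample(models[mi]) - c) < 0.1
            c_fit[ci] += not fooled                          # counterfeit caught
            m_fit[mi] += fooled                              # model fooled it
    return m_fit, c_fit

def evolve(pop, fit, sigma=0.05):
    """Keep the fittest individual and mutate it to refill the population."""
    best = pop[fit.index(max(fit))]
    return [best] + [best + random.gauss(0, sigma) for _ in pop[1:]]

models = [random.uniform(0, 1) for _ in range(10)]
classifiers = [random.uniform(0, 1) for _ in range(10)]
for _ in range(60):
    m_fit, c_fit = reward(models, classifiers)
    models = evolve(models, m_fit)
    classifiers = evolve(classifiers, c_fit)
# After coevolution, the elite model's parameter tends toward the hidden rule.
```

The point of the structure, as in the quote above, is that nobody ever tells the models what "similar" means; the only training signal is whether the classifiers were fooled.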
“Imagine you want a robot to paint like Picasso. Conventional machine learning algorithms would rate the robot’s paintings for how closely they resembled a Picasso. But someone would have to tell the algorithms what is considered similar to a Picasso to begin with. Turing Learning does not require such prior knowledge. It would simply reward the robot if it painted something that was considered genuine by the interrogators. Turing Learning would simultaneously learn how to interrogate and how to paint.”
“Scientists could use it to discover the rules governing natural or artificial systems, especially where behaviour cannot be easily characterised using similarity metrics,” he said.
“Computer games, for example, could gain in realism as virtual players could observe and assume characteristic traits of their human counterparts. They would not simply copy the observed behaviour, but rather reveal what makes human players distinctive from the rest.”
The discovery could also be used to create algorithms that detect abnormalities in behaviour. This could prove useful for the health monitoring of livestock and for the preventive maintenance of machines, cars and airplanes.
Turing Learning could also be used in security applications, such as for lie detection or online identity verification.

Categories
Call for participation Conferences

Participate in MLICOM 2016!

MLICOM 2016, the EAI International Conference on Machine Learning and Intelligent Communications, is going to take place on August 26-28, 2016 in Shanghai, People’s Republic of China.

The conference will focus on applying machine learning algorithms in communication systems in order to improve the quality of service and make the systems smart, intelligent, and efficient.

Topics of interest to the conference include:

  • Intelligent machine learning algorithm & cognitive radio networks;
  • Intelligent cloud-support communications;
  • Intelligent spectrum (or resource block) allocation schemes;
  • Intelligent data mining in heterogeneous networks;
  • Intelligent energy-aware/green communications;
  • Intelligent software defined flexible radios;
  • Intelligent cooperative networks;
  • Intelligent antennas design and dynamic configuration;
  • Intelligent Massive MIMO communication systems;
  • Intelligent machine learning algorithms for IoT;
  • Intelligent positioning and navigation systems;
  • Intelligent cooperative/distributed coding.

All accepted papers will be published by Springer and made available through SpringerLink Digital Library, one of the world’s largest scientific libraries. Proceedings are submitted for inclusion to the leading indexing services: Elsevier (EI), Thomson Scientific (ISI), Scopus, Crossref, Google Scholar, DBLP. Authors of the Best Papers will be invited to submit an extended version of their work through the EAI Endorsed Transactions on Cognitive Communications.

Important dates:

Full Paper Submission Deadline: 20th June, 2016
Notification Deadline: 15th July, 2016
Camera-ready Deadline: 1st August, 2016

Register for the conference here!

For more information about MLICOM 2016, visit the conference official website!

Categories
News

Google is putting extra thought into AI safety concerns

Via Google Research Blog.
Development of machine learning and artificial intelligence is moving forward steadily, and with significant progress come significant risks and public interest in the safety of advanced AI. Even if we set aside the worst-case scenarios that have been done and overdone in fiction, the practical problems we face while trying to develop capable and autonomous AI are numerous.
We have reported previously that Google is well aware of the dangers of advanced AI, but they have now gone one step further. Together with researchers from OpenAI, Stanford, and Berkeley, they have boiled the emerging issues down to five distinct categories. As they state on their Research Blog, these are issues that may seem trivial, even irrelevant, right now, but they are the cornerstones of current and future development of AI. The goal is simple: to ground the debate and frame it into tangible and quantifiable problems for both engineers and the public. There are no nightmare scenarios to be found here, only sure-fire ways of avoiding them.
It would be a stretch to put the paper they published yesterday, Concrete Problems in AI Safety, right next to Asimov’s Three Laws of Robotics, but the two have a lot in common. Google’s perspective is a little narrower, as they focus on accidents in machine learning systems, but safety is still the central theme. As they state in the paper, an accident constitutes “unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors.” This definition surely covers enough ground to catch the attention of anyone interested in AI safety.
And here are the five problems, which the authors describe as forward-thinking and long-term:

  • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
  • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
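One idea discussed around the first problem, avoiding negative side effects, is to penalize an agent for changes to its environment that the task does not require. The sketch below is a hedged toy version: the flat "grid" of labeled cells, the impact measure (cells that differ from the initial state), and the penalty weight are all illustrative, not anything from the paper.

```python
def shaped_reward(task_reward, initial_state, current_state, penalty=0.5):
    """Task reward minus a penalty for every environment cell the agent
    has disturbed since the start of the episode."""
    impact = sum(a != b for a, b in zip(initial_state, current_state))
    return task_reward - penalty * impact

start = ["floor", "vase", "floor", "dirt"]
after = ["floor", "broken_vase", "floor", "clean"]  # cleaned, but broke the vase
# cleaning earns +1, but two cells changed, so the shaped reward is
# 1.0 - 0.5 * 2 = 0.0: knocking over the vase erased the gain
r = shaped_reward(1.0, start, after)
```

A penalty like this makes the vase-knocking shortcut unprofitable, which is exactly the kind of incentive shaping the cleaning-robot example calls for.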

That is a tall order. We wish the entirety of Google’s Brain division best of luck.

Categories
Call for papers Conferences

MLICOM 2016 calls for papers!

MLICOM 2016, the EAI International Conference on Machine Learning and Intelligent Communications, is going to take place on August 26-28, 2016 in Shanghai, People’s Republic of China.

The conference will focus on applying machine learning algorithms in communication systems in order to improve the quality of service and make the systems smart, intelligent, and efficient.

Topics of interest to the conference include:

  • Intelligent machine learning algorithm & cognitive radio networks;
  • Intelligent cloud-support communications;
  • Intelligent spectrum (or resource block) allocation schemes;
  • Intelligent data mining in heterogeneous networks;
  • Intelligent energy-aware/green communications;
  • Intelligent software defined flexible radios;
  • Intelligent cooperative networks;
  • Intelligent antennas design and dynamic configuration;
  • Intelligent Massive MIMO communication systems;
  • Intelligent machine learning algorithms for IoT;
  • Intelligent positioning and navigation systems;
  • Intelligent cooperative/distributed coding.

All accepted papers will be published by Springer and made available through SpringerLink Digital Library, one of the world’s largest scientific libraries. Proceedings are submitted for inclusion to the leading indexing services: Elsevier (EI), Thomson Scientific (ISI), Scopus, Crossref, Google Scholar, DBLP. Authors of the Best Papers will be invited to submit an extended version of their work through the EAI Endorsed Transactions on Cognitive Communications.

Important dates:

Full Paper Submission Deadline: 1st June, 2016
Notification Deadline: 15th July, 2016
Camera-ready Deadline: 1st August, 2016

For more information about MLICOM 2016, visit the conference official website!

Categories
News

Georgia Tech students got tricked hard by this AI

Original news release was issued by Georgia Institute of Technology by Jason Maderer.

At Georgia Tech, Knowledge Based Artificial Intelligence (KBAI) is a mandatory class in the online master’s degree in Computer Science. It is taught by Professor Ashok Goel, and as happens with popular classes, students have questions even before it starts. For the most part, it is the job of teaching assistants (TAs) to answer them, but even a team of TAs can struggle with the roughly 10,000 questions asked in Goel’s class. So far, Professor Goel has had the help of eight TAs, but this semester he ‘hired’ another one.

Ashok Goel in the classroom

Her name is Jill Watson. Naturally, she inherited her surname from her ‘parent’, IBM’s Watson platform. Jill is a computer: a virtual teaching assistant, the first of its kind. Jill’s origins go back to last year, when Ashok Goel and his graduate students got access to all the questions and answers on the forum of the KBAI course. They then let Jill Watson go over them and prepare for the start of the next semester.

“Initially her answers weren’t good enough because she would get stuck on keywords,” said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. “For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons — same keywords — but different context.”

This was expected, which is why her initial answers were not visible to the students; Jill just had to get accustomed to the job. In March, Goel and his team concluded that she could interact directly with the students whenever her confidence in an answer was at least 97 percent; less certain replies were still supervised by human TAs. On April 26, Goel told his students that they had unknowingly been interacting with an AI while studying AI.
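The gating rule described above is simple to state in code. This is a hedged sketch of the decision logic only: the function name, the tuple return shape, and the example questions are illustrative, and the actual answer-generation step is a stand-in.

```python
def route_answer(question, answer, confidence, threshold=0.97):
    """Post an automated answer only when confidence clears the threshold;
    otherwise route the draft to a human TA for review."""
    if confidence >= threshold:
        return ("post", answer)      # Jill replies directly
    return ("review", answer)        # a human TA checks the draft first

decision, _ = route_answer("When is the first assignment due?",
                           "See the course syllabus.", 0.99)
```

The appeal of a threshold like this is that the assistant fails safe: low-confidence answers cost a human some time, but never reach students unchecked.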

One student out of three hundred, however, had some suspicions about Jill back in February.

“We were taking an AI course, so I had to imagine that it was possible there might be an AI lurking around,” said Tyson Bailey, who lives in Albuquerque, New Mexico.

The realization was overwhelming for the students, even mind-blowing for some. The goal is for Jill to answer 40 percent of all questions by the end of this year. Next semester, she will return under a different name. Virtual teaching assistant technology could be used at many different schools, saving a lot of valuable time.

Categories
News

Deep neural networks equip self-driving cars with intuition

Original news release was published by KU Leuven.

It wouldn’t be an overstatement to say that the development of autonomous vehicles is all the rage. The RAND Corporation has recently reported that it is nigh impossible to reliably prove the complete safety of self-driving cars by test drives alone, given the huge number of test-driven miles necessary for statistically viable results. RAND has suggested the need for better methods of demonstrating the safety of these vehicles, and KU Leuven researchers may just be able to contribute. A new study by Jonas Kubilius and Hans Op de Beeck shows that by using deep neural networks (DNNs), machines with image recognition technology can learn to respond to unfamiliar objects as humans would, showing elementary traits of what we know as intuition.

A self-driving car may thus be able to make human-like decisions under poor visibility conditions, such as fog or heavy rain, when faced with a distorted or unfamiliar obstacle. Currently used image recognition technology, which is trained to recognize a fixed set of objects, struggles under such conditions because it cannot assess what the unfamiliar object looks like and then act accordingly, as a live person would.

“We found that deep neural networks are not only good at making objective decisions (‘this is a car’), but also develop human-level sensitivities to object shape (‘this looks like …’),” Jonas Kubilius explains. “In other words, machines can learn to tell us what a new shape – say, a letter from a novel alphabet or a blurred object on the road – reminds them of. This means we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours.”

Kubilius and de Beeck have demonstrated that sensitivity for shape features, characteristic of human and primate vision, emerges in DNNs when they are trained for generic object recognition from natural photographs. They have shown that these models explain human judgements of shape for several benchmark sets of behavioral and neural stimuli on which earlier models mostly failed. In particular, although never explicitly trained on such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated as the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments.

Does that mean we may soon be able to safely hand over the wheel? “Not quite,” says Kubilius. “We’re not there just yet. And even if machines will at some point be equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes, although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars would err, their decisions would be at least as reasonable as ours.”

Categories
Call for papers Conferences

Submit your paper to MLICOM 2016!

The EAI International Conference on Machine Learning and Intelligent Communications (MLICOM 2016) is going to take place on August 26-28, 2016 in Shanghai, People’s Republic of China.

The conference focuses on applying machine learning algorithms in communication systems, in order to improve the quality of service and make the systems smart, intelligent, and efficient.
Topics of interest to the conference include:

  • Intelligent machine learning algorithm & cognitive radio networks;
  • Intelligent cloud-support communications;
  • Intelligent spectrum (or resource block) allocation schemes;
  • Intelligent data mining in heterogeneous networks;
  • Intelligent energy-aware/green communications;
  • Intelligent software defined flexible radios;
  • Intelligent cooperative networks;
  • Intelligent antennas design and dynamic configuration;
  • Intelligent Massive MIMO communication systems;
  • Intelligent machine learning algorithms for IoT;
  • Intelligent positioning and navigation systems;
  • Intelligent cooperative/distributed coding.

All accepted papers will be published by Springer and made available through SpringerLink Digital Library, one of the world’s largest scientific libraries. Proceedings are submitted for inclusion to the leading indexing services: Elsevier (EI), Thomson Scientific (ISI), Scopus, Crossref, Google Scholar, DBLP. Authors of the best papers will be invited to submit an extended version of their work through the EAI Endorsed Transactions on Cognitive Communications.

Important dates:

Full Paper Submission Deadline: 1st June, 2016
Notification Deadline: 15th July, 2016
Camera-ready Deadline: 12th August, 2016

For more information about MLICOM 2016, visit the conference official website!