Stanford researchers reveal surprisingly sensitive information from phone metadata

Original press release was issued by Stanford University.

The collection of phone metadata (numbers dialed and lengths of calls), controversial as it is in principle, has never provoked significant public backlash as a serious breach of privacy. As a society, we simply don’t assume that what government agencies can learn from our metadata amounts to sensitive private information – which is why it can be accessed without a warrant.

This might soon change in light of research performed by a team at Stanford University, who managed to easily infer surprisingly sensitive and accurate personal information – such as health details – from metadata alone. Moreover, the reach of such surveillance has been shown to be greater than previously thought: following metadata “hops” from one person’s communications can involve thousands of people. The findings provide the first empirical data on the privacy properties of telephone metadata.

The researchers set out to fill knowledge gaps within the National Security Agency’s current phone metadata program, which has drawn conflicting assertions about its privacy impacts. The law currently treats call content and metadata separately and makes it easier for government agencies to obtain metadata, in part because it assumes that it shouldn’t be possible to infer specific sensitive details about people based on metadata alone.

“I was somewhat surprised by how successfully we inferred sensitive details about individuals,” said study co-author Patrick Mutchler, a graduate student at Stanford. “It feels intuitive that the businesses you call say something about yourself. But when you look at how effectively we were able to identify that a person likely had a medical condition, which we consider intensely private, that was interesting.”

From a small selection of participants, the Stanford researchers were able to infer, for instance, that a person who placed several calls to a cardiologist, a local drugstore and a cardiac arrhythmia monitoring device hotline likely suffered from cardiac arrhythmia. Another study participant likely owns an AR semiautomatic rifle, based on frequent calls to a local firearms dealer that prominently advertises AR semiautomatic rifles and to the customer support hotline of a major firearm manufacturer that produces these rifles.

The computer scientists built an app that retrieved the previous call and text message metadata from more than 800 volunteers’ smartphone logs. In total, participants provided records of more than 250,000 calls and 1.2 million texts. The researchers then used a combination of inexpensive automated and manual processes to illustrate both the extent of the reach – how many people would be involved in a scan of a single person – and the level of sensitive information that can be gleaned about each user.

By extrapolating participant data, the researchers estimated that the NSA’s current authorities could allow for surveilling roughly 25,000 individuals – and possibly more – starting from just one “seed” phone user.
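How one “seed” user balloons into tens of thousands of surveilled individuals follows from counting everyone within a fixed number of hops of the seed in the call graph. The graph below is a tiny invented example (the study extrapolated from real participant logs), sketched as a breadth-first traversal:

```python
from collections import defaultdict, deque

def reachable(edges, seed, max_hops=2):
    """Numbers within max_hops call-graph hops of seed (seed excluded)."""
    graph = defaultdict(set)
    for a, b in edges:           # treat calls as undirected for reach purposes
        graph[a].add(b)
        graph[b].add(a)
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:     # stop expanding at the hop limit
            continue
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return seen - {seed}

calls = [("seed", "A"), ("seed", "B"), ("A", "C"), ("B", "D"), ("D", "E")]
print(len(reachable(calls, "seed")))  # 4: A, B, C, D (E is three hops away)
```

Because each hop multiplies the frontier by a node’s typical number of contacts, even a two-hop rule over an ordinary call history quickly reaches crowds of this size.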

Although the results are not surprising, the researchers said that the raw, empirical data provide a better-informed starting point for future conversations between privacy interest groups and policymakers.

“If we’re going to pick a sweet spot as society, where we want the privacy vs. security tradeoff to lie, it’s important to understand the implications of the policies that we have,” Mutchler said. “In this paper, we have empirical data, which I think will help people make informed decisions.”


Jet-powered hoverboard sets an absurdly high bar for personal flight

Source for this article was published by Engadget and written by Steve Dent.

Chances are you have seen last month’s ridiculously impressive footage from the first test of a personal hoverboard developed by Frank Zapata, which can allegedly fly for as long as 10 minutes, as far as 10,000 feet (just over 3 km), and reach a top speed of 150 km/h (watch the video below). The Flyboard Air, as it is called, has just broken the world record for the farthest hoverboard flight, covering a distance of 2,252 metres – nearly ten times the previous record – and setting a new benchmark for personal hoverboards. That can sound crazy if the last hoverboard you saw was the one ridden by the Green Goblin in the 2002 Spider-Man movie.

Unlike the current mainstream packs, which use water to propel their rider, the Flyboard Air operates untethered. It uses four 250-horsepower jet engines fueled by a backpack’s worth of kerosene and is controlled by a hand-held remote. Steering is handled entirely by the rider’s weight-shifting, and its modest dimensions make it far more practical than one would expect judging by its performance alone.

“It’s impossible to ride it before you have a minimum of 50 or 100 hours in the original Flyboard with water. Also, if you want to try it, you must have seven lives, like the cat,” Frank Zapata told The Verge.

However, as much as the Flyboard Air looks like an almost fully developed commercial product, we shouldn’t expect fleets of hoverboard users whizzing through the air just yet. As Zapata stated, the current prototype is very difficult to control. At least 50 hours of flight time on the water-powered model is highly recommended before even attempting the jet version.

We can, however, expect more footage and information in the near future, as the company moves forward with commercialization and talks with numerous companies. Zapata sees governments and security forces as the first prospective users of the Flyboard Air, but he is also working on a new model that would be more suitable for the general public: “If everybody wants a Flyboard Air, we have to work with the government, we have to work with liability, we have to work on a thousand things. But why not?”


Deep neural networks equip self-driving cars with intuition

Original news release was published by KU Leuven.

It wouldn’t be an overstatement to say that the development of autonomous vehicles is all the rage. The Rand Corporation recently reported that it is nigh impossible to reliably prove the complete safety of self-driving cars through test drives alone, given the huge number of test-driven miles necessary for statistically valid results. Rand has suggested the need for better methods of demonstrating vehicle safety, and KU Leuven researchers may just be able to contribute. A new study by Jonas Kubilius and Hans Op de Beeck shows that, by using deep neural networks (DNNs), machines with image recognition technology can learn to respond to unfamiliar objects the way humans would, showing elementary traits of what we know as intuition.

A self-driving car may thus be able to make human-like decisions under poor visibility conditions, such as fog or heavy rain, when confronted with a distorted or unfamiliar obstacle. Currently used image recognition technology, which is trained to recognize a fixed set of objects, struggles under such conditions because it cannot assess what an unfamiliar object looks like and then act accordingly, as a live person would.

“We found that deep neural networks are not only good at making objective decisions (‘this is a car’), but also develop human-level sensitivities to object shape (‘this looks like …’),” Jonas Kubilius explains. “In other words, machines can learn to tell us what a new shape – say, a letter from a novel alphabet or a blurred object on the road – reminds them of. This means we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours.”
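The “reminds me of” behavior Kubilius describes boils down to nearest-neighbor lookup in a feature space. In the study the features are activations of a deep network; in the minimal sketch below they are tiny hand-made vectors, and the shape names and numbers are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented feature vectors standing in for a network's representation
# of familiar shapes.
KNOWN = {
    "circle":   [1.0, 0.1, 0.0],
    "triangle": [0.0, 1.0, 0.2],
    "square":   [0.1, 0.2, 1.0],
}

def reminds_of(features):
    """Report the most similar familiar shape for a novel input."""
    return max(KNOWN, key=lambda name: cosine(features, KNOWN[name]))

novel_blob = [0.9, 0.2, 0.1]   # a shape the system was never trained on
print(reminds_of(novel_blob))  # circle
```

The design choice worth noting is that nothing here requires the novel input to belong to a trained category: similarity in the learned feature space is what lets the system say what an unfamiliar object “looks like”.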

Kubilius and Op de Beeck have demonstrated that sensitivity to shape features, characteristic of human and primate vision, emerges in DNNs trained for generic object recognition on natural photographs. They have shown that these models explain human judgements of shape on several benchmark sets of behavioral and neural stimuli on which earlier models mostly failed. In particular, although never explicitly trained on such stimuli, DNNs develop acute sensitivity to minute variations in shape and to the non-accidental properties that have long been implicated as the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgements.

Does that mean we may soon be able to safely hand over the wheel? “Not quite,” says Kubilius, “We’re not there just yet. And even if machines will at some point be equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes – although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars would err, their decisions would be at least as reasonable as ours.”


MIT researchers are monitoring a city's worth of human waste in the name of urban health

Original press release was issued by MIT Senseable City Lab and Alm Lab.

It isn’t glamorous, but these robots could be the future of public health management. Meet Mario and Luigi, the plumber probes developed by researchers at the MIT Senseable City Lab, which are currently wading through the sewage of the city of Cambridge, MA, monitoring the health of local citizens at an unprecedented level. The research project is aptly named – Underworlds.

Mario and Luigi are taking a very close look at the feces and urine that flow through the city sewers, studying the different species of bacteria, viruses, and chemical compounds that originate in the human gut and converge in the city’s sewage – our collective microbiome. Tapping into this vast reservoir of information can help monitor urban health patterns, shape more inclusive public health strategies, and push the boundaries of urban epidemiology.

Underworlds began as a conversation between professors Carlo Ratti, Director of the Senseable City Lab in the Department of Urban Studies and Planning, and Eric Alm, Director of a laboratory in the Department of Biological Engineering, before evolving into a multi-departmental research endeavor, involving Civil and Environmental Engineering and the Computer Science and Artificial Intelligence Laboratory.

The first application of Underworlds will be contagious disease monitoring and prediction. Early warnings in relation to the presence of enteric disease outbreaks in urban centers could ultimately reduce a community’s medical costs and even help mitigate outbreaks.

“We can reveal the invisible in a city,” explains Professor Ratti. “For every cell in the human body there are around ten bacterial cells, constituting the human microbiome which has recently been recognized as a key determinant of an individual’s health and wellness – how can we measure something like the microbiome at the scale of an entire city, such as Cambridge?”

Image: MIT Microbiome Centre

The MIT team imagines a future where sewage is mined for information that can inform health practitioners, policy makers, and communities alike. “The availability of real-time or near real-time data that measures the presence of important pathogens could change how public health responds to these threats,” says Sam Lipson, of environmental health for the City of Cambridge. “Early intervention provides leverage to reduce these impacts, but this kind of surveillance information has generally not been available to public health in the past.”

The implications of Underworlds extend beyond just surveillance to the development of a new type of human population census. Analyzed in tandem with demographic data, a front-end data platform will be created to better understand and visualize the particular health of a neighborhood.


Smart Cities or Silent Cities

Experts from academia, industry and institutions met in October 2015 in Bratislava at the SmartCity 360° Summit, to explore today’s issues of sustainability, infrastructures, smart urban planning and citizenship.

David Mair, from the European Commission, and Dagmar Caganova, from the Slovak University of Technology, both speakers at the event, explain what the challenge related to smart cities is and why it cannot be delayed.

Do you want to know more about the SmartCity 360° Summit? Visit the official website.


Aim high with Smart City 360° Summit

When we talk about the cities of the future, we talk about Smart Cities. But what is the meaning behind the attribute? What does it mean for a city to be smart? On the pathway towards innovation, one action seems to be the key to success: networking.

Smart cities imply smart communities, which are the real engine of the social and economic growth in Europe.

Let’s hear it in the voices of the participants of the Smart City 360° Summit 2015, which took place last October in Bratislava and Toronto.

Eager to know detailed info about the event? Visit the official website and read our report here.



Reflecting on the Smart City 360° Summit 2015

Four days, three venues and an almost overwhelming amount of ideas floating around – that was the Smart City 360° Summit 2015. Between 13 and 16 October 2015, experts from around the world met in Bratislava and Toronto to explore and discuss the future of Smart Cities. Organized by EAI in collaboration with the Slovak University of Technology, and with the patronage of the Vice President of the European Commission Maros Sefcovic, and the Ministry of Economy of the Slovak Republic, the summit was one of the major international events on Smart Cities.

The Summit speaker list included such distinguished names as Robert Redhammer (Rector of the Slovak University of Technology) with his opening speech, David Mair (Acting Director for Policy Support Coordination, DG Joint Research Centre, European Commission), Rastislav Chovanec (State Secretary, Ministry of Economy of the Slovak Republic), Tamas Vahl (Smarter Cities and EMF Pursuit Lead, IBM Central and Eastern Europe), Harald Baur (Networked Society Evangelist and Strategic Marketing Manager, Ericsson), Pavol Adamec (Director, Assurance, Risk Management, Information Technology, PwC Slovakia), Simon Sicko (CEO of Pixel Federation), Manfred Schrenk (Competence Center for Urban and Regional Planning, Vienna), Mario Paroha (Head of Research at the GreenWay Operator), Lukas Stockinger (Communications Officer, Smart City Wien Agency), Massimo Craglia (Digital Earth and Reference Data Unit, Joint Research Centre, EC), and many others.

If there was any one idea that kept coming up in nearly every session, it was that we need to think much bigger when we think about Smart Cities. Many speakers and panellists made a strong point that when we talk about Smart Cities, we too often focus on the technology behind smart devices and smart infrastructure, while neglecting the concept of smart people.

Developing conscious citizenship, educating individuals ready for the future job market and encouraging social innovation in an increasingly technology-driven world – these were relentlessly put forward as the cornerstones of a safe, sustainable, and smart society.

The second overarching theme of the Smart City 360° Summit was a general uncertainty about the definition of a “Smart City”, as well as an emphasis on the fact that there is no universal recipe for a city to become smart. Many speakers pointed out the multi-disciplinary nature of smart cities, and the challenges that arise when governments, the private sector, and citizens are required to collaborate. Many warnings about silo mentality were voiced, again emphasizing the need for mutual understanding and efficient collaboration between the many facets that together form a Smart City.

The main takeaway from the Smart City 360° Summit was that creating a Smart City is a long, arduous and problematic process that poses many complex challenges. This, in the end, only affirms the need for events such as Smart City 360°. Meeting people from different areas of expertise and public life, and putting heads together to make progress in sustainability and smart urban development is, as it appears, absolutely essential.

Take a look at the photo gallery from the Smart City 360° Summit here.


Download SmartCity360° app

The SmartCity360° app is now available for download. Check out the program, who is coming, join community activities and meet new colleagues and potential partners.

Meet: While attending the SmartCity360° Summit, participants can easily set up meetings using their mobile phones. Users can select either Meet the Speaker or Meet the Colleague, choose the person they would like to meet and follow the simple on-screen instructions.

Exhibits: Exhibits are the showcase of innovative excellence. Exhibitors will share their ideas and projects with partners, innovators and potential investors. Participate in the Summit activities and vote on-site for the best presentations and exhibits.

The app is available for iOS or Android, or you can even check it out online.


What do you expect from the Smart City 360° Summit?

The SmartCity360 Summit is taking place simultaneously in Bratislava and Toronto from the 13th to the 16th of October. It is a special event bringing together researchers, industry and urban leaders to discuss smart cities from a 360° perspective. The main event features 9 co-located conferences that give space to selected scientific projects on the topics of smart cities, mobility and healthcare. Thanks to a revolutionary Community program, members will have the opportunity to meet colleagues and speakers, and to be active participants in the Birds of a Feather discussions, the Forecaster platform and the Community Voted presentations. Furthermore, Exhibit spaces are available to share the most innovative projects and ideas and vote on them.

For further info about the Summit, please visit the official website.


Big data analysis for Smart Cities

Improving the quality of life in our cities is a challenge for the years ahead. To analyze, interpret and understand society’s trends and to make decisions, we need to develop new algorithms and visualization techniques, and to reinforce the interdisciplinary nature of many problems of data analytics. During our International Conference on Big Data and Analytics for Smart Cities, Prof. Cercone will present how a renewed effort in big data analytics and visualization methods can affect Smart Cities-focused activities. In the following interview, he talks to us about the challenges related to Big Data analytics.

Read more about the Smart City 360 Summit taking place on 13-16 October 2015 in Toronto/Bratislava and the first International Conference on Big Data and Analytics for Smart Cities taking place on 13 October 2015 in Toronto.

BIGDasc 2015, the first International Conference on Big Data and Analytics for Smart Cities, will take place on October 13. As General Chair, what do you expect from the event?

I expect to see an interplay between various stakeholders – academic, public sector, private sector and government representatives – discussing how to make our cities (including the GTA, soon to be a megacity of 10 million) more livable, more serviceable, less dangerous, more accessible, and how to generally improve quality of life for their residents. This is the first of hopefully many important venues on this topic, and hopefully BIGDasc 2015 will pave the way for successful venues in the future.

The research area of Big Data analysis requires huge efforts, due to the inadequacy of traditional data processing applications. Could you give us an overview of the present state of progress in the field?

Prof. Nick Cercone, General Chair at BIGDasc 2015

There is a renewed effort on big data analytics and visualization methods undergoing research and development in almost all aspects of our society that affect our lives. Examples abound with streaming data – data that accumulates so fast that it cannot be analyzed by traditional data mining algorithms (as is the case for most transaction data). Thus data from mobile medical devices, from traffic signals and cameras, from genomics, social media, energy data, etc. require new algorithms and visualization techniques to analyze, interpret and understand trends and data utility, and to make decisions.
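One classic answer to data that arrives too fast to store is reservoir sampling: it keeps a uniform random sample of fixed size k over an unbounded stream using only O(k) memory. A minimal sketch, with the stream of readings simulated (the sensor data here is invented):

```python
import random

def reservoir_sample(stream, k):
    """Maintain a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = random.randint(0, i)        # item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

readings = (x * 0.5 for x in range(1_000_000))  # simulated high-rate sensor stream
sample = reservoir_sample(readings, 100)
print(len(sample))  # 100, regardless of how long the stream runs
```

The appeal for streaming settings like transaction or sensor feeds is that memory use is fixed up front, while every item seen so far remains equally likely to be in the sample.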

In your opinion, what are the key challenges to address in the near future, in order to positively impact the research and industry sectors involved in smart cities-focused activities?

The interdisciplinary nature of many problems of data analytics needs to be reinforced. Teams of researchers and practitioners working on related problems but often speaking a different language may provide breakthroughs that individual researchers may miss. For example, statisticians, computer scientists, engineers, mathematicians and subject area specialists could form a formidable team to work on many types of big data analytics problems. Working with industry (including forming new start-ups) is imperative; I have always believed that academics can provide advanced prototypes but that product development, marketing, business intelligence etc. are best left to the private sector.