Upscaling cellular processes to smart cities

We had the chance to speak with Jane Hillston, Professor of Quantitative Modeling in the School of Informatics at the University of Edinburgh, and a member of the Laboratory for Foundations of Computer Science. We are excited to welcome her as a keynote speaker at BICT 2017, the 10th EAI International Conference on Bio-inspired Information and Communications Technologies, which will take place on March 15-16, 2017 in Hoboken, New Jersey. Read on to hear about what she will bring to the table at this year’s BICT and what cities should be doing to become more “smart”.
Could you summarize the scope of your current work and what you are coming to share with everyone at this event?

Jane Hillston (University of Edinburgh)

I work on what we call quantified formal methods: verifying systems not only in terms of their functional behavior, but also in terms of quantified aspects such as timing and resource use. It’s very important not just that a system such as a nuclear reactor shuts down when you want it to, but also that it shuts down quickly.
Similarly, sometimes when you’re using resources it’s not sufficient that something just behaves correctly; you also don’t want it to be inefficient in how much energy that functional behavior requires. So we developed techniques that look at behaviors and characterize them in terms of states, interactions, and communications, and that associate timing and cost with those, so we can check whether a server is approaching high utilization, whether users will get a good response time, and so on. We’ve been doing that kind of analysis for about 20 years, and what we found is that over that time systems have become more complex, and the languages we use to describe them must expand and grow to match that complexity as well.
Most recently, we’ve been focusing on different forms of interaction, because when we have these large collective systems, it’s much less predictable whom you will be communicating with. Now we look much more at things like attribute-based communication: you don’t know who is going to be in the system, but when you want to talk to somebody, you send out a message and it can be picked up by anyone who matches the attributes you specify. This is a much more appropriate way of thinking about communication when we have modern systems with pervasive components that switch on or off, run out of battery, and so on. This method gives you much more flexibility.
Space has become really important, which is another key difference in the languages we’ve been developing. Previously, space was abstract, treated largely as a containment relationship, so that we have barriers at different levels of an organization. For example, within a cell (you have the nucleus, the membrane, and then outside the cell), the ways things can interact are affected by those barriers, so that a protein outside the cell cannot contact a gene inside the nucleus. When you start to look at things like collective adaptive systems in nature, or how humans in society function, distance and real physical space become pertinent as well. So it’s not enough to have this logical idea of domains or regions within space; we must know how far apart things are. Combining this feature with attribute-based communication, you can restrict communication to only those people within a certain distance of you. And so we can capture a lot of what’s going on in reality, where we have these spatially distributed systems.
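The two ideas described above can be illustrated with a minimal sketch. This is not Hillston’s modeling language, just an illustrative Python fragment (all names are hypothetical) showing attribute-based, distance-limited broadcast: a sender never names its receivers; a message is delivered to every component whose attributes match a predicate and that lies within a given range.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Component:
    """An anonymous component with attributes and a physical position."""
    attributes: dict
    position: tuple              # (x, y) coordinates in physical space
    inbox: list = field(default_factory=list)

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def broadcast(sender, components, message, predicate, max_range):
    """Attribute-based send: the message reaches every component whose
    attributes satisfy `predicate` AND that is within `max_range` of the
    sender. The sender never identifies its receivers by name."""
    for c in components:
        if c is sender:
            continue
        if predicate(c.attributes) and distance(sender.position, c.position) <= max_range:
            c.inbox.append(message)

# Example: a bike-share dock announces free slots to nearby cyclists only.
dock = Component({"role": "dock", "free_slots": 3}, (0.0, 0.0))
near_cyclist = Component({"role": "cyclist"}, (0.5, 0.5))
far_cyclist = Component({"role": "cyclist"}, (10.0, 10.0))
bus = Component({"role": "bus"}, (0.2, 0.1))

everyone = [dock, near_cyclist, far_cyclist, bus]
broadcast(dock, everyone, "3 slots free",
          predicate=lambda a: a.get("role") == "cyclist",
          max_range=1.0)
```

Here only the nearby cyclist receives the message: the distant cyclist fails the range check, and the bus fails the attribute predicate, even though it is close by.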
What would you say are the main trends in biological systems that are showing promise? Or if there are any specific areas that could benefit greatly from Information Communication Technology (ICT) but their potential has gone mostly untapped?
The idea of pervasive systems.
We started out looking at computer systems and their properties, such as response time and disk utilization, but the same kind of modeling and consideration of resources comes up in many different domains. We’ve looked at biological processes at a cellular level, and even at how plants compete for land as a resource. The work we’ve been doing in the QUANTICOL project in recent years looks at how we can take these techniques and this particular view of the world and apply them to smart cities. We’ve focused specifically on transportation in smart cities, so things like bike-sharing systems and bus companies.
These are also collective adaptive systems, because each bike and bus is competing for road space, and so they delay each other. The process of adapting to that is much slower, in the sense that the bus company needs time to analyze the data, change the routes, and rearrange the stops. So we have adaptation on different time scales according to how you look at the system and which stakeholder’s point of view you consider. What makes a good system from the bus user’s point of view is not necessarily the same as from the bus company’s: the bus user would essentially like a taxi, while the bus company wants as few buses as possible running standard routes.
The computers in these cases are invisible to people, which in some ways makes it even more important that such systems are formally verified, so that we’re sure they’ll behave in the right way. Quite often the public won’t realize that there’s a computer they’re interacting with; bike-sharing systems, which are very much a smart-city kind of system, are not generally thought of as computer-based. People are starting to realize this with the IoT from the point of view of security, but there are many other aspects of verification that need to be taken into consideration: fairness of resource usage as well as security and functional correctness.
In regards to the relationship between smart cities and ICT from the perspective of ICT usage, what are the essential components for making a city “smart” and how do we speed up the process of becoming a smart city?
What people mean when they refer to smart cities varies a great deal. For me, the key thing that makes a city smart is the flow of information: the system collects information from its users but also reflects that information back to them, so that they can make better use of the system. This is what makes it adaptive. In transportation systems, people can see when the buses are arriving and use apps that predict their journeys in a much better way. This is what makes it smart: how the system alters people’s behavior because they have better information, and how we design systems so that we control the information flow and make the system as suitable as possible for its users.
It’s not so much about accelerating the process as about ensuring that we’re doing it right. The danger is that we don’t take enough care to ensure these systems behave correctly; they tend to be implemented without the modeling and verification that I’m concerned with. There was the big scare over IoT data leakage, in which IoT devices could be used to essentially get into people’s private data. That is exactly what happens when someone is driven by the technology, wanting to implement something without doing the modeling and analysis to check whether its behavior is suitable. The less obvious it is to people that they’re using a computer system, the greater the need for us to try to ensure that it’s doing the right thing.