Mike Sanderson, the Director of Strategy at 1Spatial, provides a perspective piece on the rise of big spatial data and the importance of being able to base management decisions on correct real-world data.
I came across a presentation from Tim Willoughby recently, and it got me thinking: how can we use the current explosion in data as an opportunity to improve everyday life? So what is it about Big Data as a phenomenon that distinguishes it from any other information technology fad? Nothing, if this is only about selling more servers or more software; everything, if we do something more intelligent with the resource now available to us.
The state we are in is this: there is now so much data from the social networking domain that, in order to make sense of it, we have to imbue our computers with intelligence. That intelligence comes from our own knowledge, because only we know what is significant in our everyday and working lives. Before Big Data, in our GIS world we could sit, look at a map and draw conclusions. That approach no longer makes sense now that all this Big Data arrives with coordinate references. What we have to do in this Big Data world is decide what is significant and give our computers the knowledge to tell us when those conditions are met. So how do we give our computers this intelligence?
1Spatial has been working on this knowledge management paradigm since 2006. Success in the spatial big data world comes from a number of things; I will talk about two of them here. The first is standards, and the second is encoding our own knowledge of what is significant. The GIS industry has achieved a lot in the standards domain over the last two decades, and the Geography Markup Language (GML) is a key standard in the spatial big data world. At a basic level it allows intelligence to be associated with objects. We can use this intelligence to examine big data and find the data associated with the objects we deem significant. Before, we would pull together a map tile (essentially a square or a radius around a point). Now, if we need to, say, examine objects that might be subject to flooding along a watercourse between two points, we can pull only the data that are strongly connected (by position) to the watercourse. You can see this in the map.
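The idea of selecting by proximity to a feature rather than by map tile can be sketched in a few lines. This is a minimal, hypothetical illustration (not 1Spatial's implementation, and deliberately avoiding any GIS library): objects with planar coordinates are kept only if they fall within a buffer distance of a watercourse modelled as a polyline.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest planar distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def near_watercourse(objects, watercourse, buffer_m):
    """Keep objects lying within buffer_m of any segment of the polyline."""
    selected = []
    for obj in objects:
        d = min(point_segment_distance(obj["position"], a, b)
                for a, b in zip(watercourse, watercourse[1:]))
        if d <= buffer_m:
            selected.append(obj)
    return selected

# Hypothetical data: positions are (easting, northing) in metres.
objects = [
    {"id": "house-1", "position": (105.0, 52.0)},
    {"id": "house-2", "position": (400.0, 400.0)},
]
river = [(0.0, 0.0), (100.0, 50.0), (200.0, 60.0)]  # polyline vertices

print([o["id"] for o in near_watercourse(objects, river, buffer_m=50.0)])
# → ['house-1']
```

A tile-based query around the same stretch of river would have returned house-2 as well, even though it sits hundreds of metres from the water; the buffer query transmits only what matters.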
Why is this really important? Had we pulled the data from the big data world on the map sheet paradigm, data we weren't really interested in would have been returned, and transmitting unwanted data in real time as we manage our complex world is not intelligent. It goes further than this: if we need to edit some of this data, we should transmit only the changes we make – this is the real intelligence.
A few years back we did some work with Queensland Main Roads. The sensors on their big road trains transmit a number of parameters, including position, every 30 seconds – not much data in big data terms. However, fixing position and transmitting data rely on satellite and mobile telephony coverage. When these go off-line, the data is stored on board the truck for later transmission. When the data feed recommences, alarms are tripped in the back office because of the break in transmission. Adding extra intelligence to data collected by sensors plays a key role in making sure decisions can be taken on the correct data.
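The extra intelligence here is teaching the back office to tell a coverage dropout apart from a genuine fault. As a rough sketch (the field names, grace period and buffered flag are all assumptions for illustration, not Queensland Main Roads' system): if a gap in the 30-second feed is followed by data the truck marked as store-and-forward, it was a coverage dropout, not an alarm.

```python
from datetime import datetime, timedelta

EXPECTED_INTERVAL = timedelta(seconds=30)
GRACE = timedelta(seconds=90)  # assumed tolerance before flagging a gap

def classify_feed(readings):
    """Walk a time-ordered list of (timestamp, was_buffered) readings.
    A gap whose backfill arrives marked as buffered on the truck is a
    coverage dropout; a gap with no buffered backfill raises an alarm."""
    events = []
    for (prev_t, _), (cur_t, cur_buffered) in zip(readings, readings[1:]):
        gap = cur_t - prev_t
        if gap <= EXPECTED_INTERVAL + GRACE:
            continue  # feed arrived on schedule
        kind = "coverage-dropout" if cur_buffered else "alarm"
        events.append((kind, prev_t, cur_t))
    return events

# Hypothetical feed: a five-minute silence, then buffered data catches up.
t0 = datetime(2014, 1, 1, 8, 0, 0)
readings = [
    (t0, False),
    (t0 + timedelta(seconds=30), False),
    (t0 + timedelta(seconds=330), True),   # backfilled from on-board storage
    (t0 + timedelta(seconds=360), False),
]

print(classify_feed(readings))
```

A naive back office would have tripped an alarm at the five-minute silence; the buffered flag lets the gap be logged as a coverage dropout instead, so decisions rest on the correct data.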
In the spatial big data world we can no longer afford to be so cavalier with our queries, risking a return of 11 million web pages as in times past (which, let's be frank, we rarely looked beyond the first page of anyway!).
Looking forward to Smart Cities, there will be plenty to do in providing algorithms that let sensors and computers analyse spatial big data for intelligent decision-making – but that is for another blog.
By Mike Sanderson, Director of Strategy, 1Spatial