Machine learning has transformed many fields and the way we conduct research. For GIS and the spatial sciences, one of the clearest examples is computer vision and its use of deep learning. Many approaches to integrating imagery focus on classifying images or scenes through deep learning techniques that train image classifiers to automatically or semi-automatically identify features. Common approaches include neural network models such as convolutional neural networks (CNNs) and related architectures derived from them. These models are trained on thousands of images before being fine-tuned to specific datasets or applied to new imagery. Most of the focus in the geosciences has been on remote sensing applications of satellite and aerial imagery, including hyperspectral, multispectral, and natural-light images, as well as high-resolution imagery.
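To make the core mechanism concrete, the sketch below implements the basic operation a CNN layer performs on imagery: sliding a small kernel over a raster to produce a feature map. This is a minimal, hand-set illustration (the raster, kernel values, and function names are invented for this example); in a trained network, many such kernels are learned from thousands of images rather than specified by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # Each output cell is the weighted sum of one local window.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "raster" with a vertical boundary, e.g. land (0) meeting water (1).
raster = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge filter; a trained CNN learns such kernels automatically.
edge_kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

response = conv2d(raster, edge_kernel)
print(response)  # uniform strong response: the boundary spans every window
```

Stacking many such filtered maps, interleaved with nonlinearities and pooling, is what lets deep networks build up from edges to complex spatial features like roads or building footprints.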
Automated Pattern Recognition in Satellite Imagery Analysis
Satellite imagery analysis, including automated pattern recognition in urban settings, is one area of focus in deep learning. Part of what drives this is that large image repositories, such as ImageNet, can now be used to train image classification algorithms such as CNNs, alongside large and growing satellite image repositories. A network pre-trained on ImageNet can then be fine-tuned with more specialized datasets such as Urban Atlas. In effect, many urban patterns across the world show similarities, and the variation in the morphology of features allows programs to learn the expected variability for a class of feature. This helps classification models learn the distinct spatial signatures of features such as built-up areas, roads, airports, and parklands. Deep convolutional neural networks (DCNNs) have also become a practical method for identifying and extracting features from high-resolution aerial and satellite imagery. As one example, extracting roads from imagery allows the pattern of development, and the likely vectors of future development, to be understood long before an area builds up.
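The fine-tuning idea above can be sketched in miniature: a pre-trained backbone is frozen, and only a small classification head is trained on the new, specialized labels. This is a conceptual numpy sketch, not a real ImageNet model; the frozen random matrix stands in for a pre-trained feature extractor, and the synthetic "patches" (dark vs. bright) stand in for labeled satellite data such as Urban Atlas classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained CNN backbone: a fixed (frozen) feature map.
# In practice this would be an ImageNet-trained network's convolutional layers.
W_frozen = rng.normal(size=(64, 16))  # 64-pixel patch -> 16 features

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

# Tiny synthetic "satellite patches": class 0 dark, class 1 bright.
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 64)),
               rng.normal(1.0, 0.3, size=(50, 64))])
y = np.array([0] * 50 + [1] * 50)

# Fine-tuning step: train only a small logistic head on the new labels,
# leaving the backbone untouched.
w, b, lr = np.zeros(16), 0.0, 0.1
F = features(X)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid prediction
    grad = p - y                            # logistic-loss gradient
    w -= lr * F.T @ grad / len(y)
    b -= lr * grad.mean()

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")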
Scene Segmentation in Remote Sensing Data
Another developed technique for imagery such as remote sensing data is scene segmentation, in which a scene is divided into parts that can then be split and merged into different combinations and compared against desired classifications. When a combination of split and merged segments matches a desired object, it forms a reference set against which other features can be compared. A benefit of this approach is that it can be applied across different resolutions, using a hierarchy of spatial resolutions and relationships; in effect, the method is useful for both large and small scenes.
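The split-and-merge idea can be sketched with a simple quadtree: regions are recursively quartered until each is homogeneous, then similar regions are merged back together. This is a toy illustration with invented function names and thresholds; real implementations merge only spatially adjacent regions and use richer homogeneity criteria than a single standard-deviation test.

```python
import numpy as np

def split(region, img, thresh):
    """Recursively quarter a region until its pixel values are homogeneous."""
    r0, c0, r1, c1 = region
    block = img[r0:r1, c0:c1]
    if block.std() <= thresh or min(r1 - r0, c1 - c0) <= 1:
        return [region]
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    quads = [(r0, c0, rm, cm), (r0, cm, rm, c1),
             (rm, c0, r1, cm), (rm, cm, r1, c1)]
    out = []
    for q in quads:
        out.extend(split(q, img, thresh))
    return out

def merge(regions, img, thresh):
    """Greedily group regions whose mean values are similar (adjacency ignored
    for brevity; a real split-and-merge only joins touching regions)."""
    labels = list(range(len(regions)))
    means = [img[r0:r1, c0:c1].mean() for (r0, c0, r1, c1) in regions]
    for i in range(len(regions)):
        for j in range(i):
            if abs(means[i] - means[j]) <= thresh:
                labels[i] = labels[j]
                break
    return labels

# A toy scene: a bright square (e.g. a built-up area) on a dark background.
img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0

regions = split((0, 0, 8, 8), img, thresh=0.1)
labels = merge(regions, img, thresh=0.1)
print(len(set(labels)))  # two segments: object and background
```

Because the splitting is recursive, the same code naturally yields the hierarchy of spatial resolutions the text describes: coarse regions near the root, fine detail at the leaves.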
Further Deep Learning Advances in Spatial Sciences
While much has been accomplished in remote sensing, another area to which the spatial sciences could contribute, and which is seeing rapid advances through deep learning, is medical imaging. As in remote sensing, CNNs and related algorithms can rapidly classify diseases, and even estimate the likelihood of illness before it develops, from raw imagery such as X-rays, scans, and photographs. Spatial patterning, much as in remote sensing data, indicates what is likely present in a patient and what could occur based on previous case histories.
While much current research has focused on satellite and aerial imagery, other avenues could benefit even more from deep learning techniques. For instance, spatial classification of small objects, such as complex shapes, faces, and confined areas, could aid geospatial studies by rapidly identifying these smaller objects. Although face recognition software on popular sites such as Facebook is the best-known example, other disciplines could now also identify complex objects more easily, using large image repositories such as ImageNet to teach computers to better recognize what objects are. We are, effectively, only at the beginning of a major change in many sciences that apply imagery.
Deep learning has a potential to transform image classification and its use for the spatial sciences, including GIS. With large repositories now available that contain millions of images, computers can be more easily trained to automatically recognize and classify different objects. In effect, this area of research and application could be highly applicable to many types of spatial analyses.
For examples of imagery classification using deep learning, see: Zhao, Wenzhi, and Shihong Du. 2016. “Learning Multiscale and Deep Representations for Classifying Remotely Sensed Imagery.” ISPRS Journal of Photogrammetry and Remote Sensing 113 (March): 155–65. https://doi.org/10.1016/j.isprsjprs.2016.01.004.
For more on the use of ImageNet and computer vision classifiers for satellite imagery of urban regions, see: Albert, Adrian, Jasleen Kaur, and Marta C. Gonzalez. 2017. “Using Convolutional Networks and Satellite Imagery to Identify Patterns in Urban Environments at a Large Scale.” In , 1357–66. ACM Press. https://doi.org/10.1145/3097983.3098070.
For more on deep convolutional neural networks for road extraction, see: Wang, Jun, Jingwei Song, Mingquan Chen, and Zhi Yang. 2015. “Road Network Extraction: A Neural-Dynamic Framework Based on Deep Learning and a Finite State Machine.” International Journal of Remote Sensing 36 (12): 3144–69. https://doi.org/10.1080/01431161.2015.1054049.
For more on image segmentation and information extraction, see: Wang, Jun, Qiming Qin, Zhoujing Li, Xin Ye, Jianhua Wang, Xiucheng Yang, and Xuebin Qin. 2015. “Deep Hierarchical Representation and Segmentation of High Resolution Remote Sensing Images.” In , 4320–23. IEEE. https://doi.org/10.1109/IGARSS.2015.7326782.
For more on how deep learning is used in medicine and spatial understanding of imagery, see: Greenspan, Hayit, Bram van Ginneken, and Ronald M. Summers. 2016. “Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique.” IEEE Transactions on Medical Imaging 35 (5): 1153–59. https://doi.org/10.1109/TMI.2016.2553401.