Image recognition software and algorithm development is one of the largest areas of focus in industry, including giants such as Google and Microsoft, and in academic computer and data science research. This focus has extended to geospatial analysis and practical applications for everyday users.
One area of anticipated growth is providing mobile application users with real-time information about places they are visiting or plan to visit shortly. Software created by Jetpac captured public Instagram data and analyzed pictures of a location to provide users with information about it, ranging from who is visiting a restaurant to the volume of visitors. Google purchased the company in 2014, and the functionality will likely be extended to provide other real-time data that uses photographs to better sense what is happening in given locations. While this is currently focused on city guides, it could be extended to areas such as assessing traffic patterns or better understanding customer choices for businesses. Similar software is now being developed that applies convolutional neural networks (CNNs) to video data.
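To make real-time analysis of video affordable, one approach (change-based inference, as in the CBinfer work cited at the end of this section) re-runs the expensive network only when consecutive frames actually differ. The sketch below illustrates the idea with a trivial stand-in classifier; the frames, threshold, and classifier are all invented for illustration.

```python
def classify(frame):
    # Stand-in for an expensive CNN forward pass (hypothetical):
    # labels a frame by its mean pixel intensity.
    return "bright" if sum(frame) / len(frame) > 0.5 else "dark"

def changed(prev, cur, threshold=0.05):
    # Re-run inference only when the mean absolute pixel change
    # between consecutive frames exceeds the threshold.
    diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
    return diff > threshold

def process_video(frames, threshold=0.05):
    results, prev, last = [], None, None
    for frame in frames:
        if prev is None or changed(prev, frame, threshold):
            last = classify(frame)   # expensive path: re-run the model
        results.append(last)         # unchanged frames reuse the cached label
        prev = frame
    return results

# Three 4-pixel frames: static, static, then a large change.
frames = [[0.0] * 4, [0.0] * 4, [1.0] * 4]
print(process_video(frames))  # ['dark', 'dark', 'bright']
```

Only two of the three frames trigger the classifier here; on mostly static scenes such as a fixed city-guide camera, the savings compound.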
Given the influence of the gaming industry and increased interest in augmented reality (AR), image recognition is likely to continue making advances that will aid many industries. Companies such as CrowdOptic provide real-time video and sensor augmentation of virtual worlds, allowing viewers to analyse systems in buildings, regions, and other locations. Other companies, such as Blippar, use augmented reality in advertising promotions. Monitoring crowd behaviour is one area of growth, where AR can be used to navigate and manage crowded spaces, providing tools for both users and management.
Visual databases that also encapsulate spatial data and location information are another area of substantial potential. As new tools such as Deja Vu are created, allowing automated capture and extraction of words or other information from photographs, that extracted content could be combined with spatial data to build spatial databases rapidly, simply by taking pictures. Such technology is currently used to organize visual memory in mobile applications, but it has potential in other areas of automated spatial data capture from images.
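A rough sketch of how such a database might be assembled: text extracted from a photograph is stored alongside the photo's coordinates, and records can then be queried by proximity. The extraction step is stubbed out here (in practice the text would come from OCR and the coordinates from the photo's EXIF geotag), and all records below are invented.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class SpatialTextDB:
    """Tiny in-memory store of text extracted from geotagged photos."""
    def __init__(self):
        self.records = []

    def add_photo(self, extracted_text, lat, lon):
        # Hypothetical inputs: OCR output plus the photo's geotag.
        self.records.append({"text": extracted_text, "lat": lat, "lon": lon})

    def nearest(self, lat, lon):
        # Return the record captured closest to the query point.
        return min(self.records,
                   key=lambda r: haversine_km(lat, lon, r["lat"], r["lon"]))

db = SpatialTextDB()
db.add_photo("Main St Cafe - open 7am", 40.7128, -74.0060)  # near New York
db.add_photo("Riverside Books", 51.5074, -0.1278)           # near London
print(db.nearest(41.0, -73.9)["text"])  # Main St Cafe - open 7am
```

A production system would replace the linear scan with a proper spatial index, but the pipeline of photo, extraction, geotag, and query stays the same.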
Other areas of growth include using spatial technologies to act proactively on visual information. This is already applied in autonomous vehicles, but it could also be extended to applications that guide users to specific destinations. For instance, rather than searching for a destination by name, a user could supply an image and find the nearest store selling something that looks like it. More complex analyses could even provide warnings or other proactive information by applying search and neural network analysis to photographs or videos.
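The image-to-nearest-store idea combines two steps: visual similarity, then spatial proximity. The sketch below stubs the image embeddings as hand-made feature vectors (a real system would obtain them from a CNN); the stores, vectors, coordinates, and similarity threshold are all invented for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical catalogue: each store has a product feature vector
# (in practice a CNN embedding) and a position in kilometre offsets.
stores = [
    {"name": "ShoeBarn",  "vec": [0.9, 0.1, 0.0], "pos": (2.0, 1.0)},
    {"name": "SoleMates", "vec": [0.8, 0.2, 0.1], "pos": (0.5, 0.5)},
    {"name": "HatHut",    "vec": [0.0, 0.1, 0.9], "pos": (0.1, 0.1)},
]

def nearest_matching_store(query_vec, user_pos, min_similarity=0.9):
    # Keep stores whose product looks like the query image...
    matches = [s for s in stores if cosine(query_vec, s["vec"]) >= min_similarity]
    # ...then return the closest of those to the user.
    return min(matches, key=lambda s: math.dist(user_pos, s["pos"]))["name"]

query = [1.0, 0.15, 0.05]  # embedding of the user's photo (a shoe)
print(nearest_matching_store(query, (0.0, 0.0)))  # SoleMates
```

Note that HatHut is the physically closest store but fails the similarity filter, which is exactly the behaviour the application needs: visual relevance first, distance second.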
While one of the largest applications of image recognition and computer vision is clearly in autonomous vehicles, there are new areas where the technology will likely go even beyond the next decade. For instance, driving land vehicles in the dark will likely challenge current systems, which mostly use visual information to detect obstacles and other hazards. Platforms such as NVIDIA's DRIVE PX 2 are being utilised to help with night-time or visually obstructed driving. Sensors that include night vision could be better integrated to provide object recognition in limited-light scenarios.
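One simple way to integrate a night-vision sensor with a camera is to weight each sensor's detection confidence by ambient illumination, so the thermal channel dominates in the dark. The sketch below is a deliberately simplified fusion rule with invented scores, not a description of how any production driving system works.

```python
def fuse_detection(camera_conf, thermal_conf, illumination):
    # illumination in [0, 1]: 1.0 = full daylight, 0.0 = total darkness.
    # Trust the camera in daylight and the thermal sensor at night.
    return illumination * camera_conf + (1.0 - illumination) * thermal_conf

def obstacle_detected(camera_conf, thermal_conf, illumination, threshold=0.5):
    return fuse_detection(camera_conf, thermal_conf, illumination) >= threshold

# A pedestrian at night: the camera barely sees them, the thermal sensor does.
print(obstacle_detected(camera_conf=0.2, thermal_conf=0.9, illumination=0.1))  # True
# The same camera reading weighted as in daylight would miss them.
print(obstacle_detected(camera_conf=0.2, thermal_conf=0.9, illumination=1.0))  # False
```

Real systems fuse far richer signals (lidar, radar, maps) with learned rather than hand-set weights, but the principle of shifting trust between sensors as conditions change is the same.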
What is evident is that image recognition will increasingly be applied in spatial applications. We already see this happening, but with the integration of AI and proactive learning that allows applications to make decisions with minimal visual input, image recognition techniques will likely spread to far more fields than those currently applying them.
For more on video-based image capture that can be used for real-time data capture, see: Cavigelli, L., Degen, P., & Benini, L. (2017). CBinfer: Change-Based Inference for Convolutional Neural Networks on Video Data (pp. 1–8). ACM Press. https://doi.org/10.1145/3131885.3131906.
For an example AR application, see: Gimeno, J., Portalés, C., Coma, I., Fernández, M., & Martínez, B. (2017). Combining traditional and indirect augmented reality for indoor crowded environments. A case study on the Casa Batlló museum. Computers & Graphics, 69, 92–103. https://doi.org/10.1016/j.cag.2017.09.001.
For an example of such software and tools, see: Liu, P., Glas, D. F., Kanda, T., & Ishiguro, H. (2018). Learning proactive behavior for interactive social robots. Autonomous Robots, 42(5), 1067–1085. https://doi.org/10.1007/s10514-017-9671-8.
For more on how limited light or night-time conditions might be addressed by autonomous driving vehicles, see: http://www.sciencemag.org/news/2017/03/researchers-teach-self-driving-cars-see-better-night