Self-driving, or autonomous, cars have been heavily covered in the media. Although we do sometimes see accidents or incidents that reveal real dangers, the recent trend indicates these vehicles will be a common presence on our roads in the years to come.
Conventional self-driving cars require dense 3D map data that tell these vehicles where to go and how to react when obstacles emerge. It is one thing to avoid hitting someone or something, but cars also need to decide how to proceed as they evade an emerging obstacle. Major companies such as Google have addressed this challenge with reasonable success.
Using GPS and LiDAR to Navigate in Areas Lacking Maps
In rural regions, visual cues are more limited and less predictable: lane markings may be faded or absent, and cars often have no advance knowledge of some roads, such as unpaved stretches. Newer approaches use lasers and other sensors to acquire information about the surroundings as the car is driving. Tools such as MIT’s MapLite integrate GPS data, for general navigation, with sensor data so that obstacles and road conditions are determined in real time rather than being fixed before the car sets off. In this study, LiDAR offers a potentially useful complement to GPS for autonomous vehicle navigation in areas with only sparse topological maps.
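The core idea behind this style of navigation can be sketched in a few lines: the GPS route supplies only a coarse bearing toward the next topological waypoint, while live sensor ranges decide which headings are actually drivable. The function below is a hypothetical simplification (the names, beam geometry, and clearance threshold are illustrative, not MapLite's actual algorithm):

```python
def plan_heading(gps_bearing_deg, lidar_ranges, angle_min_deg=-90.0,
                 angle_step_deg=5.0, min_clearance_m=4.0):
    """Pick a steering heading: prefer the bearing toward the next GPS
    waypoint, but only among headings whose LiDAR range reading shows
    enough clear road ahead. Returns None if every heading is blocked."""
    best, best_cost = None, float("inf")
    for i, r in enumerate(lidar_ranges):
        heading = angle_min_deg + i * angle_step_deg
        if r < min_clearance_m:        # obstacle or road edge too close
            continue
        cost = abs(heading - gps_bearing_deg)  # deviation from GPS route
        if cost < best_cost:
            best, best_cost = heading, cost
    return best
```

In this sketch the map never specifies the road geometry; it only biases the choice among headings that the live scan says are safe, which is the essence of navigating with sparse prior maps.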
Solutions for Autonomous Navigation in Rural Areas
Navigating rural roads poses several potential problems. First, lane markings need to be easily identifiable; faded markings or unclear road boundaries can be hazardous. Second, weather conditions such as rain and snow can degrade laser scans. Additional types of visual sensors may need to be mounted alongside the laser sensors to increase awareness. One way this has been addressed is with sensor fusion algorithms that integrate a variety of sensors to better handle complex data and the adverse weather that can complicate perception and prediction. Other methods have been proposed as well, particularly for snow conditions, including using LiDAR, a form of pulsed laser, to detect edges that can then be classified as the vehicle moves (e.g., differentiating lane edges from curbs). This approach also applies principal component analysis (PCA) to the resulting imaging to detect and differentiate features.
Other potential solutions for rural driving include autonomous vehicles communicating with a central database or a wider network that provides information on demand. In essence, a form of data crowdsourcing could be used: cars that do drive through rural or little-used areas contribute data to a network that other cars can then access. Such data sharing could help fill in areas where little data exist, making it easier for cars to drive in rural areas over time as cars begin to learn from one another. This could be modeled on proposed architectures such as heterogeneous vehicular networks (HetVNETs).
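The crowdsourcing idea can be illustrated with a toy shared store: vehicles report observations keyed by a coarse grid cell, and later vehicles query the aggregate before driving the same segment. This is a hypothetical sketch of the concept only, not HetVNET or any deployed system (class name, cell size, and API are invented for illustration):

```python
from collections import defaultdict

class CrowdsourcedRoadMap:
    """Toy shared road-data store: observations are pooled per grid cell
    so that sparse individual drives accumulate into usable coverage."""

    def __init__(self, cell_size_m=50.0):
        self.cell_size = cell_size_m
        self.cells = defaultdict(list)   # grid cell -> observations

    def _cell(self, x, y):
        # Coarse spatial key: which cell of the grid contains (x, y).
        return (int(x // self.cell_size), int(y // self.cell_size))

    def report(self, x, y, observation):
        """A vehicle contributes what it observed at position (x, y)."""
        self.cells[self._cell(x, y)].append(observation)

    def query(self, x, y):
        """Another vehicle asks what is already known about this area."""
        return list(self.cells[self._cell(x, y)])
```

In practice the hard problems are the ones this sketch ignores: communication reliability over rural links, trust in contributed data, and keeping observations fresh, which is what networking proposals like HetVNET aim to address.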
In many ways, the use of networks could enable a broader, more cognitive approach to autonomous vehicles, in which communication between vehicles, on-road observations, and predictive algorithms are all applied together. In other words, much like humans, cars may need to combine multiple senses and communication with others to make informed choices. Autonomous cars might need to go beyond visual data alone and incorporate sound, visual, and digital data, integrating this sensory information in a cognitive-like approach that helps the vehicles anticipate conditions and choose their best course of action.
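One very simple way to picture combining several senses into one decision is evidence fusion: each modality reports a probability that the road ahead is clear, and under a naive independence assumption the odds multiply. This is a generic illustration of multi-sensor combination, not a method from the cited work (the function and the independence assumption are my own simplifications):

```python
def fuse_evidence(modality_probs):
    """Combine per-modality probabilities that the road is clear into one
    estimate, assuming the modalities are independent: odds multiply."""
    odds = 1.0
    for p in modality_probs.values():
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # avoid division by zero
        odds *= p / (1.0 - p)              # convert to odds and combine
    return odds / (1.0 + odds)             # back to a probability
```

Two moderately confident sensors reinforce each other: fusing vision and LiDAR estimates of 0.8 each yields roughly 0.94, which is the intuition behind letting vehicles pool visual, acoustic, and networked evidence.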
Increasingly, media outlets predict that the future of the roads lies with autonomous cars. Many challenges remain, in particular driving on rural roads, which are the most common type of road encountered. Until this challenge is adequately addressed, most regions will remain inaccessible to autonomous cars. While many potential solutions have been proposed, none has demonstrated a clear, universal benefit across a variety of test cases. Given this challenge, we should see many new ideas in the years to come on how cars can best drive in areas with limited road data coverage.
Ort, T., Paull, L., & Rus, D. (2018). Autonomous vehicle navigation in rural environments without detailed prior maps.
Video: Self-Driving Cars for Country Roads
 For more on how vehicles use 3D mapping for navigation, see: Häne, C., Heng, L., Lee, G. H., Fraundorfer, F., Furgale, P., Sattler, T., & Pollefeys, M. (2017). 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image and Vision Computing, 68, 14–27. https://doi.org/10.1016/j.imavis.2017.07.003.
 For more on MapLite and its utility for autonomous cars, see: http://news.mit.edu/2018/self-driving-cars-for-country-roads-mit-csail-0507.
 For more on the sensor fusion algorithm and its future utility, see: Lee, U., Jung, J., Shin, S., Jeong, Y., Park, K., Shim, D. H., & Kweon, I. (2016). EureCar turbo: A self-driving car that can handle adverse weather conditions (pp. 2301–2306). IEEE. https://doi.org/10.1109/IROS.2016.7759359.
 For more on how LIDAR can assist in snow conditions, see: Aldibaja, M., Suganuma, N., & Yoneda, K. (2016). Improving localization accuracy for autonomous driving in snow-rain environments (pp. 212–217). IEEE. https://doi.org/10.1109/SII.2016.7844000.
 For more on network data and communications that could aid autonomous vehicles, see: Zheng, K., Zheng, Q., Yang, H., Zhao, L., Hou, L., & Chatzimisios, P. (2015). Reliable and efficient autonomous driving: the need for heterogeneous vehicular networks. IEEE Communications Magazine, 53(12), 72–79. https://doi.org/10.1109/MCOM.2015.7355569.
 For more on cognitive approaches to autonomous vehicles, see: Banks, V. A., Stanton, N. A., Burnett, G., & Hermawati, S. (2018). Distributed Cognition on the road: Using EAST to explore future road transportation systems. Applied Ergonomics, 68, 258–266. https://doi.org/10.1016/j.apergo.2017.11.013.