Predicting Future Movements of Pedestrians and Autonomous Vehicles

As driverless cars and motion-detection technologies become increasingly integrated into everyday life, applications are being developed to predict the movements of both pedestrians and vehicles. Among other benefits, it is hoped that predicting likely pathways will help these emerging technologies reduce accidents.

Predicting the Movements of Pedestrians

Predicting the pathways pedestrians take requires weighing many factors, including the built environment, the people and objects surrounding a pedestrian, age, weather, current trajectory, and the social behavior that shapes pathway decisions. Deep learning and computer vision techniques have been applied to this problem: neural networks repeatedly take sensory data, such as video frames, classify the movement currently underway, and then predict what a pedestrian is likely to do next based on previous examples. Recent results have not only predicted future pathways with a high degree of accuracy but also anticipated the activities people would perform at a future timestep, estimating both where pedestrians would be and what they might be doing that affects their movements.[1]
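As a rough illustration of the sequence-prediction setup (and not the model in the cited work), the sketch below shows a constant-velocity baseline, the simplest trajectory predictor that neural approaches are typically benchmarked against: it extrapolates a pedestrian's observed track forward in time. The function name and coordinate format are illustrative.

```python
def predict_trajectory(past, n_future):
    """Constant-velocity baseline: extrapolate the average per-timestep
    displacement of the observed (x, y) track for n_future timesteps."""
    # average displacement per timestep over the observed window
    vx = (past[-1][0] - past[0][0]) / (len(past) - 1)
    vy = (past[-1][1] - past[0][1]) / (len(past) - 1)
    x, y = past[-1]
    future = []
    for _ in range(n_future):
        x, y = x + vx, y + vy
        future.append((x, y))
    return future

# A pedestrian walking steadily along the x axis:
print(predict_trajectory([(0, 0), (1, 0), (2, 0)], 3))
# [(3.0, 0.0), (4.0, 0.0), (5.0, 0.0)]
```

Learned models such as the recurrent networks described above improve on this baseline precisely in the cases where pedestrians do not keep moving in a straight line, for example when an upcoming activity changes their path.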

Predicting a pedestrian's future path and activity: the green and yellow lines show two possible future trajectories, and two possible activities are shown in the green and yellow boxes. Depending on the future activity, the person (top right) may take different paths, e.g. the yellow path for “loading” and the green path for “object transfer”. Source: Liang et al., 2019.

The challenges of determining human intention in pedestrian movement are not easily overcome. One approach classifies pedestrian intent from posture or pose while a person is walking or standing, which helps assess a likely walking decision without knowing it in advance. For instance, a pedestrian standing on a curb and looking at the other side of the road is likely to want to cross, while one whose back is turned to the main road may simply be looking at a nearby building and have no intention of crossing.[2]
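The curb example above can be caricatured as a rule over pose cues. The sketch below is a toy heuristic, not the classifier from the cited survey; the boolean inputs stand in for cues that a real system would extract from skeletal pose estimation on video.

```python
def crossing_intent(on_curb, facing_road):
    """Toy heuristic mapping two pose-derived cues to a crossing-intent
    label. Real systems learn this mapping from labeled pose sequences."""
    if on_curb and facing_road:
        return "likely to cross"
    if not facing_road:
        # back turned to the road, e.g. looking at a building
        return "unlikely to cross"
    return "uncertain"

print(crossing_intent(on_curb=True, facing_road=True))
# likely to cross
```

Even this caricature shows why pose matters: the same location on the curb yields opposite predictions depending on body orientation.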

Forecasting Driving Behavior

As with pedestrian movements, forecasting driving behavior can be challenging. Once all or most vehicles are fully autonomous, forecasting may actually become easier, since vehicles can communicate with one another. In a mixed setting, however, where some vehicles are fully or partially autonomous and others are not, predicting the behavior of other vehicles is critical for safety. One approach recreates driver-like behavior using a combination of deep learning algorithms, such as deep neural networks together with recurrent neural networks. These algorithms classify situations in which a vehicle needs to accelerate or decelerate to avoid hitting a moving or even stationary object. The difficulty is that such problems are rarely sequential: a combination of other factors, including other vehicles, can affect the objects nearest one's own vehicle. Decisions are often predicated on other decisions, meaning that cascade effects among the actors surrounding a vehicle may not be classifiable in time to react.[3] Because of these limitations, most autonomous vehicles today are not fully autonomous. Vehicles are commonly classified on a 0-5 scale of automated driving capability, with 0 meaning no automation at all (the driver fully controls the vehicle) and 5 meaning full automation. With the exception of experimental vehicles operating at level four, most autonomous vehicles today operate around level two.[4]
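The 0-5 scale mentioned above comes from the SAE J3016 standard. A quick summary as a lookup table (the one-line descriptions are paraphrased, not the standard's exact wording):

```python
# SAE J3016 levels of driving automation, paraphrased.
SAE_LEVELS = {
    0: "No automation: the driver performs all driving tasks",
    1: "Driver assistance: one function assisted, e.g. adaptive cruise control",
    2: "Partial automation: combined steering and speed control, driver monitors",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: full self-driving within a limited operational domain",
    5: "Full automation: no human driver needed under any conditions",
}

# Where most current "autonomous" vehicles sit:
print(SAE_LEVELS[2])
```

The gap the article describes is essentially the jump from level 2, where a human must keep monitoring, to levels 4 and 5, where the prediction problems discussed above must be solved reliably by the vehicle itself.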

The experimental vehicle with sensor setup and architecture of a human-centered risk assessment. (a) Sensor configuration of test vehicle; (b) Architecture of human-centered threat assessment. Source: Shin, Kim, Park, & Yi, 2019.

There is still a long way to go before vehicles can be fully autonomous while keeping safety standards high. Human behavior is often unpredictable, even as predictions of pedestrian and vehicle movements improve. Additionally, near-future scenarios involve a mix of autonomous vehicles, pedestrians, and non-autonomous vehicles, making driving scenarios very complex. Cascading effects, in particular, are difficult to predict without many different types of sensors. Vehicles may need sensors that not only detect nearby motion but also receive updates from neighboring regions at some distance, predicting likely movement options based on pedestrian and vehicle traffic flow. More work is required before prediction is accurate enough to meet safety requirements, meaning that for now only partially autonomous vehicles are likely.

References

[1] An example of this work and a description of how it was done can be found at https://next.cs.cmu.edu and in: Liang, J., L. Jiang, J. C. Niebles, A. G. Hauptmann, and L. Fei-Fei. 2019. “Peeking into the Future: Predicting Future Person Activities and Locations in Videos.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5725–34. https://arxiv.org/pdf/1902.03748.pdf

[2] For more on intent identification using pedestrian actions, see: Pandey, P., and J. V. Aghav. 2020. “Pedestrian–Autonomous Vehicles Interaction Challenges: A Survey and a Solution to Pedestrian Intent Identification.” In Advances in Data and Information Sciences, edited by M. L. Kolhe, S. Tiwari, M. C. Trivedi, and K. K. Mishra, 94:283–92. Singapore: Springer Singapore. https://doi.org/10.1007/978-981-15-0694-9_27

[3] For more on vehicle prediction by autonomous vehicles, see: Shin, D., H. Kim, K. Park, and K. Yi. 2019. “Development of Deep Learning Based Human-Centered Threat Assessment for Application to Automated Driving Vehicle.” Applied Sciences 10 (1): 253. https://doi.org/10.3390/app10010253

[4] For more on classifying autonomous vehicles, see: Shreyas, V., S. N. Bharadwaj, S. Srinidhi, K. U. Ankith, and A. B. Rajendra. 2020. “Self-Driving Cars: An Overview of Various Autonomous Driving Systems.” In Advances in Data and Information Sciences, edited by M. L. Kolhe, S. Tiwari, M. C. Trivedi, and K. K. Mishra, 94:361–71. Singapore: Springer Singapore. https://doi.org/10.1007/978-981-15-0694-9_34

