Recent research by Motional shows that pedestrians pay more attention to how a vehicle moves than to the gestures of the person behind the wheel.
Judging whether it is safe to cross the road involves a complex exchange of social cues between pedestrians and drivers. But what if there is no one behind the wheel? How will a pedestrian understand the vehicle's intentions? Self-driving cars, Motional argues, need to become more expressive.
Working with the animation studio CHRLX, the team created a highly realistic virtual reality experience designed to test pedestrian reactions to a range of signaling schemes. Reporting the results in IEEE Robotics and Automation Letters, they showed that emphasizing vehicle movement, such as braking abruptly or stopping at a reasonable distance from pedestrians, was the most effective way to communicate intent.
Based on these findings, the company is working on a motion-planning system and has created a VR environment in which other researchers can experiment.
Three basic scenarios mimicked the way a driver stops at a stop sign: one showed a person behind the wheel, another had no driver or visible sensors, and a third added a large LED display.
The company then devised various ways to signal to pedestrians that the vehicle intends to yield to them: braking earlier and harder, stopping a full vehicle length away, adding exaggerated braking sounds and low engine revs, and combining those sounds with a pronounced dip of the car's nose.
The most successful signals proved to be abrupt braking and stopping a full vehicle length away from pedestrians. That pedestrians felt safest in these situations is not surprising, as the approach was inspired by the way human drivers behave when slowing down to yield to pedestrians.
What is surprising is how little reactions differed between scenarios with and without a visible driver. This suggests that pedestrians pay more attention to the vehicle's movement than to the driver behind the wheel. Other research, such as that conducted at Delft University of Technology, corroborates these results.
Although most efforts have focused on screens that would replace explicit signals such as eye contact or hand gestures, studies show that people read information from a car's implicit behavior.
A combination of explicit and implicit signaling, aided by augmented reality, may prove most effective. One system even lets driverless vehicles communicate their intentions directly to smart pedestrian glasses, which visually indicate whether it is safe to cross the road. But this solution has its own flaw: it requires every pedestrian to wear smart glasses at all times.