PopCross
Created an AR communication system that helps pedestrians understand the intent and awareness of autonomous vehicles, making street crossing more legible and trustworthy.

Type
Individual Project

Tools
Unity, Figma, Illustrator,
After Effects, Photoshop

Timeframe
Nov - Dec 2025

Role
Interaction Designer
Problem
Autonomous vehicles are becoming increasingly common on city streets, yet pedestrians can no longer rely on a driver’s gaze or gestures when crossing the road.

Loss of Nonverbal Communication in Autonomous Street Crossings
In vehicles with human drivers, pedestrians could interpret situations through cues such as speed, distance, eye contact, and hand gestures. In autonomous vehicles, however, these signals disappear, reducing pedestrians’ confidence in whether the vehicle has noticed them or intends to stop.

The Limitations of Current eHMI in Supporting Pedestrian Interaction
Existing external human-machine interfaces (eHMI) display a vehicle's status and intent, but they offer limited opportunities for pedestrians to actively respond or interact.
Solution
Visualize and structure the invisible communication signals between autonomous vehicles and pedestrians through an AR-based bidirectional interface, supporting faster and safer crossing decisions.
Literature Review
Looking at prior research to understand how pedestrians make sense of autonomous vehicles in moments of uncertainty

Previous research shows that pedestrians feel the most uncertainty around autonomous vehicles when the vehicle’s status and intent are unclear. In the absence of a human driver, familiar cues such as eye contact or gestures disappear, making it harder to judge whether it is safe to cross. Studies consistently find that pedestrians mainly want to know whether a vehicle is driving autonomously and whether it intends to stop or move forward. In contrast, information about pedestrian detection is less critical and can even cause confusion.
Precedents
Looking at existing interface precedents for communicating vehicle intent to pedestrians

I also reviewed several precedents (Waymo, Mercedes, Nissan, and Human Horizons) that use visual feedback such as LED displays or projections to communicate intent. However, most of these are one-way systems: they tell pedestrians what the car will do rather than creating two-way, human-like interaction.
Insights
In autonomous driving environments, pedestrians need intuitive two-way visual and spatial cues that clearly show whether a vehicle has noticed them and when it is safe to cross, which highlights the potential of AR and XR interfaces to support this communication

Autonomous vehicles remove familiar nonverbal cues such as a driver’s gaze, gestures, and speed changes, making it difficult for pedestrians to intuitively judge whether a car has noticed them or intends to stop.

Most existing eHMI approaches rely on one-way communication through external vehicle displays, offering limited opportunities for pedestrians to actively express intent or interact with the vehicle.

For pedestrians, simple visual and spatial cues that clearly indicate when it is safe to cross are more important than technically detailed information, and these cues can be naturally delivered through AR and XR environments.
Design Concept
Bubble as a visible interaction medium that communicates information both ways
Pedestrian
hand gesture
From the pedestrian’s side, gestures like popping bubbles can signal their intent, for example, wanting to cross the road.

Bubble
SDV
speed, expected route, status
From the vehicle's side, the bubble conveys feedback like speed, predicted route, pedestrian detection, and driving status.
Bubble Information Architecture
Pedestrian

The pedestrian’s signal is expressed by popping a bubble. If the bubble pops, it indicates that the vehicle has detected the pedestrian and will stop; if it does not pop, it suggests that the signal was given too late.
SDV

Speed is represented by the horizontal width of the bubble. As the vehicle moves faster, the bubble becomes wider, and as the speed decreases, the width narrows.

The expected route is indicated by smaller bubbles flowing in from the front along the predicted path.

The status of the ADS (Automated Driving System) is indicated through a speech bubble.
* Images are generated by Midjourney.
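As a rough sketch of how these mappings could be encoded, the following helpers translate vehicle speed into bubble width and pedestrian signal timing into a pop decision. The function names, ranges, and thresholds here are illustrative assumptions, not values from the actual prototype.

```javascript
// Hypothetical encoding of the bubble information architecture.
// All ranges and thresholds below are illustrative assumptions.

// Speed -> horizontal bubble width: the faster the vehicle moves,
// the wider the bubble stretches.
function bubbleWidth(speedKmh, minW = 60, maxW = 240, maxSpeed = 50) {
  const t = Math.min(Math.max(speedKmh / maxSpeed, 0), 1); // clamp to [0, 1]
  return minW + t * (maxW - minW);
}

// Pedestrian signal -> pop outcome: the bubble pops only if the
// gesture arrives before the last moment the vehicle can still stop.
function canPop(signalTimeSec, lastSafeTimeSec) {
  return signalTimeSec <= lastSafeTimeSec;
}

console.log(bubbleWidth(0));   // slowest speed -> narrowest bubble (60)
console.log(bubbleWidth(50));  // fastest speed -> widest bubble (240)
console.log(canPop(1.2, 2.0)); // signaled in time -> true (bubble pops)
```

In a full sketch, `bubbleWidth` would drive the ellipse drawn each frame, while `canPop` would decide between the "popped" and "not popped" feedback states described above.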
Prototyping & Usability Testing
An initial prototype to test pedestrian gestures and vehicle state changes in real time
Using p5.js and hand tracking, the interaction was designed so that when a user’s fingertip remains within the bubble area for a set duration, the bubble pops and the vehicle stops. The bubble’s behavior dynamically changes based on vehicle speed, predicted route, and ADS mode, enabling user testing of different interaction conditions.
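The dwell-to-pop interaction can be sketched as a small state tracker, independent of p5.js and the hand-tracking library. The one-second dwell threshold and the circular hit test here are assumptions for illustration, not the prototype's actual values.

```javascript
// Illustrative dwell-to-pop tracker: the bubble pops once the
// fingertip has stayed inside the bubble area for `dwellMs` milliseconds.
class PopTracker {
  constructor(dwellMs = 1000) {
    this.dwellMs = dwellMs;
    this.enteredAt = null; // timestamp when the fingertip entered the bubble
    this.popped = false;
  }

  // Call once per frame with the fingertip position, the bubble's
  // center and radius, and the current time in milliseconds.
  update(finger, bubble, nowMs) {
    if (this.popped) return true;
    const dx = finger.x - bubble.x;
    const dy = finger.y - bubble.y;
    const inside = dx * dx + dy * dy <= bubble.r * bubble.r;
    if (!inside) {
      this.enteredAt = null; // leaving the bubble resets the dwell timer
    } else if (this.enteredAt === null) {
      this.enteredAt = nowMs; // fingertip just entered: start timing
    } else if (nowMs - this.enteredAt >= this.dwellMs) {
      this.popped = true; // dwell complete: pop the bubble, stop the vehicle
    }
    return this.popped;
  }
}
```

In a p5.js sketch, `update()` would be called from `draw()` each frame with the tracked fingertip position, and a `true` return value would trigger the pop animation and the vehicle-stop state.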
Pedestrian to SDV

Bubble Not Popped

Bubble Popped
SDV to Pedestrian

Speed

Expected Route

Status
15 participants interacted with the prototype and discussed their experiences and suggestions for improvement in brief interviews. The test aimed to quickly assess the intuitiveness of the bubble interface and explore what additional sensory feedback or adjustments could enhance the interaction.
Key Feedback
Multi-sensory signal needs
“When the bubble pops, adding a short sound or vibration would make the warning feel more certain.”
Perspective of users with hearing impairments
“If the system could make the presence of vehicles outside the field of view feel stronger, it would also be helpful for people with hearing impairments.”
Control over usage context
“Instead of always being on, it would be better to have an option to turn it on and off only when needed.”
Ideas for expanding application
“It would be great if this system could be applied not only to autonomous vehicles but also to conventional cars with dark or tinted windows where the driver is hard to see.”
Design Implications
1.
Moving beyond a single visual cue, the design considers multisensory alerts using sound effects to increase the speed and clarity of perception.
2.
Adding context-based ON/OFF controls that automatically activate near crosswalks or allow users to manually toggle the system as needed.
3.
Identifying the potential for the interface to extend beyond autonomous vehicles into a broader solution for any situation where a driver’s intent is not visible, rather than a system limited to autonomous driving alone.
Final Outcome
The final AR interface, showing each stage of the crossing interaction from road detection to vehicle acknowledgment

Road detected
Crossing assist activated

Vehicle approaching
Speed & intent activated

Vehicle approaching
Expected route activated

Pedestrian gesture detected
Vehicle acknowledges
Takeaways
PopCross reimagines street crossing as a two-way communication problem, showing how AR can make autonomous vehicle intent more legible, interactive, and reassuring for pedestrians.
PopCross translated invisible driving signals into a bidirectional AR interface that helped pedestrians understand whether a vehicle had noticed them, how it was moving, and whether it was safe to cross. By structuring speed, expected route, ADS status, and pedestrian acknowledgment through a shared bubble system, the project moved beyond one-way eHMI displays toward a more interactive crossing experience.
Autonomous vehicles remove familiar nonverbal cues such as eye contact, gesture, and subtle changes in driver behavior. This project suggests that pedestrians do not need more technical information, but clearer and more intuitive communication about intent and safety. In that context, AR becomes valuable not just as an overlay, but as a way to rebuild trust in urban mobility through human-centered visual interaction.
Future iterations could focus on when and how the interface should appear in real urban contexts. Rather than remaining a constant overlay, the system could become more situational, responding only at moments of uncertainty such as vehicle approach, hesitation, or ambiguous right of way. This would make the experience feel less like added information and more like timely support for safer, more confident crossing.