
POPCROSS

Overview.

Individual Project (Academic)

Nov 2025 – Dec 2025

Role

UX/UI & XR Design

Tools

Unity · Figma · After Effects · Photoshop · Illustrator

Autonomous vehicles are becoming increasingly common on city streets, yet pedestrians can no longer rely on a driver’s gaze or gestures when crossing the road. PopCross is an experimental experience that helps pedestrians intuitively understand a vehicle’s intent and awareness through an AR interface, reimagining trust and safety in urban mobility.

Design Process.

Define

Loss of Nonverbal Communication in Autonomous Street Crossings


In vehicles with human drivers, pedestrians could interpret situations through cues such as speed, distance, eye contact, and hand gestures. In autonomous vehicles, however, these signals disappear, reducing pedestrians’ confidence in whether the vehicle has noticed them or intends to stop.

The Limitations of Current eHMI in Supporting Pedestrian Interaction


Existing external human-machine interfaces (eHMIs) display a vehicle’s status and intent on vehicle-mounted displays, but they offer limited opportunities for pedestrians to actively respond or interact.

Goal

Visualize and structure the invisible communication signals between autonomous vehicles and pedestrians using an AR-based bidirectional interface, supporting faster and safer crossing decisions.

Research

Literature Review

Research has examined the use of augmented reality (AR) interfaces and vehicle-mounted external interfaces to support communication between autonomous vehicles and pedestrians.


Previous research shows that pedestrians feel the most uncertainty around autonomous vehicles when the vehicle’s status and intent are unclear. In the absence of a human driver, familiar cues such as eye contact or gestures disappear, making it harder to judge whether it is safe to cross. Studies consistently find that pedestrians mainly want to know whether a vehicle is driving autonomously and whether it intends to stop or move forward, while information about pedestrian detection is less critical and can even cause confusion.

From a design perspective, interfaces that clearly combine status and intent using simple visual cues are the most effective in building trust and a sense of safety. Colors, icons, and subtle animations based on familiar traffic signal conventions support faster understanding than text-heavy messages. Recent work further suggests that AR interfaces can enhance this communication by placing intent-related cues directly in the pedestrian’s field of view, helping reduce cognitive load in complex traffic situations.

Precedents


One study found that AR interfaces can intuitively visualize whether it is safe to cross by overlaying information directly into pedestrians’ field of view, enabling users to understand vehicle intent effectively regardless of age or gender. Another study demonstrated that wearable AR can clearly convey vehicle responses even in complex environments with multiple autonomous vehicles, reducing pedestrians’ cognitive load and supporting more stable crossing decisions.

Insights


Autonomous vehicles remove familiar nonverbal cues such as a driver’s gaze, gestures, and speed changes, making it difficult for pedestrians to intuitively judge whether a car has noticed them or intends to stop.


Most existing eHMI approaches rely on one-way communication through external vehicle displays, offering limited opportunities for pedestrians to actively express intent or interact with the vehicle.


For pedestrians, simple visual and spatial cues that clearly indicate when it is safe to cross are more important than technically detailed information, and these cues can be naturally delivered through AR and XR environments.

In autonomous driving environments, pedestrians need intuitive two-way visual and spatial cues that clearly show whether a vehicle has noticed them and when it is safe to cross, highlighting the potential of AR and XR interfaces to support this communication.

Design Concept

Q1.

How might we design an AR interface that rebuilds mutual trust and communication between pedestrians and self-driving vehicles in the absence of human drivers?

Q2.

What types of bidirectional cues, such as AR gestures, spatial signals, or visual feedback, can effectively mediate two-way interaction between pedestrians and autonomous vehicles?

Design Question

The design challenge is about rebuilding mutual trust and two-way communication between pedestrians and self-driving vehicles, especially through augmented reality. AR can reveal the invisible layers of interaction that already exist between pedestrians and vehicles, such as attention, distance, or intent, and make them visible in shared physical space. It allows both sides to communicate through spatial and sensory cues.

Core Concept

The bubble acts as a visible interaction medium that communicates information both ways.

Pedestrian (hand gesture) ⇄ BUBBLE ⇄ SDV (speed · expected route · ADS status)

From the pedestrian’s side, gestures like popping bubbles can signal their intent, for example, wanting to cross the road.

From the vehicle’s side, the bubble conveys feedback like speed, predicted route, pedestrian detection, and driving status.


Bubble Information Architecture

Pedestrian


The pedestrian’s signal is expressed by popping a bubble. If the bubble pops, it indicates that the vehicle has detected the pedestrian and will stop; if it does not pop, it suggests that the signal was given too late.
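One way to read this pop/no-pop behavior is as a braking-feasibility check: the bubble pops only if the vehicle can still stop before the crosswalk. The sketch below illustrates that reading; the deceleration value and the canStopInTime / onPedestrianGesture names are hypothetical, not the project’s actual decision logic.

```js
// Illustrative only: one possible rule for the pop / no-pop decision.
// Assumes the vehicle reports speed (m/s) and distance to the crosswalk (m).
const COMFORT_DECEL = 3.0; // m/s^2, assumed comfortable braking rate

function canStopInTime(speed, distanceToCrosswalk) {
  // Standard stopping-distance formula: d = v^2 / (2a)
  const stoppingDistance = (speed * speed) / (2 * COMFORT_DECEL);
  return stoppingDistance <= distanceToCrosswalk;
}

function onPedestrianGesture(vehicle) {
  // Pop: the vehicle has detected the pedestrian and can still stop.
  // No pop: the gesture came too late for the vehicle to stop safely.
  return canStopInTime(vehicle.speed, vehicle.distanceToCrosswalk) ? "pop" : "no-pop";
}

// Example: a vehicle at 8 m/s (~29 km/h), 15 m from the crosswalk -> "pop"
console.log(onPedestrianGesture({ speed: 8, distanceToCrosswalk: 15 }));
```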

SDV


Speed is represented by the horizontal width of the bubble. As the vehicle moves faster, the bubble becomes wider, and as the speed decreases, the width narrows.

The expected route is indicated by smaller bubbles flowing in from the front along the predicted path.

The status of the ADS (automated driving system) is indicated through a speech bubble.

* Images are generated by Midjourney.
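As a concrete illustration of the speed-to-width mapping above, a few lines of p5.js are enough to drive the bubble’s width from vehicle speed. This is a minimal sketch; the speed range and pixel widths are assumptions, not the project’s tuned values.

```js
// p5.js sketch: vehicle speed drives the bubble's horizontal width.
// Speed range (0-50 km/h) and pixel sizes are assumed for illustration.
let vehicleSpeed = 30; // km/h; in the prototype this would come from the SDV

function setup() {
  createCanvas(640, 360);
}

function draw() {
  background(20);
  // Faster vehicle -> wider bubble; slower vehicle -> narrower bubble.
  const bubbleWidth = map(vehicleSpeed, 0, 50, 60, 300, true);
  noFill();
  stroke(255);
  strokeWeight(2);
  ellipse(width / 2, height / 2, bubbleWidth, 120); // height stays fixed
}
```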

Interaction Design & Prototyping

An initial prototype was developed based on the Bubble Information Architecture to test pedestrian gestures and vehicle state changes in real time.

Pedestrian to SDV

Bubble Not Popped

Bubble Popped

SDV to Pedestrian

Speed

Expected Route

Status

User Test & Iteration

Using p5.js and hand tracking, the interaction was designed so that when a user’s fingertip remains within the bubble area for a set duration, the bubble pops and the vehicle stops. The bubble’s behavior dynamically changes based on vehicle speed, predicted route, and ADS mode, enabling user testing of different interaction conditions.
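A minimal version of this dwell-to-pop interaction is sketched below in p5.js. Fingertip tracking is assumed to come from ml5.js handPose (the write-up does not name the exact tracking library), and the dwell duration and bubble geometry are placeholder values rather than the tested ones.

```js
// Minimal p5.js + ml5.js handPose sketch of the dwell-to-pop interaction.
// DWELL_MS and the bubble geometry are placeholders, not the tested values.
let handPose, video;
let hands = [];
let bubble = { x: 320, y: 240, r: 80, popped: false };
const DWELL_MS = 800;   // assumed dwell time before the bubble pops
let dwellStart = null;  // when the fingertip first entered the bubble

function preload() {
  handPose = ml5.handPose(); // load the hand-tracking model
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, results => (hands = results));
}

function draw() {
  image(video, 0, 0);

  // Use the index fingertip as the pointer, if a hand is detected.
  let tip = null;
  if (hands.length > 0) {
    tip = hands[0].keypoints.find(k => k.name === "index_finger_tip");
  }

  // Dwell logic: the bubble pops only after the fingertip stays inside it.
  if (tip && !bubble.popped && dist(tip.x, tip.y, bubble.x, bubble.y) < bubble.r) {
    if (dwellStart === null) dwellStart = millis();
    if (millis() - dwellStart > DWELL_MS) bubble.popped = true; // vehicle stops here
  } else {
    dwellStart = null; // reset when the fingertip leaves the bubble
  }

  if (!bubble.popped) {
    noFill();
    stroke(255);
    circle(bubble.x, bubble.y, bubble.r * 2);
  }
}
```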

  • 15 participants interacted with the prototype and discussed their experiences and suggestions for improvement in brief interview sessions.

  • The test aimed to quickly assess the intuitiveness of the bubble interface and explore what additional sensory feedback or adjustments could enhance the interaction.

Key Feedback

Multi-sensory signal needs

“When the bubble pops, adding a short sound or vibration would make the warning feel more certain.”

Perspective of users with hearing impairments

“If the system could make the presence of vehicles outside the field of view feel stronger, it would also be helpful for people with hearing impairments.”

Control over usage context

“Instead of always being on, it would be better to have an option to turn it on and off only when needed.”

Ideas for expanding application

“It would be great if this system could be applied not only to autonomous vehicles but also to conventional cars with dark or tinted windows where the driver is hard to see.”

Design Implications

1.

Moving beyond a single visual cue, the design considers multisensory alerts using sound effects to increase the speed and clarity of perception.

2.

Adding context-based ON/OFF controls that automatically activate near crosswalks or allow users to manually toggle the system as needed.

3.

Identifying the potential for the interface to extend beyond autonomous vehicles into a broader solution for any situation where a driver’s intent is not visible.

Final Experience


When the user stands near a crosswalk, the AR interface detects the road environment and displays the message “AR crossing assist is now active.” The user can swipe up on the message to activate the feature.


The prototype was built in Unity to validate timing, bubble motion, and information hierarchy in a real-time 3D environment.

Using the button panel on the right side of the screen, the user can enable or disable additional information layers, including vehicle speed, predicted route, and autonomous driving status.

Road detected
Crossing assist activated

Vehicle approaching
Speed & intent activated

Vehicle approaching
Expected route activated

Pedestrian gesture detected
Vehicle acknowledges

Insights.

This project reinterprets the lost eye contact and gestures between autonomous vehicles and pedestrians through an AR bubble interface, helping pedestrians intuitively understand their safety status and make confident decisions on the road.

 

By translating vehicle speed, expected route, pedestrian detection, and driving mode into simple motions, the system reduces emotional anxiety and cognitive load, enabling a more stable and reassuring crossing experience.

Beyond a single use case, the project serves as a prototype demonstrating how XR interfaces can redesign urban mobility in the autonomous era around human-centered communication, while also suggesting future extensions for vulnerable pedestrians such as individuals with hearing impairments.
