
Zenuity: Intelligent Dialogue Between Vehicle and Pedestrian

Zenuity, a joint venture of Volvo Cars and Veoneer, develops autonomous driving technology. I contributed to their Human-Machine Interface (HMI) R&D on intelligent Vehicle-to-Pedestrian (V2P) communication for urban safety.

2019 UX Design | User Research | Digital Prototyping

Overview


Zenuity is a joint venture of Volvo Cars and Veoneer specializing in software development for automated driving and advanced driver assistance systems. As the transportation sector becomes increasingly autonomous, the company prioritizes a human-centered approach when designing software for self-driving vehicles.



I participated as a user researcher and designer, contributing to every stage of the solution's development. For example, I:

  • Designed and conducted quantitative and qualitative research

  • Prototyped hardware and software solutions

  • Documented the project and drafted the research paper for presentations, expos, and award submissions


As a result, the research won first place in the Socially Engaged Design Awards from the University of Michigan College of Engineering, for facilitating actionable communication between vehicles and pedestrians.



See the extended abstract I drafted:


Design Thinking Approach to Pedestrian HMI Development for Autonomous Vehicles.pdf

Context


The AV industry is moving toward societal acceptance of Level 4 and Level 5 automation, where there is no human driver to rely on, making traditional V2P communication methods (e.g., eye contact, waving, yelling) obsolete. The research team therefore focused on developing effective communication between AVs and pedestrians to prevent safety hazards caused by miscommunication.

User Research


  • Literature review: the team analyzed 30+ publications on existing V2P technology and societal/emotional acceptance of self-driving vehicles.

  • Quantitative research: I designed an ethnographic survey distributed to participants in 8 countries.

  • Qualitative research: I co-designed the research framework with another user researcher, including:

    • 36 V2P interaction evaluations

    • Urban walk-alongs with 17 participants

    • 17 interviews and co-creation sessions

    • Affinity analysis, gathering 1200+ data points.



The research suggests pedestrians prefer a multimodal, audio-visual solution that addresses ambiguity and eliminates incorrect assumptions about the vehicle's behavior. The solution must also answer two questions:



"Did the car see me?"

"What is the car going to do next?"

↑ The two questions, derived by narrowing down the affinity diagram.
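
To make these two answers concrete, a V2P signal set can be modeled as a mapping from vehicle intent to paired audio-visual cues. The Python sketch below is a hypothetical illustration only; the intent states and cue names are my assumptions, not the team's actual signal set.

from enum import Enum, auto

class VehicleIntent(Enum):
    YIELDING = auto()      # vehicle detected the pedestrian and is giving way
    PROCEEDING = auto()    # vehicle detected the pedestrian but will keep moving
    UNAWARE = auto()       # no pedestrian detected yet

# Hypothetical cue table: the acknowledgement answers "Did the car see me?",
# and the specific cue pair answers "What is the car going to do next?".
CUES = {
    VehicleIntent.YIELDING:   {"display": "walk symbol",  "audio": "soft chime"},
    VehicleIntent.PROCEEDING: {"display": "stop hand",    "audio": "warning tone"},
    VehicleIntent.UNAWARE:    {"display": "idle pattern", "audio": None},
}

def signal(intent: VehicleIntent) -> dict:
    """Return the audio-visual cue pair the vehicle should emit."""
    return CUES[intent]

print(signal(VehicleIntent.YIELDING))
# {'display': 'walk symbol', 'audio': 'soft chime'}

Pairing each display state with an audio cue reflects the multimodal finding above: either channel alone can be missed or misread, but together they disambiguate the vehicle's intent.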

Prototyping and Development


I addressed these questions through 2 low-fidelity prototypes, 4 solution documents, and 1 high-fidelity prototype.



Low-Fidelity Prototypes

  • Graphic renderings of the conceptual prototype using CAD

  • Audio cues produced in a sound engineering studio


Solution Documents (Co-authored with an engineer)

  • LED pixel-matrix displays for visual communication (see the rendering sketch below the linked documents)

  • Laser galvanometers for visual communication

  • Gesture recognition

  • Bluetooth beacons for alerting pedestrians

Technical Design Doc_Gesture Detection.docx
Technical Design Doc_Pixel Matrix.docx
Technical Design Doc_Laser Galvo.docx
Technical Design Doc_Bluetooth.docx
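
As a companion to the pixel-matrix document, here is a minimal Python sketch of how a single frame for a monochrome LED matrix could be composed in software. The 8x32 resolution and the acknowledgement icon are assumptions for illustration, not the specification from the technical design doc.

import numpy as np

# Hypothetical 8x32 monochrome LED matrix frame (dimensions assumed).
ROWS, COLS = 8, 32
frame = np.zeros((ROWS, COLS), dtype=bool)

# Draw a filled circle in the leftmost 8x8 cell as an acknowledgement icon
# ("the car saw you"); the rest of the matrix stays free for text or animation.
yy, xx = np.mgrid[0:ROWS, 0:8]
frame[:, :8] = (yy - 3.5) ** 2 + (xx - 3.5) ** 2 <= 3.0 ** 2

# Preview the frame the way a driver chip would scan it, row by row.
for row in frame:
    print("".join("#" if lit else "." for lit in row))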

High-Fidelity Prototype

  • A vehicle-mounted device integrating an LED display, a camera, and 2 speakers (the shell was CNC-milled from high-density foam, with 3D-printed parts in PLA plastic)

↑ The prototype colors were reversed based on usability-testing insights.

Field Testing


I collaborated with a user researcher and an electrical engineer to design and conduct 3 tests with 18 participants:


  • Audio-Visual Meaning Test (Emotional acceptance): Audio cues passed with 100% success; 1 of the 3 visual cues passed with 100% success.

  • Visual Legibility-Recognition Test (Interpretability): All 5 scenarios passed with 80-100% success.

  • Audio-Visual Range Test (Technical evaluation): Audio passed 100%, and visual passed 95%.

Conclusion


The V2P communication system achieved an average of 91% success across emotional acceptance and interpretability, ensuring reliable interaction between pedestrians and autonomous vehicles.


Looking ahead, I aim to refine the desirability of the hardware and, in collaboration with the engineers on the team, enhance the Convolutional Neural Network (CNN) behind pedestrian gesture detection.
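
For readers curious about the gesture-detection direction, below is a minimal PyTorch sketch of a CNN classifier of the kind that could back this feature. The architecture, input size, and gesture classes are illustrative assumptions, not the network the team trained.

import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Minimal CNN classifying pedestrian gestures from camera crops."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 input -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.flatten(self.features(x), start_dim=1))

# Hypothetical usage: a batch of 64x64 RGB crops around detected pedestrians,
# classified into assumed gesture classes (wave, stop hand, crossing intent, none).
model = GestureCNN(num_classes=4)
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])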
