
5 Q’s with Cornel Amariei, CEO of .lumen

by David Kertai

The Center for Data Innovation recently spoke with Cornel Amariei, CEO of .lumen, a Romania-based company developing a wearable assistive headset—similar in shape to a pair of VR glasses—for visually impaired individuals. Amariei explained how the headset combines an AI system, computer vision, and on‑device processing to guide users through their surroundings.

David Kertai: What inspired .lumen’s creation?

Cornel Amariei: I grew up in a family where everyone except me had a disability, which gave me an early understanding of how limited mobility options are for people with visual impairments. That experience motivated me to develop a wearable headset that could provide the same kind of spatial awareness and guidance as a cane or a guide dog, but in a more scalable and intuitive form. My goal became creating a tool that restores independence by giving users real-time information about their surroundings and serving as a standalone mobility aid.

Kertai: How do .lumen’s glasses work?

Amariei: As the user moves, the headset’s six cameras—two that capture color and four that use infrared to sense depth—constantly scan and collect information about the area. They detect curbs, potholes, parked cars, and overhead obstacles such as branches or signs, as well as key landmarks like stairs, doors, and pedestrian crossings. The cameras feed this visual data directly into an on-device computer vision engine, where a semantic segmentation model—a type of software that breaks an image into parts and labels each one, such as people, stairs, and cars—analyzes the environment in real time. Because everything is processed on the device rather than in the cloud, the headset keeps latency low and works without an internet connection.
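To make the segmentation step concrete, the following is a minimal sketch of how on-device semantic segmentation of a single camera frame might look. The model choice, preprocessing values, and file name are illustrative assumptions, not .lumen’s actual pipeline.

```python
# Illustrative on-device semantic segmentation of one camera frame.
# Model, preprocessing, and input file are assumptions for demonstration.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large
from PIL import Image

# A lightweight backbone keeps inference fast enough for a wearable device.
model = deeplabv3_mobilenet_v3_large(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.Resize((520, 520)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(frame: Image.Image) -> torch.Tensor:
    """Return a per-pixel class map (e.g., person, stairs, car) for one frame."""
    batch = preprocess(frame).unsqueeze(0)        # shape: [1, 3, H, W]
    with torch.no_grad():                         # inference only, no gradients
        logits = model(batch)["out"]              # shape: [1, num_classes, H, W]
    return logits.argmax(dim=1).squeeze(0)        # per-pixel predicted class

# Example: label every pixel in one captured frame (hypothetical file).
labels = segment(Image.open("frame.jpg").convert("RGB"))
```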

Once the system analyzes the environment, it converts that information into guidance the user can immediately act on. Instead of relying primarily on voice commands, which can be confusing or difficult to hear in noisy settings, the headset communicates mainly through haptic feedback. Gentle vibrations on the forehead signal the safest direction to move, while stronger pulses help users orient themselves or avoid nearby obstacles. In more complex situations, the device adds short audio cues for extra clarity. Together, these signals give users a continuous, intuitive sense of where to go, even in unfamiliar settings.
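The guidance step can be pictured as a mapping from detected obstacles to haptic cues. The sketch below shows one possible mapping; the zone widths, distance threshold, and cue format are assumptions for illustration, not the headset’s real logic.

```python
# Illustrative mapping from scene analysis to directional haptic cues.
# Zone widths, thresholds, and the cue format are assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    bearing_deg: float   # angle from straight ahead; negative = left
    distance_m: float    # estimated depth from the infrared cameras

def choose_cue(obstacles: list[Obstacle], clear_threshold_m: float = 2.0) -> dict:
    """Gentle pulse when the path ahead is clear; strong pulse plus a
    suggested turn direction when something is close directly ahead."""
    ahead = [o for o in obstacles if abs(o.bearing_deg) < 15]
    nearest_ahead = min((o.distance_m for o in ahead), default=float("inf"))

    if nearest_ahead < clear_threshold_m:
        # Something close in front: strong pulse plus a turn suggestion.
        left_clear = all(o.distance_m > clear_threshold_m
                         for o in obstacles if -60 < o.bearing_deg <= -15)
        return {"intensity": "strong", "direction": "left" if left_clear else "right"}

    # Path ahead is clear: gentle pulse confirming forward movement.
    return {"intensity": "gentle", "direction": "forward"}

# Example: a parked car slightly to the right, 1.5 m away.
print(choose_cue([Obstacle(bearing_deg=10, distance_m=1.5)]))
# -> {'intensity': 'strong', 'direction': 'left'}
```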

Kertai: How do users learn to use the headset? 

Amariei: The headset includes built-in audio tutorials that guide users step by step through the core features at their own pace. For example, one tutorial teaches users how different vibration patterns signal obstacles directly ahead versus open pathways to the left or right. These short tutorials help users understand how the headset communicates and how the haptic cues indicate safe movement. 

Kertai: How do you validate your glasses’ safety and reliability?

Amariei: We validate safety and reliability through continuous, real-world testing with visually impaired users. So far, we have worked with more than 500 individuals across 30 countries, testing the system in both controlled settings and unpredictable environments. We rely on multiple layers of redundancy, on-device processing to eliminate latency risks, and strict testing protocols that let us observe how the system performs in everyday situations rather than simulations alone. We also incorporate user feedback at every stage of development and have completed formal clinical trials as part of the medical device certification process.

Kertai: What are the biggest challenges in developing .lumen? 

Amariei: The first major challenge is hardware. Unlike a self-driving car, which has ample space for sensors and computing power, we have to deliver the same core functions of perception, prediction, and decision-making in a device small and light enough to sit comfortably on someone’s head, while using only a fraction of the power and energy that a vehicle would.

The second challenge is the environment itself. Pedestrian spaces are far more unpredictable than roads. Sidewalks change width, intersections are cluttered with obstacles, and trails often have uneven surfaces. As a result, our system cannot rely on predefined infrastructure and instead must interpret each environment in real time with the same depth of understanding a self-driving car applies to roads, while processing everything instantly on-device to keep users safe at every step.
