
Meta Reality Labs

At Meta, I began as a Design Manager in research, focusing on input methods for AR and haptic gloves. When the Orion AR glasses transitioned from research to the product organization, I established and led the design team for this emerging product. I guided the Experiences Design Team in defining Orion’s value proposition and built an Input Team to develop innovative interaction methods for the glasses.


As the engineering team initiated work on Orion’s successor, I formed a new design team to support its development. Recognizing the growing importance of AI, I shifted my focus to integrating generative AI into AR and smart glasses, driving the next evolution of intelligent wearable technology.

[Image: Orion AR glasses hardware]

AR Glasses


Defining the future of augmented reality


When Orion evolved from research project to product, I assembled a team of UX, industrial, and animation designers to define the value proposition and user experiences for the AR glasses. One of the outcomes was a day-in-the-life video (two clips shown here) that continued to inspire the product team for years.


Creating a new input system


Since AR glasses lack a physical surface for interaction, we needed to invent new ways for users to engage with content. Drawing on my earlier experience with input design, I established an input team to develop prototypes and evaluate them through user research. Our initial explorations focused on hand tracking, including direct manipulation, pointing, and virtual keyboards. While hands proved to be a compelling input method, we expanded our exploration to more novel approaches such as the EMG wristband and eye gaze tracking. After numerous iterations refining the EMG and gaze models, we ultimately identified the combination of the wristband and gaze tracking as the primary input method for Orion.
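The gaze-plus-wristband model described above can be sketched in a few lines: gaze selects the target, and a discrete EMG event confirms it. This is a minimal illustrative sketch, not Orion's actual implementation; the function names, the 1D angle model, and the threshold are all assumptions.

```javascript
// Return the UI element closest to the gaze direction, or null if nothing
// falls within the angular threshold (degrees). A real system would use a
// 3D gaze ray; a single angle keeps the sketch simple.
function gazeTarget(gazeDirDeg, elements, thresholdDeg = 3) {
  let best = null;
  for (const el of elements) {
    const err = Math.abs(el.angleDeg - gazeDirDeg);
    if (err <= thresholdDeg && (best === null || err < best.err)) {
      best = { el, err };
    }
  }
  return best ? best.el : null;
}

// Combine the gaze target with a discrete EMG event stream: a "pinch"
// detected on the wristband activates whatever the user is looking at.
function handleEmgEvent(event, gazeDirDeg, elements) {
  if (event !== "pinch") return null;
  const target = gazeTarget(gazeDirDeg, elements);
  return target ? { action: "activate", id: target.id } : null;
}

const ui = [
  { id: "call", angleDeg: -10 },
  { id: "message", angleDeg: 0 },
  { id: "photo", angleDeg: 12 },
];
console.log(handleEmgEvent("pinch", 1.5, ui)); // gaze is nearest "message"
```

Separating targeting (gaze) from confirmation (EMG) is what makes the combination comfortable: the eyes do the fast pointing, and the hand only supplies a low-effort, unambiguous commit signal.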


Exploring AR experiences


The majority of my work on Orion focused on defining the experiences that formed the foundation of its user value. My team explored a broad range of possibilities, including communications, contextual applications, virtual objects, and more. We produced experience videos to guide technology development and gather user feedback. For deeper evaluation, we developed fully interactive prototypes using proprietary headsets. Our designs spanned from AR adaptations of existing Meta apps to entirely new methods of communication, digital content consumption, and interaction with the world.

Design for AR

One of the key challenges for the Orion glasses was developing design tools for creating digital 3D artifacts in physical space. We explored a wide range of concepts using methods that spanned from quick sketches to fully interactive prototypes. Quick video sketches, like those shown on the right, proved especially effective. These concepts highlight spatial communication and guided instructions, offering a glimpse into the transformative potential of AR.

AI

[Image: Generative UX prototype]


Generative UX


I believe that in the near future, AI will dynamically generate user interfaces by leveraging app services and functioning as an intelligent agent. These interfaces will be tailored to individual needs, preferences, and environments, offering a personalized, customized, and contextualized experience.


To illustrate this capability to leadership and engineering, I developed a Node.js/React prototype that renders live UI by integrating multiple service APIs to fetch content. This image of the prototype demonstrates step-by-step instructions entirely generated by AI, which autonomously determined text treatments, the placement and timing of images, and the column layout. The prototype also supports various formats and elements, such as lists, buttons, and maps, showcasing the potential for versatile, AI-driven UI generation.
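The core idea behind such a prototype can be sketched as a two-stage pipeline: the model emits a structured layout spec (element types, content, column count), and a renderer turns that spec into UI. This is an illustrative sketch only; the spec shape and element names are assumptions, not the actual prototype's format.

```javascript
// Render one AI-chosen element. The element vocabulary (heading, text,
// image, button, list) mirrors the kinds of components mentioned above.
function renderElement(el) {
  switch (el.type) {
    case "heading": return `<h2>${el.text}</h2>`;
    case "text":    return `<p>${el.text}</p>`;
    case "image":   return `<img src="${el.src}" alt="${el.alt ?? ""}">`;
    case "button":  return `<button>${el.label}</button>`;
    case "list":
      return `<ul>${el.items.map((i) => `<li>${i}</li>`).join("")}</ul>`;
    default:        return ""; // skip element types the renderer doesn't know
  }
}

// Lay the elements out in however many columns the model decided to use.
function renderSpec(spec) {
  const cols = spec.columns ?? 1;
  const body = spec.elements.map(renderElement).join("\n");
  return `<div class="grid cols-${cols}">\n${body}\n</div>`;
}

// Example spec, as a model might generate for step-by-step instructions.
const spec = {
  columns: 2,
  elements: [
    { type: "heading", text: "Step 1: Prep" },
    { type: "list", items: ["Dice the onion", "Mince the garlic"] },
    { type: "button", label: "Next step" },
  ],
};
console.log(renderSpec(spec));
```

Keeping the model's output constrained to a declarative spec, rather than letting it emit raw markup, is what makes the approach practical: the renderer guarantees valid, styleable UI while the AI stays in charge of content, layout, and pacing.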


Contextual AI


Combining AI with sensors on glasses creates a powerful platform for delivering compelling user experiences. As a manager and design lead, I led teams in developing numerous prototypes, defining the scenarios, interactions, and visual designs to showcase these capabilities.


In this example on Orion, AI utilizes object recognition and location data, employing world-locking to seamlessly position UI elements within the environment. The scenario demonstrates how AI recognizes food items and suggests a recipe in real time.
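The world-locking step described above can be sketched as follows: given recognized objects with world-space positions, the UI panel is anchored at a fixed point in the environment near them, so it stays put as the user moves. This is a hypothetical sketch; the detection format, category labels, and vertical offset are assumptions for illustration.

```javascript
// Average position of a set of world-space points.
function centroid(points) {
  const n = points.length;
  const sum = points.reduce(
    (a, p) => ({ x: a.x + p.x, y: a.y + p.y, z: a.z + p.z }),
    { x: 0, y: 0, z: 0 }
  );
  return { x: sum.x / n, y: sum.y / n, z: sum.z / n };
}

// Place a recipe-suggestion panel relative to recognized food items,
// lifted slightly above them and locked to the world (not the head).
function placeSuggestionPanel(detections, liftMeters = 0.25) {
  const food = detections.filter((d) => d.category === "food");
  if (food.length === 0) return null;
  const c = centroid(food.map((d) => d.position));
  return {
    anchor: "world", // stays fixed in the environment as the user moves
    position: { x: c.x, y: c.y + liftMeters, z: c.z },
    items: food.map((d) => d.label),
  };
}

const detections = [
  { label: "tomato", category: "food", position: { x: 0.2, y: 0.9, z: -1.0 } },
  { label: "basil",  category: "food", position: { x: 0.4, y: 0.9, z: -1.0 } },
  { label: "mug",    category: "object", position: { x: -0.5, y: 0.9, z: -1.2 } },
];
console.log(placeSuggestionPanel(detections));
```

Anchoring to the world rather than the display is the key design choice: the suggestion reads as part of the scene next to the ingredients, instead of a HUD element chasing the user's head.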

[Image: Orion recipe suggestion]


Meta AI


Meta AI represents the first step in bringing AI to glasses experiences, enabling on-the-go functionality powered by input from the point-of-view camera. As the design lead, I played a key role in crafting the experience for Meta AI on Orion. I also led the design of new AI use cases for the successors to the Ray-Ban Meta smart glasses.

