Their website says:
Reeveal is a Shopping Assistant that helps you choose products that meet your expectations
The main goal is to answer three user questions about any potential purchase:
1. Is it good for ME?
2. Is it good for the PLANET?
3. Is it good for my WALLET?
The team had an AR app with all the functionality, and they were selected for the Nreal and T-Mobile accelerator program at Hubraum. In this program, they had to convert their iOS app into an MR app for Nreal glasses. They hired me to create the product design of this MR app.
Sounds easy, huh?
I like this type of engagement: a clearly defined goal, a broad scope of responsibilities, and clear evaluation metrics.
I knew that MR interactions are spatial by nature, so they're closer to VR, while AR interactions are closer to a regular flat UI.
But I couldn't imagine how different they are until I tried to map them one to one.
For Food and Cosmetic products the app is constantly trying to recognize a bar code in the camera view, and once it's found, it runs a search and immediately shows the results in the UI.
It's a natural, frictionless interaction, but it wouldn't work for mixed reality. We definitely don't want to show results to users no matter what they're looking at.
Other lenses, like Books, or Movies, couldn't run continuously and required input from the user. In the mobile app, users had to take a photo of the book cover to get info about the book. MR glasses don't have "borders" like mobile screens, so it was impossible to predict where exactly users were looking.
In addition, of course, we wanted to reduce the use of the controller and make the interactions feel as seamless as possible.
In the shop, users need to hold the product they're considering buying; they most likely also have a shopping cart or basket and simply don't have a free hand for the controller.
The mobile app had a lot of info for each of the products. It worked OK on flat UI, but it would be impossible to navigate and read in MR.
What do you do when you want to know more about something? You bring it closer and focus on it.
What do you do when you don't want to see the product? Yeah, you just put it away.
So my first proposal was based on bringing and taking the items in/out of the viewing area.
Timeout and loading states keep users aware of what the app is doing.
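The "bring it closer / put it away" proposal above boils down to a distance check between the tracked product and the user's head. Here's a minimal sketch of that logic (all names and thresholds are hypothetical, not from the actual app); a hysteresis gap between the show and hide distances keeps the info card from flickering at the boundary:

```python
# Illustrative sketch of proximity-driven info cards (hypothetical values).

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

SHOW_AT = 0.40  # metres: bring the product this close to open its info card
HIDE_AT = 0.60  # metres: move it this far away to dismiss the card

def update_card(product_pos, head_pos, card_visible):
    """Return whether the info card should be visible this frame."""
    d = distance(product_pos, head_pos)
    if not card_visible and d < SHOW_AT:
        return True   # product brought into the viewing area
    if card_visible and d > HIDE_AT:
        return False  # product put away
    return card_visible  # inside the hysteresis band: no change
```

The two separate thresholds are the important design choice: with a single cutoff, tracking jitter near the boundary would make the card blink on and off.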
At this time, I didn't have the hardware to test whether this would work, so I came up with alternative solutions that wouldn't require precise product tracking and were easier to implement.
At first, I tried to articulate these ideas to the PM and team using static mockups, as it's the fastest way to deliver something visual.
But the team didn't have experience with spatial interaction and had a hard time understanding the differences.
Then I tried to combine mockups into a deck to show interactions step by step.
It still wasn't clear enough, so I made a few short videos with the key moments.
And it helped!
For this project, videos were unexpectedly helpful for getting everyone on the same page.
Of course, no video can replace a prototype, but the client had never worked with a designer/prototyper before and expected me to deliver only "designs" and rely on the dev team to "code".
But I didn't plan on surrendering…
After receiving my Nreal glasses, I was keen to dive into building the app, but the dev team was focused on the ML part.
So, I decided to play with Nreal SDK in my free time and build some prototypes using Unity.
I've explored the idea of showing photogrammetrically scanned meals in AR before, and it was natural to build it in Unity.
🤦 Nreal Light glasses' FOV was limiting for objects placed close to the user.
🤦 3DOF tracking of the controller wasn't comfortable for point-and-select interactions.
🤩 Spinning the controller to spin the virtual object felt quite neat.
Building XR apps for learning calligraphy is a kind of hobby for me, so I wouldn't miss the chance to try it on Nreal glasses.
🤦 In close proximity, objects were clipping and quite often weren't in focus.
🤦 The glasses had a hard time recognizing a marker. I tried different combinations of complexity and size, but the marker had to cover half the sheet to keep tracking good enough.
I built a quick prototype using book covers as markers to show the team how it would work in our app.
🤦 As expected, tracking wasn't good enough, especially in non-ideal lighting conditions.
During this stage, the dev team was focused on building the app, and my part was to test, give feedback, and iterate on the designs to work around or improve the interactions based on new findings.
From early on, we realized that we needed to scale back our ambitions and build just what's possible.
We couldn't track the bar code and the product, so we had to show the info floating in front of the user.
Timeout indicators helped limit user input to only the mandatory actions.
I kept designing micro-interactions like switching lenses and selecting the book cover.
There are no asset-delivery standards for MR or for the new Nreal SDK, so I had to stay in close touch with the dev team, preparing the required assets, transitions, and specifications.
Interesting initial concepts that turned out to be impossible even to prototype.
Nreal Light glasses don't have eye tracking, but I explored using head-gaze direction as a "cursor" to select products to scan.
One more idea was to use gaze for interaction with the objects.
I tried to use a product itself as an actionable element.
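The head-gaze "cursor" idea can be sketched with basic vector math: cast a ray along the headset's forward vector and pick the candidate object that lies closest to the ray, as long as it falls inside a small selection cone. This is a generic technique sketch, not the project's actual code; all names and the cone angle are assumptions:

```python
import math

# Hedged sketch of head-gaze selection without eye tracking (hypothetical names).

MAX_CONE_DEG = 10.0  # objects outside this angular radius are not selectable

def angle_to(forward, head_pos, obj_pos):
    """Angle in degrees between the gaze ray and the direction to an object.

    Assumes `forward` is a unit vector.
    """
    to_obj = [o - h for o, h in zip(obj_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj)) or 1.0
    dot = sum(f * c / norm for f, c in zip(forward, to_obj))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def pick(forward, head_pos, objects):
    """Return the object nearest the gaze ray, or None if all are outside the cone."""
    best, best_angle = None, MAX_CONE_DEG
    for name, pos in objects.items():
        a = angle_to(forward, head_pos, pos)
        if a < best_angle:
            best, best_angle = name, a
    return best
```

In an engine like Unity this would typically be a raycast or sphere cast from the camera transform, but the angular-cone version above captures the same interaction model.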
Despite some interesting ideas, we ended up implementing the most basic flow because of countless technical limitations.
But the product was a hit on demo day, and Reeveal successfully raised its next round of investment shortly after.
Reeveal didn't plan to focus on MR wearables in the near future, but during our work on the Nreal app there were a lot of discussions about interactions in the iOS app. So I was asked to extend my contract and help redesign the Reeveal AR app.
In the first stage, I was working closely with the product owner of the mobile app.
I reviewed the app and then started proposing changes. With a tight feedback loop, we were able to iterate quickly and efficiently. Some of the changes introduced to improve the UX:
The old app had the concept of primary and secondary information, but it wasn't consistent across the lenses.
To begin with, I sat down with the PM and sorted the data by priority and section.
Then I made the UI clear and consistent across all lenses.
Also, we agreed to remove rarely used pages, like My Friends. This let us improve and promote the more useful History page.
Instead of selecting lenses in advance on separate screens, we made all lenses available from the start, allowing users to enter camera mode immediately.
Initially, to scan a book cover, users had to point the camera and tap the screen. Of course, the book cover didn't fill the whole photo and was quite often skewed. The dev team tried to solve this with ML by training an algorithm to recognize the book better, but that's nearly impossible, as book covers are so diverse.
I proposed giving users some "invisible" guidance on taking a perfect picture of a book.
Corners help users align the book and give a clear understanding of what part will be scanned.
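The corner guides also simplify the engineering side: once users align the cover with a known on-screen rectangle, the app can simply crop to that region before recognition instead of searching the whole frame. A minimal sketch of that crop math, with made-up guide coordinates in normalised (0..1) screen space:

```python
# Illustrative crop-to-guide sketch; the guide rectangle values are assumptions.

GUIDE = {"left": 0.15, "top": 0.20, "right": 0.85, "bottom": 0.80}

def crop_rect(width, height, guide=GUIDE):
    """Convert the normalised guide rectangle to pixel coordinates (x, y, w, h)."""
    x = round(guide["left"] * width)
    y = round(guide["top"] * height)
    w = round((guide["right"] - guide["left"]) * width)
    h = round((guide["bottom"] - guide["top"]) * height)
    return x, y, w, h
```

The recognizer then only ever sees a roughly aligned, consistently framed cover, which is a far easier input than an arbitrary skewed photo.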
To make the camera view more engaging and informative, I added a label to the AR view and spinners to keep users aware of the process.
Tabs make sense for the expanded view, but they felt odd at the bottom of the screen in the collapsed view, so I proposed changing them to buttons. In addition, by using colour we were able to make them part of the data visualisation.
I removed or simplified secondary elements that kept users from completing the main task. For example, help info lived in a modal that required an explicit action to close; it was replaced with inline help that disappears automatically.
After lens types, layouts and interactions were standardised, I narrowed them down to two main flows.
Last but not least, I reviewed and updated the styles of the app. The goal was to make it clean and "invisible."
Special attention was paid to colour-coding different ranges of values so data is visualised consistently.
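Consistent colour coding of value ranges reduces to one shared mapping that every lens uses. A tiny sketch of the idea (the thresholds and hex values here are hypothetical, not Reeveal's actual palette):

```python
# Sketch of a shared score-to-colour scale; bands and colours are assumptions.

BANDS = [
    (40, "#D64545"),   # 0-39: poor    -> red
    (70, "#E8A33D"),   # 40-69: average -> amber
    (101, "#4CAF50"),  # 70-100: good   -> green
]

def score_colour(score):
    """Map a 0-100 score to its band colour."""
    for upper, colour in BANDS:
        if score < upper:
            return colour
    raise ValueError("score out of range")
```

Because all lenses go through the same function, "good" is always the same green, whichever product category the user is scanning.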
Also, we explored using just icons for navigation instead of lens circles, which are redundant as they combine the selection and photo-taking actions.
In the latest design iteration, bar codes are recognized automatically, but we decided to postpone that change for now, as it would require more effort to implement.
User-testing AR remotely is challenging, and remotely testing the scanning of specific products is impossible. We had to rely on in-person testing sessions, even during lockdown.
I ran short user tests with everyone who visited me during my work on the project.
Our team is distributed worldwide, and it was quite interesting to learn about regional technical differences; for example, bar code systems in Japan are significantly different.
To QA the app across remote team members, we created a shared collection of assets: book covers, bar codes, and movie poster images.
This is one of the most important new features.
It allows users to share the product info on the web quickly.
For us, it's one more chance to ensure the app's organic growth by promoting it on the shared web page.
I made clear specs for all transitions in the app.
During development, I stayed in close contact with the dev team, shared additional assets, and clarified requirements. Eventually, I prepared all the required materials for the App Store.
It was an interesting series of projects where I got to leverage a wide range of my skills. Some days I had to improvise and test critical interactions on my family members; on others I was almost in the product owner's seat; and sometimes I had to focus on micro details and prepare every asset.
It's also interesting how many tools I used to deliver and test my ideas: Figma -> After Effects -> Unity, and then back to Figma/Sketch for UI design.