Gaze for contextual interactions
I have always been fascinated by the idea of using gaze to improve human-computer interactions.
For example, gaze-to-select is arguably the quickest way to identify users’ intent. I made a measuring demo using Magic Leap a long time ago, and it worked flawlessly.
The out-of-the-box way to use gaze on the Meta Quest Pro is to show where users are gazing in VR. It might sound minor, but along with face tracking, it makes avatars come alive and takes the interactions to the next level.
I was most interested in using gaze to identify users’ focus areas. When I got the Meta Quest Pro, the first prototype I built connected users’ emotions (using face tracking) to the object they were looking at (using gaze). It’s goofy, but it worked.
Next, I thought about how gaze could improve our interaction with the real world. It sounds like a dream: we could hide all the interfaces, let people do their thing, and show only the required minimum when needed by utilising gaze. It sounds good on paper, but I had to experience it to make sure it worked, so, as usual, I built a prototype.
I wanted to test a variety of interactions that work with gaze. Some of them may not be optimal or perfectly suited to the case, but even the ones that didn’t work gave me valuable insight from actually trying them.
Book with audio bookmarks
Object: Physical book
The book shows a virtual icon with audio bookmarks. You can tap it to open the complete list and play a specific bookmark. The poke button from Interaction SDK on the physical tabletop feels excellent.
Object: Physical window
Interaction: Pinch and move
In this prototype, the shutters are virtual, but I assume real ones could be controlled in the future.
After playing with this interaction, I found that it might not be optimal to show the UI every time users glance at a window. It might be better to wait for a pinch in addition to gaze before showing the UI and starting the interaction.
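The gating above can be sketched as a per-frame check. This is a minimal illustration, not SDK code: the function names and the "gaze keeps an already-open UI open" behaviour are my assumptions about how such gating could work.

```python
# Sketch of gaze + pinch gating: the UI opens only when the user is both
# gazing at the object AND pinching, rather than on gaze alone.
# All names here are hypothetical, not from Interaction SDK.

def should_show_ui(is_gazed_at: bool, is_pinching: bool, ui_visible: bool) -> bool:
    """Return whether the window's UI should be visible this frame."""
    if not is_gazed_at:
        return False          # looking away always hides the UI
    if ui_visible:
        return True           # once open, gaze alone keeps it open
    return is_pinching        # opening requires gaze + pinch together
```

The asymmetry (pinch to open, gaze to keep open) avoids the UI flickering on every casual glance while still letting users keep it up during the interaction.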
Object: Physical heater
This interaction could be connected to the physical thermostat on the wall, but attaching it to the cause of the temperature change (the radiator) makes more sense. The difference is more obvious in the shutters example (looking at the shutters versus at the controller on the wall).
I used the thumbs-up and thumbs-down poses from Interaction SDK for this one. It seems logical and works well.
Object: Physical speaker
Interaction: Gaze to select, pinch to confirm
I wanted to fit the whole UI on the speaker’s area; therefore, I reduced the number of “buttons” to only three.
You look anywhere at the speaker to get the UI. You look at the button you want to activate and pinch your fingers to confirm the selection.
Of course, this interaction is quite restrictive and requires users to stay aware of exactly where they’re gazing.
While working on this prototype and recording it, I somehow got used to this interaction, and now I feel pretty confident in using my gaze as a pointer.
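The gaze-to-select, pinch-to-confirm pattern above can be sketched as a tiny selector fed with per-frame input. The class, the button ids, and the input shape are all illustrative assumptions, not anything from the SDK:

```python
# Sketch of gaze-to-select, pinch-to-confirm for a small fixed set of
# "buttons" (e.g. the three on the speaker). Each frame we record which
# button the gaze rests on; a pinch activates the currently gazed button.

class GazePinchSelector:
    def __init__(self, buttons):
        self.buttons = set(buttons)
        self.hovered = None    # button the gaze currently rests on

    def update(self, gazed_button, pinch_started):
        """Feed one frame of input; return the activated button or None."""
        self.hovered = gazed_button if gazed_button in self.buttons else None
        if pinch_started and self.hovered is not None:
            return self.hovered   # pinch confirms the gazed selection
        return None
```

Keeping the button set small, as the post does with only three, matters here: gaze pointing is imprecise, so each target gets a larger share of the speaker’s area.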
Object: Physical device
Interaction: Gaze to get more info
This is another example of the value of highly contextualised info. It might be one of the few cases where it’s okay to show virtual info on gaze alone, without any additional gating.
Object: Physical wrist
Interaction: Gaze to get more info
All the previous physical interactables were stationary, so I wanted to test how it felt with a moving one. I also intentionally wanted to avoid any additional interaction altogether. It is supposed to work like raise-to-wake on the Apple Watch, but the trigger is looking at your wrist rather than raising it.
It worked pretty well. I guess the ring when I’m not looking isn’t necessary.
Object: Physical palm
Interaction: Pinch specific finger
I’ve been working with the Interaction SDK team on palm menu concepts using two hands. We came up with some solid interactions that work flawlessly, but unfortunately, I cannot use anything similar until the example scene is publicly released (which will happen soon 😉). Therefore, I settled for basic one-hand interactions.
My takeaway is that gaze doesn’t change much in this case.
Object: Physical plant
Interaction: Complex sequence
As a part of IoT, MR could bring a lot of value, and gaze could make this value even more contextualised and not overwhelming.
In this example, only when the situation warrants it, the alert icon appears next to the object. After noticing and looking at it, more info appears (in this case, it says that the plant needs water). After the required action is performed (watering), the status of the object is updated. In the IoT example, the sensors could change their status.
In the default state, no virtual objects are shown. I’m not sure the “all is fine” info is even necessary while just gazing.
This is another example of when additional gating as a gesture or timeout might be a better solution.
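The sequence above (nothing shown → alert icon → details on gaze → back to nothing once fixed) is essentially a small state machine. This is a hypothetical sketch; the state names and the sensor interface are my own, standing in for whatever an IoT moisture sensor would report:

```python
# Sketch of the plant-status sequence as a state machine: the default
# state shows nothing, a sensor report raises an alert icon, gazing at
# the icon expands the details ("needs water"), and performing the fix
# returns the object to its silent default.

HIDDEN, ALERT, DETAILS = "hidden", "alert", "details"

class StatusIndicator:
    def __init__(self):
        self.state = HIDDEN

    def sensor_update(self, needs_attention: bool):
        if needs_attention and self.state == HIDDEN:
            self.state = ALERT            # show the alert icon
        elif not needs_attention:
            self.state = HIDDEN           # problem fixed: show nothing

    def on_gaze(self):
        if self.state == ALERT:
            self.state = DETAILS          # gaze expands the details
```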
Object: Virtual icon
Interaction: Swipe gesture
The current weather icon stays in the window’s context and grabs users’ attention if they want to get more details.
I leveraged out-of-the-box swipe gestures from Interaction SDK to engage with this widget. It feels interesting, as it doesn’t require you to change your focus to the hand.
Object: Virtual frame on the physical tabletop
I wanted to see how “regular” UI could live in MR. The placement was obvious, as tactile feedback from the tabletop makes poking way better.
Using pokable canvas from Interaction SDK, I quickly made this example UI, which feels great.
Frankly, I’m not sure that gaze brings a lot of value in this case. Little would change if the info stayed visible when you’re not looking at it.
I’m glad I did this exploration, as I learned a lot. I hesitated because I expected it would take a lot of my time, but I spent only a few weekends on the main coding part. ISDK helped me a lot in building the interactions; for most of the cases, I used pieces of the example scenes. (Disclaimer: I actually work as a contractor with the Interaction SDK team, exploring new interaction models for future versions of the SDK, so I know the ISDK quite well, but here I only used publicly available resources, so I wouldn’t consider it unfair 😀)
Anyway, if you want to start with VR interactions, I recommend you give Interaction SDK a try. It has a good set of reusable components and example scenes. Plus, a lot of exciting stuff is coming soon 😉
- UI might become too abundant. After playing with it, I am more confident that gaze by itself might not be enough to make an interactable active. Some additional gesture might be required as extra gating.
- Early on, I realized that eyes move too rapidly; sometimes I look away without thinking about it, which disables the interactable too easily. Simply adding a timeout before deselecting the interactable solved it.
- Gaze is indeed the ultimate tool for making human-computer interactions contextual and invisible when not needed.
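The deselect timeout from the takeaways can be sketched like this. The 0.4-second grace period is a made-up value to tune by feel, and the class is illustrative rather than SDK code:

```python
# Sketch of the deselect timeout: the interactable deactivates only after
# gaze has been away for a grace period, so quick involuntary glances
# don't cancel the interaction. Times are in seconds.

DESELECT_DELAY = 0.4  # grace period before a look-away counts (tune by feel)

class DwellTarget:
    def __init__(self, delay=DESELECT_DELAY):
        self.delay = delay
        self.active = False
        self.gaze_lost_at = None   # time the gaze last left the target

    def update(self, is_gazed_at: bool, now: float) -> bool:
        """Feed one frame; return whether the target is still active."""
        if is_gazed_at:
            self.active = True
            self.gaze_lost_at = None       # gaze returned: reset the timer
        elif self.active:
            if self.gaze_lost_at is None:
                self.gaze_lost_at = now    # start counting the look-away
            elif now - self.gaze_lost_at >= self.delay:
                self.active = False        # timeout elapsed: deselect
        return self.active
```

Activation is immediate but deactivation is delayed, a simple hysteresis that absorbs the rapid, involuntary eye movements mentioned above.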
Initially, I made this prototype to work in my room, but I wanted to give you a chance to experience it and give feedback, so I made a VR version. You can sideload it from SideQuest.
It’s designed to work on Quest Pro, of course, but I made a workaround to make it kind of work on other Quests (it uses the centre of the view instead of the gaze).
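The workaround for non-Pro Quests can be sketched as a simple fallback when picking the pointing ray. The data shapes here are simplified stand-ins for whatever the tracking APIs actually return:

```python
# Sketch of the gaze fallback: when eye tracking isn't available or
# valid (non-Pro Quests), substitute the head's forward ray (the centre
# of the view) for the gaze ray, so the same interactions still roughly
# work. Dicts stand in for real tracker/pose types.

def gaze_ray(eye_tracker, head_pose):
    """Return (origin, direction) for gaze, falling back to head forward."""
    if eye_tracker is not None and eye_tracker.get("valid"):
        return eye_tracker["origin"], eye_tracker["direction"]
    # Fallback: centre of view, i.e. the head's position and forward vector
    return head_pose["position"], head_pose["forward"]
```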