Gaze for contextual interactions

Article posted on 2023-04-26

I have always been fascinated by the idea of using gaze to improve human-computer interactions.

For example, gaze-to-select is arguably the quickest way to identify users’ intent. I made a measuring demo using Magic Leap a long time ago, and it worked flawlessly.

The out-of-the-box way to use gaze on Meta Quest Pro is to show where users are looking in VR. It might sound minor, but along with face tracking, it makes avatars come alive and takes the interactions to the next level.

I was most interested in using gaze to identify users’ focus areas. When I got the Meta Quest Pro, I first built a prototype that connects users’ emotions (via face tracking) to the object they’re looking at (via gaze). It’s goofy, but it worked.

Next, I thought about how gaze could improve our interaction with the real world. It sounds like a dream: hide all the interfaces, let people do their thing, and use gaze to show the required minimum only when it’s needed. It sounds good on paper, but I had to experience it to make sure it worked, so, as usual, I built a prototype.

I wanted to test a variety of interactions that work with gaze. Some of them may not be optimal or perfectly suited for their case, but even the ones that didn’t work gave me valuable first-hand experience.

Book with audio bookmarks

Object: Physical book
Interaction: Poke

The book shows a virtual icon with audio bookmarks. You can tap it to open the complete list and play a specific bookmark. The poke button from Interaction SDK, backed by the physical tabletop, feels excellent.

Window shutters

Object: Physical window
Interaction: Pinch and move

In this prototype, the shutters are virtual, but I assume real ones could be controlled in the future.

After playing with this interaction, I found that it might not be optimal to show the UI every time users glance at the window. It might be better to wait for a pinch in addition to the gaze before showing the UI and starting the interaction.
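
Roughly, the gating logic I have in mind boils down to something like this sketch (engine-agnostic pseudocode rather than Interaction SDK code; the per-frame booleans are placeholders for whatever your eye- and hand-tracking layer provides):

```python
# Minimal sketch of "gaze + pinch" gating. Nothing here is Interaction SDK API;
# the runtime would feed in two booleans per frame.

class GazePinchGate:
    """Shows the shutter UI only once the user both looks at the window and pinches."""

    def __init__(self) -> None:
        self.ui_visible = False

    def update(self, gazing_at_window: bool, pinching: bool) -> bool:
        if not self.ui_visible:
            # Gaze alone is not enough; the pinch acts as an explicit opt-in.
            self.ui_visible = gazing_at_window and pinching
        else:
            # Once the interaction has started, keep it alive for as long as the
            # pinch is held, even if the gaze wanders off the window.
            self.ui_visible = pinching
        return self.ui_visible
```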

Heater

Object: Physical heater
Interaction: Pose

This interaction could be attached to the physical thermostat on the wall, but attaching it to the thing that actually changes the temperature (the radiator) makes more sense. The difference is more obvious in the shutters example: you look at the shutters themselves rather than at a controller on the wall.

I used the thumbs-up and thumbs-down poses from Interaction SDK for this one. It seems logical and works well.

Music speaker

Object: Physical speaker
Interaction: Gaze to select, pinch to confirm

I wanted to fit the whole UI within the speaker’s area, so I reduced the number of “buttons” to only three.

You look anywhere at the speaker to get the UI. You look at the button you want to activate and pinch your fingers to confirm the selection.

Of course, this interaction is quite restrictive and forces users to stay aware of exactly where they’re gazing.
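
Under the hood, the loop is tiny. Here is a sketch of the gaze-to-select, pinch-to-confirm logic with made-up names (not the actual prototype code): gaze only highlights, and nothing fires until the pinch confirms it.

```python
from typing import Callable, Dict, Optional

class GazePinchButtons:
    """Gaze highlights a button; a pinch confirms and triggers it."""

    def __init__(self, actions: Dict[str, Callable[[], None]]) -> None:
        self.actions = actions
        self.hovered: Optional[str] = None

    def update(self, gazed_button: Optional[str], pinch_started: bool) -> None:
        self.hovered = gazed_button          # gaze only highlights
        if pinch_started and self.hovered is not None:
            self.actions[self.hovered]()     # pinch confirms the selection

# Example wiring with three "buttons", matching the speaker prototype.
speaker = GazePinchButtons({
    "play":   lambda: print("toggle playback"),
    "next":   lambda: print("next track"),
    "volume": lambda: print("open volume slider"),
})
speaker.update(gazed_button="play", pinch_started=False)  # hover only
speaker.update(gazed_button="play", pinch_started=True)   # confirmed
```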

While working on this prototype and recording it, I somehow got used to this interaction, and now I feel pretty confident in using my gaze as a pointer.

Charging status

Object: Physical device
Interaction: Gaze to get more info

This is another example of the value of highly contextualised info. It might be one of the few cases where it’s okay to show virtual info on gaze alone, without any additional gating.
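
One way this could look in code (a sketch with an assumed half-second linger to stop the label flickering when the gaze skims past; that value is my guess, not something measured in the prototype):

```python
class GazeReveal:
    """Shows the charging label while the device is gazed at, plus a short linger."""

    def __init__(self, linger_seconds: float = 0.5) -> None:
        self.linger = linger_seconds
        self.time_since_gaze = float("inf")

    def update(self, gazing_at_device: bool, dt: float) -> bool:
        if gazing_at_device:
            self.time_since_gaze = 0.0
        else:
            self.time_since_gaze += dt
        return self.time_since_gaze <= self.linger
```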

Wristwatch

Object: Physical wrist
Interaction: Gaze to get more info

All the physical interactables were stationary, so I wanted to test how it felt with a moving one. I also intentionally wanted to avoid any additional interaction altogether. It is supposed to work like raise-to-wake on the Apple Watch, but the trigger is not raising your wrist, it’s looking at it.

It worked pretty well. I guess the ring shown when I’m not looking isn’t necessary.

Palm massager

Object: Physical palm
Interaction: Pinch specific finger

I’ve been working with the Interaction SDK team on palm menu concepts using two hands. We came up with some solid interactions, which work flawlessly, but unfortunately, I cannot use anything similar until the example scene is publicly released (which will happen soon 😉). Therefore, I settled for basic one-hand interactions.

My takeaway is that gaze doesn’t change much in this case.

Bonsai alert

Object: Physical plant
Interaction: Complex sequence

As a part of IoT, MR could bring a lot of value, and gaze could make this value even more contextualised and not overwhelming.

In this example, the alert icon appears next to the object only when the situation warrants it. Once you notice it and look at it, more info appears (in this case, that the plant needs water). After the required action is performed (watering), the object’s status is updated. In an IoT setup, sensors could drive that status change.

In the default state, no virtual objects are shown. I’m not sure that an “all is fine” message on gaze is even necessary.

This is another example where additional gating, such as a gesture or a timeout, might be a better solution.
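
The whole sequence is essentially a tiny state machine. Here is a rough sketch in my own naming (not the prototype’s actual code): hidden by default, an icon when something needs attention, details once the icon is gazed at, and back to hidden after the action.

```python
class PlantAlert:
    HIDDEN, ICON, DETAILS = "hidden", "icon", "details"

    def __init__(self) -> None:
        self.state = self.HIDDEN

    def update(self, needs_water: bool, gazing_at_icon: bool, was_watered: bool) -> str:
        if was_watered or not needs_water:
            self.state = self.HIDDEN      # status resolved: back to a clean room
        elif self.state == self.HIDDEN:
            self.state = self.ICON        # something needs attention: show the icon
        elif self.state == self.ICON and gazing_at_icon:
            self.state = self.DETAILS     # gaze expands it into "the plant needs water"
        return self.state
```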

Weather forecast

Object: Virtual icon
Interaction: Swipe gesture

The current weather icon stays in the window’s context and invites attention when users want to get more details.

I leveraged out-of-the-box swipe gestures from Interaction SDK to engage with this widget. It feels interesting, as it doesn’t require you to shift your focus to your hand.

Virtual tablet

Object: Virtual frame on the physical tabletop
Interaction: Poke

I wanted to see how “regular” UI could live in MR. The placement was obvious, as tactile feedback from the tabletop makes poking way better.

Using pokable canvas from Interaction SDK, I quickly made this example UI, which feels great.

Frankly, I’m not sure gaze brings a lot of value in this case. Little would change if the info stayed visible even when you’re not looking at it.

I’m glad I made this exploration, as I learned a lot. I hesitated because I expected it to take a lot of my time, but I spent only a few weekends on the main coding part. ISDK helped me a lot in building the interactions; in most cases, I used pieces of the example scenes. (Disclaimer: I actually work as a contractor with the Interaction SDK team, exploring new interaction models for future versions of the SDK, so I know ISDK quite well, but here I only used publicly available resources, so I wouldn’t consider it unfair 😀)

Anyway, if you want to start with VR interactions, I recommend you give Interaction SDK a try. It has a good set of reusable components and example scenes. Plus, a lot of exciting stuff is coming soon 😉


Takeaways

Initially, I made this prototype to work in my room, but I wanted to give you a chance to experience it and give feedback, so I made a VR version. You can sideload it from SideQuest.

https://sidequestvr.com/app/16132/gaze-for-context

It’s designed to work on Quest Pro, of course, but I made a workaround to make it kind of work on other Quests (it uses the centre of the view instead of the gaze).
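
In spirit, the workaround just swaps the ray that does the pointing. A sketch of the idea with plain tuples standing in for the engine’s ray types (not the code shipped in the demo):

```python
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

def pick_pointing_ray(
    eye_ray: Optional[Tuple[Vec3, Vec3]],
    head_position: Vec3,
    head_forward: Vec3,
    eye_tracking_available: bool,
) -> Tuple[Vec3, Vec3]:
    """Eye ray on Quest Pro; centre-of-view ray everywhere else."""
    if eye_tracking_available and eye_ray is not None:
        return eye_ray
    return (head_position, head_forward)

# On a Quest 2/3 this resolves to the centre of the view:
origin, direction = pick_pointing_ray(
    eye_ray=None,
    head_position=(0.0, 1.6, 0.0),
    head_forward=(0.0, 0.0, -1.0),
    eye_tracking_available=False,
)
```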