Case Study

Port 6


Importance of accurate touch detection

🤏 + 👉🎚

What is Port 6?

From their website:

“Port 6 is a VC-backed augmented and virtual reality interaction laboratory based in Helsinki, Finland. Our novel technologies detect real touch events and simulate virtual touch events using wrist-based sensors and AI. By enabling AI-powered wristbands for our clients, we offer the missing piece to the complete AR/VR input stack.”

When I was hired as a contractor prototyper/designer, Jamin (Port 6) suggested five vectors to explore (XR hands, XR mouse, XR text, XR surfaces, XR tangibles). But as you will see, we explored only some of them and never built prototypes for the others. What I like about working with Port 6 is the freedom of choosing what to research next. It’s a real R&D laboratory. We also had a deadline: a series of prototypes for a public demo day in a couple of months.

The only constraint was that all of the explorations had to revolve around accurate detection of touch events. Look, current hand tracking detects gestures only visually, with cameras on the headset. That makes it usable only when your hands are close to the headset, but more importantly, it’s just not that accurate. For example, when we pinch, we humans can easily sense whether our fingertips are touching or hovering 1 mm apart.

To improve the experience by tracking hands more precisely, Port 6 is developing wristbands. They’re driven by a machine learning model and can accurately detect when you’re pinching or touching anything else. During summer 2021 they were still in the prototyping phase, and the machine learning engineers were working on improving how the signals are interpreted. Our team was partially distributed, so shipping and troubleshooting the prototypes wasn’t a feasible option, but the Port 6 team came up with a clever workaround. First, they developed thimble v1: a small piece of copper that you wear on your thumb. It let us build prototypes with accurate pinch detection.

For thimble v2, Port 6 added two more thimbles for the fingertips of the index and middle fingers. They’re a little different, as they register not only when a thimble touches another finger but also when it touches anything else. It’s one more thing the wristbands will do far better than any currently available hardware.

With this relatively straightforward solution, we could accurately register pinch gestures and know precisely when users were touching or sliding on different objects. We used this hardware for most of the prototypes in this project. In addition, the demos always had a toggle to disable the thimbles and feel the difference compared to conventional gesture detection.

Pinch interactions

So, with the main hypothesis in mind and thimble v1 on hand, we started the work. I was so keen to feel the difference that the first thing I created was a simple demo around direct manipulation of objects. With visual gesture recognition, objects would always shift when you released them; knowing exactly when the pinch was released, the object stayed put without moving or shaking. It was very motivating, as it felt really sharp.
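To make the difference concrete, here’s a minimal, framework-agnostic sketch of the logic (my illustration, not Port 6’s actual code). The grab loop is identical in both modes; what changes is where the boolean pinch signal comes from. The 2 cm threshold for the camera-only path is a made-up number.

```python
import math

def visual_pinch(thumb_tip, index_tip, threshold_m=0.02):
    """Camera-only detection: guess 'pinching' from fingertip distance.
    Tracking jitter around the threshold makes the release moment fuzzy."""
    return math.dist(thumb_tip, index_tip) < threshold_m

def grab_step(pinching, hand_pos, obj_pos):
    """One frame of direct manipulation: the object follows the hand while
    the pinch signal is on and freezes the instant it drops."""
    return hand_pos if pinching else obj_pos

# With the thimble, `pinching` comes straight from the contact sensor:
#   obj_pos = grab_step(thimble_contact, hand_pos, obj_pos)
# Without it, it comes from visual_pinch(), which drifts near release and
# nudges the object at the moment you let go.
```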

We had so many ideas and concepts to test that we decided to take a step back, spend more time exploring what we could build, and then pick the most effective ways to spend our time. We started by brainstorming: we would go from one brief to another, spend 10 minutes writing down in Miro all the ideas we thought would be cool to examine and build, then go through all the notes, discussing and grouping them. For the next session, we would prepare sketches to better articulate and clarify the interactions. This way, we quickly had plenty of ideas to build and started clearly noticing the most promising ones. We finished the exercise by picking the vectors that had both parts we were confident were cool and unknowns we could learn from and leverage in the next iteration.

I’ve always been fascinated by micro-interactions. I had been trying to prototype something like Google Soli for XR, but visual hand recognition was even worse at recognizing any movements other than bending your fingers.

But this time it might actually work, as we had part of the equation solved: we knew precisely when fingers were touching something. The hypothesis was that visual hand pose recognition combined with accurate touch detection would work well enough. So I created a series of demos to test it:

Touch sectors (5/5)

A quick test of whether we can accurately recognize users touching different areas of the index finger with the thumb. Initially I planned to have users touch the middle of the three phalanges, but moved to the index fingertip + two joints, as they’re easier to reach.

It worked unexpectedly well.
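A minimal sketch of how such a classifier could work (my own illustration, not Port 6’s implementation): when the thimble reports contact, trust that a touch happened and only decide where it happened, by picking the tracked index landmark closest to the thumb tip. The landmark names and positions below are hypothetical.

```python
import math

INDEX_LANDMARKS = {  # positions would come from hand tracking, in metres
    "tip": (0.00, 0.000, 0.00),
    "distal_joint": (0.00, -0.025, 0.00),
    "proximal_joint": (0.00, -0.055, 0.00),
}

def touched_sector(thumb_tip, landmarks=INDEX_LANDMARKS):
    """Called only on a hardware touch event: classify which sector of the
    index finger the thumb is on by nearest landmark."""
    return min(landmarks, key=lambda name: math.dist(thumb_tip, landmarks[name]))

print(touched_sector((0.005, -0.030, 0.000)))  # -> "distal_joint"
```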

X Swipe (5/5)

Swipe with your thumb over your index finger to scroll left or right. And again, it worked pretty well: I was able to scroll content quite predictably, even without snap-to-place.
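The scrolling itself can be a very small piece of logic, roughly like the sketch below (illustrative; the gain value is made up): while the thimble reports contact, project the per-frame thumb movement onto the index finger’s axis and accumulate it as scroll offset.

```python
def scroll_delta(thumb_prev, thumb_now, index_axis, gain=20.0):
    """Project the frame-to-frame thumb movement onto the (unit-length)
    index-finger axis and scale it into UI scroll units.
    Only called while the thimble reports thumb-to-index contact."""
    movement = [n - p for n, p in zip(thumb_now, thumb_prev)]
    along_axis = sum(m * a for m, a in zip(movement, index_axis))
    return along_axis * gain

offset = 0.0
offset += scroll_delta((0.00, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0))
print(offset)  # a 1 cm thumb slide along the finger -> 0.2 scroll units
```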

Y Swipe (4/5)

Swipe with your thumb over your index finger to scroll up and down. Of course, it worked better with the wrist rotated 90° counterclockwise.

The UI clearly affords vertical swiping, but it wasn’t as comfortable as horizontal.

Z Swipe (2/5)

Swipe with your thumb over your index finger to scroll objects in depth.
It didn’t work well, as it requires an uncomfortable hand position that Oculus can’t track properly. Also, thumb bends aren’t reflected in VR.

XY Plane drag (3/5)

Swipe with the thumb to move the plane.

It wasn’t accurate enough. In addition, very few use cases require such an interaction (dragging a 2D map, for example, is better done with more precise direct manipulation).

Rotate with hand (3.5/5)

Pinch the index and thumb and rotate the whole hand to spin the knob.

Because the hand mesh in VR is over-optimized, it doesn’t match the real hand accurately enough, so rotation wasn’t continuous and skipped when the hand faced the user edge-on.

Swipe to rotate (4/5)

Swipe left/right with your thumb to rotate the knob.
It worked really well and was definitely a better way to rotate things than rotating the pinch/hand. Users had no issues translating swiping gestures into turning movements.
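The mapping here is just linear swipe distance to knob angle, something like the sketch below (again my own illustration; the gain and the 300° range are arbitrary).

```python
def knob_angle(current_deg, swipe_along_index_m, gain_deg_per_m=600.0):
    """Turn thumb travel along the index finger into knob rotation,
    clamped to a 0-300 degree range like a typical volume knob."""
    return max(0.0, min(300.0, current_deg + swipe_along_index_m * gain_deg_per_m))

print(knob_angle(150.0, 0.02))  # a 2 cm swipe turns the knob by 12° -> 162.0
```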

For presentation purposes, I eventually kept only the best prototypes and added the option to switch to visual-only hand recognition. It was clear even to non-VR users that such micro-interactions are only feasible with the thimble and accurate touch detection.

By the way, all demos were tested not only by our family members but also by the Port 6 team and their lab visitors. That helped us constantly gather feedback from both experienced XR users and people who were new to XR tech.

As a small unit, we found a sweet spot in collaboration early on. On one hand, we clearly split the work: each of us had our own area of responsibility and Unity project to work on. At the same time, each evening we recorded videos, documented progress and shared prototypes, so the next day during daily sync-ups we could discuss them, give feedback and recommend improvements to each other. It’s quite interesting to see the massive volume of sketches we produced to articulate ideas.


With this tight feedback loop, we could move quickly and efficiently, focusing on quick sketches and constantly updated prototypes.

Else-touch interactions

In the next mini-project, we explored what happens if we can accurately detect not only pinches but touches on any other object. To do this, Port 6 developed thimble v2: two more thimbles on the index and middle fingers that recognise when they’re touching any surface, not only another finger. It opened up a whole range of possibilities for us, practically the whole world to interact with.

At first, Port 6 made a couple of prototypes using Magic Leap; AR glasses felt like a good choice for an else-touch experience. It felt really remarkable, especially on non-solid surfaces like a sofa, but hand tracking on the Magic Leap One wasn’t good enough. We decided to set aside the real-world context, prioritise hand interactions, and start building for Oculus Quest.

My first prototype was very basic, just a plane with several buttons and a couple of interactions:

👉 Users touch the surface with two fingers to place the UI
👉 Users can change the scale of the UI to test its limits using the upper buttons
👉 Users can decrease or increase the number in the centre of the UI using the middle buttons
👉 And granularly adjust the vertical offset using the buttons on the bottom row.
👉 The UI will flip 90 degrees to align with a real vertical surface like a wall if the wrist is vertical (sketched after this list).
👉 Also, I added the option to disable the thimble and switch to touch detection using proximity and colliders, which helped to test the difference.
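A rough sketch of the placement logic behind the last two points (my own illustration, with hypothetical inputs from hand tracking; I check the palm normal here instead of the wrist angle as a simplification): place the UI at the midpoint of the two touching fingertips, and flip it vertical when the palm isn’t facing down at a tabletop.

```python
def place_ui(index_tip, middle_tip, palm_normal, wall_threshold=0.5):
    """Position = midpoint of the two touching fingertips. If the palm faces
    mostly sideways (rather than down at a tabletop), treat the surface as a
    wall and flip the UI 90 degrees to vertical."""
    position = tuple((a + b) / 2 for a, b in zip(index_tip, middle_tip))
    facing_down = -palm_normal[1]           # how much the palm points at the floor
    is_wall = facing_down < wall_threshold  # not pointing down enough -> wall UI
    return position, is_wall

print(place_ui((0.00, 0.8, 0.3), (0.02, 0.8, 0.3), (0.0, 0.0, -1.0)))
# -> ((0.01, 0.8, 0.3), True): palm facing a wall, so the UI flips vertical
```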

Takeaways

🤞 Of course, the experience was somewhat limited in VR, but after users touched the tabletop for the first time to place the UI, they weren’t confused about where it was.
👌 Using thimble v2 made the experience confident and precise. Since we knew exactly when users touched something, we just used the finger’s position to determine which virtual object was triggered (see the sketch after this list).
👍 Hand positioning for this purpose worked fine, even for tiny buttons.
👌 The vertical offset wasn’t so important when we used the thimble, as we knew exactly when it touched the surface; but in the default mode, without the thimble and with colliders, it had to be carefully tweaked to detect when a button was engaged.
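The lookup itself can stay dead simple, something like this sketch (not the real project code; the button names and the 4 cm radius are made up): on a hardware touch event, trigger whichever button centre is closest to the fingertip, within some radius.

```python
import math

BUTTONS = {  # hypothetical button centres on the tabletop UI, in metres
    "minus": (-0.05, 0.0, 0.00),
    "plus": (0.05, 0.0, 0.00),
    "offset_up": (0.00, 0.0, 0.08),
}

def on_touch(finger_tip, buttons=BUTTONS, max_dist=0.04):
    """Runs only when the thimble says the fingertip actually hit the table."""
    nearest = min(buttons, key=lambda name: math.dist(finger_tip, buttons[name]))
    return nearest if math.dist(finger_tip, buttons[nearest]) <= max_dist else None

print(on_touch((0.045, 0.0, 0.005)))  # -> "plus"
```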

I was working on this prototype right when the passthrough demo mode for Oculus Quest was released, so I couldn’t miss the chance and built a version of this experience with a passthrough view of the environment.

Takeaways

👋 Passthrough is still a flat stereoscopic 360 view, so I didn’t show the hand mesh, as it didn’t match and seemed to float in between.
👀 For the same reason, the UI was always on top of the hand.
😕 I would say that passthrough gave context, which is so essential when touching something, but its low quality and flatness convinced us to avoid it next time.

As the public demo day was approaching, we started focusing on the prototypes we planned to share.

After another brainstorming and sketching review, I started building a small story around else-touch interactions entirely in VR.

It was an IoT use case. I made a simple room with a TV and two lights. If you knock three times with your index finger on any horizontal surface, you get a TV remote with buttons to change channels and adjust the volume. When you knock on any vertical surface, you get a vertical UI with two sliders for adjusting the lights. It’s a simple prototype meant to give a sneak peek into how the future of IoT might work when you can summon controls anywhere.
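As a sketch of how the triple-knock trigger could work (my illustration; the 0.6-second window is a guess), you only need to count the hardware touch events and fire when three of them land within a short window:

```python
from collections import deque

class KnockDetector:
    """Fires when `knocks_needed` touch events arrive within `window_s` seconds."""

    def __init__(self, knocks_needed=3, window_s=0.6):
        self.times = deque(maxlen=knocks_needed)
        self.window_s = window_s

    def on_touch(self, timestamp_s):
        """Call on every thimble touch event against a surface."""
        self.times.append(timestamp_s)
        return (len(self.times) == self.times.maxlen
                and self.times[-1] - self.times[0] <= self.window_s)

d = KnockDetector()
print([d.on_touch(t) for t in (0.00, 0.20, 0.38)])  # -> [False, False, True]
```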

Takeaways

👌 With clear else-touch detection, buttons and sliders worked really well.
😬 The surface you were touching became important, especially for sliders; swiping on my rough walls wasn’t the most pleasant experience.
🤔 We noticed that we didn’t have that many vertical surfaces around our workstations at all. Even worse, the Port 6 office is located in the attic of an old building in Helsinki, which has barely any vertical surfaces.

Fun stuff

Somewhere in the middle, I made one more mini-game. We saw that direct manipulation of objects works noticeably better with the thimble than with the conventional method, but we weren’t sure non-VR users would understand the value of this difference. So we started looking for an idea with a more intense user flow, where the accuracy of touch detection would have more value. As always, I started with sketches to quickly choose a direction.


Eventually, we chose an idea called Falling Fruits.

The story is basic: you’re on an island where fruits constantly fall from a palm tree, and you have to gather them in a basket.

The mechanic of this mini-game is straightforward: you position small blocks to funnel the fruits. The size of the blocks and the distance to the basket force users to pay attention to how they place them.

I used a few free assets from the Unity Asset Store and built the demo in almost no time.

But the story and the added pressure did their job: non-VR users were enabling and disabling the thimble to feel the difference.

So, on demo day, Port 6’s visitors and investors had a chance to experience a wide range of demos and better understand the challenges the Port 6 team is solving.

As you can see, our process wasn’t very structured or linear. Initially, I was not completely happy with how the project was moving. We had three different and sometimes conflicting objectives:
1. Invent a new system of interaction patterns and shape it into an SDK for others to reuse.
2. Inform and help the Port 6 ML and hardware team with our insights.
3. Prepare nice and clear demos for demo day.

I’m used to being a product designer, and the first thing I do in new projects is define user groups and articulate their needs. This time the situation was different: we explored various concepts, and the whole process was more unrestrained.

In the middle of the project, I travelled around the Scandinavian countries and visited Port 6 in Helsinki, where my wife and I stayed for a few days. We met Port 6 team members and other incredible Finnish creatives. Only after spending some time with them and understanding their characters better did I realize that such a fuzzy process isn’t an issue; it’s organized this way intentionally. The Port 6 team has a broad goal (define natural and intuitive interactions with wearables), but they’re fine with exploring widely and finding problems to solve.

In terms of ideas for prototypes, it’s also absolutely fine not to have a quarterly plan and to just build one prototype after another and see where it leads us.

Luckily, we had retrospective, brainstorming and sketching phases during the project, and we constantly documented all of our insights. It helped us accumulate a significant volume of ideas, observations and insights in quite a short period.

Besides all of the technical takeaways, I noted the following for myself:

Sometimes it’s okay not to be sure.

As an outcome, our quick iterative process and habit of documenting all the details helped the Port 6 team build a better understanding of user needs and how hardware can address them. I was only able to mention a few publicly released hardware projects here, but there is much more to come from Port 6 😉

As for me, I keep working with Port 6 as a prototyper/designer. I really enjoyed this short exploratory period, and now we’re cooking up some next-level stuff, so stay tuned ✌️