Gestoos App lets users interact with their computer through free-space gestures, connecting each gesture to a command or keyboard shortcut in their installed apps.
The idea reframed Gestoos’ computer-vision technology as a personal productivity layer. Instead of touching the keyboard for repetitive shortcuts, a user could raise a hand, swipe in space, or hold a static pose — and trigger anything the operating system supports. Simple, fast, and tied to the apps people already use every day.
My focus on this project was the mobile companion app — the surface users open to map gestures to commands, configure shortcuts, and learn the gesture vocabulary.
I designed the full visual system for the app and built the interactive prototype: the onboarding flow, the welcome and installation screens, the gesture library browser, and the per-gesture configuration view that ties a movement to a shortcut on a connected computer.
The app is organized around a central Gesture Library, where every supported pose is grouped into two clear families.
Moving gestures — hand poses that can be moved left, right, up, or down. These are the directional shortcuts: scroll, swipe, navigate between desktops, change tracks. The mental model maps naturally to actions that have a direction in the real world.
Static gestures — fixed hand poses held still, recognized as discrete commands. These map to actions that have no direction: play/pause, mute, screenshot, open an application. The simplicity of the pose makes the shortcut memorable.
This split is the most important UX decision in the app. It lets a user predict what a gesture is likely to do before looking it up, turning a library of 30+ gestures into something a person can actually remember.
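To make the split concrete, here is a minimal sketch of how the two families and the shortcut binding behind the configuration view could be modeled. The type names, fields, and shortcut encoding are hypothetical illustrations, not Gestoos' actual schema.

```typescript
// Hypothetical data model for the two gesture families; all names and
// the shortcut encoding are illustrative, not the actual schema.

type Direction = "left" | "right" | "up" | "down";

// Moving gestures carry a direction; static gestures are a pose held still.
type Gesture =
  | { kind: "moving"; pose: string; direction: Direction }
  | { kind: "static"; pose: string };

// A binding ties one gesture to an OS-level command or keyboard shortcut.
interface GestureBinding {
  gesture: Gesture;
  shortcut: string; // e.g. "ctrl+left" or "media:play-pause" (hypothetical encoding)
}

// Moving gestures map to directional actions, static gestures to
// discrete commands: the library divides along a single `kind` field.
const bindings: GestureBinding[] = [
  { gesture: { kind: "moving", pose: "open-palm", direction: "left" }, shortcut: "ctrl+left" },
  { gesture: { kind: "static", pose: "fist" }, shortcut: "media:play-pause" },
];
```

Branching on one `kind` field is the whole trick: the UI, the recognition pipeline, and the user's mental model all divide the library along the same line.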
The onboarding sequence prioritizes time-to-value — every screen exists to get the user to their first working gesture as fast as possible.
The welcome screen establishes the Gestoos brand and explains, in a single line, what the app does. The installation screen handles permissions and desktop pairing in the background while the user reads a short illustrated explanation. Then the gesture library opens directly: pick a category, pick a gesture, assign a shortcut. The user has a working first gesture within thirty seconds of opening the app.
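As a rough sketch of that sequencing, assuming hypothetical `ui` and `system` interfaces: the background setup work overlaps with the user's reading time, and the library opens as soon as both are finished.

```typescript
// Illustrative onboarding sequence; the interface names are assumptions.
// Permissions and desktop pairing start in the background while the
// user reads the installation screen, so neither blocks the other.
async function runOnboarding(
  ui: {
    showWelcome(): Promise<void>;      // resolves when the user taps continue
    showInstallation(): Promise<void>; // resolves when the user finishes reading
    openGestureLibrary(): void;
  },
  system: {
    requestPermissions(): Promise<void>;
    pairWithDesktop(): Promise<void>;
  },
): Promise<void> {
  await ui.showWelcome();

  // Kick off setup without blocking the installation screen.
  const setup = Promise.all([system.requestPermissions(), system.pairWithDesktop()]);
  await ui.showInstallation();
  await setup;

  // Land directly in the library: pick a category, pick a gesture,
  // assign a shortcut.
  ui.openGestureLibrary();
}
```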
Beyond onboarding, the app stays out of the way. Users return only to add, edit, or delete gestures — not to maintain the system.
Every gesture in the Gestoos library passes through the same three-criterion evaluation before it can be shipped to users.
A gesture has to be holdable or repeatable without strain. Static gestures are tested for one-minute comfort; moving gestures are tested for repeatability across a five-minute interaction window. Anything that introduces wrist strain or shoulder fatigue is rejected before it reaches the recognition model.
Each candidate gesture is benchmarked against the computer-vision model under three real-world conditions: standard office lighting, low-light environments, and high-contrast / backlit setups. The recognition rate has to clear a fixed threshold under all three before the gesture moves to user testing — otherwise the feature creates more friction than it removes.
Users have to remember the gesture without instructions after the first successful trigger. We tested this with first-time users by walking them through onboarding once, then waiting 48 hours and asking them to perform the same shortcut blind. Gestures with low recall rates were either renamed, re-illustrated in the library, or replaced.
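Pulled together, the shipping gate could be encoded roughly as below. The three criteria and the lighting conditions come straight from the process above; the numeric thresholds are placeholders, since the source only states that a fixed threshold exists.

```typescript
// Hypothetical shape of a gesture's evaluation record. The criteria and
// lighting conditions mirror the process described above; the numeric
// thresholds are placeholders, not the real cutoffs.

type Lighting = "office" | "low-light" | "backlit";

interface GestureEvaluation {
  ergonomicsPassed: boolean;                 // 1-min hold (static) or 5-min repeatability (moving)
  recognitionRate: Record<Lighting, number>; // 0..1, benchmarked per lighting condition
  recallRate: number;                        // 0..1, fraction of users recalling after 48 hours
}

const RECOGNITION_THRESHOLD = 0.95; // placeholder
const RECALL_THRESHOLD = 0.8;       // placeholder

// A gesture ships only if it clears all three criteria.
function canShip(evaluation: GestureEvaluation): boolean {
  const recognitionOk = Object.values(evaluation.recognitionRate)
    .every((rate) => rate >= RECOGNITION_THRESHOLD);
  return evaluation.ergonomicsPassed
    && recognitionOk
    && evaluation.recallRate >= RECALL_THRESHOLD;
}
```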
The split between moving gestures and static gestures wasn't a visual design decision; it emerged from the data. Users predicted directional shortcuts (scroll, switch, navigate) for moving gestures and discrete commands (play, mute, screenshot) for static ones with near-perfect accuracy. The taxonomy simply formalized a hierarchy users already had in their heads.
The mobile app was prototyped in Figma with native animations, then validated against a working desktop integration before the final UI was locked.
The hardest interaction to design wasn’t the library itself — it was the bridge moment between the phone and the computer. Users needed to feel that their gesture, performed in front of the laptop camera, was actually being recognized and translated to a shortcut. We solved it with a real-time confirmation surface inside the app that mirrors recognition events from the desktop — visible feedback within 200ms of the gesture being captured. Once that loop was tight, the perceived value of the product clicked into place.
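A minimal sketch of the phone-side half of that loop, assuming the desktop broadcasts recognition events over a local WebSocket. The endpoint, message shape, and timestamp handling are all assumptions for illustration.

```typescript
// Hypothetical phone-side mirror of desktop recognition events.
// The URL and message shape are assumptions, not the real protocol.

interface RecognitionEvent {
  gestureId: string;
  capturedAt: number; // epoch ms, stamped by the desktop at capture time
}

function mirrorRecognitionEvents(
  url: string, // e.g. a local endpoint the desktop app exposes (hypothetical)
  showConfirmation: (gestureId: string, latencyMs: number) => void,
): WebSocket {
  const socket = new WebSocket(url);

  socket.onmessage = (msg: MessageEvent<string>) => {
    const event: RecognitionEvent = JSON.parse(msg.data);
    // Render feedback immediately; capture-to-confirmation has a 200ms
    // budget. The latency math assumes the two clocks are roughly in sync.
    const latencyMs = Date.now() - event.capturedAt;
    showConfirmation(event.gestureId, latencyMs);
  };

  return socket;
}
```

The important property is that the confirmation is driven by actual recognition events from the desktop, not by an optimistic animation on the phone; that is what makes the loop feel trustworthy.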
“If a shortcut takes longer to set up than it saves, the product fails. Every screen had to defend its existence.”
The Gestoos App taught me that gesture UX is half taxonomy.
Designing the interaction is the visible part; structuring the library so users can learn, remember, and predict gestures is the part that actually decides whether the product survives daily use. Splitting moving and static gestures was a one-line decision that did most of the work — it turned a flat, intimidating list into two small, learnable groups.
The other lesson was about onboarding restraint. Mobile companions for hardware-adjacent products are usually over-explained. Stripping the welcome flow down to a single value statement — and trusting users to figure out the rest by doing — produced a much shorter path to the first “this is useful” moment.