Hacker News

Your third complaint can be resolved by placing the device lower than your elbow. Most people use it on the desk, which is the problem. It's not the ideal height for the Leap.

When it's higher than your elbow, your shoulder must stay engaged to power all your movements. There are also many apps that map directly from x/y/z space, and these can be hard to navigate: all your joints move in arcs, not straight lines.

You get to use the Leap however you want, so figure out what's comfortable for yourself.

The Leap is quite sensitive. You could look for tiny flicks of the finger, for instance, to create a virtual DataHand. These kinds of interfaces should be much gentler on the body than apps that require large sweeping motions, punches, etc.

https://en.wikipedia.org/wiki/Datahand



I don't think many consumers will be OK with hunting for the optimal place to put a Leap on their desks. It's a forced behavior that doesn't address any specific need.

Leap has a long way to go in defining a use case for the technology: something it does better than existing interfaces. Otherwise it will be a really cool niche product used by tech enthusiasts, not something a heavily VC-backed firm should be targeting.

Also, there's a reason DataHands never really took off, and it's the same reason the Leap might not: changing human behavior is ridiculously hard, and pouring money into a developer ecosystem (which is how Leap is doing it) is not the way to do it.


I personally think it should be an additional input alongside whatever you already have, for your desktop or laptop.

I'm planning on creating my own laptop startup and eventually hope to integrate the Leap (or something similar). It may not apply to general usage, but there are many use cases where it would be preferred.

1) Cases where the user is visually engaged with a GUI. Imagine I'm on YouTube; because of the instability of HTML5 and Flash on Linux, I hate using the player controls, especially when the video is fullscreen. That's something a gesture could replace: do a pinch gesture (basically a zoom out) and the browser could interpret that as putting the video in full screen.

2) Rather than using your whole arm, what about just one finger? Let's say you want to scroll down and both hands are on the keyboard; rather than moving one hand back to the mouse, you could just flick one finger. This gesture could scroll pages or flip through tabs, and you wouldn't even have to take your hands off the keyboard.

3) There are also a lot of edge cases where using a keyboard/trackpad is uncomfortable. When I'm lying down, using a trackpad is very awkward and strains my wrist. That's exactly the kind of situation where I'd want to flick some stuff around, maybe even talk to the computer.

4) My parents connect their laptop to the TV via an HDMI cable, and that's all they use it for: basically watching YouTube and Netflix. If the Leap had the range, it's something I'd want to control with it, almost like an imaginary but intuitive remote.

And this is just the start. I think, with the increasing number of input devices available, interfaces are going to have to be clever about hiding that complexity from the user and making things intuitive.
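The flick-to-scroll idea in (2) is easy to prototype. Here's a minimal sketch in plain Python of detecting a finger flick from fingertip position samples and mapping it to a scroll action. All names and the speed threshold are hypothetical; a real app would read frames from the Leap SDK rather than a list of (time, y) tuples.

```python
FLICK_SPEED = 300.0  # mm/s; assumed threshold, tune per user


def detect_flick(samples):
    """Return 'down' or 'up' if any pair of consecutive fingertip
    samples exceeds the speed threshold, else None.

    samples: list of (t_seconds, y_mm) tuples, oldest first.
    """
    for (t0, y0), (t1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip out-of-order or duplicate timestamps
        speed = (y1 - y0) / dt
        if speed <= -FLICK_SPEED:
            return "down"
        if speed >= FLICK_SPEED:
            return "up"
    return None


def gesture_to_action(gesture):
    # Map the recognized flick to a UI action, e.g. scrolling a page.
    return {"down": "scroll-down", "up": "scroll-up"}.get(gesture)


# A fast downward motion: y drops 20 mm in 20 ms (1000 mm/s).
samples = [(0.00, 250.0), (0.02, 230.0), (0.04, 228.0)]
print(gesture_to_action(detect_flick(samples)))  # scroll-down
```

The same dispatcher could route other gestures (pinch, swipe) to other actions, which is the "hide the complexity" part: the app sees actions, not raw hand positions.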


DataHands cost hundreds of dollars; have huge brick interface boxes, cables everywhere, and a mains power brick; don't let you type one-handed while you drink, eat, mouse, or hold a phone; take up a lot of space per hand unit; and look weird.

But in terms of changing behaviour, they're pretty much a QWERTY keyboard. In use, they're really not so different at all. I would be very surprised if that were a significant reason why they 'never really took off'.



