
Changing How We Interact With Computing Devices


There’s been a bit of excitement around the recently announced Leap Motion device. The Leap allows for an inexpensive, accurate computer interface using gestures in mid-air. It immediately makes one think of Tom Cruise in Minority Report, or Tony Stark’s computer in The Avengers: a wave of the hand shuffles images around the screen, a twist spins them in 3D. It sounds pretty amazing, and by the end of this year we may be doing the same thing (sans the funky two-fingered gloves; we’ve been assured they aren’t needed).
I don’t know about you, but to me it definitely feels like it’s time for a bit of a revolution in how we interact with computers, and electronic devices in general. It just isn’t practical to have a keyboard and mouse for everything, and as popular as touch-sensitive tablets like the iPad are, they have some inherent flaws as well. When you have to touch something, you obstruct your own view, and the screen gets dirty and smudged. Not to mention that on-screen keyboards take up half of the available viewing real estate. On the non-mobile side, there have been numerous attempts to integrate touch into desktop devices, and it’s fairly apparent that users do not want to reach forward and touch their monitors.
It’s time to get the interface out of the device
That could mean in-air gestures. The Leap is definitely at a price point (a promised $70 per unit) that could permit embedding into other devices. Imagine for an instant: you’re cooking from a recipe displayed on an electronic device, your hands covered in flour and oil. With a mid-air gesture, you turn the page without ever touching the screen.
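To make that concrete, here is a minimal sketch of what such a page-turning handler might look like in the browser. It assumes a JavaScript client library along the lines of what Leap Motion has demoed (a global Leap object that streams frames containing gesture data); the exact shape of the gesture objects and the nextPage/previousPage helpers are my own assumptions, not a shipped API.

// Assumed entry point: polls the device and invokes the callback once per frame.
declare const Leap: {
  loop(
    options: { enableGestures: boolean },
    callback: (frame: { gestures: LeapGesture[] }) => void
  ): void;
};

interface LeapGesture {
  type: string;                        // e.g. "swipe" or "circle" (assumed)
  state: string;                       // "start" | "update" | "stop" (assumed)
  direction: [number, number, number]; // unit vector of the hand's motion
}

// Hypothetical helpers that advance or rewind the recipe view.
function nextPage(): void { /* ... */ }
function previousPage(): void { /* ... */ }

Leap.loop({ enableGestures: true }, (frame) => {
  for (const gesture of frame.gestures) {
    // Only act once a swipe has finished; its dominant horizontal
    // direction decides which way to turn the page.
    if (gesture.type === "swipe" && gesture.state === "stop") {
      if (gesture.direction[0] > 0) nextPage();
      else previousPage();
    }
  }
});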
Beyond the Leap, there have been a number of demos of technology that can turn any surface into an input device, such as the Celluon Magic Cube: set the device down and it projects a working keyboard onto the desk itself. Microsoft’s Kinect is being used to turn entire walls into touchscreens.
In addition to gesture and touch, we can talk to our devices. Apple and Google have both built voice recognition into the latest versions of their mobile operating systems. It reminds me a bit of the scene from Star Trek IV when Scotty attempts to interface with an old Macintosh by talking into the mouse. While the current implementations often leave a bit to be desired, vocal commands could be a feasible means of controlling a computer in the not-so-distant future. Voice control is, by its nature, loud and fairly public, so I’m not much of a fan myself, but it could fill a viable niche.
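For the web in particular, the draft Web Speech API hints at what vocal commands on a page could look like. Here is a minimal sketch, assuming a browser that exposes the WebKit-prefixed webkitSpeechRecognition constructor (support is still emerging, hence the feature check); the command phrases and the page-turning helpers are illustrative stand-ins.

// Grab the prefixed constructor if this browser provides it.
const SpeechRecognitionCtor = (window as any).webkitSpeechRecognition;

// Illustrative stand-ins for whatever the page action would be.
function nextPage(): void { /* ... */ }
function previousPage(): void { /* ... */ }

if (SpeechRecognitionCtor) {
  const recognizer = new SpeechRecognitionCtor();
  recognizer.continuous = true;      // keep listening for successive commands
  recognizer.interimResults = false; // act only on final transcripts

  recognizer.onresult = (event: any) => {
    const latest = event.results[event.results.length - 1];
    const transcript: string = latest[0].transcript.trim().toLowerCase();
    // Map a few spoken phrases onto page actions.
    if (transcript.includes("next page")) nextPage();
    else if (transcript.includes("previous page")) previousPage();
  };

  recognizer.start(); // the browser will prompt the user for microphone access
}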
As for other control methods, Google is exploring eye and head gesture control with its Google Glass project. There are also devices that use basic electroencephalography (EEG) to “read” a person’s thoughts and let them control a video game just by concentrating on what they want to happen.
I don’t think we’ll see the end of the mouse and keyboard for general use, and particularly business use, for a while. Regardless, we should start planning for alternative input devices. I expect we’ll see them as secondary inputs at first, enhancing our ability to interact with and control our computers without replacing the need for the mouse or keyboard. We are already a mobile computing society, and alternative control methods will enhance our ability to use computers on the move, leading to smaller and more embedded computing devices as well.
I could go on at length about the possibilities of freeing our electronics from a physical control mechanism; it’s fun to imagine. It is also an important exercise: we need to anticipate how users will interact with the websites we build, and the control interface is a vital part of that experience. What do you think? What uses do you see for some of these new interface technologies?
