Looking Beyond Google Glass
Andrew Sullivan points to Sam Biddle’s doubts about Google Glass:
Google thinks that we’re all ready to walk around wearing the things. Which, you know, we aren’t. Not everyone is a somewhat eccentric data maven billionaire, too rich and lost in a world of giant ideas to care how he looks walking around all day. Are you? This presents not just a leap in technology, but in culture.
For many of us, a computer that’s literally sitting on our face for every waking moment sounds really socially alienating. Say what you want about the punctures smartphones have delivered to everyday interaction, but at least those things go back to our purses and pockets when they’re not being used. But Glass is there to stay, becoming a part of your face, and turning you into a baby-scaring cyborg.
India’s Andla Aligarh helpfully provides a gallery imagining celebrities including Justin Bieber, Lady Gaga, and Barack Obama wearing Google Glass. Not a one would scare any baby I know.
The screen of Project Glass sits off to the side, clear and unobtrusive. … When technology gets out of the way, we are liberated from it. Wearable computing will free us from peering at life through a 4-inch screen. We will no longer have to constantly look at our devices, but instead, these wearable devices will look back at us.
I’ve been eager for something like Google Glass since the dawn of the millennium, when I read this prediction for 2019 from Ray Kurzweil in The Age of Spiritual Machines [p.142]:
Computers are now largely invisible. They are embedded everywhere—in walls, tables, chairs, desks, clothing, jewelry, and bodies.
People routinely use three-dimensional displays built into their glasses or contact lenses. These “direct eye” displays create highly realistic, virtual visual environments overlaying the “real” environment. This display technology projects images directly onto the human retina, exceeds the resolution of human vision, and is widely used regardless of visual impairment. The direct-eye displays operate in three modes:
1. Head-directed display: The displayed images are stationary with respect to the position and orientation of your head. When you move your head, the display moves relative to the real environment. This mode is often used to interact with virtual documents.
2. Virtual-reality overlay display: The displayed images slide when you move or turn your head so that the virtual people, objects, and environment appear to remain stationary in relation to the real environment (which you can still see). Thus if the direct-eye display is displaying the image of a person (who could be a geographically remote real person engaging in a three-dimensional visual phone call with you, or a computer-generated “simulated” person), that projected person will appear to be in a particular place relative to the real environment that you also see. When you move your head, that projected person will appear to remain in the same place relative to the real environment.
3. Virtual-reality blocking display: This is the same as the virtual-reality overlay display except that the real environment is blocked out, so you see only the projected virtual environment. You use this mode to leave “real” reality and enter a virtual reality environment.
In addition to the optical lenses, there are auditory “lenses,” which place high-resolution sounds in precise locations in a three-dimensional environment. These can be built into eyeglasses, worn as body jewelry, or implanted in the ear canal.
Keyboards are rare, although they still exist.
From a Wired interview with project head Babak Parviz and product manager Steve Lee, here’s Lee:
It’s my expectation that in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.
It looks like Google is keeping us right on Kurzweil’s schedule.