by Rom Feria
At the recently concluded TED Fellows Retreat 2013, held at the Fairmont Chateau in Whistler, British Columbia, Canada, I had the privilege of attending a Google Glass workshop. The workshop, attended by about a dozen TED Fellows from the class of 2009 to the present, was conducted by Timothy Jordan and Sidney Chang of Google. It was actually a design workshop where TED Fellows tried to hack Google Glass and think up possible Glassware, or Google Glass apps.
Hacking Google Glass is not possible without first using the device, so each of the TED Fellows had the opportunity to play with one before the workshop formally started. Everything you read about Google Glass is true. However, first-hand experience is something you need before you can give a sound critique of the future of human-device interaction.
The first time I wore Google Glass, I was a skeptic — I had reservations that it wouldn't actually work for me, considering that my right eye has astigmatism (yeah, a disability that took me awhile to admit). Whilst I knew that Google was working on using Glass with prescription glasses, the units we had at the retreat did not come with that option. However, it was explained (and shown) that it is not that difficult to tweak the device — there are small screws that allow you to remove the frame and put the actual Glass over prescription glasses. Not so bad.
The display sits just above your line of sight. Whilst I admit that it is not distracting, for someone who is still getting used to wearing reading glasses, it is a similar experience: not that comfortable, but you get used to it.
The Glass display surprised me — I thought it would be difficult for me to read the text, but I suddenly forgot that I had a disability. The text was clear and crisp, and the display was pleasantly bright. Images don't really pop like they do on a Retina display, but they are acceptable.
The voice interface is similar to Android's. It surprised me that the basic Glass voice commands were immediately processed despite the different accents of the TED Fellows — believe me, we are a diverse group from all over the world. However, just like Android's voice processing, it is not always accurate. I said "OK Glass, Google Boracay images" and Glass showed me photos of Barack Obama.
"OK Glass, take a picture" — this is immediate and was indeed a pleasant experience. The downside, however, is that framing the shot requires a little effort, though it is not impossible. The image appears on the display and is immediately uploaded to Google+ (don't worry, it is tagged as private). If there is no connection, it is stored in the device's 12GB of user-accessible storage (another 4GB is reserved for the system).
Streaming video via Hangouts, however, gives the other end a feed of what you are seeing, rather than a view of who they are talking with. Impersonal, I know — maybe you need to look in a mirror when you do Hangouts. For demos, or for sharing what you are seeing, though, this is indeed perfect.
I wish there were a way to delay the sensor that detects whether or not you are wearing the device, to keep it from turning itself off — that would at least give you an opportunity to turn the camera around to face you during Hangouts.
Google Glass has a touch interface as well. No, you don't touch the display — you touch the right temple bar of the device. You swipe up, down, forward and back to cycle through different cards (the way information is presented, not dissimilar to Google Now) and tap to select. Whilst this is definitely a better interface than voice commands — which are not ideal in public, or you risk looking like a dork talking to yourself (yeah, same as that Bluetooth headset user) — it is still limited.
Google Glass has lots of potential as a new wearable computing platform.
Whilst I am interested in getting a pair, even if it means paying USD1,500 and flying to the US to pick it up, I still find that it is not quite there yet, if you know what I mean.