
I've been a Google Glass skeptic. But I just got back from Mexico, where I was walking all over waving my phone at various signs so Word Lens could translate them for me... skeptic no more! Word Lens is a killer app for the platform. Except now I see that there's no API to access the camera. Seems like a huge mistake.


One of their example API uses includes users taking photos with the built-in camera and sharing them with your service. See "add a cat to that": https://developers.google.com/glass/stories


If the app has to take a photo at the user's behest every time a word needs to be translated, it's going to get very clunky very fast.

The whole point of technologies like Glass is that they should be as unobtrusive as possible and just work by themselves when you need them to.


No camera API access? That's ridiculous! Not only does it cut off most of the possible usefulness of a head-mounted computer (so it's basically just a fancy news-feed display), but it'll let competitors move into the arena with a clear advantage. The only reason I can see for Google doing this is resource allocation, but image processing could be offloaded to a linked smartphone if necessary.


It looks like there will be one standard way to take photos, and you'll register an Intent to handle what you want to do with those photos (see "Add a cat to that"). Considering people will be wearing these things practically 24/7, it's not unreasonable that Google wouldn't give arbitrary apps shutter control right away.
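For what it's worth, on stock Android the "receive a photo someone else took" flow is just an intent filter on ACTION_SEND; if Glass follows the same convention (an assumption on my part, since the Glass docs don't spell this out), a handler would look roughly like this manifest fragment:

```xml
<!-- Hypothetical sketch: declares an activity that receives a shared photo.
     Assumes Glass reuses the standard Android ACTION_SEND sharing mechanism;
     activity name and label are made up for illustration. -->
<activity
    android:name=".ReceivePhotoActivity"
    android:label="Add a cat to that">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="image/*" />
    </intent-filter>
</activity>
```

The activity would then pull the photo URI out of the Intent's EXTRA_STREAM extra. Note that this only gives you photos after the user has taken and shared them, which is exactly why it doesn't help the Word Lens use case above: there's no continuous frame access.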



