Miskatonic University Press

AWE 2014 Thoughts

AR, conferences

Here’s the fifth (first, second, third, fourth) and final post about Augmented World Expo 2014.


Context and storytelling were two common themes at the conference. I think even Ronald Azuma, whose 1997 definition of AR still stands (“1. Combines real and virtual; 2. Interactive in real time; 3. Registered in 3-D”), mentioned context as a key ingredient—you can certainly have a system that meets those three criteria but is still uninteresting or unusable.

I get the sense that for all the innovation and research going on in AR right now, vendors see two ways to make money: marketing and industry. No one normal is pulling out their phone and calling up an AR view of what’s around them. The hardware makes it a bit of a chore, the software is siloed, and most of the content isn’t particularly interesting or could be seen better on a map.

With advertising and marketing—through “interactive print” or enhanced billboards, posters and catalogues—there’s a reason, a need, for someone to run the AR app: enhanced product information, a chance to win something, etc. The user chooses to use AR because they get something.

With industrial applications, common examples are finding things in a warehouse (imagine yourself looking for something on those shelves at the end of your odyssey through Ikea) or working on engines and other complex machinery. The user wants to wear the glasses or use the app, or is told to, because it makes work easier or makes them more productive.

Things will change. The research and new technology and the context and storytelling will make better AR. But it will be a long while before regular folks are wearing AR glasses daily.

Palm Drive, leading to Stanford University
After the conference I made a quick trip to Stanford University to look around. This is the Palm Drive approach. Beautiful campus!


“Got about 45 seconds to argue about privacy,” Robert Scoble said in his keynote.

I was amazed by the lack of discussion of privacy at the conference. Perhaps I shouldn’t have been. The vendors don’t want to talk about it. Why would they? It would just turn people off. Will there be more at the academic conference this summer, ISMAR 2014? I hope so.

The closing panel discussion, “The 3 ‘P’s of the Future Augmented World—Predictions, Privacy, & Pervasiveness,” had some good commentary in it—and some I’d take strong exception to—but the privacy discussion was all centred on people observing other people. Rob Manson tried to change the direction by asking a question about when the problem is “deeper than wearables: biological, like pacemakers” and was cavalierly dismissed: right now cars can be hacked, planes can be hacked, they’re all computers, everything can be hacked, that’s old news. I wish Karen Sandler had been there to speak to that: she’s an expert on free software, she has an implanted device regulating her heart, and she wrote Killed by Code: Software Transparency in Implantable Medical Devices.

“I want to help frame the conversation before someone else frames it for us,” said Robert Hernandez. The conversation has been framed. It was framed by Edward Snowden and his whistleblowing revelations.

There are two sides to this in augmented reality.

First, what the companies know. Running an app can mean sharing location information, the camera view, and more. Who are the companies, what are their privacy policies, are they using encryption on all communications, how secure are their servers? Privacy on smartphones is a nightmare anyway; installing proprietary apps from unfamiliar companies and granting them access to who knows what so you can see an auggie on a movie poster just makes it worse.

Second, what the spy agencies know. Assume that every AR device has been broken by them. We know they can listen through your device’s microphone without you knowing. We know they’ve got back doors and taps into pretty much everything. Assume they’ve broken Glass and any other wearable, and they can listen to the microphone and see the camera feed whenever they want without the wearer knowing. They know where the wearer is, they know what’s being seen and heard, they’ve got it all, and they can tie it in with everything else they know. This is what everyone in AR needs to remember and to work against.

You don’t give up easily on this. You fight it, because privacy is a right and we need to defend it. What “privacy” means may be changing, but it doesn’t include the state recording everything everyone sees from head-mounted cameras.

What can we do about this? How can we use Tor for AR? Who’s working on privacy and AR?

Staircase in a Stanford library.
Interesting staircase in Stanford library stacks.

Free as in freedom

Part of the solution to the privacy problem is free software. There is little free software in AR right now. The AR Standards Community is working on open standards and interoperability, which are crucial to all of this work, but it’s not just the standards that need to be free and open: the implementations do too.

The only FOSS AR browser I know of is Mixare (there is also an iOS version), which is along the lines of Layar or Junaio. It’s under the GPL v3. The project’s gone dormant, but the code’s there if people want to go back to it.

There are a number of point-of-interest providers for Layar and similar browsers out there, like my own Avoirdupois (GPL v3), a fairly straightforward web service written in Ruby with Sinatra. It would be simple to make it feed out POIs in ARML or any other open format.
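Under the hood, a POI provider like this does one simple thing: take the latitude and longitude from the browser’s request, find the points within some radius, and send them back as JSON. Here’s a minimal sketch of that lookup in JavaScript; the POI data and field names are invented for illustration, and this is neither the actual Avoirdupois code nor the exact Layar response schema.

```javascript
// Sketch of the lookup at the heart of a geolocation POI service:
// given the user's position, return nearby points of interest.
// POI data and field names are made up for illustration.

// Great-circle distance in metres between two lat/lon points (haversine).
function distanceMetres(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// A stand-in POI table; a real service would query a database.
const pois = [
  { id: 1, title: "Library stacks", lat: 37.4275, lon: -122.1697 },
  { id: 2, title: "Palm Drive", lat: 37.4363, lon: -122.1697 },
];

// POIs within `radius` metres of the user, nearest first -- the kind
// of answer an AR browser like Layar asks a provider for.
function nearbyPois(userLat, userLon, radius) {
  return pois
    .map((p) => ({ ...p, distance: distanceMetres(userLat, userLon, p.lat, p.lon) }))
    .filter((p) => p.distance <= radius)
    .sort((a, b) => a.distance - b.distance);
}
```

A real provider wraps this in a web endpoint and serializes the result in whatever format the browser expects, but the distance filter is the heart of it.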

There is OpenCV for computer vision: “OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products.” It’s under the three-clause BSD license.

But the best possibility out now is awe.js, the “jQuery for the Augmented Web” (MIT License). I wrote about it back in January. Here’s a nice demo video.

That’s great work! Check out the other videos BuildAR has made showing the augmented web in action.

I’ll be hacking on awe.js, and I hope others will too. Build more free and open source augmented reality software! Server-side, client-side, libraries, apps, whatever. The quickest route is through the web, the way awe.js is doing it. Web browsers can get access to most (soon all) of the sensors and data sources on a device (with permission from the user) and do AR in a browser window. Amazing! The standards that let the browser talk to the device are free and open, the standards that make the web work are free and open, and the software that builds the browser is (I hope) free and open. Then the browsers can be deployed on any platform: smartphone or tablet or glasses or gesture-recognition system. Everything has a web browser in it. The platforms are probably proprietary, but that may change.
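To make the sensor-access point concrete, here’s a sketch of the geometry a browser-based AR view needs once it has those readings. In a real page the position would come from the Geolocation API (navigator.geolocation.watchPosition()) and the compass heading from the deviceorientation event, both gated behind user permission; below they arrive as plain numbers so the maths stands on its own. This is an illustration, not how awe.js itself is implemented.

```javascript
// The geometry behind a geolocation AR view in the browser.
// Inputs (user position, device heading) would come from permission-gated
// web APIs; here they are plain numbers for illustration.

const toRad = (d) => (d * Math.PI) / 180;
const toDeg = (r) => (r * 180) / Math.PI;

// Initial bearing from the user to a POI, in degrees clockwise from north.
function bearingTo(userLat, userLon, poiLat, poiLon) {
  const lat1 = toRad(userLat);
  const lat2 = toRad(poiLat);
  const dLon = toRad(poiLon - userLon);
  const y = Math.sin(dLon) * Math.cos(lat2);
  const x = Math.cos(lat1) * Math.sin(lat2) -
            Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// Should the POI be drawn? True when its bearing falls inside the
// camera's horizontal field of view, given the device heading
// (0 = facing north), handling the wrap-around at 360 degrees.
function inView(heading, bearing, fovDegrees) {
  let diff = Math.abs(bearing - heading) % 360;
  if (diff > 180) diff = 360 - diff;
  return diff <= fovDegrees / 2;
}
```

An AR view runs something like this for every POI on every orientation update, then projects the visible ones onto the camera feed.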

The augmented web is a good idea for two reasons. First and more practically, when the web hackers get on something it will grow quickly. You’ve got to be pretty devoted to write an app for Android or iOS, and the entire culture there seems to encourage people to keep the work proprietary. But hacking some web stuff, that’s much easier and much more fun, and the environment is much freer and more open. When someone makes a WordPress plugin or Drupal extension to make using awe.js drag-and-drop easy, bingo, instant widespread deployment. Second and more fundamentally, we all need to control and own this ourselves. We can’t give away so much information and power over our daily lives to organizations.

I want glasses or lenses I can wear when I choose, to see what I want. I don’t want a corporate-branded view of life, knowing everything around me is being fed back to the spy agencies.