Augmented reality (AR) is not new, but it is re-emerging and taking a new form. Shaped by new entrants and approaches to how AR can be used, headset-based AR is just waking up—and as it opens its eyes, it finds itself squarely in the land of enterprise.
First, let’s get clear on what augmented reality looks like (using enterprise as an example).
Industrial-use augmented reality right now: a building contractor “sees” into the walls to know where to lay the wiring.
Industrial-use augmented reality in 10 years: the whole crew walks into the holographic building to virtually lay out the wiring, plumbing, lighting, and Wi-Fi in real time—and check for potential problems before they build. All with a few waves of the hands.
The first is brand-new and still emerging. The second is coming faster than you may imagine.
AR for business: Interesting shifts in research and gear
Which raises the questions: why is augmented reality focusing on business rather than consumers? What is bubbling up in AR that is worth paying attention to?
A lot has happened in the past few months.
At CES in early January, Intel unveiled its entry into the space: the Daqri Smart Helmet, a set of glasses built into a helmet that also includes Intel’s RealSense 3D camera for depth perception. What it offers, in effect, is X-ray vision that lets you see how objects work.
Essentially what Intel is envisioning is a tool that helps people like plumbers, carpenters, construction workers, oil and gas workers, and airplane mechanics see into—and fix—things.
This setup, which includes 360-degree sensors, an Intel Core M7 processor, and Daqri’s computer vision and navigation technology, is billed as the “most powerful augmented reality device on the market.”
But Intel isn’t the only company playing in the space. In November, Microsoft announced the HoloLens partnership with Volvo, and PTC acquired augmented reality platform Vuforia. In January, a company called the Osterhout Design Group began shipping a $2,750 set of smart glasses targeted primarily toward enterprise customers in fields including automotive, medical, and entertainment. And last week, the MIT Technology Review published an article about an AR experiment to reshape remote communication—projecting a life-sized person into the room while you talk, creating what is essentially immersive Skype.
Augmented or Virtual: Depends on what you want to do
In theory, the distinctions between virtual and augmented reality are clear. Virtual reality takes you into the digital world. Augmented reality pulls the digital world into your reality, weaving digital images onto and into everything.
In practice, it isn’t that simple, and it would take more than a few sentences to explain. Helen Papagiannis, an expert with a Ph.D. in augmented reality, succinctly sums up her view of the differences between AR and VR in “Designing beyond screens to augment the full human sensorium,” if you'd like to read more. For our purposes here, it suffices to say: the new breed of AR systems still relies on VR headsets—like the Oculus—and many of the people who play in one space play in both.
While a lot of virtual reality growth is coming from gaming, AR is starting with business. The reason makes a lot of sense: for AR to work well in business, you need a use case with clearly defined requirements.
Todd Harple, Intel experience engineer/innovation lead in Intel’s New Devices Group—and the man who led several of the company’s VR and AR research projects—explains:
“Over the last year or two, AR has taken a turn toward the business side of things. That’s because it takes a tight vertical to make it work effectively. We purchased Recon last year, and a lot of their use cases are tight verticals. Recon Jet was about cycling—that enables you to build the device with only what is necessary for cycling. And it gives you a clear understanding of the physical and linguistic vocabulary, as opposed to ‘I have a telephone that can do everything on my eyes.’ Field service and equipment inspection are similar. You can [program the system to] have a clear understanding of what is in the walls because there’s a CAD drawing somewhere.”
Which is to say: you can’t program a hologram to work well in a space unless you understand what is in that space, what people do there, and how it all works together.
For instance, computer vision systems are currently great at understanding that a sofa is rectangular. But they are not great at understanding that the sofa is covered with a material that should squish down when someone sits on it. And in the case of enterprise, you can only create an AR system for picking items in a warehouse when you understand exactly what is in that warehouse, how it is organized, and what is there at any given time.
“The promise of the new breed of AR systems is that they can place content into a world in the way that it seems like it’s natural to that world,” says perceptual neuroscientist Beau Cronin. “From my point of view, the more interesting challenge is that if you are going to put that content out into the world, you need to understand the world you’re putting it into.”
How can AR help business?
How can augmented reality help business people? In any number of cool ways.
Imagine you see your new house floating in front of you. You walk up to (or into) it, do a tour, and approve your architect’s plans on the spot. You can ask for changes, like a glass wall in the bathroom so you can see the green space outdoors. It’s a lot cheaper to check things out before you build.
Imagine you are an artist who wants to hang a show in a well-known gallery. You can interact with a holographic representation of that space and test out placement of your paintings. Should this one go on the left wall or the right? Is there enough space on the floor in the back for your sculpture? Or do you want to flip everything around? With AR, you will be able to test any of this in seconds.
Or let’s say you are planning a pop-up park where shipping containers serve as coffee shops and restaurants. Before you move in the heavy equipment, you can use your left pointer finger to move a 6,000-pound shipping container to one of the parking spots, use your right pointer finger to stack two containers on top of each other, and use both hands to sweep everything to the sides, just as you’d use two fingers on an iPhone. Imagine that kind of power to visualize and control the world.
The new Augmented Reality for Enterprise Alliance imagines use cases similar to these as well as many others, from warehouse picking to emergency response to aircraft technician training to product design.
All of which hint at the most important bit of this: the moment when the screen goes away altogether. The moment when things appear Harry Potter-style in the real world. The moment when we are walking around in the Internet.
And the company to watch for that: Magic Leap.
Why this is one of the most important patents you will ever read
What most people haven’t noticed: Magic Leap has been quietly publishing a lot of patents over the last few months.
In November and December 2015, there was a flurry of activity (you can peruse the whole list to date). Magic Leap is exploring things like:
- Finding new points by rendering (based on locations from AR data) rather than searching.
- Creating and using topological maps.
- Rendering an avatar for a user in an augmented or virtual reality system.
- New ways of creating focal planes and virtual content displays.
- Ambient light compensation.
In short: they are tackling interesting and difficult problems.
One of Magic Leap’s most interesting patents is the one from October 2015. The full patent runs 180 pages. Mark Wilson at Fast Company deconstructed the patent, pulling images with a funny take on some of the things it does. But for the purposes of this conversation, the most important thing to note is how they’re approaching AR.
In their patents, the working mechanisms of their systems are described as things like “Volumetric phase type diffractive elements [which] are used to offer properties including spectral bandwidth selectivity that may enable registered multi-color diffracted fields, angular multiplexing capability to facilitate tiling and field-of-view expansion without crosstalk, and all-optical.”
In English, that means: bending and weaving light—working with particles and waves, the building blocks of physics, the stuff we are all made of. While everyone else is playing primarily with hardware, Magic Leap started by playing with light. That is what will shift everything.
While some say their promises are unproven, what they are delivering is a vision for the future. Look at their tiny virtual elephant. Or the whale jumping out of a gym floor. Or the demo of a miniature galaxy floating inside an office. It is fun and inspiring. And it does represent where this category is going.
As Bill Gates said in his book The Road Ahead: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10.”
We will start with enterprise AR. We will end up in a sensor-enabled screenless future that will feel normal, but right now sounds like magic.
But ignore the hype—we’re not going there immediately.
Jumping the AR hurdles
In order to move the space forward, augmented reality devices have some big hurdles to overcome.
First, there are the technical challenges around graphical horsepower: rendering convincing AR demands higher resolution than today’s hardware delivers, and to get good virtual images, you need at least four times the frame rate of a typical game.
Second, there is the challenge of latency. Beau Cronin puts it this way: “If I have something on my head, decisions about what to display have to be made in milliseconds. So it has to be done on the device itself because there isn’t time to send it to the cloud and back."
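To make Cronin’s point concrete, here is a minimal back-of-the-envelope sketch of a head-tracked rendering pipeline’s latency budget. All the stage timings and the 20 ms motion-to-photon target are illustrative assumptions, not measurements from any specific headset or network:

```python
# Back-of-the-envelope latency budget for head-tracked AR rendering.
# All numbers are illustrative assumptions, not vendor specifications.

MOTION_TO_PHOTON_BUDGET_MS = 20.0   # commonly cited comfort target
SENSOR_READ_MS = 2.0                # read IMU / camera data
POSE_ESTIMATION_MS = 3.0            # fuse sensors into a head pose
RENDER_MS = 8.0                     # draw the frame on-device
DISPLAY_SCANOUT_MS = 5.0            # push pixels to the panel

CLOUD_ROUND_TRIP_MS = 60.0          # optimistic wireless round trip

def total_latency(use_cloud: bool) -> float:
    """Sum the pipeline stages; optionally add a cloud round trip."""
    base = (SENSOR_READ_MS + POSE_ESTIMATION_MS
            + RENDER_MS + DISPLAY_SCANOUT_MS)
    return base + (CLOUD_ROUND_TRIP_MS if use_cloud else 0.0)

print(f"on-device: {total_latency(False)} ms "
      f"(budget {MOTION_TO_PHOTON_BUDGET_MS} ms)")
print(f"via cloud: {total_latency(True)} ms")
```

Even with these optimistic assumptions, routing each frame’s decision through the cloud blows the budget several times over, which is why the display logic has to live on the device itself.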
There is lighting to get right—moving from indoors to outdoors is a challenge for headsets.
There are mirrors to solve for—reflective surfaces pose a challenge for current computer vision systems. As Cronin says, “[Mirrors] make it harder for machines (and us) to understand the true structure of the world.”
Also, according to an excellent MIT Technology Review article, there is the issue of scaling up silicon photonics—Magic Leap is working to scale up a light-field chip that “brings optical components in or closer to computer chips.”
But let’s assume we can do all of that. Hard as it is, it still amounts to baby steps toward what we want to do. Because even if we can understand the shapes of things, we also want to understand their properties, behaviors, and personalities.
And that is the most wonderful challenge of all.
Cronin poses it as a question: “The promise of the new breed of AR systems like HoloLens and Magic Leap is that they can place content into a world in the way that it seems like it’s natural to that world. What if you want to have content that is a person, that can read social skills of walking down a sidewalk in a crowd of other people?”
This means we need to get much better at contextual tech in the wild. Making a hologram appear in the air in a mapped-out room is one thing. Programming her to saunter gracefully down a busy sidewalk, weaving among and interacting with pedestrians, is quite another.
It also means that we need to get better at the human side of tech: teaching ourselves to interact kindly with everyone and everything. (Have you ever noticed that we give Siri orders, not requests? Asking nicely is a basic human skill worth practicing.) And we should always create technology with the end goal of helping humans. In short: we all need to base our work on the Mom rule.
That means one of the biggest needs for AR teams is not just hardware builders, but also social scientists and people who can build in contextual understanding and empathy, constructing creative and kind systems. So far this space has been the realm of engineers. It is time for it to expand.
AR makers have a long way to go, but where it’s going will be a quantum leap from where we are now. And the types of thinking and human understanding work we do now will determine how it all turns out.
To learn more about this space, visit ThingEvent.com to view recordings of a live stream where PTC explored the relationship between augmented reality and the Internet of Things.
This post is a collaboration between O'Reilly and PTC. See our statement of editorial independence.