How to pick the right authoring tools for VR and AR
Identify the options available to develop an effective immersive experience.
The world of virtual reality (VR), augmented reality (AR), and mixed reality (MR) is growing at a seemingly exponential pace. Just a few key examples: Microsoft partnered with Asus and HP to release new MR headsets, Google Glass has made a comeback, Facebook Spaces launched, and a patent for AR glasses, filed by Apple in 2015, was recently discovered during a patent search.
At the Apple Worldwide Developers Conference (WWDC) this past June, Apple announced ARKit, which made augmented reality available to all 700 million iPhone and iPad users worldwide. The momentum and economic impact of these experiences continue to accelerate, so it's the perfect time to begin developing for them, and that means picking an authoring tool that's right for the reality you want to create.
It can be confusing to know where to start in this field, because the three types of experiences seem to overlap at times, making it difficult to understand their similarities and differences. Each experience requires a slightly different development stack and set of tools, and in some cases you must target the specific display the observer is using. For example, VR created for a tethered headset requires you to build a complete virtual world so the user is fully immersed in the scene.
If you want to create VR for the Oculus Rift, for instance, you have four primary tools to choose from:
If, however, you also want to target the PlayStation VR, your primary tools are limited to the first three: Unity, Unreal Engine, and CryEngine.
Most iOS, tvOS, watchOS, and macOS developers are accustomed to using Xcode as their development environment for all things Apple, so many developers new to AR, VR, and MR don't realize they need to determine which tool to use for each reality. Developers need to decide which type of reality they want to target, and which types of users, before they begin development. Let's take a look at the steps for picking the right tool for the different reality types. But before I begin differentiating tools and environments, let me clarify the different types of experiences and how I'm using the terms here.
Most people are familiar with the term virtual reality, but when I say VR, I'm referring to a computer-generated, sometimes simulated, and at times seemingly realistic 3D view of a world. The virtual world can be realistic, imagined, or a combination of both. To enter virtual reality, users must cover their eyes with a screen that completely obscures the view of the real world, immersing them in the virtual one. The screen can be in the form of a headset tethered to a computing device (like the Oculus Rift or HTC Vive) or an untethered phone with additional hardware (like Google Cardboard or Samsung Gear VR). Facebook just announced a new headset, the Oculus Go, that is not tethered and will run VR on its own; how exciting is that? It may just change the business, but only time will tell. Many other companies are also working on self-contained head-mounted displays that need no wires, phones, or laptops.
360° video allows the user to control the direction of view within recorded video. Some people confuse 360° video with virtual reality because the user is immersed in a real-world, 360° view. When I use the term virtual reality, I am not referring to 360° video; I am referring to a computer-generated 3D view of a world.
Augmented reality is a view of the real world in which computer-generated content is overlaid to augment the world with additional information or metadata, such as sound, graphics, GPS data, and textual displays. In general, the overlaid content cannot interact with the real world, and vice versa. The user must view the world through some type of screen.
Imagine holding your phone up in front of a building and having an augmented reality-enabled app display the name, address, and businesses in the building. The user can't interact with the textual overlay, but the data augments the real world. The most popular AR screens today are Google Glass and smartphones with specific augmented reality applications installed. Many tech companies are creating augmented reality glasses, which will begin entering the market in the next few years.
Mixed reality, like augmented reality, is a view of the real world, but mixed reality overlays virtual worlds, and its virtual objects can interact with the real world. The primary headset for mixed reality today is the Microsoft HoloLens, which can map the real world in 3D space so virtual objects can be realistically overlaid on real-world surfaces and objects. For example, mixed reality allows users to place a virtual cube on a table or a virtual chair in the corner of the room. Microsoft also released a series of mixed reality headsets earlier this year.

Google released Tango (formerly known as Project Tango) several years ago, and it recently released ARCore, which seems to be its answer to Apple's ARKit. These, too, offer motion tracking, light estimation, and environmental understanding. ARCore is available now on all qualified Android phones running Android N and later; during the SDK preview, it is available for the Google Pixel, Pixel XL, and Samsung Galaxy S8. Tango is available on two Google devices and on an Asus and a Lenovo phone.
The industry hasn't yet standardized the distinctions between mixed and augmented reality, and the lines between the two seem to constantly shift and blur. Apple's new ARKit framework (Augmented Reality Kit) allows virtual objects to interact with the real world (the definition of mixed reality), so the confusion between the two definitions continues. I believe the industry will eventually standardize on the term "augmented reality," with a definition that covers objects whether or not they can interact with the real world; "augmented reality" will absorb the context of "mixed reality," and the latter term will fall out of use. For these reasons, I'll focus the tool selection below primarily on AR, even though it may also apply to the MR space.
As I mentioned earlier, VR and AR experiences sometimes require different tools and development environments, depending on what you want to do. For instance, if you want to develop a virtual reality game using Unity, you will need to specify the targeted headset (HTC Vive, Oculus Rift, PlayStation VR, etc.). If you want to develop an augmented reality application, you can develop on a cross-platform toolset (like ReactVR), or you will need to target iOS or Android devices. If you want to develop for the HoloLens, you can start with Unity and build for the HoloLens, but you will then need to use Visual Studio to test and deploy the application.
User adoption of the different experiences has not reached the critical mass needed to force the hardware and software industries to standardize tools, interfaces, or programming languages. Until that happens, there is an opening in the market for innovative companies to step in and create a standardized toolset. For now, though, developers still have to determine the type of reality they want to create, target a type of headset and platform, and then learn that set of requirements.
With all the hype around VR and AR, you're probably reading this because you've been considering developing for one or more of these platforms. Statistics confirm this is a good time to get in on the action: the projected economic impact of VR and AR technologies by 2020 ranges from 5.8 billion USD (at low adoption rates) to 20.5 billion USD (at high adoption rates), and the VR software market in 2018 is estimated at 4.8 billion USD. The question that remains, once you understand how quickly the market is growing, is how to decide which tool to use for which reality.
I don't know many developers who are willing to learn a new tool or application that will only work on one device. We don't want to invest our time and money in something that can't carry over to other tools and platforms. I'm assuming this is the case with you as well, and if you're anything like me, all three experiences excite you and you may want to develop across all three someday. This means you should either choose a tool that works across as many platforms, devices, and experiences as possible, or be willing to learn several different platforms and toolsets.
The first step in picking a tool is to look at the market and determine where the users are today. Factor in the size of the companies creating the hardware and platforms, and review their momentum for additional insights. The table below shows which VR headsets were selling well in 2016, an indication of how many people are using the different platforms. Then consider what type of application you want to build and which platform your desired users are on. Are you building an educational, gaming, business, training, or marketing application? Most hard-core gamers use one of the tethered headsets because these offer higher resolution, better refresh rates, more sensors, and controllers.
Table 1: VR Headsets and Authoring Tools
| VR headset | HTC Vive | Oculus Rift | PlayStation VR | Samsung Gear | Google Daydream |
| --- | --- | --- | --- | --- | --- |
| 2016 units sold* | 420,000 | 355,000 | 745,000 | 2.3M | 261,000 |
It should come as no surprise that the two main authoring tools (Unreal Engine and Unity) support all five of the main VR headsets. If you're only interested in developing for the three main tethered VR headsets right now, you have three good options: Unity, Unreal Engine, and CryEngine.
Now let’s look at the AR space to determine what authoring tools will work for development there.
Augmented (and mixed) reality requires the user to view the real world through a screen that overlays computer-generated data on the real world. This means the user must either look at a phone or tablet that is using its camera to view the world, or wear some form of screen over their eyes that can display the augmented data. Today's AR headsets are still fairly bulky because of the cameras, processing power, and displays they require. I predict the size will continue to shrink until we can eventually wear what appear to be ordinary eyeglasses, but we aren't there yet. There aren't currently any front-runners in the AR headset race, so I won't include headsets in the analysis yet.
Table 2: AR Devices and Authoring tools
| AR devices | iOS phones and tablets | Android phones and tablets |
| --- | --- | --- |
| % of world users | 33% | 65% |
* Requires ARToolkit and Augmented Reality plugin
** Requires Vuforia plugin for Android and iOS, or ARKit for iOS, or ARCore for Android
As you can see, if you want to do VR and AR development across as many devices as possible, your options are still Unreal Engine and Unity3D. This doesn't mean that development across all the devices will be easy or seamless; in some cases you will have to develop the application specifically for the targeted device(s). But at least you will be learning one tool that applies to many devices.
If you are most interested in developing AR apps for Android devices, you could choose Unreal Engine, Unity3D, or, most recently, ARCore. On the other hand, if you are most interested in developing AR apps for iPhones and iPads, then ARKit is definitely the way to go. AR is an exciting technology, especially as Apple seems to be embracing it with its latest iPhone X release. Developers will soon be able to create facial recognition apps. Imagine walking into a conference or a hotel and being immediately greeted by a concierge who says, "Welcome back! We just saw you two weeks ago, right? Your room is ready and is on the 7th floor. Would you like me to have an Old Fashioned brought up?" This could be possible thanks to facial recognition and AR capabilities that exist now, and because the form factor of AR glasses continues to shrink to a manageable size. Users will need the new iPhone X when it becomes available, but development can start now.
If you already know Objective-C and/or Swift, you can use Xcode to develop your AR apps. If you don't know Objective-C or Swift, Unity will help you do a lot of the heavy lifting with fewer lines of code. So pick a tool, take a step, and make some progress; before long you could be building ARKit apps!
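To give you a taste of what that first step looks like, here is a minimal sketch of ARKit in Swift: a view controller that starts a world-tracking session and places a small virtual cube in front of the camera. It assumes an Xcode iOS project with an `ARSCNView` connected in the storyboard (the class and outlet names here are my own illustration, not from any particular sample project).

```swift
import UIKit
import ARKit
import SceneKit

class ARCubeViewController: UIViewController {
    // Assumed to be wired up to an ARSCNView in the storyboard.
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.scene = SCNScene()

        // Place a 10 cm virtual cube half a meter in front of the
        // camera's starting position.
        let cube = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        let node = SCNNode(geometry: cube)
        node.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(node)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking uses the camera and motion sensors to keep
        // virtual content anchored in the real world as the user moves.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }
}
```

Run it on an ARKit-capable device and the cube stays fixed in space as you walk around it; that handful of lines is the whole "hello world" of AR on iOS, which is exactly why ARKit has generated so much developer excitement.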