Why augmented reality technology is more than just virtual reality’s kid brother.

If you read the tech and video games press, the buzz is all about virtual reality. Valve revealed its long-hidden VR product Vive, built with HTC, in March. Oculus Rift, meanwhile, has been bought for a cool $2 billion by Facebook, with Facebook’s owner Mark Zuckerberg calling it “the dream of science fiction” that will “unlock new worlds for us all”.

Yet, quietly, people are whispering that the real story is augmented reality (AR). Influential data firms such as Juniper Research have even put figures on it. Juniper’s Augmented Reality 2015-2019 report predicts revenues of $4.1 billion for AR apps in 2019, with 1.3 billion apps in use. Digi-Capital, by contrast, predicts that AR could be worth $120 billion by 2020, with VR valued at a mere $30 billion. That gap reflects fundamental differences both in the underlying experience and in the progress made in each field. We’ll explore why that is, and whether virtual reality has any chance of catching up.

As part of that, we’ll delve into the way big tech corporations including Microsoft, Valve, Google, Apple and Sony are looking at this space. Each of them has its own strategy. Several have already invested heavily, such as Google with Magic Leap and Microsoft with HoloLens, while others have already walked away, such as Valve, with its deliberate pivot towards VR.

That also means examining how AR tech currently works, and where the next steps will be. After all, low-grade AR has become commonplace in several types of mobile application and looks set to become more widespread. Digi-Capital’s prediction is based on AR capturing a substantial chunk of the mobile phone market, and with over a billion smartphones already shipped, you could say $120 billion is a conservative estimate. Then again, it’s only talking about five years from now.

It took 20 years for mobile phones to move from the Nokia Ringo, which could merely call people, to today’s all-singing, all-dancing smartphones. For the full lowdown, read on.

The core difference between virtual reality and augmented reality technology is the worlds they move us into. One replaces, the other improves. Virtual reality creates a new world for you. It may be a world identical to the one you’re in now, or it may be a world built entirely from bones and elves, but it’s a world that’s fundamentally separate from the one we inhabit. Nothing you perceive through a virtual reality headset comes from the world outside it. You’re totally immersed. It’s therefore a tech for totally immersive, escapist experiences, like movies or games.

Augmented reality, by contrast, takes the real world as its base and builds on top of it. AR generates virtual items in this world by either using a whole mess of sensors to ensure they’re correctly placed on its surfaces, or ignoring them completely and placing the items on a much nearer plane. Though this sounds easier, as you don’t have to generate an entire world, there are technical challenges that make versions of it as hard to build as VR, but just as valuable.

AR’s not so good for immersion, because the real world is always there in the background. But it is excellent for anything that involves augmenting the real world. Voice calls, advertising, mapping, social networks… or even what Tim Merel of Digi-Capital calls “a-commerce”. Yes, augmented shopping.

There are degrees of augmentation, of course. Device-led AR, where the AR is imposed on a screen that’s distant from the viewer, is a mediated reality that allows developers to more simply judge the environment. At its simplest, it acts as a head-up display (HUD) that sits over the scene like a 3D movie title sits on its background. This basic tech is the level that firms such as advertising innovators Blippar or Aireal use. It simply uses your existing device and an app, and imposes a new image on an existing scene, such as putting an animation of a football player near his promotional merchandise.

At a slightly more advanced level, this tech can detect a surface or shape in the environment and use that as a marker to estimate relative depths in the scene. You’ll find AR apps that use special ‘fiducial marker’ cards to anchor their simulation, allowing the app to scale and rotate a virtual object to fit the environment.
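In principle, the marker maths is straightforward: once the app has found the corner pixels of a square card of known physical size, the card’s apparent size and angle on screen tell it how to scale and rotate the virtual object. Here’s a minimal 2D sketch in Python (the names and the 5cm marker size are our own invention; real apps use library routines, such as OpenCV’s ArUco marker detection, that recover the full 3D pose):

```python
import math

# Physical side length of the printed fiducial marker, in centimetres
# (a made-up figure, purely for illustration).
MARKER_SIZE_CM = 5.0

def marker_pose_2d(corners):
    """Estimate the scale (pixels per cm) and in-plane rotation of a
    square fiducial marker from its detected pixel corners, listed
    clockwise from the top-left: [(x0, y0), (x1, y1), ...]."""
    (x0, y0), (x1, y1) = corners[0], corners[1]
    dx, dy = x1 - x0, y1 - y0         # the marker's top edge on screen
    side_px = math.hypot(dx, dy)      # its apparent length in pixels
    scale = side_px / MARKER_SIZE_CM  # pixels per real-world centimetre
    angle = math.degrees(math.atan2(dy, dx))  # 0 when the card is upright
    return scale, angle
```

A marker that appears 100 pixels wide therefore tells the app to draw the virtual object at 20 pixels per centimetre, and the tilt of its top edge gives the rotation to apply.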

The more VR-like AR requires a lot of extra tech. You need to be able to track the position of the viewer’s head and their eyes, and judge relative distances, to make the illusion of something virtual convincing. Most of these AR devices use a head-mounted display (HMD), a headset supporting a display device (or two) in front of the user’s eyes. The sort of HMD we’re interested in also tracks head position along six degrees of freedom: a phrase that means three components of translation (up-down, forwards-backwards, left-right) and three components of rotation about those axes.
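Those six numbers are all a 6DOF tracker reports. As a rough illustration (our own hypothetical code, not any headset’s API), here’s how such a pose could be used to work out where a fixed virtual object sits relative to the viewer’s head:

```python
import math

def rotate(point, axis, angle):
    """Rotate a 3D point by `angle` radians about one axis:
    'yaw' (up-down axis), 'pitch' (left-right axis) or roll
    (the forwards-backwards axis)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    if axis == "yaw":
        return (x * c + z * s, y, -x * s + z * c)
    if axis == "pitch":
        return (x, y * c - z * s, y * s + z * c)
    return (x * c - y * s, x * s + y * c, z)  # roll

def world_to_head(point, pose):
    """`pose` is the six tracked numbers: (x, y, z, yaw, pitch, roll).
    Returns the point's coordinates as seen from the tracked head, by
    undoing the head's translation, then each rotation in reverse order."""
    px, py, pz, yaw, pitch, roll = pose
    p = (point[0] - px, point[1] - py, point[2] - pz)
    for axis, angle in (("roll", -roll), ("pitch", -pitch), ("yaw", -yaw)):
        p = rotate(p, axis, angle)
    return p
```

Turn your head 90 degrees and a virtual object that was straight ahead ends up at your side; step towards it and it gets closer. Re-running this transform every frame, with very low latency, is what keeps the illusion anchored in place.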

There are more extreme techs in development as well. Two different sets of contact lenses are in development, one academic, one military. The military ones are called iOptik and function much like bi-focal lenses, with the twist that they’re designed to work only with AR goggles. These contact lenses will allow humans to focus both on the background scene and the HUD on the goggles at the same time. Though they’re being developed for the US Department of Defense, the company behind them hopes it can sell them as consumer products soon.

The more interesting academic tech comes from the University of Washington: a set of ‘bionic’ contact lenses powered by radio waves, with LED displays built in. (The microfabrication process by which they self-assemble their circuitry using osmotic pressure is fascinating, but totally irrelevant here.) At the moment, the lenses have only been tested on rabbits, so the work is still at an early stage and there are questions about the quality of the images they produce. Still, it’s an impressive glimpse of the future.

Of course, this is all minor stuff, mostly built in the hope that the big tech firms will buy out the company behind it. What’s of greatest interest now is what those firms are focusing on, and what they think they can do with it.

Valve’s Blown Out
What’s most interesting about Valve’s AR offering is that it’s no longer Valve’s AR offering. After a Night of the Long Nerds in 2013, Valve laid off 25 people, including its entire AR team, to focus, we presume, on VR. The two AR project leads were given Valve’s permission to do whatever they liked with the tech that they’d made, which they’ve called CastAR. They’ve created a company called Technical Illusions to finish it off.

CastAR is a bit different from other AR. It consists of a pair of polarised glasses with built-in projectors and cameras, and a separate retro-reflective surface studded with infrared LEDs. The camera uses the LEDs to track your head movement, so it can adjust the images that the projectors cast onto the surface. This means each polarised lens gets a different, but coherent, image. Low latency lets you do things like look around an object. In other words, it projects a self-contained virtual reality into the real world.

As a result, it’s suited to static setups rather than something more mobile, like the existing Samsung Gear VR and Nintendo 3DS. CastAR’s pitch video focuses on uses including previewing 3D architectural blueprints, playing 3D board games remotely on unexpected surfaces, creating 3D presentations, and simple use as a 3D desktop computer. As long as you put that retro-reflective material all over your house.

Microsoft Does Everything
You’ve almost certainly heard of Microsoft’s contribution to the AR party: its HoloLens system for Windows 10 and possibly Xbox One. It seems revolutionary, but its use of the word ‘holographic’ might be suspect (and mainly due to affection for Star Trek’s Holodeck). After all, Kinect seemed revolutionary too, in the hermetic demonstration settings you get at big tech trade shows.

HoloLens certainly looks impressive from the screenshots scattered around these pages. It’s a futuristic headset that superimposes 3D creations onto the world and allows you to interact with them. The impressive element here is how high quality the images it produces are: from the videos and reports, it’s utterly compelling, if nowhere near as immersive as the HTC Vive or the Oculus Rift.

The actual headset is a lot bulkier than a comparable AR headset, say Google Glass, but then it has the power of a true VR headset because it’s not doing a simple 2D overlay. It reportedly weighs around 400g, is adjustable to all head sizes and is totally wireless. It consists of holographic lenses, depth cameras and three separate processing units: one central, one graphics and one holographic.

The depth cameras are built from the same tech as Kinect but are lower power, have a wider viewing angle and are placed around the front and sides of the headset. They track both the user’s head and hands, as HoloLens is controlled entirely by gesture and voice, Minority Report-style. This lets you interact with the 3D virtual models of the apps, from building blocks in Minecraft, to sculpting the bodywork of a motorbike. MS is working on ‘pinning’, which will let you stick these models in place in the environment, so you can move around them, and ‘holding’, so you can pick them up and manipulate them.

The apps are really what wowed the press when HoloLens was announced. Microsoft recently bought Minecraft for $2.5 billion, and it’s already made a version of it that runs on HoloLens. Similarly, NASA has an app that lets you explore Mars, and there’s a version of Skype that runs on it so that a builder can explain to you why you should have spent more time in carpentry classes at school. Though the reports are mostly positive, the tech was at an early stage, and there were concerns over whether the hardware could fit into a consumer unit, and over how regularly the illusion was broken.

That’s not all, however, as Microsoft also has several other AR and VR projects underway. It definitely has an AR headset ready to go: Microsoft bought smart glasses patents from Osterhout Design Group in March 2014. And another project, called RoomAlive, was shown off in October 2014, consisting of a set of projectors that transformed the walls of an entire room into an interactive environment.

Digressing for a moment, there are also persistent rumours about a VR headset for Microsoft’s Xbox One games console. After all, the Wall Street Journal said in March 2014 that the firm already had 3D virtual reality tech ready to go. HoloLens has reduced the chances of that coming to market, but we assume Microsoft has it ready as a back-up.

Google Does it Better
Google Glass was Google’s high-profile effort in the field of AR, and you might argue it was its first high-profile failure, given that Glass has currently been removed from sale ahead of a redesign and a new model. The device is a set of (quite pretentious-looking) plastic and metal glasses, with a HUD projected into your field of view and a smartphone-like processor behind it all, which was on sale for $1,500.

Glass did everything you’d expect it to: it understood natural voice commands, recorded video, took photos, and surfaced the updates from your phone. It had a small touchpad on the side of the device, which let you browse a timeline of recent events. The screen was a Liquid Crystal on Silicon device with an LED-illuminated display that used polarisation and reflectors to bounce the image into your eye. It had a wide range of supported Google apps, including Now, Gmail, Maps and Google+. But it’s now on hiatus while a new version is designed.

That’s not all of Google’s AR efforts, though. It’s also making simple AR games for its Android phones, such as Ingress, a massively-multiplayer location-based game built on Google Maps. And it has Project Tango in the wings. This is a standard set of technologies for mobile devices that allows them to navigate the physical world in the same way we inefficient meat-bags do. It uses advanced computer vision, image processing and special image sensors to make an end-to-end navigation tech that understands its own 3D motion in the world, can perceive depth, and uses visual cues from areas or objects it knows to constantly self-correct. At the moment, it’s only available to core developers, but we assume it’ll be integrated into next-gen Android hardware.
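To make that self-correction idea concrete, here’s a deliberately tiny one-dimensional sketch (entirely our own invention, not Tango’s actual algorithm): the device dead-reckons by adding up its measured movements, which drift, and nudges its estimate back towards the truth whenever it recognises a landmark whose position it already knows.

```python
def track(steps, landmarks, correction=0.5):
    """Toy 1D dead reckoning with landmark-based self-correction.
    steps: list of (measured_move, seen_landmark_or_None).
    landmarks: dict of landmark name -> known true position.
    Returns the position estimate after each step."""
    position, history = 0.0, []
    for move, seen in steps:
        position += move                    # dead reckoning (drifts)
        if seen is not None:                # visual cue recognised
            error = landmarks[seen] - position
            position += correction * error  # blend back towards the truth
        history.append(position)
    return history
```

Real systems do the same thing in six dimensions with probabilistic filters, but the principle is identical: pure motion integration drifts, and recognised scenery is what pulls the estimate back into line.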

Magic Leap (www.magicleap.com) is yet another Google-funded project, to the tune of $542 million, and a direct challenge to Microsoft’s HoloLens. It’s being built by a team of tech and games industry veterans, including the author Neal Stephenson and the 3D team at WETA (who made the Lord of the Rings special effects). Reports have it as more believable and solid than Microsoft’s HoloLens. It works using a virtual retinal display; that is, a display projected straight onto the retina itself.

Similar to HoloLens, the simulation looked utterly convincing. The animated 3D creatures it portrayed looked detailed and sharp, and sat well in the surrounding world. And, like HoloLens again, it ran on a huge piece of hardware (essentially a PC) sitting nearby, rather than in the headset itself. It’s worth looking at the promo video to see what it’s capable of. Magic Leap hasn’t really been announced or promoted yet, but we’re expecting it to launch in 2016 or 2017.

Sony’s ‘Me Too’ Tendency
The Japanese giant always seems to want to get involved in any new tech, but recently it hasn’t been leading the market here. Its Project Morpheus feels like a ‘me too’ VR solution, but it’ll surely work well on Playstation 4 and might actually sell well (see “What About VR?”, opposite). And it’s already experimented with AR in the form of Wonderbook for the Playstation 3.

However, SmartEyeGlass is its foray into AR. The currently available SmartEyeGlass SED-E1 Developer Edition is very similar to Google Glass, though much cheaper at just $840. It uses “holographic waveguide technology” in 3mm AR lenses, which produces something very similar to Glass, with overlaid green text and diagrams operating at 15fps. It also has a 3MP camera that can take pictures or video.

It connects to compatible Android phones by Bluetooth, and is controlled by a small, ugly-looking puck that sits on the user’s lapel, which doubles as microphone, speaker, NFC antenna and battery, and that battery lasts only 150 minutes. At the moment, we’d stay well away from this device. It’s ugly as sin, with poor battery life and few apps. Wait for version two.

Apple Of The Eye
There haven’t been any official reveals of Apple’s research into AR, but then Apple is more tight-lipped than a close-mouthed clam ahead of any announcement. Apple does have several patents for AR tech: there’s a very interesting one for a ‘transparent electronic device’ that sounds very much like an augmented reality display. Examples in the patent include using the device to overlay information about a museum exhibit. Interestingly, the device would be able to turn opaque and display only selected elements of the background world, otherwise acting as a normal opaque LCD or OLED display.

That said, an analyst from the US investment bank Piper Jaffray (annoyingly but understandably, investment bankers get a lot more access to tech firms than journalists do) published a report in March saying he believes Apple has a small team experimenting with AR, but that consumer AR is still 10 years off. We’ll see from Microsoft’s and Google’s efforts whether he’s wrong, but he might be on the money when it comes to mass-market success.

The State Of The Art
As that $120 billion valuation from Digi-Capital might indicate, there’s a lot of hype around AR and VR at the moment. Hundreds of firms are trying out strange new tech to augment the sensation. For example, Bristol’s UltraHaptics uses targeted ultrasound vibrations on a user’s skin to form tangible shapes and textures from thin air, so users can feel them without wearing any equipment. That, combined with the hand-detecting Leap Motion device, makes for delicately convincing sims, like brushing your hands over ghosts. For VR, we’ve seen every type of treadmill under the sun (giant balls, resistant pads, harnesses around the waist), anything to convince you that you’re in the virtual world.

On balance, the hype is justified. It’s not like the first tablets, which Microsoft launched stillborn into the market. Too many big companies are competing here for this not to be a success for at least one of them. But challenges remain, and they’re not insubstantial. The biggest are in shrinking the tech down to a headset, or headset-and-pack, model; in maintaining persistent simulations while doing that; and in preventing object-placement errors. It’s likely that, after all this experimentation, mobile phones will be the first devices that give us a taste of this. As always in that field, Apple will be the firm to watch. That said, Google’s Magic Leap investment is considerable enough, and the tech advanced enough, that we’d cautiously predict it’ll be first to market, albeit in a reduced form.

One prediction we’re happy to make is that in 20 years’ time we’ll be looking back at this the way we now look back at the first mobile phones. This technology is going to revolutionise many things: anything that requires 3D knowledge, such as architecture or warehouse management; anything that requires management of large data sets, like programming; and anything that just wants to look pretty, like art or video games. We just have to wait for the hardware to catch up.

Try AR Today
There are many ways to try augmented reality today. As AR is the more mature technology, there are some basic devices that take advantage of it already, as well as many mobile phone applications to try. The Carl Zeiss VR One headset, for example, supports AR features and will work with any iOS or Android phone between 4.7 and 5.2 inches. Google Glass has now been withdrawn from sale, but units are still out there too.

There’s a huge array of AR apps for mobile phones. One of our favourites is GoSkyWatch Planetarium for iPhone and iPad. This is one of many stargazing apps that use the device’s accelerometer and GPS to work out where it’s pointing, so wherever you aim it, it shows the constellations, stars and nebulae in that part of the sky. See also Anatomy 4D, Google Goggles (which can translate text on the fly), Field Trip (which lets you know about nearby attractions), and iOnRoad Augmented Driving, which gives speeding alerts, crash warnings and driving analytics.

The Playstation Vita has AR features, and comes with a package of free AR games, such as Table Ice Hockey and PulzAR. Similarly, the Nintendo 3DS comes ready-loaded with AR Games and six AR cards. Every game is superimposed on the real world, but has no interaction with it; it’s more of a gimmick, really. If you’ve got a PS3, you could pick up a copy of Wonderbook, a Harry Potter-inspired AR tome with blank pages that are only filled in when viewed on your TV through the Playstation Eye camera. Similarly, the PS4 has Playroom, a much smoother AR sandbox where you can play with small robots running around your lounge. Kids love it.

You can also try the Kinect system, on both Xbox 360 and Xbox One. Though it never got the backing it deserved from developers, it has a uniquely detailed depth camera that means it can track your entire body shape (or several bodies, in the Xbox One’s case) on-screen. It’s probably the most advanced consumer AR tech on the market today.

What About VR?
We’ve covered VR extensively in the past, but it’s worth giving you a quick status update on where the tech is today. Three projects are nearing release: Sony’s Project Morpheus, Valve and HTC’s Vive, and Facebook’s Oculus Rift. Of these, Oculus Rift is the oldest, and several developer iterations have been released. A mightily cut-down version of it made to work with Samsung mobile phones, the Samsung Gear VR, has already been released. It works by slotting a Samsung Note 4 or Galaxy S6 into a viewing device, and runs at 1280 x 1440 per eye with a 96-degree viewing angle.

Despite that, there’s still no sign of the Oculus Rift consumer model. The most up-to-date version, the Crescent Bay prototype, has a positional tracking camera for your head, a low-persistence OLED display (to eliminate blur) and runs at 960 x 1080 per eye, at 90Hz, with a 110-degree viewing angle. No release date has been announced, but 2015 is likely.

Sony’s Project Morpheus is the quickest-developed of the three. As with its AR efforts, Sony seems more concerned with getting a working version of its tech to consumers than with making it cutting edge. The version we tried in July last year was much lower resolution and fidelity than the Oculus Rift versions we’d tried up to that point, but both companies have since substantially improved their hardware. It has a similar OLED screen running at 960 x 1080 per eye, a 100-degree viewing angle and a 120Hz refresh rate. It was very comfortable, presumably because much of the hardware sat in a set-top box, not on our heads. It tracked our heads using the Playstation camera, and it had true 3D audio. It’s due out in early 2016 for PS4, which already has motion-sensitive controllers.

Valve and HTC’s Vive headset is the most impressive. It recognises that some of the joy of VR is in interacting with those virtual worlds, so it does two things. First, it has a pair of bespoke controllers for you to hold, allowing limited interaction. Second, it has a pair of tracking base stations that sit in the corners of your room, work out your location and follow your movement, setting the virtual world’s limit at your real-world limit so you don’t walk into obstacles.

Vive has two 1080 x 1200 screens running at 90Hz. As the screens are narrower, you get a taller field of view, and the headset should be lighter, as your PC does all the processing work. Its big selling point is that tracking system, whose base stations sweep the room with infrared light to locate the headset’s 37 sensors. This enables you to roam freely in your room and the virtual world. Oh, and it works with multiple players, and should be out this year. If you want to try VR today, you can get a casing for your mobile phone, like the free Google Cardboard, or a cheap third-party headset like the £30 Immerse from Firebox.

The AR Hardware
Not all AR devices share the same hardware and software, but there are some basic technologies they all need. First off, you need a processor to work everything out, a transparent display to show both the world and the projections, and a light power source, plus a variety of sensors and input devices.

The sensors can take several forms, but most come as standard in mobile phones. An accelerometer measures acceleration (and, via gravity, which way is down), a GPS receiver measures global location, and a magnetometer or solid-state compass measures the device’s orientation against the Earth’s magnetic field. Luckily, modern smartphones contain all of those things.
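As a simple illustration of what those sensors buy you, here’s a Python sketch (hypothetical names, idealised readings) that derives the device’s tilt from the accelerometer and its compass heading from the magnetometer:

```python
import math

def tilt_and_heading(accel, mag):
    """accel: accelerometer reading (x, y, z) in g -- gravity tells us
    which way is down. mag: magnetometer reading (x, y, z) -- the
    Earth's magnetic field tells us which way is north.
    Returns (tilt_degrees, heading_degrees); the heading is only
    trustworthy when the device is held roughly flat."""
    ax, ay, az = accel
    # Tilt: angle between the device's z-axis and straight down.
    tilt = math.degrees(math.acos(az / math.sqrt(ax*ax + ay*ay + az*az)))
    # Heading: direction of the horizontal magnetic field components.
    heading = math.degrees(math.atan2(mag[1], mag[0])) % 360
    return tilt, heading
```

Real devices fuse all three sensors (plus a gyroscope) to compensate for tilt and noise, but even this crude version is enough for a basic ‘point at the sky, label the stars’ overlay.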

For AR technologies that aren’t based on mobile phones, if you want all these elements, they have to be built in, which can increase the size and cost of the device substantially. If you choose to go without them, you lose a huge amount of functionality. It’s notable that Sony’s AR glasses system has a relatively large external box clipped to the user’s lapel, while both HoloLens and Magic Leap have been demoed with large tabletop external units that were actually running the tech.

Input systems are another challenge. Unlike with VR, the user can see their hands, so a keyboard is an option. But also unlike VR, AR encourages users to be mobile. You want to look around an object and touch it, so you want your hands to be either free or holding interactive objects (like Valve’s twin pointers). That means the device has to be wireless and the interface has to be voice, gaze or mediated touch.
