Tuesday, January 17, 2023

 

TECH


VividQ and Dispelix Create 3D Holographic Technology for Wearable AR

VividQ, maker of holographic display technology for augmented reality games, has teamed up with waveguide designer Dispelix to create a new 3D holographic imaging technology.

The companies said the technology was considered nearly impossible just two years ago. They said they designed and manufactured a “waveguide combiner” that can accurately display simultaneous 3D content at varying depths in the user's environment. For the first time, users will be able to enjoy immersive AR gaming experiences in which digital content is placed in their physical world and can be interacted with naturally and comfortably. The technology can be used in wearable devices such as AR headsets or smart glasses.

The two companies also announced the formation of a commercial partnership to develop the new 3D waveguide technology for mass production. This will allow headset makers to start their AR product roadmaps now.

The first augmented reality experiences seen so far, through headsets such as Magic Leap, Microsoft HoloLens and Vuzix, produce stereoscopic 2D images at a fixed focal length, or one focal length at a time. This often causes eyestrain and nausea in users and doesn't provide the necessary immersive 3D experience – for example, objects cannot be interacted with naturally at arm's length and are not precisely placed in the real world.

To deliver the kinds of immersive experiences needed for AR to achieve mass-market adoption, consumers need a sufficient field of view and the ability to focus on 3D images across the full range of natural distances – from 10 cm to optical infinity – simultaneously, just as they naturally do with physical objects.

A waveguide combiner is the industry's preferred method for displaying AR images in a compact form factor. This state-of-the-art waveguide and accompanying software are optimized for 3D applications such as games, meaning consumer brands around the world can unlock the full potential of the market.

Waveguides (also known as 'combiners' or 'waveguide combiners') give AR headsets a lightweight, conventional look (i.e. they resemble regular glass lenses) and are needed for widespread adoption. In addition to the form-factor advantages, waveguides on the market today perform a process called pupil replication. This means they can take an image from a small display panel and effectively enlarge it by creating a grid of copies of the small image in front of the viewer's eye (the region these copies cover is known as the 'eyebox') – a bit like a periscope, but instead of a single view, it creates multiple views. This is key to making wearable AR ergonomic and easy to use.

Small eyeboxes are notoriously difficult to align with the user's pupil, and the eye can easily “fall out” of the image if the two are misaligned. This requires the headset to be precisely fitted to the wearer, as even natural variations in interpupillary distance (IPD) between wearers can mean that their eyes do not line up exactly with the eyebox, leaving the virtual image invisible.

Since there is a fundamental trade-off between the size of this image region (the “eyebox” or “exit pupil”) and the field of view (FoV) being displayed, replication allows the optical designer to make each individual exit pupil very small, relying on the replication process to deliver a large effective eyebox to the viewer while maximizing the FoV.

“There has been significant investment and research into the technology that can create the types of AR experiences we dream of, but they fall short because they cannot meet even basic user expectations,” said VividQ CEO Darran Milne. “In an industry that has seen its fair share of hype, it can be easy to dismiss any new invention as yet more of the same, but a key issue has always been the complexity of displaying 3D images placed in the real world with a decent field of view and with an eyebox large enough to accommodate a wide range of IPDs (interpupillary distances, or the space between the user's pupils), all wrapped in a lightweight lens.”

Milne added, “We solved that problem, designed something that can be manufactured, tried and tested, and established the manufacturing partnership needed to mass-produce it. It's a breakthrough because without 3D holography, you can't deliver AR. Simply put, where others have developed a 2D screen to wear on your face, we have developed the window through which you will experience both real and digital worlds in one place.”

Image above: concept art for a simulation game in which the user can interact with a digital world at a distance.

VividQ's patent-pending 3D waveguide combiner is designed to work with the company's software, and both can be licensed by wearable-device manufacturers to build a roadmap of wearable products. VividQ's holographic display software works with standard game engines such as Unity and Unreal Engine, making it easy for game developers to create new experiences. The 3D waveguide can be manufactured and supplied at scale through VividQ's manufacturing partner, Dispelix, a maker of transparent waveguides for wearable devices based in Espoo, Finland.

“Wearable AR devices have huge potential around the world. For applications such as gaming and professional use, where the user needs to be immersed for long periods of time, it is vital that the content is true 3D and embedded in the user's environment,” said Antti Sunnari, CEO of Dispelix, in a statement. “This also overcomes nausea and fatigue issues. We are delighted to be working with VividQ as a waveguide design and manufacturing partner on this groundbreaking 3D waveguide.”

At its headquarters in Cambridge, UK, VividQ has demonstrated its software and 3D waveguide technology to device manufacturers and consumer technology brands, with whom it is working closely to make next-generation AR gaming devices a reality.

The task undertaken by the companies was described as “almost impossible” in a research paper published in the Nanophotonics Journal in 2021.

Existing waveguide combiners assume that incoming light rays are parallel (hence a 2D image), because they require light reflected within the structure to follow paths of the same length. If you were to feed in divergent rays (a 3D image), the light paths would all differ, depending on where in the input 3D image each ray originated.

This is a big problem: it effectively means that the extracted light has traveled different distances, and the effect is that the viewer sees several partially overlapping copies of the input image, all at arbitrary depths – which makes the output essentially useless for any application. The new 3D waveguide combiner, by contrast, is able to accommodate the diverging rays and display images correctly at their intended depths.

VividQ's 3D waveguide comprises two elements: first, a modification of the standard pupil-replicating waveguide design described above; second, an algorithm that calculates a hologram that corrects for the distortion introduced by the waveguide. The hardware and software components work in harmony with each other, and as such the VividQ waveguide cannot be used with another company's software or system.

VividQ has over 50 people in Cambridge, London, Tokyo and Taipei. The two companies started working together in late 2021. VividQ was founded in 2017 and has its origins in the photonics department at the University of Cambridge and the Cambridge Judge Business School.

So far, the company has raised $23 million in investments from deep tech funds in the UK, Austria, Germany, Japan and Silicon Valley. Asked what the inspiration was, VividQ CTO Tom Durant said in an email to GamesBeat: “Understanding what the limitations were and figuring out how to work around them. Once we identified this path, our multidisciplinary team of researchers and engineers in optics and software set out to solve each one. Rather than seeing this as just an optics problem, our solution is based on hardware and software tuned to work together.”

As for how this differs from competing technologies, the company said that existing waveguide combiners on the market can only display two-dimensional images at a defined focal length – usually about two meters in front of the user.

“You cannot bring them closer to focus, or focus beyond them to other digital objects in the distance,” the company said. “And when you look at these digital objects floating in front of you, you can very quickly suffer from eyestrain and VAC (vergence-accommodation conflict), which causes nausea. For games, this is very limiting. You want to create experiences where a user can take an item in their hands and do something with it, without needing a controller. You also want multiple digital items locked in place in the real world, with the freedom to focus on them and on nearby real objects as you wish, which leads to strong feelings of immersion.”

Source: VentureBeat
