Immersive Web WG/CG demos
Presenter: Ada Rose Cannon
Demo by the Immersive Web Working Group and Immersive Web Community Group
I'm Ada Rose Cannon, one of the chairs of the Immersive Web Working Group, here to introduce some of the features of the WebXR Device API. Virtual reality is the core feature of WebXR.
Users navigate to a web page, press a button to enter virtual reality, and it appears that they are in the virtual world.
This works by using the WebXR Device API to get the position data of the headset and the controllers. You then render a WebGL scene from the user's point of view and send the rendered image back to the WebXR Device API to be displayed to the user. If you do this at the device's frame rate, you can create the illusion of being in a virtual world.
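That loop can be sketched with the WebXR Device API roughly as follows. This is a minimal, illustrative sketch only: it assumes an existing WebGL context `gl` and a hypothetical `drawScene(view)` helper standing in for your own rendering code.

```javascript
// Minimal sketch of the WebXR render loop described above.
// `gl` is a WebGLRenderingContext; `drawScene(view)` is a hypothetical
// helper that renders your scene from one eye's viewpoint.
async function enterVR(gl, drawScene) {
  // WebXR is only available in supporting browsers.
  if (typeof navigator === 'undefined' || !navigator.xr) return null;

  const session = await navigator.xr.requestSession('immersive-vr');
  await gl.makeXRCompatible();
  session.updateRenderState({
    baseLayer: new XRWebGLLayer(session, gl)
  });
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(time, frame) {
    session.requestAnimationFrame(onFrame);
    // Headset position and orientation for this frame.
    const pose = frame.getViewerPose(refSpace);
    if (!pose) return;
    const glLayer = session.renderState.baseLayer;
    gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);
    // One view per eye: render the scene from each viewpoint.
    for (const view of pose.views) {
      const vp = glLayer.getViewport(view);
      gl.viewport(vp.x, vp.y, vp.width, vp.height);
      drawScene(view);
    }
  });
  return session;
}
```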
Augmented Reality works in much the same way as Virtual Reality, but instead of having an opaque background, the background is either transparent, or it uses the camera feed to display the real world behind the 3D objects.
Augmented Reality is more than just seeing the virtual content overlaid on top of the real world, it's also about interacting with the real world as well.
The Hit Test API allows you to cast rays out into the real world and work out where they intersect real-world surfaces.
If you place a 3D model, such as this dinosaur, at that point in the virtual space, then as you move the camera around, the dinosaur appears to stay in place and looks like a natural part of the world. Once your object is placed in the user's environment, you want it to blend in.
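A hit-test setup along those lines might look like this sketch. It assumes an existing `immersive-ar` session requested with the `hit-test` feature, and a hypothetical `placeModel(matrix)` helper standing in for your own placement code.

```javascript
// Sketch of placing an object with the WebXR Hit Test API.
// Assumes `session` was created with { requiredFeatures: ['hit-test'] };
// `placeModel(matrix)` is a hypothetical helper that positions your model.
async function setUpHitTest(session, placeModel) {
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const refSpace = await session.requestReferenceSpace('local');
  // Cast a ray forward from the viewer into the real world each frame.
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    session.requestAnimationFrame(onFrame);
    const results = frame.getHitTestResults(hitTestSource);
    if (results.length > 0) {
      // The pose is where the ray intersected a real-world surface.
      const pose = results[0].getPose(refSpace);
      if (pose) placeModel(pose.transform.matrix);
    }
  });
}
```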
The Lighting Estimation API estimates the lighting in the real environment, so you can reproduce it in your 3D scene, making your model look like a natural part of the user's environment.
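In rough outline that might look like the sketch below, assuming a session requested with the `light-estimation` feature and a hypothetical `applyToScene` callback that feeds the values into your renderer.

```javascript
// Sketch of the WebXR Lighting Estimation API. Assumes `session` was
// created with { requiredFeatures: ['light-estimation'] };
// `applyToScene` is a hypothetical callback into your renderer.
async function trackLighting(session, applyToScene) {
  const lightProbe = await session.requestLightProbe();
  session.requestAnimationFrame(function onFrame(time, frame) {
    session.requestAnimationFrame(onFrame);
    const estimate = frame.getLightEstimate(lightProbe);
    if (estimate) {
      // Ambient light as spherical harmonics, plus a primary directional
      // light, so virtual objects can match the real lighting.
      applyToScene({
        sh: estimate.sphericalHarmonicsCoefficients,
        direction: estimate.primaryLightDirection,
        intensity: estimate.primaryLightIntensity
      });
    }
  });
}
```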
Many pieces of immersive hardware, such as Virtual Reality headsets and AR headsets, have accompanying controllers with their own distinct appearance and button locations. The user will want to see where these buttons are and what they do, and be able to use them to control the environment. The core of WebXR only exposes a couple of buttons on the controllers.
The Gamepad API for AR and VR exposes the remainder of the buttons and gives you the information you need to render the correct 3D model for that device.
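Reading that per-controller state might look like this sketch: each `XRInputSource` carries a standard `Gamepad` object plus a `profiles` array identifying the hardware. The `onInput` callback is a hypothetical stand-in for your own handling code.

```javascript
// Sketch of polling controller state via the gamepad attached to each
// XRInputSource. `onInput` is a hypothetical callback; the `profiles`
// array identifies the hardware so you can pick a matching 3D model.
function pollControllers(session, frame, refSpace, onInput) {
  for (const source of session.inputSources) {
    const gamepad = source.gamepad;            // standard Gamepad object
    if (!gamepad) continue;
    // Where to draw the controller model, if a grip space is available.
    const gripPose = source.gripSpace &&
      frame.getPose(source.gripSpace, refSpace);
    onInput({
      profiles: source.profiles,               // device identifiers
      buttons: gamepad.buttons.map(b => b.pressed),
      axes: gamepad.axes.slice(),
      pose: gripPose
    });
  }
}
```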
What's better than controlling the environment with a controller? How about controlling it with your hands?
Some AR and VR headsets have the capability to track your hand motion, including all the individual fingers and joints on your hand, so you can have an accurate representation of your hand in the scene, which you can use to grab and manipulate virtual objects.
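Per-joint hand data can be read along these lines, assuming a session requested with the `hand-tracking` feature; `drawJoint` is a hypothetical rendering helper.

```javascript
// Sketch of the WebXR Hand Input API. Assumes `session` was created
// with { requiredFeatures: ['hand-tracking'] }; `drawJoint` is a
// hypothetical helper that renders one joint.
function drawHands(session, frame, refSpace, drawJoint) {
  for (const source of session.inputSources) {
    if (!source.hand) continue;                // not a tracked hand
    // Each hand exposes a map of named joint spaces (wrist, fingertips, ...).
    for (const [jointName, jointSpace] of source.hand) {
      const pose = frame.getJointPose(jointSpace, refSpace);
      if (pose) {
        // Each joint has a transform and a radius, enough to render a
        // sphere per joint or drive a full hand model.
        drawJoint(jointName, pose.transform, pose.radius);
      }
    }
  }
}
```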
One of the Web's powers is the ability to create semantic 2D interfaces using a large suite of interface elements. WebXR, being WebGL only, cannot use HTML or CSS, which wastes some of that power.
One common feature of websites that use immersive technologies is pre-rendered video content, displayed on rectangles, cylinders, or spheres surrounding the user, giving an immersive video experience. This is a really popular thing to do, but it's surprisingly difficult and non-performant to do in WebGL.
The Layers API aims to provide a way of doing this, which is both faster and gives a better experience for users in a simpler declarative API.
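For example, showing a video on a cylinder around the user might look like this sketch, assuming a session requested with the `layers` feature; the radius and angle values are illustrative.

```javascript
// Sketch of showing a video on a cylinder around the user with the
// WebXR Layers API. Assumes `session` was created with
// { requiredFeatures: ['layers'] }; radius/angle values are illustrative.
async function showVideoLayer(session, refSpace, video) {
  const mediaBinding = new XRMediaBinding(session);
  const layer = mediaBinding.createCylinderLayer(video, {
    space: refSpace,
    radius: 2,                  // metres from the user
    centralAngle: Math.PI / 2   // 90-degree arc
  });
  // The layer is composited by the system, so no per-frame WebGL
  // work is needed to keep the video playing smoothly.
  session.updateRenderState({ layers: [layer] });
}
```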
Thank you so much for watching. These are just a few of the more mature features which have been developed within the Immersive Web Community Group and the Immersive Web Working Group. I really encourage you to check out the incubations, which are currently being developed in the community group.
Many of these great new features are designed to make augmented reality so much more engaging. This is a really exciting time for the groups, and a great time to get involved.