
Writing a Talk About WebVR, Using WebVR

This past Thursday, I had the privilege of speaking about the VR Web, a topic near and dear to my heart, at Coldfront, a front-end focused, single-track conference in Copenhagen, Denmark. Despite fighting off the remnants of a particularly nasty cold, this was one of my favorite talks I’ve ever given. Why? The audience was great, the organizers were wonderful – but what really excited me about this talk was that I wrote the thing using A-Frame.

We’re at an exciting time with the VR Web. WebVR 1.0 is slated to hit Chrome 55 in December, and is in Nightly builds of Firefox. The specs of dust (see what I did there? No?) are clearing and the browser-based virtual reality API is being considered as a W3C standard for the internet of the future. I figured that now was as good a time as any to see if it could hold up to a test: a 45-minute presentation on stage in front of 300 or so people. Was my laptop up for the challenge?
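For the curious, the 1.0 API’s entry point is promise-based: the page asks the browser for any connected displays and goes from there. A capability check, roughly as the spec describes it, looks something like this:

    // Feature-detect WebVR 1.0: getVRDisplays() resolves with any connected headsets
    if (navigator.getVRDisplays) {
      navigator.getVRDisplays().then(function (displays) {
        console.log('WebVR is here; found ' + displays.length + ' display(s)');
      });
    } else {
      console.log('No WebVR yet; frameworks like A-Frame fall back to mouse and touch controls');
    }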

As a matter of fact, it was. I know I’m not the first person to attempt this (one such pioneer was sitting in the audience, in fact!), but it was so much fun to give a talk about virtual reality using a 3D environment as the presentation itself. I quite literally walked through my presentation, then pulled up the code and showed how the magic happens.

Building a rough (and it is, like, pre-Nintendo-era-graphics rough) A-Frame application for presenting VR was an eye-opening experience.

Planning & Project Structure

  • I started with a basic index.html file that brought in the minified A-Frame library. I used the built-in primitives to create a rough outline of what I wanted my experience to look like (there’s a trimmed-down skeleton sketched after this list).
  • I separated out the content that I wanted to cover into a few different buckets – these would have been slide groups or sections in a traditional presentation, but I cut down on displayed content quite a bit for the purposes of performance. I don’t think anyone missed it.
  • Each content bucket became an <a-entity> element that contained all of the related information. As an example: my first content area was an introduction, which held a photo of myself, the title of the talk, and my name. These were all A-Frame objects that were children of the parent <a-entity id="introduction"> element.
  • I created a PageComponents.js script that called AFRAME.registerComponent for the behaviors I wanted objects in my scene to have. There were a few components that I wrote, many of which hid or showed the next content batch, and most of which looked fairly similar to the ‘play-video’ component sketched after this list.
  • I used the A-Frame text-component library that Kevin Ngo wrote to add “text boxes” that help define the different sections of the content.
  • I used a few different skyboxes to demonstrate 360 photos, and a video texture to play a clip of a different WebVR application that I had written using the .NET framework.
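To make that structure concrete, here’s a trimmed-down sketch of what the skeleton can look like. Treat it as illustrative rather than as my exact source: the ids, positions, and asset paths are made up, and the text attribute assumes Kevin Ngo’s text component is loaded alongside A-Frame.

    <!DOCTYPE html>
    <html>
      <head>
        <script src="https://aframe.io/releases/0.3.2/aframe.min.js"></script>
        <!-- Kevin Ngo's aframe-text-component script would also be included here -->
        <script src="PageComponents.js"></script>
      </head>
      <body>
        <a-scene>
          <a-assets>
            <img id="stage-sky" src="assets/360-photo.jpg">
            <video id="demo-clip" src="assets/webvr-demo.mp4" preload="auto"></video>
          </a-assets>

          <!-- One entity per content bucket, shown or hidden as the talk advances -->
          <a-entity id="introduction" position="0 1.6 -3">
            <a-image src="assets/headshot.jpg" position="-1 0.5 0"></a-image>
            <a-entity text="text: Writing a Talk About WebVR" position="0.5 1 0"></a-entity>
          </a-entity>

          <a-entity id="demos" visible="false" position="3 1.6 -3">
            <!-- Video texture demo; play-video is defined in PageComponents.js -->
            <a-video src="#demo-clip" play-video width="4" height="2.25"></a-video>
          </a-entity>

          <!-- Swappable skybox for the 360 photo demos -->
          <a-sky src="#stage-sky"></a-sky>
        </a-scene>
      </body>
    </html>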
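And a sketch of the PageComponents.js side. The original listing didn’t survive into this post, so these two components are hedged reconstructions of the shape that logic took: one toggles the video texture, and one reveals the next content bucket. Note that click events on entities assume a cursor or mouse raycaster in the scene.

    // Toggles playback of the video texture when its entity is clicked
    AFRAME.registerComponent('play-video', {
      init: function () {
        var video = document.querySelector('#demo-clip'); // the <video> in <a-assets>
        this.el.addEventListener('click', function () {
          video.paused ? video.play() : video.pause();
        });
      }
    });

    // Reveals a hidden content bucket when its trigger key is pressed
    AFRAME.registerComponent('show-on-key', {
      schema: { key: { type: 'string', default: 'n' } },
      init: function () {
        var el = this.el;
        var key = this.data.key;
        window.addEventListener('keydown', function (event) {
          if (event.key === key) {
            el.setAttribute('visible', true);
          }
        });
      }
    });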

Presenting on Stage

  • I had a backup slide deck ready to go, which I had loaded onto my iPad to use as project notes. It was weird doing a presentation without defined presenter notes, but having a secondary device to scroll through for more context helped me stick to my points without needing to have them all written out on screen.
  • I used the Edge browser to navigate through my experience. There are a few things about this that I want to share, besides the disclaimer that I do, remember, work for Microsoft:
    • I do not rely on conference WiFi, ever. I don’t care who is hosting the event: every single example that I plan to use will be located locally on my machine, and if I want to demonstrate something that requires live endpoint access, it’s probably going to be a video. I also avoid running locally hosted web servers. This means that I don’t use Chrome to present most of my applications, because until about a week ago I had never been able to get a web server other than IIS running for my Unity WebGL builds, and Chrome gets angry about locally-hosted resources because of CORS (there’s a minimal local-server sketch after this list, for what it’s worth). That’s a long tangent to get to my point: between Firefox and Edge, Edge handled displaying WebGL content for 45 minutes better than Firefox did. My metric is how loud my laptop fan was running, YMMV. I know, very specific performance testing there.
    • I keep hoping that if I pretend Edge supports the WebVR API, it will one day magically support the WebVR API. (Hi Edge team! My alias is Livieric if you want to chat!)
    • Until I get one of those shiny new Nvidia laptops with a desktop-ready graphics card, I can’t present with a desktop VR headset anyway, so whether the experience renders stereoscopically is a moot point. Besides, on stage, people don’t care if it’s stereoscopic; they’re seeing it on a giant projector.
  • While not related to WebVR specifically, I wore my GearVR on my head the entire 45 minutes. This served several purposes: when, as a speaker, I show that I don’t take myself too seriously, it helps the audience feel at ease too, and the talk becomes a lot of fun. I was basically playing my talk like a video game, so the accessory helped set the tone, and I had a convenient place to store it while I wasn’t talking about mobile VR headsets. The really interesting part came afterwards – someone in the audience mentioned that it helped break the stigma of wearing a VR headset for them, which I thought was pretty cool.
  • After I had finished walking through the app, I went into the source code and showed the audience how I had written the application. It ended up being about 300 lines of code total for the whole thing. Pretty snazzy!
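As an aside, the CORS complaint above does have a lightweight workaround if you ever want to serve a talk locally without IIS. This is a generic sketch rather than anything from my setup: a dependency-free Node static server, so the browser sees http://localhost:8080 instead of file://.

    // serve.js: bare-bones static file server for presenting offline
    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    var types = {
      '.html': 'text/html',
      '.js': 'application/javascript',
      '.jpg': 'image/jpeg',
      '.png': 'image/png',
      '.mp4': 'video/mp4'
    };

    http.createServer(function (req, res) {
      var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
      fs.readFile(file, function (err, data) {
        if (err) {
          res.writeHead(404);
          return res.end('Not found');
        }
        var type = types[path.extname(file)] || 'application/octet-stream';
        res.writeHead(200, { 'Content-Type': type });
        res.end(data);
      });
    }).listen(8080, function () {
      console.log('Serving on http://localhost:8080');
    });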

I tend to alternate between Unity and WebVR technologies, but I stay really passionate about both, even when I’m trading off. There are a lot of benefits to each approach, and I’ve been spending a lot of time in Unity recently for HoloLens development, so it was great to get back into the web ecosystem to build out my talk. I was impressed with how A-Frame has been maturing, and with the tools evolving around it – shout out to Kevin Ngo for the text component; that library is incredibly helpful.

It’s always a good exercise to switch things up, in my opinion. I am constantly humbled when I attempt to write vanilla JS and can’t remember how to iterate over an object or compare strings. It’s also incredibly rewarding to struggle through something and see it work. It gives you the opportunity to work on pushing boundaries.


The app that I built is something that will evolve over time. I’ve got a week to polish it up and present at Full Stack Fest in Barcelona this Friday, and as the web tools for VR evolve, so will my experiments. After a number of failed past attempts, I was motivated to finally get a Node environment set up so that I could wrap my A-Frame site in an Electron package, something that worked absolutely beautifully (there’s a minimal sketch of the wrapper below) and has motivated me to go deeper into open source. By default, Electron doesn’t support WebVR today because of the version of Chromium it ships, but when the WebVR spec gets pulled into Chrome/Chromium all up later this year, Electron has the potential to be a really fascinating way to build desktop VR applications in JS.
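For reference, the Electron wrapper itself is tiny. This is a minimal sketch rather than my exact setup: it assumes the A-Frame scene lives in index.html next to main.js, and that package.json points its main field at main.js.

    // main.js: minimal Electron shell around a local A-Frame page
    const { app, BrowserWindow } = require('electron');

    let win; // keep a reference so the window isn't garbage-collected

    function createWindow() {
      win = new BrowserWindow({ width: 1280, height: 720 });
      win.loadURL('file://' + __dirname + '/index.html'); // load the local scene
      win.on('closed', function () { win = null; });
    }

    app.on('ready', createWindow);
    app.on('window-all-closed', function () { app.quit(); });

From there, installing the electron package and running electron . launches the scene in its own window.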

Pairing a library like Electron with Mozilla’s experimental browser engine Servo – a multi-threaded, GPU-first engine – would be a really interesting path toward highly performant desktop JS application development. Web beacons as a delivery and discovery mechanism for immersive web experiences are going to be an entirely new way to showcase location-relevant experiences. I’ll be the first to say that I have no freaking idea how it all ends up being technically feasible, but I can say that I’ve never been more excited about the potential of browser-based immersive technologies and the world that is evolving around them.

The future is fantastic.