From foveated rendering to depth sensing… BrownbagTV’s first collab-up will explore the likely impact of key tech innovations and market forces on the content continuum — both its creation and consumption. For each trend, we’ll dig into the underlying innovations and drivers. This volume, we’ll unbox VR/AR/MR product and audience trends.

Trend #1: Foveated Rendering Lowers the Bar and Ups the Ante, Squeezing More Performance Out of Existing Hardware

Innovation: The Swift, Modular Deployment of Eye Tracking Technology

Foveated rendering, enabled by eye tracking technology, will drastically lower the barrier to entry for running high-end VR experiences, adding support for older hardware and massively expanding the “high-end VR” install base. The same tech stack will also boost performance on mobile hardware, raising the bar for what’s possible on “mobile VR” platforms.

The eye sees at maximum resolution only at the fovea, the center of your gaze. So we render the spot you’re actually looking at in full resolution and degrade quality toward the periphery, without you ever noticing.
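To make that falloff concrete, here’s a minimal Python sketch of the idea; the gaze coordinates, radii and scale factors are illustrative stand-ins, not values from any shipping eye tracker.

```python
import math

def foveated_scale(px, py, gaze_x, gaze_y, inner_radius=0.10, outer_radius=0.45):
    """Return a render-resolution scale (1.0 = full res) for a pixel,
    based on its distance from the tracked gaze point.

    Coordinates are normalized to [0, 1]; radii are fractions of the
    screen. All numbers here are illustrative, not from real hardware.
    """
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= inner_radius:   # foveal region: full resolution
        return 1.0
    if dist >= outer_radius:   # far periphery: heavily downscaled
        return 0.25
    # Linear falloff between the foveal circle and the periphery
    t = (dist - inner_radius) / (outer_radius - inner_radius)
    return 1.0 - 0.75 * t

# A pixel near the gaze point renders at full res; a corner pixel at a quarter of it.
print(foveated_scale(0.52, 0.50, gaze_x=0.5, gaze_y=0.5))  # ~1.0
print(foveated_scale(0.95, 0.95, gaze_x=0.5, gaze_y=0.5))  # 0.25
```

A real implementation would run this on the GPU, per tile or via variable-rate shading, but the shape of the logic is the same.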

Beyond performance gains, gaze-driven UI paradigms will get a whole new set of superpowers. VR content creators (*cough* advertisers) will soon have a lot more contextual information on how you uniquely navigate and react to the virtual (and eventually real) world.

And if you thought you had to wait till gen 2 headsets to get eye tracking, guess again… The HTC Vive already has a plug-and-play contender in this category, and the feedback is that once installed, “it’s barely noticeable.”


Trend #2: “Making VR Inside VR” Continues to Thrive, Spawning New Ways to Make Content for Existing Mediums like 2D Video

Innovation: Cheap 6-DOF motion controllers and HMDs like the HTC Vive and Oculus Rift (combined with slow adoption of HMD-dependent VR platforms)

We’ll continue to see a slew of “make VR inside VR” tools emerge that let creatives ideate, create and publish without leaving the spatial realm. Think Oculus Medium, Quill and Tilt Brush, but also consumer animation tools like MindShow and professional plugins like VR-Plugin for Maya.

You’ll soon be able to make that CGI V-22 Osprey land perfectly by controlling it with your own hands, finessing the animation further in Maya — ultimate childhood dreams realized!

 

MindShow wants you to make character-driven animated movies in VR. It’s like Microsoft 3D Movie Maker, circa 2017.

MindShow wants you to make your own animated cartoon shows in VR and then put them on YouTube. No, not as a 360 video. A good old-fashioned rectilinear 2D video. Heck, you could even livestream your toon comedy show straight to Twitch if you wanted. The VR world becomes an expansive set, and 2D video an abridged delivery medium.
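Under the hood, that 2D capture is just a regular virtual camera pointed at the VR set. Here’s a minimal pinhole-projection sketch in Python; the field of view and resolution are assumed values for illustration, not any particular tool’s settings.

```python
import math

def project_to_2d(point, fov_deg=60.0, width=1920, height=1080):
    """Project a 3D point in camera space onto a rectilinear 2D frame
    via a standard pinhole model. The VR scene is the 'set'; this
    virtual camera produces the flat video you'd upload to YouTube.

    point: (x, y, z) with z > 0 in front of the camera.
    Returns pixel coordinates, or None if the point is behind the camera.
    """
    x, y, z = point
    if z <= 0:
        return None
    focal = (width / 2) / math.tan(math.radians(fov_deg) / 2)
    px = width / 2 + focal * x / z
    py = height / 2 - focal * y / z  # flip y: image rows grow downward
    return px, py

# A prop one meter ahead and slightly left of the virtual camera
print(project_to_2d((-0.2, 0.1, 1.0)))  # lands left of and above frame center
```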

But in many cases, the abridged 2D version will be better than the native VR experience. Why? Because storytelling in spatial mediums is a work in progress at best, with no install base in sight. Meanwhile, tightly framed narrative content isn’t going anywhere (it’s exploding, rather), and these VR tools give you supercharged workflows and sandboxes for existing mediums like web photo and video, letting you tell stories natively and faster on platforms like Instagram, YouTube and Twitch without the overhead of a large studio or team.

 

Goro Fujita’s A Moment in Time was painted and animated entirely in VR using Quill (a 3D painting tool by Oculus Story Studio).

Meanwhile, 3D/VFX studios and software incumbents will continue to create their own plugins, tools and utilities to bridge emerging VR/AR hardware seamlessly with existing software toolchains and production pipelines, improving productivity. Yes Tom, I DO need to see a live-updating holographic model floating on my desk to boost productivity, Tom. Jeez… 😉


Trend #3: Virtual Production Becomes Increasingly Democratized, Enabling Big-Budget Folks to Push the Ceiling While Bringing Untold Power to the Indie Creative Arsenal

Innovation: Vastly improved and cheaply available depth/video capture, positional tracking, GPU and game engine technology

 

Yes, I know web streaming mixed reality videos of you playing Job Simulator is cool. But that’s merely scratching the surface for what this technology stack fundamentally enables — it makes buying VR equivalent to getting a film production studio and visual effects house all rolled into one box.

In 2018, Avatar-style virtual production will become increasingly democratized as indies and professionals alike start repurposing VR and MR hardware to bridge the gap between real-time and offline rendering.

We’re already seeing this at the high end with The Mill’s Blackbird collaboration with Unreal Engine. Even at the prosumer level, Owlchemy Labs has been showcasing Mixed Reality experiments that demonstrate per-pixel depth sorting, compositing and relighting inside game engines like Unity (yes, Unity). Real-time VFX is a high bar, but we’ll take a giant leap by 2018. Benefits galore for indies and pros alike.
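The core per-pixel depth-sort trick is simple to sketch. Here’s an illustrative Python/NumPy version; the images, depth maps and units are made up, and a real pipeline would run this on the GPU every frame.

```python
import numpy as np

def depth_composite(cam_rgb, cam_depth, cg_rgb, cg_depth):
    """Per-pixel depth sort: at each pixel, keep whichever source —
    the real camera feed or the game engine render — is closer.
    This lets a real player correctly occlude (and be occluded by)
    virtual objects. Shapes: RGB is (H, W, 3), depth is (H, W) in
    meters. All data here is illustrative.
    """
    closer_real = (cam_depth < cg_depth)[..., np.newaxis]
    return np.where(closer_real, cam_rgb, cg_rgb)

# Toy 2x2 example: the real image wins where its depth is smaller
cam_rgb = np.full((2, 2, 3), 200, dtype=np.uint8)  # "real" pixels
cg_rgb = np.full((2, 2, 3), 50, dtype=np.uint8)    # "virtual" pixels
cam_depth = np.array([[1.0, 3.0], [1.0, 3.0]])
cg_depth = np.array([[2.0, 2.0], [2.0, 2.0]])
print(depth_composite(cam_rgb, cam_depth, cg_rgb, cg_depth)[..., 0])
# [[200  50]
#  [200  50]]
```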

 


Trend #4: Depth Sensing Phones Become the Norm in Top-Tier Smartphones, Fostering a Boom in Pass-through AR Experiences

Innovation: Miniaturized Depth Sensors (ToF, Structured Light, Stereo Odometry) + cheap SLAM on platforms like Snapchat and Facebook

Apple will announce some sort of entry into the VR/AR landscape, stemming from its longtime acquisitions of PrimeSense, Metaio, FaceShift, Flyby Media and more. Will it be the iPhone X or a new iPad Pro? Will it be a slide-your-phone-in style device or a passthrough window-into-a-world style of AR? Who knows, but I do know Apple will sell more VR/AR/MR devices on day one than HTC, Oculus, Microsoft and Sony combined. There’s no contesting that. The Apple fandom is real, folks.

 

Look ma, no QR code! Markerless tracking in action.

Google’s got their Tango tech, which we’ve seen in phone and tablet form. Microsoft has HoloLens/Kinect tracking. Leap Motion is publicly working with OEMs to get their tracking hardware on smartphones by the fall.

Someone just has to go all in with their flagship phone. Not the fringe enthusiast/dev phone. Their flagship smartphone. The centerpiece on the mantle, so to speak. My guess is it’ll be Apple, after they, at the very least, introduce a depth sensor on the upcoming iPhone X. Once one of the big dogs jumps on board, it’ll become the de facto standard for top-shelf smartphones, and the rest will follow suit.

Best part is, this newfound depth sensing power will give you more than just bokeh-laden selfies: it’ll enable passthrough mixed reality and untethered virtual reality experiences. However, since I expect the majority of people in 2018 won’t bother sliding their phone into a headset, we’ll see a massive boom in passthrough AR content, especially as Facebook opens the floodgates to its expansive AR platform, giving brands and developers an audience of 2 billion people to engage with.
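What a depth sensor actually hands developers is 3D structure. As a rough illustration, here’s a Python sketch that back-projects a depth map into a point cloud using pinhole intrinsics; the intrinsic values are placeholders, not any real phone’s calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W), in meters, into an (H*W, 3)
    point cloud using pinhole intrinsics. This 3D structure is what
    lets pass-through AR occlude, collide and relight convincingly.
    Intrinsics here are placeholder values, not a real device's.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Tiny 2x2 depth map, hypothetical intrinsics
depth = np.array([[1.0, 1.0], [2.0, 2.0]])
print(depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5))
```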

 

Augmented Cereal — MR/3D designers are going to be in high demand, bringing asinine ideas to a mixed reality near you.

Trend #5: Real-time & Offline Rendering Continue To Collide

So today in computer graphics, when you finally want to deliver, or render, the fruits of your labor, you have two basic categories — real-time rendering and offline (pre-rendered) rendering.

For real-time rendering, think Epic’s Unreal Engine, which can render a CG world and present a pretty decent raster to your monitor at 60 FPS off a consumer nVidia GPU and a 4-core Intel processor. The trade-off here is visual fidelity.

For offline rendering, think Pixar’s RenderMan, a raytracing render engine that can produce extremely high-fidelity images but runs at 0.00006 FPS on 100 blade servers in a nondescript warehouse in LA. The trade-off here is time, and the fact that you need a boatload of it.
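To put those two framerates side by side (using the tongue-in-cheek numbers above, not real benchmarks):

```python
# Back-of-envelope math on the two rendering regimes.
# The offline figure is this article's joke number, not a real
# RenderMan benchmark.
realtime_fps = 60.0
offline_fps = 0.00006

realtime_budget_ms = 1000.0 / realtime_fps      # ~16.7 ms per frame
offline_seconds_per_frame = 1.0 / offline_fps   # ~16,667 s
offline_hours_per_frame = offline_seconds_per_frame / 3600

print(f"real-time budget: {realtime_budget_ms:.1f} ms/frame")
print(f"offline: {offline_hours_per_frame:.1f} hours/frame")
# A single offline frame costs roughly a million times the real-time budget.
```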

This collision course started a long time ago, but the lines are now blurring quickly thanks to Moore’s law and human ingenuity. Traditional offline renderers are taking advantage of the dense compute capability in consumer GPUs. Simultaneously, real-time renderers are adopting techniques and tricks from the offline world and optimizing them for real-time use.
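One concrete flavor of that crossover: real-time engines refining noisy ray-traced samples across frames instead of paying the full offline cost up front. Here’s a toy Python sketch of the accumulation idea, with purely illustrative numbers:

```python
import random

def accumulate(samples_per_frame=1, frames=64):
    """Toy Monte Carlo accumulation: estimate a pixel value by
    averaging noisy per-frame samples, the way real-time engines
    refine ray-traced effects over successive frames. Purely
    illustrative; no real renderer's math is reproduced here.
    """
    true_value = 0.6  # the 'converged' pixel value
    total, count = 0.0, 0
    for _ in range(frames):
        for _ in range(samples_per_frame):
            # each sample is a noisy estimate of the true value
            total += true_value + random.uniform(-0.3, 0.3)
            count += 1
    return total / count  # noise shrinks as frames pile up

random.seed(0)
print(accumulate(frames=1))    # noisy single-frame estimate
print(accumulate(frames=256))  # much closer to 0.6
```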

The gap between real-time and offline rendering will continue to shrink in 2018 as GPU-based rendering solutions like nVidia’s Iray, OTOY’s Octane Render and Redshift3D continue to improve, adding support for features like volumetric and fur rendering.

Meanwhile, CPU-rendering incumbents like Solid Angle’s Arnold will gear up to release GPU implementations of their rendering engines. Simultaneously, real-time game engines like Unity, Unreal and CryEngine strive toward greater cinematic quality.

 

To finish up, we’ll go back to the Mill Blackbird experiments and see the end result in this Chevrolet advert. Just consider for a moment that the CG elements in this entire video were rendered in real time in a game engine. Then just imagine what comes next…