Are We There Yet? The Road to the V.R. Museum Tour
If, like me, the excitement gathering behind the next generation of VR devices–led by the Oculus Rift–has allowed your imagination to soar to a near future where an immersive, fully interactive, photo-real museum or gallery experience is available to you anytime, anywhere, from the comfort of your own home, then let me be the first to tell you that we are at the threshold…
…of a cold storage chamber, where you should set that dream on a chilled shelf until some distant point on the time horizon. Because we’ve still got a long, long way to go before we get there.
I came to this conclusion after attending the third meet-up of VRLA, the aptly and efficiently named LA-based virtual reality interest group, this past Saturday. Held at New Deal Studios about 20 minutes northeast of downtown LA, the event oscillated between a convention and a conference, with the two components combining to offer a stereoscopic view of virtual reality's present and near future.
About 20 vendors, from device manufacturers to VR game/experience developers to TV studios, offered demos of their projects on site. Meanwhile, key figures from some of those same exhibitors made TED-style presentations on the venue’s temporary center stage.
As you might guess, I went with an eye toward virtual reality’s implications for visual art making, viewing, and sales. In particular, I had in mind the application I alluded to up top: virtual tours of actual exhibitions, such as a real-time walkthrough of the Whitney’s current blockbuster Jeff Koons retrospective–only digitized and available on demand to anyone with a compatible device.
Virtual tours like this would impose no restrictions on our movements and demand no deviation from reality in terms of either image resolution or visual detail. Instead, they would be nothing short of fully immersive, high fidelity, true-to-life art-viewing experiences.
I first got interested in this concept after reading a recent artnet piece about the CultLab3D scanner, which was used to digitize Renaissance sculptor Antico's Apollo Belvedere at the Frankfurt Museum in late July. According to artnet’s Alexander Forbes, the technology “combines over 80 views of the sculpture and realistically recreates the luminosity and texture of [its] surfaces.”
The end result is a 3D digital model that can theoretically be accessed and appreciated in its full 360-degree glory via any Internet connection on our planet. And in theory, if it were translated to a 3D screen rather than the 2D ones most of us are still limited to using at home today, the virtual viewing experience could reach an unprecedented level of simulated reality.
However, the CultLab3D scans still have one impenetrable boundary: each digital model can only be presented in solitude. Think of it as the high-culture equivalent of car shopping on a site that offers rotatable 360-degree views of each vehicle you pull up. It's more useful and informative than choosing between three static JPEGs, but it still can't rival walking into an actual brick-and-mortar dealership and seeing its entire inventory in scale and in context.
Given what I’d been reading about the Oculus Rift, my brain made the obvious leap: Wouldn’t it be infinitely better if 3D digital models of the world’s greatest artworks were viewable in their actual museum environments, with the actual digitized architecture surrounding them and the actual curated layout of each gallery intact?
I genuinely believe that one day Oculus and the other major VR developers will enable this experience. But one of the primary facts I extracted from VRLA was that, despite some crucial advances, the technology is still many miles away from making my vision possible.
Why? The main roadblock facing developers is the conflict between realistic visual fidelity and full interactivity. Right now, and for the foreseeable future, the tech just doesn't allow the two to coexist.
To be clear, this same friction is inherent in all interactive images, whether they appear in a Mars simulation inside a VR headset or the Kardashian game on your iPad screen. Anytime a CGI environment must react in real time to a player's actions, the hardware and software's combined visual resources need to serve two masters simultaneously: the frame rate and the actual image rendering.
If the frame rate falls beneath a certain floor, you run into what's referred to as a latency problem. In plain English, latency is the delay between an input and its corresponding output: an unacceptable reaction time. And latency problems have been ruthlessly kneecapping virtual reality for years.
In past VR headsets, the technology was simply unable to keep up with the viewer’s inputs. Whip your head to the right, for example, and your virtual world would smear, blur, and/or lag ever so slightly behind what your brain knows it ‘should’ be seeing in terms of your orientation in the rendered environment.
Measured in typical time frames, the delays were minuscule: a few dozen milliseconds at most. Yet to your visual cortex, a few dozen milliseconds are all it takes not only to disrupt the perceived smoothness of a virtual experience but to induce motion-sickness-style headaches or nausea. And of course, that's no way to party, either virtually or realistically.
Latency is more crippling for a 3D VR experience than for a 2D videogame, where the problem simply manifests as chop, slow-down, or momentary freezes in the action. It's still a disruption, just not as severe.
But what this means for developers across all platforms is that their projects' frame-rate demands must always be satisfied first. The actual graphic modeling, meaning how high-resolution or faithful to reality the imagery can be, is strictly determined by the leftovers.
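To make the budget math concrete, here's a back-of-the-envelope sketch in Python. The refresh rates and the overhead figure are illustrative assumptions rather than measured numbers from any particular headset; the point is simply that whatever the tracking pipeline consumes comes straight out of the time available for rendering each frame.

```python
# Back-of-the-envelope frame-budget arithmetic. The refresh rates and
# the per-frame overhead figure are illustrative assumptions, not
# measured specs from any particular headset.

def frame_budget_ms(refresh_hz: float) -> float:
    """Total time available to produce one frame, in milliseconds."""
    return 1000.0 / refresh_hz

# Hypothetical slice of each frame spent on head tracking, lens
# distortion correction, and compositing rather than scene rendering.
TRACKING_OVERHEAD_MS = 3.0

for hz in (60, 75, 90):
    total = frame_budget_ms(hz)
    leftover = total - TRACKING_OVERHEAD_MS
    print(f"{hz:>2} Hz -> {total:5.2f} ms per frame, "
          f"~{leftover:5.2f} ms left for actual image rendering")
```

Note that at higher refresh rates the total budget shrinks, so the fixed tracking cost eats a proportionally bigger share. That's exactly why responsiveness and image quality end up in a tug-of-war.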
Now, through a variety of technological innovations I won't detail here (but which Peter Rubin efficiently and accessibly chronicles in his recent Wired piece), the Oculus Rift, and to a lesser extent in my demo experience Sony's Project Morpheus, have triumphed over the latency problem. As a result, these new devices provide fully responsive, fine-scotch-smooth visual tracking in their virtual environments.
The problem is that such a large proportion of the technology’s graphic resources are committed to the latency challenge that there isn’t much left over for actual image rendering.
That’s not necessarily a deal-breaker for VR in general. It just means that, if the software is generating a digital world from scratch, it’s going to have to be stylized.
How stylized? Well, the official Oculus Rift demo I experienced consisted of playing a game roughly on the graphic level of Mario 64.
I enjoyed it and, to a certain extent, was impressed by it. But the problem for a virtual museum tour is obvious.
If you want to walk through an immersive digital version of Richard Serra's The Matter of Time at the Guggenheim Bilbao, or sit inside James Turrell's Skyspace The Way of Color at Arkansas's Crystal Bridges Museum, then the image quality can't be stylized. It must be photo-realistic. Otherwise the entire experience collapses like a fragile Southern pledge shunned by her top-choice sorority.
The caveat here is that, to virtually recreate an existing physical art-viewing experience, the tech would not be generating CGI from scratch. Instead, it would be relying on actual video footage transposed into a VR-friendly format.
And at first, this seems like a much more achievable end. Replaying 3D video is already an available option in today’s VR devices. But it too still has some serious evolving to do before it enables the types of art-viewing experiences I had in mind–not because of a visual fidelity problem, but because of an interactivity problem.
Tomorrow, if everyone wanted it to happen, a filmmaker could walk into any museum, gallery space, or site-specific installation in the world and shoot a photorealistic virtual walkthrough with either a Freedom 360 rig or a Jaunt VR camera. But while each option is fascinating in one sense, it’s frustrating in another.
Freedom 360 is not an actual camera. It's a specialized rig that allows a filmmaker to combine a series of GoPros into a kind of all-seeing eye. Each camera captures only one angle, but through the rig's design, the total footage collected can be stitched together in post into a seamless 360-degree view of whatever experience the operator filmed.
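For a rough sense of why the geometry works out, consider the coverage arithmetic. The camera count and field-of-view figures below are hypothetical stand-ins (a real rig also has to cover the vertical axis, which this sketch ignores); the point is that the individual views must overlap if the seams are to be blended away in post.

```python
# Rough horizontal-coverage check for a multi-camera 360 rig.
# The camera count and per-camera field of view are hypothetical.

num_cameras = 6
fov_per_camera_deg = 120.0  # assumed horizontal field of view per GoPro

total_coverage = num_cameras * fov_per_camera_deg
overlap_budget = total_coverage - 360.0  # surplus available for blending

print(f"Combined horizontal coverage: {total_coverage:.0f} degrees")
print(f"Overlap left for blending seams: {overlap_budget:.0f} degrees")
```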
The most intriguing aspect of this innovation is that it gives a viewer the ability to change her perspective in real time, just as if she were looking around her physical environment. To get a sense of what's possible, try this video shot from a helicopter as it strafes over the top of an active volcano, a rare phenomenon you can now see from as many angles as if you'd been magically floating outside the chopper like some sight-seeing guardian angel the day it was filmed.
There are two current flaws in Freedom 360 footage, though. The first is distortion. It's not as pronounced in the above example as it is in others on the firm's video blog, but you can still see a fish-eye lens effect warping the edges of the view as you change perspectives in the chopper footage.
The bigger obstacle to a fully interactive museum experience, though, is that you the viewer cannot control your movement through a Freedom 360 virtual environment. You can only control the direction of your gaze.
Think of it like being strapped into a slow-moving amusement park ride. You’re able to crane your neck around and see your surroundings, but only while your body is propelled along a set track at a speed and in a direction over which you have no agency. Meanwhile, the fish-eye distortion also creates the effect that you’re viewing the entire experience through your apartment’s peephole.
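In code terms, the limitation is easy to see. Below is a minimal sketch of how a 360-degree player might map a viewer's gaze onto a stitched equirectangular frame; the function name and the projection details are my own illustrative assumptions, not anything from Freedom 360's actual software. What matters is what's missing from the signature: the viewer's position is never an input, so rotation is the only freedom the footage can honor.

```python
def gaze_to_pixel(yaw_deg: float, pitch_deg: float,
                  frame_w: int, frame_h: int) -> tuple[int, int]:
    """Map a gaze direction onto a stitched equirectangular frame.

    Yaw (longitude) wraps across the full frame width; pitch
    (latitude) spans the frame height. Illustrative only.
    """
    u = int(((yaw_deg % 360.0) / 360.0) * frame_w) % frame_w
    v = int(((90.0 - pitch_deg) / 180.0) * frame_h)
    return u, min(max(v, 0), frame_h - 1)

# Turning your head changes the lookup...
print(gaze_to_pixel(0.0, 0.0, 3840, 1920))   # straight ahead
print(gaze_to_pixel(90.0, 0.0, 3840, 1920))  # head whipped to the right
# ...but there is no position parameter to pass: walking forward in the
# real room cannot change which captured viewpoint you sample.
```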
In contrast, the Jaunt camera eschews the distortion inherent in Freedom 360 footage. But that extra level of photographic fidelity doesn’t eliminate the motion restrictions. So you’re still strapped into that same amusement park ride–you just get to look around with clear eyes.
In total, then, even today's most cutting-edge VR tech leaves us with a strict either/or choice: a fully interactive world rendered in obviously stylized imagery, or a 3D photographic replay with which we can only minimally interact. The totally autonomous virtual museum-going experience I envisioned won't be possible until those two channels converge.
Make no mistake: I’m extremely interested in seeing where VR goes next, especially in relation to the visual arts. I’ll definitely be writing more about it as the technology continues to develop, and I think there are a whole range of potentially valuable and exciting art applications between now and the virtual museum tours of my dreams.
But for the foreseeable future, my experience with the technology suggests that we should all adjust our expectations–especially me. Moore’s Law will no doubt slingshot virtual reality forward at tremendous velocity. It’s just that, despite the critical advances in the new wave of VR headsets, the distance to photo-real, fully interactive virtual museum experiences is still very much an unknown quantity.