VR Cinematography Studies for Google

Michael Naimark
Jun 23, 2016 · 11 min read

Exploring how people are represented in VR

with David Lawrence and James McKee

Last July, shortly after Google announced its Jump VR camera collaboration with GoPro, Google’s head of VR Clay Bavor called me in to meet. The result was a short-term, project-based artist residency, their first (using “early early” cameras and algorithms with artifacts that aren’t representative of the current Jump).

I’m of the artist-as-bridgebuilder school and found lots of potential symbioses with old and new friends inside Google VR, around Google, and externally. Consequently, I proposed as a project a “community-based ethnographic VR experiment”: thinking globally, but with a fast, lean, local prototype.

Areas of Exploration

Several timely and relevant areas of exploration emerged.

1. Close-up VR imagery from camera-originated material is awesome but tricky.

Headset-based VR is uniquely suited for experiences in the nearfield, “intimate zone”, where image and sound seem within or near arm’s reach. Screen-based immersion such as 3D movies is not very good in this zone, and Hollywood long ago learned to keep most of the action behind the screen (except for “gotchas” like blood, bats, and broomsticks). For VR made from computer models, such as most games, getting the needed different viewpoints for each eye is trivial, but when the material comes from cameras, getting these different viewpoints is much more challenging.

2. High quality spatial sound is as important as image.

Shooting with panoramic camera rigs often defaults to recording with panoramic microphone rigs, usually an omni-directional thingy on top of the camera, resulting in compromised sound. Using human sound recordists with shotgun mics or booms, or mic’ing every individual subject in view, is far superior but problematic. And if you do, how do you hide the recordists and their gear?

3. Filling (and unfilling) the panoramic sphere has novel challenges.

Filling the full 360 degree view with interesting material, and unfilling it of uninteresting material, has its own unique challenges and opportunities. Early on, many camera-based VR filmmakers felt compelled to digitally fill in the “nadir hole,” the region at the bottom of the panoramic sphere where the camera rig either couldn’t see or, if it could, saw the tripod.

4. The “hyperimage,” a Holy Grail of interactive media, is well-suited for VR.

So if we’re good at digitally filling the nadir hole with nearby ground or floor imagery, how far can we go? The “hyperimage,” an artificially overpopulated scene where “more” is “happening,” has been a holy grail of interactive media from the beginning. Each element can serve as an interactive link, and the more links, the richer the experience. Think interactive Bruegel.

5. Metadata-based interactivity, another Holy Grail, may also be well-suited for VR.

A related Holy Grail is “directed interactivity,” where individual scenes or clips or other media are all parsed and tagged with metadata to allow interconnection with some sort of narrative or direction, more compelling than a random walk. This grail, which includes “interactive movies” and “database art,” has its roots in the grand databases developed over many decades in anthropology, such as George Murdock’s Ethnographic Atlas and, most notably, Alan Lomax’s Global Jukebox project. (Alan was a mentor and personal friend.)

6. Community buy-in is essential.

Finally, as VR cameras begin to proliferate, the vision of a “One Earth Model” begins to emerge, spanning from entertainment and gaming to tourism and travel to ecology and activism. For this (attitude alert!), community buy-in is essential: production as a collaboration, with control shared between producers and subjects, at its best in the spirit of cinéma vérité pioneers like Jean Rouch and Richard Leacock. (Ricky was also a mentor and personal friend.) Without community buy-in, in the end, the loss will be ours.

Starting with Studies

It was no secret last fall that the Google / GoPro VR launch had been delayed, and being under time constraints, I proposed getting the Jump VR camera rig for a day and shooting some studies. I’m a big fan of studies (think Muybridge) and frankly am bewildered by how little the VR community has understood their value and leverage. I was also in a good position for this: the joke was that while everyone else was flying off to shoot VR in Timbuktu, I had already done that and was happy shooting, literally, in Google’s backyard.

Michael with a stereo-panoramic motion picture rig in Timbuktu in 1995.

For this, I enlisted the talents of a couple other VR OG types, David Lawrence and James McKee. We’ve worked together on and off since the Apple / Lucasfilm Multimedia Lab days c. 1990, and together the three of us represent over 85 collective years of experience working with cutting-edge experimental media. Jim and David also made the “fantastic” early VR radio piece based on “Cyberthon” (that’s another story). Most recently David produced “Farm”, a stereoscopic art video with San Francisco artist Dale Hoyt and Jim produced the spatial installation audio for Chinese artist Ai Weiwei’s “@Large” show on Alcatraz.

We took over the “Big Chairs Park” on the Google campus and, using wide blue masking tape, staked off the ground into 12 “one hour” radials with concentric rings at 1-meter intervals out to 5 meters and beyond, and got to work.

David and Jim with the Google / GoPro Jump VR camera rig at Google in 2015.

Our intention was to explore how people are represented in VR.

The 360 degree by 180 degree “equirectangular” video format turns the radial lines into parallel lines. Here’s the left eye view from a stereo pair. (Yep, that’s a 360 degree image of the same scene above!)
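
Why the radials become parallel is plain from the projection math: equirectangular maps azimuth (yaw) linearly to x and elevation (pitch) linearly to y, so every point along one taped radial, sitting at a constant azimuth from the camera, lands in the same pixel column. Here’s a minimal sketch; the frame size and rig height are hypothetical, not Jump specs:

```python
import math

def equirect_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """Map a viewing direction to equirectangular pixel coordinates:
    yaw (azimuth) maps linearly to x, pitch (elevation) linearly to y."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return round(x), round(y)

# Points along one taped ground radial share a constant azimuth, so they
# all land in the same x column: a vertical line, parallel to the others.
rig_height_m = 1.5  # hypothetical camera height above the ground
for dist_m in (1, 2, 3, 4, 5):
    pitch = -math.degrees(math.atan2(rig_height_m, dist_m))  # below horizon
    print(equirect_pixel(yaw_deg=30.0, pitch_deg=pitch))     # x is always 2240
```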

Our primary goal was to explore how people are represented in VR and to produce some modest, solid studies that would be immediately useful to students and folks getting started in VR. Something both practical and provocative. And our message to you is: Surprise us!

Study #1: Close-Up Tests

As mentioned, getting the different viewpoints needed for each eye is problematic with camera-based material in the nearfield, precisely where those viewpoints differ most. For farfield imagery like landscapes, it hardly matters, since both eyes see essentially the same viewpoint.

Capturing these different nearfield viewpoints requires special panoramic cameras, which fall into three categories: 1) stereo-panoramic camera rigs with paired cameras, which are instantly viewable but with potentially gnarly seam lines between the stereo camera pairs; 2) unpaired stereo-panoramic camera rigs, which require computation to produce stereo pairs for viewing; and 3) panoramic camera rigs with additional magic such as laser range-finding (LIDAR), handiwork (such as 2D-to-3D conversion), or clever computation (much yet to be invented).

Please don’t get me started on the state of VR cameras today (rant alert!). The Hollywood Reporter recently ran a story entitled “Virtual Reality Stitching Can Cost $10,000 Per Finished Minute.” This is largely because many folks building camera rigs have failed to do their homework. (Ask them what a nodal point is.) “Light Fields” is currently hot but, like “holograms,” even the experts are using the term more loosely than its technical definition.

The Google / GoPro Jump VR camera is an unpaired camera rig consisting of 16 GoPro cameras equally spaced around the “equator.” Because the cameras are unpaired, the wizards at Google have developed a cloud-based stitching algorithm that automatically converts the footage into stereo pairs for stereo-panoramic viewing. At the time of our studies, they claimed to be able to properly stitch imagery as close as 1 meter from the camera. We put it to the test.

If you look closely, the 1 meter shot is pretty good. Actually, we were pleasantly surprised at how good the 0.5 meter shot looked, with only minor noticeable artifacts.

Study #2: Recognizability

We had a practical agenda here: if we plan to shoot VR in the real world with real people, we may need film permits from local authorities, and we’d like to be able to confidently tell them how much space we need to “rent” by knowing where faces become unrecognizable. Remember, there is no zooming in VR headsets; you can only move or dolly the camera rig forward. So this should be a fairly simple number to determine.

The number, it turns out, is 5. :) About 5 meters radius from the camera rig. See for yourself.

Of course, this number depends not only on the resolution of the camera rig, but also on the storage resolution and viewer resolution. These numbers will all change, but gradually and predictably.
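
To make the dependency concrete, here’s a back-of-envelope calculation; the panorama width and face size below are assumptions for illustration, not measured Jump numbers. The pixels available for a face shrink roughly linearly with distance, and at 5 meters only a few dozen remain:

```python
import math

pano_width_px = 8192   # assumed stitched panorama width (pixels per 360 degrees)
face_width_m = 0.15    # rough width of a human face

for dist_m in (1, 2, 3, 4, 5):
    # Visual angle subtended by the face, then its share of the panorama.
    angle = math.degrees(2 * math.atan(face_width_m / (2 * dist_m)))
    px = angle / 360.0 * pano_width_px
    print(f"{dist_m} m: ~{angle:.1f} deg of view, ~{px:.0f} px across the face")
```

Swap in different capture, storage, or viewer resolutions and the “unrecognizable” radius shifts accordingly.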

Study #3: Camera Height and Eyeline

It’s long been known that imagery of people is greatly influenced by the relationship between the height of the subject and the height of the camera, often referred to as “eyeline.” When the camera is below the eyeline, the subject looks “privileged,” and when the camera is above the eyeline, the viewer feels privileged. We were surprised to learn how much this is amplified in VR, and found a very specific reason why.

The reason why is called orthoscopy (tech alert, hang with us!). An image is orthoscopically correct when it appears at the same scale and direction as it was captured. Turns out this is always true with VR but rarely true with everyday images. (The punchline to a Picasso anecdote, after a critic shows the artist a small photo of his girlfriend, is “she’s beautiful but she’s so tiny!”) In VR, when the viewer pans left 90 degrees, the image updates left 90 degrees as well, also part of being orthoscopically correct.

The amplification is because viewing VR images requires the viewer to physically pan and tilt their head accordingly, to be “embodied,” which is not the case in screen-based cinema. Theater audiences viewing a close-up in the center of a movie screen simply look at the center of the movie screen, regardless of the eyeline from which the subject was shot. Because of this, we suspect eyeline and camera height are much more critical in VR than in screen-based media.
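
Here’s a small worked example of why camera height bites harder in VR (the heights and distance are hypothetical): because playback is orthoscopic, the vertical angle between the viewer’s horizon and the subject’s eyes is fixed at capture by camera height alone, and the viewer must physically tilt their head by exactly that angle.

```python
import math

def eyeline_offset_deg(subject_eye_height_m, camera_height_m, distance_m):
    """Vertical angle at which the subject's eyes appear above (+) or
    below (-) the viewer's horizon; in orthoscopic VR this is fixed at
    capture by camera height and can't be reframed afterward."""
    return math.degrees(math.atan2(subject_eye_height_m - camera_height_m,
                                   distance_m))

# A 1.6 m eyeline seen from 2 m away, at three hypothetical rig heights:
for rig_h in (1.2, 1.6, 2.0):
    print(f"rig at {rig_h} m: {eyeline_offset_deg(1.6, rig_h, 2.0):+.1f} deg")
# rig below the eyeline -> viewer tilts up (+11.3 deg): subject "privileged"
# rig above the eyeline -> viewer tilts down (-11.3 deg): viewer "privileged"
```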

Study #4: First Person / Third Person Solo Speaking

While television journalists and anchorpeople, onstage narrators and comedians, and many video games “speak to you” from a first person point of view, practically all narrative cinema is intentionally shot from a third person POV, with talent directed not to look at the camera (which, in turn, serves as a “fly on the wall”). First person POV is so rare in cinema that there’s a Wikipedia page dedicated to “Films shot from the first-person perspective” (it currently lists 33). And there’s a famous shot early in “Apocalypse Now” where director Francis Coppola, cameoing as a television news director filming beach combat, screams to the soldiers, “Don’t look at the camera!” Curiously, on-camera interviews fall somewhere in between, as exemplified by filmmaker Errol Morris’s “Interrotron” invention to maintain eye contact with interviewees.

So where will POV be with VR? We shot a little test.

It was apparent to us, especially when viewed in VR, that at least when someone appears to be speaking to the camera, they ought to be looking into the camera.

(A note about the “thumbs up / thumbs down” notations, please take these with a grain of salt. Our intention is not to provide answers as much as provocations. Remember this was an artist residency.)

Study #5: First Person / Third Person Two-Shot Dialogue

First, please take a look at this sequence. Keep in mind that Zach and Todd are always looking at each other, as exhibited by our VR viewer’s head swinging back and forth.

Here’s what we see going on.

In shot 1, the camera is literally right between Zach and Todd, and they’re looking “through” it. We’ve seen and heard of VR productions with, say, several people sitting around a table in dialogue shot with a VR camera in the middle. While certainly worthy of experimentation, in our case we found this perspective unrealistic and unsatisfying.

Shot 2 is interesting on several levels. For one thing, neither Zach nor Todd is now looking at or through the camera, which has become a third-person fly on the wall. Remember, they’re really looking at each other. And they’re still head-swinging far apart.

This perspective is “almost” unique to VR. In “Lawrence of Arabia,” an early widescreen epic, the entry scene of the Omar Sharif character ends with Sharif in dialogue with Peter O’Toole at opposite sides of a very wide screen, at the time a unique and revolutionary composition. And in “How the West Was Won,” shot in 3-camera Cinerama for a giant curved screen, the talent often didn’t appear to be looking at each other at all, just like you see here.

Shot 3 is similar to shot 2, only less so, while shot 4 is so much like a conventional “2-shot” that our VR viewer doesn’t even need to move her head anymore.

Study #6: Directed Attention

This little study speaks for itself.

Magicians and illusionists know the trick. Legend has it that Houdini was so brazen that, an instant before promising to transform his lovely assistant into a bag of sand, a planted shill in the back of the theater would scream in surprise, redirecting the audience to instinctively turn around. Then, in plain sight, the assistant would jump out of Houdini’s arms and a stagehand would replace her with a bag of sand. Then trumpets (from the front) and voilà, magic!

Our little study here is perhaps equally a cheap shot. If there were non-singular action, for example several different people-of-interest criss-crossing each other (think Altman films), we might not be as “locked in.” This may, however, challenge the VR community’s current obsession with needing to fill the full 360 degree frame all the time.

Study #7: Hyper-Real Compositing

Here’s a looping video where everyone was shot separately and digitally composited together (using Adobe After Effects).

If you think the shadows don’t match, you’d be wrong. Remember, this is in 360 equirectangular format; they do match in VR (you have to see it, and hopefully will some time soon). Everyone was shot within a relatively narrow timeframe, and the shadows look pretty credible, as does the “group” of people. We may call this a “credible” hyperimage.
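
For the curious, the compositing idea reduces to ordinary alpha matting, just performed on equirectangular frames. This is a minimal sketch of the concept, not our actual After Effects project; the array shapes and mattes are assumed:

```python
import numpy as np

def composite(plate, layers):
    """plate: HxWx3 empty-scene equirectangular frame; layers: a list of
    (frame, alpha) pairs, where alpha is an HxW matte in [0, 1] isolating
    one separately shot subject."""
    out = plate.astype(np.float32)
    for frame, alpha in layers:
        a = alpha[..., None].astype(np.float32)   # broadcast matte over RGB
        out = a * frame.astype(np.float32) + (1.0 - a) * out
    return np.clip(out, 0, 255).astype(np.uint8)

# Because every take shares one camera position and one projection, each
# pasted subject -- shadow included -- lands exactly where it stood in the
# real scene, which is why the shadows line up when viewed in VR.
```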

Here’s something a little less credible.

From a purely technical perspective, it’s credible (except we intentionally slowed it down and remixed the sound for effect). It was, incidentally, shot in about 5 minutes, with our subject walking up and down each “hour” radial progressively. But since she’s the same subject, it’s an unreal, impossible shot. We may call this an “incredible” hyperimage.

And finally, here’s something combining several aforementioned elements (again, meant to be viewed in VR).

Some notes:

- This is a credible hyperimage, with everyone artificially added via digital compositing.

- Jim, the sound guy, has disappeared, digitally composited out of the scene.

- All subjects were mic’ed separately and a pro-level spatial sound mix was made in post-production.

- The subjects, all shot separately, appear synchronized in both words and action.

Well now, imagine what you could do with all THAT!

Acknowledgements

We’d like to thank our project proposal’s content advisors: Tressa Berman, Author, Anthropologist; William H. Durham, Professor, Department of Anthropology, Stanford University; Judith Fitzpatrick, Consultant, Anthropologist; David Evan Harris, Founder, Global Lives Project; Kevin Kelly, Author and Senior Maverick, Wired Magazine; and Anna Lomax Wood, Director, Global Jukebox Project.

We’d also like to thank our proposal’s production advisors: James Cha and Romalyn Schmalz of North Beach Bauhaus; and Roman Coppola, Susie Wrenn, and Michael Zakin of American Zoetrope / The Director’s Bureau.

And we’d like to warmly thank our friends in the communities of YouTube, Google Research, and Google VR.


Michael Naimark has worked in immersive and interactive media for over four decades.