Increasing VR Creativity
[March 2019: This article, and the entire VR/AR Series, is now available bilingually in Chinese and English courtesy of NYU Shanghai.]
[October 2020: Additional VR/AR class projects from NYU Shanghai since publication can be found here.]
The first half-semester was pretty easy on the students. They sat back through several weeks of richly audiovisual presentations on “VR / AR Fundamentals.” They unboxed and set up all five major VR platforms, then curated and experienced dozens of VR titles. They enjoyed the buzz of our “VR / AR News of the Week” class time.
But by the last two weeks of the semester, the students were neck-deep in highly experimental production, working through sleepless weekends, and were both excited and frustrated. We were all in over our heads. I sent frantic emails to my best-and-brightest VR colleagues for advice (which also validated that we were working on hard, timely issues). By showtime at the end of the semester, some things worked and some things didn’t.
That was the Fall 2017 semester.
Then in the Spring 2018 semester, it all played out roughly the same, again.
NYU Shanghai is a satellite campus of New York University, with 1,600 undergraduate students spread out over 18 majors, now in its sixth year. I’m visiting faculty in the Interactive Media Arts major, an undergrad spinoff of NYU’s venerable Interactive Telecommunications Program, long admired for cultivating a diverse, supportive, and fun-loving community. Half of our students are Chinese Nationals and the other half are broadly international. They’re all required to spend their junior year abroad. More than half of my students were female.
Students pushing boundaries is iffy business. On the one hand, one must teach and learn proficiency. Think learning how to play a classical musical instrument. For one thing, you need to learn how to read music. And you need to practice, practice, practice! In VR and AR, we need to learn how to shoot and edit 360 videos and how to compose and render 3D models. But the difference between learning violin and making VR is that our rules haven’t yet been established. We’re not even sure what they are, or what the right tools are. Most of the right tools haven’t even been invented yet.
This is a brief report — and exhibition venue — about teaching two rounds of “VR / AR Fundamentals” at NYU Shanghai. Its purpose is to share what we’ve learned, highs and lows, to help others working in VR and AR, and to encourage others to do the same.
First, some context…
First Word Art / Last Word Art
Years ago, I heard a theory of “first word art” and “last word art.” First word art is when a medium is new and the rules haven’t yet been established. Curators and critics have no metric for evaluation or basis for comparison. At the other end, last word art is after a medium is well-established. Haydn invented the classic symphonic form. First word art. Years later, Beethoven (one of Haydn’s students) composed his Ninth Symphony. Last word art.
Back then, convinced that it was possible to make both first word and last word art, I asked students to bring in examples of works they considered to be both. Once, a student, an older woman returning to school, asked, "doesn't last word art require the test of time?" She convinced me that the answer is yes.
It’s easy to be revisionist on this. “I saw Lady Gaga at the NYU talent show” etc. (She came in third place.) When Abel Gance’s three-screen silent epic Napoleon premiered at the Paris Opera House in 1927, it received a fifteen-minute standing ovation, then disappeared until its revival in the 1980s. (It is a masterpiece, BTW.) Kubrick’s 2001: A Space Odyssey initially flopped until it was rebranded “The Ultimate Trip” (remember, it was 1968).
It’s too early for “Master Classes.”
What these examples suggest is that we need to be a little more cautious on the “Masterpiece” front, and by extension, on how we teach “Master Classes.” Consider the difference between teaching “It’s best to shoot VR with camera height at the same level as subjects’ eyes” and “The relationship between camera height and subjects’ eyes is more complicated in VR than in conventional cinema.” The first statement suggests a sense of “doing it right” while the second statement encourages experimentation. For first word art like VR and AR today, it’s simply too early to declare what’s right. There’s a lot of good experience out there. Share it humbly, because time will be the judge.
“If you think you’re doing research and you’re batting a thousand, you’re doing development in disguise.”
This is a quote from David Liddle, founding head of Interval Research Corporation, a long-term lab funded by billionaire Paul Allen in the 1990s. Prior to Interval, Liddle was a member of Xerox PARC when the personal computer and graphical user interface were being invented there. Research, he insisted, requires risk taking, and risk taking, by definition, means occasional failure.
Liddle was equally fond of saying that “in research environments, creativity is more important than productivity, and in development environments, productivity is more important than creativity.”
So who’s supporting VR / AR research?
Guess what the ratio of VR / AR research gatherings to VR / AR development gatherings is? How about ZERO?
Of course, that’s not entirely true. VR / AR research is prominent at large scientific gatherings like Siggraph and CVPR. The tech giants support impressive tech research, but it’s almost entirely development-driven. The more creative and holistic research is coming from small independent studios like Scatter, Pseudoscience, and xRez. Indies also know how to be more frugal and resourceful than tech giants.
In any high-growth moment (real or imagined), an industry gets flooded with developers, not researchers, seeking to commercialize the low hanging fruit. But if everyone’s cutting down trees for apples, who’s planting the seeds?
A golden opportunity for students.
As I said, students pushing boundaries is risky business. For one thing: they’re students (and at NYU Shanghai, they’re undergrads!). Our program requires everyone to take core courses where video, audio, internet, and custom hardware skills are developed, and we offer a wide range of electives such as animation, robotics, smart fabrics, and drones. It may be hard to compete with Facebook, Google, or Microsoft Research, but the students are fresh, fearless, respect experience, and do their homework.
This opportunity doesn’t need to be only for academia. In the big scheme of things, I’d like to believe, we’re all students.
Assignment 1 — Fall 2017
Make short (one-minute) VR sequences of real people, shot individually, performing synchronized movement and sound, combined interactively.
We wanted to explore the biggest chasm in VR today, the one between VR video and VR games. VR video is camera-based and photorealistic but linear, stored as frames; VR games are computer-graphics-based and cartoon-like but interactive, stored as models. In addition to using two different underlying technologies, VR video and VR games come out of two different cultures. Of course, we’re not alone in this exploration, but most folks, rushing towards development, have taken sides. Our drive isn’t entirely academic: we believe the biggest opportunities will emerge through discoveries made in this largely unexplored space.
“Double Bubble” VR video inside a VR game engine
So we wanted to shoot in VR video, as frames, and move it into a VR game engine, as models. A relatively lightweight approach may be called “double bubble”: 3D 360 video is mapped onto a computer-graphic 360-degree sphere, but with separate views for the left and right eyes, hence the “double.” Double bubble architecture is essentially what 3D VR video apps like YouTube VR, Jaunt, and Within use, but without a game engine and without any interactivity.
Double bubble video inside a game engine enables interactive possibilities otherwise impossible with VR video. One of particular interest is real-time compositing, where individual video elements (such as people) can interactively appear and disappear in the same scene. So with a single VR camera, we could make credible (and incredible) “hyperimages,” with performers artificially appearing and disappearing (and crew artificially disappearing).
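As a rough illustration of the double bubble idea, here is a minimal sketch in plain Python rather than engine code. The over/under stereo frame layout and the UV arithmetic are illustrative assumptions, not a spec: two spheres share one video texture, and each eye samples its own half of the frame.

```python
from dataclasses import dataclass

@dataclass
class EyeSphere:
    eye: str          # "left" or "right"
    v_offset: float   # vertical offset into the over/under stereo frame
    v_scale: float = 0.5

def make_double_bubble():
    """Two spheres sharing one stereo video texture.

    In a real engine, each eye's camera would render only its own
    sphere (e.g. via layer or culling masks); here we just model
    which part of the frame each sphere reads.
    """
    return [EyeSphere("left", 0.5),    # left eye: top half of the frame
            EyeSphere("right", 0.0)]   # right eye: bottom half

def sample_uv(sphere, u, v):
    """Map a sphere-surface UV (both in [0, 1]) into the shared frame."""
    return u, sphere.v_offset + v * sphere.v_scale
```

In an actual game engine the per-eye separation is typically done with layer or culling masks rather than explicit UV math, but the net effect is the same: each eye sees only its own copy of the video sphere.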
Apparently Google could not. They graciously supported the lead-up, a series of studies using the Google/GoPro Odyssey VR camera, then terminated the proposed project. (Why? You’ll have to ask them.) Google did eventually allow us to publish the results as VR Cinematography Studies for Google.
Now, two years later, and with a VR camera generously on loan from Jaunt VR, we continued this exploration. The class formed four groups, and each quickly came up with a simple, elegant, one-word project: “Smog,” “Alter,” “Rondo,” and “Rhythm.” Subjects were shot individually, sometimes in front of a green screen.
Plans for interactivity were scripted. Each group had the VR camera for one week, twice, so each could shoot at least one iteration.
By the final two weeks, the students had all of the 3D 360 video shot and prepped for importing into a game engine, in our case Unreal. Then we hit a wall. Even though each individual double bubble video clip contained only a single person and was 95% “blank,” Unreal dealt with them as full-resolution spheres and we couldn’t “pile them on” as many individual layers as we had expected.
The reason turned out to be based largely on underlying incompatibilities between the video codec (made by and for the video world) and the game engine’s video playback pipeline (made by and for the 3D graphics world). We heard of possible third-party solutions, for example AVPro Video for Unity, for $450, but it was too late.
We had achieved most but not all of our objectives and cut our losses, ending up with non-interactive, linear versions.
Assignment 2— Spring 2018
Make interactive, immersive VR experiences based on “routes” and “destinations”: short narratives allowing users to choose and “travel” from one destination to the next.
Another way that VR video can be made interactive via a game engine is to enable real-time branching: a short clip plays, pauses, choices are visually apparent, the viewer selects one, and the next clip instantly plays.
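The playback loop just described can be sketched as a tiny clip graph. This is a hypothetical sketch in Python (the clip names and hot-spot labels are invented); a real implementation would live inside the game engine’s video player and input system.

```python
# Clip graph: clip name -> list of (hot-spot label, next clip).
# A "destination" clip pauses on its last frame and shows its hot spots.
clips = {
    "lobby":     [("stairs", "stairwell"), ("hall", "hallway")],
    "stairwell": [("poets room", "poets")],
    "hallway":   [("back", "lobby")],
    "poets":     [],  # dead end: no further choices
}

def play(clip, choose):
    """Play clips until a dead end; `choose` picks a hot spot by label."""
    path = [clip]
    while clips[clip]:                          # pause: choices appear
        labels = [label for label, _ in clips[clip]]
        clip = dict(clips[clip])[choose(labels)]  # next clip plays instantly
        path.append(clip)
    return path
```

A viewer gazing at the first hot spot every time would travel `lobby → stairwell → poets`; the `choose` callback stands in for whatever gaze or controller input the engine provides.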
Again simple, no? Branching interactive movies have been around since Expo ’67 Montreal (using film, no less!) but are not possible with conventional VR video viewers which play only linear VR video.
Like the first assignment, we had a model. A few weeks prior, the New York Times posted an absolutely marvelous video called “What Happens Just Before Show Time at the Met Opera, in 12 Rooms You’ll Never See.” A Steadicam was used to shoot continuous journeys from room to room in the Met, up and down stairways and through hallways, stopping at a dozen rooms of interest. The “journey” clips were digitally sped up and the “rooms” clips were shot with the camera relatively stationary. Under scrutiny, cheating can be seen, for example, a slight jump-cut from one closed door to a different door before opening. But this, for us, was to be embraced. (Alfred Hitchcock’s feature-length film Rope is a famously clever single continuous take, with equally clever cheats.)
So the assignment, simply, was to make something like the New York Times video but immersive and interactive. This sort of Google Street View-like “virtual travel” is particularly fertile ground for VR, and personally resonant: as a student I worked on Street View’s original predecessor, MIT’s Aspen Moviemap, and over the years I directed related projects in Paris, San Francisco, Karlsruhe, and Banff.
KandaoVR, a China-based company, generously loaned us an Obsidian VR camera, more portable than the Jaunt, and we bought some very smooth tripod wheels. We agreed to shoot entirely on campus. Teams formed around three themes: Activists, Poets, and Entertainers (perhaps no surprise: an equal split in our program). The interactive scripts were longer and more detailed than last semester’s, with the destinations connected to dollying hallway shots via homemade hot spots indicating choices.
This time, things went smoothly but slowly, very slowly, much slower than we expected, and we had a hard, very hard, deadline: the semester’s end. Near the end, everyone was working overtime, but we missed our target, the internal show, in what turned out to be a very close call.
Like the Fall semester, we found a potential solution, from Brooklyn-based Eevo, though our ambitions were to see how far we could explore video inside a game engine, for example, depth maps.
Unlike the Fall semester, this time we successfully managed to move 3D 360 video into a game engine and achieve interactivity.
A sad and frustrating note: we lost much of the Poets final digital material. Without getting into details, it was a perfect storm and in the end, on my watch. I take full responsibility for it and feel truly shitty over this. We all know the lesson here.
Many years ago I learned an interesting lesson. When I was being a good producer in a research environment, my collaborators would say “wow, this guy knows how to get things done.” But when I was being a good researcher in a development environment, my collaborators were more like “this guy is wasting our money.”
It’s common wisdom that in development, focus is paramount. Sidetracking, particularly if it’s off-topic, is defocusing and to be discouraged. Such is the state of most VR / AR activity today.
So how can we increase the level of research? If you’ve read this far, I hope you’re convinced of its importance, both short term and long term, and about the relationships between research, creativity, and risk-taking.
One strategy is to keep risky, creative research projects relatively small, at least by development standards. One might call this strategy “Leash Length.” Big budget: short leash. Small budget: long leash. It also means using resources more for substance than slickness. Scruffiness is OK.
This is where the arts community comes in. While many in the development community see research purely as a means toward their ends, there are other folks who embrace risky, creative research with passion and with joy.
NYU Shanghai “VR/AR Fundamentals”
Fall 2017 semester students: Alexis Trevizo, Bishka Zareen Chand, Bruce Luo, Cyndi Yudi Jia, Fernando Medina, Jack Hua Zhu, Jack Zhang, Jiayi Wang, Linda Laszlo, Kilian Hauser, Muru Chen, Linda R Yao, Quinn He, Ruonan Chang, Sara Dora Bruszt.
Spring 2018 semester students: Amy Mao, Cindy Yifan Hu, Ellen Yang, Joseph Boyle Meyer, Konrad Krawczyk, Lillian Korinek, Linjie Kang, Maike N Prewett, Mark Etem, Marina Victoria Pascual-Izquierdo, Martin Teng Ma, Mary Xin Yu Gao, Nora Zhener Ma, Olesia Ermilova, Steven Ziqi Wang.
Teaching Assistants: Ada Zhao, David Santiano. Lab Assistant Spring 2018: Kilian Hauser.
With help from: Anna Greenspan, Christian Grewell, Helen Wu, Luka Lujia Luo, Marianne R. Petit, Matthew Belanger, Sean Kelly.
Michael Naimark was recently surprised and amused, and is deeply honored and grateful, to have his work included in “Art in Motion. 100 Masterpieces with and through Media” exhibition at ZKM Karlsruhe, July 2018 — February 2019, along with works by Luis Buñuel, Joseph Beuys, John Cage, Henri Chopin, Marcel Duchamp, Charles & Ray Eames, Harold Edgerton, Auguste & Louis Lumiere, Étienne-Jules Marey, Norman McLaren, Laszlo Moholy-Nagy, Nam June Paik, Dziga Vertov, John Whitney, and others.
He’s also honored and grateful to be working in a culture-balanced and gender-balanced community, with more than 50% female students, with wishes for more balanced masterpiece exhibitions in the future.