VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste, Mind)

Michael Naimark
15 min read · Feb 16, 2018

Fooling the non-audiovisual senses.

[March 2019: This article, and the entire VR/AR Series, is now available bilingually in Chinese and English courtesy of NYU Shanghai.]

Welcome to #4 of 6 weekly posts in sync with my “VR / AR Fundamentals” class at NYU Shanghai, where I’m currently visiting faculty. The class was billed as “partially technical but in an understandable way for general liberal arts students.”

Hope you find these helpful. Your feedback is welcome!

It’s hard to hear panels, presentations, pitches, or cocktail chatter about VR and AR without the topics of touching, smelling, and tasting coming up. Indeed, this is part of “real reality.” Today we’ll overview the non-audiovisual senses, including “Mind as sensor.”

Let’s begin though, with the most ubiquitous non-audiovisual sense.

Haptics

Haptics & Force-Feedback

Haptics is broadly defined as recreating the sense of touch through forces, vibrations, and motions, and force-feedback is defined as adding a sense of force to an input device such as a joystick or steering wheel. The sense of touch, meaning on the skin, is rich and complex, with sensations ranging from featherlight touch to handshakes to massage to pain. And touch goes beyond skin deep.

Fremitus is a word rarely used outside of medicine, but it's a feeling we all know: vibrations transmitted through the body, for example, the feeling of a powerful sub-woofer in the chest cavity. Body vibration devices today range from tiny vibrators in our smartphones to heftier add-ons for game controllers like Nintendo's Rumble Pak to VR chairs fitted with monster tactile "bass shakers."

Seats & Motion Platforms

More effective than rumbles and shakers is when the seat or platform physically moves, such as with flight simulators or motion rides. In addition to touch-based haptics, our vestibular system inside our inner ear senses rotation and movement and provides powerful motion cues.

Frasca Helicopter Simulator — VR for one person

Flight simulators do a very good job adding haptics to an immersive audiovisual experience, but they're not particularly cheap. And still, they're not perfect. For example, what if the helicopter student pulls up and the simulator tilts up, then quickly pulls up again and the simulator tilts up more, then again? At some point, the powerful actuator legs reach their limit. The solution is called "simulator creep": whenever the simulator is off-center but not in active motion, the system "creeps" it back to center at a speed below the threshold of detectability.
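To make the idea concrete, here's a minimal sketch of a single-axis creep step in Python; the rate and threshold constants are illustrative assumptions, not real simulator specifications.

```python
# Minimal sketch of a "simulator creep" step for one tilt axis.
# CREEP_RATE and ACTIVE_EPS are assumed values for illustration only.

CREEP_RATE = 0.1   # deg/s, assumed to be below the rider's detection threshold
ACTIVE_EPS = 0.5   # deg/s, commanded rates above this count as "active motion"

def creep_step(position_deg, commanded_rate_deg_s, dt_s):
    """Return the new actuator position after one control tick."""
    if abs(commanded_rate_deg_s) > ACTIVE_EPS:
        # Actively cueing a motion: follow the command, no creep.
        return position_deg + commanded_rate_deg_s * dt_s
    # Otherwise drift back toward center, never faster than CREEP_RATE.
    step = min(CREEP_RATE * dt_s, abs(position_deg))
    return position_deg - step if position_deg > 0 else position_deg + step

# Example: a platform parked at +8 degrees quietly returns to center.
pos = 8.0
for _ in range(100):
    pos = creep_step(pos, commanded_rate_deg_s=0.0, dt_s=1.0)
print(pos)  # 0.0
```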

Douglas Trumbull’s RideFilm simulator for the Back to the Future Ride, Universal Studios Florida, 1991

Slightly more cost-effective simulators have been built for groups, for the entertainment industry. One of the most novel simulators uses an "orthogonal motion base" invented by special effects and high-frame-rate wizard Douglas Trumbull. It solves the problem that when the front of a group platform goes down, the back goes up, by keeping the platform flat and moving it only orthogonally (up/down, left/right, front/back). So when the movie shot from a jet fighter nosedives, rather than tilting the platform down (which brings the rear of the platform up), the entire platform simply moves down while staying flat. It seems like it shouldn't work, but it does. Oh, and "slightly more cost-effective" still isn't cheap: the Back to the Future Ride cost $40 million.

There's another way to make VR experiences with the thrill and intensity of high G forces: take over a roller coaster. It's a simple idea. First, select a pre-existing roller coaster. Next, measure all of the movements and accelerations during the ride; this can practically be done with the vestibular-like sensors in everyday smartphones. Then, create a computer graphic VR experience in sync with the movements and accelerations of the ride. And if you're asking roller coaster riders to wear VR headsets, why not also give them custom controllers, like a space gun, and add a level of interactivity to the experience?

Here it is.

Dare Devil Dive Virtual Reality Roller Coaster, Six Flags Over Georgia

You may think this is a little crazy, and of course it’s another not-very-cheap solution. In the early VR days, the VR-on-rollercoaster idea occasionally came up almost as a stoner joke. But check out the video. So far it’s been the biggest hit in my class. Six Flags intends to repurpose nine of its rollercoasters for VR.
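Mechanically, the measure-then-sync idea is simple. Here's a minimal sketch, assuming a single launch trigger detected from an acceleration spike and a pre-authored animation played back by elapsed time; the threshold and frame rate are illustrative, not how any actual ride system works.

```python
# Minimal sketch of syncing a pre-authored VR ride to a real coaster.
# LAUNCH_THRESHOLD and FRAMES_PER_SECOND are illustrative assumptions.

LAUNCH_THRESHOLD = 12.0   # m/s^2, an assumed spike marking the start of the ride
FRAMES_PER_SECOND = 90    # typical VR headset refresh rate

def launched(accel_magnitude_m_s2):
    """Detect the start of the ride from a sharp acceleration spike."""
    return accel_magnitude_m_s2 > LAUNCH_THRESHOLD

def scene_frame(seconds_since_launch):
    """Map wall-clock time since launch to the authored animation frame."""
    return int(seconds_since_launch * FRAMES_PER_SECOND)

# Example: 3.5 seconds after launch the headset should show frame 315.
print(launched(15.2), scene_frame(3.5))
```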

Seats and motion platforms for VR are a very ripe area for creative and inexpensive alternatives. I’d put my money on the health and fitness industry and on the arts community.

4D

"4D Film" is bandied about as something like the Holodeck — an ultimate, multi-sensory immersive experience. Experiences that advertise as 4D often add wind, rain, heat, smells, smoke, air bubbles, and back ticklers to stereoscopic imagery, multi-channel sound, and haptic seats.

4D, unlike holography, has no scientific or technical definition. And nobody owns it. It’s basically a marketing term.

Skin as Input

Going the other direction, using the skin as an information channel, there has been solid research around "seeing" and "reading" through the skin.

Haptic Display (https://lmts.epfl.ch/haptics) and Electrostatic Display (https://tcnl.bme.wisc.edu/projects/)

“Haptic displays” physically move matrices of elements, sometimes called taxels, that can be easily seen and felt, while “electrostatic displays” have no moving parts and typically provide a sensation of touch rather than something physical. Such displays can be used as an “alternative eye” for the vision impaired.
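As a rough illustration of how such a display might be driven (a sketch under assumed specs, not any particular product), the idea is to downsample an image to the taxel grid and map brightness to pin height:

```python
# Minimal sketch: downsample an image to a taxel grid and map brightness
# to pin height. Grid size and height range are illustrative assumptions.

import numpy as np

GRID_ROWS, GRID_COLS = 16, 16
MAX_HEIGHT_MM = 2.0

def image_to_taxels(image):
    """Average image blocks down to the taxel grid; return pin heights in mm."""
    rows = np.array_split(np.arange(image.shape[0]), GRID_ROWS)
    cols = np.array_split(np.arange(image.shape[1]), GRID_COLS)
    heights = np.zeros((GRID_ROWS, GRID_COLS))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            heights[i, j] = image[np.ix_(r, c)].mean() / 255.0 * MAX_HEIGHT_MM
    return heights

# Example: a 64x64 horizontal gradient becomes a ramp of pin heights.
demo = np.tile(np.linspace(0, 255, 64), (64, 1))
print(image_to_taxels(demo)[0, :4].round(2))
```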

I polled my class: if you wore a high-resolution dynamic display that covered the skin on your belly or your back with something that could be felt, would you "recognize" a landscape, a walk in the park, or a talking, moving face? The consensus was a strong "maybe."

Last year, Facebook demo'd an equally provocative system for "skin hearing." We know it uses a system of actuators tuned to 16 frequency bands worn on the arm, and the demo video suggests it's possible to discern three-by-three word sequences. The implication (as we understand it) is that users don't learn in the sense of intellectual study, like braille, but intuit over time: learning simply happens.
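As a rough sketch of the general idea (not Facebook's actual system), one could split each audio frame into 16 frequency bands and map each band's energy to the drive level of one actuator; the sample rate, band edges, and actuator interface below are all assumptions.

```python
# Minimal sketch: split audio into 16 bands, one drive level per actuator.
# Sample rate and equal-width band split are illustrative assumptions.

import numpy as np

NUM_BANDS = 16
SAMPLE_RATE = 16000  # Hz

def actuator_levels(audio_frame):
    """Return 16 normalized (0..1) drive levels for one frame of audio."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    bands = np.array_split(spectrum, NUM_BANDS)
    energies = np.array([band.mean() for band in bands])
    peak = energies.max()
    return energies / peak if peak > 0 else energies

# Example: a 20 ms frame of a 440 Hz tone drives mostly the lowest band.
t = np.arange(int(0.02 * SAMPLE_RATE)) / SAMPLE_RATE
print(actuator_levels(np.sin(2 * np.pi * 440 * t)).round(2))
```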

On the full-body front, we're beginning to see complete VR haptic suits, most using body-part-specific vibrators that operate in sync with a VR game.

Then there's haptics and VR sex. Now, I'm teaching an undergraduate class in China (mostly women), but I did show, without commentary, two published stories: Oculus VR Founder Wants To Make VR Porn With An "Industrial Robotic Arm" and Man has 'sex' with inflatable torso as he demonstrates bizarre adult virtual reality game in Japan (from the UK Daily Mirror no less, with an "explicit" video).

Hands & Controllers

Most of our everyday experience with the haptic sense is through our hands and controllers — game controllers, smartphones, key fobs, etc. — whose physical design (shapes, sizes, knobs, and buttons) could be considered haptic in nature.

Though I'm jumping ahead a bit to both Input & Effectors (next session) and Live & Social (final session), here's possibly the simplest and most elegant haptic hand demo ever.

inTouch, MIT Media Lab, 1998

In 1998, MIT Media Lab Professor Hiroshi Ishii and students Scott Brave and Andrew Dahley presented "inTouch," a system consisting of two handsomely made identical devices, each with three parallel wooden dowels. The two devices were connected live, so when one or more dowels moved on one device, the corresponding dowels moved in sync on the other.
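To picture what that live connection might look like under the hood, here's a minimal sketch of a spring-like bilateral coupling between paired dowels; the gain and the motor interface are assumptions for illustration, not the Media Lab's actual design.

```python
# Minimal sketch of bilateral coupling: each dowel pair is pulled toward a
# shared angle by a virtual spring. COUPLING_GAIN is an assumed constant.

COUPLING_GAIN = 4.0  # virtual spring stiffness between paired dowels

def coupling_torques(angles_a, angles_b):
    """Torques that pull each pair of dowels toward the same angle."""
    torques_a = [COUPLING_GAIN * (b - a) for a, b in zip(angles_a, angles_b)]
    torques_b = [-t for t in torques_a]  # equal and opposite on the far device
    return torques_a, torques_b

# Example: rolling dowel 0 on device A drags dowel 0 on device B toward it
# (and device A feels the reaction).
print(coupling_torques([0.5, 0.0, 0.0], [0.0, 0.0, 0.0]))
```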

A "party trick" was to place both devices side by side with the electronics sections facing each other and covered with a napkin. The dowels behaved exactly as if they were three long dowels. Then the napkin was removed and the units were separated, and they still worked.

inTouch demonstrates the power and nuances of hands, touch, and movement. It's been said that well-acquainted couples can "recognize" each other using inTouch remotely.

Touching Real Things / “Mixed Reality”

One of the first demos to come out of Scott Fisher's VR lab at NASA in the late 1980s was a "surprise haptic demo," made with Stanford intern Mark Bolas. They had made a 3D computer model of the VR lab space, and after visitors experienced the standard demos, they'd switch from aircraft and elevators to the actual room and say "see the table in front of you? Touch it." The visitor's VR hand (via VPL's DataGlove) would reach out in a minimal wireframe world, viewed through then state-of-the-art VR headgear which Bolas calculated was equivalent to 20/200 vision, and, lo and behold, touch the actual physical table.

Today this phenomenon is sometimes called "mixed reality" (MR), a term now clouded by Microsoft's different use of it. The approach involves building a location-based space with physical props whose shapes approximate the virtual world the viewers will see in their VR headsets. The first major public appearance of this kind of MR was a Ghostbusters VR experience, in 2016, in New York's Times Square, built by The Void. Visitors wore standalone VR backpacks. Check out the video. A more recent startup, Nomadic, calls this "tactile and walkable VR adventures."

A ground-breaking and emotionally resonant MR art installation was presented last year at the Tribeca Film Festival called Draw Me Close, an autobiographical story about the filmmaker as a child and his relationship with his terminally ill mother. The installation involved a live actress performing as the mother, captured live and appearing in the MR world, who would touch and hug the VR headset-clad visitor. See this video.

Non-Contact Haptics

Is it possible to feel something without any contact at all, beyond “very low resolution” wind? Can I feel a virtual ping pong ball hit a particular place on my shirt, or can I touch and control a virtual rotary knob? It would be nice, and there’s a lot of speculation (and hype) around non-contact haptics. So far, we’ve found two methods.

The first is the use of air vortex rings, a way to blow air out of a hole with impressively good aim and distance. Usually associated with smoke rings and “air bazookas,” air vortex rings are now a serious enough candidate for VR applications that they’re being studied by Microsoft Research.

The other non-contact haptic method uses a matrix of ultrasonic transducers. Ultrahaptics, a British startup, has developed a small array of 64 transducers that lets people "touch virtual objects in mid air." It's been making the rounds at VR events and is impressive. I've tried it, and the feeling is hard to describe: when you hold your hand above the matrix, you can definitely feel something, though it's more akin to a tingle, yet you can also definitely discern a sphere from a pyramid from a cylinder. As the video shows, there's a good future for virtual knobs and other controls where not much force is needed.

ultrahaptics.com
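The underlying principle is phased-array focusing: drive each transducer with a phase offset that compensates for its distance to the focal point, so all the wavefronts arrive there in phase. Here's a minimal sketch; the 8-by-8 grid, spacing, and 40 kHz frequency are illustrative assumptions, not Ultrahaptics' actual specifications.

```python
# Minimal sketch of phased-array focusing for mid-air ultrasonic haptics.
# Grid size, pitch, and frequency are illustrative assumptions.

import math

SPEED_OF_SOUND = 343.0  # m/s in air
FREQUENCY = 40_000.0    # Hz, a common ultrasonic transducer frequency
PITCH = 0.01            # m between transducers in an assumed 8x8 grid

def focusing_phases(focus_x, focus_y, focus_z):
    """Per-transducer emission phase (radians) that focuses at one point."""
    wavelength = SPEED_OF_SOUND / FREQUENCY
    phases = []
    for row in range(8):
        for col in range(8):
            tx, ty = col * PITCH, row * PITCH
            dist = math.sqrt((focus_x - tx) ** 2 + (focus_y - ty) ** 2 + focus_z ** 2)
            # Emit "early" by the propagation phase so arrivals line up in phase.
            phases.append((-2 * math.pi * dist / wavelength) % (2 * math.pi))
    return phases

# Example: focus a tactile point 20 cm above the center of the array.
print(focusing_phases(0.035, 0.035, 0.20)[:4])
```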

Smell & Taste

I’m combining these two senses into a single section partly because they’re so similar, relying mostly on chemistry, and partly because there’s not much that’s truly new and revolutionary.

Smell-O-Vision & the Food Simulator

"Smell-O-Vision," specially equipped movie theaters with aroma-emitting machines, really did exist (1960), as did AromaRama, Odorama, and Aroma-Scope, all with custom-produced movies for the experience. The biggest problem was evacuating the aromas as fast as they were emitted, and allegedly this was the single biggest reason home "aroma players" never caught on. I once heard a radio interview with the maker of such a product, a CD-player-like device that flopped; when asked why, he replied, "people think they'd like to have the smell of 'baked bread,' but after a few minutes they can no longer stand it."

One solution is to wear small "scent release devices" around one's neck. Los Angeles-based start-up RemniScent makes small wireless modules loaded with chemical-based scent filters.

Targeted Scents, Dan Novy, MIT Media Lab, 2018

Another solution, recently demonstrated by MIT Media Lab PhD student Dan Novy, is to “target” scents across the room using a vortex generator.

On the one hand, all of these methods work, in the sense of engaging smell as a sensory input.

On the other hand, the technology remains entirely chemical, not electronic or digital, so each scent requires its own dedicated vial.

The same is essentially true for the sense of taste, at least for the foreseeable future.

The Food Simulator, Hiroo Iwata, University of Tsukuba, 2003

In 2003, the Food Simulator premiered at the Siggraph Tomorrow's Reality Gallery in the LA Convention Center. Participants were asked to put in their mouth a gauze-covered electro-mechanical device with a thin plastic hose attached. Biting down triggered the device to quickly contract while squirting a food-flavored chemical into the participant's mouth. While many found it "novel" (or worse), virtually no one could make the leap between this device and virtual food. To its credit, the lead inventor, University of Tsukuba professor Hiroo Iwata, is perhaps the most prolific exhibitor of edgy haptic devices.

Meta Cookie, Takuji Narumi, University of Tokyo, 2010

In 2010, Meta Cookie combined headset-based VR, a "scent helmet," and visually coded "plain cookies" to make "augmented gustation," whereby the user sees an augmented reality cookie, chocolate for example, while breathing chocolate scent. A seemingly more serious, well-designed endeavor, Project Nourished, purports to use similar tech for weight loss, allergy and diabetic management, eating therapy, and remote dining.

Like smell, taste experiments are still primarily based on chemicals. Whether the way smell affects taste can be usefully exploited is worthy of further study. And there's a trickle of research on electrical stimulation of the tongue to produce the sensation of sweet and salty, and of the jaw to produce the sensation of chewing.

Virtual sweetness, Nimesha Ranasinghe and Ellen Yi-Luen Do, National University of Singapore, 2016

It's often hard to tell what's legit, what's irony, and what's a design exercise. In 2013, artists Miriam Simun and Miriam Songster exhibited Ghost Food, a stark post-global-warming food truck. My San Francisco Art Institute students exhibited EAT, virtual dining via projection mapping as a commentary on consumption, and Virtuality, Inc., a total VR hoax, which even included the dry-cracker-to-strawberry-cream-pie trick.

Mind

I'm using "Mind" to mean any sensory input not coming in through the known five senses. This may be related to, but as I understand it is different from, Ayatana, the Buddhist belief in Mind as the sixth sense.

ESP, Brainstorm (the movie), & Science

ESP, extrasensory perception, was coined by Duke University Professor J.B. Rhine "to denote psychic abilities such as intuition, telepathy, psychometry, clairaudience, and clairvoyance, and their trans-temporal operation as precognition or retrocognition." While many (actually most) people polled believe that some form of ESP or psychic phenomena exists, so far it's been impossible to reliably replicate these phenomena. And it's not like people haven't been trying, including the military. We even had a "psi research program" at Paul Allen's Interval Research Corporation in Palo Alto in the 1990s.

For this session, we're only focusing on "mind as sensor": can the mind "read" other minds or "see" something far away without any other sensory input? Next week we'll address "mind as effector," which, it turns out, is a very different thing.

Even — especially — the best ESP or psi researchers will be the first to say that if these phenomena exist, they are very weak, inconsistent, unpredictable forces.

The 1983 movie Brainstorm, directed by the (thrice) aforementioned Douglas Trumbull, has perhaps the most technically believable premise, that a “brain helmet” can serve as a total input/output device, recording and playing back full sensory human experiences via something like video tape. The teaser gives an idea of the tech (and please forgive the early 1980s styles).

Using a brain helmet as input, to inject signals containing previously recorded human experiences into the brain, is simply not on the horizon.

Using a brain helmet as output, to extract signals containing human experiences from the brain, or as an input device — maybe. Next week . . .

Hacks

So far I've only been able to find two solid, repeatable, ESP-like phenomena. Neither affects the brain itself, but both use electrodes rather than eyes, ears, nose, or tongue to "get in."

The first was the “Phosphotron” by Oakland, California video artist Steve Beck in the 1980s. Phosphenes are the dots, blurs, and scintillations of light we see when we gently rub or put pressure on our closed eyes. “Seeing stars” after a sneeze or standing up quickly are also phosphenes. Phosphenes are real, probably having something to do with stimulation of the retinal ganglion cells, but it’s not exactly clear how. In addition to pressure, phosphenes can be generated by mild electrical stimulation to the scalp, like around the temples.

Steve Beck’s Phosphotron, from www.vasulka.org

Beck’s Phosphotron was such a device. Participants were fitted with electrodes on their temples, usually as a group sitting in a circle, and Beck would “play” the Phosphotron, sending electrical signals of different shapes, sizes, and frequencies. Almost everyone saw something. Someone might say “white dots” and another “yeah” and another “I see blue lines.” The Phosphotron was far from a controllable image-making device, but it worked, in that people saw something with their eyes closed.

The other electrode-based sensory phenomenon was "Shaking The World" by Taro Maeda and his team at Japan's NTT Communication Science Laboratories. It used electrodes behind each ear to produce "galvanic vestibular stimulation," which results in "lateral virtual acceleration," a fancy way of saying it can make you sway left and right as you walk. Shaking The World was exhibited in the juried Siggraph 2005 Emerging Technologies show, where I experienced it first hand. After signing a liability waiver, two participants were wired with electrodes and asked to walk side by side along an open space in the exhibition hall while the "operator," in plain sight, sent control signals. No matter how hard everyone tried to resist, they would uncontrollably sway left and right as they walked, even more dramatically with two people at a time. The video is a must-see. Proposed applications for galvanic vestibular stimulation ranged from VR to automatic collision avoidance to pedestrian flow control.

Issues

When is Suggestion Good Enough?

When is the suggestion of a knob via ultrasonic transducers, or the suggestion of getting hit by an arrow via a vibrator on your chest, or the suggestion of eating a chocolate cookie via chocolate aroma with a tofu cookie, good enough?

Personally, I haven’t a clue. We certainly know that low picture resolution in VR is bad (though not necessarily bad in painting), and if virtual ultrasonic knobs work, as can be empirically measured, we have some data points. But this is an entirely open and exciting area of exploration, largely because it’s so unpredictable and counterintuitive.

My personal favorite example took place in Jon Barnes' Ultimate Taxi in Aspen, Colorado. His taxi is pretty ultimate, with live music, laser light shows, and magic, all while Jon Barnes drives slowly through the picturesque streets of Aspen. Once, during winter in the middle of an empty street, he simultaneously veered left, hit the brakes, exclaimed "Whoa!," and pressed a hidden button under the dashboard that sent a loud tire-screeching sound through his sound system. The screech added so impressively to the sense of "being there," more than anything else I've experienced.

Synaesthesia

Synaesthesia is defined as a "perceptual phenomenon in which stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway." It's most often associated with different senses rather than pathways, such as "seeing color" or "tasting sound," and I've had my own good experience using sound to enhance haptic force (read #6).

Perhaps the most provocative, and repeatable, synaesthesia experiment is called the rubber hand illusion. A subject is seated with one hand out of view and replaced by a rubber hand in view. Both the human hand and the rubber hand are stroked by an operator in sync. Very quickly, the subject perceives the rubber hand as being her own. The rubber hand illusion can even be performed with full disclosure and transparency and it still works. And it’s also been shown to be affected by emotional audio cues.

Rubber Hand Illusion from parsingscience.org

The bigger idea here is called the body transfer illusion, the illusion of "owning either a part of a body or an entire body other than one's own." Researchers have studied using VR to understand first-person perspective changes, for example, when "transferring" from a woman's body to a man's.

It’s noteworthy that the word “feeling” applies both to touch, in the haptic sense, and to emotion. Smell and taste are not far behind, as particularly deep, emotional, limbic triggers.

So I hope we all now have a bit of an understanding of how essential the non-audiovisual senses are to human experience and what the challenges are for bringing them into VR and AR. With a little more experimentation and creativity and a little less hype, VR and AR may indeed become empathy machines.

Thanks to Jiayi Wang for the module sketches.

