The One Earth Model

Geographic Interoperability in the Real-World Metaverse

Michael Naimark
Dec 9, 2022
Alan Lomax’s Global Jukebox, 5,776 traditional songs representing 1,026 societies. [Assn for Cultural Equity]

In 2011, I gave a presentation at the MIT Media Lab called Place Representation and the One Earth Model, based on a 2009 class at NYU’s Interactive Telecommunications Program called Representing Earth. From the MIT presentation description:

Representing actual places presents unique opportunities and challenges because, unlike fantasy places, actual places share common frames of reference as indexed by latitude, longitude, altitude, and time. Thus, all real-world representations can cross-correlate into a singular, unimaginably giant Earth model, containing as much detail as is chosen to add. … The big players know it and the field is lively.

A major inspiration for this work was the work of ethnomusicologist Alan Lomax, who spent several decades collecting, codifying, and cross-correlating traditional songs, and later dance, from around the world. He amassed the world’s largest collection back when “thousands” was an unimaginable number, and developed what he believed was a “unifying theory of culture” from it. How influential was this work? Brian Eno: “Without Lomax it’s possible that there would have been no blues explosion, no R&B movement, no Beatles and no Stones and no Velvet Underground.” Possibly world-changing. But it didn’t come quickly or easily. Incredibly, Alan Lomax’s Global Jukebox finally launched publicly just last month.

Back in 2011, my claim that the big players knew it and the field was lively might have been a bit premature.

In 2019, on a somewhat more modest scale than a global jukebox, my NYU Shanghai VR/AR students made short VR experiences “On Shanghai,” each shot from a single spot and exploring “sense of place” through manipulation of image, sound, and time. They produced several 3D, 360-degree experiences of spots around Shanghai, but there was no common place to put them to correlate with other similar experiences. Also around that time, my NYU Shanghai Interactive Media Arts faculty colleagues and I made a map of IMA-related places around the world one evening over Indian food and beer, loosely indexed and organized. We ended up with almost 150 relevant places, but again, there was no common place to put them to correlate with other similar endeavors.

At that time, Google Earth was (and still is) the gold standard for giant 3D Earth models, and Street View was the gold standard for photo mapping. Consumer VR headsets were beginning to ramp up, but the most popular titles were, by far, fantasy-based video games. Onsite, in-person AR had made a splash with Pokémon Go, but it was mobile-based and not immersive.

Today, it’s very clear that all the big players know there’s value in giant Earth models, at least their giant Earth model, and the race is on, with Apple, Meta, Microsoft, Epic, and newer players like Niantic and Living Cities vying to explore and exploit them.

Anchored Representations

All forms of real-world-based representations — photos, videos, words, music, VR, and virtual objects — can be anchored spatially and temporally to spots or areas on Earth, like stakes in the ground, and cross-correlated with annotations, tags, and other metadata.
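To make the idea concrete, here is a minimal sketch, in Python, of what one anchored representation might look like; the field names, URL, and coordinates are hypothetical, not any platform’s actual schema:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Anchor:
    """One real-world representation staked to a spot on Earth (hypothetical schema)."""
    media_url: str          # photo, video, audio, VR scene, virtual object, ...
    lat: float              # latitude in degrees (WGS84)
    lon: float              # longitude in degrees (WGS84)
    alt_m: float            # altitude in meters
    timestamp: datetime     # when the representation was captured
    tags: list[str] = field(default_factory=list)  # annotations and other metadata

# Example: one of the "On Shanghai" spots (URL and coordinates are made up)
bund_vr = Anchor(
    media_url="https://example.org/on-shanghai/bund.mp4",
    lat=31.2397, lon=121.4903, alt_m=5.0,
    timestamp=datetime(2019, 5, 1, tzinfo=timezone.utc),
    tags=["VR", "360", "Shanghai", "sense of place"],
)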

Currently, almost all of the excitement with anchored representations is around making virtual objects to anchor in actual places for mobile AR games, but we can also find AR fantasy spectacles, art, protests, and historical re-enactments.

BeHere / 1942, an AR historical re-enactment about Japanese American Incarceration (2022). [Masaki Fujihata]

Soon, as the tools become easier and more accessible, we’ll see much more. Imagine standing anywhere on Earth — a popular spot or not — and seeing anchors to tour guides, personal videos, artworks, pranks, hidden treasures, and of course ads, as depicted in this popular parody video.

The amount of anchored material will be overwhelming. Will these representations sit on separate platforms, inaccessible from one another? How can they be filtered? Could they be represented on a single giant Earth model?
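One way to picture filtering, assuming the hypothetical Anchor sketch above: keep only the anchors within some radius of where you are standing, optionally matching a tag. A minimal sketch using the standard haversine great-circle distance (a real system would use a spatial index such as geohashes or S2/H3 cells rather than a linear scan):

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def nearby(anchors, lat, lon, radius_m=200, tag=None):
    """Keep anchors within radius_m of (lat, lon), optionally filtered by tag."""
    return [a for a in anchors
            if haversine_m(a.lat, a.lon, lat, lon) <= radius_m
            and (tag is None or tag in a.tags)]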

Remote Travel

For every real-world place where you can actually stand, there are billions of other people who can only be there remotely.

With a giant 3D Earth model viewable down to a ground-level view (important), one could “travel” from one anchored representation to another, “flying” like Superman or Superwoman around a city or around the planet from one spot to another.

Will it be “just like being there”? No. Let’s not fool ourselves: the smells, the food, the climate, the ground under our feet, and the live interactions will never be just like being there. “Just like being there” has nearly killed VR with its over-inflated expectations.

Remote travel, however, could be “the next best thing to being there,” not as a substitute, and not as some artificial way to commune with nature, but as a way to survey many more places than one could otherwise actually visit. Perhaps a few people will find a few places that truly resonate, and will have the passion and commitment to actually visit, whatever it takes.

👎 Just like being there.
👍 The next best thing to being there.

In 2006–8, I directed a project at USC called Viewfinder, which allowed users to spatially situate their photographs inside a 3D Earth model like Google Earth.

Family photo spatially aligned inside Google Earth. [USC Cinema Interactive Media & Games Division]

Users could fly, or more appropriately “bounce,” from one photo to another thanks to Google Earth’s graceful way of moving from one ground-level view to another. These bounces could be from one LA neighborhood to another or from LA to Timbuktu.
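Google Earth’s actual camera choreography is its own, but the heart of such a bounce can be sketched as interpolation along the great circle between two anchors. A toy Python sketch of the ground track only, ignoring the pull-up-and-descend altitude curve (coordinates are approximate):

from math import radians, degrees, sin, cos, asin, atan2, sqrt

def slerp_latlon(lat1, lon1, lat2, lon2, t):
    """Point a fraction t (0..1) of the way along the great circle from point 1 to point 2."""
    p1, l1, p2, l2 = map(radians, (lat1, lon1, lat2, lon2))
    # Convert both points to 3D unit vectors
    v1 = (cos(p1) * cos(l1), cos(p1) * sin(l1), sin(p1))
    v2 = (cos(p2) * cos(l2), cos(p2) * sin(l2), sin(p2))
    dot = sum(a * b for a, b in zip(v1, v2))
    omega = atan2(sqrt(max(0.0, 1.0 - dot * dot)), dot)  # angle between the points
    if omega == 0.0:
        return lat1, lon1
    w1, w2 = sin((1 - t) * omega) / sin(omega), sin(t * omega) / sin(omega)
    x, y, z = (w1 * a + w2 * b for a, b in zip(v1, v2))
    return degrees(asin(z)), degrees(atan2(y, x))

# Eleven waypoints on a "bounce" from Los Angeles to Timbuktu
la, timbuktu = (34.05, -118.24), (16.77, -3.01)
ground_track = [slerp_latlon(*la, *timbuktu, t / 10) for t in range(11)]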

Viewfinder five-minute demo video. [USC Cinema Interactive Media & Games Division]

During the course of the project, our team spent a great deal of time bouncing around Google Earth. We noticed that, after a long day of bouncing, in bed at night, we all felt the same bodily sensation one has after a day of skateboarding or a night of dancing. And that was viewing Google Earth on laptops. Imagine how much more visceral it is in a VR headset!

Mediated Presence

Whatever you might think of Mark Zuckerberg, he’s single-handedly set the frenzied pace of metaverse research and development, and his priorities are clear:

The defining quality of the metaverse is presence, which is this feeling that you’re really there with another person or in another place. (Mark Zuckerberg, Nov 2, 2021)

Though presence can mean a lot of different things to a lot of different people (like when your eyes tear up and mine don’t while watching the same movie together, or vice versa), a baseline approach that I’ve used for years is a mixing board analogy.

Mixing board analogy to subject and representation.

The sliders on the mixing board are all of the elements that affect our sensors and effectors. For example, one slider might be static foveal visual pixel resolution, another might be audio frequency response, and several might be haptic; others, going in the other direction, might be gaze, voice, and hand recognition and input. If we know what all of the sliders are, which most experts generally believe we do, and if we can turn all of the sliders up to “10,” which all experts agree is currently not possible (if ever), then, by definition, the representation will be indistinguishable from the subject.
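As a toy illustration of the analogy, and nothing more (the slider names and numbers below are made up, not a real presence metric), each slider can be treated as a value from 0 to 10, with the all-tens board being the point where representation and subject become indistinguishable:

MAX = 10  # "10" = indistinguishable from the subject on that dimension

# Made-up slider settings for some hypothetical headset
sliders = {
    "static foveal visual resolution": 6,
    "audio frequency response": 8,
    "haptics": 2,
    "gaze input": 5,
    "voice input": 7,
    "hand input": 6,
}

# How far each slider sits from the (currently unreachable) all-tens board
shortfall = {name: MAX - value for name, value in sliders.items()}
print(shortfall)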

(Anyone interested in a deep dive into this approach can read my six-part series on Medium, another series in English and Chinese at NYU, and an intensive workshop description on mediatedpresence.com, and can look out for an NYU Shanghai mini-course launching China-wide this spring.)

The tech giants are currently hedging their bets, making the metaverse available on non-immersive formats such as mobile, tablets, and laptops for practical reasons, but the eye, as Mr. Zuckerberg states, is on the prize of immersive, high-presence platforms.

It’s noteworthy today that almost all consumer AR is non-immersive and low-presence, on mobile phones and tablets, while almost all consumer VR is immersive and high-presence via headsets, but with a much, much smaller user base. So things are in a bit of an interim and confusing state, with the expectation that headset-based AR will become immersive, affordable, and usable outdoors, and that consumer VR will take off with widespread adoption as a product for the home, school, or office, possibly as Mixed Reality (MR) via small “pass-through” cameras in front.

Geographic Interoperability

The metaverse is widely portrayed as artificial worlds — for gathering, hanging out, fantasizing, being entertained, or working. These artificial worlds are constructed from scratch, professionally or by members of a community. Interoperability between these artificial worlds assumes that they’re all different, like different territories. Hence, interoperability is akin to territorial treaties.

But with the real-world metaverse, several parties can occupy the same physical space. Will the tech giants with their giant Earth models host different anchored representations? Will we need to toggle between Geo-TikTok, Geo-Instagram, and Geo-YouTube to see photos and videos of the same spot? Will third parties be able to aggregate representations around the same spots? Might this best be served by a nonprofit entity?

And can private property owners prevent us from virtually seizing their property for video games? What if, unlike Pokémon Go, the game is played entirely remotely? Can China or Russia prevent us from anchoring representations within their borders? And how could they, if we’re not actually there?

I don’t have answers (though I do have uniquely well-seasoned hunches). These questions need to be openly and collectively debated and resolved. The important point is that the real-world metaverse needs to be judged by different norms than the artificial-worlds metaverse, and that the real-world metaverse for on-location AR, VR, and MR needs to be judged by different norms than the real-world metaverse for remote AR, VR, and MR.

It would also be quite nice if all real-world metaverse representations could be universally accessed via a singular, unimaginably giant One Earth Model. It could possibly be world-changing.

Michael Naimark produces and advises on immersive and interactive media. He was an early advisor to Alan Lomax on the Global Jukebox project, where he remains an advisor-at-large.

Michael will be teaching “Designing XR Experiences,” a new class at UC Berkeley this spring.

For more information please visit tinyurl.com/michaelnaimark or linkedin.com/in/michael-naimark-0441/.
