How Facebook Turned the Metaverse Into a Buzzword

It all started with a cat and a baby.

In 1959, neurophysiologists David Hubel and Torsten Wiesel conducted an experiment on an anesthetized cat to better understand how the eyes and the brain interact when shown light and dark images at different angles. Their discovery of how the brain’s visual cortex processes simple and complex images is considered a critical cornerstone of AI and deep learning. Wiesel and Hubel won a Nobel Prize in 1981 for their work, which scientists say remains one of the most important developments in brain research to this day.

Two years earlier, in 1957, engineer Russell Kirsch developed a machine that could produce a digital version of an image. Using a model that converted images into a series of 1s and 0s for computers to process, he produced the very first digital image: a picture of his 3-month-old son. Kirsch’s invention of the digital scanner is regarded as a critical cornerstone of the field of computer vision and credited with enabling subsequent innovations in space exploration and CT scanning. Life Magazine would later name the picture of Kirsch’s son as one of the 100 most important pictures in history, alongside those capturing the collapse of the Twin Towers on 9/11.

Over the last six decades, advances in computing power, hardware and algorithms have made it possible for scientists and entrepreneurs to use those AI and computer vision foundations to solve a variety of real problems in the physical world. Today, innovations across fields — remote machine monitoring and repair in the industrial economy, medical imaging and disease diagnosis in healthcare, building design and development, simulated training environments for complex jobs, military intelligence and soldier readiness on the battlefield — can trace their roots to scientific accomplishments made more than six decades ago. These innovations solve real-world problems and generate profits for their developers.

And until October 28, 2021, no one used the term “metaverse” to describe the convergence of the physical and virtual worlds that these innovations and others like them enabled, which is exactly what they did.

On that day, Mark Zuckerberg rebranded Facebook, the then-17-year-old social network, as Meta Platforms.

Ever since, most of the hype, VC funding and corporate efforts to use advanced AI-powered visual technology to connect the physical and digital worlds have focused on getting people to give up the physical world to live in a virtual one called the metaverse.

This pivot has diverted attention, and most likely investments, away from innovators using sophisticated technology to harness the complementarities of the physical and digital worlds to improve how people and businesses live in the real one but interact in both.

And the pivot has co-opted the term metaverse, coined by author Neal Stephenson and later adopted by Tim Sweeney and Epic Games to describe a virtual ecosystem in which consumers engage purely for entertainment and social enjoyment, to mean living every aspect of life inside of one.

Down the Metaverse Rabbit Hole

Meta’s vision of the metaverse is straight out of Neal Stephenson’s 1992 sci-fi novel Snow Crash, in which avatars live — as in live their daily lives — in a connected virtual world to escape the real one. Zuckerberg narrated a film to explain how we would live in his.

In it, Zuckerberg describes a world in which VR, AR and AI will elevate social connections into an embodied internet that gives creators and contributors new ways to engage, via avatars, in a purely virtual world. Web3, the decentralized version of the internet, is the technological foundation for it, capturing the fascination and checkbooks of investors who bought into the notion of a digital economy governed by no one. Except, in the case of Meta, probably Meta.

Zuckerberg expects one billion people to live in — as in live in — the Meta metaverse by 2030 and spend “hundreds of real dollars” pretend living in a virtual world that Meta will enable, and profit from.

Meta’s late-2021 vision fueled a 2022 startup frenzy.

In 2022, $120 billion was plowed into metaverse companies, up from $57 billion in 2021 and $29 billion in 2020. Brands from banks and card networks to retailers and restaurants diverted marketing and R&D dollars into creating a presence in the metaverse, and virtual worlds like Decentraland got a boost as more people decided to give them a test drive. Described as owned by no one other than its users, Decentraland is a space where avatars can buy land, do their banking, eat at restaurants, buy and wear fancy clothes, meet and hang out with friends and watch fashion shows. The economies inside these virtual environments, including Decentraland, are powered by crypto. On that score, Decentraland has seen the value of its token, MANA, decline by 71% over the last 12 months as users lost interest and crypto crashed.

Meanwhile, since 2019, Meta has racked up more than $30 billion in operating losses to create its metaverse — not counting its investment in Oculus, which it bought for $2 billion in 2014, almost nine years ago. In 2022, Meta reportedly lost six times more money than it made from the metaverse as it tried to get users on board.

Undaunted, and despite Meta’s market cap sitting at a little more than half of what it was in 2021, Zuckerberg says the company will invest 20% of its revenues in the metaverse every year going forward.

There’s only one problem.

Most people — as in mostly all of them — don’t want to give up the physical world to live mainly via avatars in a virtual one, even though most — as in mostly all of them — very much want to use technology to improve their interactions with people and businesses in the physical world where they live right now.

The Convergence of the Digital and Physical Worlds

The launch of the smartphone and app stores in 2007 and 2008 created an entirely new way for the digital and physical worlds to interact, and for people to move between them, by providing the operating systems, software platforms and connectivity for users to engage anywhere, at any time.

The app economy that emerged over the 2010s inspired innovators to create the infrastructure and apps for a consumer-driven convergence of the digital and physical worlds. Innovators weren’t inspired to create apps that asked users to abandon the physical world, but to use technology to create more convenient ways to live in it.

The launch of Uber in 2009 was one of the earliest and most novel use cases of the physical-meets-digital-world phenomenon. Using a smartphone and an app (digital) to hail a car service (physical) that used embedded payment credentials (digital) to automatically charge the cost of the ride once the destination was reached (physical) eliminated the hassle of wondering whether a taxi was available or would show up on time. Uber also birthed the “invisible payments” experience that many still struggle to replicate almost 14 years later.

Throughout that decade, advances in hardware, cloud computing, data and AI spawned a new wave of innovation that would pave the way for more sophisticated AR/VR experiences using connected devices and apps. Early use cases of AR and VR technology were primitive by today’s standards, but they gave early adopters a peek into the future — previewing how a new sofa might look in their living room or how makeup or lipstick might look before making a purchase. Today, apps using AI and visual technology help consumers see how glasses might look on their face, and even take an eye exam, before buying them.

Tonal and Peloton may both be victims of a change in post-pandemic consumer fitness preferences, but their technology makes it possible for a consumer to have an immersive fitness experience wherever they are in the world. Apps and streaming tech make it possible for consumers to shop live with influencers, and to bring shoppers and sales associates together, live, to review merchandise and buy it without being in a store. At the same time, innovations in AI allow brands to deliver personalized offers or recommendations in context and in real time as consumers browse a site, helping those brands convert more sales.

Yet-unpublished PYMNTS research finds that more than 75% of U.S. consumers and nearly 90% of millennials have used voice technology to complete an average of eight tasks over the last year, including making purchases. Generative AI models and voice, combined with visual applications, have the potential to turn a series of discrete activities into one seamless experience. Smart, voice-enabled virtual assistants capable of handling increasingly complex tasks will be commercially viable within a few years.

Healthcare has the potential to be transformed as the combination of miniaturized, connected medical devices and AI-powered telehealth applications gives doctors a way to monitor and diagnose patients without requiring a trip to the office. Powerful AI and imaging models assist doctors in identifying serious medical conditions in near-real time and in using those precise diagnoses to better tailor treatment plans.

Generative AI models will democratize training by making it accessible remotely — particularly critical as a technology-driven workforce requires reskilling and upskilling. And these models will make it possible for everyone to have the benefit of the most current information, regardless of where they happen to be.

None of these use cases, nor the many more that will evolve in the years to come, requires people to give up the real world to live in a fake one, or even to live in a fake one at all. But all of them improve the quality and efficiency of the outcomes that real people and real businesses can achieve for the problems they face in the real world right now.

A Connected World Powering a ConnectedEconomy™

People — all people, everywhere in the world — like living in a connected world where little more than an app and a smartphone or other connected device can open the door to a connected economy in which they can participate. Tens of millions of connected devices now power a connected world in which the best of the physical and the best of the digital worlds converge to add value to people and businesses. Just as it was impossible for Hubel, Wiesel or Kirsch to know how innovators would use their scientific and technological breakthroughs decades into the future, it is impossible for anyone to imagine how entrepreneurs and business leaders will use breakthroughs in AI and visual technology to improve our world.

What we do know is that humans don’t have to give up the physical world to benefit from what they will create. But maybe we all have to give up talking so much about the metaverse as the stepping stone to this vibrant connected economy in order for this powerfully exciting connected world, powered by the imaginations of these innovators, to reach its fullest potential. And maybe entrepreneurs and investors have to stop chasing the metaverse rabbit, a chase that will likely yield few popular use cases and little return for their efforts.