How Neuroscience Has Shaped the Metaverse

Media technology companies will decrypt the brain’s spatial memory networks with advanced AI to build more generalized machine intelligence

In 2005, neuroscientists in a laboratory basement at Stanford transformed their field by effectively digitizing brain network manipulation. The following year, Facebook, a quick bike ride down the road, would open its platform to the public and later wield AI technologies to sway its massive social network. Neuroscientists and computer scientists would soon converge their network-wielding tools to manipulate reality as we once knew it.

Social media companies have leveraged what neuroscientists have learned about penetrating the brain’s networks to fragment our concentration; precipitate addiction, depression, and anxiety; and manipulate social behavior with misinformation, as political uprisings have shown. Ironically, the “better connectivity” offered by media technologies compromises the brain’s own communication systems, giving rise to these issues.

Networks like Meta’s version of virtual reality, the Metaverse, and Apple’s Vision Pro create an immersive simulation between the user and the natural world, or an augmented version of it, to extract content from the user and better predict and manipulate their decision trajectories. By developing computational models based on the brain’s spatial navigation network, AI researchers at places like Google, Microsoft, Apple, and Facebook will exploit neuroscientists’ ability to infiltrate memory systems and develop new forecasting algorithms beyond the reward-based methods used today, methods that, as you’ll see below, have been massively successful against the dopamine reward system.

If research from DeepMind, which was acquired by Google several years ago, is any indicator, then neuroscience-inspired algorithms that combine spatial memory and dopamine-based reinforcement may be the future of social media, AI, tech, and most of our daily interactions. With such human cognition and behavior models, AI technologies can become more generalized in their predictive and problem-solving capabilities using the massive data sets gathered through the Metaverse and augmented realities.

This potential for exploitation could create more advanced and immersive AI based on how we naturally think, fundamentally changing the behavioral and cognitive relationship between humans and computers. Critical ethical and societal questions need to be addressed. Here, I aim to explain some of the mechanisms of the brain, AI, and social networks, and how they interact within the Metaverse and in applications like digital marketing and digital phenotyping.

Algorithms Bridged the Brain–Behavior Divide

Computer scientists and neuroscientists have a long and successful history of modeling reward-based learning. Media technology companies train their algorithms on these models to predict user actions.

 

Think of Pavlov’s dog. Ring a bell and pair it with food, and the dog learns that food comes whenever it hears a bell. It will also drool all over the place. Or, if you’re a fan of The Office (the U.S. adaptation), you may remember Dwight holding out his hand after Jim repeatedly gives him a mint as his computer’s operating system chimes while it shuts down. These are examples of reinforcement learning (RL).

 

RL models, grounded in behavioral neuroscience studies, were first developed in the 1970s, in part to address the gap between research on the brain and research on behavior: the technology of the time could not show how brain network activity produced learning. Over time, these models were elaborated upon. By the ‘80s, a significant improvement accounted for real-time learning and prediction (i.e., temporal difference learning), which paved the way for the algorithms that AI technologies now take advantage of.
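The temporal difference idea can be sketched in a few lines. The snippet below is an illustrative TD(0) value update on a toy chain of states where only the last transition is rewarded; it is a minimal sketch of the technique, not the proprietary models these companies train.

```python
# Illustrative TD(0) update: learn the value of each state in a simple
# chain where only reaching the final state delivers a reward.
def td0(episodes=500, n_states=5, alpha=0.1, gamma=0.9):
    values = [0.0] * n_states
    for _ in range(episodes):
        for state in range(n_states - 1):
            next_state = state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # TD error: how much the new observation disagrees with the estimate
            td_error = reward + gamma * values[next_state] - values[state]
            values[state] += alpha * td_error
    return values

print(td0())
```

Each update nudges a state’s value toward the reward plus the discounted value of the next state, so the prediction of the eventual reward propagates backward to earlier states over repeated episodes, without waiting for the final outcome.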

 

As neuroscience technology evolved and brain activity was linked more closely to behavior, neuroscientists found neural networks that matched the computational components of these RL models. The system showing the most direct overlap was the dopaminergic reward prediction network.

 

In the late ‘90s, computational neuroscientists discovered that when an unexpected reward is encountered, the dopamine-producing neurons deep within the brain fire nerve impulses that release the molecule, mostly into a brain region responsible for motivation and for the motor control that underlies habit formation. Remember Dwight reaching his hand out.

 

However, when the dopamine network is overstimulated, so too is the motor control and habit formation part of the brain (i.e., the basal ganglia). When a cue, like a bell or a computer chime, predicts a reward, the brain learns to release dopamine to the cue rather than to the reward itself. It’s not hard to see how your phone, an app icon, or some notification (or even boredom) gets you to habitually check your phone because there might be some unexpected comment, heart, or video. There is, in fact, a link between dopamine and social app use.
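This shift of the signal from reward to cue falls out of the same temporal difference math. The toy simulation below (a sketch, not a model of real neurons) treats the TD error as a stand-in for a dopamine burst and shows it migrating from reward delivery to cue onset with training.

```python
# Toy sketch of the classic reward-prediction-error finding: with repeated
# cue->reward pairings, the TD error (a stand-in for the dopamine burst)
# migrates from the reward to the predictive cue.
def train(trials=200, alpha=0.2, gamma=1.0):
    T = 3                     # t=0 cue onset, t=1 delay, t=2 reward delivery
    V = [0.0] * (T + 1)       # learned value of each timestep after cue onset
    for _ in range(trials):
        for t in range(T):
            r = 1.0 if t == T - 1 else 0.0
            delta = r + gamma * V[t + 1] - V[t]
            V[t] += alpha * delta
    # The cue arrives unpredictably, so the burst at cue onset is V[0] itself;
    # the burst at reward time is the leftover prediction error there.
    cue_burst = V[0]
    reward_burst = 1.0 + gamma * V[T] - V[T - 1]
    return cue_burst, reward_burst

# After training: large burst at the cue, near-zero burst at the reward
print(train())
```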

 

Media conglomerates use media technologies and RL models to manipulate users’ behavior, and their dopaminergic systems by proxy, getting them to respond habitually to cues for engagement. Researchers have quantitatively validated that these RL models are reflected in the behavior of social media users: people strategize their posts to maximize their digital treats.

 

RL-based algorithms increase engagement through the uncertainty of social reward. Though the direct effects of social media on mental health are still being researched, neuroscientists are confident that when high stress mixes with uncertainty, mental health issues become more likely.

 

Watching people being killed in your News Feed next to a highly filtered image of impossible beauty standards, witnessing the riot and attack on the U.S. Capitol, scrolling the barrage of return-to-work memes, seeing scarily precise ads for toothpaste when you visit your mom out of town, the collapse of the supply chain driven in part by pandemic-era U.S. over-consumption: all these things we engage with fuel dopamine’s computational mechanisms.

 

Still, these RL models are limited; not all human behavior and learning can be explained or predicted by reward reinforcement alone. For some time, researchers (including myself while at MIT) have shown that dopamine is also released in response to stress and to anything in the environment that is unexpected or stands out, and that it is important for navigation-based memory (something I’ve also studied).

 

To understand more fully how we learn and use the past to imagine potential futures, models must account for our environmental context and how we move through it, not just our pursuit of digital approval. By studying the brain regions that underlie these processes, computer scientists and neuroscientists can extend their algorithms beyond a rewards-based approach, better capturing human–media interactions and predicting how people navigate social networks, research that Google and Microsoft are currently conducting.

 

The Artificial∞Human Intelligence Decryption Loop

The Metaverse is creating a new context for users to enmesh with. RL models have successfully bridged the brain and behavior for 2D media. However, mimicking brain functions that create contextual space will be pivotal in guiding future AI to match the symbiotic complexity between humans and machines in a 3D space.

 

Neuroscientists have shown that the brain regions encoding the fundamental components of experiential memory—that is, spatially integrated context and navigation—are essential for quickly updating new information with prior knowledge and generalizing problem solving—critical elements in human intelligence that are missing in AI.

 

There are many types of memory, and different brain regions support their various forms. As discussed above, the basal ganglia are essential for motor control and habits, called procedural memory, but this system alone did not evolve to solve spatial problems. It is the hippocampus, in conjunction with other regions like the basal ganglia, that generalizes spatial solutions.

A rendering of the human hippocampi

The hippocampus builds our perception of context, for example, the room you're sitting in right now, the train you're riding on, the queue you're standing in, and is critical for our ability to generate spatial maps—a GPS of the mind, if you will. We navigate our contexts to build a mental map as we encounter new experiences. Through these processes, the hippocampus quickly integrates new information into its network and builds future world models, something we call imagination.

 

Envision walking through a museum. The hippocampus merges the visual-spatial modules of the white walls, open floor space, the social encounters between people, and the paintings, and unifies them to create the context you’re imbued with. Your emotions attach to visual cues as you travel through the museum. In the future, when you see or hear something that reminds you of this holistic event, you can rapidly incorporate novel data into your hippocampal network.

 

Neuroscientists have thoroughly analyzed the hippocampus and its sub-networks, quantified the amount of spatial data each nerve impulse carries (~1.8 bits), shown how these bits build the mind’s GPS system, and shown how they map visual cues on the walls or objects within contexts. These nerve impulses are thought to “replay” backward in time when we think about the past (i.e., remember), play forward when we imagine the future, and aid in abstracting general rules about the world.
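Figures like ~1.8 bits per impulse come from information-theoretic measures of place-cell firing. One standard metric from the place-cell literature is the Skaggs spatial information score, sketched below; this is a common formula for bits per spike, not necessarily the exact analysis behind the number quoted above.

```python
import math

def spatial_info_per_spike(occupancy, rates):
    """Skaggs information (bits/spike) a cell's firing carries about position.

    occupancy: probability of the animal being in each location (sums to 1)
    rates: the cell's mean firing rate in each location
    """
    mean_rate = sum(p * r for p, r in zip(occupancy, rates))
    info = 0.0
    for p, r in zip(occupancy, rates):
        if r > 0:
            info += p * (r / mean_rate) * math.log2(r / mean_rate)
    return info

# A cell that fires in only 1 of 4 equally visited locations carries
# log2(4) = 2 bits of spatial information per spike.
print(spatial_info_per_spike([0.25] * 4, [8.0, 0.0, 0.0, 0.0]))
```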

 

Google’s DeepMind co-founder has written extensively on the need to combine neuroscience and AI. His company is actively researching the hippocampus, using algorithms to determine how it quickly pulls new information into its memory network. They have trained their RL models to simulate virtual navigation and have combined spatial memory with RL models to show how the hippocampus predicts the future. Their research has also examined “replay” in generalizing knowledge. However, a missing component is the ability to quickly identify knowledge within complex networks without direct experience, something humans do through cognitive navigation, such as internal language or social strategies.

 

Although RL-based algorithms can outcompete humans in particular tasks, they are slow to learn and narrow in their problem-solving ability. In developing navigational models from the hippocampus and combining them with RL-based algorithms, AI could become more generalized and be incorporated into how the Metaverse network engages with its users.

 

Deep Marketing

Findings from DeepMind and other research entities could result in algorithms that alter the course of media-technology prediction and user interaction. Figuring out where a person may want to navigate to, what they may engage with, and how they decide what to purchase is what marketers want to know when they pay Meta to align users' attention and actions with their ads.

 

This type of marketing data is precisely the information that would be inferred through network effects, that is, extracted through the interactions between the user and the Metaverse. Interacting with this augmented reality will require algorithms that quickly adapt to the different contexts users enter, rapidly learn about them, and generalize across environments to predict what users will do.

 

The GPS of your mind will be turned inside-out and connected with other biometric data to support digital phenotyping. Digital phenotyping uses sensor information from your smartphone and wearables to capture moment-to-moment behavior to train predictive algorithms. With this biometric data and electronic health records, AI technologies can predict mood and psychiatric states like depression, anxiety, psychosis, and mania.
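At its core, digital phenotyping reduces to combining sensor-derived features into a predictive score. The sketch below is a deliberately toy example: the features and weights are hypothetical, invented for illustration; real systems are trained on labeled clinical data rather than hand-set coefficients.

```python
import math

def mood_risk(typing_speed_cps, screen_hours, steps_thousands):
    """Toy digital-phenotyping score: combine hypothetical sensor features
    into a logistic risk score in [0, 1]. Weights are made up for
    illustration, not derived from any clinical model."""
    z = (-0.8 * typing_speed_cps   # slower typing -> higher risk
         + 0.4 * screen_hours      # more screen time -> higher risk
         - 0.3 * steps_thousands   # more activity -> lower risk
         + 1.0)                    # bias term
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical day of sensor data: 3 chars/sec typing, 6 screen hours,
# 8,000 steps
print(round(mood_risk(3.0, 6.0, 8.0), 3))
```

The point is structural, not clinical: moment-to-moment behavioral features stream in from the phone, and a trained model maps them to a probability of a mood or psychiatric state.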

 

Already, facial expressions can accurately predict post-traumatic stress disorder (PTSD), depressive symptoms can be pulled from your texting speed, and your brain wave data can be inferred from your smartphone use; all of this information will be sewn into the fabric of your everyday habits.

An illustration depicting cloud computing, personal devices, and extraction to predict health outcomes.

Digital phenotyping and uploading personal health information to the cloud for analytic prediction of health states.

 

Although digital phenotyping is an essential advancement within psychiatry to widen care and enhance therapeutic outcomes, the technology can be used beyond its intended purpose. Corporations will press on emotional pressure points to get you to act the way they want and purchase what they are selling. Add in the potential for live facial recognition and eye-tracking from virtual reality headsets and Elon Musk’s vision for his Neuralink system, and things will get weird.

 

Combining our spatial memory information, RL reward behavior, and extracted biometric data, these new spatial-memory RL algorithms will usher in a new state of human–computer interaction: identifying the internal states of users, combining them with simulated social networks that reflect the user’s mind, and embodying the user within the network. The Metaverse would act more like a cybernetic organism, or an artificial parasite, depending on how bleak you want to take the analogy, that trains its algorithms on user data, altering the path of human perception and imagination.

 

Who Controls the Present, Controls the Past

Augmented and virtual reality will profoundly change human interaction, culture, and memory as the simulation of one replaces the biology of the other. As our attention moves from “in real life” through our screen and into the digital world, the Metaverse will act as the virtual neurotransmitter that regulates the trillions of connections between neurons. These connections, called synapses, mediate memory recall.

 

Meta has deployed RL-based algorithms based on the brain’s dopamine prediction network to maximize user engagement and sell ads. Overstimulating this network negatively affects our memory, contributes to mental health issues, and helps spread fake news.

 

Neuroscience is deepening its synthesis with AI research. With the Metaverse and augmented reality supplying the essential data for training cutting-edge models and network architectures, potentially including integrations of the brain’s hippocampal network, AI systems could gain supercharged learning, intelligence, and problem-solving prowess faster than political systems, infrastructure, or even the human mind can handle. As a result, we could see unprecedented power to predict user behavior fall into the hands of bad actors before these specific mechanisms are broadly understood.

 

There is a tremendous opportunity for AI to find new theories of the human mind that we may never discover on our own. However, given the track record of tech companies like Meta and Google (swaying election results, providing the NSA with direct server access to spy on Americans, and funding global misinformation), increased access to our minds by such corporations will not end well.

 

Like AI algorithms, human memory is deeply intertwined with creating models of our potential futures, that is, imagination. If tech companies base their AI technologies on our spatial and reward networks to predict and drive our cognition and behavior, they may overwhelmingly impact how we access our memories to imagine the future. As George Orwell aptly stated: Who controls the past controls the future.