Hyperreal or Cartoonish: The Keys to Lifelike Identities in the Metaverse

by Dima Shvets, February 21st, 2023

Too Long; Didn't Read

Dima Shvets is co-founder and CEO of Reface. He says we need to understand how our virtual and digital identities are connected to the concept of hyperreality: an experience of digital (artificial) origin that imitates reality but is not perceived by the user as a game.


If you're wondering what the future of simulated reality might look like, watch Blade Runner 2049. The movie has one underrated character whose true significance is easy to miss behind the pretty picture: the idealized, portable hologram Joi, a custom avatar whose appearance can be changed in real time. In essence, Joi is an AI companion app that can be projected in 3D with the help of additional devices, and her photorealistic 3D model makes her seem real. Her only drawback is that she cannot be fully materialized in the real world.


However, her hyperreality means she is much more than just a simulation — for the movie’s protagonist, she's a real person. She’s his girlfriend. And, watching her in the movie, we also ignore her artificiality. Joi might be the best example of what many people would like to be in immersive spaces — perfect, endowed with the ability to create, and hyperrealistic.



Image source: Blade Runner 2049


But how do we get there? Virtual spaces that imitate reality already exist, but there is still no unified solution for how to transfer your personality to them and maintain a high-quality representation. It takes time, costs a lot of money, and, as a result, is not available to the average user.


Yet, achieving hyperrealistic virtual experiences — especially hyperrealistic avatars — is within our grasp. We just need a better understanding of how our virtual and digital identities are connected to the concept of hyperreality, how to create hyperrealistic versions of ourselves for the digital space, and why solving the problem goes beyond the high-quality visualization of objects and spaces.


What is Hyperreality?

We can define hyperreality as digital objects and experiences that are visually indistinguishable from physical reality. But because of their digital nature, these objects also have properties and capabilities that may not exist in the physical world. This definition applies to avatars and other representations of people (like Joi from Blade Runner 2049), as well as to any photorealistic digital content: movie stunts, games, and special effects. Hyperreality is more than just augmented reality that deceives the senses, because we don't feel the difference, or prefer not to.


For a better explanation, let's place hyperreality on the spectrum between real and artificial. This spectrum also includes augmented and virtual reality, each providing a different experience of human perception.


  • Virtual reality is a complete digital representation of an imaginary world that the user experiences as something self-contained, an alternative to the real world.

  • Augmented reality is a mixed reality that combines elements of the digital and real worlds. The user perceives it as an interaction with the digital world that deepens immersion and creates the effect of presence.

  • Hyperreality is an experience of digital (artificial) origin that imitates reality but is not perceived by the user as a game. On the contrary, it expands the perception of reality.


Hyperreality is Real Life, but Better


I believe hyperreality will be a key aspect of certain experiences in the metaverse and in the future internet overall. I agree with Tom Graham, CEO and co-founder of Metaphysic, who has argued that the metaverse needs hyperrealism because it allows more authentic and emotionally engaging experiences. This transition is necessary to open up a metaverse beyond gaming and entertainment, one that includes more ordinary daily affairs like visiting doctors or family gatherings.


For as long as the web has existed, people have imagined what a true virtual reality would look like. Almost 25 years ago, for example, sociologist Sherry Turkle described early virtual worlds using the term MUD (multi-user dungeon), spaces we would today call the metaverse: “MUDding is more than just a computer game; it is hyperreality in full force.”


And when a person finds themselves in a virtual space, how do they perceive that world? Who or what are they? “They are as real as my real life,” responded one of Turkle’s research subjects. “I can now have a portfolio of lives in which my real life is but one; RL [real life] is just one more window, and it's not usually my best one.”


Amazingly, people in the late 1990s described their sense of virtual space much as we do today. According to some research, people feel more included in the metaverse than in real life. The metaverse has the potential to fill in three core aspects of life: inspiration, individuality, and inclusion.


Transferring a digital copy of yourself into the virtual world and having everyday human experiences is the next breakthrough to be made on the way to building the metaverse.



In fact, many people believe that real-world experiences will eventually be replaced by virtual ones, and they are expecting a hyperreal future. One June 2022 survey found that two-thirds of respondents are excited about transitioning everyday activities to the metaverse, especially when it comes to connecting with people, exploring virtual worlds, and collaborating with remote colleagues.


However, any given environment will only feel hyperreal if it reflects the real world as much as possible. Not fictional and cartoonish, but real. Like a work Zoom call but in a 3D office with colleagues, or like a festival with friends but in an immersive 3D space. We already have gaming environments where the user can become anyone from a princess to a monster; there is nothing technologically complicated about them.


People, Places, and Things

The next challenge is learning how to recreate the real world and integrate ourselves into it. A major part of ourselves, of course, is our physical appearance. However, unlike with modern incarnations of social media, where users struggle with the need to look perfect or to look “real” (while, of course, still looking good), hyperreality should serve as a form of stress reduction. We should be able to modify our digital avatars without trying to make our literal selves seem perfect, whether we'd like to look like a celebrity or stay ourselves by preserving our personal features.


The ability to do whatever we want with our avatar is another key: grow fangs, choose any hair color, put on clothes we like, or even take on a zoomorphic shape. Ideally, virtuality is just the way we like it.


But our belongings also define us in real life, and they will in the metaverse as well. So we will strive to transfer our belongings into the digital world: our houses, cars, sneakers, or even favorite cups. Such services already exist. One metaverse agency, which specializes in creating hyperrealistic customized house models, allows clients to add digital assets, their favorite furniture, or art to make their virtual home a unique place.


Even better is if your avatars, belongings, and general digital identity are interoperable as you transition between worlds or environments. Some enabling technologies for this already exist, including Ready Player Me, which makes virtual identities interoperable across various platforms.


Between Static and Expressive Lies an Uncanny Valley

Achieving true hyperreality will be difficult, though, even as some of the fundamental capabilities begin to take shape. Solving all of its requirements and challenges, the technological ones as well as the business ones, is beyond the scope of any one company or field. The quest to perfect human graphics functions as a microcosm of the greater field.


When Meta presented its metaverse platform Horizon Worlds, for example, not everyone was satisfied with the quality, detailing, and realism of Mark Zuckerberg's cartoon avatar. In fact, most, if not all, current visualization attempts in the metaverse are far from hyperreal and cannot compare to those in, say, massively multiplayer online role-playing games. There's a simple reason for this: we don't yet know how to render the myriad human emotions and expressions that should be available in lifelike virtual environments, where avatars respond to unpredictable stimuli in real time.


Image source: Meta


However, there are a number of tools on the market for creating digital avatars, from cartoonish ones like the aforementioned Ready Player Me to the most hyperrealistic ones produced by dedicated character studios. These studios showcase the highest quality of hyperreal humans for the metaverse, led by Unreal Engine's latest MetaHuman release, which can import face scans of real people and automatically generate a digital face. Still, to fully reproduce someone's digital copy in 3D, you need to spend a lot of time fine-tuning skin color, hair, and other details, even if you're a pro designer.


I would compare these beautiful realistic digital humans to a very expensive car that is the best in its class but one you can only drive on holidays and on a straight, flat road. We won't be able to use the same tools and approach to create hyperrealistic digital humans for a fast-changing, dynamic environment as we do with static realism. It is still too heavy and complex to meet all the challenges of real-time rendering at scale within the metaverse.


Just recently, though, Meta's Reality Labs showed off its latest improvements in the Codec Avatars 2.0 project: prototype VR avatars built with advanced machine learning techniques. The new avatars look remarkably lifelike, but they are not the result of neural networks alone. For example, such quality still requires scanning with 3D cameras, which, for a number of reasons, are unlikely to reach mainstream adoption anytime soon.


Overall, the main hurdle is computation because the closer you get to hyperreality, the more power and time you need.


For example, to gain a 1% increase in realism, you need to do five times the computation.
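
To see what that rule of thumb implies, here is a quick back-of-the-envelope sketch in Python. The only input is the 5x-per-percentage-point figure quoted above; the function name and the specific gains printed are purely illustrative:

```python
# Illustrative arithmetic only: if each +1% of realism costs 5x the compute
# (the rule of thumb quoted above), the total cost compounds exponentially.
def relative_compute(realism_gain_pct: float, factor: float = 5.0) -> float:
    """Compute multiplier needed for a given realism gain over the baseline."""
    return factor ** realism_gain_pct

for gain in (1, 2, 5, 10):
    print(f"+{gain}% realism -> {relative_compute(gain):,.0f}x compute")
# +1% -> 5x, +2% -> 25x, +5% -> 3,125x, +10% -> 9,765,625x
```

Under that assumption, even a ten-point jump in realism would demand nearly ten-million-fold more computation, which is why optimization, not raw fidelity, is the real bottleneck.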


So the real challenge for companies in the metaverse is to solve the uncanny valley of facial representation: digital humans that look almost real, yet register as unnatural and creepy because something is just off.


The uncanny valley effect is even more noticeable when we're experiencing the metaverse through an immersive device like a VR headset. We can forgive some bugs in video game graphics on a flat screen, but trust me, you don't want to witness someone's leg separate from their body (unintentionally, at least) while using an XR headset, or see facial movements that lag behind speech. Therefore, the technology, from the network to the device itself, must be highly optimized to allow rendering on the device in real time. We have to find a workable tradeoff between user habits, device capabilities, and maintaining a hyperrealistic immersion effect.


Neural Rendering Shows Promise


When speaking about lower-quality but faster technological solutions for creating realistic 3D characters, we should look at the neural radiance fields (NeRF) method. The approach was introduced by researchers from Google Research and UC Berkeley in 2020 at the annual European Conference on Computer Vision. In 2022, a new wave of conversation about the technology followed NVIDIA's Instant NeRF, a neural rendering model that can turn multiple 2D images into 3D scenes without camera scanning. In contrast to classic polygonal modeling, neural rendering reproduces a 3D scene based solely on optics and linear algebra. According to NVIDIA, it can be used to “create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps.”
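
To make the idea more concrete, here is a minimal sketch of the volume-rendering step at the heart of NeRF, written in Python/NumPy. The densities and colors below are synthetic stand-ins for what a trained network would actually predict along a camera ray, so treat this as an illustration of the math rather than a working renderer:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample (density, color) pairs into one pixel color."""
    # alpha_i = 1 - exp(-sigma_i * delta_i): probability the ray is
    # absorbed within segment i.
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i: transmittance, probability the ray reaches segment i unoccluded.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = transmittance * alphas                # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)  # final RGB for the pixel

# Synthetic example: 64 samples along one ray; empty space for the first
# half, then a dense reddish "surface" the ray cannot pass through.
n = 64
densities = np.where(np.arange(n) >= 32, 8.0, 0.0)
colors = np.tile([0.9, 0.3, 0.2], (n, 1))
deltas = np.full(n, 0.05)                       # uniform step along the ray
print(render_ray(densities, colors, deltas))    # ~[0.9, 0.3, 0.2]
```

A real NeRF evaluates a neural network at hundreds of sample points per ray, for every pixel, which is exactly where the render-speed problem discussed below comes from.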


The main advantage of neural rendering is its scalability. So far, the models have taken a long time to train, and render speed remains the main barrier to entering the market, but AI is much more flexible in terms of optimization: neural networks just need more time and data to produce progressively better results. I think we'll soon see new ways to optimize and commoditize neural rendering so that anyone can take a few selfies on a smartphone and get a virtual 3D copy of themselves to participate in a hyperreal 3D space.




It is more complex technologically, but I believe there will be a large market for future virtual worlds that are hyperrealistic, in addition to those that are cartoonish or game-like. And although we are talking about a future that isn't quite here yet, machine learning is once again proving itself in a new field. It could play a significant role in creating the hyperreal magic of the metaverse.


If you want to explore the topic a little deeper, I made a list of interesting articles on digital identity creation, graphics evolution, neural rendering, and more:


  • For CGI geeks: a piece by Dan Sarto about how VFX expert Paul Lambert created the holographic Joi and the other cyberpunk visuals for Blade Runner 2049.


  • A solid explanation of digital face evolution: my article about how technology is changing our faces and why these transformations are needed today. For starters, it explores whether AI can create an image worthy of the trust of millions.


  • In-depth research from 2021, focused on improving virtual characters' realistic expressions and non-verbal communication channels to create a more customized experience.


  • This piece covers how William Wiebe rethinks avatars generated for the infrastructure of the metaverse, revealing its embedded preferences in the process.


  • A great article by my colleague and Reface co-founder Oles Petriv that explains the tech aspects of neural rendering on the path to instant creation of photorealistic 3D objects.


  • This one is more about solving AI in general and is almost off-topic, but while structuring my article, I enjoyed exploring DeepMind's proof-of-concept demonstration of deep reinforcement learning, which addresses one of the key challenges in AI research.

