
At SIGGRAPH, Nvidia pushes the boundaries of virtual worlds and digital humans

Nvidia Corp. took the stage today at SIGGRAPH, the ACM computer graphics and interactive techniques conference, in Los Angeles to announce advancements in the tools developers will use to create the metaverse and digital human avatars.

“The combination of artificial intelligence and computer graphics will power the metaverse, the next evolution of the internet,” said Jensen Huang, founder and CEO of Nvidia.

Updated Omniverse Platform

To give developers what they need to deliver the next stage of the metaverse, interconnected and immersive 3D virtual worlds analogous to what the web did for the internet, Nvidia is expanding Omniverse, its real-time 3D collaboration and creation platform, with a number of new tools.

The Omniverse expansion includes a number of AI-powered features that allow artists and developers to collaborate closely, create virtual worlds and content faster, and connect to third-party apps.

These include recently released upgrades to Omniverse Kit, a toolkit for creating native Omniverse extensions and apps, which now includes real-time physics simulation for particles, fabrics and other physical objects. That allows artists and engineers to quickly recreate objects and behavior from the real world and bring them into virtual worlds.

Also added is the AI tool Audio2Face, which creates facial animation directly from audio files and can even mimic emotional facial expressions realistically. This is an important element in making virtual avatars, the representations of people in virtual worlds known as “digital humans,” more realistic.
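To show the general shape of audio-driven facial animation, and not Nvidia’s Audio2Face model itself, here is a minimal conceptual sketch in PyTorch that maps per-frame audio features to facial blendshape weights. The model, its sizes and the audio file path are all hypothetical placeholders.

```python
# Conceptual sketch only: a tiny model mapping audio features to per-frame facial
# blendshape weights, in the spirit of audio-driven facial animation. This is NOT
# Nvidia's Audio2Face; all names, sizes and file paths here are illustrative.
import torch
import torch.nn as nn
import torchaudio

NUM_BLENDSHAPES = 52  # hypothetical size of the face rig's blendshape set

class AudioToBlendshapes(nn.Module):
    def __init__(self, n_mfcc=40, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mfcc, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_BLENDSHAPES), nn.Sigmoid(),  # weights in [0, 1]
        )

    def forward(self, mfcc_frames):       # (num_frames, n_mfcc)
        return self.net(mfcc_frames)      # (num_frames, NUM_BLENDSHAPES)

# Extract per-frame MFCC features from a speech clip (path is a placeholder).
waveform, sample_rate = torchaudio.load("speech.wav")
mfcc = torchaudio.transforms.MFCC(sample_rate=sample_rate, n_mfcc=40)(waveform)
frames = mfcc[0].T                        # (num_frames, 40)

model = AudioToBlendshapes()
blendshape_weights = model(frames)        # per-frame weights that could drive a face rig
print(blendshape_weights.shape)
```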

Nvidia also announced that its physics-based machine learning framework Modulus is now available as an Omniverse extension. Using Modulus, developers and engineers can apply machine learning models trained on real-world physics to create digital twins and simulate industrial metaverse applications, such as warehouses and robotic pipelines, so that they can be experienced safely.
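The core technique behind physics-based machine learning frameworks like Modulus is training a network against the governing equations themselves rather than labeled data. The sketch below, written in plain PyTorch rather than the Modulus API, shows that idea on a deliberately simple equation, du/dx = -u with u(0) = 1.

```python
# Minimal, generic physics-informed neural network (PINN) sketch in plain PyTorch.
# The network is trained to satisfy a governing equation (du/dx = -u, u(0) = 1)
# instead of fitting labeled data. This does not use the Modulus API itself.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.rand(128, 1) * 2.0).requires_grad_(True)   # collocation points in [0, 2]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u                                   # enforce du/dx = -u
    u0 = net(torch.zeros(1, 1))                            # enforce u(0) = 1
    loss = (residual ** 2).mean() + (u0 - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(x) approximates exp(-x) on [0, 2].
print(net(torch.tensor([[1.0]])).item())  # should be close to 0.3679
```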

AI and Neural Graphics Tools

To simplify and shorten the process of modeling the real world, Nvidia has released a number of software development kits based on its AI and neural graphics research, which allow an AI to learn from data as it receives it.

The newly released Kaolin Wisp is a companion to Kaolin, Nvidia’s library of PyTorch tools for 3D deep learning, and can allow engineers to implement new training models in days instead of weeks. In addition, NeuralVDB, an enhancement to OpenVDB, the industry standard for volumetric data storage, can significantly reduce the memory footprint of 3D model data using machine learning.
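As a rough illustration of what working with Kaolin looks like in practice, the sketch below loads a mesh and samples a point cloud from its surface for a downstream model. The asset filename is a placeholder, and the snippet is a sketch of typical usage based on Kaolin’s documented API, not code from Nvidia’s announcement.

```python
# Rough sketch of typical Kaolin usage: load a mesh and sample a surface point
# cloud that could feed a 3D deep learning model. The file path is a placeholder.
import torch
import kaolin

mesh = kaolin.io.obj.import_mesh("chair.obj")   # placeholder asset
vertices = mesh.vertices.unsqueeze(0)            # (1, V, 3) batch of one mesh
faces = mesh.faces                               # (F, 3) triangle indices

# Sample 2,048 points from the mesh surface.
points, face_idx = kaolin.ops.mesh.sample_points(vertices, faces, num_samples=2048)
print(points.shape)                              # torch.Size([1, 2048, 3])
```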

“Neural graphics blend AI and graphics, paving the way for a future graphics pipeline that lends itself to learning from data,” said Sanja Fidler, vice president of AI at Nvidia.

Nvidia also produces software that allows artists to recreate real-world objects simply by scanning them with a camera, using neural graphics to quickly capture the objects and their appearance. The software, called Instant NeRF, creates a 3D model of an object or scene from 2D images and allows artists and developers to import it directly into a virtual world or the metaverse.
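Instant NeRF builds on neural radiance fields, in which a neural network maps a 3D position to a color and a density, and the field is fit to posed 2D photos through volume rendering; Instant NeRF’s speed comes from swapping the classic sinusoidal position encoding for a multiresolution hash encoding. The sketch below is a generic NeRF-style field in PyTorch, not Nvidia’s implementation, included only to show the shape of the idea.

```python
# Generic NeRF-style neural field sketch in PyTorch: a network that maps a 3D
# position to an RGB color and a density. Real NeRF training fits this field to
# posed 2D photos via volume rendering; Instant NeRF replaces the sinusoidal
# encoding below with a much faster multiresolution hash encoding.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    # Map (N, 3) positions to sin/cos features at increasing frequencies.
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2 ** i) * x), torch.cos((2 ** i) * x)]
    return torch.cat(feats, dim=-1)               # (N, 3 + 3 * 2 * num_freqs)

class TinyRadianceField(nn.Module):
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 + 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB + density
        )

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz))
        rgb = torch.sigmoid(out[..., :3])          # colors in [0, 1]
        density = torch.relu(out[..., 3:])         # non-negative density
        return rgb, density

field = TinyRadianceField()
rgb, density = field(torch.rand(1024, 3))          # query 1,024 sample points
print(rgb.shape, density.shape)
```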

The rise of “digital humans”

Interacting in the metaverse will require more than just simulating physical objects such as cars, tables and chairs. The metaverse will also be populated by digital humans, the avatars of real-world people visiting it, who will want those virtual bodies to express themselves coherently.

There will also be AI-driven avatars in the metaverse that look and act like people and serve as virtual assistants or in customer service roles.

To bring them to life, Nvidia unveiled Omniverse Avatar Cloud Engine, a collection of AI models and services for creating and operating avatars within the metaverse. ACE’s tools will include everything from conversational AI to animation tools that sync an avatar’s mouth with its speech and make its expressions match the emotions being conveyed.

“With Omniverse ACE, developers can build, configure, and deploy their avatar app to any engine in any public or private cloud,” said Simon Yuen, director of graphics and AI at Nvidia.

The Avatar Cloud Engine includes technologies such as Audio2Face and Audio2Emotion, which will bring complex facial and body animation and more into the metaverse, allowing avatars to move and talk realistically.

“We want to democratize the creation of interactive avatars for each platform,” Yuen added.

The technology will be generally available in early 2023 and can run on embedded systems and all major cloud services.

Picture: Nvidia
