Nvidia announces new AI-powered metaverse tools at SIGGRAPH

Nvidia is betting on the metaverse. At this year’s SIGGRAPH, an annual computer graphics conference, Nvidia announced a series of new metaverse initiatives. These include the launch of Omniverse Avatar Cloud Engine (ACE), a “suite of cloud-native AI models and services” for creating 3D avatars; new neural graphics SDKs, such as NeuralVDB; plans to evolve Universal Scene Description (USD), an open source file format for representing 3D scenes; and various other updates to its Omniverse platform.
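
For readers who haven't worked with USD: it describes a 3D scene as a hierarchy of “prims” stored in layered files, either human-readable (.usda) or binary (.usdc). The snippet below, written against Pixar's open-source usd-core Python bindings, builds a trivial one-sphere scene; it's a minimal sketch of what the format looks like in practice, not anything from Nvidia's announcements.

```python
# Minimal USD scene via Pixar's open-source Python bindings
# (pip install usd-core). Illustrative sketch only.
from pxr import Usd, UsdGeom

# A stage is the composed view of one or more USD layers (files).
stage = Usd.Stage.CreateNew("hello_world.usda")

# Prims form the scene hierarchy; here, a transform holding a sphere.
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()  # writes human-readable .usda text
```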

This year’s SIGGRAPH will “likely go down in history,” Rev Lebaredian, Nvidia’s vice president of Omniverse and Simulation Technology, said at a press briefing. He believes 2022 will be the biggest inflection point for the computer graphics industry since 1993, the year the movie “Jurassic Park” was released. The World Wide Web and Nvidia also launched in 1993, he added.

“What we are seeing is the start of a new internet era,” Lebaredian continued, “the one usually called the ‘metaverse.’ It’s a 3D overlay of the existing internet – the existing two-dimensional web – and it turns out that the foundational technologies needed to power this new era of the internet are all things the folks at SIGGRAPH have been working on for decades.”

Yes indeed, 1993 was a huge inflection point for computing and digital graphics. But will the metaverse, still just a concept in 2022, ever match the impact of the web? It’s impossible to say, since so far we’ve only seen foundational technologies (like USD) emerge. There is no real “metaverse” yet – just a lot of talk about building one.

Lebaredian later admitted in the briefing that Nvidia is a “tools company, ultimately,” and so it will be up to others to do the work necessary to develop the metaverse. That said, the tooling it announced looks promising.

Neural Graphics

Nvidia is best known for its graphics processing units (GPUs), but most of today’s metaverse announcements are AI-based, or what the company calls “neural graphics.”

“Graphics is truly reinventing itself with AI, leading to significant advancements in this area,” said Sanja Fidler, vice president of AI Research, during the briefing.

Nvidia defines neural graphics as “a new field that blends AI and graphics to create an accelerated graphics pipeline that learns from data.” The pipeline is shown in the diagram below, which Fidler says will be used “to simulate and render a dynamic virtual world.”

Nvidia Neural Graphics Pipeline

Developers can access this functionality through various neural graphics SDKs, including the new releases NeuralVDB (an update to the OpenVDB industry standard) and Kaolin Wisp (a PyTorch library that aims to be a framework for neural fields research).
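
To make “neural field” concrete: the term covers representations where a small neural network maps spatial coordinates to scene properties such as color and density. The toy PyTorch module below sketches the idea; the class name, layer sizes, and encoding frequencies are illustrative assumptions, not Kaolin Wisp's actual API.

```python
import torch
import torch.nn as nn

class ToyNeuralField(nn.Module):
    """Toy neural field: maps a 3D point to (RGB, density).
    Illustrative only -- not Kaolin Wisp's actual API."""
    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # raw xyz + sin/cos encodings
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz: torch.Tensor):
        # Positional encoding lets the MLP capture high-frequency detail.
        feats = [xyz]
        for i in range(self.num_freqs):
            feats += [torch.sin(2.0**i * xyz), torch.cos(2.0**i * xyz)]
        out = self.mlp(torch.cat(feats, dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        density = torch.relu(out[..., 3:])  # non-negative opacity
        return rgb, density

field = ToyNeuralField()
rgb, density = field(torch.rand(1024, 3))  # query 1024 random points
```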

Fidler explained that easy 3D content creation will be critical to user adoption of the metaverse. “We have to put things in the virtual world,” she said, “and we’re going to have many, many virtual worlds. Maybe each of us wants to create our own virtual world [and] we want to fill it with interesting, diverse, and realistic content – or maybe even not-so-realistic, but interesting content.”

The idea, then, is that neural graphics will help creators produce that “interesting content” for the metaverse.

“We believe AI is existential for creating 3D content, especially for the metaverse,” Fidler said. “We just don’t have enough experts to create all the content we need for the metaverse.”

One example application is integrating digitized 2D photography into virtual reality. Although this is already possible, Fidler said it “was somewhat cumbersome for the artists – they had to use a lot of different tools and it was kind of slow.” Nvidia’s new “neural reconstruction” process, she said, turns this into “one unified framework.” She pointed to a tool called Instant NeRF, which does just that (NeRF stands for “neural radiance fields”).
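
Roughly, a NeRF recovers a 3D scene from 2D photos by optimizing a neural field so that volume-rendered rays reproduce the pixels of the input images. The sketch below shows the standard compositing step that turns per-sample colors and densities along a camera ray into one pixel color; it is a schematic of the usual NeRF rendering equation (with uniformly spaced samples assumed), not Instant NeRF's implementation.

```python
import torch

def composite_along_ray(rgb: torch.Tensor, density: torch.Tensor,
                        delta: float = 0.01) -> torch.Tensor:
    """Standard NeRF-style compositing of samples along one camera ray.
    rgb: (num_samples, 3) colors, density: (num_samples,) opacities.
    Schematic only -- assumes uniformly spaced samples `delta` apart."""
    # alpha_i = 1 - exp(-sigma_i * delta): how opaque each segment is.
    alpha = 1.0 - torch.exp(-density * delta)
    # T_i: transmittance, the fraction of light surviving to sample i.
    ones = torch.ones_like(alpha[:1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans                     # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)  # final pixel color

pixel = composite_along_ray(torch.rand(64, 3), torch.rand(64))
```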

Fidler even hinted that neural graphics would allow social media users – not just artists – to easily create 3D content based on photographs. Certainly, if the metaverse is to take off like the web did in the early 2000s, then ordinary users will need to be able to “write” as well as “read” 3D content.

Avatar Cloud Engine

Perhaps the most intriguing tool is Omniverse Avatar Cloud Engine (ACE), a new AI-assisted 3D avatar builder that will be available “early next year” – including on “all major cloud services.”

If everyday people are going to use the metaverse as much as they use the web today, they’ll need easy ways to create custom avatars. Beyond that, Nvidia claims that ACE will be able to create autonomous “virtual assistants and digital humans.”

“ACE combines many sophisticated artificial intelligence technologies, allowing developers to create digital assistants that are well on their way to passing the Turing test,” Lebaredian said.

Omniverse Avatar Cloud Engine (ACE)

ACE is built on Nvidia’s Unified Compute Framework (UCF), which will be available to developers in “late 2022.”

Lebaredian added that ACE is “graphics engine agnostic”, meaning it can “connect to virtually any engine you choose to render avatars”.

Modern Tools for the Metaverse

In addition to neural graphics and ACE, Nvidia released a new version of Omniverse at SIGGRAPH, which CEO Jensen Huang described as “a USD platform, a toolkit for building metaverse applications, and a compute engine to run virtual worlds.”

It remains to be seen how many 3D artists and developers – not to mention consumers – will embrace Nvidia’s latest collection of 3D graphics and AI tools. But just as the web needed graphics tool companies (like Adobe and Macromedia) to emerge in the 1990s, the metaverse will need tool vendors too. Nvidia is positioning itself to fill that role.