Creating a virtual digital environment is normally a time-consuming process, as anyone involved in online games knows very well.
Every detail, 3D shapes, lighting, textures and so on, requires special processing and a powerful graphics processing chip.
Games with cutting-edge graphics require “armies” of developers working for years. That may no longer be necessary, however, thanks to a powerful new artificial-intelligence algorithm that can produce photorealistic details of a scene on the spot.
According to MIT News, this software, created by Nvidia, will not simply make life easier for software developers; it could also be used to automatically create environments for virtual reality, or to train autonomous vehicles and robots.
“We can create new sketches that have never been seen before and render them,” said Bryan Catanzaro, vice president of applied deep learning at Nvidia. “Basically, we are teaching the model how to paint based on real video,” he said.
Nvidia researchers used a standard machine-learning approach to identify the different objects in a video scene: cars, trees, buildings and so on. They then used what is known as a generative adversarial network (GAN) to train a computer to fill in realistic imagery. From there, the system can take the outline of a scene, showing where the different objects are, and complete the details.
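To illustrate the general idea, a conditional GAN of this kind can be thought of as a generator that takes a semantic label map, where every pixel is tagged as car, tree, building and so on, and outputs an RGB image, while a discriminator judges how realistic that image looks given the same label map. The sketch below is a minimal PyTorch illustration of that setup; the class count, layer sizes and training setup are assumptions for demonstration, not Nvidia's published model.

```python
# Minimal sketch of a conditional GAN that maps a semantic label map to an RGB
# image. Layer sizes, class count and names are illustrative assumptions only.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # e.g. road, car, tree, building, sky, ... (assumed)

class Generator(nn.Module):
    """Turns a one-hot semantic layout (N, NUM_CLASSES, H, W) into an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, label_map):
        return self.net(label_map)

class Discriminator(nn.Module):
    """Scores how realistic an image looks, conditioned on its label map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # patch-level real/fake logits
        )

    def forward(self, label_map, image):
        return self.net(torch.cat([label_map, image], dim=1))

if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    layout = torch.zeros(1, NUM_CLASSES, 64, 64)  # toy semantic layout
    layout[:, 0] = 1.0                            # every pixel labelled class 0
    fake = g(layout)                              # generated image for that layout
    score = d(layout, fake)                       # discriminator's judgement
    print(fake.shape, score.shape)                # (1, 3, 64, 64), (1, 1, 8, 8)
```

In training, the discriminator would be shown real video frames paired with their label maps alongside generated ones, which is what "teaching the model how to paint based on real video" refers to in practice.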
According to Catanzaro, the technique could simplify game design: beyond rendering entire scenes, it could be used to bring real people into video games by feeding the system a few minutes of real-world video of them. He also highlights the advantages it would offer for virtual reality, by creating realistic environments, and for providing synthetic training data for autonomous vehicles and robots.
Source: naftemporiki.gr