Fed a set of scene photographs and a rough 3D scene model generated automatically with the standard COLMAP software, the neural network can accurately render how the scene would look from any point of view. The system, developed at the University of Erlangen-Nuremberg, differs from previous approaches in that it can extract physical properties directly from still images.
A 3D world from two images
“We can change the position of the camera and thus get a new view of the object,” says project lead Darius Rückert. In principle, the system could build an explorable 3D world from just two images, but the result would not be very accurate.
“The more images you have, the better the quality. A model can’t create things it hasn’t seen,” Rückert told New Scientist.
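To see why changing the camera position yields a new view of the same geometry, consider the standard pinhole camera model. This is an illustrative sketch only, not the Erlangen-Nuremberg system: all poses and intrinsics below are made-up example values.

```python
import numpy as np

def project(point_world, R, t, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point to pixel coordinates for a pinhole camera
    with rotation R, translation t, focal length f, principal point (cx, cy)."""
    x, y, z = R @ point_world + t        # world -> camera coordinates
    return np.array([f * x / z + cx, f * y / z + cy])

point = np.array([0.0, 0.0, 5.0])        # a point 5 m in front of camera 1

# Camera 1: identity pose (at the origin, looking down +z).
R1, t1 = np.eye(3), np.zeros(3)

# Camera 2: the same camera shifted 1 m to the right, i.e. a new viewpoint.
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.0])

print(project(point, R1, t1))   # lands on the principal point: [320. 240.]
print(project(point, R2, t2))   # shifts left in the image:     [220. 240.]
```

Moving the camera re-projects every known 3D point to a new pixel; the neural network's job is to fill in the appearance that the sparse geometry alone cannot supply, which is why more input images improve quality.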
Even the simplest generated environments use between 300 and 350 images taken from different angles. The researchers aim to improve the system by simulating how light reflects off objects in the scene before reaching the camera, which would mean fewer photos are needed for an accurate 3D rendering.
Incomparable rendering quality
“So far, creating photorealistic images from 3D reconstructions has not been fully automated and has always had noticeable flaws,” says Tim Field, founder of Abound Labs, a company that makes 3D imaging software. The system still requires accurate 3D input data and does not yet work on moving objects, but “the rendering quality is unmatched.”
“This proves that automated photorealism is possible,” says Field, who believes the technology will soon be used to generate visual effects in movies and virtual reality. “It will speed up the rendering of computer-generated images using machine learning.”
By: Amber V. – Zexron