When it comes to mobile photography, smartphones now pack more megapixels and larger sensors than ever. But these improvements only go so far in extreme low light or complete darkness, where digital noise is still a problem even for the best cameras. Google’s new AI tool could, however, change that forever.
Meet RawNeRF, part of the larger MultiNeRF project that uses NeRF (Neural Radiance Fields) to scan a collection of images and reconstruct a 3D render of the scene. While it isn’t quite what Tony Stark used to recreate a street explosion in Iron Man 3, the technology works in a similar manner and opens a number of possibilities.
This includes changing the camera position, exposure or even the focus after an image has been taken. RawNeRF “combines images taken from many different camera viewpoints to jointly denoise and reconstruct a scene,” says Google researcher Ben Mildenhall in a recently released video. The power of RawNeRF can be seen in the video below.
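At the heart of any NeRF-style method is volume rendering: a neural network predicts a color and a density at sample points along each camera ray, and those samples are alpha-composited into a single pixel. The sketch below shows only that compositing step in NumPy; the function name and simplifications are illustrative, not taken from the MultiNeRF code.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite samples along one camera ray (the core NeRF render step).

    colors:    (N, 3) RGB predicted at each sample point
    densities: (N,)   volume density at each sample point
    deltas:    (N,)   distance between consecutive samples
    """
    # Opacity contributed by each segment of the ray.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Weighted sum of sample colors gives the final pixel color.
    return (weights[:, None] * colors).sum(axis=0)
```

Because many rays from many photos see the same piece of the scene, the network is pushed toward a single consistent explanation of it, which is what averages the per-image noise away.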
RawNeRF can also handle scenes with a large dynamic range and, as you saw in the video above, it can shift focus, adjust tone mapping and exposure levels, and even change the viewing angle in such images.
The secret behind RawNeRF performing so well, not just at denoising but at changing most other key aspects of a photograph, lies in how the AI is trained. The tool is trained on RAW images rather than standard JPEGs. RAW images capture far more detail than processed photos, detail that can then be used to enhance the image in post-processing or, in this case, to train an AI tool.
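To see why the difference matters, consider what happens to deep shadows. A JPEG applies a tone curve and then quantizes to 8 bits, while a RAW file stores roughly linear sensor values at 12 to 14 bits. The toy example below (a deliberate oversimplification; real camera pipelines do far more) shows two shadow tones that a 14-bit RAW file keeps distinct but that collapse to the same 8-bit JPEG code:

```python
# Two deep-shadow tones that differ by ~13% in true linear brightness.
a, b = 0.00075, 0.00085

def jpeg_code(x):
    """Very simplified JPEG path: 1/2.2 gamma curve, then 8-bit quantization."""
    return round(x ** (1 / 2.2) * 255)

def raw_code(x):
    """Simplified 14-bit linear RAW quantization (no tone curve)."""
    return round(x * 16383)

print(jpeg_code(a), jpeg_code(b))  # 10 10 -> indistinguishable after JPEG
print(raw_code(a), raw_code(b))    # 12 14 -> still distinct in RAW
```

Once those tones have merged into one JPEG value, no amount of training can recover the difference, which is why feeding the model RAW data preserves exactly the shadow detail a low-light denoiser needs.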
Will RawNeRF come to our smartphone cameras in the near future? That is a tough question to answer right away. But the fact that an AI tool can now completely change how a photograph looks points the future of photography in a very promising direction.