
Google has developed a special AI capable of lighting your photos in a way that is simply amazing

Google is preparing an artificial intelligence that can remove noise from images without losing detail or quality.

The relationship between mobile phones and night photography is anything but pleasant. It is true that having a camera in your pocket is a great advantage and one of the coolest things technology has given us in the last 20 years, but the truth is that shooting in low-light conditions does not always produce ideal results.

In fact, if we take a picture in the street without natural light, it is normal for the image sensor to generate a lot of electronic noise. There are many ways to reduce it, but most of them gain clarity in the photo by sacrificing detail, which is the norm today for many smartphones. Nevertheless, Google is training an AI that promises to eliminate noise without losing detail.

Detailed and pristine low-light images thanks to the Big G

Google is preparing the ultimate solution to noise in low-light photos

That, at least, is Mountain View’s idea on paper. To that end, it has launched an open-source project known as MultiNeRF, as reported by PetaPixel. Since digital noise and its consequences are still big problems for engineers to work on, Google wants to solve them with the help of a neural network, whose first (and impressive) results you can see in the following video:

This neural network is known as NeRF (Neural Radiance Fields), and it was originally created to generate 3D scenes from sets of 2D images. If Google has decided to rely on it, it is because, once a 3D representation of a scene has been built, it is much easier for the network to analyze the information contained in an image, since it can “move” through it.
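
To get a rough intuition of what a neural radiance field actually does, here is a deliberately tiny sketch in PyTorch. It is our own illustration, not Google’s MultiNeRF code: the names TinyNeRF and render_ray are made up, and the real thing uses positional encodings and far larger networks. The idea is simply that a small network maps a 3D point and a viewing direction to a density and a color, and a pixel is obtained by compositing samples along the camera ray:

```python
# Illustrative sketch of the NeRF idea (not Google's MultiNeRF code):
# an MLP maps a 3D position and viewing direction to a density and an RGB color,
# and samples along a camera ray are composited with volume rendering.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Input: 3D position + 3D view direction -> [density, r, g, b]
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, positions, directions):
        out = self.mlp(torch.cat([positions, directions], dim=-1))
        density = torch.relu(out[..., :1])    # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])     # color in [0, 1]
        return density, rgb

def render_ray(model, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Composite colors sampled along one ray (standard volume rendering)."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # sample points on the ray
    dirs = direction.expand(n_samples, 3)
    density, rgb = model(points, dirs)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-density.squeeze(-1) * delta)   # opacity per sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)        # final pixel color

# Example: render the color of a single pixel for one camera ray.
model = TinyNeRF()
color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(color)
```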

In the MultiNeRF project documentation, its mission is clearly stated:

We modified NeRF to train the AI directly on linear RAW images, preserving the full dynamic range of the scene. By rendering raw output images of the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera’s point of view, we can manipulate focus, exposure, and tone mapping after analyzing the image.

In other words: the algorithm analyzes the raw data of the RAW file and uses artificial intelligence to work out what the resulting photo would look like if there were no digital noise in the scene. The goal is to preserve as much detail as possible with as little noise as possible.
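
To see what that buys you in practice, here is a toy example (again ours, not MultiNeRF’s actual pipeline): because the network renders linear, RAW-like values, changing the exposure afterwards is just multiplying by a gain before a tone curve is applied, so the same rendering can be “re-exposed” as many times as you like without touching the network again:

```python
# Illustrative only: why rendering in linear (RAW-like) space lets you change
# exposure and tone mapping *after* the fact. The values and the simple gamma
# curve below are our own example, not MultiNeRF's real processing.
import numpy as np

def tonemap(linear_rgb, exposure=1.0, gamma=2.2):
    """Apply an exposure gain in linear light, then a simple gamma tone curve."""
    exposed = linear_rgb * exposure           # exposure is just a gain in linear space
    return np.clip(exposed, 0.0, 1.0) ** (1.0 / gamma)

# A pretend HDR linear rendering of a dark scene (values may exceed the usual 0-1 range).
linear_render = np.array([[0.02, 0.05, 0.10],
                          [0.80, 1.50, 3.00]])

# The same rendering can be "re-shot" at different exposures without re-running the network.
for stops in (0, 2, 4):
    print(f"+{stops} EV:", tonemap(linear_render, exposure=2.0 ** stops))
```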

For now, the AI that will be in charge of carrying out this entire process is in its early stages, although there is no doubt that it is something we would like to see implemented in the Google Pixel sooner rather than later. It is still too early to know whether that will be possible, but if so, it would not be bad for other manufacturers to jump on the bandwagon. In the meantime, don’t hesitate to take a look at the phones with the best cameras you can find on the market and the best apps to edit your photos available for Android.
