Google has refined its neural networks with a new artificial intelligence system that can restore heavily degraded images by increasing their resolution to sixteen times the original pixel count. In other words, it can take a blurred or pixelated image and make it far more recognizable.

In a paper titled "Pixel Recursive Super Resolution," the Google researchers describe the two-pronged approach behind the new system's capability.

First, they trained the system on a large number of images so that it would learn the range of facial features, including their subtler nuances. The second part of the process taught the network to compare each 8x8-pixel source image against plausible 32x32-pixel versions of the same picture.

The two neural networks then work together, combining their outputs to make the best guess at what the original image should look like.
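The general shape of such a pairing can be sketched in a few lines of Python. The snippet below is an illustrative assumption, not Google's actual architecture: the class names (ConditioningNet, PriorNet), layer sizes, and the simple addition of outputs are all placeholders meant only to show the idea of one network working from the 8x8 source and another judging how natural the 32x32 result looks.

import torch
import torch.nn as nn

class ConditioningNet(nn.Module):
    # Maps the blurry 8x8 source up to per-pixel colour scores at 32x32.
    def __init__(self, channels=3, num_colors=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=4, mode='nearest'),   # 8x8 -> 32x32
            nn.Conv2d(32, channels * num_colors, 3, padding=1),
        )

    def forward(self, low_res):
        return self.net(low_res)

class PriorNet(nn.Module):
    # Scores how plausible a 32x32 image looks on its own
    # (a toy stand-in for the paper's image prior).
    def __init__(self, channels=3, num_colors=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels * num_colors, 3, padding=1),
        )

    def forward(self, high_res):
        return self.net(high_res)

cond, prior = ConditioningNet(), PriorNet()
low = torch.rand(1, 3, 8, 8)         # blurry 8x8 source
guess = torch.rand(1, 3, 32, 32)     # current 32x32 estimate
logits = cond(low) + prior(guess)    # the "best guess" combines both networks' votes
print(logits.shape)                  # torch.Size([1, 768, 32, 32])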

This is a significant improvement over earlier upscaling, which would simply take, say, a red block of pixels in the middle of a face and enlarge it to 16 times its original size. Google's new AI system, by contrast, can recognize which facial feature that block represents and draw it accordingly, by comparing high-resolution images against the low-resolution source.
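For comparison, that older kind of enlargement can be reproduced with a couple of lines of Python (purely illustrative, using a random toy image): each of the 64 source pixels is just copied into a 4x4 block, so a red block stays a red block and no new detail appears.

import numpy as np

low = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)  # a toy 8x8 image
big = low.repeat(4, axis=0).repeat(4, axis=1)                    # 32x32, but still only 64 distinct values
print(big.shape)  # (32, 32, 3)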

The researchers explain in the paper that when the source image lacks detail, the neural network synthesizes new details that look plausible to a human observer. To do this, it first scales both images to the same size, since the system can only spot the differences between two images once their dimensions match.
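A minimal Python illustration of that point, with random tensors standing in for real images: the 8x8 source is stretched to 32x32 first, and only then is a pixel-by-pixel comparison with a candidate high-resolution image possible. The tensor names and the simple absolute-difference measure are assumptions made for the sketch.

import torch
import torch.nn.functional as F

low = torch.rand(1, 3, 8, 8)          # low-resolution source
candidate = torch.rand(1, 3, 32, 32)  # a candidate high-resolution image

stretched = F.interpolate(low, size=(32, 32), mode='nearest')  # bring both images to the same size
difference = (candidate - stretched).abs().mean()              # now a direct pixel-wise comparison is possible
print(difference.item())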

Beyond image enhancement, the new system could also be used to compress images. Google announced last January that it would be adopting a machine learning-based compression tool; if that comes into effect, users could save a great deal of bandwidth by sending less information.
