It's no secret that modern phones are packed with neural networks. Manufacturers place special emphasis on processing the images their cameras capture. But why is this done? What inspires the engineers? Let's take it step by step.

How does a person see?

The clear, beautiful image you see with your eyes is largely a construction of your brain. The eye captures visual information as a stream of "frames" at a certain rate, and each of those frames looks like this: a very small central region is sharp, while everything else is blurry. At the same time, the eyes jump to a different point of the scene in each "frame".

So if we played back the sequence of "frames" captured by the eyes, we would get a shaky video in which only the central point is sharp and everything else is blurry. Watching such a video could make you feel sick. But thanks to our brain, which assembles a stable picture from the available frames and from a lifetime of visual experience, we never even think about what is happening "inside".

How does a smartphone take pictures?

When you launch the camera app, continuous video capture begins, and when you press the shutter button, the phone processes the whole stream of frames captured up to that moment. From this pile of data, algorithms "finish" the image. Moreover, they try to do this so that you like the result: the neural networks are trained to process "raw" frames in a way that most users prefer. In effect, the phone completes the picture much as our brain does.
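To make the "pile of frames" idea concrete, here is a minimal sketch in Python of the core trick, multi-frame merging. It assumes the frames are already aligned (real pipelines, such as Google's HDR+, also handle alignment, per-frame weighting, and tone mapping), and the function name merge_burst is illustrative, not any vendor's actual API:

```python
import numpy as np

def merge_burst(frames):
    """Merge a burst of already-aligned frames by averaging.

    Averaging N frames with independent sensor noise reduces the
    noise roughly by a factor of sqrt(N) while keeping the scene.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0).astype(np.uint8)

# Simulate a noisy burst: one "true" scene plus per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(480, 640, 3)).astype(np.float32)
burst = [np.clip(scene + rng.normal(0, 25, scene.shape), 0, 255)
         for _ in range(8)]

merged = merge_burst(burst)
print(f"noise in one frame:   {np.abs(burst[0] - scene).mean():.1f}")
print(f"noise after merging 8: {np.abs(merged - scene).mean():.1f}")
```

Running it shows the per-pixel noise dropping roughly by a factor of √8 ≈ 2.8 after merging eight frames, which is the whole point of shooting a burst instead of a single exposure.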

What is all this for?

According to the laws of physics, obtaining a high-quality image requires a large sensor: the more light-gathering area, the more photons are collected and the cleaner the signal. Obviously, such sensors will not fit inside a slim phone. So, by copying the behavior of the brain, phone manufacturers manage to "circumvent the laws of physics": the quality of a photo now depends less on the optics and more on how advanced the algorithms are.
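A rough back-of-the-envelope shows why merging frames substitutes for sensor area (assuming photon shot noise dominates, as it does in dim light):

SNR ∝ √(light collected),  so  SNR_merged ≈ √k × SNR_single

Merging k frames gathers k times the light, so eight stacked frames deliver roughly the noise performance of a single shot on a sensor with eight times the area.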

A real example

British actress Tessa Coates posted a photo from a wedding dress fitting, and everything would have been fine except for one "but". If you look closely, her arms are positioned completely differently in reality and in each of the two mirror reflections. Did Tessa Coates break the matrix? No.

Photo by Tessa Coates

It all comes down to how the iPhone shoots, which, as we found out, resembles how our brain works. The camera took many pictures while scanning from left to right, and during that time the actress could have moved her arms. The artificial intelligence algorithm then combined all these frames into one, hence this curious effect.

In fact, it is even more complicated: the phone may also have been choosing the best shots from the burst. But the bottom line is clear: Tessa Coates caught a rare case where a neural network made a glaring mistake and gave itself away.
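For intuition about how one "photo" can contain several moments in time, here is a toy sketch of per-region frame selection. This is illustrative code, not Apple's actual pipeline: for each tile of a grayscale image, it keeps the sharpest version found across the burst, so neighboring tiles can come from different instants.

```python
import numpy as np

def sharpness(tile):
    """Variance of a simple Laplacian response: higher means sharper."""
    t = tile.astype(np.float32)
    lap = (np.roll(t, 1, 0) + np.roll(t, -1, 0)
           + np.roll(t, 1, 1) + np.roll(t, -1, 1) - 4 * t)
    return lap.var()

def composite_best_tiles(frames, tile=64):
    """Build one image by taking, for each tile, the sharpest
    version across all frames of the burst.

    Because adjacent tiles may come from frames shot at different
    moments, a moving subject can end up in inconsistent poses
    within a single output image.
    """
    h, w = frames[0].shape
    out = np.empty_like(frames[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            candidates = [f[y:y+tile, x:x+tile] for f in frames]
            out[y:y+tile, x:x+tile] = max(candidates, key=sharpness)
    return out
```

In Tessa Coates's photo, the regions containing each mirror were presumably "finished" from frames captured at different moments, which is why the three versions of her arms disagree.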
