A brief guide to the nerdy science behind the past decade's quantum leap in phone photo quality.
Your smartphone's camera is much more than just a camera. Every time you take a photo, it does far more than hoover up tiny dots of light and display them on-screen. A big part of that behind-the-scenes magic is computational photography, which sounds complicated but is really a broad term for a camera manipulating an image digitally, using a built-in computer, rather than relying on good old-fashioned optics alone.
The camera in any modern smartphone effectively acts as a standalone computer. It uses specialized computing cores to process the digital information captured by the camera sensor and translate it into an image we can see on the display, share across the internet, or even print out and hang on the wall. It's a very different process from the old days of film and darkrooms.
Your phone's camera is basically a standalone computer.
The image sensor is where it all starts. It's a rectangular array of tiny, light-sensitive semiconductors known as photosites. The end product, the finished image, is also a rectangular array, but one made of colored pixels. The conversion isn't a one-to-one map between photosites and pixels. That's where image processing comes in, and where computational photography begins.
The image sensor in your phone is covered by a pattern of red, green, and blue light filters, so each photosite registers light of only one color (really a band of wavelengths). The final image, by contrast, has all three color values available in every single pixel, and the relative intensity of those three values determines the color our eyes see.
The first step is to apply an algorithm that interpolates the color information captured by neighboring photosites into the actual color each pixel in the image should show. This step is known as demosaicing.
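To make that concrete, here's a minimal sketch of bilinear demosaicing in Python with NumPy and SciPy, assuming an RGGB Bayer layout. It's purely illustrative; real phone pipelines use much smarter, edge-aware interpolation.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a single-channel Bayer mosaic (RGGB layout assumed).

    raw: 2-D float array where each photosite holds one color sample.
    Returns an (H, W, 3) RGB image with the missing samples interpolated
    from neighboring photosites of the same color.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)

    # Masks marking which photosites carry which color (RGGB layout).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))  # simple neighborhood average
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        sparse = np.where(mask, raw, 0.0)
        # Normalized convolution: sum of known neighbors / count of known neighbors.
        num = convolve(sparse, kernel, mode="mirror")
        den = convolve(mask.astype(np.float64), kernel, mode="mirror")
        interp = num / np.maximum(den, 1e-9)
        # Keep the measured samples, fill in only the missing ones.
        rgb[..., ch] = np.where(mask, raw, interp)
    return rgb

# Example: a tiny synthetic 4x4 mosaic becomes a 4x4x3 RGB image.
mosaic = np.random.rand(4, 4)
print(demosaic_bilinear(mosaic).shape)  # (4, 4, 3)
```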
The next step your phone's computer applies is a sharpening algorithm. This accentuates edges while smoothing the transitions from one color to another. Remember, each pixel in the image can be only one color, but there are millions of colors to choose from: the edge between a red flower and a blue sky needs to look crisp without turning into a harsh, jagged boundary. Getting this right isn't easy.
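One classic way to do this is unsharp masking: blur the image, treat whatever the blur removed as detail, and add a scaled copy of that detail back. The sketch below works on a grayscale image with values between 0 and 1, and the default parameters are just illustrative; production sharpening is far more adaptive.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.5, amount=0.8):
    """Sharpen by adding back a scaled copy of the detail the blur removed.

    image: 2-D float array in [0, 1].
    radius: Gaussian blur sigma; larger values emphasize coarser edges.
    amount: how strongly the recovered detail is boosted.
    """
    blurred = gaussian_filter(image, sigma=radius)
    detail = image - blurred          # high-frequency content (edges, texture)
    return np.clip(image + amount * detail, 0.0, 1.0)
```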
Next, things like white balance and contrast are addressed. These processes can make a big difference to the quality of the photo, as well as the actual colors in it. These adjustments are all just numbers; once your phone's algorithms have defined the color edges, it's much simpler to tweak the actual shade of a color or the level of contrast.
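As a rough illustration, here's what a gray-world white balance and a simple contrast adjustment look like as plain arithmetic on the pixel values. The gray-world assumption and the contrast factor are stand-ins for the far more sophisticated heuristics phones actually use.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Scale each channel so the scene averages to neutral gray (gray-world assumption).

    rgb: (H, W, 3) float array in [0, 1].
    """
    means = rgb.reshape(-1, 3).mean(axis=0)          # average of each channel
    gains = means.mean() / np.maximum(means, 1e-9)   # boost channels that run dark
    return np.clip(rgb * gains, 0.0, 1.0)

def adjust_contrast(rgb, factor=1.2):
    """Push pixel values away from (factor > 1) or toward (factor < 1) mid-gray."""
    return np.clip(0.5 + factor * (rgb - 0.5), 0.0, 1.0)
```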
With each photo you take, your phone is doing a ton of number crunching.
Finally, the output data is analyzed, and the image is compressed. Colors that are nearly indistinguishable are merged into a single color (because we can't see the difference), and where possible, groups of pixels are combined into a single piece of information, resulting in a smaller output file.
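Here's a toy version of the "merge nearly identical colors" idea: snapping every channel to a coarser grid so imperceptibly different values become identical and compress far better. Real formats such as JPEG or HEIC quantize in a frequency domain and subsample color instead; this only captures the intuition.

```python
import numpy as np

def quantize_colors(rgb, levels=32):
    """Snap each channel to one of `levels` values so near-identical colors become identical.

    Long runs of identical values compress far better than slightly varying ones,
    which is the basic intuition behind the quantization step in real codecs.
    """
    step = 1.0 / levels
    return np.round(rgb / step) * step
```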
These tools create an image that's as accurate as anything the old film process could produce. But computational photography can do a lot more by tuning those same algorithms. For example, portrait photography changes the edge detection and sharpening process, while night photography changes the contrast and color-balance algorithms. And the "AI" scene detection modes in modern phones also use computational photography to identify what's in the shot (a sunset, for example) and shift the white balance to produce a pleasing shot with warm colors.
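In practice, a scene mode can be as simple as swapping in a different set of processing parameters. The snippet below uses entirely hypothetical labels and values to show the shape of the idea; actual phone pipelines tune far more knobs than this.

```python
# Hypothetical per-scene presets; the labels and numbers are made up for illustration.
SCENE_PRESETS = {
    "portrait": {"sharpen_amount": 0.4, "contrast": 1.05, "wb_warm_shift": 0.00},
    "night":    {"sharpen_amount": 0.2, "contrast": 1.30, "wb_warm_shift": 0.00},
    "sunset":   {"sharpen_amount": 0.6, "contrast": 1.10, "wb_warm_shift": 0.08},  # warmer colors
    "default":  {"sharpen_amount": 0.6, "contrast": 1.10, "wb_warm_shift": 0.00},
}

def pipeline_settings(scene_label):
    """Pick processing parameters based on what the scene classifier thinks it sees."""
    return SCENE_PRESETS.get(scene_label, SCENE_PRESETS["default"])
```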
More recently, the increased processing power of smartphones has considerably expanded what computational photography can do. It's a big part of why some of the best Android phones, like the Google Pixel series and recent Samsung Galaxy models, take such great photos. The raw number-crunching power of these devices lets them sidestep some of the weaknesses of their small sensors. Where a much larger sensor might normally be required to take a clear photo in low light, computational techniques can intelligently brighten and denoise images to produce better-looking shots.
Your phone's Night Mode feature would be impossible without computational photography.
Take the Google Pixel series, for instance. At the heart of the Google Camera app's magic is multi-frame photography, the technique behind its HDR+ mode: the phone takes several photos in quick succession at different exposure levels, then stitches them together into a single image with even exposure throughout. Because your phone was probably moving while you were taking the photo, HDR+ relies on Google's algorithms to put the image back together without ghosting, motion blur, or other aberrations, while also intelligently reducing the appearance of noise. That's a very different process from what you might think of as photography. The computing power on board and the code commanding it are just as important, if not more so, than the lens and the sensor.
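Stripped of alignment, noise modeling, and tone mapping, the merge step of a bracketed multi-frame pipeline can be sketched like this: weight every pixel of every frame by how well exposed it is, normalize for exposure time, and average. This is a generic textbook-style merge, not Google's actual HDR+ algorithm.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge aligned frames shot at different exposures into one well-exposed image.

    frames: list of (H, W, 3) float arrays in [0, 1], assumed already aligned.
    exposure_times: relative exposure time for each frame.
    """
    acc = np.zeros_like(frames[0])
    weight_sum = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        # Hat-shaped weight: 1 at mid-gray, near 0 where the pixel is clipped dark or bright.
        weight = np.maximum(1.0 - np.abs(frame - 0.5) * 2.0, 1e-3)
        acc += weight * (frame / t)     # normalize each frame to a common exposure
        weight_sum += weight
    radiance = acc / weight_sum
    return radiance / radiance.max()    # crude tone-map back into [0, 1]
```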
Multi-frame photography is at the heart of most smartphones' night modes, including Google's Night Sight feature. These computation-heavy features not only take several long exposures over a few seconds but also compensate for the considerable movement that happens during a handheld shot. Once again, computational power is required to gather all that data and rearrange it into a pleasing, blur-free photo.
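The stacking idea itself is simple enough to sketch: estimate how much each frame moved relative to the first (here with basic phase correlation on grayscale frames, integer shifts only) and average the aligned frames, which cuts random noise by roughly the square root of the number of frames. Real night modes also handle rotation, local motion, and moving subjects, which this toy version ignores.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) translation between two grayscale frames via phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.maximum(np.abs(cross), 1e-12)
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large offsets back to negative shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def stack_night_frames(frames):
    """Align a burst of noisy low-light grayscale frames to the first one and average them."""
    ref = frames[0]
    aligned = [ref]
    for frame in frames[1:]:
        dy, dx = estimate_shift(ref, frame)
        aligned.append(nd_shift(frame, (dy, dx), mode="nearest"))
    # Averaging N aligned frames reduces random noise by roughly sqrt(N).
    return np.mean(aligned, axis=0)
```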
Take this idea a step further and you have the Google Pixel's astrophotography mode, which uses computational photography to compensate for the Earth's rotation, producing clear photos of the cosmos without overexposing the landscape details.
The next step in computational photography, as seen in the Google Pixel 6 series, is applying these techniques to video. Google's 2021 flagship promises to bring the same level of HDR+ processing previously applied to still photos to 4K footage at 30 frames per second.
The power of computational photography is constrained by the amount of data it can gather from your phone's sensor and the number-crunching power available in your phone, which is why it's one of the major areas of research and development for pretty much all major phone manufacturers.
So when you notice your next phone taking way better photos than the model it's replacing, chances are it's not just the camera hardware that's responsible. Rather, it's the entire computer system behind it.