Google outlines how the Pixel 4’s dual cameras capture depth in portrait photos

Enrique, 17 December 2019

Google’s Pixel 4 and 4 XL mark the first time Google has used dual main cameras in a smartphone, across both the Pixel and Nexus lineups. In its latest Google AI Blog post, Google explains how the second camera improves depth sensing and distance perception, which the camera needs in order to know what to blur out.


Aside from just adding the second camera, Google also uses the cameras’ dual-pixel autofocus system to improve depth estimation and more closely match the look of natural bokeh from an SLR camera.

With the Pixel 2 and Pixel 3, Google split each pixel of the single camera to compare two slightly different images and calculate depth from them. This slight difference between the images is called parallax, and the approach was quite effective for close-up photos but struggled with subjects farther away.

With the second camera, Google can capture a much larger parallax. Now that the two images come from two different cameras roughly 13mm apart, differences in depth become more apparent and the scene’s depth can be estimated more accurately.
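To get a sense of why the wider baseline helps, here is a minimal sketch of the pinhole-stereo relation disparity = focal length × baseline / depth. Only the roughly 13mm camera spacing comes from the article; the focal length in pixels and the sub-millimetre dual-pixel baseline are illustrative assumptions, not Google’s actual numbers.

```python
# Minimal sketch of the pinhole-stereo relation: disparity = f * B / Z.
# Only the ~13 mm camera spacing comes from the article; the focal length
# and the dual-pixel baseline below are assumptions for illustration.

def disparity_px(depth_m, baseline_m, focal_px):
    """Pixel shift a point at depth_m produces between two views baseline_m apart."""
    return focal_px * baseline_m / depth_m

FOCAL_PX = 3000.0               # assumed focal length in pixels
DUAL_PIXEL_BASELINE_M = 0.001   # ~1 mm split inside a single lens (assumed)
DUAL_CAMERA_BASELINE_M = 0.013  # ~13 mm between the two cameras (from the article)

for depth in (0.5, 2.0, 5.0):   # subject distance in metres
    dp = disparity_px(depth, DUAL_PIXEL_BASELINE_M, FOCAL_PX)
    dc = disparity_px(depth, DUAL_CAMERA_BASELINE_M, FOCAL_PX)
    print(f"{depth:.1f} m: dual-pixel {dp:5.2f} px, dual-camera {dc:6.2f} px")
```

Under these assumed numbers, the dual-pixel shift drops to well under a pixel at five metres while the dual-camera shift is still several pixels, which is the intuition behind the wider baseline helping with distant subjects.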

Image: Dual-Pixel parallax (left) vs dual-camera parallax (right). Source: Google

The photo on the left shows the half-pixel difference used by the Pixel 2 and 3’s single-camera setup, while the image on the right shows the difference between the views from the two cameras. It doesn’t stop there: while the second camera captures horizontal parallax across the scene, the half-pixel information from each camera captures a smaller, but still useful, vertical parallax.

Image: Horizontal (dual-camera) and vertical (Dual-Pixel) parallax. Source: Google

This lets the cameras see a four-way parallax, giving the Pixel 4 more useful information that complements the Dual-Pixel method from the Pixel 2 and Pixel 3. It has helped the Pixel 4’s camera reduce depth errors (which show up as badly drawn bokeh edges) and estimate the distance of objects that are farther away.
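Below is a toy sketch of what that four-way signal could look like in code: brute-force matching recovers a horizontal shift between two camera views and a smaller vertical shift between the half-pixel views. The synthetic random scene, the shift amounts, and the matching method are all assumptions for illustration; Google’s actual pipeline is far more sophisticated than this kind of block matching.

```python
import numpy as np

def best_shift(ref, other, axis, max_shift=8):
    """Integer shift along `axis` that best re-aligns `other` with `ref`."""
    errors = [np.mean((ref - np.roll(other, s, axis=axis)) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64))     # stand-in for image texture

right_cam = np.roll(scene, -5, axis=1)    # dual cameras: horizontal parallax
half_pixel = np.roll(scene, -2, axis=0)   # dual pixels: smaller vertical parallax

print("horizontal disparity:", best_shift(scene, right_cam, axis=1))   # 5 px
print("vertical disparity:  ", best_shift(scene, half_pixel, axis=0))  # 2 px
```

Having measurable shifts along two perpendicular directions gives the depth estimator complementary evidence, which is the point of the four-way parallax.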

The image below shows how both Dual-Pixel and dual-camera information is used to create the overall depth map of an image. The Pixel 4 can also adapt if information from one of the sources isn’t available. One example Google gave was when “the subject is too close for the secondary telephoto camera to focus on.” In that case, the camera falls back to using only the Dual-Pixel or only the dual-camera information for the image.

Image: How Dual-Pixel and dual-camera information combine into the depth map. Source: Google
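As a rough illustration of that fallback behaviour, here is a hypothetical sketch that blends the two depth cues when both exist and falls back to whichever one is available otherwise. The function name, weighting, and numbers are assumptions for illustration, not Google’s actual fusion logic.

```python
from typing import Optional

def estimate_depth(dual_pixel_m: Optional[float],
                   dual_camera_m: Optional[float],
                   camera_weight: float = 0.7) -> float:
    """Blend both depth cues if present, otherwise use whichever survives."""
    if dual_pixel_m is not None and dual_camera_m is not None:
        # Both cues available: simple weighted blend (weight is illustrative).
        return camera_weight * dual_camera_m + (1.0 - camera_weight) * dual_pixel_m
    if dual_camera_m is not None:
        return dual_camera_m
    if dual_pixel_m is not None:
        return dual_pixel_m
    raise ValueError("no depth cue available")

# Subject too close for the telephoto to focus: only the Dual-Pixel cue remains.
print(estimate_depth(dual_pixel_m=0.35, dual_camera_m=None))  # 0.35
```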

Finally, the bokeh is applied by defocusing the background, which produces synthesized ‘disks’ that grow larger the farther they are from the subject. Tone mapping now happens right after the image is blurred; Google used to do it the other way around. Blurring makes the image lose detail and flattens contrast, but when it is done in the current order (blur first, then tone mapping), the contrasty look of a Pixel photo is retained.

Image: The final rendered bokeh, with tone mapping applied after the blur. Source: Google
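To see why that ordering matters, here is a toy calculation: a single saturated highlight averaged into a ten-pixel bokeh ‘disk’ of dark background stays bright if the blur happens before tone mapping, and comes out dim and washed out if it happens after. The gamma curve and the pixel values are made-up assumptions, not Google’s actual HDR+ tone mapping.

```python
def tone_map(linear):
    """Toy gamma-style tone curve with highlight clipping (illustrative only)."""
    return min(max(linear, 0.0) ** (1 / 2.2), 1.0)

highlight, background = 8.0, 0.05        # linear (pre-tone-map) pixel values
disk = [highlight] + [background] * 9    # one bright point smeared over 10 pixels
avg = lambda values: sum(values) / len(values)

blur_then_tone = tone_map(avg(disk))                 # current order: blur, then tone map
tone_then_blur = avg([tone_map(x) for x in disk])    # old order: tone map, then blur

print(f"blur first, tone map after: {blur_then_tone:.2f}")   # ~0.93, bright bokeh disk
print(f"tone map first, blur after: {tone_then_blur:.2f}")   # ~0.33, washed out
```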

Despite the Pixel 4’s lukewarm reviews at launch, it features a phenomenal camera thanks to all the work Google’s engineers have put into the image processing and HDR+. So the next time you’re waiting for HDR+ to finish processing a freshly shot portrait on a Pixel 2 or even a Pixel 3a, remember that it’s some Google magic at work, and well worth the brief wait.

Source



Reader comments

Actually, the processing of a sensor is a mix of hardware and software. Yes, I also prefer hardware-based results, and DSLRs are the apex of that.

But I’d rather not let a software algorithm do the job. Call me old school, but I definitely prefer hardware physics over software computation.

Same here. If you’re a photographer, you’ll understand what I mean. The dynamic range of these samples is very good in quality compared to a DSLR, and even compared to other devices.
