Google Camera


Google Camera is a camera phone application developed by Google for Android. Development began in 2011 at Google X, led by Marc Levoy, who was developing image fusion technology for Google Glass. The app was initially supported on all devices running Android 4.4 KitKat and higher, but is now officially supported only on Google's Pixel devices. It was publicly released for Android 4.4+ on the Google Play Store on April 16, 2014, and removed from public view on February 17, 2016.

Features

Google Camera contains a number of features that can be activated either in the Settings page or on the row of icons at the top of the app.

Pixel Visual/Neural Core

Starting with the Pixel line, the camera app has been aided by hardware accelerators to perform its image processing. The first generation of Pixel phones used Qualcomm's Hexagon DSPs and Adreno GPUs to accelerate processing. The Pixel 2 and Pixel 3 include the Pixel Visual Core for this purpose, and the Pixel 4 introduced the Pixel Neural Core.

HDR+

Unlike earlier versions of high-dynamic-range (HDR) imaging, HDR+, also known as HDR+ on, uses computational photography techniques to achieve higher dynamic range. HDR+ continuously captures burst shots with short exposures; when the shutter is pressed, the last 5–15 frames are analysed to pick the sharpest shots, which are selectively aligned and combined by image averaging. HDR+ also uses semantic segmentation to detect faces, brightening them with synthetic fill flash, and to darken and denoise skies. HDR+ further reduces noise and improves colors, while avoiding blown-out highlights and motion blur. HDR+ was introduced on the Nexus 6 and backported to the Nexus 5.
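
The core align-and-average idea can be sketched in a few lines of Python. This is a minimal illustration only: the brute-force global alignment and plain mean below stand in for Google's tile-based alignment and robust merge, and all function names are invented for the example.

    import numpy as np

    def sharpness(frame):
        # Variance of a simple Laplacian response as a sharpness score.
        lap = (-4 * frame
               + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
               + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
        return lap.var()

    def align_to(reference, frame, max_shift=8):
        # Brute-force global translation search; HDR+ proper aligns per tile.
        best, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err = np.mean((np.roll(frame, (dy, dx), (0, 1)) - reference) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return np.roll(frame, best, (0, 1))

    def hdr_plus_merge(burst):
        # burst: list of grayscale frames (float arrays in [0, 1]), e.g. the
        # last 5-15 short exposures captured before the shutter press.
        reference = max(burst, key=sharpness)   # pick the sharpest frame
        aligned = [align_to(reference, f) for f in burst]
        return np.mean(aligned, axis=0)         # averaging reduces noise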

HDR+ enhanced

Unlike HDR+/HDR+ On, 'HDR+ enhanced' mode does not use Zero Shutter Lag. Like Night Sight, HDR+ enhanced features positive shutter lag: it captures images after the shutter is pressed. HDR+ enhanced is similar to HDR+ from the Nexus 5, Nexus 6, Nexus 5X and Nexus 6P. It is believed to use underexposed and overexposed frames, like Apple's Smart HDR. HDR+ enhanced captures increase the dynamic range compared to HDR+ on. On the Pixel 3, HDR+ enhanced uses the learning-based AWB (auto white balance) algorithm from Night Sight.

Live HDR+

Starting with the Pixel 4, Live HDR+ replaced HDR+ on, featuring a WYSIWYG viewfinder with a real-time preview of the HDR+ result. Live HDR+ uses the learning-based AWB algorithm from Night Sight and averages up to nine underexposed frames.

Dual Exposure Controls

'Live HDR+' mode uses Dual Exposure Controls, with separate sliders for overall brightness and for shadows. This feature was made available with the Pixel 4 and has not been retrofitted to older Pixel devices due to hardware limitations.
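
As a rough illustration, two such sliders might map onto a simple tone adjustment like the following Python sketch; Google has not published the actual transfer functions, so the exposure gain and shadow-lifting gamma below are assumptions.

    import numpy as np

    def dual_exposure(image, brightness=0.0, shadows=0.0):
        # image: linear RGB array in [0, 1].
        # brightness: overall exposure slider in stops (e.g. -2 .. +2).
        # shadows: shadow slider in [0, 1]; higher lifts dark regions more.
        out = image * (2.0 ** brightness)       # exposure gain in stops
        out = np.clip(out, 0.0, 1.0)
        # A gamma curve lifts shadows while barely moving the highlights.
        return out ** (1.0 / (1.0 + shadows))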

Motion Photos

Google Camera's Motion Photos mode is similar to HTC's Zoe and iOS's Live Photo. When enabled, a short, silent video clip of relatively low resolution is paired with the original photo. If RAW is enabled, only a 0.8 MP DNG file is created rather than the non-motion 12.2 MP DNG. Motion Photos was introduced on the Pixel 2 and is disabled in HDR+ enhanced mode.

Video Stabilization

Fused Video Stabilization, a technique that combines optical image stabilization and electronic/digital image stabilization, can be enabled for significantly smoother video. It also corrects rolling shutter distortion and focus breathing, among other problems. Fused Video Stabilization was introduced on the Pixel 2.

Super Res Zoom

Super Res Zoom is a multi-frame super-resolution technique introduced with the Pixel 3 that exploits small shifts of the image sensor to achieve higher resolution, which Google claims is equivalent to 2-3x optical zoom. It is similar to drizzle image processing. Super Res Zoom can also be used with a telephoto lens; for example, Google claims the Pixel 4 can capture 8x zoom at near-optical quality.
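
The drizzle-like accumulation can be illustrated with a NumPy sketch that scatters sub-pixel-shifted frames onto a finer grid and averages. It assumes the per-frame shifts are already known, whereas the real pipeline estimates them from natural hand tremor; everything here is a simplified stand-in, not Google's implementation.

    import numpy as np

    def drizzle(frames, shifts, scale=2):
        # frames: list of HxW arrays; shifts: per-frame (dy, dx) offsets in
        # input pixels (e.g. measured hand tremor); scale: upsampling factor.
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        weight = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, shifts):
            ys = np.clip(((np.arange(h)[:, None] + dy) * scale).round().astype(int),
                         0, h * scale - 1)
            xs = np.clip(((np.arange(w)[None, :] + dx) * scale).round().astype(int),
                         0, w * scale - 1)
            ys, xs = np.broadcast_arrays(ys, xs)
            # Scatter each input sample onto its nearest fine-grid cell.
            np.add.at(acc, (ys, xs), frame)
            np.add.at(weight, (ys, xs), 1.0)
        return acc / np.maximum(weight, 1)   # average where samples landed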

Smartburst

Smartburst is activated by holding the shutter button down. Whilst the button is held down, up to 10 shots per second are captured. Once released, the best pictures captured are automatically highlighted.
Different 'creations' can be produced from the captured pictures.

Top Shot

When Motion Photos is enabled, Top Shot analyzes up to 90 additional frames from 1.5 seconds before and after the shutter is pressed. The Pixel Visual Core is used to accelerate the analysis using computer vision techniques, ranking frames based on object motion, motion blur, auto exposure, auto focus, and auto white balance. About ten additional photos are saved, including an additional HDR+ photo of up to 3 MP. Top Shot was introduced on the Pixel 3.
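
A toy version of such frame ranking might look like the following; the quality signals and weights are invented for illustration, as Google has not published the actual scoring model.

    def rank_frames(frames):
        # frames: list of dicts of per-frame quality scores in [0, 1], e.g.
        # {"sharpness": 0.9, "exposure": 0.8, "focus": 0.7, "white_balance": 0.6}
        weights = {"sharpness": 0.4, "exposure": 0.2,
                   "focus": 0.2, "white_balance": 0.2}   # invented weights
        def score(f):
            return sum(weights[k] * f[k] for k in weights)
        return sorted(frames, key=score, reverse=True)   # best frame first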

Other features

Location - Location information obtained via GPS and/or Google's location service can be added to pictures and videos when enabled.

Functions

Like most other camera applications, Google Camera offers different 'functions' or 'modes', allowing the user to take different types of photo or video.

Slow Motion

Slow motion video can be captured in Google Camera at either 120 or, on supported devices, 240 frames per second.

Panorama

Panoramic photography is also possible with Google Camera. Four types of panoramic photo are supported: horizontal, vertical, wide-angle and fisheye. Once the Panorama function is selected, one of these four modes can be selected at a time from a row of icons at the top of the screen.

Photo Sphere

Google Camera allows the user to create a 'Photo Sphere', a 360-degree panorama photo, originally added in Android 4.2 in 2012. These photos can then be embedded in a web page with custom HTML code or uploaded to various Google services.

Portrait

Portrait mode offers an easy way for users to take 'selfies' or portraits with a Bokeh effect, in which the subject of the photo is in focus and the background is slightly blurred. This effect is achieved via the parallax information from dual-pixel sensors when available, and the application of machine learning to identify what should be kept in focus and what should be blurred out. Portrait mode was introduced on the Pixel 2.
Additionally, a "face retouching" feature can be activated which cleans up blemishes and other imperfections from the subject's skin.
The Pixel 4 featured an improved Portrait mode: its machine learning algorithm uses parallax information from both the telephoto camera and the dual pixels, together with the difference between the telephoto and wide cameras, to create more accurate depth maps. For the front-facing camera, it uses parallax information from the front-facing and IR cameras. The blur effect is applied at the raw stage, before tone mapping, for a more realistic, SLR-like bokeh effect.
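
A heavily simplified sketch of depth-dependent background blur follows, assuming a depth map is already available; real Portrait mode estimates depth from dual pixels and applies the blur in the raw domain, which this toy example does not attempt.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def portrait_blur(image, depth, subject_depth, sigma_max=8.0):
        # image: HxWx3 float array; depth: HxW map in the same units as
        # subject_depth. Pixels near the subject plane stay sharp; the blur
        # contribution grows with distance from that plane.
        blurred = np.stack([gaussian_filter(image[..., c], sigma_max)
                            for c in range(3)], axis=-1)
        span = max(float(np.ptp(depth)), 1e-6)
        w = np.clip(np.abs(depth - subject_depth) / span, 0.0, 1.0)
        # Blend sharp and blurred images per pixel (a real pipeline would
        # vary the blur radius per pixel rather than blend one blur level).
        return image * (1 - w[..., None]) + blurred * w[..., None]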

Playground

In late 2017, with the debut of the Pixel 2 and Pixel 2 XL, Google introduced AR Stickers, a feature that, using Google's new ARCore platform, allowed the user to superimpose augmented reality animated objects on their photos and videos. With the release of the Pixel 3, AR Stickers was rebranded to Playground.

Google Lens

The camera offers functionality powered by Google Lens, which allows it to copy text it sees; identify products, books and movies, and search for similar ones; identify animals and plants; and scan barcodes and QR codes, among other things.

Photobooth

The Photobooth mode allows the user to automate the capture of selfies. The AI detects the user's smile or funny faces and takes the picture at the best time without any action from the user, similar to Google Clips. This mode also features two-level AI processing of the subject's face, which can be enabled or disabled to soften the skin. Motion Photos functionality is also available in this mode, and the white balance is adjustable among defined presets.

Night Sight

Night Sight is based on a principle similar to exposure stacking, as used in astrophotography. Night Sight uses modified HDR+ or Super Res Zoom algorithms. Once the user presses the trigger, multiple long-exposure shots are taken, up to 15 frames of 1/15 second or 6 frames of 1 second, to create up to a 6-second total exposure. Motion metering and tile-based processing allow the algorithm to reduce, if not cancel, the user's motion and hand shake, resulting in a clear and properly exposed shot. Google claims it can handle up to ~8% displacement from frame to frame, with each frame broken into around 12,000 tiles. Night Sight also introduced a learning-based AWB algorithm for more accurate white balance in low light.
Night Sight also works well in daylight, improving white balance, detail and sharpness. Like HDR+ enhanced, Night Sight features positive shutter lag. It also supports a delay timer as well as an assisted focus selector featuring three options. Night Sight was introduced with the Pixel 3; all older Pixel phones were updated with support.
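
The exposure arithmetic can be made explicit with a toy scheduler; the frame counts and per-frame limits follow the figures above (15 frames of up to 1/15 s handheld, 6 frames of up to 1 s on a stable support), while everything else is illustrative.

    def night_sight_schedule(handheld):
        # Returns (frames, per-frame exposure, total) under the published
        # limits; the real implementation also adapts to measured motion.
        if handheld:
            frames, per_frame = 15, 1.0 / 15.0   # ~1 s total, motion-limited
        else:
            frames, per_frame = 6, 1.0           # 6 s total on a stable support
        return frames, per_frame, frames * per_frame

    # Averaging N frames improves the signal-to-noise ratio by roughly
    # sqrt(N) for shot noise, which is why stacking many short exposures
    # works in low light.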

Astrophotography

Astrophotography mode averages up to fifteen 16-second exposures to create a 4-minute total exposure, significantly reducing shot noise. It activates automatically when Night Sight mode is enabled and the phone detects it is on a stable support. The mode includes improved algorithms to remove hot and warm pixels caused by dark current, and a convolutional neural network that detects skies for sky-specific noise reduction. Astrophotography mode was introduced with the Pixel 4 and backported to the Pixel 3 and Pixel 3a.
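
A common generic approach to hot-pixel removal, consistent with (but not necessarily identical to) what is described above, is to replace pixels that stand far above the median of their neighbours:

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_hot_pixels(frame, threshold=0.05):
        # frame: 2-D array in [0, 1]. Pixels far above the local median are
        # treated as hot/warm pixels (e.g. from dark current) and replaced.
        local_median = median_filter(frame, size=3)
        hot = (frame - local_median) > threshold
        return np.where(hot, local_median, frame)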

Unofficial ports

While the Google Camera software is specific to particular Google hardware, many developers have released unofficial ports that bring its newest features to older Google phones and to non-Google phones, sometimes enabling features still under development that are not yet enabled in the official app. Because some of the Google hardware's advanced features are not available on other devices, the GCam app has been reverse-engineered and modified to make it compatible with other phones. Different versions exist for many different Android phones, and some features may be unavailable or may not work properly on a given device.
In 2016, a modified version brought HDR+ with Zero Shutter Lag back to the Nexus 5X and Nexus 6P. In mid-2017, a modified version of Google Camera was created for any smartphone equipped with a Snapdragon 820, 821 or 835 processor. In 2018, developers released modified versions enabling Night Sight on non-Pixel phones.