[4] The app was initially released as Google Camera and supported on all devices running Android 4.4 KitKat and higher.
[5] Google Camera contains a number of features that can be activated either in the Settings page or on the row of icons at the top of the app.
The first generation of Pixel phones used Qualcomm's Hexagon DSPs and Adreno GPUs to accelerate image processing.
[6] Note that the Pixel Visual Core's main purpose is to bring the HDR+ image processing that is characteristic of the Pixel camera to any other app that uses the relevant Google APIs.
Pixel Visual Core is built to do heavy image processing while conserving energy, extending battery life.
HDR+ also uses semantic segmentation to detect faces, brightening them with a synthetic fill flash, and to darken and denoise skies.
HDR+ also reduces shot noise and improves colors, while avoiding blowing out highlights and motion blur.
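The shot-noise benefit of merging a burst can be illustrated numerically. The sketch below is not Google's implementation — it is plain frame averaging with NumPy, assuming the frames are already aligned — but it shows noise dropping by roughly the square root of the number of frames merged:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of noisy frames (assumes frames are already aligned)."""
    return np.mean(np.stack(frames), axis=0)

# Simulate a static scene captured 9 times with additive noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.5)
frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(9)]

single_noise = np.std(frames[0] - scene)
merged_noise = np.std(merge_burst(frames) - scene)
# Averaging N frames cuts this noise by roughly sqrt(N): 9 frames, ~3x cleaner.
```

Real burst merging must also align the frames and handle motion, which is what the alignment machinery discussed below is for.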
Like Night Sight, HDR+ enhanced uses positive-shutter-lag (PSL): it captures frames after the shutter is pressed rather than beforehand.
When Motion Photos is enabled, a short, silent video clip of relatively low resolution is paired with the original photo.
[20][21] When Motion Photos is enabled, Top Shot analyzes up to 90 additional frames from 1.5 seconds before and after the shutter is pressed.
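Top Shot's selection step can be pictured as scoring the buffered frames and recommending the best one. The toy sketch below is purely illustrative — `pick_top_shot` and `score_fn` are invented names, and a real scorer would rate smiles, open eyes, and sharpness rather than a canned number:

```python
def pick_top_shot(frames, score_fn):
    """frames: list of (timestamp, image) buffered around the shutter press.
    score_fn rates each image; returns the timestamp of the best frame."""
    best_ts, _ = max(frames, key=lambda f: score_fn(f[1]))
    return best_ts

# Toy example: each "image" is just a precomputed quality value.
frames = [(t, quality) for t, quality in enumerate([0.2, 0.9, 0.4])]
best = pick_top_shot(frames, score_fn=lambda img: img)  # frame 1 scores highest
```

The point is only the shape of the problem: a fixed-size ring buffer of frames around the shutter press, reduced to one recommendation by a quality model.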
[24] Slow motion video can be captured in Google Camera at either 120 or, on supported devices, 240 frames per second.
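Assuming the footage is played back at a standard 30 frames per second (an assumption; the article does not state the playback rate), the slowdown works out as follows:

```python
def slowdown_factor(capture_fps, playback_fps=30):
    """Slow-motion factor when high-speed capture is played back at
    playback_fps (30 fps assumed here for illustration)."""
    return capture_fps / playback_fps

# 120 fps capture -> 4x slow motion; 240 fps -> 8x:
# one second of action is stretched over 4 or 8 seconds of playback.
```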
[26] Google Camera allows the user to create a 'Photo Sphere', a 360-degree panorama photo, originally added in Android 4.2 in 2012.
[27] These photos can then be embedded in a web page with custom HTML code or uploaded to various Google services.
[citation needed] Portrait mode (called Lens Blur prior to the release of the Pixel line) offers an easy way for users to take 'selfies' or portraits with a bokeh effect, in which the subject of the photo is in focus and the background is slightly blurred.
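The bokeh effect can be approximated with a subject mask: blur the whole image, then composite the sharp subject back in. The NumPy sketch below is a crude stand-in for the depth-based rendering Portrait mode actually performs, using a simple box blur and a hard mask:

```python
import numpy as np

def portrait_blur(image, subject_mask, blur_radius=1):
    """Box-blur the image, then keep the masked subject pixels sharp.
    A toy version of depth-based background blur, not Google's renderer."""
    k = 2 * blur_radius + 1
    padded = np.pad(image, blur_radius, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(k):                      # sum k*k shifted copies
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    return np.where(subject_mask, image, blurred)

# Demo: blur a striped "background" while keeping a masked "subject" sharp.
img = np.zeros((8, 8)); img[::2] = 1.0
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
out = portrait_blur(img, mask)
```

A production implementation blurs with a disc-shaped kernel whose radius varies with estimated depth, which is what produces realistic bokeh rather than uniform softness.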
[29][30][31] Additionally, a "face retouching" feature can be activated which cleans up blemishes and other imperfections from the subject's skin.
[35][36] The camera offers a functionality powered by Google Lens, which allows the camera to copy text it sees; identify products, books, and movies and search for similar ones; identify animals and plants; and scan barcodes and QR codes, among other things.
This mode also features two levels of AI processing of the subject's face, which can be enabled or disabled in order to soften the skin.
[clarification needed] Night Sight is based on a similar principle to exposure stacking, used in astrophotography.
The motion metering and tile-based processing of the image allow it to reduce, if not eliminate, camera shake, resulting in a clear and properly exposed shot.
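Tile-based alignment can be sketched as a brute-force search for the integer shift that best matches a tile of each frame against the reference frame. The toy below (invented function name, sum-of-squared-differences matching, integer shifts only) illustrates the idea rather than Google's actual algorithm:

```python
import numpy as np

def align_tile(ref_tile, frame, search=2):
    """Return the (dy, dx) shift, within ±search pixels, that best aligns
    `frame` to `ref_tile` by minimizing the sum of squared differences."""
    h, w = ref_tile.shape
    best_ssd, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame[search + dy:search + dy + h,
                         search + dx:search + dx + w]
            ssd = np.sum((cand - ref_tile) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

# Demo: hide the reference tile in a larger frame at a known offset.
rng = np.random.default_rng(1)
ref = rng.random((6, 6))
frame = np.zeros((6 + 4, 6 + 4))
frame[2 + 1:2 + 1 + 6, 2 - 1:2 - 1 + 6] = ref   # shifted by (dy, dx) = (1, -1)
shift = align_tile(ref, frame)
```

Running this search per tile, rather than once per frame, is what lets the merge tolerate local motion: each tile can take its own shift before the frames are averaged.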
Night Sight also supports a delay-timer as well as an assisted selector for the focus featuring three options (far, close and auto-focus).
Using machine learning models, it simulates the directionality and intensity of a light source to complement the original photograph's lighting.
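As a rough intuition for directional relighting, one can apply a brightness gain that falls off across the frame along the light's direction. The sketch below is a hand-rolled illustration with invented names; the real feature uses learned models and face geometry, not a fixed gain ramp:

```python
import numpy as np

def add_directional_light(image, direction, strength=0.3):
    """Brighten the side of the frame nearest the light, fading along
    `direction` (a (dy, dx) vector pointing from the light into the frame).
    Toy stand-in for learned relighting, not Google's model."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    # Gain is strongest where the light enters and decays across the frame.
    gain = 1.0 + strength * np.clip(-(ys * direction[0] + xs * direction[1]), 0, 1)
    return np.clip(image * gain, 0, 1)

# Demo: a flat gray image lit from the left edge.
flat = np.full((4, 4), 0.5)
lit = add_directional_light(flat, direction=(0.0, 1.0))
```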