[5][6][7] When the phone's camera is pointed at an object, Google Lens attempts to identify it by reading barcodes, QR codes, labels, and text, and then shows relevant search results, web pages, and information.
[5] The service originally launched as Google Goggles, an earlier app that functioned similarly but with more limited capabilities.
[10][11] Lens uses more advanced deep learning routines to power its detection capabilities, similar to apps such as Bixby Vision (for Samsung devices released after 2016) and Image Analysis Toolset, also known as IAT (available on Google Play).
It will also be able to calculate tips and split bills, show how to prepare dishes from a recipe, and use text-to-speech.
The feature was initially exclusive to the Samsung Galaxy S24 and the Pixel 8, but later expanded to other phones from those manufacturers.