
At the I/O event, Google senior vice president Prabhakar Raghavan announced enhancements to multisearch, the company's tool that lets you conduct a search with just an image and a few words. A new mode called "near me" will let users take a photo of an object and then find results locally.

Raghavan explained that you will be able to take a photo of a dish and then search for restaurants near you that serve it. The feature will be available in English to start and will expand to more languages over time.


Google is also rolling out a new feature called scene exploration, which will let users pan their camera across a scene and enter a search phrase about the objects in front of them. Raghavan used the example of trying to find a nut-free chocolate bar in a supermarket: you'll be able to scan an entire shelf of chocolate bars and see reviews and details about each one. Raghavan described it as like having a supercharged ctrl-F for the world around you.

The search giant first rolled out multisearch in April, though its uses were initially limited to things like shopping and care instructions. You could take a picture of a dress you liked and then type in a color, and multisearch would show a list of similar-looking dresses in that shade. You could also take a photo of a specific type of plant and look up its care instructions.

When the feature launched, Lou Wang, a director of product management at Google, said multisearch could be used for much more than that.

To use multisearch, open the Google app on your phone and tap the Lens icon on the right side of the search bar. You can start a search by taking a photo of an object in front of you or by choosing one from your gallery, then add words to refine the results. It is not yet clear how scene exploration and "near me" will be integrated into this interface.

Update 1:45PM: Added more context from Prabhakar Raghavan.