You like the way that dress looks, but you'd prefer it in green. You'd rather have flats than heels. What if you could find drapes in the same pattern as your favorite notebook? I'm not sure I'd know how to search the web for any of these things, but the search engine's product manager showed me real-world examples of each.
I have only seen a rough demo of the feature so far, but you shouldn't have to wait long to try it.
One of the most common requests has been to use it for more than just shopping.
Zeng says it might already help with something like a broken bicycle. She says she learned how to style her nails by typing a phrase into the app and seeing results drawn from social media. You can also take a picture of a plant and get instructions on how to care for it.
Wang explains how multisearch will expand to surface more videos, images, and even the kinds of answers you might find in a traditional text search.
It sounds like the intent is to put everyone on even footing, too: rather than partnering with specific shops or even limiting video results to YouTube, Wang says it will surface results from any platform it's able to.
It won't work with everything, just as your voice assistant doesn't work with everything, because the possible requests are infinite. Should the system pay more attention to the picture or to the text when the two seem to contradict each other? Good question. If you want to match a pattern, get up close to it so that Lens can see the pattern rather than the object. And if it thinks you want more notebooks, you might have to tell it that you don't.
There are bigger questions about whether a new era of search can be built on context, not just text. This experiment seems limited enough that it probably won't give us the answer, but it would be a neat trick if it became a core feature of the search engine.