To get started, look for the Lens camera icon in the Google app on Android or iOS.
You might be wondering what kind of multisearch we're referring to and what it has to do with the camera. Let's take a closer look at the new AI capabilities already built into regular Google Search.
In essence, it works like this: point your camera, ask a question, and get help from AI. This is the new era of multimodal search, combining images and text.
When you point your camera (or upload a photo or screenshot) and ask a question in the Google app, the new multimodal search returns AI-powered results that go beyond visual matches. That makes it possible to ask more complex or nuanced questions about what you see and quickly find and understand the key information.
This is where the new multisearch experience comes in handy. Say you come across an unfamiliar game: capture an image of it, add your question (“How do I play this?”), and receive an AI-powered overview that pulls together relevant information from across the web. You can quickly identify the game and learn how to play it, and the overview includes supporting links so you can explore the details further.
Users outside the US who have opted in to the Search Generative Experience (SGE) can also try this new feature in the Google app.
Of course, this is just the beginning: you can explore the latest AI-powered Search features by signing up for Search Labs (where available) and selecting the SGE experience.
