Google is bringing generative AI technology to shopping, aiming to gain an edge over e-commerce rivals like Amazon.com.
The Alphabet-owned company on Wednesday announced features designed to help people see how clothes fit them, regardless of their body type, along with additional features for locating products using its search and image-recognition technology. Google also introduced new ways to research travel destinations and map routes using generative AI – a technology that can create text, images or even videos from simple prompts.
“We want to make Google a place where consumers can shop and also a place where retailers can connect with consumers,” said Maria Renz, Google’s vice president of commerce, in an interview ahead of the announcement. “We’ve always been committed to an open ecosystem and a healthy web, and this is one way to make that technology available to all retailers.”
Google is the world’s leading search engine, but according to research firm CivicScience, 46 percent of respondents in a survey of US shoppers conducted last year said they still started their product searches and research on Amazon. TikTok is also on the rise, according to the same firm – 18 percent of Gen Z online shoppers turn to that platform first. Google is responding with new AI-powered shopping exploration features aimed at younger audiences.
With a new virtual “try on” feature launching Wednesday, people can see how clothes fit different body types, from XXS to 4XL. The apparel is superimposed over images of various models that the company photographed during development of the feature.
Google said it was able to launch the service thanks to a new image-based AI model it developed internally, and the company is releasing a research paper detailing that work alongside the announcement. The renderings account for the way fabric stretches and folds when worn, producing lifelike images. The try-on feature starts with women’s tops from retailers such as Anthropologie and Everlane, with menswear to follow later.
The company also announced that it will start leveraging more sources of information as people test out its new “search generative experience” — a service that Google first announced at its I/O developer conference last month. Currently, this offering is only available through the company’s experimental Search Labs product.
Google had previously announced that it would use various web-based sources to display AI-generated information, such as the best hotel for families in a specific vacation destination or the best waterproof Bluetooth speaker. Now it is also adding user reviews and ratings as sources for its AI model to draw on.
The company is also rolling out additions to existing features in Google Maps. Immersive View, which uses AI to show people 3D walkthroughs of landmarks, is coming to four new cities: Amsterdam, Dublin, Florence and Venice. Google is expanding its collection of immersive sights to more than 500 – in both the iOS and Android apps – and adding destinations such as the Sydney Harbour Bridge and Prague Castle.
A new directions feature lets users see turn-by-turn guidance for walking, cycling and driving routes on their phone’s lock screen. According to Google, users can also see updated ETAs while tracking their routes in real time. The feature will roll out globally in June.
Some AI capabilities of Google Lens — the image-recognition app that uses a phone camera to identify objects and text — have been around for a while, like working out the name of a local dish by snapping a picture of it while you travel. But on Wednesday, Google announced it was rolling out the ability for users to search for skin conditions using the app.
After a user snaps a photo of a rash or skin bump, Lens finds visual matches that can serve as a starting point for further searching, the company says. The feature is meant to aid research, not to provide medical advice, Google said.
© 2023 Bloomberg LP