At Google I/O 2025, Google announced a highly appealing consumer-facing AI feature: **Virtual Try-On**.
- Built on a new-generation **image generation model** designed specifically for fashion scenes
- Understands different body shapes, poses, and photos
- Applied for the first time to a product catalog of this scale (billions of items)
Starting today, US users can try the feature in Google Labs, using AI to virtually "put on" the clothes they see and judge whether they suit them. This is not only a breakthrough for image generation models; it may also change how we interact while shopping. When a user is browsing a product (say, a nice shirt) and is unsure how it would look on them, they can upload a photo of themselves and start the "Try On" virtual try-on feature. The AI then renders the garment on the user with a realistic visual effect, giving an early sense of whether the clothes suit their size, style, and color.
Main features
Online shopping used to rely on product models plus the shopper's imagination; Google has now upgraded the entire process with AI:
- The AI understands more than just what you say
- A single photo takes you from the garment image to how it looks on you
- The shopping process is fully automated, truly letting the AI purchase on your behalf
- Real-time, trustworthy product information drawn from a global catalog of 50 billion listings
**One-stop intelligent shopping assistant**
**Combining Gemini's intelligence with Shopping Graph's big data.**
AI Mode pairs the Gemini model (which understands semantics and executes queries) with the Shopping Graph (a global database of 50 billion product listings).
- The Shopping Graph refreshes product information 2 billion times per hour, keeping prices, inventory, and reviews up to date
- Depending on the user's needs, the AI can autonomously run **multi-criteria searches and comparative analysis**
**Example scenario**: when you search for "a travel bag suitable for a trip to Portland in May," AI Mode automatically does the following (see the sketch after this list):
- Understands that you need a waterproof, durable, easy-to-carry travel bag
- Looks up the local weather and trip context at the same time
- Dynamically matches products, with support for ongoing filtering and comparison
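To make the idea concrete, here is a minimal sketch of the kind of multi-criteria filtering and ranking such a query implies. The catalog, field names, and thresholds are hypothetical stand-ins, not Google's actual Shopping Graph API.

```python
# Illustrative sketch only: a minimal multi-criteria product filter of the kind
# AI Mode might run against a catalog. All data and field names are made up.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    waterproof: bool
    durability: int      # 1-5 rating
    weight_kg: float
    in_stock: bool

def search_travel_bags(catalog, max_price, max_weight_kg):
    """Filter on hard constraints, then rank the survivors for comparison."""
    candidates = [
        p for p in catalog
        if p.in_stock and p.waterproof
        and p.price <= max_price and p.weight_kg <= max_weight_kg
    ]
    # Rank by durability first, then by price (cheaper is better).
    return sorted(candidates, key=lambda p: (-p.durability, p.price))

catalog = [
    Product("Trail 40L", 129.0, True, 5, 1.4, True),
    Product("City Lite", 89.0, False, 3, 0.9, True),
    Product("Storm Pack", 149.0, True, 4, 1.1, True),
]
for p in search_travel_bags(catalog, max_price=150, max_weight_kg=1.5):
    print(f"{p.name}: ${p.price}, durability {p.durability}/5, {p.weight_kg} kg")
```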
Feature highlights
- Real-time conversational shopping advice
- Browsing that links images with related products
- A side panel that automatically recommends suitable products
- Support for brand exploration and style suggestions
# Agentic Checkout: AI that watches the price and places the order for you
Google is also releasing a new agentic checkout flow:
- Users can click "track price" on any product
- Set the size, color, and target price
- The AI automatically adds the item to the merchant's shopping cart
- Google Pay securely completes the order
This closes the shopping loop from discovery to filtering to waiting to purchase, with the AI genuinely placing the order on your behalf.
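As a rough illustration of the flow described above, the sketch below polls a listing's price and completes a purchase once the target is reached. All of the merchant and payment calls are hypothetical stubs, not real Google or retailer APIs, and a real agent would also wait for the user's confirmation before paying.

```python
# Conceptual sketch of an agentic "track price, then check out" loop.
# Every call below is a hypothetical stub for illustration only.
import time

def get_current_price(product_id: str) -> float:
    # Stub: pretend the merchant returns a price for the listing.
    return 42.00

def add_to_cart(product_id: str, size: str, color: str) -> None:
    print(f"Added {product_id} ({size}, {color}) to the merchant's cart")

def checkout_with_google_pay(product_id: str) -> str:
    # Stub: in the real flow this step would run only after user confirmation.
    return f"order-{product_id}-0001"

def track_and_buy(product_id, size, color, target_price, poll_seconds=3600):
    """Poll the listing until the price hits the target, then place the order."""
    while True:
        if get_current_price(product_id) <= target_price:
            add_to_cart(product_id, size=size, color=color)
            return checkout_with_google_pay(product_id)
        time.sleep(poll_seconds)  # check again later

print(track_and_buy("blue-shirt-123", size="M", color="navy", target_price=45.0))
```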
**Virtual Try-On: try on hundreds of thousands of garments with your own photo**
The feature lets you try clothes on yourself: upload a full-body photo and the AI will simulate:
- How the garment wrinkles, drapes, and fits on your body
- A realistic rendering of how fabric, color, and silhouette interact with your figure
- Full simulation of shirts, pants, dresses, skirts, and more
The try-on feature is currently available to US users who enable it in Search Labs.
Technical principles: image generation models for fashion
To power this feature, Google **built a dedicated fashion image generation model** with the following capabilities (a schematic sketch follows the list):
- Understands body structure, pose, and body proportions
- Recognizes garment material, texture, and lighting
- Simulates how fabric drapes naturally on different parts of the body
- Keeps the overall image accurate and consistent
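Google has not published the model's internals, but the capability list above maps naturally onto a staged pipeline. The sketch below is purely schematic: every class, function, and value is a hypothetical placeholder used to show how the stages fit together, not how the real model works.

```python
# Schematic, hypothetical pipeline mirroring the capabilities listed above:
# body understanding -> garment analysis -> drape simulation -> result.
from dataclasses import dataclass

@dataclass
class BodyEstimate:
    pose: str            # e.g. "standing, arms down"
    proportions: dict    # e.g. {"shoulder_cm": 42}

@dataclass
class GarmentEstimate:
    material: str        # e.g. "cotton"
    texture: str         # e.g. "ribbed knit"
    lighting: str        # e.g. "soft daylight"

def estimate_body(photo: bytes) -> BodyEstimate:
    # Placeholder for body-structure, pose, and proportion estimation.
    return BodyEstimate(pose="standing", proportions={"shoulder_cm": 42})

def analyze_garment(photo: bytes) -> GarmentEstimate:
    # Placeholder for material, texture, and lighting recognition.
    return GarmentEstimate(material="cotton", texture="plain weave", lighting="soft")

def simulate_drape(body: BodyEstimate, garment: GarmentEstimate) -> str:
    # Placeholder for simulating how the fabric falls on each body part.
    return f"{garment.material} garment draped over a {body.pose} pose"

def virtual_try_on(person_photo: bytes, garment_photo: bytes) -> dict:
    """Run the conceptual try-on stages and return a description of the result."""
    body = estimate_body(person_photo)
    garment = analyze_garment(garment_photo)
    return {"body": body, "garment": garment, "drape": simulate_drape(body, garment)}

print(virtual_try_on(b"person.jpg", b"shirt.jpg"))
```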
How to use Google's "Try It On" feature?
Step 1: Enable the feature (opt in)
Go to Search Labs and, under experimental features, **select and enable the "try on" feature**. This opt-in step is required before you can use it.
Step 2: Browse your style
When shopping via Google Search, for example for shirts, pants, or skirts, you can enter try-on mode simply by clicking the **"try it on" icon** on a product listing.
Step 3: Upload a photo
Upload a photo of yourself:
- The photo should be well lit and clearly in focus
- Wearing simple, close-fitting clothes helps the fit render more accurately
Within a few seconds, the AI generates a realistic image of you wearing the item.
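If you want to sanity-check a photo before uploading, a rough client-side check along the lines of the guidance above might look like this. It uses the Pillow library, and the resolution and brightness thresholds are arbitrary illustrative values, not Google's actual requirements.

```python
# Optional pre-upload check: is the photo reasonably large and well lit?
from PIL import Image, ImageStat

def photo_looks_usable(path: str, min_side: int = 512, min_brightness: float = 60.0) -> bool:
    img = Image.open(path)
    large_enough = min(img.size) >= min_side                 # rough proxy for clarity
    brightness = ImageStat.Stat(img.convert("L")).mean[0]    # 0 (dark) to 255 (bright)
    return large_enough and brightness >= min_brightness

# "my_full_body_photo.jpg" is a placeholder path for your own photo.
print(photo_looks_usable("my_full_body_photo.jpg"))
```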
Step 4: Save or share the look (show it off)
After trying it on, you can:
- Save the look
- Share the look with your friends
- Or click straight through to purchase; Google will also recommend **similar items** for you to choose from (see the sketch after this list)
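The "similar items" step is essentially a nearest-neighbor lookup. The toy sketch below shows the general idea with made-up embeddings and cosine similarity; Google's actual recommendation method is not public.

```python
# Toy "similar items" lookup: rank catalog entries by cosine similarity
# to the tried-on item. Embeddings and names are invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

catalog = {
    "linen shirt, beige": [0.90, 0.10, 0.30],
    "linen shirt, white": [0.88, 0.15, 0.28],
    "denim jacket, blue": [0.20, 0.90, 0.50],
}

def similar_items(query_embedding, k=2):
    ranked = sorted(catalog.items(), key=lambda kv: -cosine(query_embedding, kv[1]))
    return [name for name, _ in ranked[:k]]

print(similar_items([0.90, 0.12, 0.30]))  # items closest to the tried-on shirt
```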
Application scenarios
- **More intuitive online shopping**: consumers can decide whether to buy based on how an item looks "on themselves," rather than on models, imagination, or reviews
- **Lower return rates**: the share of apparel returned to e-commerce sellers because it "doesn't fit" is expected to drop noticeably
- **Greater user engagement**: interacting through their own photos makes it easier for users to form an emotional connection with products
- **Possible expansion to more categories**: not just tops, but also pants, shoes, accessories, and even virtual makeup