The latest advancement in the technology behind Just Walk Out uses a new multi-modal foundation model, which further increases accuracy by taking the same transformer-based machine learning models that underlie many generative AI applications and applying them to physical stores.
Amazon accomplishes this by analyzing data from cameras and sensors throughout the store simultaneously, rather than tracking which items shoppers pick up and put back in a linear sequence.
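To make the contrast concrete, here is a minimal toy sketch of that kind of joint, multi-modal processing. Everything in it is an illustrative assumption, not Amazon's actual architecture: camera frames and shelf weight sensors are represented as embedding tokens in a shared space, and a single self-attention step lets every token inform every other, so a weight-sensor signal can directly influence how an ambiguous camera frame is interpreted instead of being processed after it.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over all tokens."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ tokens                          # context-mixed tokens

# Hypothetical inputs: 4 camera-frame embeddings and 2 shelf-sensor
# embeddings, all projected to a shared 8-dimensional token space.
camera_tokens = rng.normal(size=(4, 8))
sensor_tokens = rng.normal(size=(2, 8))

# Fusion: all six tokens attend to one another in a single pass,
# rather than being consumed one event at a time.
fused = self_attention(np.vstack([camera_tokens, sensor_tokens]))
print(fused.shape)  # (6, 8)
```

A production system would of course stack many such layers and learn the projections; the point of the sketch is only that attention mixes all modalities in one step.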
For retailers, the new AI system makes Just Walk Out faster, easier to deploy, and more efficient. For shoppers, this means worry-free shopping at even more third-party checkout-free stores worldwide.
Just Walk Out uses cameras, weight sensors, and a combination of advanced AI technologies to let shoppers in physical stores buy food, beverages, merchandise, and more without waiting in a checkout line or stopping at a cashier.
Just Walk Out technology, which launched in 2018, was built using computer vision and the leading-edge machine learning available at the time to figure out “who took what.” Previously, the AI system analyzed shopper behavior sequentially: their movement and location in the store, what they picked up, and the quantity of each item, with each action processed one after another. However, in unusual or novel shopping scenarios (such as when a camera view was obscured by bad lighting or a nearby shopper), the sequential approach could take time to determine purchases with confidence, and sometimes required manually retraining the model.
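The stalling behavior described above can be illustrated with a simplified, assumed sketch of sequential event resolution. The threshold, event tuples, and gating logic here are hypothetical; the sketch only shows why processing events strictly in order means one low-confidence event (say, an occluded camera view) can hold up every event that follows it.

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical confidence cutoff

def sequential_resolve(events):
    """Process (item, confidence) events in order; once one event is
    ambiguous, it and everything after it must wait for resolution."""
    resolved, pending = [], []
    for item, confidence in events:
        if confidence >= CONFIDENCE_THRESHOLD and not pending:
            resolved.append(item)
        else:
            pending.append(item)  # later events queue behind the stall
    return resolved, pending

# Illustrative shopping session: the middle pick-up is occluded.
events = [("soda", 0.95), ("chips", 0.40), ("gum", 0.99)]
done, waiting = sequential_resolve(events)
print(done)     # ['soda']
print(waiting)  # ['chips', 'gum']
```

Note that "gum" is confidently observed yet still waits, because the pipeline cannot move past the ambiguous "chips" event; a model that considers all signals jointly avoids this bottleneck.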