Somewhere between "AI suggests an option" and "AI just did it for you" there's a line. Cross it, and users don't complain; they quietly disengage, or worse, stop using the feature entirely.

I spent the last year trying to find exactly where that line sits. Working in hospitality technology, I ran research and concept testing with 74 participants in the US and EU, both guests and frontline staff, walking them through AI scenarios at different levels of automation, from gentle suggestions to full autopilot. The results were messier, more human, and more instructive than any benchmark could capture.

This talk is about what we found: the moments trust broke, the assumptions that didn't survive contact with real users, and what it means for teams deciding right now how much agency to hand over to their AI features. I'll share the framework we developed for stress-testing AI trust boundaries, the failure scenarios that shaped it, and how any team, not just in hospitality, can apply it before shipping something users quietly walk away from.

Because the hardest UX problem in AI isn't making it smart. It's making it trustworthy.
Mews - Staff Service Designer