When I was designing Squeezey, a budgeting app meant to help people manage impulsive buying, I added an AI-powered product-recognition feature. Using the Google Vision API, users could photograph something they wanted to buy and tap an AI icon, and the app would identify the product for them.
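For the curious, the detection step itself doesn't need much code. Here's a minimal sketch of how a label-detection call could look with Google's Node.js Vision client; the function name, file path, and confidence threshold are my own illustrative choices, not Squeezey's actual implementation.

```typescript
// Sketch of a detection step, assuming the @google-cloud/vision Node.js client.
// `detectProduct` and the 0.7 threshold are illustrative, not from Squeezey itself.
import { ImageAnnotatorClient } from '@google-cloud/vision';

const client = new ImageAnnotatorClient();

async function detectProduct(photoPath: string): Promise<string | null> {
  // Label detection returns a ranked list of guesses with confidence scores.
  const [result] = await client.labelDetection(photoPath);
  const labels = result.labelAnnotations ?? [];

  // Only surface a suggestion the model is reasonably sure about;
  // otherwise return null and let the user describe the item themselves.
  const best = labels.find((label) => (label.score ?? 0) >= 0.7);
  return best?.description ?? null;
}
```

The `return null` branch matters as much as the API call: a hesitant model shouldn't get to sound certain.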
I never shipped the product; admittedly, it was for my Behavior class. Even on projects like that, though, I put on my professional designer hat and take things seriously. Naturally, during the design process I kept asking myself questions to account for potential edge cases. Some went like this:
What if the AI gets it wrong? What if someone photographs a vintage coat and the app suggests a dupe?
As interaction designers, we’ve always stood between complex systems and the people who use them. AI, however, changes that role in a fundamental way. We’re no longer only shaping how people interact with technology, but also shaping how technology makes decisions about people.
With Squeezey, it would have been easy to let the AI automatically add detected products to a wishlist. It would have been seamless, efficient, even delightful, but it also would have removed the most important part of the experience: the pause, the moment when someone decides whether they actually want something. That pause was the whole point of the app, the thing meant to keep users from impulse buying. So instead, I added a confirmation step: the AI can suggest what it sees, but the user has to approve it. It's a tiny bit of friction, barely noticeable, yet it preserves something essential: the user's agency. That small design choice taught me something important:
Ethical design isn’t always about big warnings or dramatic constraints. Sometimes it’s about protecting a single moment of choice.
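Concretely, that pause can be as small as one extra state in the flow. Here's a rough sketch, with hypothetical names, of the shape of that confirmation step: detection only ever produces a pending suggestion, and nothing reaches the wishlist without an explicit user action.

```typescript
// Hypothetical sketch of the confirmation step: the AI can propose,
// but only an explicit user action writes to the wishlist.
interface Suggestion {
  label: string; // what the AI thinks the item is
}

interface AppState {
  pendingSuggestion: Suggestion | null;
  wishlist: string[];
}

// Detection results land here, as a suggestion awaiting approval.
function receiveDetection(state: AppState, label: string): AppState {
  return { ...state, pendingSuggestion: { label } };
}

// The user's tap is the only path onto the wishlist; that's the pause.
function confirmSuggestion(state: AppState): AppState {
  const pending = state.pendingSuggestion;
  if (!pending) return state;
  return {
    pendingSuggestion: null,
    wishlist: [...state.wishlist, pending.label],
  };
}
```

Deleting `confirmSuggestion` and letting `receiveDetection` write straight to the wishlist would be the "seamless" version; here, the friction is the feature.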
Through usability testing at CCA, I've learned that people can sense when something feels off, even if they can't explain why. One example: copywriting that sounds helpful but doesn't clearly say what's happening.
AI makes this harder. When I integrated the Google Vision API, I had to decide how much to reveal. Do I let the detection feel invisible, or do I acknowledge that an algorithm is involved? In the end, I chose to make it visible. The AI icon in Squeezey tells users that something algorithmic is happening during detection. It gives them a moment of awareness, and implicitly, a moment of consent.
One thing that kept bothering me, however, was the data behind the tool. Google Vision is trained on massive datasets, but those datasets reflect a very specific world: Western products, mass-market goods, and recognizable brands. What happens when someone photographs something handmade, secondhand, or culturally specific? When the AI fails, it doesn't just fail technically. It communicates, implicitly, whose objects the system was built to recognize.
My own background has made me sensitive to moments like that. Design can include more things and more people, that's for sure. But the other side of the coin is just as real: design can also exclude things and people, often without meaning to.
Designing ethically with AI means testing beyond people like ourselves. It means questioning the systems we integrate instead of assuming they’re neutral. And sometimes, it means being honest enough to say: this AI isn’t ready yet.
And yet, AI isn't slowing down. It's becoming part of every product, every workflow, and every interface. As designers, we can either smooth its edges and make it invisible, or we can step into a more active role. I don't think the answer is rejecting AI, though. I think we need to rethink what our job actually is.
We’re not just interaction designers anymore. We’re mediators between automated systems and human values.
This kind of work isn't always comfortable. It asks us to slow down in an industry that rewards speed (which I personally hate) and to complicate systems that promise simplicity. I think that's exactly why our generation of designers matters. We're the ones translating AI into lived experience, inheriting the work of the mentors who came before us. We decide what gets built, how it behaves, and who it serves.