Today, I’m going to talk about responsible AI. But before we get into it, I want to start with an imaginary story—not to point fingers at anyone, but to help set the stage for the rest of our conversation.
Let’s say a bank uses an AI system to help loan officers decide which applicants should be approved for a mortgage. The system analyzes tons of data and gives each application a risk score, along with a confidence level.
The interface is designed to be super efficient; loan officers see the score, a green/red flag, and a brief recommendation like “Approve” or “Decline.” Simple, clean, and fast.
But there’s a problem here. Can you spot it? Let’s walk through the process together, step by step.

The AI is trained on past loan decisions, repayment records, income data, zip codes, and other variables.
It learns that applicants from certain neighborhoods have higher default rates, not because of individual behavior, but due to historical inequalities like lack of investment or unstable job markets.
So it begins to associate certain zip codes or job types with higher risk, even when the individual applicant has a solid credit score and stable income.
The UX design reinforces this logic: it gives the officer a clear signal (green or red), but offers no explanation, no chance to challenge or contextualize the score.
The loan officer, under time pressure, starts trusting the system. They decline the application. The applicant never knows why.
Technically, the AI was accurate based on patterns in the training data, and the risk score made statistical sense.
But ethically, it was flawed because the signal was based on structural bias, not personal financial behavior.
And the UX made it worse by hiding complexity and encouraging overtrust.
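To make that proxy effect concrete, here's a tiny, made-up sketch in Python. It's nothing like a real lending model; the features, weights, and threshold are all invented for illustration. The point is just to show how a neighborhood-level default rate, folded into the score, can outweigh strong individual signals.

```python
# Hypothetical illustration of proxy bias. All feature names, weights,
# and thresholds are invented for this example.

# Historical default rates by zip code reflect structural factors
# (disinvestment, unstable local job markets), not individual behavior.
HISTORICAL_DEFAULT_RATE = {"10001": 0.03, "60621": 0.18}

APPROVAL_THRESHOLD = 0.50

def risk_score(credit_score: int, income_stability: float, zip_code: str) -> float:
    """Toy additive score: the zip-code term acts as a proxy for neighborhood,
    so it can dominate even when the individual signals are strong."""
    individual_signal = 0.5 * (credit_score / 850) + 0.2 * income_stability
    neighborhood_penalty = 3.0 * HISTORICAL_DEFAULT_RATE[zip_code]
    return individual_signal - neighborhood_penalty

# Two applicants with identical personal finances, different neighborhoods.
for zip_code in ("10001", "60621"):
    score = risk_score(credit_score=760, income_stability=0.9, zip_code=zip_code)
    flag = "Approve" if score >= APPROVAL_THRESHOLD else "Decline"
    print(f"zip {zip_code}: score = {score:.2f} -> {flag}")
```

Same credit score, same income stability, opposite flags. And because the interface only shows green or red, nobody ever sees that the neighborhood term did all the work.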
Unfortunately, this so-called "imaginary" scenario reflects what happens all too often in real life, and in some cases the consequences are even more damaging, just less visible.
So, what does it mean for an AI system to be “responsible”? And who gets to decide?
A responsible AI system isn't just about smart code or cool features. It's one that is designed, built, and used in a way that's ethical and trustworthy: not just functioning correctly, but aligning with the goals, needs, and values of the people who interact with it. That's the essence of good design:
Creating technology that supports human life, not just automates it.
That includes sticking to human values and what society generally sees as "right." We're talking about crucial principles like fairness, safety, transparency, and respecting human rights. It also means building systems that anticipate when things go wrong and respond constructively, what designers call "designing for error": a proactive way of handling failure that respects the messiness of real-world use. It's not just about avoiding harm; it's about creating space for understanding, recovery, and resilience.
But here's the big question:
Who decides what “responsible” looks like?
The answer? It’s not just one person or group. It’s a mix.
👩🏻⚖️ First up, you’ve got policymakers and regulators. These folks set the rules. They pass laws like the EU AI Act and create standards around things like transparency, safety, and fairness. They also ensure companies are held accountable through audits or impact assessments. And on a global scale, they coordinate between countries so there’s some shared ethical baseline. These constraints operate like invisible design affordances: they guide behavior, define boundaries, and shape the possibilities of the system as a whole.
🛠️ Then there are the people actually building the tech: developers, designers, and the cross-functional teams working behind the scenes. These are the ones turning big ethical ideas into tangible features and interactions. It's not just about implementing a function. It's about crafting experiences that are understandable, usable, and appropriate for the context in which they're deployed. That's where human-centered design truly matters: putting people first, from concept to deployment.
👨🏻💻 Designers, in particular, play a critical role here. They make the invisible visible. They help users understand what AI is doing, why it's doing it, and what their options are. This is the role of feedback, a core principle of interaction design: users need clear, timely signals that allow them to build a mental model of the system. Without this, even the most “advanced” AI will feel alien or even threatening.
🌎 And finally, there’s us. The users. The public.
We’re not just on the sidelines here. We have a role to play too, by staying informed, asking questions, pushing for explanations when AI makes decisions that affect us, and calling out issues when something feels off. Public feedback and awareness add pressure for companies to step up and do better. This kind of feedback loop isn’t just political; it’s a design necessity. Systems without feedback become brittle and disconnected. With it, they become adaptive and accountable.
So yeah, responsibility in AI isn’t one person’s job. It’s shared. It’s this ongoing mix of design, governance, tech, and public engagement. And when it works right, it’s not just about making AI smart; it’s about making it human-aware.
In the end, good AI, like all good design, is about dignity. Not just function.
Why is accuracy not enough in AI design?
So here's a question a lot of people ask: if an AI system is super accurate, isn’t that enough? Well, no, not really. Accuracy matters, for sure. But it’s just one piece of a much bigger puzzle.
Let me explain.
An AI system can be technically accurate but still cause real-world harm. Like, imagine a system that nails its predictions 95% of the time, but consistently fails for certain groups of people. That’s not just a bug—that’s a serious fairness issue.
Responsible AI means looking beyond numbers and asking, who is this system working for—and who might it be hurting?
So, accuracy doesn’t guarantee ethical or social responsibility. We have to think about bias, discrimination, and whether the system respects human rights. These are things accuracy scores can’t show us.
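Here's a quick way to see what I mean, using made-up evaluation results. The overall number looks great; the per-group breakdown tells a very different story. (The groups and counts are synthetic, purely for illustration.)

```python
# Synthetic illustration: a strong overall accuracy can hide a group
# the model consistently fails. All numbers are invented.
from collections import defaultdict

# (group, prediction_was_correct) pairs standing in for evaluation results.
results = ([("group_a", True)] * 930 + [("group_a", False)] * 20
           + [("group_b", True)] * 20 + [("group_b", False)] * 30)

overall = sum(correct for _, correct in results) / len(results)
print(f"overall accuracy: {overall:.1%}")          # 95.0%

by_group = defaultdict(list)
for group, correct in results:
    by_group[group].append(correct)

for group, outcomes in by_group.items():
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: {accuracy:.1%} accuracy on {len(outcomes)} cases")
```

A headline 95% alongside 40% accuracy for one group is exactly the kind of harm an aggregate metric can't surface. Disaggregating results by group is one of the simplest checks a team can build into its evaluation pipeline.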
Then there’s transparency. People need to understand how AI decisions are made. If the system’s just a black box spitting out answers, even if they’re technically right, how do we hold it accountable? How do users trust it, or know when to challenge it? Responsible design means making the system more explainable and more aligned with how real people think.
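What might that look like in practice? For a simple additive model, a sketch like the one below can turn a bare score into a ranked list of reasons. Real systems are more complex, and the features and weights here are invented, but the design idea is the same: show people why, not just what.

```python
# Sketch of turning a bare score into an explanation users can read.
# Works for a simple additive model; features and weights are invented.

WEIGHTS = {"credit_score": 0.0006, "debt_to_income": -1.2, "years_employed": 0.03}

def score_with_explanation(applicant: dict):
    # Each feature's contribution to an additive score can be shown directly.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Sort drivers by absolute impact so the interface can show "why" in order.
    drivers = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    explanation = [f"{feature}: {impact:+.2f}" for feature, impact in drivers]
    return score, explanation

score, why = score_with_explanation(
    {"credit_score": 720, "debt_to_income": 0.42, "years_employed": 6}
)
print(f"score = {score:.2f}")
print("top factors:", why)
```

Even a rough explanation like this gives a loan officer, or an applicant, something to question and challenge, which a green/red flag never does.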
Another big one is robustness. Just because an AI works well in testing doesn’t mean it’ll hold up in the real world. What happens when it faces edge cases or unexpected inputs? Or worse, what if someone intentionally tries to mess with it? Responsible AI design means building in safety nets, so the system can fail gracefully, not dangerously.
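Here's a rough sketch of what "failing gracefully" can look like in code: instead of always returning a confident answer, the system routes anything it wasn't built for to a human. The field names, ranges, threshold, and model interface are placeholders, not a real API.

```python
# Sketch of graceful failure: low-confidence or out-of-range cases are
# routed to human review instead of getting an automatic score.
# Field names, ranges, thresholds, and the model interface are placeholders.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str           # "auto_score" or "needs_human_review"
    score: float | None
    reason: str

TRAINING_RANGES = {"income": (15_000, 400_000), "credit_score": (300, 850)}
CONFIDENCE_FLOOR = 0.75

def decide(applicant: dict, model) -> Decision:
    # Guard rail 1: refuse to auto-score inputs outside the data the model
    # was trained on; these edge cases are a common source of silent failures.
    for field, (low, high) in TRAINING_RANGES.items():
        value = applicant.get(field)
        if value is None or not (low <= value <= high):
            return Decision("needs_human_review", None,
                            f"{field}={value!r} is outside the supported range")

    score, confidence = model.predict(applicant)  # assumed model interface

    # Guard rail 2: low confidence gets surfaced, not hidden behind a flag.
    if confidence < CONFIDENCE_FLOOR:
        return Decision("needs_human_review", score,
                        f"confidence {confidence:.2f} is below {CONFIDENCE_FLOOR}")

    return Decision("auto_score", score, "inside supported ranges, confident prediction")
```

None of this makes the model smarter. It just makes sure that when the model is out of its depth, the system degrades into a human conversation instead of a wrong answer with a green checkmark.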
And let’s not forget UX and trust. The way AI shows up in a user interface really matters. If it acts overconfident, people might trust it too much. If it’s unclear or confusing, people might ignore it, even when it’s right. Good design helps people understand the AI’s limitations, gives them control, and avoids misuse. That’s a big part of making AI truly human-centered.
Finally, there’s governance and accountability. A highly accurate system still needs to be traceable, auditable, and compliant with laws and standards. You need to know who’s responsible if something goes wrong and how to fix it.
So yeah, accuracy is important. But if we stop there, we’re missing the point.
Responsible AI is about more than just getting the right answer. It’s about building systems that are fair, transparent, safe, accountable, and built for people, not just performance metrics.
What unique role does UX play in shaping responsible AI systems?
Alright, so we’ve talked about what makes AI responsible and why accuracy alone doesn’t cut it, but there’s one piece we haven’t really dug into yet. And that’s UX—User Experience.
You might not think about UX as central to AI ethics, but trust me, it plays a huge and totally unique role in making AI systems responsible.
Why? Because UX is basically the bridge. It’s what turns all those high-level ideas—fairness, transparency, accountability—into stuff people can actually see, use, and understand. It’s how ethics shows up on screen.
First off, UX puts the user front and center. That means asking, Does this AI actually help people? Does it respect their needs, their values, their context? UX designers are often the ones raising red flags if a feature might mislead users or take away their sense of control. They’re advocating for people, every step of the way.
Then there’s the whole job of translating ethics into design. Think about things like labeling AI-generated content clearly, or giving users understandable explanations when an AI makes a decision. Or showing confidence levels, like, how sure is the AI about what it’s saying? These aren’t just nice touches. They’re essential for making the system transparent and usable.
And yeah, UX plays a huge role in preventing overtrust. Just because an AI spits out an answer doesn’t mean users should blindly accept it. So UX teams design things like confirmation steps, or warnings when decisions are high-stakes.
The goal is to support thoughtful use, not passive dependence.
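As a sketch of the kind of rule UX teams might encode: the higher the stakes and the lower the confidence, the more friction and context the interface adds. The thresholds, labels, and stake levels below are made up; the pattern is what matters.

```python
# Sketch of a UX rule against overtrust: high-stakes or low-confidence
# recommendations get extra context and an explicit confirmation step.
# Thresholds, labels, and the notion of "high stakes" are illustrative.

LOW_CONFIDENCE = 0.80

def presentation_for(recommendation: str, confidence: float, high_stakes: bool) -> dict:
    ui = {
        "recommendation": recommendation,
        "confidence_label": f"The model is {confidence:.0%} confident",
        "show_explanation": True,        # always let users see why
        "require_confirmation": False,
    }
    if high_stakes:
        # Never a one-click accept for decisions that affect someone's livelihood.
        ui["require_confirmation"] = True
    if confidence < LOW_CONFIDENCE:
        ui["recommendation"] = f"{recommendation} (low confidence: please review manually)"
        ui["require_confirmation"] = True
    return ui

print(presentation_for("Decline", confidence=0.62, high_stakes=True))
```

It's a small thing, but it changes the default from "accept what the machine says" to "make an informed call with the machine's input."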
Another big one? Inclusion and accessibility. Good UX makes sure these systems work for everybody, across different ages, abilities, and backgrounds. That means diverse user testing, accessible interfaces, and making sure AI doesn’t just work for a narrow slice of users. It’s a core part of fairness.
Plus, UX research helps uncover how people actually experience AI in the real world. Maybe users don’t understand a dashboard. Or maybe the way the system presents options is confusing or even misleading. UX researchers catch this early, then designers iterate and improve. It’s a continuous loop.
And beyond all that, UX brings in a sociotechnical perspective. That means thinking about the whole system—the people, the context, the power dynamics—not just the tech. It’s about designing AI that works with people, not just at them.
Last but not least, UX is often the glue in multidisciplinary teams. Designers are the ones collaborating with engineers, ethicists, legal teams, and even regulators to make sure what gets built actually reflects shared goals around responsibility.
So, in conclusion:
UX isn’t just about how something looks or feels. It’s a key part of how AI becomes something people can trust, understand, and use responsibly.
👉🏻 By the way, our first AI masterclass was a success! Thank you for your support and interest. If you didn’t get a spot the first time, we’re running it again on Monday, June 30, 2025, at 7 PM EDT. We hope to see you there.