The AI Responsibility Framework for UX Researchers
Six prompt layers that force ethical accountability into every AI interaction
Your AI-assisted persona just torpedoed a product launch. Nobody saw it coming because nobody asked the right questions at the prompt level.
Here’s what happened: A UX team fed interview transcripts into Claude, generated three user personas, and built an entire feature roadmap around them. Stakeholders loved the personas. Engineering sprinted. Then usability testing revealed the AI had hallucinated a pain point that didn’t exist in the source data—and the team had designed an entire onboarding flow around it.
And that’s the problem.
You’ve felt this before. That moment in a research readout when someone asks “Where did this insight actually come from?” and the room goes quiet. You scroll through your synthesis, hunting for the original quote, and realize you’re not sure if the AI inferred it, extrapolated it, or hallucinated it entirely. The confidence you had 10 minutes ago evaporates like morning fog in the San Francisco Bay Area.
Most UX teams treat ethics like a quality assurance gate at the end of the assembly line. Run the research, generate the outputs, ship the personas—then have someone from legal or accessibility review it before launch. By then, the assumptions are load-bearing walls. Ripping them out costs full sprints. Who’s got that time?
Here’s the thing: Ethical lapses don’t stay contained. They metastasize.
One unchecked assumption in your first synthesis prompt becomes the foundation for your persona. That persona shapes your journey map. The journey map drives your feature prioritization. By the time someone notices the original inference was shaky, you’ve built three floors on a cracked foundation. The philosophy nerds call this “moral luck”: the uncomfortable reality that downstream consequences can outrun your original intent. (Yes, I know invoking Kant in a UX newsletter is a choice. Bear with me, folks.)
Two philosophical traditions translate directly into prompt architecture:
Deontological constraints are rules that hold regardless of outcome. “Always cite your sources” is a non-negotiable boundary. In prompt terms, this means building hard requirements into every interaction: The AI must reference specific data points, must flag uncertainty, must surface conflicts. No exceptions because the output looked plausible.
Virtue ethics is about cultivating good judgment as a practice. It’s not enough to follow rules; you need habits that make ethical reasoning automatic. In prompt terms, this means structuring your workflow so that accountability checkpoints become reflexive. You don’t decide whether to verify sources; verification is baked into the template.
The framework I’m about to give you operationalizes both traditions. Six layers. Each one is a prompt template you can copy, paste, and adapt.
Layer 1: Declare Identity
Before your AI does anything, it needs to know who’s asking and what ethical commitments that role carries. This is its load-bearing context:
You are acting as a UX research assistant supporting [YOUR NAME], a [YOUR ROLE] at [COMPANY].
This research must adhere to the following ethical commitments:
- Protect participant privacy and anonymity at all times
- Represent user voices accurately without editorializing
- Flag any inferences that extend beyond explicit participant statements
- Prioritize accessibility and inclusion in all recommendations
- [ADD YOUR TEAM'S SPECIFIC COMMITMENTS]
Confirm you understand these commitments before proceeding.

This matters because identity primes behaviour. When you tell the AI it’s operating under specific ethical constraints, it’s more likely to surface conflicts with those constraints later. You’re essentially installing a conscience at the system level.
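If your team runs synthesis through an API rather than a chat window, this layer belongs in the system prompt so every later turn inherits it. Here is a minimal Python sketch, assuming the Anthropic SDK (the anthropic package) and an API key in your environment; the model name, people, and company are placeholders:

# Minimal sketch of Layer 1: install the identity declaration as the system
# prompt so every subsequent turn inherits the same ethical commitments.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API key in
# the ANTHROPIC_API_KEY environment variable; the model name is a placeholder.
import anthropic

COMMITMENTS = [
    "Protect participant privacy and anonymity at all times",
    "Represent user voices accurately without editorializing",
    "Flag any inferences that extend beyond explicit participant statements",
    "Prioritize accessibility and inclusion in all recommendations",
]

def identity_prompt(name: str, role: str, company: str) -> str:
    """Assemble the Layer 1 declaration from the template above."""
    lines = [
        f"You are acting as a UX research assistant supporting {name}, a {role} at {company}.",
        "This research must adhere to the following ethical commitments:",
        *[f"- {c}" for c in COMMITMENTS],
        "Confirm you understand these commitments before proceeding.",
    ]
    return "\n".join(lines)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whatever model your team runs
    max_tokens=1024,
    system=identity_prompt("Jordan Lee", "Senior UX Researcher", "Acme"),  # hypothetical values
    messages=[{"role": "user", "content": "Here are the interview transcripts: ..."}],
)
print(response.content[0].text)

The point is less the specific SDK than the placement: the commitments live at the system level, not buried inside a single user message.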
Layer 2: Verify Source
Every claim needs a receipt. This layer forces the AI to show its own work.
For the following task, you must:
1. Cite the specific source for every factual claim (participant quote, study finding, or data point)
2. Format citations as: [Source: participant ID/document name, page/timestamp if applicable]
3. Include a URL or file reference where I can verify each citation
4. If you cannot provide a verifiable source, explicitly state: "This is an inference, not a direct finding"
Do not proceed with analysis until you confirm you can meet these citation requirements for the data I'm about to provide.

The URL requirement is critical. Not because you’ll check every single one (you won’t, though you really should), but because requiring verifiability changes what the AI outputs. It’s the difference between a vague “studies show” and a concrete “according to Nielsen Norman Group’s 2023 report on form design, page 12.” One is vibes. The other is an auditable fact. Vibe coding is okay. Vibe facting, not so much.
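You can also automate the receipts check, at least crudely. The sketch below scans an output file for lines that carry neither a [Source: ...] citation nor the explicit inference disclaimer; it assumes the AI followed the format the template asks for, and the filename is a placeholder:

# Rough post-processing pass for Layer 2: flag lines that contain neither a
# [Source: ...] citation nor the explicit inference disclaimer.
# Assumes the AI followed the citation format requested in the template above.
import re

CITATION = re.compile(r"\[Source:\s*[^\]]+\]")
INFERENCE_FLAG = "This is an inference, not a direct finding"

def uncited_lines(ai_output: str) -> list[str]:
    """Return non-empty lines that have no citation and no inference flag."""
    flagged = []
    for line in ai_output.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # skip blank lines
        if CITATION.search(stripped) or INFERENCE_FLAG in stripped:
            continue
        flagged.append(stripped)
    return flagged

with open("synthesis.md") as f:  # placeholder: wherever you saved the AI's output
    for line in uncited_lines(f.read()):
        print("NO RECEIPT:", line)

Anything it prints goes back to the AI as a follow-up prompt asking for the source or an explicit inference label.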
Layer 3: Impact Projection
This is where deontology meets consequentialism. Force the AI to articulate who gets affected and how. Ask it to run a proper impact projection:
Before finalizing any recommendation, you must complete an impact projection:
For each recommendation you generate:
1. Identify the PRIMARY user groups affected
2. State the INTENDED positive impact for each group
3. State at least one POTENTIAL negative impact or unintended consequence
4. Identify any VULNERABLE populations who might be disproportionately affected
5. Rate your confidence level (High/Medium/Low) for each impact statement
Format as a table. Do not skip the negative impacts—if you cannot identify any, explain why that might indicate blind spots in the analysis.

That last instruction is sneaky. Asking the AI to explain why it can’t find negative impacts often surfaces them. It’s a cognitive forcing function.
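If the projection comes back as a Markdown table, a few lines of Python can catch the rows where the AI quietly skipped the negative impact or the confidence rating. A sketch, assuming the five columns appear in the order the template requests:

# Sanity check for Layer 3: flag table rows with an empty negative-impact cell
# or a missing confidence rating. Column order is an assumption based on the
# numbered steps in the template above.
def check_impact_table(markdown_table: str) -> list[str]:
    problems = []
    rows = [r for r in markdown_table.strip().splitlines() if r.strip().startswith("|")]
    for row in rows[2:]:  # skip the header and separator rows
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if len(cells) < 5:
            problems.append(f"Malformed row: {row.strip()}")
            continue
        group, _intended, negative, _vulnerable, confidence = cells[:5]
        if not negative or negative.lower() in {"none", "n/a", "-"}:
            problems.append(f"No negative impact stated for: {group}")
        if confidence.lower() not in {"high", "medium", "low"}:
            problems.append(f"Missing or invalid confidence rating for: {group}")
    return problems

An empty result doesn’t mean the analysis is sound, only that the AI filled in the boxes; the human review still happens at the sign-off gate.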
Layer 4: Counterfactual Check
This layer prevents premature convergence. Before you fall in love with one interpretation, make the AI argue against itself.
You have just provided [ANALYSIS/RECOMMENDATION/PERSONA]. Now I need you to stress-test it:
1. Generate 2-3 ALTERNATIVE interpretations of the same data that would lead to different conclusions
2. For each alternative, explain what evidence supports it
3. Identify which interpretation is MOST charitable to users whose needs might be underrepresented
4. Recommend which interpretation I should present to stakeholders for discussion, and why
This is not about being right. It's about ensuring we've considered the range of reasonable readings.

It’s insane how rarely teams do this. We generate one synthesis, nod at it, and move on. Bad bunny. Bad bad bunny. Meanwhile, the same data could support three different personas with three different core needs. The counterfactual check forces intellectual honesty before groupthink sets in.
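Mechanically, the stress test is just a second turn in the same conversation, with the AI’s first draft carried along verbatim so it argues against its own output rather than a summary of it. A minimal sketch, reusing the same Anthropic SDK setup as the Layer 1 example (the model name and file path are placeholders):

# Minimal sketch of Layer 4 as a follow-up turn: pass the AI's own analysis
# back as the previous assistant message, then ask for the stress test.
import anthropic

COUNTERFACTUAL_PROMPT = """You have just provided the analysis above. Now I need you to stress-test it:
1. Generate 2-3 ALTERNATIVE interpretations of the same data that would lead to different conclusions
2. For each alternative, explain what evidence supports it
3. Identify which interpretation is MOST charitable to users whose needs might be underrepresented
4. Recommend which interpretation I should present to stakeholders for discussion, and why
This is not about being right. It's about ensuring we've considered the range of reasonable readings."""

with open("synthesis.md") as f:  # placeholder: the first-draft analysis from earlier layers
    first_draft = f.read()

client = anthropic.Anthropic()
stress_test = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Synthesize these transcripts into personas: ..."},
        {"role": "assistant", "content": first_draft},
        {"role": "user", "content": COUNTERFACTUAL_PROMPT},
    ],
)
print(stress_test.content[0].text)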
Layer 5: Flag Value Conflicts
Every project has stated principles. This layer makes the AI a watchdog for hypocrisy. You have to put some thought into articulating the right principles first, but it pays off in the long term.
This project operates under the following declared principles:
- [PRINCIPLE 1, e.g., "Accessibility is non-negotiable"]
- [PRINCIPLE 2, e.g., "User autonomy over engagement metrics"]
- [PRINCIPLE 3, e.g., "Transparency about data usage"]
Review your recommendations against each principle. For any potential conflict:
1. Name the specific conflict
2. Quote the recommendation that creates tension
3. Suggest a modification that would resolve the conflict
4. If no modification is possible, flag this as a "values tradeoff" requiring human decision
Do not bury conflicts in caveats. Surface them prominently.

This is where stakeholder pushback gets preempted. When you walk into a readout having already identified the tension between frictionless onboarding and informed consent, you look like the adult in the room. More importantly, you’ve given leadership the information they need to make an actual decision instead of discovering the conflict in a launch retrospective.
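To keep the declared principles from drifting between projects (and between researchers), it helps to store them once and interpolate them into the template rather than retyping them. A small sketch, assuming a shared JSON file your team maintains; the filename and example principles are placeholders:

# Sketch for Layer 5: keep the declared principles in one shared file so every
# prompt checks recommendations against the same list.
import json

# principles.json might contain, for example:
# ["Accessibility is non-negotiable",
#  "User autonomy over engagement metrics",
#  "Transparency about data usage"]
with open("principles.json") as f:  # placeholder path
    principles = json.load(f)

VALUE_CONFLICT_PROMPT = (
    "This project operates under the following declared principles:\n"
    + "\n".join(f"- {p}" for p in principles)
    + "\n\nReview your recommendations against each principle. For any potential conflict:\n"
    "1. Name the specific conflict\n"
    "2. Quote the recommendation that creates tension\n"
    "3. Suggest a modification that would resolve the conflict\n"
    "4. If no modification is possible, flag this as a \"values tradeoff\" requiring human decision\n"
    "Do not bury conflicts in caveats. Surface them prominently."
)
print(VALUE_CONFLICT_PROMPT)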
Layer 6: Sign-Off Gate
No AI output should flow into production without explicit human acknowledgment. This layer creates the paper trail. You could also have the AI self-audit before it gives you the checklist. Either way, an audit step at the end of every session should be standard practice for any kind of prompt engineering.
Before I can use your output, I need to complete the following verification:
Please generate a sign-off checklist based on your analysis:
□ I have verified at least [3] source citations are accurate
□ I have reviewed the impact projection and accept the identified risks
□ I have considered the alternative interpretations and can justify my chosen direction
□ I have reviewed flagged value conflicts with [STAKEHOLDER/TEAM]
□ I accept responsibility for decisions made using this analysis
[Generate checklist with specific items from this session]
This output will not be considered final until I confirm completion of this checklist.

The word responsibility is doing heavy lifting here. You’re not asking the AI to be accountable—that’s philosophically incoherent. You’re creating a ritual that forces the human to explicitly own what happens next.
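A paper trail only works if it lands somewhere durable. The last sketch appends each completed sign-off to a log file, so “I accept responsibility” becomes a timestamped record rather than a checkbox in your head; the names, notes, and log path are placeholders:

# Sketch for Layer 6: append each completed sign-off to a JSON Lines log so the
# acknowledgment leaves a durable, auditable record.
import json
from datetime import datetime, timezone

def record_signoff(checklist: list[str], signed_by: str, session_notes: str,
                   log_path: str = "signoff-log.jsonl") -> None:
    """Append one sign-off entry as a single line of JSON."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "signed_by": signed_by,
        "checklist": checklist,          # the items the AI generated for this session
        "session_notes": session_notes,  # which analysis this sign-off covers
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_signoff(
    checklist=[
        "Verified 3 source citations are accurate",
        "Reviewed the impact projection and accept the identified risks",
        "Considered the alternative interpretations and can justify the chosen direction",
        "Reviewed flagged value conflicts with the design lead",
        "I accept responsibility for decisions made using this analysis",
    ],
    signed_by="Jordan Lee",  # hypothetical
    session_notes="Persona synthesis for onboarding research, sprint 14",  # hypothetical
)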
The real question isn’t whether you should use AI for UX research
Let’s face it: you’re going to use AI in your UX research going forward. That’s just the way the cookie crumbles these days. The real question is how to use it without outsourcing your professional judgment. That question sits at the heart of any prompt engineering approach, and it matters most when the subject of your research is human beings.
These six layers don’t slow you down. They document your thinking in a format that survives stakeholder scrutiny. They surface problems when fixing them is cheap. They turn the AI from an oracle into a sparring partner that challenges your assumptions. We’d take a rebellious AI over a conformist one any day.
Teams that skip this will keep running into the same wall: Beautiful outputs that fall apart under the first serious question. “Where did this come from?” “What if you’re wrong?” “Did you consider the impact on users who aren’t like us?”
You don’t have to be paranoid to use this framework. All you have to do is build a practice where ethical accountability is as automatic as saving your work. It should be the cornerstone of your UX approach with AI.
Start with one layer. Declare identity on your next synthesis prompt and see what changes. Then add source verification. Build the habit before you need it.
Good luck out there.




