What this is
Twenty-one structured methods, one orchestrating agent
Each skill is a complete AI prompt encoding an established design ethics method — from Brignull's dark-pattern taxonomy to Fogg's behavior model to Stanford's STF-ET tool chain. Together they make rigorous ethical analysis available inside the tools designers and product managers already use, producing concrete artifacts: stakeholder maps, dark-pattern audits with named statutes, behavioral forecasts with cascade analysis, signed ethical contracts.
Why
Access, not replacement
The discipline is rich and well-researched. What's harder is reaching it on demand. A product team that can't add a full-time ethical-design specialist can still bring 21 validated methods into a sprint review with a single prompt. The skills don't replace expertise — they integrate it, so that ethical thinking happens as part of shipping rather than after.
What's inside
21 methods
| Skill | What it produces | Mode |
|---|---|---|
| Another Lens | Surfaces designer bias and converts insight to a Design Decision Spec | decide |
| Anti-Heroes | Identifies who gets harmed by a design even when it works as intended | forecast |
| Bad Design Canvas | 12-category adversarial audit of a product's potential harms | audit |
| Black Mirror Brainstorming | Dystopian misuse scenarios to surface non-obvious risks | forecast |
| CIDER | Audits exclusionary assumptions embedded in a design | audit |
| Critical Interviewing | Research protocol with non-obvious harms inventory and interview guardrails | decide |
| DAH Cards | Six harm categories with manifesto option | audit |
| Digital Ethics Compass | Four-direction audit with stakeholder map and objective-function risk table | audit |
| Ethical Contract | Cross-disciplinary signed commitment with bias audit and red lines | align |
| Ethicography | Analyzes team decisions over time for ethical trajectory and 12-month forecast | audit |
| Fair Patterns | Dark-pattern audit with jurisdiction-specific statutes and vulnerable-population matrix | audit |
| Humane Design Guide | Six-sensitivity audit with named mechanisms and exploitation-stack analysis | audit |
| Inverted Behavior Model | Behavior forecast with worst-possible-design, convergence check, and 5-stage cascade | forecast |
| Motivation Matrix | Maps the five human drives a product activates and whether it does so ethically | forecast |
| Normative Design Scheme | Three-lens decision support with Universal Law Test and Triad Conflict Matrix | decide |
| Pledge Works | 5-part operationalized pledges with "what we refuse to build" register | align |
| Responsible Design Prism | Five-axis ethical posture rating with stakeholder map and mechanism audit | audit |
| STF-ET | Stanford's 5-tool chain for long-term ethical futures | forecast |
| Value Dams and Flows | Maps stakeholder value conflicts with power analysis | align |
| Values Levers | Identifies levers, given the user's role, to shift culture toward ethical design | align |
| Worrystorming | Structured worry session that reframes concerns as design values | forecast |
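Restated as data, the index above can be grouped by working mode (audit, forecast, decide, align). A minimal sketch, using a subset of the skill names from the table:

```python
from collections import defaultdict

# Skill -> mode, transcribed from the index above (subset for brevity)
SKILLS = {
    "Another Lens": "decide",
    "Anti-Heroes": "forecast",
    "Bad Design Canvas": "audit",
    "Fair Patterns": "audit",
    "Ethical Contract": "align",
    "STF-ET": "forecast",
    "Normative Design Scheme": "decide",
    "Pledge Works": "align",
}

def by_mode(skills):
    """Group skill names under their working mode."""
    grouped = defaultdict(list)
    for name, mode in skills.items():
        grouped[mode].append(name)
    return dict(grouped)

groups = by_mode(SKILLS)
print(groups["audit"])  # the skills that audit an existing design
```

Grouping this way makes the table's pattern visible: auditing methods dominate, with forecasting, deciding, and aligning filling out the set.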
Foundation
Every method cites its source.
Built on published research, not invented principles
None of these skills were made up. Each one encodes an established design ethics method — Brignull's dark-pattern taxonomy, Fogg's behavior model, Stanford's STF-ET toolchain, the Center for Humane Technology's sensitivity framework, and others. The discipline is four decades deep. This package makes it reachable at the moment a decision is being made, not after.
Automated evaluation can tell you whether a skill follows its own methodology. It can't tell you whether that methodology produces insight a practicing designer would trust. That takes human evaluators — designers, PMs, ethicists — running these skills against real decisions and reporting back.
If you use any of these methods in your work, open an issue or start a discussion. Negative results are as valuable as positive ones.
5-minute quick start
Try it right now
You don't need to install anything. Open your AI assistant (Claude, ChatGPT, Cursor) and try one of these prompts:
If you're designing a feature
"I'm designing [feature]. Help me identify who might be harmed even if it works as intended. Consider vulnerable groups and unexpected use cases."
This uses the Anti-Heroes method
If you're reviewing existing design
"Review this [product/page/design] for dark patterns or manipulative design elements. List each with the type and potential impact."
This uses the Fair Patterns method
If the team is stuck on a decision
"Our team is debating [decision]. Help me break this deadlock by examining it through three lenses: universal law test, stakeholder impact, and power dynamics."
This uses the Normative Design Scheme method
Get started
Two paths in
Through the agent
1. Clone the repository locally or add it to your AI assistant's context.
2. Point your AI assistant at the repo and describe your situation (e.g., "I'm designing a notification system and worried about notification fatigue").
3. The `AGENT.md` orchestrator routes you to the right method — no need to pick one yourself.
4. Follow the agent's guidance to get your ethical analysis artifact.
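The routing step can be pictured as keyword matching. This is a hypothetical sketch, not the actual `AGENT.md` logic; the keyword table and the skill assignments are illustrative:

```python
# Hypothetical routing table; the real AGENT.md orchestrator is richer.
ROUTES = {
    ("dark pattern", "manipulative"): "Fair Patterns",
    ("harm", "vulnerable"): "Anti-Heroes",
    ("deadlock", "debating"): "Normative Design Scheme",
    ("notification", "fatigue"): "Inverted Behavior Model",
}

def route(situation: str) -> str:
    """Pick the first skill whose keywords appear in the situation."""
    text = situation.lower()
    for keywords, skill in ROUTES.items():
        if any(k in text for k in keywords):
            return skill
    return "Bad Design Canvas"  # broad adversarial audit as a fallback

print(route("I'm designing a notification system and worried about notification fatigue"))
```

The point of the fallback is that no situation leaves empty-handed: when nothing matches, the broadest audit method is a reasonable default.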
Pick a skill directly
1. Browse the `tutorials/` directory or the skill index above.
2. Choose a method that matches your current need (e.g., "Fair Patterns" for dark pattern audits).
3. Load that skill's `SKILL.md` file into your AI agent as a system prompt.
4. Paste your situation and get the structured output.
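Loading a skill file as a system prompt can be sketched like this. The payload follows the common chat-API shape (system prompt plus user messages); the stand-in skill content and file path are illustrative, not the real `SKILL.md`:

```python
from pathlib import Path

def build_request(skill_path: str, situation: str) -> dict:
    """Assemble a chat-style payload with the skill file as system prompt."""
    system_prompt = Path(skill_path).read_text(encoding="utf-8")
    return {
        "system": system_prompt,
        "messages": [{"role": "user", "content": situation}],
    }

# Illustrative: write a stand-in skill file, then build the payload.
Path("SKILL.md").write_text("# Fair Patterns\nAudit the design for dark patterns.")
req = build_request("SKILL.md", "Review our checkout flow for dark patterns.")
print(req["messages"][0]["content"])
```

From here the dict maps directly onto whatever chat API your assistant exposes; the skill text rides along as the system prompt for every turn of the conversation.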