Ethical Use of AI
"With great power comes great responsibility." Spider Man
In Chapter 5, Results Verification, you learned how to verify whether an AI's output is accurate using the SENSE framework. But truth is only part of the equation. Ethics asks something else: what is the right thing to do when using AI, especially when others are affected?
This chapter introduces two practical frameworks—FAIR and RISK—to help you think clearly and act responsibly in real-world situations.
Why Ethics Matters
AI tools are powerful and persuasive. They can generate convincing text, images, or code instantly. But with great power comes responsibility:
- Are you unintentionally spreading biased or unfair information?
- Are you presenting AI work as your own?
- Are you respecting others’ privacy?
- Are you making a decision that could impact someone’s rights or well-being?
This chapter won’t give you all the answers—but it will help you ask better questions.
Key Risks to Watch Out For
🤖 Bias and Discrimination
AI tools can reflect and amplify societal biases (e.g., gender, race, class).
Example: An AI that writes job ads may reinforce stereotypes if trained on biased data.
📉 Misinformation and Hallucinations
AI can fabricate facts or present incorrect data with complete confidence.
Example: An AI-generated article includes fake statistics or citations.
🔐 Privacy and Security
Sharing personal or sensitive data with AI tools (even by accident) can be risky.
Example: Asking AI to summarize a confidential work email might leak details.
🧰 Misuse and Over-reliance
AI is a tool—not a replacement for human responsibility.
Example: Copying AI answers without checking can lead to errors or legal issues.
Sources: Montreal Declaration for Responsible AI, OpenAI on Hallucinations
Everyday Scenarios to Think About
- Generating a social media post using AI? Disclose that it's AI-generated.
- Writing marketing content? Double-check for biased or exclusionary language.
- Using someone’s personal story in training data? Get permission first.
- Using AI to screen job candidates? Be aware of bias and legal risks.
- Getting AI help to summarize sensitive documents? Consider confidentiality.
- Asking AI for life advice? Use judgment, not just output.
The FAIR Framework
Generating and publishing content
When using AI to write, generate, or publish something, run it through the FAIR test:
- F – Fairness: Is this output free of harmful stereotypes or biased assumptions?
- A – Attribution: Are you being transparent about what was generated by AI?
- I – Impact: Could this cause harm, confusion, or unintended consequences?
- R – Rights: Does this respect people’s privacy, consent, and intellectual property?
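The four FAIR questions above can be sketched as a simple personal checklist. This is a minimal illustration, not part of any library; the class and function names (`FairCheck`, `fair_review`) are invented here for the example:

```python
from dataclasses import dataclass, fields

@dataclass
class FairCheck:
    """One boolean per FAIR question; True means the check passes."""
    fairness: bool     # F: free of harmful stereotypes or biased assumptions?
    attribution: bool  # A: transparent about what was AI-generated?
    impact: bool       # I: unlikely to cause harm, confusion, or side effects?
    rights: bool       # R: respects privacy, consent, and intellectual property?

def fair_review(check: FairCheck) -> list[str]:
    """Return the names of any FAIR criteria that still need attention."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]

# Example: a draft where attribution was forgotten gets flagged for review.
draft = FairCheck(fairness=True, attribution=False, impact=True, rights=True)
print(fair_review(draft))  # ['attribution']
```

The point is not to automate ethics, but to make the questions explicit and hard to skip before you hit publish.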
The RISK Framework
Making decisions
Use this checklist when the AI is influencing a decision, judgment, or recommendation:
- R – Relevance: Is the AI’s input actually relevant and suited to this decision?
- I – Integrity: Are you keeping your own standards for accuracy, fairness, and truth?
- S – Sensitivity: Is the topic sensitive (e.g., health, identity, legal)? Proceed with care.
- K – Knowledge Gap: Are you assuming the AI knows more than it really does?
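One way to read the RISK checklist is as a simple decision gate: some answers should stop you, some should slow you down. The sketch below is illustrative only (the `risk_gate` function and its three outcomes are inventions for this example, not an established procedure):

```python
def risk_gate(relevance: bool, integrity: bool,
              sensitive_topic: bool, knowledge_gap: bool) -> str:
    """Map the four RISK answers to a rough recommendation."""
    # R and K are gating questions: irrelevant input or an
    # overestimated AI should stop the decision outright.
    if not relevance or knowledge_gap:
        return "stop: the AI's input may not fit this decision"
    # I and S call for extra care rather than a hard stop.
    if sensitive_topic or not integrity:
        return "pause: add human review before acting"
    return "proceed: keep verifying as you go"

print(risk_gate(relevance=True, integrity=True,
                sensitive_topic=True, knowledge_gap=False))
# pause: add human review before acting
```

Treat the output as a prompt for judgment, not a verdict: the human answering the four questions is still the one responsible for the decision.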
Practical Reflection Prompts
- What’s the worst misuse of this AI output I can imagine? How do I guard against it?
- Would I be comfortable showing this to the person it’s about?
- Am I hiding the fact that AI helped me?
Learn More
- See the Results Verification chapter for how to check facts.
- If you’re working in a professional context, check your organization’s AI or data policy.
- Refer to external codes like:
- Mozilla: Trustworthy Artificial Intelligence
- OECD AI Principles
- UNESCO AI Ethics Recommendation
- Montreal Declaration
- OpenAI Usage Policies