🤖 AI Critic & Chat
Get automated quality feedback on your content and ask follow-up questions in plain language.
Overview
The AI Critic is your automated quality assurance assistant. It analyzes your content for common issues across multiple quality dimensions: factual accuracy, pedagogical soundness, bias and inclusivity, assessment quality, and technical structure. When the feedback is too technical, use the AI Chat feature to get plain-language explanations.
Automated Analysis
AI reviews your content across 6 quality dimensions in seconds.
Severity Levels
Issues categorized by priority: Must Fix, Should Address, Nice to Have.
AI Chat
Ask questions about any criticism and get plain-language explanations.
Actionable Feedback
Specific suggestions on how to improve your content.
Using AI Critic
Open the Critic Modal
Click the 🔍 AI Critic button in the content editor toolbar. This opens the critic options panel.
Select Review Type
Choose the type of analysis you need. Each type focuses on different quality aspects.
Wait for Analysis
The AI analyzes your content (usually 5-15 seconds). A loading animation shows progress.
Review Feedback
Read through the structured feedback with severity indicators and specific recommendations.
Ask Questions (Optional)
Click 💬 Ask About This Criticism to chat about any feedback you don't understand.
Critic Types
Choose the right critic type based on your content and concerns:
| Type | What It Analyzes | Best For |
|---|---|---|
| 🎯 Factual Accuracy | Facts, dates, formulas, statistics, references, citations | STEM content, history, technical subjects |
| 📚 Pedagogical Review | Learning objectives alignment, scaffolding, difficulty progression, prerequisite concepts | Any instructional content |
| ⚖️ Bias & Inclusivity | Cultural stereotypes, gender bias, socioeconomic assumptions, inclusive language | Social studies, diverse classrooms, global audiences |
| 📝 Assessment Quality | Question clarity, answer correctness, distractor plausibility, SATA (select-all-that-apply) formatting | SATA and other assessment content |
| 🔧 Technical Quality | JSON structure, rendering issues, formatting errors, code syntax | Content with code, equations, or complex structures |
| 🎓 Comprehensive Review | All of the above combined | Final review before publishing, high-stakes content |
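To illustrate the kind of problem the 🔧 Technical Quality critic looks for, here is a minimal sketch of a flawed assessment item in JSON. The field names (`type`, `prompt`, `options`, `correct`) are purely illustrative and may not match your platform's actual content schema:

```json
{
  "type": "sata",
  "prompt": "Which of the following are renewable energy sources?",
  "options": ["Solar", "Wind", "Natural gas", "Hydroelectric"],
  "correct": [0, 1, 3, 5]
}
```

The JSON itself parses, but `correct` points to option index 5 while only indices 0 through 3 exist. A structural mismatch like this is the sort of issue the Technical Quality critic is designed to surface, and broken structure or wrong answer keys would typically land in the 🔴 Must Fix category.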
Severity Levels
Each issue is categorized by its priority:
🔴 Must Fix
Critical issues that must be addressed before publishing:
- Factual errors
- Wrong correct answers in assessments
- Harmful or offensive content
- Broken JSON structure
- Major bias issues
🟡 Should Address
Important improvements that significantly enhance quality:
- Unclear explanations
- Weak distractors
- Missing scaffolding
- Minor bias concerns
- Outdated information
🟢 Nice to Have
Optional enhancements for polished content:
- Additional examples
- Formatting improvements
- Style consistency
- Enhanced explanations
- Accessibility improvements
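For illustration, a single piece of critic feedback pairs a severity level with a specific issue and a suggested fix. The structure below is a hypothetical sketch, not the exact format the tool produces:

```json
{
  "severity": "Must Fix",
  "dimension": "Factual Accuracy",
  "issue": "The passage states that water boils at 100°C at any altitude.",
  "suggestion": "Note that 100°C applies at sea-level pressure; the boiling point drops as altitude increases."
}
```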
💬 AI Chat Feature
When AI Critic feedback is too technical, use the chat feature to get plain-language explanations.
How It Works
Click "Ask About This Criticism"
After receiving feedback, click the 💬 Ask About This Criticism button to open the chat modal.
Chat Context is Preserved
The AI already knows your content, the criticism type, and the specific feedback. You don't need to repeat information.
Ask in Natural Language
Type your question as you would ask a colleague. The AI understands context.
Get Actionable Explanations
The AI explains technical terms and provides specific suggestions for how to fix the issue.
Example Questions to Ask
"What does this mean?"
"How should I fix this?"
"Can you give me an example of what you mean?"
"Why is this considered a problem?"
"What would be a better way to phrase this?"
"Is this really necessary to fix?"
"Can you show me before and after?"
Example Conversation
"The distractors exhibit insufficient semantic plausibility. Cognitive discrimination load is minimal, reducing assessment validity for competency differentiation."
You Ask: "What does that mean? How do I fix it?"
AI Chat Response:
"In plain language: your wrong answers are too obviously wrong. Students can eliminate them without really understanding the topic - they're just picking the answer that 'sounds right.'
To fix this:
• Make wrong answers sound more similar to the right answer
• Use common misconceptions as distractors
• Include partially correct statements
Example: Instead of 'The sky is made of cheese' (obviously wrong), try 'The sky appears blue due to light absorption' (sounds plausible but incorrect - it's actually scattering)."
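To make that before/after concrete, here is how the same question might look as an assessment item with weak distractors and then with misconception-based ones. The JSON layout and field names are illustrative only; your actual item format may differ:

```json
{
  "prompt": "Why does the sky appear blue?",
  "correct_answer": "Shorter (blue) wavelengths of sunlight are scattered more strongly by the atmosphere",
  "weak_distractors": [
    "The sky is made of cheese",
    "Blue is the default color of empty space",
    "Astronauts paint it every morning"
  ],
  "improved_distractors": [
    "Oxygen in the air absorbs sunlight and glows blue",
    "The sky reflects the color of the oceans",
    "Blue light from the sun is simply brighter than the other colors"
  ]
}
```

The improved distractors borrow common misconceptions, so a student who picks one reveals a specific gap in understanding instead of simply ruling out nonsense answers.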
Quick Question Buttons
The chat modal includes quick buttons for common questions:
- What does this mean? - Get a plain-language explanation
- How do I fix this? - Get specific action steps
- Give me an example - See before/after examples
Recommended Workflow
For best results, follow this quality assurance workflow:
1. Generate/Edit Content
↓
2. Run AI Critic (start with the specific type that matches your content)
↓
3. Review 🔴 Must Fix issues first
↓
4. If unclear → Click "Ask About This Criticism"
↓
5. Make corrections in the editor
↓
6. Re-run AI Critic to verify fixes
↓
7. Address 🟡 Should Address issues
↓
8. Final Comprehensive Review before publishing
↓
9. Preview in student context
↓
10. Publish ✅
Tips & Best Practices
✅ Do's
- Always run AI Critic before manual review (it catches technical issues quickly)
- Use specific critics for targeted feedback
- Ask questions when feedback is unclear
- Fix 🔴 issues before publishing
- Run Comprehensive Review for high-stakes content
- Re-run critic after major edits
❌ Don'ts
- Don't blindly accept all AI suggestions - use your judgment
- Don't skip the critic for "simple" content
- Don't ignore 🟡 issues for student-facing content
- Don't rely solely on AI - add your expertise
- Don't publish without at least a quick critic check
Limitations
While AI Critic is powerful, be aware of its limitations:
- Domain-Specific Accuracy: AI may miss subject-specific errors in specialized fields. Always have domain experts review technical content.
- Cultural Context: AI may not understand local cultural nuances. Apply your knowledge of your students' backgrounds.
- Pedagogical Judgment: AI suggests best practices but doesn't know your specific classroom context. You know your students best.
- False Positives: AI may flag issues that aren't actually problems in your specific context. Use judgment.
- Language Nuance: For non-English content, AI criticism may be less accurate. Consider having native speakers review.