5.1 Agent Architecture Overview (10 minutes)
100 Agents Across 17 Categories:
- Core Learner Modeling (4)
- Tutoring Interaction (11)
- Content Generation (17)
- Pedagogical Strategies (15)
- Assessment (11)
- Cognitive Support (10)
- Social-Emotional Learning (9)
- Social & Collaborative (6)
- Domain-Specific (6)
- Accessibility & Inclusion (3)
- Teacher Support (4)
- Analytics & Research (7)
- AI Ethics & Fairness (4)
- Longitudinal Learning (4)
- Real-World Learning (4)
- Parent & Community (3)
- System Intelligence (2)
10 Coordination Levels (five representative levels shown):

| Level | Name | Agents | Time | Use Case |
|-------|------|--------|------|----------|
| 1 | Lightning | 4 | 1-2s | Quick checks |
| 2 | Quick | 11 | 2-4s | Recommended default |
| 3 | Standard | 20 | 4-8s | Comprehensive tutoring |
| 5 | Professional | 50 | 12-18s | Advanced analysis |
| 10 | Complete | 100 | 25-40s | Full ecosystem |
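The level-to-configuration mapping above can be sketched as a simple lookup. This is an illustrative sketch only: the `CoordinationLevel` dataclass, the `LEVELS` registry, and the `pick_level` heuristic are hypothetical names, not the system's actual API; the numbers come from the table.

```python
# Hypothetical sketch of the coordination-level lookup; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class CoordinationLevel:
    level: int
    name: str
    agent_count: int
    time_budget_s: tuple  # (min_s, max_s) expected latency range from the table

LEVELS = {
    1: CoordinationLevel(1, "Lightning", 4, (1, 2)),
    2: CoordinationLevel(2, "Quick", 11, (2, 4)),
    3: CoordinationLevel(3, "Standard", 20, (4, 8)),
    5: CoordinationLevel(5, "Professional", 50, (12, 18)),
    10: CoordinationLevel(10, "Complete", 100, (25, 40)),
}

def pick_level(high_stakes: bool, interactive: bool) -> CoordinationLevel:
    """Rough heuristic: fast levels for live sessions, deep levels for reports."""
    if high_stakes:
        return LEVELS[10]
    return LEVELS[2] if interactive else LEVELS[3]
```

A lookup like this makes the speed/depth tradeoff explicit in code: callers pick a latency budget, and the agent count follows from it.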
5.2 Demo: "Show AI Thinking" Workflow Visualization (15 minutes)
Step 1: Launch Learner Onboarding
- Click "Get AI Recommendation" from dashboard
- Enter learning goal: "Understand photosynthesis"
- Select Analysis Level 2 (Quick, 11 agents, 2-4s)
- Submit
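The submission in Step 1 can be pictured as a small request payload. The field names and the trace flag below are assumptions for illustration only; the outline does not document the actual request format.

```python
import json

# Hypothetical payload for the "Get AI Recommendation" submission in Step 1.
# Field names and the include_trace flag are assumptions, not the real API.
payload = {
    "goal": "Understand photosynthesis",
    "analysis_level": 2,    # Quick: 11 agents, ~2-4s
    "include_trace": True,  # keeps the data that "Show AI Thinking" replays later
}
body = json.dumps(payload)
```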
Step 2: View Recommendation Results
- System recommends: "Socratic Playground (SPL)"
- Rationale: "You have foundational knowledge but need application practice"
- Click "🧠 Show AI Thinking" button
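The Step 2 result might be represented as a structure like the following. Every key here is illustrative, and the trace id is made up; the point is that the recommendation, its rationale, and a handle to the workflow trace travel together.

```python
# Hypothetical shape of the Step 2 result; keys and the trace id are illustrative.
recommendation = {
    "module": "Socratic Playground (SPL)",
    "rationale": "You have foundational knowledge but need application practice",
    "agents_consulted": 11,                # matches Analysis Level 2 (Quick)
    "trace_id": "demo-trace-001",          # made-up id linking to the timeline
}
```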
Step 3: Explore Workflow Timeline
- Visual timeline shows 11 agent invocations
- Color-coded events (agent invoked, workflow started, completed)
- Playback controls (play/pause, speed: 0.5x, 1x, 2x, 4x)
- Click individual agents to see their reasoning
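The timeline described above can be thought of as a list of timestamped events replayed at a chosen speed. The event types mirror the color-coded categories in the demo; the field names and the `replay` helper are assumptions, not the system's actual trace schema.

```python
# Hypothetical workflow trace behind the timeline view; schema is illustrative.
events = [
    {"t": 0.00, "type": "workflow_started", "detail": "11-agent Quick analysis"},
    {"t": 0.12, "type": "agent_invoked", "agent": "LearnerModelAgent",
     "reasoning": "Prior quiz data shows foundational knowledge."},
    {"t": 1.90, "type": "workflow_completed", "detail": "Recommended SPL"},
]

def replay(events, speed=1.0):
    """Yield (wall-clock delay, event) pairs for timeline playback."""
    prev = 0.0
    for e in events:
        yield (e["t"] - prev) / speed, e  # scale gaps by playback speed
        prev = e["t"]
```

Scaling the inter-event gaps is what the 0.5x-4x playback controls amount to; per-agent reasoning strings are what clicking an individual agent would surface.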
Key Teaching Points:
- Transparency: See exactly which agents were consulted
- Reasoning: Understand why SPL was recommended
- Trust: Educators can validate AI decisions
- Research: Complete audit trail for analysis
5.3 Hands-On Activity: Try Different Analysis Levels (10 minutes)
Instructions:
- Return to Learner Onboarding
- Try Analysis Level 1 (Lightning, 4 agents) - Notice speed
- Try Analysis Level 3 (Standard, 20 agents) - Notice depth
- Try Analysis Level 5 (Professional, 50 agents) - Notice comprehensiveness
- Compare workflow visualizations across levels
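The comparison steps above can be sketched as a loop that runs the same goal at several levels and records elapsed time. `get_recommendation` is a stand-in for the real client call, which this outline does not name; the sleep only mimics the table's trend of more agents taking longer.

```python
import time

# Stand-in for the real recommendation call; latency here is simulated.
def get_recommendation(goal: str, level: int) -> dict:
    time.sleep(0.01 * level)  # placeholder latency, not the real 1-40s range
    return {"level": level, "module": "Socratic Playground (SPL)"}

results = []
for level in (1, 3, 5):
    start = time.perf_counter()
    rec = get_recommendation("Understand photosynthesis", level)
    results.append((level, time.perf_counter() - start))
```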
Observation Prompts:
- How does recommendation quality change with analysis level?
- What's the tradeoff between speed and depth?
- Which level would you choose for rapid in-session recommendations, and which for high-stakes decisions?
5.4 Group Discussion: Agent Coordination (5 minutes)
Discussion Questions:
- How does explainability build trust in AI recommendations?
- What insights can educators gain from workflow visualizations?
- How could 10-level coordination support different use cases?
Key Takeaways
- 100-agent architecture spans 17 functional categories
- 10 coordination levels balance speed vs. comprehensiveness
- "Show AI Thinking" provides complete transparency
- Workflow visualization builds trust and enables research