Design Philosophy
Most AI training fails because it treats "AI literacy" as a single skill. This architecture treats it as a system of interconnected competencies that scale with role requirements.
Why Current AI Training Fails
- Content tied to specific tools becomes obsolete in months
- One course can't serve engineers and admins equally
- "Completed" doesn't mean "competent"
- No connection between training and job performance
- Generic content that doesn't address real workflows
What This Architecture Solves For
- Foundations that transfer across any platform or tool
- Modular design: assemble builds specific to each role
- Skill demonstrations prove actual capability
- Every module ties to measurable job outcomes
- Depth levels scale from awareness to mastery
Core Design Principles
| Principle | Why It Matters | How It Manifests |
| --- | --- | --- |
| Platform Agnostic | Interfaces change quarterly; mental models don't | Teach how LLMs work, not where to click |
| Andragogy First | Adults learn through relevance and application | Every concept tied to immediate job tasks |
| Transfer Oriented | Training doesn't equal performance without design | Assessments mirror real work, not trivia |
| Modular Scalability | One course can't serve all roles equally | Role builds assembled from shared modules |
Depth Levels
Each module can be delivered at three depth levels, allowing precise calibration to role requirements.
Awareness
Know it exists, grasp basic concept. For adjacent roles or foundational understanding.
Working
Apply consistently in daily work. Primary competency level for most users.
Mastery
Optimize, troubleshoot, teach others. For power users, trainers, champions.
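The three depth levels could be modeled as a simple ordered scale attached to each module assignment. This is an illustrative sketch, not part of the architecture itself; the class and field names (`Depth`, `ModuleAssignment`) are assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Depth(Enum):
    """Three depth levels at which any module can be delivered."""
    AWARENESS = 1  # know it exists, grasp the basic concept
    WORKING = 2    # apply consistently in daily work
    MASTERY = 3    # optimize, troubleshoot, teach others


@dataclass(frozen=True)
class ModuleAssignment:
    """A module code (e.g. "F4") delivered at a specific depth in one role build."""
    code: str
    depth: Depth = Depth.WORKING  # working is the primary level for most users


# The same module can be calibrated differently for different roles:
deep = ModuleAssignment("F4", Depth.MASTERY)
baseline = ModuleAssignment("F4")
assert deep.depth.value > baseline.depth.value
```

Keeping depth as an ordered enum (rather than three separate module variants) is what lets one shared module library serve every build.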
Builds by Role
Modules combine into certification tracks tailored to each role. Each build selects relevant modules at appropriate depth levels, with tailored assessments. Organizations can customize builds or create new ones from the module library.
General Workforce
All employees · Baseline AI literacy
Modules: F1 · F2 · F3 · F4 · F5 · S1 · S3 · S4 · A1
Assessment: Knowledge verification + 2 prompt challenges
Executive Assistant
EAs supporting leadership · Power user track
Modules: F1 to F4 · S1 · S2 · S3 · S5 · A1* · A5* · A8 · L1
Assessment: 4 challenges + workflow portfolio
People Managers
Anyone managing direct reports
Modules: F1 to F5 · S1 · S3 · S4 · A1 · A8 · L3 · L4
Assessment: Challenges + team adoption plan
Customer Service
Frontline roles serving customers
Modules: F1 · F3 · F4* · F5 · S1 · S3 · A1 · A6*
Assessment: Customer scenario challenges
Marketing & Creative
Content creators, marketers, creative roles
Modules: F1 to F5 · S1 · S2 · S3 · A1* · A2 · A7* · L1
Assessment: Creative brief + campaign artifact
Technical / IT
Engineers, IT, technical roles
Modules: F1 to F4* · S1* · S2* · S3 · A4 · A10 · L2 · T1 · T4
Assessment: Code + security review
* Mastery depth. Modules marked with * are delivered at mastery depth; unmarked modules default to working depth. Organizations can adjust depth levels to match specific role requirements.
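Because every build is just a selection of module codes with depth markers, build definitions can be kept as plain data. A minimal sketch, using two builds and module codes from this document (the `BUILDS` dict and `expand` helper are illustrative names, not part of the architecture):

```python
# Builds as data: module codes from the shared library, "*" marking mastery depth.
BUILDS = {
    "General Workforce": ["F1", "F2", "F3", "F4", "F5", "S1", "S3", "S4", "A1"],
    "Customer Service":  ["F1", "F3", "F4*", "F5", "S1", "S3", "A1", "A6*"],
}


def expand(build_name):
    """Resolve each code to a (module, depth) pair.

    Codes ending in "*" are delivered at mastery depth; unmarked codes
    default to working depth, per the footnote above.
    """
    return [
        (code.rstrip("*"), "mastery" if code.endswith("*") else "working")
        for code in BUILDS[build_name]
    ]


assert expand("Customer Service")[2] == ("F4", "mastery")
assert all(depth == "working" for _, depth in expand("General Workforce"))
```

Storing builds this way makes the customization path cheap: a new role build is a new list, not a new course.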
Additional Builds Available
Software Engineers · L&D Professionals · Finance & Analysts · HR & Talent · Project Managers · Operations
Custom Build Service
Custom builds can be designed for unique organizational functions: module selection, depth calibration, and a custom assessment.
Assessment Framework
Three assessment types ensure certification reflects actual capability, not just completion. Each scales with depth level and integrates with standard LMS platforms.
📋 Knowledge Verification
Confirm conceptual understanding and appropriate judgment in AI contexts.
- Scenario based judgment questions
- Misconception identification
- Policy application scenarios
- "What would happen if..." predictions
⚡ Skill Demonstration
Prove practical capability through actual AI use with defined scenarios.
- Prompt challenges scored against rubrics
- LLM-graded evaluations
- Multi-step task completion
- Error identification and correction
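The rubric-based, LLM-graded challenge could be orchestrated roughly as follows. Everything here is a sketch: the rubric criteria are examples, and `grade_with_llm` is a placeholder for whatever grading-model call an organization wires in.

```python
# Illustrative rubric: each criterion is scored 0-2 by the grading model.
RUBRIC = {
    "task_framing": "Does the prompt state the goal, audience, and constraints?",
    "context": "Does it supply the background the model needs?",
    "output_spec": "Does it specify the format, length, and tone of the output?",
}


def score_submission(prompt_text, grade_with_llm):
    """Score a learner's prompt against each rubric criterion.

    `grade_with_llm` stands in for a call to the grading model; it takes
    (criterion_question, submission) and returns an int in 0..2. Returns
    per-criterion scores plus a scaled 0..1 total for the LMS record.
    """
    scores = {
        name: grade_with_llm(question, prompt_text)
        for name, question in RUBRIC.items()
    }
    scaled = sum(scores.values()) / (2 * len(RUBRIC))
    return scores, scaled


# Usage with a stub grader (a real deployment would call an LLM API):
stub = lambda question, text: 2 if "audience" in text else 1
scores, scaled = score_submission(
    "Draft a renewal email for a customer audience; keep it under 150 words.", stub
)
assert 0.0 <= scaled <= 1.0
```

Keeping the rubric explicit, rather than burying it in a grading prompt, is what makes the scores auditable and comparable across learners.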
📁 Portfolio Application
Demonstrate transfer to real work context with actual job artifacts.
- Work artifact submission
- Before and after comparison
- Process documentation
- Reflection narrative
LMS Implementation Considerations
TRACKING
xAPI over SCORM
Captures granular interaction data, not just completion status
VISIBILITY
Manager dashboards
Competency indicators connect learning to performance conversations
CREDENTIAL
Digital badge + LMS record
Portable recognition with internal visibility
RETENTION
Spaced retrieval prompts
Triggered after completion to combat the forgetting curve
ACCESS
Modular resource library
Modules accessible as standalone references after certification
MAINTENANCE
Quarterly review cycle
Evergreen foundations + flagged volatile sections
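To make the xAPI-over-SCORM recommendation concrete: a minimal xAPI statement recording a scored module completion looks like the one below. The verb IRI comes from the standard ADL vocabulary; the actor, module IRI, and score are illustrative.

```python
import json

# Minimal xAPI statement: actor, verb, object, result (per the ADL xAPI spec).
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/modules/F1",  # illustrative module IRI
        "definition": {"name": {"en-US": "Module F1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

print(json.dumps(statement, indent=2))
```

Unlike a SCORM completion flag, the `result` block carries the scaled score and pass/fail outcome, which is the granular data the manager dashboards and spaced-retrieval triggers above depend on.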
Measurement Framework
LEVEL 1
Reaction
Satisfaction, perceived relevance, NPS
LEVEL 2
Learning
Assessment scores, rubric performance
LEVEL 3
Behavior
Application tracking at 30, 60, 90 days
LEVEL 4
Results
Productivity metrics, time savings