The Problem We're Actually Solving
You're drowning in repetitive work. Writing measurable learning objectives that hit the right Bloom's level. Building rubrics that actually align with assignments. Formatting content to meet Quality Matters standards. Checking hundreds of documents for accessibility compliance. Drafting diplomatic emails to faculty about course updates.
Meanwhile, the AI discourse is all "revolution" and "transformation" with zero practical guidance. Vendors promise to automate everything. Thought leaders talk about the future of education. Nobody's telling you how to actually integrate AI into the work you're doing this Tuesday.
I've been doing instructional design for 7+ years, managing ~100 courses per semester at FIU Online. I've tried the tools, built some of my own, and figured out what actually works. Here's what I've learned.
What Actually Works
Writing Measurable Learning Objectives
Before AI, you had two choices: keep objectives generic, or spend serious time making them specific. Here's the difference:
Before (typical):
"Students will understand financial statements."
After (AI-assisted):
"Students will analyze quarterly financial statements to identify three key performance indicators that signal potential cash flow problems and recommend specific corrective actions based on industry benchmarks."
The AI understands Bloom's taxonomy, knows the difference between "analyze" and "evaluate," and can generate objectives that are specific, measurable, and appropriate for the course level. You provide the topic and target level; it provides the measurable language.
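One way to make this repeatable is a prompt template that bakes the Bloom's verbs in, so you're not retyping the pedagogy every time. A hypothetical sketch in Python, not any specific tool's implementation; the verb lists and wording are illustrative:

```python
# Illustrative subset of Bloom's taxonomy action verbs per level.
BLOOM_VERBS = {
    "remember": ["define", "list", "identify"],
    "understand": ["explain", "summarize", "classify"],
    "apply": ["use", "demonstrate", "implement"],
    "analyze": ["analyze", "differentiate", "compare"],
    "evaluate": ["evaluate", "justify", "critique"],
    "create": ["design", "construct", "develop"],
}

def objective_prompt(topic: str, bloom_level: str, count: int = 3) -> str:
    """Build a reusable prompt asking for measurable objectives at one Bloom's level."""
    verbs = ", ".join(BLOOM_VERBS[bloom_level.lower()])
    return (
        f"Write {count} measurable learning objectives about {topic} "
        f"at the '{bloom_level}' level of Bloom's taxonomy. "
        f"Start each objective with a verb such as: {verbs}. "
        "Each objective must name an observable behavior, a condition, "
        "and a criterion for success. Avoid vague verbs like 'understand' or 'know'."
    )

print(objective_prompt("quarterly financial statements", "analyze"))
```

The point isn't the code; it's that the constraints (observable behavior, condition, criterion) live in the template once, instead of in your head every time.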
Generating Rubric Criteria
Hand your assignment description to AI and get back detailed rubric criteria that actually align. Not just the dimensions (which you probably already know), but the specific performance indicators for each level.
Example: "Case study analysis assignment" becomes a rubric with specific criteria like "Identifies all relevant stakeholders and their interests" (excellent) vs "Identifies most stakeholders but misses key perspectives" (good). The kind of specificity that makes grading consistent and feedback meaningful.
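One way to keep AI-drafted rubrics honest is to hold them in a simple structure and verify that every criterion has a descriptor at every performance level before it goes anywhere near a gradebook. A hypothetical sketch; the "developing" and "beginning" descriptors are illustrative, not from a real rubric:

```python
LEVELS = ["excellent", "good", "developing", "beginning"]

# Criterion -> performance-level descriptors, as an AI might draft them.
rubric = {
    "Stakeholder analysis": {
        "excellent": "Identifies all relevant stakeholders and their interests.",
        "good": "Identifies most stakeholders but misses key perspectives.",
        "developing": "Identifies only the most obvious stakeholders.",
        "beginning": "Stakeholders are missing or misidentified.",
    },
}

def missing_descriptors(rubric: dict) -> list:
    """Return (criterion, level) pairs that still need a descriptor written."""
    return [
        (criterion, level)
        for criterion, descriptors in rubric.items()
        for level in LEVELS
        if level not in descriptors
    ]
```

A check this dumb catches the most common AI rubric failure: beautifully written top levels and a hand-wave at the bottom ones.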
Drafting Module Overview Pages
Those module introduction pages that nobody reads because they're too generic? AI can draft engaging, specific overviews that connect the content to the bigger picture. Provide the module topics and learning objectives, get back an overview that actually motivates students to engage.
Accessibility Checks
This one hits close to home. I built Alt-Scan because I was spending hours manually checking course files for missing alt text. AI can now:
- Suggest meaningful alt text for complex images (not just "image of chart")
- Review heading structure for logical hierarchy
- Flag color contrast issues and suggest alternatives
- Analyze reading level and suggest simplifications
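For a sense of what a check like this looks like under the hood: in a .docx file, each image's alt text lives in the `descr` attribute of a `wp:docPr` element inside `word/document.xml`. A minimal sketch (not Alt-Scan's actual code) that flags images with no alt text:

```python
import xml.etree.ElementTree as ET
import zipfile

# Namespace for Word's drawing elements, where wp:docPr lives.
WP = "http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing"

def images_missing_alt(docx_path: str) -> list:
    """Return names of images in a .docx whose docPr has no descr (alt text)."""
    with zipfile.ZipFile(docx_path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    missing = []
    for doc_pr in root.iter(f"{{{WP}}}docPr"):
        if not doc_pr.get("descr", "").strip():
            missing.append(doc_pr.get("name", "unnamed image"))
    return missing
```

A real tool also needs PPTX and PDF handling, decorative-image exceptions, and a way to tell a meaningful description from "image1.png" pasted into the field, which is where the hours used to go.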
QM Alignment Gap Identification
Upload your course materials and get a gap analysis against Quality Matters standards. AI can spot when learning objectives don't align with assessments, when course policies are missing required elements, or when navigation could be clearer. Not perfect, but a solid first pass that catches the obvious issues before peer review.
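A crude version of that first pass can even be scripted: map each objective's Bloom's verb to a level, then flag objectives that no assessment measures at that level. A hypothetical sketch with an illustrative (and deliberately tiny) verb table:

```python
# Illustrative verb -> Bloom's level lookup; a real table would be much larger.
VERB_LEVEL = {
    "list": "remember", "define": "remember",
    "explain": "understand", "summarize": "understand",
    "apply": "apply", "demonstrate": "apply",
    "analyze": "analyze", "compare": "analyze",
    "evaluate": "evaluate", "justify": "evaluate",
    "design": "create", "develop": "create",
}

def bloom_level(objective: str):
    """Guess the Bloom's level from the first recognized verb in the objective."""
    for word in objective.lower().split():
        level = VERB_LEVEL.get(word.strip(".,"))
        if level:
            return level
    return None

def alignment_gaps(objectives: list, assessed_levels: set) -> list:
    """Objectives whose Bloom's level no assessment actually measures."""
    return [o for o in objectives if bloom_level(o) not in assessed_levels]
```

Keyword matching like this is exactly the "solid first pass" described above: it catches a quiz full of recall questions sitting under an "analyze" objective, and nothing subtler than that.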
Faculty Communication
Writing diplomatic emails about course updates is an art. AI can help draft messages that explain required changes without sounding accusatory, suggest implementation timelines that are realistic, and frame feedback in terms of student success rather than compliance.
What Doesn't Work (Yet)
Let's be honest about the limitations. AI is great at structure and process, terrible at context and nuance.
Replacing Subject Matter Expertise
AI can help you write better learning objectives, but it can't tell you which concepts are actually important in advanced organic chemistry or what misconceptions students typically have about regression analysis. Faculty expertise is irreplaceable.
Auto-Generating Entire Courses
Course design is about understanding learners, context, and constraints. AI can generate content that looks good on the surface but misses the nuances that make courses actually work. It's a tool for accelerating good design decisions, not making them for you.
Understanding Institutional Context
Every institution has its quirks. Specific LMS configurations, unique policy requirements, particular student populations. AI doesn't know that your students struggle with time management or that your faculty are resistant to video assignments. You do.
Nuanced Accessibility Decisions
AI can flag obvious accessibility issues and suggest technical fixes, but it can't make judgment calls about when an alternative format is truly equivalent or how to balance accessibility with pedagogical goals. Those decisions require human expertise.
The Workflow: Integration, Not Replacement
Here's how I actually use AI in my daily work:
Use AI for First Drafts, Human Expertise for Quality
AI gets you from blank page to working draft fast. But every output needs human review. The AI-generated rubric might miss domain-specific criteria. The learning objective might use the wrong Bloom's level for your student population. Use AI to start, use your expertise to finish.
Always Verify Against Standards
AI outputs look polished but aren't always accurate. Every AI-generated learning objective should be checked against your program's standards. Every rubric should be validated against QM criteria. Every accessibility suggestion should be tested with actual assistive technology.
Build a Prompt Library
Good AI outputs come from good prompts. Document what works. When you find a prompt that consistently generates useful rubric criteria, save it. When you develop effective phrasing for learning objectives in your discipline, keep it.
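A prompt library doesn't need to be fancy. A JSON file of named templates with `{placeholders}` goes a long way. A minimal sketch; the file name and template text are illustrative:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical location

def save_prompt(name: str, template: str) -> None:
    """Add or update a named prompt template in the JSON library."""
    prompts = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    prompts[name] = template
    LIBRARY.write_text(json.dumps(prompts, indent=2))

def use_prompt(name: str, **fields) -> str:
    """Fill a saved template's {placeholders} with real values."""
    prompts = json.loads(LIBRARY.read_text())
    return prompts[name].format(**fields)

save_prompt("rubric", "Draft rubric criteria for a {assignment} assignment, "
                      "with descriptors at four performance levels.")
print(use_prompt("rubric", assignment="case study analysis"))
```

Put that file in version control and the library stops being personal notes and starts being team infrastructure.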
Check out our growing collection of ID-specific prompts at /prompts. These are tested, practical prompts for the work you're actually doing.
Start Small, Scale What Works
Don't try to AI-ify your entire workflow at once. Pick one repetitive task. Use AI to accelerate it. Learn what works and what doesn't. Then expand to the next task. Build competence gradually.
Try It Yourself
The best way to understand how AI fits into your workflow is to try it on real work. Here are three practical tools you can use right now:
Alt-Scan
Upload course files (PDF, DOCX, PPTX) and get a detailed report of missing alt text. See exactly which images need description and get suggestions for meaningful alternatives.
Course Visualizer
Map the connections between learning objectives, activities, and assessments. Spot alignment gaps before they become problems.
Prompt Library
Copy-paste-ready prompts for learning objectives, rubrics, accessibility checks, and faculty communication. Tested by working IDs, refined through real use.
The Bottom Line
AI won't replace instructional designers. But IDs who thoughtfully integrate AI into their workflow will be more efficient, more effective, and more valuable than those who don't. The question isn't whether to use AI. It's how to use it well.
Start with one task. Use AI to accelerate it. Learn what works. Scale what's valuable. The future of instructional design isn't about AI doing our jobs. It's about us doing our jobs better with AI as a tool.
Victor Iglesias
Instructional Design Consultant at FIU Online, managing ~100 courses per semester. QM Peer Reviewer and builder of practical tools that solve real ID problems.
March 2, 2026 • 8 min read