
AI vs Subject Matter Expert: Learning Design Changes
Who Needs a Subject Matter Expert Anymore?
If you've worked in instructional design for any length of time, you've probably noticed a shift. It used to be that our first step when building a course or workshop was to meet with a subject matter expert. The SME was the go-to person, the one with all the technical knowledge, the real-world experience, and the insight that could turn a generic learning experience into something truly useful.
Lately, that dynamic has been changing. In some cases, the SME is no longer involved at all. Instead, I’m being handed a stack of documents (company SOPs, internal guides, a thought leader’s book) and asked to design training from that. Other times, clients are asking me to use generative AI to create the content. "Just ask ChatGPT," they say, or "We’ve got a knowledge base. Can you work from that?" It has become a question of AI vs subject matter expert.
This trend isn’t necessarily good or bad. But it does deserve a closer look, especially for those of us who are trying to build meaningful, effective learning in today’s workplace.
Let’s talk about what we gain and what we risk when we shift away from SME-driven design.

The Way It Was
Traditionally, the SME was at the centre of instructional design. They were the ones who had walked the walk. Our role was to extract their knowledge, organize it, and translate it into something that learners could apply. The SME could explain the grey areas that never make it into a policy. They could describe what "good" looks like in action. They could give us stories and examples, and often, they could validate that the content made sense in the real world.
When I worked with SMEs, I often found myself asking the same types of questions. Why is this process done this way? What happens when something goes wrong? What do new employees usually struggle with? Their answers helped me shape learning that felt authentic. And that authenticity builds trust, which is key to engagement.
But the reality is, SMEs are often busy. They have day jobs. Sometimes they’re not available. Sometimes they’re not even identified at all. And that’s where the shift begins.
Working Without an SME
When I’m asked to build a course from documents alone, the design process looks a little different. I’m no longer asking questions in real time. Instead, I’m reading SOPs, trying to piece together not just what the process is, but also why it exists. I’m scanning for contradictions, outdated sections, or missing steps. I have to become both detective and translator.
One advantage of working from documents is speed. I don’t have to wait for someone to be available. If the material is clear and up to date, I can get started right away. In regulated industries, documents are often the source of truth. The process must follow the SOP, and training needs to reflect that. In those cases, designing from documentation can make a lot of sense.
But documents can also be painfully dry. They tell people what to do, not how to do it well. They rarely address edge cases. And they almost never include the kind of human detail that helps people connect the dots.
So, I often find myself writing between the lines, making educated guesses, and asking questions no one is around to answer. This creates a risk: the course might be technically accurate but disconnected from the lived experience of the learner.

Enter: AI
In the past couple of years, generative AI has made its way into instructional design workflows. And like many of you, I’ve experimented with using tools like ChatGPT and Copilot to help with outlines, draft content, write sample scenarios, and more.
When used thoughtfully, AI can be a helpful collaborator. It’s especially good at getting past the blank page. If I need a first draft of a lesson on communication skills or time management, I can ask the AI and have something to work with in seconds. It can help identify gaps, summarize content, or even rephrase dense material into plain language.
But AI also brings its own challenges. It doesn’t know the organization, the learners, or the workplace culture. Its output can be bland or generic unless carefully shaped. And sometimes it just gets things wrong. If I’m working on technical or highly contextual content, I can’t rely on AI to be accurate. I need to cross-check everything, and that takes time.
Perhaps more importantly, AI lacks judgment. It can produce content that sounds plausible but has no grounding in reality. Without human oversight, it can miss nuance, reinforce bias, or offer examples that fall flat.
So, Who Is the Expert Now?
As instructional designers, we’re being asked to take on more responsibility in this changing landscape. When we aren’t working directly with SMEs, we become the ones interpreting the content, filling in gaps, and deciding what matters most. That’s a big shift. It means we need a better foundation in both the subject matter and the learner’s context. We need to be willing to question the material we’re given, whether it came from a handbook or an AI prompt.
This can be empowering. It allows us to shape learning more independently. But it can also be risky if we don’t have access to someone who can validate our assumptions.
There’s also a risk of designing in a vacuum. When learning is created from documents or AI alone, it may look good on paper but fall short in practice. Learners know when something is off. They know when a scenario doesn’t feel real or when the training misses the pain points they face every day. That disconnect can lead to disengagement, or worse, non-compliance.
When SMEs Are Still Essential
Despite the trend toward more self-directed content creation, there are still plenty of times when SME input is non-negotiable. If the material involves safety, legal obligations, or highly specialized knowledge, we need someone who can verify the content. If the goal is to teach judgment, decision-making, or soft skills in complex situations, we need someone who has lived it. If the learners are likely to push back or ask, "But what about this?" we need someone who can anticipate those questions.
That said, the role of the SME is evolving. In many projects, they are no longer involved from the start, defining the scope or structuring the flow of learning. Instead, they’re brought in later, once content and activities have already been drafted. Their role becomes one of validation, not direction. They are asked to review, fact-check, and provide feedback on what already exists, rather than shape it from scratch.
This approach can work well, especially when you have a well-informed instructional designer who understands both the learning goals and the audience. It’s efficient. It respects the SME’s time. And it allows the designer to bring a consistent structure and learner-centred approach to the table before asking for expert review.
But it also depends heavily on the quality of that collaboration. If the SME isn’t given enough context, or if they don’t feel empowered to challenge what’s already been created, their input can become more of a rubber stamp than a true contribution. That’s not ideal, especially when the subject matter is complex or sensitive.
A well-designed review process can strike the right balance. It gives the SME room to clarify, correct, or expand as needed, without requiring them to build from scratch. It also positions the instructional designer as a true partner, someone who can frame the content in ways that work for learners while still staying true to the subject.
Designing with Purpose
In the end, whatever tools or sources we use, it still comes down to designing learning that works for real people in real contexts. That’s where our expertise makes the difference.