3 Ways to Scale Learning with AI
Discover how to scale your learning infrastructure for the future of work. Learn to leverage AI, automation, and adaptive systems to personalize learning, close skills gaps, and drive organizational growth. Walk away with strategies to build a future-ready learning ecosystem.
I think we're ready to rock and roll. Is that right, Matthew? We are good to go. Great. Well, good afternoon, everyone. Thank you for joining today's webinar, Three Ways to Scale Learning with AI.
I'd like to thank today's sponsor, Canvas by Instructure. Discover the world-leading, user-friendly LMS designed to simplify teaching and enhance student learning. I'm Alexandria Clapp, the senior content manager at ATD, and I will be your moderator today. Today's speaker is excited to hear from you, so you can chat with him and our fellow participants throughout the session by selecting Everyone in the chat box. Let's practice now. Tell us what you're excited to learn about today.
Why did you decide to join us? We're really excited. We will have time for some Q&A at the end of our session, so I will be saving any new or unanswered questions to be addressed at that time. And now I'm very pleased to introduce today's speaker. Zach Pendleton is the chief architect at Canvas by Instructure, and we're looking forward to learning more about how we scale learning in the age of AI. So without further ado, I'm gonna pass it over to Zach to take it away for us today, and thank you all for dropping what you're thinking about into our chat.
Fantastic. Thank you so much, Alexandria, and thank you all for coming. It's so good to virtually meet you and to see the chat. As a reminder, please don't be afraid to use the Q&A feature or the chat to ask any questions you have. It'll be a little tough for me to monitor during the session, but we'll have plenty of time at the end to answer questions about anything that may come up or that you feel like I didn't cover.
So I'm gonna go ahead and share my screen. Let's see. Here we go. Okay. So as mentioned, my name is Zach Pendleton.
I am the chief architect at Instructure, where we make the Canvas learning management system, and I have the good fortune of having celebrated my fourteen-year anniversary there just last week. As chief architect, I spend a lot of time with organizations, with schools, and with individuals, talking about teaching and learning and thinking about ways that we can connect humans more effectively through the systems that we use. Now, that's not always been my life. In fact, as I was celebrating my fourteen-year anniversary at this company, I was thinking about what my professional journey has been like.
Now, if I write a bio about myself, it almost always contains something like this. It gives a quick description of the degrees that I have, my educational experience, and then it talks about my work experience and mentions the work that I do now. But the truth is a little more complicated for me, certainly, and I think probably for most of us, because while you all seem like pretty upstanding people who have your lives together, I usually have no idea what I'm doing, and so my educational journey wasn't that clear cut. It looked a little bit more like this. I went to school, and then I took a break.
And then I went back to school, but I changed my major. And then I graduated. I went and got a job, and I worked there for a couple of years, but I had a bad day and decided I was over it. So I found a law school with a late enough admissions deadline for me to apply, and I applied and went to law school on a lark. Now, while in law school, I taught myself a little bit of programming, and I paid for law school by working as a programmer and found out that I loved it.
So when I graduated, I ended up continuing to work as a programmer, not being an attorney. And so while today my resume may look pretty clear cut, the reality is it's pretty convoluted. But I don't think I'm alone. I think that most of us have educational journeys that look like this. And that means that we all need something unique from the way that we learn and the way that we teach, and the world around us is changing too.
Now, when I talk with folks, what I hear regularly is this idea that things seem to be changing faster than they ever have before. And I used to think that was just something that old folks like myself would tell ourselves to feel better about not quite understanding why things were happening. But I think there's some truth to it, actually, because you can think about each technology that comes along as a wave, and this chart lays that out.
Each of these waves builds on the previous wave. And that means that over time, each wave of technology gets bigger and bigger, and it's able to grow faster, connect with more people, and drive bigger change. So whereas something like the steam engine took decades to be widely adopted across the world, something like ChatGPT is announced and within a week has millions of users.
So the world is moving faster. We've got our confusing, bespoke educational journeys inside of a world that is rapidly shifting out from underneath us, and we have a bit of a problem in teaching and learning, mostly around the skills that we're trying to teach our employees and what they need to be effective in their positions. This gap between the skills that employers say they need and the skills that workers are showing up with is wider now than it's ever been, in part because the world is changing so fast underneath them. But the good news is that most of these in-demand skills, things like strategic thinking, digital communication skills, and project management, are durable, but they do need to be taught.
Now, how are organizations dealing with this? What we're finding is that most organizations are approaching the skills crisis not by just hoping that the right candidate falls in their laps, but by reviewing the existing talent they have and investing in training to reskill and upskill those folks. I think that's great news for most of us inside of learning and development. As people people, I think we can all agree that investment in our folks is pretty exciting. And our employees want that too. Right? The research shows that almost all employee learning is geared toward helping them be more effective in the job they're in right now. So here you see sixty percent of employees say that they invest in upskilling to perform their current job more effectively.
Half of employees see skill development as an avenue for their continued career growth, and only four percent of employees say they're out there learning specifically so they can change jobs or companies. I think that's good news for us. What it means is that despite all of this technological chaos around us and despite this skills gap, this is our race to lose. We have the opportunity here to match the employees we have with the skills they need to do great things, to keep them in the company, and to keep us all moving in the same direction. But I don't think we can do that with the tools that we have right now.
The atmosphere around us, the shifting skills landscape, the hard truth that we don't even know what skills we're going to need to be teaching in six months, all of it means that we can't rely on the old analog systems that we have used in the past. We need something new. We've gotta rethink how we teach and how we learn. Now, thankfully, we've got artificial intelligence to help with that. I think AI is well suited to it, and that's why we're on this webinar today.
And so today, I wanna talk about using AI to build a future of learning that is personalized and predictive. The idea here is that learning will need to be, first, precision focused. We want the right content to the right person at the right time in the right format, and we can't just stamp out the same content for everybody anymore. Second, I think we can use AI to provide better analytics, to anticipate what we're going to need as an organization, or what our learners will want, before it becomes a crisis. And then finally, we'll talk about using AI to build tailored learning experiences.
That means thinking about how we can provide personal learning journeys for everybody inside of our organization. So first, let's talk about precision learning. As I said, this is really about delivering the right content to the right people at the right time. Now, that sounds easy. It sounds obvious. But we all know the reality here is a little more complicated.
Take this fictional course catalog. This is clearly not your course catalog at all, because you would never have large, time-consuming courses with broad descriptions that require employees to invest hours and hours of time. Now, I've worked at companies like this, and I can tell you I never used those companies' LMS. Right? I would take my mandatory learning, but if I needed to learn something, even if it was taught in a class like one of these, I would go to Google. And this is what we're competing with in L&D.
Historically, we've had these monolithic courses, but the world's not monolithic anymore, and skills-based approaches are not monolithic. I don't want to learn how to do everything in Excel. I wanna learn how to do a VLOOKUP, because I've got a report that's due and I need just that one piece of information. So how do we take learning from this to this?
Well, we've got a bit of a challenge in front of us, because today we've got highly specialized content inside of our organizations, but that content is all bundled together, and it can be kinda boring and slow to learn. Meanwhile, the Internet has a lot of generic content that's very engaging and just in time. So what we're going to try to do with AI is build something that's highly specialized. We're gonna lean on that great content that we have that's just locked away.
And we'll pull it out into something that can be a little more engaging for our learners and can better meet their needs where they are. So the first use case I wanna talk about today is using artificial intelligence to decompose existing content and learning outcomes into micro content. So let me go ahead and see. I'm having a problem again with my screen share. Give me just one second.
There we go. Okay. Perfect. So today in the demos, I'm gonna be using a few different tools, but I'm gonna start here with Anthropic's Claude. While I'm using Claude here, you could do the same things with any other large language model.
You could be using ChatGPT from OpenAI, you could be using Gemini from Google, and so forth. Okay. So where am I? Oh my goodness. I think I may have this backwards.
Okay. I am so sorry. I'm a little turned around here. We're gonna come back to this demo. Give me just a moment.
So the second use case is predictive analytics. Now, when we think about rapidly changing skill frameworks, taxonomies, and tools, the issue for us is how we keep up. Right? We know how we've built content in the past. It's really hard to build content that fast in the future, and it's really hard to identify what content we ought to build. A great example of this problem for me comes from an institution in the Netherlands that I worked with closely.
They were a technology institution, and so they were trying to build curriculum that was really responsive to educator needs and current industry trends. But what they found was that by the time they had developed a course, it was already out of date. We wanna avoid that. So what does proactive L&D programming look like? Well, I think it's a mix of a lot of things. First, it starts with your company goals, and then we've gotta look at market conditions.
We need to understand how those are influencing where we wanna go and how we can get there. But then we also need to look at our company culture. We wanna know what people say, and we need to understand what people are doing, because that culture engine is going to drive our success just as much as our company goals and directions are. And great L&D content takes all of that into account.
It harmonizes with where the company's going, but it also listens to what people want. As we remember, most employees see skills development and upskilling as a way to help them be better at their current job, so they're really asking for this type of content. But it's difficult for us to build, and it's difficult for us to know what to build. So what have we done in the past? Well, we use our engagement surveys.
I've certainly sent out plenty of these in my life. But the problem here is that these cycles feel a little bit like the process that school in the Netherlands was going through. Right? We send out an engagement survey. We spend a month waiting for people to take it, and then we probably spend another month trying to understand that feedback. We're aggregating it.
We're building out custom reports and tools. And then, last, we act on it, and that may take us multiple months, to the point that I've now spent six months on this, so if I'm getting employee feedback about what ought to be taught, it's already too late by the time I get to the act phase. So our goal here is to speed this up. This is another place where I think we can use artificial intelligence. We hear about it a lot as a tool that helps us save time.
What we'll talk about here is a way to save time, but also a way to extend our abilities. We're going to start with employee feedback. We'll look at any employee behavior that may be in our systems, and then we can use AI to take that large dataset, summarize it, classify it, and report on it for us, and then use that to feed our program design. What's exciting about this is that not only does it save me from having to look row by row through my engagement data, but it allows me to build reports that I probably wouldn't have known how to build before. So let's look at how to do this.
So we're gonna plan for our curriculum needs using a large language model and some employee survey data. Okay. So now this is going to work. Let's see. I'm gonna go ahead here and start by uploading a file.
I have here an employee engagement results CSV file. This is just standard CSV data: I've got employee IDs, questions on a Likert scale, and some free text.
I'm gonna go ahead and attach that in Claude, and we can get started really simply. I can just ask for a summarization of this. Let's see: can you please build me a summary report of this employee engagement CSV? I'd like some charts and graphs, and the focus should be on designing a teaching and learning curriculum.
And then let's add: in fact, give me ten course ideas most relevant to this feedback. Okay. So in this prompt, first, notice I'm giving context about what this is. Large language models like context. What I ought to be doing is going even further here and describing what type of organization I work at.
I could get more specific here, but I think this will be fine for now. I am giving it context on the outcome, and then I'm asking for a concrete number of ideas. Asking for a concrete number of things, and asking for more than you think you need, is a really great hack when working with large language models, because ideas are cheap, and candidly, not all of them are gonna be very good, so we wanna get a lot of them.
So let's go ahead and run that now. This is going to spin. It will read through that CSV, and hopefully what you can see here is that the large language model is gonna go out and call out to other things. It's probably going to write some code.
Yep. There we go. It's exciting. Right? So now I'm becoming a data scientist, because the large language model is writing Python code for me. Now, I didn't know how to do that, but I didn't need to know.
It took care of it for me. So we'll let this run for just a moment. Let's see. I thought this would be a little faster.
While this runs, I'll say: this idea of thinking about work that runs adjacent to your work is really a powerful place to get started with large language models. Think about projects where you own a piece of it and other people own other parts because there's a need for specialization. Large language models are really good at softening the boundaries between those job roles.
I'll call it the democratization of expertise, but it means something like this: you may not have to go to a report writer to write a report anymore. I could ask a large language model to write SQL for me and then execute it. Or here, for instance, the large language model is able to write code for me. Again, where I think that really shines is finding workflows that rely on multiple people, not because that's the most efficient way to do it, but because that's how specialization worked out in the past.
If you can pull some of that work into your role, frequently you can do it faster with the large language model. You save the experts' time for things that they're really excited about, things at the edge of their expertise, but you also give yourself a lot more autonomy. Okay. Let's see what we've got. Our analysis is complete.
Our employees are struggling. Oh, no. Our NPS is negative thirty one. Professional development satisfaction is the lowest metric. My goodness.
Well, it's a good thing that we're building curriculum here. The top gap is training infrastructure, and I've got top ten priority courses. And then look at all these charts. Oh my goodness.
Okay. So this is exciting. I've taken my employee engagement data, and I've very quickly been able to chart it out into a number of things that may be interesting for me. Already, I'm seeing something. Top ten requested changes: reduced meeting load.
Man, that seems like a really good idea for professional development and training: teaching people how to have fewer meetings. So now I can take all of these insights and turn them into courses. The reality is I may have these courses in my system already; I just need to break them apart. And I can very quickly turn around training content on the order of days and maybe weeks, not the months that it would have taken before.
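By the way, if you wanted to make this analysis repeatable rather than a one-off chat, the same request scripts cleanly against Anthropic's Python SDK. Here's a minimal sketch; the CSV file name and model string are placeholders, and a very large file would need to be chunked before sending:

```python
# A minimal sketch: the same engagement-survey prompt sent through the
# Anthropic Python SDK instead of the chat UI. Assumes the `anthropic`
# package is installed and ANTHROPIC_API_KEY is set in the environment.
# "engagement_results.csv" and the model name are placeholders.
import anthropic

client = anthropic.Anthropic()

with open("engagement_results.csv") as f:
    csv_text = f.read()  # fine for a small file; huge files need chunking

prompt = (
    "Here are employee engagement survey results as CSV:\n\n"
    f"{csv_text}\n\n"
    "Please build me a summary report of this data. The focus should be "
    "on designing a teaching and learning curriculum. Give me ten course "
    "ideas most relevant to this feedback."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use a current model
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```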
The third pillar here is the tailored experience: building a unique learning path for each learner. Again, we've got a lot of content, and people have their own journeys. We want people to learn in the way that's best for them, but I think we also wanna recognize how learning actually takes place, which is a little more like my own journey at the beginning.
Right? It's up and down and around, and it doesn't always happen in a linear fashion. So custom learning experiences happen in a few different ways. This is the pattern that I've seen work in my experience, starting first with assessment. We've gotta understand where employees are at, and this can be pretty simple. I've seen a lot of organizations just use something like free Google Forms, essentially creating a personalized version of a BuzzFeed quiz for someone.
Right? People love talking about themselves. Let's get them doing that and use it as our starting point. That's the easy part; it becomes a bit of a challenge from there, because then we need to figure out how to map their learning into the content that we have in our organization, ideally in a way that's format agnostic, so that people can take little bits of content or larger bits of content. And then, last, we've gotta figure out how to scale that, because this piece can be done manually, but it's very time consuming. So our third use case here is building out custom pathways for employees using their feedback and our existing curriculum. So I'm gonna go back into Claude here. This time, I'm gonna upload two different files.
I have here a CSV, just one row, from someone in my organization who took one of these quizzes. Right now, they work in support, and they're interested in becoming a software engineer. And then I'm gonna take this: my organization's course catalog.
So we'll take both of those things, and what I'm going to ask for here is a custom curriculum map. Here's the prompt: I'm uploading the course catalog for my software company's internal training LMS, and some survey information from one of my employees. I'd like you to build a custom learning pathway for this employee to help them achieve their career goals. Please propose a course path, along with explanations of your selections. Okay.
Let's see where that gets us. Again, this is something I could have done by hand. I could read that employee's needs and go look through the course catalog, or they could look through the course catalog themselves. But we don't, because it takes a lot of time, there are a lot of courses in there, and I'm not sure what I've got. Okay. Let's see.
Our technical support specialist wants to transition to a junior software engineer role within twelve to eighteen months. Recommended learning pathway. Let's see. Twenty six weeks. Okay.
And now I'm getting courses called out along with rationale: phase one, phase two, phase three. It skipped advanced courses, and it avoided cloud-specific deep dives.
They already have an AWS certification; that's good to know. And there were no mobile or specialized tracks. Twenty six weeks at ten hours a week, with a buffer, means that they get there before their twelve-month target.
This is a really great artifact that, as a manager, I'd be pretty excited to sit down and share with this employee. Now I, as a human, can tweak it, and they can tweak it, but this looks like a pretty good place to get started. And it happened so quickly that we were able to do it inside of a webinar, and I didn't need to do any manual work.
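And if you wanted to run this pathway exercise for every employee instead of one at a time, the same idea scripts just as easily. A minimal sketch, under the same assumptions as before (placeholder file names and model string; asking for JSON is just one convenient way to get output you can render):

```python
# Sketch: build a custom learning pathway from a course catalog plus one
# employee's survey row. File names and the model string are placeholders.
import anthropic

client = anthropic.Anthropic()

catalog = open("course_catalog.csv").read()
survey = open("employee_survey.csv").read()  # one row: role, goals, skills

prompt = (
    "Here is the course catalog for my software company's internal "
    f"training LMS:\n\n{catalog}\n\n"
    f"And survey information from one of my employees:\n\n{survey}\n\n"
    "Build a custom learning pathway to help this employee achieve their "
    "career goals. Return JSON: an array of phases, each with course "
    "names, estimated weeks, and a one-sentence rationale per course."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # JSON-shaped output you can parse and render
```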
So with something like that, I'm able to make better use of the content I have, make it more discoverable for employees, and also provide employees really clear direction. Another great example of this is something we built internally, where we took all of our courses and then mapped dependencies across them. So I can click on something like marketing strategy development and see what courses I need to take to prepare me for that course, and then what opportunities that course opens up for me. And fun fact: we went through a process exactly like the one I just showed you, and then asked an AI to go ahead and build this web application for us. It took a little bit of back and forth, but it's all AI generated, end to end.
Let's see. How do we get back to my slides? There we go. Okay. So as we think about using AI in these ways, what I've shown you here is a great way to get started. And, again, I think you can do these things inside of any LMS.
You can do them with any large language model, any tool. But you will reach a point where doing these things manually, even with an AI assisting you, may be difficult to scale. So I wanna talk a little bit now about how we scale up, how we think about scale, and some things that you could do to extend these types of practices, to make it easier to implement them once you've proven that they work with examples like these. True scalability, I think, relies on three things.
The first one is intelligent automation: thinking about what flows can be automated, where in a flow a system ought to exist, and, just as importantly, where the humans should be in that flow, to ensure that a human remains in the loop and in control of what's happening. And then, on the technology side, I wanna call out one concept in two ways. So first, modular architecture. As things change so quickly, we want to be able to adapt our technology very quickly, and that means we can't have systems that are monolithic, or difficult to replace, or difficult to substitute pieces in and out of.
We want systems where it's very easy to slot components in and out, and that have a good interoperability story. That then allows us to build, on the human side, these adaptive frameworks: systems that we can use in different ways, that are very flexible, so that when a new technology comes out, like large language models, for instance, we don't need to wholesale replace our entire tech stack.
If we've got a modular architecture, we can just implement a new component inside of it and then adjust our processes. So, around intelligent automation: I spoke at the beginning a little bit about chunking out your content, breaking down your web content into smaller learning artifacts. At scale, you can't just be dragging and dropping content into large language models; you need to do this in a systematic way. And the way to do that is something called retrieval augmented generation.
The idea here is that you take your existing course content, then break it down and index it in a way that a large language model would understand, so the index essentially captures the semantics of all of those pieces of content. And then, once all of those pieces are stored by what they are, we can reassemble them into the courses that we want, or into something smaller, like single pages or other formats. The power in this approach is what it allows students to do. Maybe they go to a course catalog and they search for something about violins.
They could get back results about guitars and cellos and mandolins, because the system, this RAG index, is really looking at the semantic relationships between concepts and realizes that all of those things are stringed instruments. That's really great from a discoverability viewpoint, but I also think it helps us make better use of the content we have, because it helps us find connections across courses that we may not see today. So I call this out because, while I think RAG is an unfortunate acronym, it's a really important concept to know and to have been exposed to, and it's a good thing to talk to your vendors or your internal IT teams about. Retrieval augmented generation really allows you to break down, decompose, and reassemble content at scale, in a way that I think matches very closely how people learn.
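To make the violin example concrete, here's a toy sketch of the retrieval half of RAG using the sentence-transformers library. The chunk strings are invented stand-ins for decomposed course content:

```python
# Toy RAG retrieval: semantically related chunks rank high even with no
# keyword overlap. Chunks here are invented stand-ins for course content.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

chunks = [
    "Lesson: holding the bow and tuning a violin",
    "Lesson: basic open chords on the guitar",
    "Lesson: submitting expense reports in the finance portal",
]
chunk_vectors = model.encode(chunks, convert_to_tensor=True)  # index once, store

query = "getting started with stringed instruments"
query_vector = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every stored chunk.
scores = util.cos_sim(query_vector, chunk_vectors)[0]
for chunk, score in sorted(zip(chunks, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.2f}  {chunk}")

# The violin and guitar lessons outrank the expense lesson. In a full RAG
# pipeline, the top chunks are passed to an LLM as context for generation.
```

In production you'd store those vectors in a proper index or vector database instead of recomputing them, but the retrieval idea is the same.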
Now, I talked about content there, and you saw me uploading CSVs before, so I wanna quickly mention a note about content. Content matters a lot. Your business knowledge, the content in your organization, your processes: in a world of large language models, those things remain your differentiators.
You know what makes you and your business special, and you need to lean into that. I'll say, if you're not aligned to that, or if you're not actively working on building your own content and investing in your own folks, you're going to fail with or without AI. We see this really dramatically inside of traditional education. A number of early studies have shown that if we just take what students are doing today in a classroom, drop AI into that process, and give it to students, learning outcomes actually get worse. AI alone is not going to help us or solve our problems if we're not investing properly in the things that came before it.
And then we have to think about how we use it responsibly, and I think to do that, we've gotta own our content. Now, a great example of how this goes wrong, of what happens when we don't get the right content in front of these systems, comes out of Google's AI Overviews feature. If you remember, this feature came out, what was it, last year?
Right? So we could ask a question to Google, and now, instead of just getting search results, we get this nice AI overview on top. Well, someone thought they would be cheeky, and they asked Google: how much glue should I put on my pizza? And the AI overview came back and suggested an eighth of a cup. It said that it would help keep the cheese from falling off the pizza, literally gluing it on, and that it would improve the consistency. Now, that's embarrassing, but it's funny, and it probably didn't hurt anybody.
Well, a bunch of news outlets wrote about this. So then, after that, someone came back and asked again: how much glue do I add to my pizza? This time, they got an AI overview that didn't just tell them an eighth of a cup. It said: according to a May twenty twenty four article in Business Insider, you should add an eighth of a cup. So what happened? We started with bad data, because we didn't have any data. That got codified somewhere and leaned on, and now we've got a large language model that's citing back to the bad data from the first example.
I love this example, first because I like the idea of using glue to keep cheese from falling off my pizza, but second because I think it illustrates really clearly what happens if we've got bad data, or the wrong data, and we just throw a large language model into the mix. It adds to the confusion and doesn't fix it. So the second thing I think you wanna look for, as you think about scaling out these practices, is systems that have a modular architecture, systems that rely on open standards and have great existing integration tools. They've got APIs.
As a rule of thumb, anybody who's got a wide partner ecosystem almost certainly has this. Now, I mention this stuff not because I'm an engineer and I love talking about APIs and architecture, though I do, but because, as it says here, this is really not about the technology. It's about giving you choice. Here's an example from inside my LMS: if I don't like the discussion boards in my LMS, I wanna be able to replace them with discussion boards that I do like.
That's a simple example, but it shows the power of something that's built in a modular way from the ground up. And then, on the flip side, what we get if we build those types of modular experiences is what I call adaptive frameworks. It becomes really easy to repackage, reformat, and shuffle content to bring it to where the learner is. So the content may be in my desktop LMS.
Can I get it out and give it to somebody on a mobile phone? I can, if that LMS has really great APIs. Or say I've got a bunch of SCORM content. Can I chunk it out, break it apart, and deliver it as single, small, consumable HTML pages for someone? The answer is yes, if I've got a system that's built in this way. It's when we do that that we can start to scale out really meaningfully, because we can reach folks where they are and in different ways, and we don't have to lean on just one approach, which we know is not gonna capture everybody's interest or attention. So I recommend putting together some type of interoperability vendor checklist.
This is, I think, a good place to start. It doesn't have to be the whole thing; again, you know your people and your systems better than I do. But when you meet with new vendors, ask them about APIs. What APIs do they have? Are they able to update your system without expensive or time-consuming migrations, which would suggest they're built in a way that is scalable and extensible?
Is there a plug-in framework? Can you talk to some of their other partners? That way, when you buy a system, you're thinking not just about what that system does, but about all of the discoverable, latent use cases it could be good for once it's available inside of your broader teaching and learning ecosystem. And when we do that, learning moves away from being just a to-do, something we've gotta check the box on for compliance, and really becomes the engine of growth in our organizations, where we're identifying very early in the pipeline what skills we have inside of our organization and what skills we're lacking. Then we can scaffold new content very quickly, get it out to where people need it to be, and iterate and repeat. So I'll wrap up here and move to the questions by saying these are really the three places where I see most organizations starting to have success with AI. First, precision learning: thinking about how we take our existing content and break it down into smaller pieces.
Second, predictive analytics: getting out ahead of what people need, either by automating things like survey result analysis or by running better surveys. And third, using that information to help everyone in our organization build a tailored, unique learning path that reflects their journey and where they wanna go. So with that, let's cut over to questions. Let me stop my share for a moment and see what we've got. Alright. Let me jump back to one that I saved up here.
How can you use AI to help turn SCORM files into microlearning? Yeah. You know what? Thank you so much for asking that. Let's do it. Sorry, you're just gonna have to tolerate me being an engineer here for a moment, because I'm gonna open a terminal.
I'll share my screen. Okay. I'll share the whole thing again. There we go. Okay.
So here, I have this runtime basics SCORM 2004 package. This is just the sample golf course, if you're familiar with that one. I'm gonna do this inside of this terminal because it's just a little easier to do quickly, but you could do this with Claude or other tools. So I'm gonna go ahead and open Anthropic's Claude here again. Okay.
And here I've got that SCORM class. Let's say: read through this SCORM package and identify ten learning outcomes. Pick the first learning outcome and rewrite the content as a small HTML page. So, again, I think this prompt is good enough here.
You're usually going to know what things you want, and I would include those things. I would also include information about the audience: existing skill level, desired outcomes, and so forth. But you'll see what I've got happening here now is the large language model reading through all of the content, and you'll see that, because I told it this was a SCORM package, it started with the manifest.
Alright. It's pretty exciting. So there we go: the ten outcomes of this course.
Play of the game, the par system, scoring terminology, rules of golf, and so forth. So already, I've identified what learning outcomes are in here, which gives me some idea of how to break this thing apart. And I told it to create a page for the first learning outcome, play of the game, so it's working on that right now.
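If you want to see what that first step looks like in code, reading the manifest yourself is just a few lines of Python. A minimal sketch; the package path is a placeholder, and the namespace shown is the common IMS content-packaging one, which you should verify against your own package:

```python
# Sketch: list the item titles in a SCORM package's imsmanifest.xml, the
# same file the model read first. The package path is a placeholder, and
# the namespace below is the common IMS content-packaging one; check it
# against your own package's manifest.
import zipfile
import xml.etree.ElementTree as ET

CP = "http://www.imsglobal.org/xsd/imscp_v1p1"

with zipfile.ZipFile("golf_sample_scorm2004.zip") as pkg:
    root = ET.fromstring(pkg.read("imsmanifest.xml"))

# Each <item> in the organization tree is one navigable unit of the course.
for item in root.iter(f"{{{CP}}}item"):
    title = item.find(f"{{{CP}}}title")
    if title is not None:
        print(title.text)

# These titles, plus each item's linked HTML resource, are what you would
# feed to the LLM when asking for outcomes and rewritten micro pages.
```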
Right? So that's what I would do, some variation of this. And can you remind us, for folks who might be a little bit confused by what you were just modeling, what platform that was, what exactly folks were looking at? Yeah, sorry. That was Claude, the same LLM that we used earlier, just inside of my console there.
And the reason I did it that way was that it saved me from having to drag and drop all of those SCORM files onto the web UI, but I could've done it that way. Okay. Cool. Okay. I'm saving lots of questions.
We have lots of questions. I don't know if we'll get to them all, but thank you all for sharing them. Okay. I was gonna jump to Amanda's, because she asked a good question. You were putting info into Claude, and she asked about governance: potentially violating a company's governance policies by putting data into an LLM.
So can you talk about closed environments, or what you all recommend? Yes. That is a great question. Okay. I will say a few things here. First, I'll excuse my own behavior by saying that was all fake data.
I didn't upload the engagement results of any of my friends or family here. But that's a very reasonable concern. If I were to just open ChatGPT or Claude or Google Gemini, and I did not have an enterprise license agreement, all of the data that I'm uploading into those systems and all of the chats that I'm having are being saved, and they're being used to train future versions of those models. That means that if I'm uploading sensitive information, either PII or company IP, I'm exposing it to leakage in the future. So definitely don't do that. Now, what can you do? First, an enterprise agreement.
All of these providers offer enterprise packages that will stop them from saving the data and using it for training. So that's the first thing I would recommend. The second thing you can look at is similar, but a little different. If you have a relationship with a cloud provider, maybe you're hosting some things in Amazon Web Services or in Azure or on Google Cloud, those providers all have AI systems as part of their cloud packages as well. The advantage there is that the data is going to stay inside of the cloud where you've already got your data.
So if you have any type of regionalization concerns, if you are subject to GDPR or things like that, looking at those cloud providers can be a really attractive option to keep your data even a little safer, and in the places where you already are. Yeah. That's a great question. Okay. I'm gonna jump to another one.
Are you familiar with any inexpensive certifications or credentials or courses that help build credibility in using AI in learning programs? I'll drop a few things that I know about in the chat too. Okay. That's great. Yeah.
Actually, I'm not aware of any certifications. I do know that we have a number of customers who provide free training content for using AI, mostly in an education context. I'm happy to share those; I'll send the slides out later, and I'll add a slide with links to those resources too. Yeah.
One thing I will say, certification or not, is that I hope everybody is taking time to explore and experiment with AI. I think the research shows pretty consistently that when people don't use it, they either are afraid of it or think of it as a magic tool that solves everything. And when people do use it, they develop much more nuanced opinions about it and, as a result, are much more open to its use, because they understand where it's appropriate and where it's not. I dropped in a generic AI resources page that ATD has, and there are a few things on there if folks want to explore. Our education team has a mix of different levels and types of courses and certificates.
There's a new one, in collaboration with Josh Cavalier, on applying AI, but there are also workshops. And there are other vendors out there who offer free courses, so you don't have to pay money. And then I noticed at demo day, who was it? UMU mentioned they have an AI literacy course that you can take. So go check out the ATD demo day for emerging technologies that just happened; I'll find the link for you all.
You can find things like that too. UMU offers one, but there are also free ones out there. And I noticed that some of the LLM providers themselves offer certain types of certificates, free ones, where you can get a better understanding of the models themselves. I think I'm kind of reiterating some of what Zach said. So there are a lot of things out there, but it's a great question in terms of what to recommend, because it can almost be overwhelming how many things are out there.
That's fair. Okay. Let me keep jumping, because we have more questions for you, Zach. What about on-screen help for software I use? For example, okay, this is the first time I'm reading this one.
I'm a nurse using new patient scheduling software. A new patient note feature is now available. How could I use AI to help the nurse learn how to use the new note feature? Okay. Great question. I'll put that in the chat for you too, so you can see what I'm referencing, Zach. So I think what I've seen work here is taking documentation that you have in probably some traditional format, whether it's a SCORM course or a PDF or something else that describes to people how to use that feature. The easiest way to do this, and I'll share my screen again really quickly here so that you can see it, because I think this is a pretty powerful approach for a lot of things, is using something like Google's NotebookLM. You can upload that content, or lots of pieces of content, and it creates for you a custom chatbot that now knows about those things. So in this example here, I've uploaded my company's employee handbook.
This gives me the chatbot. I can listen to it as a podcast, I can watch a video about it, and it breaks it down into a mind map for me. Hopefully, that'll generate.
But now I can ask this thing something like: do I get Columbus Day off of work? Normally, Google Gemini wouldn't know that. But here, it's read my employee handbook, so it can tell me that's not a holiday that I get, here are the holidays that I do get, and then it cites back to where in the document that's listed. I can go find the actual source.
So I shared this one because, while I know there are a few tools that do this, I think NotebookLM is a really powerful one, and it's a great way to build custom chatbots that you could use for training and development really quickly. I was just dropping another resource in the chat.
That is awesome. I'm gonna keep jumping because we have more questions. That's great. Do you have any recommendations for how to deploy this approach in a large HRIS environment? Let's say, for example, a tool like Workday. As a learning professional, I am not an engineer.
How would you recommend I gain buy-in from our technical in-house experts and decision makers? Yeah. Good question. First, I'll say Workday is a great example of one of those extensible, open systems that I talked about earlier. That's a good piece of infrastructure to build things like this on top of. In those situations, here's what I'd recommend, because as you scale, you're gonna have to invest at some point in building infrastructure, connections, connecting APIs, things like that.
Don't start with scale. Start by showing the art of the possible. To build buy-in, you have to start by doing things that you know don't scale, but that are awesome. And then, once you get the vision of the thing out there, that's when I would loop in the technologists and say, okay, now what are ways that we can scale this up? One thing I will say with AI: I would be very careful right now trying to pitch it as a cost-savings tool.
I know some organizations feel like they can save a bunch of money by using AI, either by reducing headcount or in other ways. I think that's a little dicey as a first value prop. I would look at increasing efficiency. I would look at improving performance. And, again, I would use the tool to align those early efforts really closely to what the business is passionate about.
Awesome. And I think we might have actually gotten to most of our questions. So I just dropped in a note for folks. If I accidentally missed your question or if you have a new question, don't be shy. We still have time.
Let's see. Looking here. Oh, University of Michigan. That's a great partner of ours. It's good to know they've got a cert.
And I'm just scrolling back. There are some that I did skip over. Okay. Yeah. Let's go to Kristen's, where I did see some confusion about what LLMs are, and a request for examples.
But let's jump to these new ones, because I think we hopefully covered that in the context of the session. What is the best way to ensure accuracy of the AI data? Any recommendations there? Yeah. Okay. So people talk about AI lying to you as hallucination. There are things we can do to minimize it, but I will say it's not going away. With the way that the current generation of generative AI tools are built, hallucination will always exist in them, though it can be reduced.
The way to reduce it is to focus on context and data. Give the large language model as much information about your problem space as you can, and ask it to cite back to that. That will reduce hallucination most of the time, to a point where you can pretty safely use the tools. Things like NotebookLM, as I just showed, will cite back to where things appear inside of the document, so you can know and verify for yourself. That's what I would recommend.
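The grounding pattern itself is simple enough to sketch. Reusing the employee handbook example from earlier, with a placeholder file name and model string, the key moves are supplying the source text and telling the model to quote it or admit the answer isn't there:

```python
# Sketch of prompt grounding to reduce hallucination: supply the source
# document and require quoted citations. The handbook file is a placeholder.
import anthropic

client = anthropic.Anthropic()
handbook = open("employee_handbook.txt").read()

prompt = (
    "Answer using ONLY the handbook below. Quote the passage you relied "
    "on. If the handbook does not cover the question, say so instead of "
    "guessing.\n\n"
    f"--- HANDBOOK ---\n{handbook}\n--- END HANDBOOK ---\n\n"
    "Question: Is Columbus Day a paid company holiday?"
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # answer plus the quoted handbook passage
```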
I think you're, yeah, you're right to be a little distrustful of them. It's good to verify. And, yeah, as you're saying, verify. Also, add that human oversight back in as a review process, never trusting it point blank. Let's jump to Sherry's question.
Would it also be beneficial to use AI to recommend the best ways to measure training, or to identify the best metrics? How would you approach that with AI? Yeah. Okay. So I find AI is really good at helping me ideate. It's not really great at giving me awesome ideas right out of the gate, especially in a space where I know a lot.
So if you're an expert in a space, I wouldn't go to AI and say, hey, just give me ten ideas, or write this presentation for me, or something like that. That's probably gonna disappoint you, because with AI, if you think about it, all of these answers are essentially regressing to the mean of the domain, right in the middle. And that means that if my knowledge is below the middle, it's a really great tool.
If my knowledge is above the median, then I get mixed results. So I would start with some ideas about how you wanna measure your training, and then ask AI to expand on them. Follow that trick where you say, okay, here are five things that I think are important for measuring results in my company. Now give me fifty more.
Then sift through those and use them to ideate. And at the end of that, as we showed, I would take it a step further and say, great, these are the ones I wanna use. Here's what my data looks like. Now write code for me that helps me measure and visualize those things.
Cool. Alright. Do you have any suggestions for bringing in learning consultants to help small teams with all of these changes and ideas around AI? I think I just recently saw a report, too, that teams have had more success with implementation when they bring in a third-party vendor instead of making something in-house. There are a couple of reasons for that, but I'm curious what your recommendations are. Yeah.
That's great. Okay. I would say, first, start with your business objectives. I wouldn't bring somebody in and say, hey, our goal is to use a bunch of AI.
I'd say: our goals are one, two, and three. This is where we're headed. These are the challenges we're facing. Now let's bring in a consultant who has expertise in the AI space and say, what we wanna do is map out how creative we could get with these processes if we used something like AI. That way, you're focused on the outcomes, and you're leaning on them to help you adjust the processes and the flows, and then they can suggest places where AI may not be a good fit, or where it may be a great option. Cool. And maybe this will be our last question, because we're starting to run out of time, and we're starting to see people saying thank you so much as they hop to their next call. So, Robert, we'll end on your question: for a chatbot, would I need a digital adoption platform to get something onto a web page or the proprietary software I work on? So, if I'm understanding the question correctly: that software does need the hooks to be able to inject the chatbot in.
That is true. And that's where asking those questions around modular architecture, composability, and so on is really important. The other thing I'll call out is that, if the systems you're using have APIs, there's an emerging standard with these large language models, something called MCP, that will actually allow the AI to connect to that system and take action on your behalf, which can be pretty exciting when we talk about workflow automation. But, yeah, that's right.
You'd be leaning on the system to have all those hooks in place. Cool. Well, this has been so helpful, Zach. You are a wealth of knowledge. This was so much fun.
I wish we could keep picking your brain. If folks have questions for you and the Canvas team, can they reach out to you? Yeah. Absolutely. You know my name now: it's Zach at Instructure dot com. Perfect.
We'll put that in the chat. Thank you again to our sponsor, Canvas by Instructure. This was awesome. I wanna remind you all that we are gonna send you a follow-up email with a link to the recording. If you had to hop off early, or you didn't make it live and you're tuning in in the future, thank you so much, and we hope you have a wonderful afternoon and join us for future webinars. Thanks, everyone.
I'd like to thank today's sponsor, Canvas by Instructure, Discover, the world leading user friendly LMS designed to simplify teaching and enhance student learning. I'm Alexandria Clapp, the senior content manager at ATD, and I will be your moderator today. Today's speaker is excited to hear from you so you can chat with him and our fellow participants throughout the session by selecting everyone in the chat box. Let's practice now. Tell us what you're excited to learn about today.
Why did you decide to join us? We're really excited. And we will have time for some q and a at the end of our session, so I will be saving any new or unanswered questions to be addressed at those times. And now I'm very pleased to introduce today's speaker. Zach Pendleton is a chief architect at Canvas by Instructure, and we're looking forward to learning more about how we scale learning in the age of AI. So without further ado, I'm gonna pass it over to Zach to take it away for us today, and thank you all for dropping in what you're thinking about in our chat.
Fantastic. Thank you so much, Alexandria, and thank you all for coming. It's so good to virtually meet you and to see see the chat. So as a reminder, please don't be afraid to use the q and a feature or the chat to to ask any questions you have. It'll be a little tough for me to monitor during the session, but we'll have plenty of time at the end to answer questions about anything that, may come up or that, that you feel like I didn't cover.
So I'm gonna go ahead and share my screen. Let's see. Here we go. Okay. So as mentioned, my name is Zach Pendleton.
I am the chief architect at Instructure where we make Canvas learning management system. And I have the the fortune of a good fortune of having celebrated just last week my fourteen year anniversary there. As chief architect, I spend a lot of time with organizations, with schools, and with individuals talking about teaching and learning and thinking about ways that we can connect humans more effectively through the systems that we use. Now, that's not always been my life. In fact, I I was thinking as I was celebrating my fourteen year anniversary at this company, what my professional journey has been like.
Now if I write a bio about myself, it almost always contains something like this. It gives a quick description of the the degrees that I have, my educational experience, and then it talks about my work experience and and mentions the work that I do now. But the truth is a little more complicated for me, certainly, and I I think probably for most of us, because while you all seem like pretty upstanding people who have your lives together, I usually have no idea what I'm doing, and so my educational journey wasn't that clear cut. It looked a little bit more like this. I I went to school, and then I took a break.
And then I went back to school, but I changed my major. And then I I graduated. I went and I got a job, and I I worked there for a couple years, but I had a a bad day and decided I was over it. And so I found a law school with a late enough admissions deadline for me to apply, and I applied and went to law school on a lark. Now while in law school, I I taught myself a little bit of programming, and I I paid for law school by working as a programmer and found out that I loved that.
So when I graduated, I ended up continuing working as a programmer, not being an attorney. And so while today, my resume may look pretty clear cut, the reality is it's pretty convoluted. But I I think that I'm not alone. I think that most of us have educational journeys that look like this. And that means that, we all need, something unique from from the way that we learn, the way that we teach, and and the world around us is changing too.
Now when I talk with folks, what I hear regularly is this idea that things seem to be changing faster than they ever have before. And I used to think that that was just something that old folks like myself would tell tell ourselves to feel better about not quite understanding why things were happening. But I think there's some truth to it, actually. Because if you think about each technology that comes along as a wave. And so this charts it out here.
Each of these waves builds on the previous wave. And that means that over time, this wave of technology gets bigger and bigger, and it's able to grow faster. It's able to connect with more people. And and drive bigger change. So whereas something like the steam engine took decades to be widely adopted across the world, something like ChatGPT is announced and within a week has millions of users.
K? So the world is moving faster. So So we've got our confusing bespoke educational journeys inside of a world that is rapidly shifting out from underneath us, and we have a bit of a problem in teaching and learning. Mostly around the skills that we're trying to teach our employees and what they need to be effective at their positions. So this gap between the skills that employers say that they need the skills that workers are showing up with is wider now than it's ever been, in part because the world is changing so fast underneath them. But the good news is that most of these in demand skills, things like strategic thinking, digital communication skills, project management, are things that are durable, but they do need to be taught.
Now, how are organizations dealing with this? What we're finding is that most organizations are thinking, about the skill crisis not by just hoping that the right candidate falls in their laps, but by reviewing the existing talent they have and investing in training to reskill and upskill those folks. I think that's great news for most of us inside of learning and development. As as people people, I I think we're all can agree that investment in our folks is is pretty exciting, And our employees want that too. Right? The research shows that almost all employee learning is geared towards helping them be more effective in the job that they're in right now. So here you see sixty percent of employees say that they invest in upskilling to perform their current job more effectively.
Half of employees see skill development as an avenue for their continued career growth, and then only four percent of employees are saying they're out there learning so that they can specifically change jobs or companies. I think that's good news for us. And then what it means is that despite all of this technological chaos around us and despite this skills gap, this is our race to lose. We we have the opportunity here to match the employees we have with with the skills they need to do great things, to keep them in the company, and to keep us all moving in the same direction. But I don't think we can do that with the tools that we have right now.
The atmosphere around us, the shifting skills landscape, and the hard truth that we don't even know what skills we're going to need to be teaching in six months all mean that we can't rely on the old analog systems that we have used in the past. We need something new. We've gotta rethink how we teach and how we learn. Now, thankfully, we've got artificial intelligence to help with that. I think AI is well suited to it, and that's why we're on this webinar today.
And so today, I wanna talk about using AI to build a future of learning that is personalized and predictive. The idea here is that learning will need to be, first, precision focused. We want the right content to the right person at the right time in the right format, and we can't just stamp out the same content for everybody anymore. Second, I think we can use AI to provide better analytics that anticipate what we're going to need as an organization, or what our learners will want, before it becomes a crisis. And then finally, we'll talk about using AI to build tailored learning experiences: providing personal learning journeys for everybody inside of our organization.
So first, let's talk about precision learning. Really, as I said, this is about delivering the right content to the right people at the right time. Now that sounds easy. It sounds obvious. But we all know that the reality here is a little more complicated.
Take this fictional course catalog. This is clearly not your course catalog at all, because you would never have large, time-consuming courses with broad descriptions that require employees to invest hours and hours of time. K? Now I've worked at companies like this, and I can tell you I never used those companies' LMS. Right? I would take my mandatory learning, but if I needed to learn something, even if it was taught in a class like one of these, I would go to Google. Alright? And this is what we're competing with in L and D.
Historically, we've had these monolithic courses, but the world's not monolithic anymore, and skills-based approaches are not monolithic. I don't want to learn how to do everything in Excel. I wanna learn how to do a VLOOKUP, because I've got a report that's due and I need just that one piece of information. K? So how do we take learning from this to this?
Well, we've got a little bit of a challenge in front of us, because today we've got highly specialized content inside of our organizations, but that content's all bundled together, and it can be kinda boring and slow to learn. The Internet, meanwhile, has a lot of generic content that's very engaging and just in time. So what we're going to try to do with AI is build something that's highly specialized. We're gonna lean on that great content that we have that's just locked away, and pull it out into something that can be a little more engaging for our learners and can better meet their needs where they are. So the first use case I wanna talk about today is using artificial intelligence to decompose existing content and learning outcomes into micro content. So let me go ahead and see. I'm having a problem again with my screen share. Give me just one second.
There we go. Okay. Perfect. So today, in the demos, I'm gonna be using a few different tools, but I'm gonna start here with Anthropic's Claude. While I'm using Claude here, you could do the same things with any other large language model.
You could be using ChatGPT from OpenAI. You could be using Gemini from Google, and so forth. Okay. So where am I? Oh my goodness. I think I may have this backwards.
Okay. I am so sorry. I'm a little turned around here. We're gonna come back to this demo. Give me just a moment.
K? So the second use case is predictive analytics. When we think about rapidly changing skill frameworks, taxonomies, and tools, the issue for us is how we keep up. Right? We know how we've built content in the past. It's really hard to build content that fast in the future, and it's really hard to identify what content we ought to build. A great example of this problem for me comes from an institution that I worked with closely in the Netherlands.
They were a technology institution, and they were trying to build curriculum that was really responsive to educator needs and current industry trends. But what they found was that by the time they had developed a course, it was already out of date. So we wanna avoid that. So what does proactive L and D programming look like? Well, I think it's a mix of a lot of things. Right? First, it starts with your company goals, and then we've gotta look at market conditions.
We need to understand how those are influencing where we wanna go and how we can get there. But then we also need to look at our company culture. We wanna know what people say. We need to understand what people are doing, because that culture engine is going to drive our success just as much as our company goals and directions do. And great L and D content takes all of that into account.
It harmonizes with where the company's going, but it also listens to what people want. Alright? As we remember, most employees see skills development and upskilling as a way to help them be better at their current job, so they're really asking for this type of content. But it's difficult for us to build, and it's difficult for us to know what to build. So what have we done in the past? Well, we use our engagement surveys.
I've certainly sent out plenty of these in my life. But the problem here is that these cycles feel a little bit like that process the school in the Netherlands was going through. Right? We send out an engagement survey. We spend a month waiting for people to take it, and then we probably spend another month trying to understand that feedback. We're aggregating it.
We're building out custom reports and tools. And then last, we act on it, and that may take us multiple months, alright, to the point that I've now spent six months on something. If I'm getting employee feedback about what ought to be taught, it's already too late by the time I get to this act phase. So our goal here is to speed this up. This is another place where I think we can use artificial intelligence. Alright? We hear about it a lot as a tool that helps us save time.
I think what we'll talk about here is a way to save time, but also a way to extend our abilities. So we're going to start with employee feedback. We'll be looking at any employee behavior that may be in our systems, and then we can use AI to take that large dataset, summarize it, classify it, and build reports for us, and then use all of that to feed our program design. And what's exciting about this is that not only does it save me from having to look row by row through my engagement data, but it allows me to build reports that I probably wouldn't have known how to build before. So let's look at how to do this.
So we're gonna plan for our curriculum needs using a large language model and some employee survey data. Okay. So now this is going to work. Let's see. So I'm gonna go ahead here and start by uploading a file.
I have here an employee engagement results CSV file. K? So this is just standard CSV data. I've got employee IDs, questions on a Likert scale, and some free text. K.
I'm gonna go ahead and attach that into Claude, and we can get started really simply. Right? I can just ask for a summarization of this. Let's see: can you please build me a summary report of this employee engagement CSV? I'd like some charts and graphs, and the focus should be on designing a teaching and learning curriculum.
And then let's say, in fact: give me ten course ideas most relevant to this feedback. K. So in this prompt, first, notice I'm giving context about what this is. Large language models like context. And what I ought to be doing is go even further here and describe what type of organization I work at.
I could get more specific here, but I think this will be fine for now. I am describing the context and the desired outcome, and then I'm asking for a concrete number of ideas. Asking for a concrete number of things, and asking for more than you think you need, is a really great hack when working with large language models, because ideas are cheap and, candidly, not all of them are gonna be very good, so we wanna get a lot of them.
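For anyone who wants to reproduce this outside of the chat interface, here's a minimal sketch of the same request using Anthropic's Python SDK. The file name, model string, and prompt wording are illustrative assumptions rather than exactly what's shown in the demo; the pattern is the point: give context, describe the desired outcome, and ask for a concrete number of ideas.

```python
# Minimal sketch: summarize engagement survey data with an LLM over the API.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in the environment; file name and model are illustrative.
import anthropic

# Read the survey export as plain text so it can go straight into the prompt.
# (A very large export may need to be truncated or summarized in batches.)
with open("employee_engagement_results.csv", encoding="utf-8") as f:
    survey_csv = f.read()

client = anthropic.Anthropic()

prompt = (
    "Here is an employee engagement survey export as CSV:\n\n"
    f"{survey_csv}\n\n"
    "Please build me a summary report of this data. The focus should be on "
    "designing a teaching and learning curriculum. Give me ten course ideas "
    "most relevant to this feedback."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=2048,
    messages=[{"role": "user", "content": prompt}],
)

print(message.content[0].text)
```

One difference to note: the chat UI can also write and run charting code itself, as the demo shows next. Over the raw API, you would either ask for the analysis as text, as here, or run the generated Python yourself.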
So let's go ahead and run that now. This is going to spin. It will read through that CSV, and hopefully what you can see here is that the large language model is gonna go out and call out to other things. Right? It's probably going to write some code.
Yep. There we go. It's exciting. Right? So now I'm becoming a data scientist, because the large language model is writing Python code for me. Now, I didn't know how to do that, but I didn't need to know.
It took care of it for me. So we'll let this run for just a moment. Let's see. K. I thought this would be a little faster.
While this runs, I'll say: this idea of taking on work that runs adjacent to your own work is really a powerful place to get started with large language models. Think about projects where you own a piece and other people own other parts because there's a need for specialization. Large language models are really good at softening the boundaries between those job roles.
So, you know, I'll call this the democratization of expertise, but it means something like: you may not have to go to a report writer to write a report anymore. I could ask a large language model to write SQL for me and then execute that. Or here, for instance, the large language model is able to write code for me. Again, where I think this really shines is finding workflows that rely on multiple people, not because that's the most efficient way to do it, but because that's how specialization worked out in the past.
If you can pull some of that work into your role, frequently you can do it faster with the large language model. You save the experts' time for things that they're really excited about, things at the edge of their expertise, but you also give yourself a lot more autonomy. Okay. Let's see what we've got. So our analysis is complete.
Our employees are struggling. Oh, no. Our NPS is negative thirty one. Professional development satisfaction is the lowest metric. My goodness.
Well, it's a good thing that we're building curriculum here. Top gap: training infrastructure. And top ten priority courses. And then, look at all these charts. Oh my goodness.
Okay. So this is exciting. K? I've taken my employee engagement data, and I very quickly have been able to chart this out into a number of things that may be interesting for me. Already I'm seeing something under top ten requested changes: reduced meeting load.
Man, that seems like a really good idea for professional development and training, teaching people how to have fewer meetings. K? And so now I can take all of these insights and turn them into courses. The reality is I may have these courses already in my system. Right? I just need to break them apart. And I can very quickly turn around training content on the order of days and maybe weeks, not the months that it would have taken before.
The third pillar here is this tailored experience: building a unique learning path for each learner. So, again, we've got a lot of content. People have their own journeys. We want people to learn in the way that's best for them, but I think we also wanna recognize how learning actually takes place, which is a little more like my journey at the beginning.
Right? It's up and down and around, and it doesn't always happen in a linear fashion. So custom learning experiences happen in a few different ways. This is the pattern that I've seen work in my experience, starting first with assessment. We've gotta understand where employees are at, and this can be pretty simple. Right? I've seen a lot of organizations just use something like free Google Forms, essentially creating a personalized version of a BuzzFeed quiz for someone.
Right? People love talking about themselves. Let's get them doing that and use it as our starting point. That's the easy part; it becomes a bit of a challenge from there. Then we need to figure out how we map their learning into the content that we have in our organization, ideally in a way that's format agnostic, so that people can take little bits of content or larger bits of content. And then last, we've gotta figure out how to scale that, because this piece can be done manually, but it's very time consuming. So our third use case here is building out custom pathways for employees using feedback and our existing curriculum. So I'm gonna go back into Claude here. Now this time, I'm gonna upload two different files.
I have here just a CSV, one row, from someone in my organization who took one of these quizzes. Right now, they work in support. They're interested in becoming a software engineer. And then I'm gonna take this here. This is my organization's course catalog.
K? So we'll take both of those things, and now what I'm going to ask for is a custom curriculum map. K? So: I'm uploading the course catalog for my software company's internal training LMS and some survey information from one of my employees. I'd like to build a custom learning pathway for the student to help them achieve their career goals. Please choose courses, along with explanations of your selections. K.
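If you wanted to script this two-file version of the request, here's a minimal sketch of how the inputs might be combined into one prompt. The file names are hypothetical, and the XML-style tags are a common prompt pattern for keeping multiple documents distinct; the assembled prompt would be sent with the same API call as in the earlier sketch.

```python
# Minimal sketch: assemble a pathway-planning prompt from two inputs.
# File names are hypothetical; XML-style tags help the model keep the
# two documents separate, a pattern Anthropic's prompting docs recommend.
from pathlib import Path

catalog = Path("course_catalog.csv").read_text(encoding="utf-8")
survey = Path("employee_survey_row.csv").read_text(encoding="utf-8")

prompt = f"""I'm sharing the course catalog for my software company's
internal training LMS and some survey information from one of my employees.

<course_catalog>
{catalog}
</course_catalog>

<employee_survey>
{survey}
</employee_survey>

Please build a custom learning pathway for this employee to help them
achieve their career goals. Choose courses from the catalog only, and
include an explanation for each selection."""
```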
Let's see where that gets us. So, again, this is something I could have done by hand. Right? I could read that employee's needs, and I could go look through the course catalog, or they could, but we don't, because it takes a lot of time, there are a lot of courses in there, and I'm not sure what I've got. K. Let's see.
Our technical support specialist wants to transition to a junior software engineer role within twelve to eighteen months. Recommended learning pathway. Let's see. Twenty-six weeks. Okay.
And now I'm getting courses called out along with rationale. Phase one, phase two, phase three. K. We skipped advanced courses. We avoided cloud-specific deep dives.
They already have an AWS certification. That's good to know. And there were no mobile or specialized tracks. K. Twenty-six weeks at ten hours a week, with a buffer, means that they get there before their twelve-month target.
This is a really great artifact that, as a manager, I'd be pretty excited to go sit down and share with this employee. Now I can tweak this as a human, they can tweak it, but this looks like a pretty good place to get started. And it happened so quickly that we were able to do it inside of a webinar, and I didn't need to do any manual work. K.
So with something like that, I'm able to make better use of the content I have, make it more discoverable for employees, and also provide employees really clear direction. Another great example of this is something that we built internally, where we took all of our courses and then mapped dependencies across them. So I can click on something like marketing strategy development, and I can see what courses I need to take to prepare me for that course, and then what opportunities that course opens up for me. Right? And fun fact: we went through a process exactly like the one I showed you and then asked an AI to go ahead and build this web application for us. It took a little bit of back and forth, but it's all AI generated, end to end. K.
Let's see. How do we get back to my slides? There we go. Okay. So as we think about using AI in these ways, what I've shown you here is a great way to get started. And, again, I think you can do these things inside of any LMS.
You can do them with any large language model, any tool. But you will reach a point where doing these things manually, even with an AI assisting you, may be difficult to scale. So I wanna talk a little bit now about how we scale up, how we think about scale, and some things that you could do to extend these types of practices, to make it easier to implement them once you've proven that they work with these types of examples. True scalability, I think, relies on three things.
The first one is intelligent automation: thinking about what flows can be automated, where in a flow a system ought to exist, and, just as importantly, where the humans should be in that flow to ensure that a human remains in the loop and in control of what's happening. And then on the technology side, I wanna call out one concept in two ways. So first, modular architecture. As things change so quickly, we want to be able to very quickly adapt our technology, and that means we can't have systems that are monolithic, difficult to replace, or difficult to substitute pieces in and out of.
K? We want systems where it's very easy to slot components in and out, systems that have a good interoperability story. And that allows us to build, on the human side, these adaptive frameworks. Right? Systems that we can use in different ways, that are flexible enough that when a new technology like large language models comes out, for instance, we don't need to wholesale replace our entire tech stack.
If we've got a modular architecture, we can just implement a new component inside of it and then adjust our processes. So around intelligent automation: I spoke at the beginning a little bit about chunking out your content, breaking down content into smaller learning artifacts. At scale, you can't just be dragging and dropping content into large language models. You need to do this in a systematic way, and the way to do that is something called retrieval augmented generation.
The idea here is that you take your existing course content and then break it down and index it in a way that a large language model can understand, so that the index essentially captures the semantics of all of those pieces of content. And then, once all of those pieces are indexed by what they are, we can reassemble them into the courses we want, or into smaller units, single pages, or other things. The power in this approach is in allowing students, for instance, to go to a course catalog and search for something about violins.
They could get back results about guitars and cellos and mandolins, because the system, this RAG index, is really looking at the semantic relationships between concepts and realizes that all of those things are stringed instruments. Right? That's really great from a discoverability viewpoint, but I also think it helps us make better use of the content we have, because it helps us find connections across courses that we may not see today. So I call this out because, while RAG is an unfortunate acronym, it's a really important concept to know and to have been exposed to; it's a good thing to talk to your vendors or your internal IT teams about. Retrieval augmented generation really allows you to break down, decompose, and reassemble content at scale in a way that, I think, matches very closely how people learn.
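Here's a minimal sketch of the retrieval half of that idea, using the sentence-transformers library for embeddings (one possible choice; the chunk text and model name are illustrative). A production system would typically store the vectors in a vector database and pass the top matches to the LLM as context, which is the "augmented generation" half.

```python
# Minimal sketch of the retrieval half of RAG: embed content chunks once,
# then find the chunks semantically closest to a query.
# Assumes `pip install sentence-transformers numpy`; model is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# In practice these would be chunks decomposed from your course content.
chunks = [
    "Caring for your violin: strings, bow tension, and humidity",
    "Guitar basics: tuning and first chords",
    "Cello bowing technique for beginners",
    "Quarterly expense report walkthrough",
]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)

query = "violin lessons"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity; vectors are normalized, so a dot product suffices.
scores = chunk_vectors @ query_vector
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {chunks[idx]}")
```

Run against a real catalog, the string-instrument chunks score far above the expense report for that query, which is exactly the semantic-relationship behavior described above.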
Now, I talked about content there, and you saw I was uploading CSVs before, so I wanna quickly mention a note about content. Content matters a lot. K? Your business knowledge, the content in your organization, your processes: in a world of large language models, those things remain your differentiators.
K? You know what makes you and your business special, and you need to lean into that. I'll say that if you're not aligned to that, or if you're not actively working on building your own content and investing in your own folks, you're going to fail with or without AI. We see this really dramatically inside of traditional education. A number of early studies have shown that if we just take what students are doing today in a classroom, drop AI into that process, and give it to students, learning outcomes actually get worse. K? AI alone is not going to help us or solve our problems if we're not investing properly in the things that came before it.
K. And then we've gotta think about how we use it responsibly, and I think to do that, we've gotta own our content. Now, a great example of how this goes wrong, of what happens when we don't get the right content in front of these systems, comes out of Google's AI Overview feature. K? If you remember, this feature came out, what was it, last year?
Right? So we could ask a question to Google, and now, instead of just getting search results, we get this nice AI overview on top. Well, someone thought they would be cheeky, and they asked Google: how much glue should I put on my pizza? And the AI overview came back and suggested an eighth of a cup. It said that it would keep the cheese from falling off the pizza, literally gluing it on, and that it would improve the consistency. That's embarrassing, but it's funny, and it probably didn't hurt anybody.
K? Well, a bunch of news outlets wrote about this. So then, after that, someone came back and asked again: how much glue do I add to my pizza? This time, they got an AI overview that didn't just tell them an eighth of a cup. It said: according to a May twenty twenty four article in Business Insider, you should add an eighth of a cup. K? So what happened? We started with bad data, because we didn't have any data. That got codified somewhere and leaned on, and now we've got a large language model that's citing back to the bad data from the first example.
I love this example, first, because I like the idea of using glue to keep cheese from falling off my pizza, but second, because I think it illustrates really clearly what happens if we've got bad data or the wrong data and we just throw a large language model into the mix. K? It adds to the confusion and doesn't fix it. So the second thing I think you wanna look for, as you think about scaling out these practices, is systems that have modular architecture: systems that rely on open standards and that have great existing integration tools. They've got APIs.
A good example: anybody who's got a wide partner ecosystem almost definitely has this. Right? Now, I mention this stuff not because I'm an engineer and I love talking about APIs and architecture, though I do, but because, as it says here, this is really not about the technology. It's about giving you choice. Right? Here's an example inside of my LMS: if I don't like the discussion boards in my LMS, I wanna be able to replace them with discussion boards that I do like.
Right? That's a simple example, but it shows the power of something that's built in a modular way from the ground up. K. And then, on the flip side, what we see if we build those types of modular experiences is what I call adaptive frameworks. It makes it really easy to repackage, reformat, and shuffle content to bring it to where the learner is. So the content may be on my desktop LMS.
Can I get it out and give it to somebody on a mobile phone? Right? I can if that LMS has really great APIs. Or say I've got a bunch of SCORM content. Can I chunk that out, break it apart, and deliver it as single, small, consumable HTML pages for someone? Right? The answer is yes if I get a system that's built in this way. It's when we do that that we can start to scale out really meaningfully, because we can reach folks where they are and reach folks in different ways, and we don't have to lean on just one approach, which we know is not gonna capture everybody's interest or attention. So I recommend putting together some type of interoperability vendor checklist.
This is, I think, a good place to start. It doesn't have to be the whole thing; again, you know your people and your systems better than I do. But when you meet with new vendors, ask them things about APIs. Right? What APIs do they have? Are they able to update your system without expensive or time-consuming migrations? That suggests they're built in a way that is scalable and extensible.
Right? Is there a plug-in framework? Can you talk to some of those other partners? So that when you buy a system, you're thinking not just about what that system does, but about all of these discoverable, latent use cases that it could be good for once you make it available inside of your broader teaching and learning ecosystem. And when we do that, learning moves away from being just a to-do, something we've gotta check the box on for compliance, and really becomes the engine of growth in our organizations, where we're very early in the pipe identifying what skills we have inside of our organization and what skills we're lacking. Then we can scaffold new content very quickly, get it out to where people need it to be, and then iterate and repeat. And so I'll wrap up here and move to the questions by saying that these are really the three places where I see most organizations starting to have success with AI. First, precision learning: thinking about how we take our existing content and break it down into smaller pieces.
Second, predictive analytics: getting out ahead of what people need, either by automating things like survey result analysis or by running better surveys. And third, using that information to help everyone in our organization build a tailored, unique learning path that reflects their journey and where they wanna go. So with that, let's cut over to questions. Let me stop my share for a moment and see what we've got. Alright. Let me jump back to one that I saved up here.
How can you use AI to help turn SCORM files into microlearning? Yeah. You know what? Thank you so much for asking that. Let's do it. You'll just have to tolerate me being an engineer here for a moment, because I'm gonna open a terminal.
I'll share my screen. Okay. I'll share the whole thing again. There we go. Okay.
So here, I have this runtime basics SCORM 2004 package. This is just the sample golf course, if you're familiar with that one. K? I'm gonna do this inside of this terminal because it's just a little easier to do quickly, but you could do this with Claude on the web or other tools. So I'm gonna go ahead and open Anthropic's Claude here again. Okay.
And here I've got that SCORM course. Let's say: read through this SCORM package and identify ten learning outcomes. Pick the first learning outcome and rewrite the content as a small HTML page. K. So, again, I think this prompt is good enough here.
You're usually going to know what things you want, and I would include those. I would also include information about the audience: existing skill level, desired outcomes, and so forth. But what you've got happening here now is the large language model reading through all of the content, and you'll see that because I told it this was a SCORM package, it started with the manifest.
Alright. It's pretty exciting. K. So there we go: the ten outcomes of this course.
Play of the game, the par system, scoring terminology, rules of golf, and so forth. So already I've identified what learning outcomes are in here, which gives me some idea of how to break this thing apart. I told it to create a page for the first learning outcome, play of the game, so it's working on that right now.
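For the curious, here's a minimal sketch of that first step, reading the manifest out of a SCORM zip to list its item titles, before any content goes to a model. The package file name is hypothetical.

```python
# Minimal sketch: list the item titles in a SCORM package's manifest,
# the same starting point the model used above. Package path is illustrative.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("RuntimeBasics_SCORM2004.zip") as pkg:
    manifest = pkg.read("imsmanifest.xml")

root = ET.fromstring(manifest)

# Manifests are namespaced; matching on local tag names keeps this simple.
def local(tag):
    return tag.rsplit("}", 1)[-1]

for elem in root.iter():
    if local(elem.tag) == "item":
        title = next((c.text for c in elem if local(c.tag) == "title"), None)
        if title:
            print(title)
```

Each item's resource files could then be read from the zip and handed to an LLM with a prompt like the one above to extract outcomes and rewrite the content as standalone HTML pages.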
Right? So that's what I would do, some variation of this. And can you remind us, for folks who might be a little bit confused about what you were just modeling, what platform that was, what exactly folks were looking at? Yeah. Sorry. That was Claude, the LLM that we used earlier; it was just running inside of my console there.
And the reason I did that was because it saved me from having to drag and drop all of those SCORM files onto the web UI, but I could've done it that way. Okay. Cool. Okay. I'm saving lots of questions.
We have lots of questions, so I don't know if we'll get to them all, but thank you all for sharing them. Okay. I was gonna jump to Amanda's, because she asked a good question. You were putting info into Claude, and she asked about governance: potentially violating companies' governance policies by putting data into an LLM.
So can you talk about closed environments, or what you all recommend? Yes. That is a great question. Okay. I will say a few things here. First, I'll excuse my own behavior by saying that was all fake data.
Okay? I didn't upload the engagement results of any of my friends or family here. But that's a very reasonable concern. If I were to just open ChatGPT or Claude or Google Gemini and I did not have an enterprise license agreement, all of the data that I'm uploading into those systems, and all of the chats that I'm having, are being saved, and they're being used to train future versions of those models. That means that if I'm uploading sensitive information, like PII or company IP, I'm exposing it to leakage in the future. K? So definitely don't do that. Now, what can you do? First, an enterprise agreement.
All of these providers offer enterprise packages that will stop them from saving the data and using it for training. K? So that's the first thing I would recommend. The second thing you can look at is similar, but a little different. If you have a relationship with a cloud provider, right, maybe you're hosting some things in Amazon Web Services or in Azure or on Google Cloud, those providers all have AI systems as part of those cloud packages as well. The advantage there is that the data is going to stay inside of the cloud where you've already got your data.
So if you have any type of regionalization concerns, you know, if you are subject to GDPR or things like that, looking at those cloud providers can be a really attractive option to keep your data even a little safer and in the places where you already are. Yeah. That's a great question. Okay. I'm gonna jump to another one.
Are you familiar with any inexpensive certifications or credentials or courses that help build credibility in using AI in learning programs? I'll drop a few things that I know about in the chat too. Okay. That's great. Yeah.
Actually, I'm not aware of any certifications. I do know that we have a number of customers who provide free training content for using AI, mostly in an education context. I'm happy to share; I'll send the slides out later, and I'll add a slide with links to those resources too. Yeah.
One thing I would say, certification or not, is that I hope everybody is taking time to explore and experiment with AI. I think the research shows pretty consistently that when people don't use it, they either are afraid of it or think of it as a magic tool that solves everything. And when people do use it, they develop much more nuanced opinions about it and, as a result, are much more open to its use, because they understand where it's appropriate and where it's not. I dropped in just a generic AI resources page that ATD has, and there's a few things on there if folks want to explore. Our education team has a mix of different levels and types of courses and certificates.
So there's a new one in collaboration with Josh Cavalier on applying AI, but there are also workshops. And there are other vendors out there who offer free courses, so you don't have to pay money. And then I noticed at demo day, who was it? UMU mentioned they have an AI literacy course that you can take. So go and check out the ATD demo day for emerging technologies that just happened. I'll find the link for you all.
You can find things like that too. UMU offers one, but there are also free ones that you can find. And I noticed that some of the actual LLM providers themselves offer some types of certificates, free ones where you can get a better understanding of the models themselves. I think I'm kind of reiterating some of what Zach said. So there are a lot of things out there; it's a great question in terms of what to recommend, because it can almost be overwhelming how much is out there.
That's fair. Okay. Let me keep jumping, because we have more questions for you, Zach. What about on-screen help for software I use? For example, okay, this is the first time I'm reading this.
I'm a nurse using new patient scheduling software. A new patient note feature is now available. How could I use AI to help the nurse learn how to use the new note feature? Okay. Great question. I'll put that in the chat for you too so you can see the reference. Okay, Zach.
So I think what I've seen work here is taking documentation that you already have in some traditional format, right, a SCORM course or a PDF or something that's describing to people how to use that feature. The easiest way to do this, and I'll share my screen again really quickly so that you can see it, because I think this is a pretty powerful approach for a lot of things, is using something like Google's NotebookLM. You can upload that content, or lots of pieces of content, and it creates for you a custom chatbot that now knows about those things. K? So in this example here, I've uploaded my company's employee handbook.
K? This gives me the chatbot. I can listen to it as a podcast. I can watch a video about it. It breaks it down into, like, a mind map for me. Hopefully, that'll generate.
But now I can ask this thing something like: do I get Columbus Day off of work? Normally, Google Gemini wouldn't know that. Right? But here, it's read my employee handbook. It can tell me that's not a holiday that I get, here are the holidays that I do get, and then it cites back to where in the document that's listed, so I can go find the actual source.
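You can approximate this grounding pattern with any LLM API by putting the document in the prompt and requiring cited answers. Here's a minimal sketch, again using the Anthropic SDK; the handbook file, model string, and prompt are illustrative assumptions, not NotebookLM's actual internals.

```python
# Minimal sketch of the grounding pattern: give the model the document,
# then require answers that cite it. File/model names are illustrative.
import anthropic

with open("employee_handbook.txt", encoding="utf-8") as f:
    handbook = f.read()

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative
    max_tokens=1024,
    system=(
        "Answer only from the employee handbook below. Quote the section "
        "you relied on, and say so if the handbook does not cover the "
        "question.\n\n"
        f"<handbook>\n{handbook}\n</handbook>"
    ),
    messages=[{"role": "user", "content": "Do I get Columbus Day off of work?"}],
)
print(message.content[0].text)
```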
So I shared this one, and I know there are a few tools that do this. I think NotebookLM is a really powerful one, but it's a great way to build custom chatbots that you could use for training and development really quickly. K. I was just dropping another resource in the chat.
That is awesome. I'm gonna keep jumping because we have more questions. That's great. Do you have any recommendations for how to deploy this approach in a large HRIS environment? Let's say, for example, a tool like Workday. As a learning professional, I am not an engineer.
How would you recommend I gain buy-in from our technical in-house experts and decision makers? Yeah. Good question. First, I'll say Workday is a great example of one of those extensible, open systems that I talked about earlier. That's a good piece of infrastructure to build things like this on top of. In those situations, as you scale, you're gonna have to, at some point, invest in building infrastructure, connections, or API integrations, things like that.
But don't start with scale. Start by showing the art of the possible. I would say, to build buy-in, you have to start by doing things that you know don't scale, but that are awesome. Right? And then, once you get the vision of the thing out there, that's when I would loop in the technologists and say, okay, now what are ways that we can scale this up? One thing I will say with AI: I would be very careful right now trying to pitch it as a cost savings tool.
I know some organizations feel like they can save a bunch of money by using AI, either by reducing headcount or in other ways. I think that's a little dicey as a first value prop. Right? I would look at increasing efficiency. I would look at improving performance. And, again, I would align those early efforts really closely to what the business is passionate about.
Awesome. And I think we might have actually gotten to most of our questions. So I just dropped in a note for folks. If I accidentally missed your question or if you have a new question, don't be shy. We still have time.
Let's see. Looking here. Oh, University of Michigan. That's a great partner of ours. Good to know they've got a cert.
And I'm just scrolling back; there are some that I did skip over. Okay. Yeah. Let's go to Kristen's, where I did see some confusion about what LLMs are and a request for examples.
But let's jump to these new ones, because I think we hopefully covered that in the context of the session. What is the best way to ensure accuracy of the AI data? Any recommendations there? Yeah. Okay. So people talk about AI lying to you; that's hallucination. Right? And there are things we can do to minimize it, but I will say it's not going away. With the way the current generation of generative AI tools is built, hallucination will always exist in them, though it can be reduced.
The way to reduce it is to focus on context and data. Give the large language model as much information about your problem space as you can, and ask it to cite back to that. That will reduce hallucination, most of the time, to a point where you can pretty safely use the tools. Things like NotebookLM, as I just showed, will cite back to where things appear inside of the document, so you can verify for yourself. That's what I would recommend.
I think you're right to be a little distrustful of them. It's good to verify. And, yeah, as you're saying, verify. Also, add that human oversight back in as a review process; never trust it point blank. Let's jump to Sherry's question.
Would it also be beneficial to use AI to recommend the best ways to measure training, or to identify the best metrics? How would you approach that with AI? Yeah. Okay. So I find AI is really good at helping me ideate. It's not always great at giving me awesome ideas right out of the gate, especially in a place where I know a lot.
Right? So if you're an expert in a space, I wouldn't go to AI and say, hey, just give me ten ideas, or write this presentation for me, or something. That's probably gonna disappoint you, because with AI, if you think about it, all of these answers are essentially regressing to the mean of the domain. Right in the middle. And that means that if my knowledge is below the middle, it's a really great tool.
If my knowledge is above the median, then I get mixed results. So I would start with some ideas about how you wanna measure your training and then ask AI to expand on them. Right? Follow that trick where you say, okay, here are five things that I think are important for measuring results in my company. Now give me fifty more.
And then sift through those and use them to ideate. Then, at the end of that, as we showed, I would take it a step further and say: great, these are the ones I wanna use. Here's what my data looks like. Now write code for me that helps me measure and visualize those things.
Cool. Alright. Do you have any suggestions for bringing in learning consultants to help small teams with all of these changes and ideas around AI? I think I recently saw a report that teams have had more success with implementation when they bring in a third-party vendor instead of making something in-house. There are a couple of reasons for that, but I'm curious what your recommendations are. Yeah.
That's great. Okay. I would say, first, start with your business objectives. Right? I wouldn't bring somebody in and say, hey, our goal is to use a bunch of AI.
I'd say: our goals are one, two, and three. This is where we're headed. These are the challenges we're facing. Now let's bring in the consultant who has the expertise in the AI space and say, what we wanna do is map out how creative we could get with these processes if we use something like AI. Right? That way, you're focused on the outcomes, and you're leaning on them to help you adjust the processes and the flows, and then they can suggest places where AI may not be a good fit, or maybe a great option.
Cool. And maybe this will be our last question, because we're starting to run out of time, and I'm starting to see people saying thank you so much as they hop to their next call. So, Robert, we'll end on your question. For a chatbot, would I need a digital adoption platform to get something onto a web page or the proprietary software I work on? So, if I'm understanding the question correctly: that software does need the hooks to be able to inject the chatbot in.
That is true. Right? And that's where asking those questions around modular architecture and composability is really important. The other thing I'll call out is that if the systems you're using have APIs, there are some emerging standards for these large language models, something called MCP, that will actually allow the AI to connect to that system and take action on your behalf, which can be pretty exciting when we talk about workflow automation. But, yeah, that's right.
You'd be leaning on the system to have all those hooks in place. Cool. Well, this has been so helpful, Zach. You are a wealth of knowledge. This was so much fun.
I wish we could keep picking your brain. If folks have questions for you and the Canvas team, how can they reach out? Yeah. Absolutely. You know my name now; it's Zach at Instructure dot com. Perfect.
We'll put that in the chat. Thank you again to our sponsor, Canvas by Instructure. This was awesome. I wanna remind you all that we're gonna send you a follow-up email with a link to the recording. If you had to hop early, or you didn't make it live and you're tuning in in the future, thank you so much, and we hope you have a wonderful afternoon and join us for future webinars. Thanks, everyone.
During this event, you will learn to:
• Use AI to do more with less in your learning program.
• Build data-informed skill-development programs that cater to employees’ real needs.
• Align learning to measurable business impact.