Success Stories from the Generative AI Revolution for Workforce Development

Generative AI is transforming the way we learn, work, and prepare for the future. In this session, we'll explore real-world success stories where AI has reshaped workforce development: enhancing training programs, streamlining upskilling, and bridging the gap between education and employment. Join us to discover how institutions and organizations are leveraging AI to build more agile, inclusive, and future-ready talent pipelines.

Video Transcript
Well, hi, everyone. Thank you all for joining today's webinar, Success Stories From the Generative AI Revolution for Workforce Development. I'm Bianca Woods, a content manager here at ATD, and I'm delighted to be joining you as today's webinar host and moderator. Among other things, that means I'm here to keep a super close eye on the chat.

And if you happen to ask any questions that don't get answered in the moment, I've got you covered. I'll hang on to those, and we can revisit them at the end of the session if we've got time. Now before we get into today's event, I wanted to really quickly let you know about two other upcoming ATD events that you might be interested in checking out. And don't worry, I'm gonna be popping links to both of them in the chat in just a sec. If you wanna strengthen your core learning and development skills, things like instructional design, facilitation, and evaluating impact, while also keeping on top of the latest emerging trends in our field like AI, we would love to have you at this year's Core 4 conference.

We've got sessions for people at every point in their learning and development career, from beginners to industry veterans. This event's happening from September twenty-ninth to October first in Orlando, Florida, and there's also a curated livestream option available if you're looking for virtual events. Now if you're interested in building out your strategic skills, then check out our OrgDev conference. This is a premier event for organizational development professionals where you'll explore topics like culture, talent strategy and management, leadership and employee development, bridging organizational gaps, and building strategic business partnerships.

And this year, that event is taking place from October twentieth to twenty-second in Houston, Texas. As promised, if you wanna check these out after the webinar, I've just popped the links to both in the chat. Now let's jump into today's exciting session. I wanted to start by thanking our sponsor, Instructure, the makers of Canvas. With a powerful learning ecosystem, Instructure helps organizations design and deliver training that supports real outcomes, whether that's onboarding new employees, upskilling teams, certifying skills, or educating customers.

I'm also so pleased to welcome our speaker for today, Ryan Lufkin. Ryan is the vice president of global academic strategy at Instructure, where he works to enhance the academic experience for educators and learners worldwide. With more than two decades in the ed tech world, he's got experience with pretty much every major technology program that institutions use to develop or deliver education, everything from the LMS to the SIS and everything in between. So I'm gonna pass it over to Ryan to get today started.

Perfect. Thank you, Bianca. Appreciate it. And thank you, Joy, for setting us up. I'm gonna walk through my slides today.

If you have questions, go ahead and drop them in the chat at any point during the conversation. We'll have time at the end for Q&A, but if there's something that's timely, Bianca's gonna jump in and ask those questions as well. As Bianca said, my name is Ryan Lufkin. I'm the vice president of global academic strategy here at Instructure, the makers of Canvas.

If you're not familiar with Canvas, Canvas is the most widely adopted learning management system in North America and the fastest growing worldwide. At this point, about a third of all K-12 students, about half of all university students, and a growing number of employees across North America are using Canvas on a daily basis to learn. In my role as a member of the academic strategy team, I actually get to travel all over the globe and talk to colleges, universities, businesses, and schools about their needs. We talk a lot about AI specifically, but we also talk about all of the trends facing education. I was in Chile two weeks ago.

Actually, it was last week; it's a blur. I was in Chile last week and Regina, Canada three weeks ago, and I'll be in Spokane next week.

So I get out quite a bit, and it's amazing because I love to bring that perspective back and share it with you all. When we started the academic strategy team here at Instructure two years ago, we said, okay, how are we going to figure out the trends that are impacting lifelong learning across the globe? We came up with what we call the Impactful Eight, and you can see those eight here. They pretty much impact everyone trying to deliver teaching and learning, across every vertical. First is operational efficiency and effectiveness: how do we make the most of the systems we're using? How do we modernize our systems? How do we make sure we're connecting with students where they wanna be connected with? It's a quickly evolving education landscape across the board, so how do we keep pace with that? Second is lifelong learning.

Right? We used to go to school for twelve, sixteen, maybe twenty years and then work at an individual job for up to thirty years. My dad had thirty-five years working for the airline he worked for, and that's just no longer the case. We're not staying at jobs as long; the average is now somewhere around three years, and there's a need for constant upskilling, reskilling, and even preparing for encore careers.

We may switch halfway through what would be a traditional career path. How do we, as educators, educational institutions, and employers, support our workforce in that development? We often hear, what if I train them and they leave? Well, what if you don't train them and they stay? That is one of the biggest challenges. And then how do we align that full lifelong learning experience? Next is assessment: how do we actually measure the mastery of skill? This has been turned on its head. There's a lot that's been written around cheating with AI. I'm gonna argue that if you're still considering the use of AI cheating, you're probably doing it wrong, and we'll talk a little bit more about that.

Generative AI weaves its way through all of these topics, and we'll deep-dive on that here shortly. Then there's evidence-based design: how do we use the data we now have to provide better learning experiences? How do we actually use that data to power AI tools? How do we use it efficiently, effectively, and ethically to make sure we're delivering the most we can for students? The next aspect is that all learning matters. It used to be you looked for a bachelor's degree, maybe an associate's degree, maybe a master's degree.

And those were the real currency in education. What we're finding now is we've gotta have incremental steps, and so we see badges, certificates, and credential programs. How do we actually quantify that learning and give it value beyond maybe just the logo? I have a certificate from Oxford in AI regulation and management, and the value really is with Oxford.

But if I had taken that with another institution, how do we make sure that has value? There's a lot to be explored around that. One of the most exciting areas, and one I think is gonna be a central focus of what we're talking about here today, is education and industry collaboration. More and more, we're seeing these skills gaps being identified, and we're seeing employers work directly with universities to create programs that make students immediately effective in the workplace. There's a great story of L'Oréal working with Arizona State University and Mesa Community College to provide upskilling opportunities for students who are already in the workforce. They've already got their certificate in cosmetology or whatever, and they're upskilling to be able to manage other employees and run their own businesses.

What does that look like, and how do we make more of those programs scalable? And then there's the future of learning. This is everything from virtual reality to augmented reality to the brain science behind how we make sure that learning sticks. How do we make sure we're not teaching to the test, but actually developing real skills that are durable and stick with potential employees? So that's it. Now we're gonna dive really quickly into AI. A lot of people say, oh, we've been doing AI for years. And in theory, we have.

Right? AI has been around; the term was actually coined in the nineteen fifties. And at some level, when we talk about machine learning or those elements, it is AI. But what we're fundamentally talking about today when we talk about AI is the large language model, what's called generative AI. The first of those, ChatGPT, was launched by OpenAI on November thirtieth of twenty twenty-two.

It's my birthday, so it's easy for me to remember that day. But on that day, they launched what was fundamentally a transformative tool for a number of reasons. There's what's called the transformer model, developed by Google, that allows these AI models to remember and build on the text thread they're having. And they're trained on vastly more data now: billions, even trillions of parameters that these tools are being trained on.

And how do we make sure that we are keeping pace with that development? ChatGPT hit about five million users in a week; it was the fastest growth of a website in the history of the Internet. And that has continued as we've seen a proliferation of models, everything from Claude to Gemini to Copilot and a myriad of others now out in the marketplace. Within the education space, what we immediately saw was a very distinct chasm open up in that classic adoption curve.

Right? Many of you are familiar with this classic adoption curve: the early adopters, early majority, late majority, and then the laggards. You can see the students in green jumped ahead. And I will reference my children a couple of times in this presentation. I have a twenty-year-old daughter who's going into her junior year at the University of Utah, so I'm watching her experience preparing for a job and looking at what life after college looks like.

And then I have a fourteen-year-old son who's about to enter high school, and I'm watching his path and how he's being taught. They're very different experiences, and I'll talk a little bit more about that. What we saw is that students embraced these tools and jumped ahead. Educators didn't. What's interesting is that more recent numbers are much rosier.

This is from the Digital Education Council, a global survey. They said sixty-one percent of educators have used AI in the classroom. Sounds pretty good. More than half.

In reality, what we see is that eighty-eight percent of that half report only moderate, limited, or minimal use. They're not diving wholeheartedly into embracing these tools within the classroom. And in many cases, most cases even, they're simply not teaching the use of these tools in the classroom. There are a number of reasons for that. When they are using it, we understand they're using it in very specific ways.

Creating teaching materials, supporting administrative tasks, that time-saving aspect of it. Some are teaching students to evaluate AI, but that's a pretty small minority. I'll tell you right now that while my daughter's professors in college are mostly outlining how to use these tools and what they consider acceptable use within their classroom, my son is being told not to use them.

They've banned every AI tool they can list from the Chromebooks, they've told students that AI is cheating, and that's it. I was on a podcast recently and said that higher education and businesses are gonna be overwhelmed by AI-feral students: students who are very adept at using these tools but have never been taught how to ethically and appropriately apply them, when to use them and when not to. And that's a challenge. On the flip side of that, we're starting to see employers who are jumping the gun and saying, I'm gonna start laying off employees, and we're gonna use AI.

I'll tell you why that's a pretty big challenge here in the very near future. When educators aren't using AI, we know why. They don't have the resources. They're not sure where to start. They haven't been given the guidance.

They're concerned about potential negative impacts; there's more risk than reward. And the bottom three there really are fear-based. There's so much fear. If you're older like myself and you survived the nineteen eighties, you remember: nineteen eighty-two was a movie called Blade Runner, nineteen eighty-three was a movie called WarGames, and nineteen eighty-four was a movie called The Terminator.

Right? We were conditioned to fear AI. So, naturally, when AI comes, we fear it. We can't acknowledge that it's positive; we jump right to robots are gonna take over the world. And that's unfortunate.

The bigger challenge, too, is that educators truly are trying to understand how to apply these models. And in most cases, because there's been such a focus on academic integrity and cheating, they look at them as simply writing tools. I had a professor a few weeks ago say, I'm a good writer. I'm not gonna use AI. And I was like, well, then don't use it for writing.

Use it for first-pass grading. Use it for summarizing your emails. Use it as an assistant. But there's a serious lack of curiosity, and to be fair, a lot of that is fear-based. They just don't know.

No one's modeling what good looks like quite yet when it comes to using AI in the classroom. We're starting to see some of those examples come out, and I'll talk a little bit more about what that looks like, but it's in that period of transition. Globally, I've spent a lot of time in Latin America; I've been in Colombia, Mexico, Brazil, and Chile this year.

And I will tell you there's a lot more optimism around the use of AI. Students in Latin America, for some reason, seem a little more focused on using AI in productive ways, not to cheat, not to avoid learning, but to enrich their learning. Whereas in the US, where most of the innovation is actually being driven, we're seeing a lot more hesitancy, a lot more of that fear prevail. So it's interesting to understand that while we all started on November thirtieth, twenty twenty-two, on an even start line, we're seeing different perspectives emerge globally from that.

The reason it's incredibly important that we as a community drive AI literacy in our educators and our students is because, as the World Economic Forum looks at the jobs of the future, we see that most of those jobs are focused around data and AI usage. We've got to accept the use of these tools. We have to understand what these tools are and are not capable of so that we don't see this proliferation of jumping the gun and applying them in ways that put companies and individuals at risk. And we have to teach the fundamentals so we as a society can understand when these tools are being used to create content, so we're not taken in by fake news, deepfake videos, things like that. When we talk about AI literacy, it's pretty simple. There are four steps around AI literacy that are the basics of understanding.

The first is to know and understand AI. AI, at its core, is a black box that we ask questions via a chatbot, and it gives us an output. That output could be text, visual, audio, video, or code, and increasingly it's pretty good at code. What happens within that black box, that large language model, is to some extent beyond our ability to understand.

And that's why we get things like confidently incorrect answers, sometimes called hallucinations. We get resources that are simply made up. None of these are nefarious. The easiest way I can usually put it is that it's like autocorrect or autocomplete to the nth degree: it's trying to guess what you want based off of what you've given it and give you that answer.

That's why number two is actually really important: how to use and apply AI. You often hear talk about prompt literacy and prompt writing, and this is where you build enough context into what you're giving that large language model so it improves the outcome. And this isn't a one-and-done. Recently, OpenAI came out and said that saying thank you to your chatbot is consuming so much energy.

That was kind of an unproductive headline, in my opinion and the opinion of many others, because when you actually have a conversation, when you make it an iterative process, that's when these tools work best. Rarely is the first answer the best answer; you get there by building on it through conversation. I was actually pulling together some information for a presentation I was giving, and I said, hey.

Give me three different options for a title for this presentation. It gave them to me, and I said, great, those are amazing, really well done. Then it said, awesome.

If you like those, would you like some mic drop moments? And I was like, tell me more. Let's hear more. What it gave me wasn't what I would consider mic drop moments, but there were some interesting points where I was able to say, oh, actually, I could lean into that a little harder. But if I hadn't said thank you, if I hadn't made it a conversation, I wouldn't have gotten that additional prompt.

And that's what's so important about these tools. And when we actually look at the amount of usage, most common users are not driving enough energy usage to make a difference. You probably burn more energy idling your car as you leave for work, if you have a commute, than you did saying thank you to ChatGPT. So, really important there.
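
The iterative pattern described here, where each reply builds on the full thread rather than a single isolated prompt, can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `send_to_llm` function standing in for any chat API; the key point is that the whole message history is sent on every turn.

```python
# Minimal sketch of multi-turn prompting: the model sees the whole
# thread each turn, which is why iterating on an answer works.
# `send_to_llm` is a hypothetical stand-in for a real chat API call.

def send_to_llm(messages):
    # Placeholder: a real implementation would call a chat API here,
    # passing the full `messages` list so the model keeps context.
    return f"(model reply based on {len(messages)} messages of context)"

class Conversation:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = send_to_llm(self.messages)  # full history every turn
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("You are helping draft a presentation.")
chat.ask("Give me three title options for this talk.")
chat.ask("Great, those are amazing. Any mic drop moments to add?")
print(len(chat.messages))  # 5: system + two user/assistant turns
```

By the second question the model is reasoning over five messages of accumulated context, which is what makes the follow-up offer possible at all.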

The third piece is to evaluate what comes out the other end. This is incredibly important; I think it's the biggest question right now around AI tools. I recently had an executive at my very own company say, oh, I did a little research on the side, and I came up with the top fifty universities using AI. Here's a grid I did.

Go ahead and do some thought leadership with that. And I was like, wow, okay. I don't usually get that level of resource from other executives that I can run with. And I said, great.

I looked at it, and I said, okay, University of Michigan, Maizey. Very familiar with it. And then I looked at the second line, and I said, Notre Dame? That's not Notre Dame's AI tool. And I went through and started searching.

As I went through the list and googled, they were all fake; none of them were real. And what he had done, which is what a lot of us would do, is say, okay, I'm familiar with this one. This is the information I want.

Let me populate that first line, and then let me ask AI to autofill the rest of it and create that for me. And so that AI tool, whichever one he used, just went in and said, okay, I'm gonna auto-populate this based off of what we might call these things. It didn't actually go out and do the research to verify that they were real, and it even included links, links that didn't go anywhere. And it didn't do that from a nefarious standpoint.

It did that because that was the expedient answer it thought the person wanted. And this is what we need to remember: when these tools give us confidently false answers, they're not doing it out of any attempt to confuse or distract us. They're just trying to give us what we want and coming up short because we didn't give them enough parameters. But because the person who sent me that list hadn't understood that this is a common occurrence, he didn't double-check it before he passed it on.

Now I could have run out and done some thought leadership with that and looked quite foolish when it was all shown to be fake information later on. We need to make sure we've got experts in the loop, and that's why you often hear about human-in-the-loop AI. That's where we are right now. In the future, you'll have human-on-the-loop AI, which means overseeing agents doing a lot of different things: less checking the answers themselves and more checking the processes. But that's another story for later.
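
A first pass at that kind of expert-in-the-loop review can even be partially automated. Here is a hedged sketch, with invented data and an invented `flag_for_review` helper, that flags rows of an AI-generated list whose source links are missing or structurally malformed. It only catches rows that cannot possibly check out; a human still has to follow every remaining link, since AI tools can fabricate URLs that parse fine but lead nowhere.

```python
from urllib.parse import urlparse

# Sketch of a first-pass check on an AI-generated list: flag entries
# whose source links are missing or malformed so a human reviewer
# knows where to look first. Structural checks only; real
# verification still means following each link.

def flag_for_review(rows):
    flagged = []
    for row in rows:
        url = row.get("source_url", "")
        parsed = urlparse(url)
        # A usable link needs both a scheme and a host.
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            flagged.append(row["name"])
    return flagged

rows = [
    {"name": "Known real tool", "source_url": "https://ai.example.edu/tool"},
    {"name": "Claimed tool with bad link", "source_url": "not-a-real-link"},
    {"name": "Entry with no link", "source_url": ""},
]
print(flag_for_review(rows))  # ['Claimed tool with bad link', 'Entry with no link']
```

A check like this would have caught most of the fake grid before it ever reached a second pair of eyes.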

But we are too early in the development of these tools to hand off jobs of any level, especially the high-risk jobs, to AI tools without human experts monitoring them constantly. And the fourth piece is AI ethics. This is everything from when we replace humans in jobs, to what the energy consumption looks like, to what data we consume, when we notify individuals that we're using AI as part of a process, and what level of transparency we provide. It's a pretty big discussion. So we've been talking about teachers; now let's move to students.

We understand how they're using these tools as well: searching for information, checking grammar, summarizing documents. My son, a couple of years ago when this was first coming out, said, Dad, I don't understand why I can use Grammarly, but I can't use ChatGPT. What's the difference? These are tools at my fingertips. And no one was telling him, because they were just saying it was cheating.

It took me back to the first class I ever taught, when I was still in college. Halfway through the year, a student handed in a paper that was cut and paste; they'd cut text and images and pasted them in, and I was like, well, that's cheating. In my experience, because they hadn't opened the National Geographic and handwritten all of the answers out of there, it was cheating. It was a shortcut. It saved them the labor that I associated with learning.

We are seeing something very similar now. When we look back at the historical references, whether it was the calculator, the Internet, or Wikipedia, we're seeing a very similar evolution in that idea of needing to do the work, that the pain is the learning. How do we figure out how to address that in the future? One of the really interesting things right now is that Claude, the large language model developed by Anthropic, came out with what they call a learning mode. They published a paper about it with this graph.

What's interesting is they said, okay, if a student is asking for a direct answer to solve a problem, that's where cheating lives. That is, just give me the answer: tell me who the third king of the modern English monarchy is. If you ask for the direct answer, that is where cheating lives.

But then there's also output creation, which is, okay, now give me photos of them. There's learning involved in some aspect of that. But when you actually look at the collaborative approach to that problem-solving piece, that's where the Socratic method lives. It's not going to give you the direct answer.

It's going to ask you guided questions that lead you to that answer. And once you understand that you can train a large language model to act in that capacity, the world opens up. The idea of all these different use cases opens up. It's not just gonna give them the answer. It's not just a cheating tool.
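
In practice, steering a model into that Socratic capacity usually comes down to how you frame its instructions. The sketch below is illustrative only: the prompt text and `build_tutor_messages` helper are invented, and this is not Anthropic's actual learning-mode implementation, just the general guardrail pattern being described.

```python
# Sketch: steering a general chat model toward Socratic tutoring by
# constraining it in the system prompt. Illustrative only; the prompt
# wording and helper are invented, not a vendor's implementation.

SOCRATIC_GUARDRAILS = (
    "You are a tutor. Never state the final answer directly. "
    "Respond with one guiding question at a time that leads the "
    "student toward the answer, and ask them to explain their "
    "reasoning before you confirm anything."
)

def build_tutor_messages(student_question):
    """Wrap a student's question in the tutoring guardrails."""
    return [
        {"role": "system", "content": SOCRATIC_GUARDRAILS},
        {"role": "user", "content": student_question},
    ]

messages = build_tutor_messages(
    "Who was the third king of the modern English monarchy?"
)
print(messages[0]["role"])  # system
```

The same question, wrapped this way, elicits guided questions instead of a direct answer, which is the difference between a cheating tool and a tutor.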

I can put guardrails around how I use this large language model so it acts as an extension of me, the educator, when I may not be able to sit with a student. Pretty powerful. That's one aspect I think a lot of these large language models are starting to lean into. The other piece that's interesting is that we're seeing a shift in the fundamental Bloom's taxonomy.

It's the kind of foundation for learning. I was at San Jose State University probably a month, month and a half ago, and what was fascinating is we had a panel of educators talking about how they had embraced AI and were using it. I was sitting with folks from OpenAI and AWS and Microsoft, and they were giving us their feedback. There was a professor of product design who said, we used to use fifty percent of the course time to create an idea, model that idea for a package design, create all the specs, do all of that.

And then we would use about ten percent at the end of the course to have them reach out to individuals within the industry of their choice to pressure-test that concept, that design. He said, now that design process takes ten percent, and what I have them do is reach out to three industry experts and actually get feedback from each of them. You can imagine, for my children, the idea of picking up the phone and calling another human is rather overwhelming. That idea that they have to lean into those human skills is really powerful.

So as we start talking about how we train individuals and how we teach our courses, AI allows us to shift from more of that mundane work to more of the human skills. There's a lot of opportunity there to shift our thinking. And why is this important now? I tell people this a lot. Some people I speak to are very advanced, and what I'm talking about is, yeah, of course, you've heard it before, and many of you may feel that way. But I also talk to people for whom most of this is new.

They've feared AI. They haven't spent a lot of time digging into it. So I always assure them that getting on board is not hard. You can easily jump into AI, start using these tools, and become adept at them really quickly. The challenge is that it's constantly evolving, but you don't have to be using the model that's based off of three trillion parameters.

You can use the free version of ChatGPT and accomplish some pretty amazing tasks, then dive into other tools. I'll talk a little bit more about that in a bit. But why that's important is because we are in an evolutionary stage. We've moved from that generalist chatbot, where I can go ask ChatGPT to help me write a presentation or a recommendation letter and it'll give me the output.

Now we're starting to see more of the subject matter expert, and I'll talk about an example of this later. Based off of the databases that we point these large language models at and train them on, we can get very specific, very smart tools within a very narrow area of focus. And this is where we're moving. Actually, there was a paper that came out two days ago where Anthropic talks about the fact that the future really is smaller large language models.

We might get an individual large language model in every course that is very knowledgeable about the content of that course and very little else. This is an evolving state. Right now we're somewhere between that subject matter expert stage and the AI agents stage. You'll hear a lot about agentic AI or AI agents: these are AI tools that can log into a system, or multiple systems in most cases, and accomplish tasks on your behalf.

This is like saying, hey, Moneypenny. You know, James Bond's assistant's name, which happens to be my AI's name. I know. Weird.

But, hey, Moneypenny, go do some research on who I would need to pull together on an action item based off of my email, and then go schedule us a time to meet as soon as possible. It would be able to go into my email, look at that past information, identify five people, go out and look at our calendars, and say, the soonest time you're all available is two weeks from Thursday at two PM; let me schedule that and send it off with all that information. Imagine the time savings of an agent. In many cases they have to log in to different systems, but the ability for them to accomplish these tasks on your behalf really opens the door, far beyond that chatbot model we've been in, to this much more proactive model.
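
The scheduling step in that Moneypenny example, finding the soonest slot everyone shares, is ordinary code once an agent has calendar access. Here is a toy sketch with hour-granularity availability; the attendee data and the `earliest_common_slot` helper are invented for illustration.

```python
# Toy sketch of the scheduling step an AI agent would perform after
# identifying attendees: intersect everyone's free slots and pick the
# earliest. Slots are (day, hour) tuples; the data is invented.

def earliest_common_slot(calendars):
    """Return the soonest slot free for every attendee, or None."""
    free_sets = [set(slots) for slots in calendars.values()]
    common = set.intersection(*free_sets)
    return min(common) if common else None

calendars = {
    "ryan":   [(3, 10), (3, 14), (4, 9)],
    "bianca": [(3, 14), (4, 9), (4, 15)],
    "joy":    [(2, 11), (3, 14), (4, 9)],
}
print(earliest_common_slot(calendars))  # (3, 14)
```

The hard part of an agent is not this logic; it is the authenticated access to email and calendar systems that surrounds it, which is why agents logging into multiple systems is the defining feature.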

And this is where we're going in the very near future. We're already seeing a lot of businesses and universities really focus on an individual partner. If you're a Google shop, you're using Gemini and you wanna use Gemini across all your systems; a Microsoft shop wants to use Copilot across all its systems. We're seeing a lot more of that.

And there are a number of reasons that's beneficial to a lot of schools and businesses. We're starting to see a proliferation of large language models, and the management of those tools and the data consumption by those tools puts businesses at risk. So there are benefits across the board there. What's interesting is that you see this in the next wave of AI use cases from Bond.

This is the venture capital firm. I throw this out there because they're very smart about watching where the market goes and what's next, and they're saying personalized education. They're not saying higher education; personalized education across the board is one of their top frontiers as of a month or two ago. And it's a pretty narrow set of focus areas.

Right? Drug discovery, precision manufacturing, robotics, autonomous scientific research, supply chain operations, personalized education. The idea that based off of your data, your knowledge of an individual, you can deliver a just-in-time, personalized learning experience for them is the future. And so how are we addressing that? I don't think this is five years from now. This is over the next year or two. We'll be leaning into this pretty aggressively.

I love to pack resources into my presentations that everyone can use because I think a lot of these are very helpful. A lot of these are very higher education centric, but whether or not you're an educator, you can benefit from these. So OpenAI is one of the vendors that offers teaching-with-AI tools. How do you teach using AI? How do you align AI to your courses, to your training materials? How do you build that in? They offer that, as do MIT, the University of Michigan, Tec de Monterrey in Mexico, and the University of Sydney.

I've got several more slides of different institutions and businesses using these. So you're not starting from scratch. As you apply these tools, there are so many different resources out there that you can go discover and pull into your classroom. You don't have to feel as though you're starting from scratch. And, again, one of the things I talk about with a lot of folks is: dive in and start using these tools.

Early on, on that Tyton Partners slide, there was a great quote that said educators who had used these tools had a much more positive outlook on using them in the classroom, and there's a reason for that. When you start using these tools, you understand what they're capable of, but you also understand what they're not capable of, what they struggle with. Right? And it's so funny. There are certain tasks it really excels at, and there are tasks it doesn't. You know, I actually said, I don't wanna rewrite this article that I'd written two years ago. Can you go rewrite this article and update it for today with a number of additional references and things like that? Sure.

It ran off, and what it came back with was truly terrible. Right? It wasn't good. And I was like, was my original article that bad? Right? So I went back and looked at it, and I was like, no.

It's good. I'm a good writer. It's just that I didn't spend the time, and I didn't wanna spend the time, to write a detailed enough prompt and give enough material to really get a solid output. That's on me. I might as well have just gone and written it myself, which I ended up doing.

Right? This is really important. The more you use these tools, the more you understand what they're good at. And most of them have free versions or very low cost versions for individuals. And so as I developed an AI training process for my team, I actually sent them out to do Google's AI basics course that's available for free. Sometimes it's free.

Sometimes it's forty-nine dollars, but it's essentially free to go out and learn the basics of these tools. It's a great model for helping people overcome their fear of this. And, frankly, it's nothing that you and your individual teams can't develop yourself. Right? It's simply: how do I write a great prompt, take a good use case that might be familiar to members of my organization, then apply that, have them walk through that model, and measure the output? It's a pretty solid process just for opening the door. And then once they're there, they can go try these different models.
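That "write a great prompt around a familiar use case" step can be made concrete with a simple template. A minimal sketch in Python; the role/task/context/format structure is a common prompting pattern I'm assuming here for illustration, not something prescribed in the session, and the function name and example values are hypothetical:

```python
def build_prompt(role, task, context, output_format):
    """Assemble a prompt from the common role / task / context / format pattern."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

# A familiar workplace use case, spelled out so the model has enough material.
prompt = build_prompt(
    role="an experienced corporate trainer",
    task="Draft three quiz questions on our expense-reporting policy.",
    context="The audience is new hires in their first week.",
    output_format="a numbered list with an answer key at the end",
)
print(prompt)
```

Having a team fill in a template like this, run it against a model, and compare outputs is one low-stakes way to practice the "write, apply, measure" loop described above.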

Right? And there are great tools for images, videos, and more. One of the ones that's really getting a lot of buzz right now is NotebookLM. It's actually from Google. And one of the things you can do is take a course, grab all the materials, and pull them into NotebookLM. And then you can actually, I see Pam's going, I love NotebookLM.

You can actually say, I wanna consume all these materials as a podcast. Not only that, as you're listening to the podcast, you can hit pause and ask questions to dive deeper, and it'll answer based off of the knowledge it's consumed. That's the type of personalized education we're talking about here. Right? And it's the type of thing companies are starting to build into their platforms, so you don't have to grab all the material and throw it out there. It's: how do I, as a student, want to consume this? Everybody learns differently.

Right? We've talked a lot post-COVID, and this conversation was pre-COVID but really gained traction during COVID: how do we make sure we don't lose accessibility for students as we create more technology-enhanced learning? Right? And technology actually closes a lot of those accessibility gaps. For students who are more visual learners, for students who need that repetition, you know, they can do flashcards. You can do all sorts of things, and NotebookLM is one of those things. And then there are tools like Khanmigo and others that are, basically, great student assistants that are starting to have a demonstrable impact.

Right? One of the things I love, and this one's great: AI for Education has started to archive prompts. Right? And I talked a little bit about how prompt engineering is an incredibly important skill. How do you write a prompt to get the outcome you want? Well, they started saying, okay, what is the task? Is it an administrative task? Is it something around testing and assessment? Is it something for lesson planning? Right? They've created prompts for those. You can go out and find them. The references are here, and there's a link here as well.

But you can actually go out and borrow their prompts, customize them for what you want, and build off of that. It's one of the reasons I've always loved education. It's so collaborative. We don't tend to hoard our information. We share it broadly.

And this is a perfect example of that: as you start building prompts, go out, look at what's out there, and build off of those. Incredibly helpful. I mentioned a little bit that those teaching assistants are starting to have an impact. This was from, I think, the end of last year, and it was a physics course at Harvard. They had one section that had an AI tutor.

They had one section that did not, and they saw demonstrably higher outcomes from the section that did. Right? And the reason isn't because it's replacing an educator. I am a firm believer that the connection of teacher to student is what drives education. We walk around every day with all of the world's knowledge in our pockets. We still need guides to learning.

We still need educators, and that's incredibly important. Right? And so what's changed is students. Right? Students demand more flexibility. My daughter lives on campus at the University of Utah, but she also wants to take online courses because they fit into her schedule better. She has to go to work.

She has a social life, and she wants to make sure she can take those classes at times that are convenient for her. And so even though she lives on campus, she'll take online courses, and that's the shift. Universities used to think online courses were more for adult learners. Right? They very much siloed those. And what we see now is students at every level demanding more flexibility.

And so that means: how do I engage with my course outside of defined class times? How do I get individual support that my educator may not have enough time to give me because I'm in a relatively large class? Right? If I'm doing corporate training, I may not even know exactly who to go to for follow-up questions or to dive deeper. AI tools can support that process. And so here are some examples. These are largely from the education space, but they're a good example of how education is evolving.

And I will say, it's so interesting. In January, I was at an AI conference at the University of Florida. And within higher education, there is still the idea, with some educators and even some universities, that their job is not to prepare students for jobs, but to make better members of society. A gentleman raised his hand and expressed that opinion, and it was actually great to hear the groan from the rest of the audience. Because what he was overlooking is that, fundamentally, AI literacy and preparing students to use AI is good not just for jobs, but for our society.

Right? In a world where deepfakes are almost indistinguishable from the real thing, where the imagery and the videos become so good that we can't tell the difference, at least understanding how AI works helps us understand what's possible and helps us say, no, I'm gonna be a little skeptical about that image because it could be AI. We should ask ourselves that. Right? Not just take it for granted. And that's why every member of our society needs to understand how to use AI tools effectively for the good of our whole society.

Right? It'll also help with jobs in the future, because, you know, Bianca and I were actually having a conversation earlier about the idea that educators will not be replaced by AI, but educators who don't adopt AI will be replaced by those who do. Right? These tools save too much time and create far too interactive an experience. We've gotta do that. So when I talk about the University of Michigan: they've been at the forefront, and they were probably the first institution that actually came out with a siloed version of ChatGPT.

They created Maizey, which was their own firewalled version of ChatGPT for students and educators to use, to protect student privacy, IP, things like that. What's incredibly interesting is when they rolled out Maizey, they said, hey, use Maizey. Don't use the public versions of these large language models. And the students said, no.

We think you're monitoring us. We think you're looking over our shoulder, and we don't want to use what you want us to use. The university hadn't even considered that. And so they kinda went back to the drawing board to rebrand Maizey. They brought students and educators into the room and said, okay.

How do we make sure we have your trust in this? How do we make sure you're gonna use these tools we develop, because we've developed them with your input? And then they relaunched Maizey, and they've gone on to offer a number of tools. So you can build using Maizey. They've got GPT toolkits so you can customize those materials for everything from research to your own teaching assistants. Pretty incredible. And then they've got a mobile tool as well.

So they've really set the standard. They were also one of the first to have an acceptable use policy. And if your organization doesn't have an acceptable use policy around AI, you definitely need to focus on that. It's not just plagiarism. It's not just misrepresenting your work.

AI tools are part of our workforce toolkit. We need to embrace them, but we also need to provide the guidance and guidelines. People want that clarity. ASU, if you're not familiar with Arizona State University, they're one of the most innovative schools in the world at providing education to as many learners as possible, you know, innovating with technology, providing new tools. So they've got a lot of great guidance on how you teach and learn with generative AI.

Their guiding tenets are really amazing. But one of the reasons I love this is they've rolled out Adobe Express, Copilot, and Zoom AI Companion across their organization to literally everyone involved. And they've also developed a vendor IT risk assessment, with the graph here at the bottom, which is incredibly important. I actually had a startup that I met at the ASU+GSV conference two years ago reach out to me recently and say, hey, do you wanna buy our URL? Do you wanna buy our IP? Do you wanna buy our beta list? They had basically run out of money, and they were trying to sell their assets.

But I was alarmed, because their beta partners probably don't understand that their list is now being sold to someone else. And that's alarming. Right? As these startups run out of money, how do we make sure they're making the right decisions? How do we make sure we choose the right partners? And ASU's process for doing so is really pretty impressive. RMIT is a university down in Melbourne, Australia. And, actually, Edward, I will.

Actually, I have a link we can share afterwards. Sorry, there was a question in the chat around sharing the basic AI training with my staff, and I'll share some resources around that, because there are a lot of great tools out there. So RMIT, what they did was roll out Vowel, similar to Maizey at the University of Michigan. But what they found is there were some very specific use cases that most students started gravitating towards.

And so what they did was say, hey, we can improve the experience by making these tools better trained on the data associated with each use case. And so they've got Imagino, where you can role play with a persona. You've got Quizical.

It's their quizzing tool. You can do practice quizzes, and it'll get you ready. Essay Mate lets you upload your essays, and it'll give you feedback. Poly Chat helps you understand the policies across their campus. Prompto gives you feedback on how to build better AI prompts.

Right? They've trained these different models. And it's really interesting, because there's one that's not here, because it's a little higher risk. One of the things we've started seeing on university campuses across the globe is students turning to the general-use AI chat tools for mental health counseling, because: I can have a conversation, I can get feedback, you can help analyze my feelings, and I don't have to run into anyone on campus.

And it doesn't cost me anything. But these models aren't really trained for that, and there's a lot of liability associated with that. And so there are a lot of universities struggling with: given our tendency to anthropomorphize these tools, how do we make sure that as we go into some of those riskier areas, we're not exposing students or information to risk? Tec de Monterrey, again, one of the most innovative schools in Latin America, located in Monterrey, Mexico. They set up a third-party lab with WiseLine, and they're doing a number of innovative things. And they were one of the first to actually have AI training courses.

They've actually got forty-four programs and fifty different AI courses in how to use AI. These aren't courses that use AI; they're courses for students on how to use AI. It's one of the most robust curriculums I've seen. If you go out and explore their website and their Gale facility, a lot of those course descriptions and things like that are available to help you think about how they're rolling these out to students and how we could potentially explore some of those use cases. Again, curiosity around the use cases is one of the biggest challenges, and I love going out to businesses or campuses and hearing about how they're using AI in ways that I haven't even thought of. It's fascinating.

And so the more we can prompt that curiosity, the better off we are. The California State University system has set up, with their AI Commons and a group of vendors, an effort to help guide what the future of the workforce looks like using AI. I'll actually be out in Sacramento in a week, basically talking to them about what this looks like. How do we, as vendors, help support both the universities and the businesses those graduates will go to in the workforce? This is another aspect too. Right now, what we see as you look at data around jobs is people asking for experience with AI.

Not specific skills; there's no real measurement of what they're looking for. They just want some experience with AI to help bring this into their businesses. And we're hoping to build a little more complexity, a little more rigor, into what that looks like. What does basic AI literacy look like? What does more advanced literacy look like? What does the code of ethics and understanding around that look like, and how do we present that to employers? Right? And so everyone from NVIDIA to Google to Instructure is working together on that.

It's a pretty amazing collaboration. And there are a lot of those collaborations happening across the globe. A couple of them: the University of Florida, where NVIDIA has just upgraded them to their second supercomputer. And so they're one that's burning a lot of energy, but they're pushing the boundary of what AI is capable of in ways that are more research focused and incredibly in-depth.

Right? They're crunching massive amounts of data to try to solve all the world's problems. Right? Penn State is doing something very similar with IBM. I throw these out there because they're great examples of bridging that gap between universities and business, for the good of future employees, but also for the business and the university. Right? How do we build together in really powerful ways? This is one I think is really important. As we look at education-industry collaboration and start thinking about what we can do, one of the biggest pieces we're seeing is a rise in experiential learning.

Right? How do we design real-world AI projects that prepare students for how they're gonna use AI when they get to the workforce? This is one of those secrets about universities: universities don't always know exactly how to prepare students for jobs, because the jobs are evolving so quickly. And without that closed loop, it's easy for a skills gap to develop. Right? And that's what we see. So how do we work together to create more real-world scenarios and improve the programs around AI so students are coming out with skills that make them immediately useful? Launch AI-integrated apprenticeships.

Right? Apprenticeships, internships. Right? How do we actually put students to work, bridging that gap with experiential learning in really powerful ways? Create AI learning labs. Right? Universities are very open to this. How do we have businesses come in and work with schools to create that communication back and forth that's gonna close those loops, and experiment with use cases neither side has thought of? Right? Embed AI across the curriculum.

Right? That's something universities are working on, but, frankly, businesses are working on it as well. Right? How do we build that AI literacy program in a way that brings everybody forward? Right? And, frankly, there's no reason to opt out of AI because you think it's gonna turn into evil robots. That's a little out there, but we had people doing that early on. I actually had someone on a podcast in January of twenty twenty-three say, we have to make it so students can opt out if they don't wanna interact with AI.

I was like, we've moved beyond that. Right? Two years later, we've moved incredibly far beyond that, but those concerns still come up, and we still need to provide transparency to support that. And then there's facilitating mentorships from industry. Right? The more we can create those different tendrils reaching in, the better. Apprenticeships are amazing, but even mentorships and giving guidance between faculty and professionals in the industry is incredibly helpful.

This is what Gartner thinks the next phase looks like. Right? We're moving into the agentic AI phase. We need to prepare for that. But AI governance platforms, how we measure the usage of all those tools, disinformation security, post-quantum cryptography? This is where it goes past my level of knowledge. Right? But we're exploring those new frontiers of communication, like we're seeing NVIDIA and the University of Florida doing.

But one of the things I love is that human-machine synergy. We can lean into the human skills that AI will never be able to do. Right? It's the communication. It's the empathy. It's the ability to manage others.

There are so many aspects of this. When we actually understand how AI works, we can lean into these and take a realistic step forward, not just jump ahead and say we're gonna lay off employees because AI is gonna do the job. That's unrealistic, and it shows a lack of understanding of how AI works. So one of the things I love, the best quote: teachers must prepare students for the student's future, not for the teacher's past. And, you know, I've been in education technology for over twenty-five years.

What I think is so important is that it's very easy for us to look at the world through the lens of our own experience and think that people having a different experience are somehow getting the short end of the stick or aren't doing it right. Right? We have to understand that our students, our young employees, are having a fundamentally different experience than we had, and that's okay. Do we connect with them in the way they wanna be connected with? My children would rather sit in their room and watch shows on a small screen than sit in front of the big TV in the living room. That's a fundamentally different experience than I'm prone to, but that's how they wanna learn. That's how they wanna engage.

They're microlearning on YouTube. Right? That is a different experience. How do we support that? So, just a great perspective check there as well. Now, I wanna make sure we have some time to get to questions. Bianca, I saw a couple of questions flash by, but I wanted to make sure we got to some of them.

Yeah. And, really, I'm sure you've been keeping an eye on the chat. The conversation people have had with each other has been fascinating. I reckon if we prompt a sidebar conversation, you know you're doing something right. So, yeah.

There's something that came up just recently: a lot of people have mentioned in the chat that they're not currently in teaching roles, that they're more on the corporate training side of things. Is there any different or additional advice you'd give to someone who's not in a K through twelve or higher education classroom but maybe is doing this work in a business? Honestly, the fundamentals are the same. Like, when we talk about AI literacy, if you go out, Google, AWS, Anthropic, Microsoft, they all offer basic AI literacy courses for free or for a small charge. Even if you think you know AI, these are worth taking. I'm a huge advocate, because they teach you a little more depth on the basics of how these tools work, a little more on how to get the most out of prompting, a little more on the ethical conversation.

And as we move towards the more agentic approach, I think the more we can stay plugged into what that means and how we apply it is incredibly important. In fact, I will pull together a list. There's a list, so I'm following up on Pam's link here. Yes, absolutely. Everything that was in my course is linked, but there are several more we'll add.

And when Joyce sends out the follow-up, I'll make sure I add those as well. But in order to teach AI, you've gotta know AI. And that's what a lot of these links are really about: how do we bring that AI literacy level for educators up so they can then teach this to students? Right? And I think, as employers, we sometimes look at our younger employees and how they're using these tools, and we have that same mindset that this is somehow cheating or somehow inaccurate. And we've gotta get over that.

Right? We need to make sure they're using it transparently, using it critically. Right? They're reviewing the outcomes in really positive ways. I was talking to a lawyer this last weekend who literally just ran into something where the law firm they were having some legal action with was using AI completely, and they weren't reviewing the outputs to the level they needed to. And they opened themselves up to massive risk, because they essentially sent his law firm everything they were doing wrong.

The AI had misunderstood and actually analyzed the wrong side of the argument, and then they handed that to their opponent. Pretty bad. Right? And there are a number of those stories. Right? There's the employee at Samsung who took a set of code and dropped it into the public version of ChatGPT to check the code. That code is now not proprietary.

It's not protectable. Right? I know. We can avoid these things if we raise that bar. And so, again, most of this is not even necessarily for teaching. It's beneficial just for AI literacy: how do we understand these tools, and how do we change our perspective?

Yeah. And I think for any of us who have been classroom teachers, there's the whole thing prior to this that was multimedia literacy, knowing when you're going on the Internet how to evaluate your sources, and this just feels like the next version of that. Yeah. And Victor just made the comment: you know, teachers always said, well, you're not gonna walk around with a calculator in your pocket for the rest of your life.

You're like, like this? Yes. Right. Yeah. We have all of the world's information there. Right? And so we need to understand that the new normal is AI at our fingertips all the time.

That's not gonna go away. Pandora's box is open. So how do we work with that? And, again, to Victor's point, the more we look at history, the more we see the same challenges. I remember articles when the Internet first came out talking about how much less productive American employees were because of the Internet. And I think, I used to have to get in my car and drive to the library to do research.

Right? I had to fax things that would take hours. Right? We are so much more productive because of our access to that information. That propaganda around being less effective is really alarming to look back at and see. And I think we see a lot of that around AI today, and some of the fear mongering. We're getting better.

I think we've moved beyond a lot of that, but you still see headlines like, my AI said it was gonna kill me. Right? You know? There are a lot of those. If you mess with the prompts enough, you can break the system. They're getting better, and we're trying not to break them so much.

So we've got time for one last quick question, and this ties into some stuff you and I talked about before this, as well as some threads I saw in the chat, of people being interested but also feeling overwhelmed by how many tools and how many different kinds of AI resources are out there. And like you and I discussed, it's easy to sometimes feel that you've missed the boat. Yeah. On some of this. So if someone's in that headspace, they want to know more, but they're not sure where to start or what tools to start poking at with a stick first, what are some easy ways someone can stay up to date with what's happening in AI? Yeah. There are a couple of different things.

In fact, I'll plug the next thing. Melissa Loble, who's our chief learning officer, chief academic officer, and I actually host a podcast that's a little more education centric. It's EduCast three thousand, if anybody gets that reference. And there are a number of podcasts. Once you start going out, there's a lot out there. And, again, it depends on how you wanna consume the information. There are great courses, simple online courses.

There are great podcasts. I would recommend those over books; I've gotten a couple of books sent to me, but the minute you print a book, you're automatically out of date. Getting in there and getting started is the first step.

What's really interesting, and by the way, Edward, I love it: he talked about their learning and development theme in the workplace, AI and us, innovating with integrity. Spot on. That idea that you can innovate and keep healthy guardrails on everything is really important. But one of the reasons I show those links around the general-use tools and the free tools is that once you get out there, it's a little bit addicting, and you start going, oh, I could use it for this. I could use it for this.

I'm gonna go deep dive on that. And then you can actually start looking at specific use cases that apply to you and finding different tools. And you'll rapidly realize you're not too far behind. Right? Even though these tools are being trained on more and more parameters, what we're actually seeing is micro versions and mini versions of these large language models that have less environmental impact, consume less energy, and are better at their specific tasks. And so it's almost coming full circle, where we're getting better purpose-built tools, and we're not too far behind.

Right? The videos that are coming out today are pretty impressive, like the deepfake videos or the AI-generated videos, and so are the images. And yet there are still ways, if you use AI enough, to start distinguishing those and be a little more skeptical. So, yeah, I think we'll add a list. And, Joy, I'll send you a list that we can include in the follow-up this afternoon. But there are just so many different ways to start.

It's finding the one that's convenient for you and not being afraid to jump in. Awesome. Well, thank you so much for everything you've shared today. It sounds like you're gonna give people a lot of really fantastic links in the resource materials that are shared after the webinar. So if people are looking for more ways to get started, that's got you covered as well.

Now, Ryan, if someone comes up with a question three weeks from now and is just like, oh, I wish I'd asked Ryan this, how can they get in touch with you? I'm on LinkedIn. There may be one other Ryan Lufkin in the world, but I'm Ryan Lufkin with Instructure. Send me a note. I'm happy to connect, because I think this is an ongoing conversation, and it's easy to feel overwhelmed.

And Evan posted that he's feeling overwhelmed. It's easy to feel overwhelmed, and there are times when, with the pace of news coming out, you have to be able to filter it and just focus on the things that are most important to you, because there's a lot of innovation happening. And we can't know everything, but we can certainly, you know, focus in on what could have a positive impact in our role. Fantastic. Alright.

Well, big thanks to our sponsor Instructure, the maker of Canvas, and, of course, today's speaker, Ryan Lufkin. And a big thanks to everyone here in the audience today. Whether you're watching this session recording later on or joining us live in the webinar right this second, we're always so happy when you choose to spend your time with us. So thank you. Have a fantastic rest of your day. Thanks, everybody.

During this on-demand webinar recording, you'll find:

  • Real-world examples of use cases applying AI to improve workforce development
  • Tools for building a case to explore AI in training and professional development
  • Examples of tools available for implementation today