Introducing IgniteAI
This session will explore the AI landscape in education, introduce the thinking behind IgniteAI and share how Canvas is building trust through transparent, ethical innovation.
Hello. It's so great to see so many people here today. We can't thank you enough for being here. Wherever you are joining us from in the region, we love that you've chosen to spend your time with us here today. I love seeing the representation across our region, so feel free to pop in the chat where you are joining us from. Those of you that have been in a webinar with me before also know I love to chat about the weather, so would love to see how it's looking outside your window at the moment.
So you can also share that with us as well today. My name is Farrah King. I'm the global growth product marketing manager here in APAC, and I'm really honored to be running this session today alongside Melissa Lobel, our chief academic officer here at Instructure. Mel leads our global initiatives that connect pedagogy, technology, and innovation. And with more than two decades of experience in education technology, Melissa's worked closely with institutions, educators, and policymakers worldwide to ensure that technology empowers teaching and learning.
She's a passionate advocate for equitable access, evidence-based practice, and lifelong learning, values that guide her work in shaping the academic strategy here at Instructure and supporting our educators in an evolving digital landscape. Today, we're gonna be covering a few things, and we know there are so many questions around AI. It's big. It's complex. And today, we're gonna look at the real challenges that you've told us about in untangling the current AI landscape.
And we'll introduce you to Ignite AI, our secure, in-context AI that's designed specifically for education. Today's session is really gonna be about the why and the what regarding AI in education. We won't be demoing specific features today. I know that might be a bit disappointing for some, but we're gonna be looking at that in next week's session, and we'll share details about that later on. Our next webinar will definitely go deeper into the how, and we will showcase Ignite in that session next week.
But today, we're gonna be addressing what we're calling the AI conundrum. We know that AI use is exploding right now. Educators and institutions are understandably apprehensive and also cautiously optimistic. So we wanna try and make sure we're helping you deploy AI effectively across your institutions, and we know this is a big challenge. We've also heard from many of you in the region that you want greater insights into the pedagogical approaches that we use to inform our product development, how we incorporate the ed in ed tech, basically.
So without further ado, it's my pleasure to hand over to Melissa Lobel. Thank you so much, Farrah. It's so wonderful to be here with all of you today. And as Farrah talked about, we're gonna give you a bit of the why and the what behind AI: what we're learning, what we're watching, and what we're leveraging in order to direct our strategy as a company and as a subset of solutions for you. My work as our chief academic officer is grounded in three elements.
The first is listening to you all: hearing, seeing, observing, spending time with you as educators, and understanding what you are facing, what you are most challenged by, and where you see your biggest opportunities. The second aspect of the work is grounded in research, and I'm gonna share in a few minutes some of the research that we've both done ourselves and uncovered as we've connected more broadly into the community leveraging AI. And then the third aspect of the work is really in the work itself. You know, I am an instructor. My team members are all active educators, and we're in it, trying, experimenting, and aligning the work that we're doing every day with the practices that we're seeing for the future of AI.
So that grounded work has shaped something that we call the impactful eight. The impactful eight are the eight big trends, but more than that, opportunities, in some cases challenges, but largely opportunities, that we see facing educators globally today and in the next couple of years. And AI, of course, is a key part of this. But I wanted to share just a little bit of the grounding of these eight, and some of the research specifically related to generative AI, again to lay a foundation of what we're seeing, what we're researching, what we're learning, and what we're experimenting with that is driving our AI strategy. These eight, these impactful eight, cover everything from challenges or opportunities that you're immediately faced with today to things that are coming in the future.
And it's a spectrum therein. It's everything from, as you see on the left, operational efficiency, effectiveness, and scale. This is a key area that we know you're facing today. We're all doing more with less as educators. And in that, we're all trying to scale and reach a wider population.
Well, that becomes essential not only in how we support you across our solutions; you're gonna see scale and efficiency in particular threaded throughout our AI strategy. If we think about lifelong learning, on the opposite end, we think, okay, how are we going to connect and interact with learners at all stages in their own personal and professional journeys? That, as well, you're gonna see embedded in our AI strategy, especially as we leverage the tools that we are currently building and experimenting with with customers into new solutions that serve both the traditional and nontraditional learner. Similarly, if we look at the science of learning and recognition of learning, this is about what we understand as humans about how we learn, how we engage, how we have agency over our own growth, and how we demonstrate that learning out more broadly into the community, or how we demonstrate the uniqueness of us. Again, you're gonna see this show up in various places in our AI strategy, both in how we pedagogically build tools and make sure that you're able to deploy them using modern pedagogies, like cognitive-based learning, growth mindsets, or even experiential learning. But you're also gonna see it in the way the tools interact particularly with faculty and administrators, and how you can unpack who and where and how learning is happening across your institution. Now, generative AI is a given.
It's one of these impactful eight. It's what we're here to talk about specifically, but it also extends into assessment and evidence-based design. You will see where and how we're tactically leveraging AI to support you in the creation and implementation of assessment, but you'll also see how it's actually rethinking how we understand learner progress, and how we can make sure that it maps to learners' ability to develop skills and competencies that are meaningful for their workplace vision and future. And then finally, all of this, you'll see, is being done with educational and industry partnerships.
And we'll talk a lot about that a little bit later in this conversation, because it's the foundation for the platform of what we call Ignite AI. Now, as a key part of the three elements of how we've uncovered these impactful eight and how we drive our strategy as a company and from a product perspective, we do some of our own research. And this research was done recently to really understand both the top challenges that institutions are facing in 2025 related to the implementation of AI and where they need the most help. This is a global survey, and it really helped us understand, and it confirmed what we suspected: that some of the biggest challenges that you all are facing are around the security, the privacy, the fairness, and the equity underneath the leveraging of AI.
There are a lot of questions in the space about this. How do we ensure everyone is able to have the right kind of access to these tools, and how do we make sure that risk is being mitigated? This, as you will see shortly, is one of the key fundamental pillars in our use and leverage of AI throughout our products. The next big challenge that we're hearing from you is one that, again, across the AI spectrum in education and outside of education, is a big part of the dialogue today. And that's: can we count on the tools, and the input and the output we're getting from AI, to be accurate, reliable, and trustworthy? Can we avoid hallucinations? And how do we model in such a way that we are truly having a positive impact on learning and teaching, and not creating more challenges or surfacing more problems that may create credibility risk around AI, or, even more broadly, may negatively impact the education experience?
And then finally, this is a big lift. Many of you on this call may be faculty. Many of you may also be administrators. Learning and understanding how to leverage AI is a big challenge. I think about my own experimentation in my practice: I'm learning every single day how to come back to that operational efficiency and scale, and make it meaningful, so that as faculty we can still be that interconnected human to our learners and appreciate the artistry in our teaching while leveraging tools to be as efficient as possible.
And this all means we need a lot of support as educators. We need ways to frame how we use AI as an assistive tool, not a replacement tool. We need help figuring out how to vet, choose, select, and then subsequently implement the various AI tools that are in front of us, the opportunities we have, but how do we do that meaningfully? And then finally, how do we ensure that human in the loop? You're gonna see throughout our AI strategy that the human in the loop is absolutely essential to how we have chosen to deploy AI, both internally and in the solutions that we'll be presenting. I did see a quick question in the chat asking whether you can ask questions. We will have time at the end for q and a, and both Farrah and I will be answering.
We have some folks helping us with q and a, but please post your questions there so that we can catch them easily. And either we'll answer them in the q and a or we will answer them at the end. Okay, I just saw that, so I wanted to make sure to share that. Now, we also lean on research, and I hinted at this, outside of the research that we do internally, because we know there's great research being done by organizations around the world that are asking some of these similar questions.
We happen to really like the work, and it's also global work, from the Digital Education Council, and they recently, in 2025, did a faculty survey. And by the way, these slides will be shared. There'll be resources shared with everybody that attends the webinar, and so you'll be able to get the link to this research embedded in that, so that you can go dig in more, because there's a lot to this faculty survey. You'll see in a minute I'll also talk about a student component to it.
And there are some really powerful insights in this, and we wanna make sure you have access to those. But what this Digital Education Council survey really exposed, particularly in 2025, because you can go back and compare it to 2024 and even an earlier survey in 2023, is that more and more faculty are using AI. This is a pretty significant uptick in 2025. And they're saying, yeah, I may be using it minimally, but I'm starting to ease into moderate use. That is exciting, because it's showing that some of those scale hurdles around how we prepare faculty, we're working through those as education organizations.
And what's really interesting is if you look at the spectrum of where and how they're using AI, some of these are probably not as surprising. Of course, they're using it for materials; content creation and teaching materials is one of the first places we've seen AI being used. But as you move down the spectrum, you're also seeing them shift into, okay, how am I incorporating AI? Fifty percent of faculty in this survey, and it's about a 3,000 n, so it's good reporting, are saying they're using it to actually better prepare students for using AI in their future.
Now we're getting into where I start to really love this. This goes into that science of education piece. We're getting into we're using it to boost engagement. We're using it to provide feedback. Now there still is a collection of faculty out there that are also using it to detect cheating.
I think we're seeing, though, a shift away from cheating into, okay, how do I use this meaningfully, and how do I think about authorship and encourage authorship in my students, and even in myself as faculty, in how I'm leveraging AI? But that gives you a spectrum of kind of where we're seeing it used now. You'll see in our strategy how we're addressing some of these use cases, but also how we're trying to reach past that: what's next beyond these particular use cases? Now, I said the Digital Education Council also does student research, and they similarly had a really good response with students. And what they found is that students are using AI.
Well, yes, we know there are stories where students are using these tools to do their work, but they're really using them for help. These are tools to support finding information or insights, to summarize and navigate complex documents, and even to paraphrase ideas so that they can think about how they're applying those paraphrased ideas to new situations and cases. And in some places, we're seeing students leverage AI to create first drafts. What's interesting about that piece is that when students are using AI to create first drafts of work, it's often because they're being encouraged by their faculty to do so. Because the faculty are saying: start with that, tell me how you're using that, but now build upon that.
And that's really exciting. If we think about that equity piece that I talked about, too, in the impactful eight, this can give all learners an opportunity, not just those that start from a blank page really well, but all students, opportunities to jump-start their thinking and then build and scaffold upon it. So it's exciting to see both the faculty and student use cases starting to collide in meaningful ways. Now, similarly, OpenAI is out there, and you've probably even seen, I think, announcements from us around OpenAI. They're doing research.
There's some really good research that Anthropic's done with Claude, too. So we're seeing some of the big LLM providers do similar research to understand, okay, what are students really using? We're not just gonna ask students; we're gonna go see what students are actually using. And the exciting part about this is, again, the top use cases are not necessarily cheating, although we hear stories about that all the time. Really, they're about: help me learn, help me understand these ideas, help me write when I'm struggling. This in particular has become a really big area for ELA, English language instruction, and even foreign language instruction. And then there's just navigating the world of your content, your disciplines, your subject areas. And then finally, of course, we're seeing this from a programming perspective, with help in debugging and coding, and that, we're seeing, is also very much encouraged by faculty. So, again, these are all really valuable and viable use cases.
But let me actually pass it back over to Farrah, because she's got two really great use cases she's gonna share about how institutions in your region are choosing to use AI. Thanks so much, Mel. That was fabulous context. Here in APAC, we saw that 70% of our respondents view AI as an opportunity. So it's really optimistic.
And a couple of the examples I want to walk through with you are leveraging exactly what Mel's just spoken to us about. So first, we've got Val, the GenAI chatbot from RMIT. It's their virtual assistant for learning, and it's a really powerful example of how a third-party GenAI tool has been designed with the institution's values and student support at its core. Val operates as a private, secure, and free AI assistant that's accessible to RMIT staff and students. It supports personalized learning through private personas, where custom configurations can be created and users can tweak system prompts for the tone, the choice of model, and purpose according to their needs.
So RMIT students and staff can create up to five personas for tasks like academic support or course-specific queries. At RMIT, the educators recognized that students were already experimenting with AI, which we've just chatted about. And so instead of ignoring it, they built Val with a suite of tools designed specifically for that. For example, they have Imaginio, a role-play, persona-based generator where students can act out scenarios. For things like language learning or clinical training, students can practice in an AI-driven space.
They've also got Quizzical, which can generate practice quizzes aligned to their specific courses. EssayMate provides personalized essay feedback, supporting students in their drafting process. And they've also got something called PolyChat, which gives their students a really easy way to navigate the many RMIT policies that are out there. The last one they have is Prompto, which takes things one step further by teaching students how to craft an AI prompt, an essential skill that we know students will need in the future. I think the key takeaway here is that RMIT didn't just drop a general-purpose tool into their LMS or into their environment.
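To make the persona idea described above a bit more concrete, here is a minimal sketch of what a configurable persona setup like Val's might involve under the hood. This is purely illustrative: the class names, fields, and limit handling are hypothetical assumptions, not RMIT's actual implementation.

```python
from dataclasses import dataclass, field

MAX_PERSONAS = 5  # RMIT's Val reportedly caps users at five personas

@dataclass
class Persona:
    """A user-configurable AI assistant persona (hypothetical schema)."""
    name: str
    system_prompt: str          # plain-language instructions for the assistant
    tone: str = "supportive"    # e.g. "formal", "socratic", "in-character"
    model: str = "default"      # which underlying LLM to route requests to

    def to_system_message(self) -> str:
        # Combine the user's settings into a single system message
        return f"{self.system_prompt}\nRespond in a {self.tone} tone."

@dataclass
class PersonaStore:
    """Holds one user's personas, enforcing the per-user limit."""
    personas: list = field(default_factory=list)

    def add(self, persona: Persona) -> bool:
        if len(self.personas) >= MAX_PERSONAS:
            return False  # limit reached; caller can surface an error
        self.personas.append(persona)
        return True

# Example: a role-play persona in the spirit of Imaginio
store = PersonaStore()
role_play = Persona(
    name="Clinical role-play",
    system_prompt="Act as a standardized patient for nursing students.",
    tone="in-character",
)
store.add(role_play)
print(role_play.to_system_message())
```

The point of the sketch is simply that a "persona" can be an ordinary configuration record; the institution keeps oversight by controlling which models and prompt templates are allowed.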
They built a purposeful, role-based tool that really fits their teaching needs while also maintaining institutional oversight. The other example we have is from the University of Sydney, and it's called Cogniti. It's an award-winning GenAI platform for creating guided AI doubles. They're guided using plain-language system messages, and they can be enriched with course materials and rubrics.
It's currently in pilot with well over 20 institutions globally, not just here in APAC. It's also open to integration, and it really exemplifies how third-party AI can be integrated into any LMS with thoughtful governance, data privacy, and pedagogical alignment. Instructure values equitable access as well, so this aligns really nicely with us: educational integrity that empowers educators rather than replacing them. And we're really excited about this model.
It's a powerful demonstration of how third-party AI, when integrated responsibly, can enhance that pedagogy without sacrificing privacy or equity or even educator agency. So Cogniti really aligns beautifully with our vision for open, collaborative innovation. When we look at Cogniti, we see a model that's educator-built and institutionally trusted, and it shows that AI doesn't have to be imposed on education. It can be created by education and for education. On that, I'm now gonna hand back over to Mel.
And I love those two examples. Thank you so much for sharing them, Farrah, because they are great examples of what we've ingested at Instructure to understand how we should be thinking about what we're building and our strategy, and then how we make sure those kinds of capabilities can tightly and seamlessly embed into our overall approach. So Farrah will get into more details in a few minutes, but I wanted to just set the landscape of what we're seeing specifically from an AI perspective, and then how we're framing key principles in our solutions around AI. Many of you, as you just saw in those two examples, are leveraging general-purpose LLMs, whether you're building on top of one of the main providers or leveraging one of the large general-purpose LLM providers themselves and the tools they've launched for education.
In both cases, though, we know that many of you are already experimenting with them, just like those two examples showed. So part of our strategy is ensuring you're able to plug that work directly into Canvas, and across our entire suite of products over time, so that you can leverage that very targeted and unique work that you're doing at your institutions and organizations. We also know that, and if we can hop back one slide real quick, Farrah, sorry, we also know that there are education-native AI companies out there doing very specific, targeted AI work.
We know not only do those need to be integrated, but we know you're using those already. And so we need to be thinking about how we're complementing those tools in what we're building, not necessarily competing with things that are deep in their particular areas. Grammarly is a great example. It's been using AI for a long time. Many of you have embedded Grammarly in various ways in the education or learning experience, and we're not grammar experts.
We try very hard to do our best, but that is not our specialty. And so how do we make sure we complement the work of those very education-native AI companies? And then finally, as I mentioned before, we know you're building your own LLMs, and you need to be able to plug in and bring in your own experimentation. So if we go to the next slide, this will show how we're thinking about the fundamentals in a world where you need to operate not only with what we're building from an AI perspective and what's needed in Canvas and our technology, but where you can bring your general-purpose LLMs, your education-targeted native tools, and all of your own experimentation to the table. So there are three key points to this.
You're challenged, again, as we said before: Do I know these tools are secure and vetted? What models are they using? How do I ensure they align to my data privacy practices and everything in between? Second, we know you need to be aware of what's available and how you can scale it across your institution, staff, faculty, and students. And then finally, how are you using AI to do things and to solve problems that you couldn't before? It's not about AI replacing what you can do today. It's about AI enhancing, augmenting, and even expanding what you can do so that you can do so much more. So if we go to the next slide, this will show how we think about a successful deployment and the three principles that Farrah will be talking about.
Confidence: we're ensuring, through tools that we've created like nutrition facts, that you can vet and understand what you're using. Adoption: we're trying to ensure that the capabilities built natively into our solutions, as well as those that are plugged in, can be seamlessly used; it's very clear and very intuitive how to use them. And then finally, value: we're not choosing to deploy AI just for the sake of AI, or for flashy tools that maybe only 10% of you may use. We're making sure we're employing AI tools that drive value.
And by doing that, we can be very targeted in two ways. One, we can drive value because we're solving challenges that you have, and in doing that, we can be very deliberate about the models that we choose to employ so they're targeted at those challenges. And the second piece is we're not throwing AI at everything everywhere with these large models that are overwhelming and overpowering. Those are not sustainable.
They're not suited to long-term leveraging of AI in an environment where we need to be very environmentally and sustainably conscious. So this value piece in particular is very important to us, because it's allowing us to be really thoughtful today and well into the future. So let me pass it back over to Farrah for the last time. She's gonna introduce Ignite AI. And again, if you've got questions, don't hesitate to put those in the q and a.
We've got a team of us that are happy to answer questions as we go, and we can pull any of those at the end as well. Amazing. Thanks so much, Mel. So all the things that Mel's just talked about, describing confidence, adoption, and value, are exactly what's guided us in building Ignite. It's not just another tool; it's our solution to bring AI together into one cohesive experience for educators, our educational admins, and students globally.
One of the biggest reasons we're starting this new chapter with a single, unified solution is to bring together AI for education at Instructure. And we've heard quite extensively, here and globally, that AI is overwhelming, especially when there are so many different tools in so many different places doing so many things, as Mel's just touched on. So what we introduced a couple of weeks ago was Ignite AI, where educators can bring everything into one place. It's secure, and it's embedded exactly where the teaching and learning happens within our educational ecosystem. It is in Canvas, and it will also roll out across our other products.
This means that you can be in full control and have full visibility over when and where your AI is being used. And it means that faculty and students can focus less on managing that technology and more on what really matters, which is the learning. I always say learning should drive the tech; tech shouldn't drive the learning, and I think that's a really nice piece that we've done here. It's really about helping learners thrive in tomorrow's landscape and delivering a future-ready ecosystem.
So what we'll show more of in next week's webinar is some specific functionality of Ignite AI and a solution that we're building together with you to make sure that we are supporting your use cases and taking into consideration our regional needs. We also wanna just touch on some of the things that have come up, and we know these are super crucial when thinking about an AI solution. We wanna make sure that this is embedded in our ecosystem, which, again, means less hassle, less going out to multiple places for you and your users. We're building this to seamlessly connect with your other tools as well. We know that you are using many other platforms in many places, and we pride ourselves on being open and connected as an ecosystem in the EdTech space.
So we're truly excited to bring that power of AI into that one environment for you. We think of our Ignite AI as a bit of a conductor in that space, because it's going to help unify your AI ecosystem. It will help bring all of your products into one place and seamlessly connect with Canvas and even the 500-plus APIs you might be leveraging at this point. For your educators, it means that Ignite is useful within Canvas for things like quizzing and discussions and aligning content and outcomes. Our open, API-driven approach also makes it easier for our partners to integrate with Instructure products as well.
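As a small illustration of that API-driven approach, here is a hedged sketch of how a partner tool might prepare a call to the Canvas LMS REST API (for example, listing courses). The domain and token below are placeholders, and the request is only constructed, not actually sent.

```python
# Sketch: preparing an authenticated call to the Canvas LMS REST API.
# The domain and token are placeholders, not real credentials.
import urllib.request

def build_canvas_request(domain: str, token: str, endpoint: str) -> urllib.request.Request:
    """Build an authenticated GET request for a Canvas REST endpoint."""
    url = f"https://{domain}/api/v1/{endpoint.lstrip('/')}"
    # Canvas uses OAuth2 bearer tokens in the Authorization header
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_canvas_request("canvas.example.edu", "<access-token>", "courses")
print(req.full_url)  # https://canvas.example.edu/api/v1/courses
# A real integration would now send it, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       courses = json.load(resp)
```

The design point is that every Canvas resource sits behind the same `/api/v1/` surface with bearer-token auth, which is what lets partner tools plug into the same ecosystem.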
And the result of that is a connected and flexible workbench that's gonna save time, reduce that complexity, and keep educators and students at the center of everything. To start, we're using these APIs to bring the power of Ignite into our Instructure products that you already know and love. This is a bit of a sneak peek into what we're planning for the rest of the year and heading into next year. So we do have a few pieces of functionality that are going to be made available very soon for things like testing and our early adopter programs. We've also got more forward-thinking features in the works.
And again, next week we're gonna be talking about all of these things in a bit more detail and giving you some little teasers into what those pieces of functionality will look like within Canvas and how they will help you. So as I mentioned, we really wanted to cover off the why and the what today, and I'm so grateful that Melissa could join us to really talk through the thoughtful approach we're taking: as I said, how we're putting the education in education technology. Next week, we're gonna take some time to explore a bit more detail around the actual functionality, discussion insights, and how we're working with assessments and accessibility, so you can see firsthand the impact on teaching and learning. We do have a link that we'll share in the chat so that you can stay up to date on those things.
And, of course, if you'd like to stay up to date as we roll out further updates, further information, and more educator stories from around the world about Ignite AI, please sign up so that we can make sure that you're always in the loop about what we're doing. And lastly, we also have our quarterly product sessions that I run for the region. Our next one's coming up in a few weeks' time. And so not only will we be talking about all things AI, we've also got some other exciting product updates to share with you, to make sure you're taking that thoughtful approach to teaching and learning. So some more future roadmap updates and, of course, resources will be shared in that session.
You can sign up using the QR code. Now we've covered everything we need to cover, so I think we can jump into the q and a side of things if there are any outstanding questions. So, we did answer a few questions already, just so you know, Farrah. There was a little bit about timing, which you showed on that one slide of when things are coming.
I'll answer more broadly. On the majority of our AI work, we have released a number of capabilities for general availability, so things like discussion summaries, nutrition facts, and some other pieces. And those have come in those New and Next presentations; you know, we've highlighted those there. There are a number of pieces that are currently in what we call an early adopter phase, and this is when we're working with a collection of customers early on; it's a beta program, essentially, if you wanna think about it that way.
And a number of those things will then be released later this year as generally available. And then some are still in a slightly earlier phase of development, and those we're looking at for early 2026. So just to give you a time frame, we're not talking about things that are two to three years out. We're talking about things rolling out over the next six to nine months and beyond, because this is really building that foundation. There were a number of questions about that, so I thought I'd quickly summarize it.
And now I am taking a look at some of the other questions that I can throw at you. So, rather than being specifically around a product piece: Farrah, how are we approaching assessment in Ignite AI? Because there's some discussion in the webinar chat as well, like, where is it in the order of operations, and how are we thinking about using AI in order to solve institutional challenges? It's a great question. And I think you touched on this really nicely: we're very much focused on the teacher experience right now, that educator experience, and helping improve their workflows.
So with assessment in mind, it's things like rubric generation. I teach sessionally as well, and I know how valuable a good rubric is. And I also know how time-consuming they can be to create. So we are looking at ways that we can help expedite that process but still put the teacher at the heart of what's being done, whether it's generating a draft of a rubric that a teacher can then tailor as needed, or even providing some suggested comments or feedback as a first pass on an assignment, again, so that teachers can really focus on the value-add and the thing that they do best, and remove some of that administrative burden around the assessment processes. I love how you framed that, Farrah.
And one of the questions, I know there's been a little dialogue in the chat too, that I've gotten as we've shared our strategies with other customers and folks interested in education has been, are you doing student facing tools? And you'll see that the majority of the work that we've done is faculty facing, it's teacher facing, for all the reasons Farrah described and all the reasons you all are chatting about in the chat. Student features, there's a lot more complexity to that in how, you know, we're all learning to engage students in the teaching and learning process. And so we are developing, and you'll see this, one of the things listed there was an assignment tool, and there are other capabilities that we are developing. But we're really focusing, for all those reasons that Farrah just described, on faculty first, because there's so much opportunity there to have such a broad impact while we together as an education community learn from what and how students are interacting with AI. Let's see.
Some of the other questions that I'm catching. Okay. So there are some cost questions, and maybe just broadly, how are we approaching that, Farrah? Good question. At the moment, as Mel was talking just before about our early adopter programs and opportunities for feedback, we have a good idea of what we wanna do in this space, thanks to all of your input. And what we're doing at the moment is refining that based on need.
We will absolutely have functionality that is within Canvas that you are already paying for, and then we will likely have some premium product offerings that we look at as an add on option for those that need to do something a bit more robust. So we're working through that with our teams and obviously wanna get that to you as soon as possible. But given we know there's so much feedback in this space, we wanna make sure we approach this thoughtfully and give you a product that you're gonna love and gonna wanna use. And we will absolutely be talking pricing with you all soon. Absolutely.
I'll add to that too, Farrah. Thank you. One of our considerations is, again, we're talking about our strategy and being very directed and deliberate in how we deploy AI to solve specific problems. We are trying to be as cost effective as possible in that. So part of that determination will be whether these are tools that are very specific to capabilities within Canvas and don't require expensive models, so that we're not coming back to you and raising prices or doing other things in a significant way to cover that.
So that's part of our determination as well. Whereas if you look at, like, larger agent based tools, that's where you start to incur a lot more underlying cost, and that's where you may see features at an additional or premium cost. But we're trying to balance that too for you, where we can leverage AI as specifically and as tactically as possible so we can keep the costs down across the board. How about a sneak preview of this? And I know Greg is also answering this in the q and a. Is there gonna be some sort of agentic capability? And maybe this is a good teaser for next week.
This is a great teaser for next week. Yes. There will be an agentic chat type of functionality. We have absolutely taken an agentic approach with some of the AI functionality that we're looking at. If you followed along with InstructureCon, our amazing global event that happened back in July, we had some really exciting announcements around Ignite Agent, and we're really excited to talk about that at next week's session and show you how that's gonna work.
It is very much aimed at teachers, and I'm really excited, personally, to show you just how great it's gonna be for the work that you do. Great. Thank you. And I didn't catch the question, but in the chat, there was a little bit of a question around student privacy. What's our posture in general on privacy, Farrah? Like, how are we thinking about that, particularly for the region? Great question.
We've ensured that none of the models that we're using train on your data. So that's the first thing to call out. We wanna make sure that privacy is absolutely maintained and that you are in control of all of your AI use. The other functionality that we have that is pretty exceptional, and that I'm excited about, is nutrition facts. Everywhere AI gets used within our product, you have the opportunity to see how that data is being used, what model it's leveraging, and, again, remain in full control about whether you choose to use that or not.
So your data, your student data, is always protected. It is not used. And the PII, for those of you that are familiar with that language, is level two, meaning that the data is protected and not used to train the models. Fantastic. And then I think we've covered almost everything.
I'm just trying to quickly scan. So there's a question in general around how we're thinking about leveraging AI in a context where it's really easy for students to skip work that, you know, we've historically really wanted them to do. And I'll start an answer there. You'll see the kinds of capabilities that we release are about the process, not just the results from a student interaction. So as we look at things like assignment tools where students are interacting with that, preserving the process or the conversations. And, again, that authorship piece is gonna be really important in how we bring tools to you, so that you can follow not just, you know, what was the output or the final deliverable that a student may submit, but that you can actually follow the entire learning process of that student and be able to see that.
So, hopefully, you can track where students are skipping through work because they're using AI, as opposed to where they are actually leaning in and trying to understand and be able to apply the concepts you're teaching. So that's a little bit of a starter. I don't know if you have other thoughts around this, Farrah. But how do we, it's not so much preventing cheating as, how do we not make it so easy for students to be efficient that they don't learn as much? I love everything that you've just said around the assessment process, and I think one of the things, just to build on what you've said, is we're very much looking at how we can help you make assessment more authentic. It's really easy to say we need authentic assessments, but how are we going to help empower educators to do that? And so some of the functionality that we're looking into, which Melissa has started to talk about, is removing that really linear process that our students tend to go through right now around create the assignment, submit the assignment, get the feedback.
We want it to be much more iterative and really capture that process. And I think that's a really great and powerful word to describe it, where we can unlock that critical thinking, surface that critical thinking, so that you as educators will get those insights into how students got to what they got to, not just that final output. Love that. And that will also start to address, you know, are we embedding detection capabilities? And the answer is no, not explicitly. However, the way the tools are built, the way Farrah just described that, will give you opportunities to be able to unpack where students may be, and how they are, using AI.
So I think that covered everything that I caught in the chat, and it covered the majority of what was in the q and a. There are some specific feature questions, but I think we're gonna hold on to those for next week, which is really exciting. And this is fantastic. I do see a question in the q and a. What would be your recommended quick pitch for us to pitch to procurement officers to get to yes? I love that question, by the way.
Thank you so much for posting that. And, you know, let us cover that next week, because I think you will see in it, in short, our open strategy that's flexible and thoughtful and secure, but also allows you to innovate and experiment with AI in a safe and meaningful way. But I will answer that question. It's a really good one to queue up for next week. So, Farrah, back over to you to help wrap us up.
Thanks so much, Melissa. And thank you again, everyone. These are really thoughtful questions, and firstly, we're grateful that you chose to spend your time with us today. We know how important this conversation is, and I'm grateful that Melissa was able to join us to again talk through that why. I think the why in education is so important.
As I said, learning should drive the technology. Technology shouldn't drive the learning. And that is absolutely the approach we want to highlight today. I know you're all super keen to see some of these things in action. And next week, we will absolutely have those ready for you.
But this conversation is just as valuable because it informs what we wanna do, and we wanna make sure we're listening to you as a region and incorporating your needs as much as possible into what we offer. So thank you again, Melissa. We're so grateful, and I could listen to you forever. We will share all these resources around. The recording will also be there as well.
And as I mentioned, please sign up. We've got webinars next week and a few more in a couple weeks' time as well, where we will absolutely keep you in the loop about all these things. Thank you again, everyone, and a huge thank you to Melissa and our support in the q and a. Have a good afternoon. Thanks, everyone.
But today, we're gonna be addressing what we're calling the AI conundrum. We know that AI use is exploding right now. Educators and institutions are understandably apprehensive and also cautiously optimistic. So we wanna try and make sure we're helping you deploy AI effectively across your institutions, and we know this is a big challenge. We've also heard from many of you in the region that you want greater insights into the pedagogical approaches that we use to inform our product development, how we incorporate the ed in ed tech, basically.
So without further ado, it's my pleasure to hand over to Melissa Lobel. Thank you so much, Farrah. It's so wonderful to be here with all of you today. And as Farrah talked about, we're gonna give you a bit of the why and the what behind AI, what we're learning, what we're watching, and what we're leveraging in order to direct our strategy as a company and as a subset of solutions for you. My work as our chief academic officer is grounded in three elements.
The first is listening to you all, hearing, seeing, observing, spending time with you as educators to understand what you are facing, what you are most challenged by, and where you see your biggest opportunities. The second aspect of the work is grounded in research, and I'm gonna share in a few minutes some of the research that we've both done ourselves and uncovered as we've connected more broadly into the community leveraging AI. And then the third aspect of the work is really in the work itself. You know, I am an instructor, my team members are all active educators, and we're in it, trying, experimenting, and aligning the work that we're doing every day with the practices that we're seeing for the future of AI.
So that grounded work has shaped something that we call the impactful eight. The impactful eight are the eight big trends, but more than that, opportunities, in some cases challenges, but largely opportunities, that we see facing educators globally today and in the next couple of years. And AI, of course, is a key part of this. But I wanted to share just a little bit of the grounding of these eight and some of the research specifically related to generative AI, again, to lay a foundation of what we're seeing, what we're researching, what we're learning, and what we're experimenting with that is driving our AI strategy. These eight, these impactful eight, cover everything from challenges or opportunities that you're immediately faced with today to things that are coming in the future.
And it's a spectrum therein. It's everything from, as you see on the left, operational efficiency, effectiveness, and scale. This is a key area that we know you're facing today. We're all doing more with less as educators. And in that, we're all trying to scale and reach a wider population.
Well, that becomes essential not only in how we support you across our solutions, but scale and efficiency in particular you're gonna see threaded throughout our AI strategy. If we think about lifelong learning, opposite of that, we think, okay, how are we going to connect and interact with learners at all stages in their own personal and professional journeys? Well, that as well you're gonna see embedded in our AI strategy, especially as we leverage the tools that we are currently building and experimenting with with customers into new solutions that serve both the traditional and nontraditional learner. Similarly, if we look at the science of learning and recognition of learning, this is about what we understand as humans about how we learn, how we engage, how we have agency over our own growth, and how we demonstrate that learning out more broadly into the community, or how we demonstrate the uniqueness of us. Again, you're gonna see this show up in various places in our AI strategy, both in how we pedagogically build tools and make sure that you're able to deploy them using modern pedagogies like cognitive based learning, growth mindsets, or even being able to leverage experiential learning. But you're also gonna see it in the way the tools interact particularly with faculty and administrators, and in how you can unpack who and where and how learning is happening across your institution. Now generative AI is a given.
It's one of these impactful eight. It's what we're here to talk about specifically, but that also then extends into assessment and evidence based design. You will see where and how we're tactically leveraging AI to support you in the creation and implementation of assessment, but you'll also see how it's actually rethinking how we understand learner progress, and how we can make sure that it maps to the ability for those learners to develop skills and competencies that are meaningful for their workplace vision and future. And then finally, all of this you'll see is being done with educational and industry partnerships.
And we'll talk a lot about that a little bit later in this conversation, because it's the foundation for the platform of what we call Ignite AI. Now, I said that as a key part of the three elements of how we've uncovered these impactful eight and how we drive our strategy as a company and from a product perspective, we do some of our own research. And this research was done recently to really understand both the top challenges that institutions are facing in twenty twenty five related to the implementation of AI and where they need the most help. This is a global survey, and it really helped us understand, and it confirmed what we suspected, that some of the biggest challenges that you all are facing are around the security, the privacy, the fairness, the equity underneath the leveraging of AI.
There's a lot of questions in the space about this. How do we ensure everyone is able to have the right kind of access to these tools, and how do we make sure that risk is being mitigated? And this is, you will see shortly, one of the key fundamental pillars in our use and leverage of AI throughout our products. The next big challenge that we're hearing from you is one that, again, across the AI spectrum in education and outside of education, is a big part of the dialogue today. And that's, can we count on the tools and the input and the output we're getting from AI to be accurate, reliable, trustworthy? Can we avoid hallucinations? And how do we model in such a way that we are truly having a positive impact on learning and teaching, and not creating more challenges or even surfacing more problems that may create credibility risk around AI and, more broadly, may negatively impact the education experience?
And then finally, this is a big lift. Many of you on this call may be faculty. Many of you may also be administrators. Learning and understanding how to leverage AI is a big challenge. I think about my own experimentation in my practice, and that I'm learning every single day how to come back to that operational efficiency and scale, and make that meaningful, so that we can still, as faculty, just be that interconnected human to our learners and appreciate the artistry in our teaching while leveraging tools to be as efficient as possible.
And this all means we need a lot of support as educators. We need ways to frame how we use AI as an assistive tool, not a replacement tool. We need help figuring out how we vet, choose, select, and then subsequently implement the various AI tools that are in front of us, the opportunities we have, but how do we do that meaningfully? And then finally, how do we ensure that human in the loop? And you're gonna see throughout our AI strategy that that human in the loop is absolutely essential to how we have chosen to deploy AI, both internally and in the solutions that we'll be presenting. I did see a quick question in the chat. Do you have any questions? We will have time at the end for q and a, and both Farrah and I will be answering q and a.
So, we have some folks helping us with q and a, but please post your questions there so that we can catch them easily. And either we'll answer them in the q and a or we will answer them at the end. Okay. I just saw that, so I wanted to make sure to share that. Now, as I hinted at before, we also lean on research outside of the research that we do internally, because we know there's great research being done by organizations around the world that are asking some of these similar questions.
We happen to really like the work, and it's also global work, from the Digital Education Council, and they recently, in twenty twenty five, did a faculty survey. And by the way, these slides will be shared. There'll be resources shared with everybody that attends the webinar. And so you'll be able to get the link to this research embedded in that, so that you can go dig in more, because there's a lot to this faculty survey. You'll see in a minute, I'll also talk about a student component to it.
And there's some really powerful insights in this, and we wanna make sure you have access to those. But what this Digital Education Council survey really exposed, particularly in twenty twenty five, because you can go back and compare it to twenty twenty four and even an earlier survey in twenty twenty three, is that more and more faculty are using AI. This is a pretty significant uptake in twenty twenty five. And they're saying, yeah, I may be using it minimally, but I'm starting to ease into moderate use. That is exciting, because it's showing that some of those scale hurdles around how we prepare faculty, we're working through those as education organizations.
And what's really interesting is if you look at the spectrum of where and how they're using AI, some of these are probably not as surprising. Of course, they're using it for materials. Content creation and teaching materials is one of the first places we've seen AI being used. But as you move down the spectrum, you're also seeing them shift into, okay, how am I incorporating AI? Fifty percent of faculty reported in this survey, and it's about a three thousand n, so it's a good reporting base, that they're using it to actually better prepare students for using AI in their future.
Now we're getting into where I start to really love this. This goes into that science of education piece. We're getting into using it to boost engagement and using it to provide feedback. Now, there still is a collection of faculty out there that are also using it to detect cheating.
I think we're seeing, though, a shift away from cheating into, okay, how do I use this meaningfully, and how do I think about authorship and encourage authorship of my students, and even of myself as faculty, in how I'm leveraging AI. But that gives you a spectrum of kind of where we're seeing it used now. You'll see in our strategy how we're addressing some of these use cases, but also how we're trying to reach past that. What's next beyond these particular use cases? Now, I said the Digital Education Council also does student research, and they similarly had a really good response with students. And what they found is that students are using AI.
Well, yes, we know there are stories where students are using these tools to do their work, but they're really using them for help. These are tools to support finding information or insights, to be able to summarize and navigate complex documents, to be able to even paraphrase ideas so that they can think about how they're applying those paraphrased ideas to new situations and cases. And in some places, we're seeing students leverage AI to create first drafts. What's interesting about that piece is that when students are using AI to create first drafts of work, it's often because they're being encouraged by their faculty to do so. Because the faculty are saying, start with that, tell me how you're using that, but now build upon that.
And that's really exciting. If we think about that equity piece that I talked about too in that impactful eight, this can give all learners an opportunity, not just those that start from a blank page really well, but all students, opportunities to jump start their thinking to then build and scaffold upon. So it's exciting to see both the faculty and student use cases starting to collide in meaningful ways. Now, similarly, OpenAI is out there, and you've probably even seen, I think, announcements from us around OpenAI. They're doing research.
There's some really good research that Anthropic's done with Claude. So we're seeing some of the big LLM tools do similar research to understand, okay, what are students really using? Right? We're not just gonna ask students, we're gonna go see what students are actually using. And the exciting part about this is, again, the top use cases are not necessarily cheating. Although we hear stories about that all the time, really, they're about help me learn, help me understand these ideas, help me write when I'm struggling. This in particular for ELA, English language learning instruction, and other things, or even foreign language instruction.
This has become really a big place. And then just trying to navigate the world as you're trying to make sense of your content, your disciplines, your subject areas. And then finally, of course, we're seeing this from a programming perspective, help in debugging and coding. And that, we're seeing, is also being very encouraged by faculty. So, again, these are all really valuable and viable use cases.
But let me actually pass it back over to Farrah, because she's got two really great use cases she's gonna share about how institutions in your region are choosing to use AI. Thanks so much, Mel. That was fabulous context. Here in APAC, we saw that seventy percent of our respondents view AI as an opportunity. So it's really optimistic.
And a couple of the examples I want to walk through with you are leveraging exactly what Mel's just spoken to us about. So first, we've got Val, the Gen AI chatbot from RMIT. It's their virtual assistant for learning, and it's a really powerful example of how a third party Gen AI tool has been designed with the institution's values and student support at its core. Val operates as a private, secure, and free AI assistant that's accessible to RMIT staff and students. It supports personalized learning through private personas, where custom configurations can be created and users can tweak system prompts for the tone, the choice of model, and the purpose according to their needs.
So RMIT students and staff can create up to five personas for tasks like academic support or course specific queries. At RMIT, the educators recognized that students were already experimenting with AI, which we've just chatted about. And so instead of ignoring it, they built Val with a suite of tools designed specifically for that. For example, they have Imaginio, a role play persona based generator where students can act out scenarios. So things like language learning or clinical training, students can practice in an AI driven space.
They've also got Quizzical, which can generate practice quizzes aligned to their specific courses. EssayMate provides personalized essay feedback, supporting students in their drafting process. And they've also got something called PolyChat, which gives their students a really easy way to navigate the many RMIT policies that are out there. A last one they have is Prompto, which takes things one step further by teaching students how to craft an AI prompt, an essential skill that we know students will need in the future. I think the key takeaway here is that RMIT didn't just drop a general purpose tool into their LMS or into their environment.
They built a purposeful, role based tool that really helped fit their teaching needs while also maintaining their institutional oversight. The other example we have is from the University of Sydney, and it's called Cogniti. It's an award winning Gen AI platform for building AI agents. They're guided using plain language system messages, and they can be enriched with course materials and rubrics.
It's currently in pilot with well over twenty institutions globally, not just here in APAC. It's also open to integration, and it really exemplifies how third party AI can be integrated into any LMS with thoughtful governance, data privacy, and pedagogical alignment. Instructure values equitable access as well, so this aligns really nicely with us, where the educational integrity is about empowering educators, not replacing them. And we're really excited about this model.
It's a powerful demonstration of how third party AI, when integrated responsibly, can enhance that pedagogy without sacrificing privacy or equity or even educator agency. So Cogniti really aligns beautifully with our vision for open, collaborative innovation. When we look at Cogniti, we see a model that's educator built and institutionally trusted, and it shows that AI doesn't just have to be imposed on education. It can be created by education and for education. On that, I'm now gonna hand back over to Mel.
And I love those two examples. Thank you so much for sharing them, Farrah, because they are great examples of what we've ingested at Instructure to understand how we should be thinking about what we're building and our strategy, but then how we make sure those kinds of capabilities can tightly and seamlessly embed into our overall approach. So Farrah will get into more details in a few minutes, but I wanted to just set the landscape of what we're seeing specifically from an AI perspective, and then how we are, again, framing key principles in our solutions around AI. Many of you, and this is what you just saw in those two examples, are leveraging general purpose LLMs, whether it's one of the main providers and you're building on top of those, or you're leveraging one of the large general purpose LLM providers themselves and the tools that they've launched for education.
In both cases, though, we know that many of you are already experimenting with them, just like those two examples showed. So part of our strategy is ensuring you're able to plug that work directly into Canvas, and across our entire suite of products over time, so that you can leverage that very targeted and unique work that you're doing at your institutions and organizations. We also know that, and if we can hop back one slide real quick, Farrah, sorry, we also know that there are education native AI companies out there doing very specific, targeted AI work.
We know not only do those need to be integrated, but we know you're using those already. And so we need to be thinking about how we're complementing those tools in what we're building, not necessarily competing with things that are deep in their particular areas. For example, Grammarly is a great example. It's been using AI for a long time. Many of you have embedded Grammarly in various ways in the education or the learning experience, and we're not Grammarly experts.
We're not grammar experts. We try very hard to do our best, but that is not our specialty. And so how do we make sure we complement the work of those education native AI companies? And then finally, as I mentioned before, we know you're building your own LLMs, and you need to be able to plug in and bring in your own experimentation. So if we go to the next slide, this will show how we're thinking about the fundamentals in a world where you need to operate not only with what we're building from an AI perspective and what's needed in Canvas and in our technology, but where you can bring your general purpose LLMs, you can bring your education targeted or specific native tools to the table, and you can bring all of your own experimentation. So there are three key points to this.
You're challenged by, again, we said this before, do I know these tools are secure? Are they vetted? What models are they using? How do I ensure they align to my data privacy practices and everything in between? Second, we know you need to make sure that you are aware of how and what is available, and how you can scale this across your institution, staff, faculty, and students. And then finally, how are you using AI to do things and to solve problems that you couldn't before? It's not about AI replacing what you can do today. It's about AI enhancing and augmenting and even expanding what you can do, so that you can do so much more. So if we go to the next slide, this will show how we think about a successful deployment and the three principles that Farrah will be talking about.
Confidence: we're ensuring, through tools that we've created like nutrition facts, that you can vet and understand what you're using. Adoption: we're trying to ensure that the capabilities built natively in our solutions, as well as those that are plugged in, can be used seamlessly. It's very clear and intuitive how to use them. And then finally, value: we're not choosing to deploy AI for the sake of AI, or for flashy tools that maybe only ten percent of you may use. We're making sure we're deploying AI tools that drive value.
And by doing that, we can be very targeted in two ways. One, we can drive value because we're solving challenges that you have. And by doing that, we can be very deliberate about the models that we choose to employ so they're targeted at those challenges. And the second piece is we're not throwing AI at everything everywhere and these large models that are overwhelming and overpowering. Those are not sustainable.
They don't reflect long-term thinking about leveraging AI in an environment where we need to be environmentally and sustainability conscious. So this value piece in particular is very important to us, because it's allowing us to be really thoughtful today and well into the future. So let me pass it back over to Farrah for the last time. She's gonna introduce Ignite AI. And again, if you've got questions, don't hesitate to put those in the q and a.
We've got a team of us that are happy to answer questions as we go, and we can pull any of those at the end as well. Amazing. Thanks so much, Mel. So all the things that Mel's just talked about, confidence, adoption, and value, are exactly what's guided us in building Ignite. It's not just another tool; it's our solution to bring AI together into one cohesive experience for educators, educational admins, and students globally.
One of the biggest reasons we're starting this new chapter is to have one unified solution for AI in education here at Instructure. And we've heard quite extensively, globally, that AI is overwhelming, especially when there are so many different tools in so many different places doing so many things, as Mel's just touched on. So what we introduced a couple of weeks ago was Ignite AI, where educators can bring everything into one place. It's secure, and it's embedded exactly where the teaching and learning happens within our educational ecosystem. It is in Canvas, and it will also roll out across our other products.
This means that you can be in full control and have full visibility over when and where your AI is being used. And it means that faculty and students can focus less on managing the technology and more on what really matters, which is the learning. I always say learning should drive the tech; tech shouldn't drive the learning, and I think that's a really nice piece of what we've done here. It's really about helping learners thrive in tomorrow's landscape and delivering a future-ready ecosystem.
So what we'll show more of in next week's webinar is some specific functionality of Ignite AI and a solution that we're building together with you to make sure that we are supporting your use cases and taking into consideration our regional needs. We also wanna just touch on some of the things that have come up, and we know these are super crucial when thinking about an AI solution. We wanna make sure that this is embedded in our ecosystem, which, again, means less hassle, less going out to multiple places for you and your users. We're building this to seamlessly connect with your other tools as well. We know that you are using many other platforms in many places, and we pride ourselves on being open and connected as an ecosystem in the EdTech space.
So we're truly excited to bring that power of AI into that one environment for you. We think of our Ignite AI as a bit of a conductor in that space, and that's because it's going to help unify your AI ecosystem. It will help bring all of your products into one place and seamlessly connect with Canvas and even the other five hundred plus APIs you might be leveraging at this point. For your educators, it means that Ignite is useful within Canvas for things like quizzing and discussions and aligning content and outcomes. Our open API driven approach also makes it easier for our partners to integrate with Instructure products as well.
And the result of that is a connected and flexible workbench that's gonna save time, reduce that complexity, and keep educators and students at the center of everything. To start, we're using these APIs to bring the power of Ignite into the Instructure products that you already know and love. This is a bit of a sneak peek into what we're planning for the rest of the year and heading into next year. So we do have a few pieces of functionality that are going to be made available very soon for things like testing and our early adopter programs. We've also got more forward-thinking features in the works.
And again, next week, we're gonna be talking about all of these things in a bit more detail and giving you some little teasers into what those pieces of functionality will look like within Canvas and how they will help you. So as I mentioned, we really wanted to cover off the why and the what today, and I'm so grateful that Melissa could join us today to really talk through the thoughtful approach we're taking. As I said, how we're putting the education in education technology. Next week, we're gonna take some time to explore a bit more detail around the actual functionality, discussion insights, how we're working with assessments and accessibility so you can see firsthand the impact on teaching and learning. We do have a link that we'll share in the chat so that you can stay up to date on those things.
And, of course, if you'd like to stay up to date as we roll out further updates, further information, and more educator stories from around the world about Ignite AI, please sign up so that we can make sure you're always in the loop about what we're doing. And lastly, we also have our quarterly product sessions that I run for the region. Our next one's coming up in a few weeks' time. And so not only will we be talking about all things AI, we've also got some other exciting product updates to share with you, to make sure you're taking that thoughtful approach to teaching and learning. So some more future roadmap updates and, of course, resources will be shared in that session.
You can sign up using the QR code. Now we've covered everything we need to cover, so I think we can jump into the q and a side of things if there are any outstanding questions. So, we did answer a few questions already, just so you know, Farrah. A little bit about timing, which you showed on that one slide of when things are coming.
I'll answer more broadly. For the majority of our AI work, we have released a number of capabilities for general availability, so things like discussion summaries, nutrition facts, and some other pieces. And those have come in those New and Next presentations. You know, we've highlighted those. There's a number of pieces that are currently in what we call an early adopter phase, and this is when we're working with a collection of customers in an early phase; it's essentially a beta program, if you wanna think about it that way.
And a number of those things will then be released later this year, generally available. And then some are still in a little bit earlier phase of development, and those we're looking at early twenty twenty six. So just to give you a time frame, we're not talking about things that are two to three years out. We're talking about things rolling out over the next six to nine months and beyond, because this is really building that foundation. There was a number of questions, so I just wanted to quickly summarize that.
And now I am taking a look at some of the other questions that I can throw at you. So, rather than specifically around a product piece, Farrah, how are we approaching assessment and Ignite AI? Because there's some discussion in the webinar chat as well. Like, where is it in the order of operations, and how are we thinking about using AI in order to solve institutional challenges? It's a great question. And I think you touched on this really nicely that we're very much focused on the teacher experience right now, that educator experience, and helping improve their workflows.
So with assessment in mind, it's things like rubric generation. I teach sessionally as well, and I know how valuable a good rubric is. And I also know how time consuming they can be to create. So we are looking at ways that we can help expedite that process, but still put the teacher at the heart of what's being done, whether it's generating a draft of a rubric that a teacher can then tailor as needed, or even providing some suggested commenting or feedback as a first pass on an assignment, again, so that teachers can really focus on the value add and the thing that they do best, and remove some of that administrative burden around the assessment processes. I love how you framed that, Farrah.
And one of the questions, and I know there's been a little dialogue in the chat too, that I've gotten as we've shared our strategies with other customers and folks interested in education has been: are you doing student-facing tools? And you'll see that the majority of the work that we've done is faculty facing, it's teacher facing, for all the reasons Farrah described and all the reasons you all are chatting about in the chat. Student features carry a lot more complexity in how, you know, we're all learning to engage students in the teaching and learning process. And so we are developing, and you'll see this, one of the things listed there was an assignment tool, and there's other capabilities that we are developing. But we're really focusing, for all those reasons that Farrah just described, on faculty first, because there's so much opportunity there to have such a broad impact while we together as an education community learn from what and how students are interacting with AI. Let's see.
Some of the other questions that I'm catching. Okay. So there are some cost questions, and maybe just broadly, how are we approaching that, Farrah? Good question. At the moment, as Mel was talking just before about our early adopter programs and opportunities for feedback, we have a good idea of what we wanna do in this space, thanks to all of your input. And what we're doing at the moment is refining that based on need.
We will absolutely have functionality that is within Canvas that you are already paying for, and then we will likely have some premium product offerings that we look at as an add on option for those that need to do something a bit more robust. So we're working through that with our teams and obviously wanna get that to you as soon as possible. But given we know there's so much feedback in this space, we wanna make sure we're doing this thoughtfully and giving you a product that you're gonna love and gonna wanna use. So we wanna make sure we approach that thoughtfully, and we will absolutely be talking pricing with you all soon. Absolutely.
I'll add to that too, Farrah. Thank you. One of our considerations is, again, we're talking about our strategy and being very directed and deliberate in how we deploy AI to solve specific problems. We are trying to be as cost effective as possible in that. So part of that determination will be if these are tools that are very specific to capabilities within Canvas that aren't expensive models, that way we're not coming back to you and raising prices or doing other things in a significant way to cover that.
So that's part of our determination as well. Whereas if you look at larger, agent-based tools, that's where you start to incur a lot more underlying cost, and that's where you may see features at an additional or premium cost. But we're trying to balance that for you too, where we can leverage AI as specifically and as tactically as possible so we can keep costs down across the board. How about a sneak preview of this? And I know Greg is also answering this in the q and a. Is there gonna be some sort of agentic capability? And maybe this is a good teaser for next week.
This is a great teaser for next week. Yes. There will be an agentic chat type of functionality. We have absolutely taken an agentic approach with some of the AI functionality that we're looking at. If you followed along with InstructureCon, our amazing global event that happened back in July, we had some really exciting announcements around Ignite Agent, and we're really excited to talk about that at next week's session and show you how that's gonna work.
It is very much aimed at teachers, and I'm really excited, personally, to show you just how great it's gonna be for the work that you do. Great. Thank you. And, I didn't catch the question, but in the chat there was a little bit of a question around student privacy. What's our posture in general on privacy, Farrah? Like, how are we thinking about that, particularly for the region? Great question.
We've ensured that none of the models that we're using train on your data. So that's the first thing to call out. We wanna make sure that privacy is absolutely maintained and that you are in control of all of your AI use. The other functionality that we have that is pretty exceptional, and I'm excited about, is nutrition facts. Everywhere AI gets used within our product, you have the opportunity to see how that data is being used, what model it's leveraging, and, again, remain in full control over whether you choose to use that or not.
So your data, your student data is always protected. It is not used. And the PII, those of you that are familiar with that language, is level two, meaning that the data is protected and not used to train the models. Fantastic. And then I think we've covered almost everything.
I'm just trying to quickly scan. So there's a question in general around how we are thinking about leveraging AI in a context where it's really easy for students to skip work that, you know, we've historically really wanted them to do. And I'll start an answer there. You'll see the kinds of capabilities that we release are about the process, not just the results, from a student interaction. So as we look at things like assignment tools, where students are interacting with that, we're capturing the process and the conversations. And, again, that authorship piece is gonna be really important in how we bring tools to you, so that you can follow not just, you know, the output or the final deliverable that a student may submit, but the entire learning process of that student, and be able to see that.
So, hopefully, you can track where students are skipping through work because they're using AI, as opposed to where they are actually leaning in and trying to understand and be able to apply the concepts you're teaching. So that's a little bit of a starter. I don't know if you have other thoughts around this, Farrah. But it's not so much preventing cheating as how do we not make it so easy for students to be efficient that they don't learn as much? I love everything that you've just said around the assessment process, and I think one of the things, just to build on what you've said, is that we're very much looking at how we can help you make assessment more authentic. It's really easy to say we need authentic assessments, but how are we going to help empower educators to do that? And so some of the functionality that we're looking into, that Melissa is starting to talk about, is removing that really linear process that our students tend to go through right now: create the assignment, submit the assignment, get the feedback.
We want it to be much more iterative and really capture that process. And I think that's a really great and powerful word to describe it where we can unlock that critical thinking, surface that critical thinking so that you as educators will get those insights into how students got to what they got to, not just that final output. Love that. And that will also start to address, you know, are we embedding detection capabilities? And the answer is no, not explicitly. However, the way the tools are built, the way, you know, Farrah just described that, will give you opportunities to be able to unpack where students may be or how they are using AI.
So I think that covered everything that I caught in the chat, and it covered the majority of what was in the q and a. There are some specific feature questions, but I think we're gonna hold on to those for next week, which is really exciting. And this is fantastic. I do see a question in the q and a: what would be your recommended quick pitch for us to use with procurement officers to get to yes? I love that question, by the way.
Thank you so much for posting that. And, you know, let us cover that next week, because I think, in short, you will see our open strategy that's flexible and thoughtful and secure, but also allows you to innovate and experiment with AI in a safe and meaningful way. But I will answer that question. It's a really good one to queue up for next week. So, Farrah, back over to you to help wrap us up.
Thanks so much, Melissa. And thank you again, everyone. These are really thoughtful questions, and firstly, we're grateful that you chose to spend your time with us today. We know how important this conversation is, and I'm grateful that Melissa was able to join us to again talk through that why. I think the why in education is so important.
As I said, learning should drive the technology. Technology shouldn't drive the learning. And that is absolutely the approach we want to highlight today. I know you're all super keen to see some of these things in action. And next week, we will absolutely have those ready for you.
But this conversation is just as valuable because it informs what we wanna do, and we wanna make sure we're listening to you as a region and incorporating your needs as much as possible into what we offer. So thank you again, Melissa. We're so grateful, and I could listen to you forever. We will share all these things around. The recording will also be available as well.
And as I mentioned, please sign up. We've got webinars next week and a few more in a couple weeks' time as well, where we will absolutely keep you in the loop about all these things. Thank you again, everyone, and a huge thank you to Melissa and our support team in the q and a. Have a good afternoon. Thanks, everyone.
Presenters: Melissa Loble and Farrah King
Key takeaways:
- What’s keeping educators up at night? A look at the biggest challenges and unmet needs around AI in education.
- From chaos to clarity: Untangling the fragmented AI landscape and what it means for institutions today.
- Introducing IgniteAI: A secure, scalable, and education-first AI system built directly into the Canvas ecosystem.