Over the past year, I’ve had more conversations about AI than about any other topic in my career. Some of those conversations are full of optimism, focused on what’s now possible in teaching and learning. Others are more cautious, raising important questions about data privacy, model training, and whether institutions, and education as a whole, are being pushed faster than they’re ready to move.
Both reactions are valid, and in education, they should be. I often point out that we’re in the trust-building phase with generative AI. Innovation has always required a level of trust, and right now, with AI, that trust is still being earned.
At Instructure, we’ve approached AI with that reality in mind. From the beginning, our goal has not been to prescribe a single path forward, but to give institutions the flexibility to move in ways that align with their own policies, priorities, and pace of adoption. That’s what we mean when we talk about choice.
That philosophy shows up most clearly in how we’ve built our AI ecosystem. We work with leading partners like OpenAI, Google, Microsoft, and Anthropic to help institutions bring those partners’ solutions into Canvas, while also continuing to develop our own capabilities through IgniteAI. The intent is not to elevate one model over another, but to recognize that institutions have different needs and should be able to choose the tools and approaches that work best for them.
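To make “choice” a little more concrete: a model-agnostic design typically isolates each vendor behind a single interface, so an institution can swap providers without rewriting the application around them. The sketch below is purely hypothetical; the class names and registry are illustrative, not IgniteAI’s implementation.

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """One interface in front of every model vendor, so application
    code never hard-codes a single provider."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call the institution's OpenAI deployment


class AnthropicProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError  # call the institution's Anthropic deployment


# The institution's configuration, not the platform, selects the provider.
PROVIDERS: dict[str, type[ChatProvider]] = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
}


def provider_for(choice: str) -> ChatProvider:
    """Resolve whichever provider an institution has opted into."""
    return PROVIDERS[choice]()
```

The point of a registry like this is that adding or removing a vendor is a configuration change, not an architectural one.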
As Zach Pendleton, Instructure’s Chief Architect, put it, “Our job isn’t to pick a winner in AI. It’s to build an architecture that gives institutions flexibility, protects their data, and lets them move at the pace that’s right for them.” That perspective has guided how we think about both innovation and responsibility.
As AI continues to evolve, it’s natural for questions to emerge around how these partnerships work in practice. We see that as a positive signal. Institutions should be asking hard questions about how AI systems operate, how data is handled, and what guardrails are in place. Our role is not to ask institutions to simply trust us or any single provider. It’s to ensure they have the transparency, control, and flexibility to make informed decisions for themselves.
That commitment is reflected in how we approach privacy, security, and accessibility across everything we build and support. Customer data remains under institutional control: we do not use that data to train external models, and we do not deploy these tools without clear agreements and governance in place. Our AI capabilities are designed to meet the same enterprise-grade security expectations institutions already rely on, and we prioritize accessibility from the start so that AI expands opportunities for all learners rather than creating new barriers. We also provide transparency by publishing a “nutritional facts” card for each of our AI-enabled features.
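The published cards are human-readable documents, but the kind of disclosure each one makes can be pictured as a simple record. Every field name below is hypothetical, included only to illustrate the categories a card covers.

```python
from dataclasses import dataclass


@dataclass
class NutritionFactsCard:
    """Illustrative record of the disclosures a per-feature card makes."""

    feature_name: str
    model_provider: str            # which model or vendor backs the feature
    data_accessed: str             # what data the feature reads
    data_retention: str            # how long inputs and outputs are kept
    trains_external_models: bool   # whether customer data is used for training
    human_review: bool             # whether a person reviews the AI output
```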
Our partnership with Amazon Web Services (AWS) adds another important layer to how we think about AI at Instructure, particularly when it comes to security and control. By leveraging models through Amazon Bedrock, institutions can access powerful large language models within a managed environment that keeps data contained, governed, and aligned to enterprise-grade security standards. That means organizations can take advantage of advanced AI capabilities while maintaining clear boundaries around how their data is used, giving them greater confidence as they move from experimentation to real-world implementation.
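Purely as an illustration of that boundary, here is a minimal sketch of calling a model through Bedrock’s Converse API from an institution’s own AWS environment. The region, model ID, and prompt are placeholders, and this is not Instructure’s implementation.

```python
import boto3

# The Bedrock Runtime client runs inside the institution's own AWS account
# and region, so prompts and responses stay within that governed boundary.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this course module."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Because the call never leaves the institution’s managed environment, the same access controls and audit trails that govern the rest of its AWS footprint apply to the model traffic as well.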
At the same time, we are focused on making AI genuinely useful in the day-to-day work of teaching and learning. Through IgniteAI and our partner ecosystem, we’re building capabilities that help instructors save time, engage more effectively with students, and make course content more dynamic and accessible. These capabilities are embedded directly within the platforms institutions already use, so adoption happens naturally within existing workflows rather than requiring entirely new systems. Just as importantly, institutions can enable or disable individual AI features in Canvas, shaping the experience to their needs and innovating at their own pace.
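As a hypothetical sketch of that opt-in model, per-feature gating might look like the following; the flag names are illustrative, not actual Canvas settings.

```python
# Hypothetical per-institution flags; the names are illustrative,
# not real Canvas feature settings.
AI_FEATURE_FLAGS = {
    "discussion_summaries": True,   # enabled by this institution
    "smart_search": True,
    "content_translation": False,   # held back pending policy review
}


def is_enabled(feature: str) -> bool:
    """AI features stay off unless the institution explicitly opts in."""
    return AI_FEATURE_FLAGS.get(feature, False)


if is_enabled("discussion_summaries"):
    pass  # only then does the AI code path run at all
```

Defaulting unknown features to off is the design choice that matters here: new capabilities arrive disabled, and the institution decides when, or whether, to turn them on.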
AI will continue to evolve quickly, and there will always be new tools, new models, and new possibilities to consider. What matters is creating a foundation that allows students, educators, and institutions to navigate that change with confidence. For us, that means staying committed to an open and flexible ecosystem, holding a high bar for privacy and security, and ensuring that institutions remain in control of how AI is used within their environments. Trust is not something that can be assumed, especially in education. It has to be built over time through consistent, transparent decisions.
If we get that right, AI won’t just be another layer of technology. It will become a meaningful part of how institutions support teaching and learning, grounded not just in innovation, but in trust. This is where the real opportunity sits. Not in chasing the latest AI tool, but in building the institutional capability to use AI well. That’s what turns innovation into impact, and it’s exactly the kind of work that defines the Impactful Eight.