Generative AI governance and responsibility
Guiding ethical, trustworthy AI in learning
At Instructure, we believe artificial intelligence should amplify human potential, not obscure it. That means our AI systems and features align with our mission to elevate teaching, deepen learning, and put people first. Our approach to AI governance brings together rigorous oversight, transparent purpose, and collaborative responsibility.
Our guiding principles
We follow six core principles in how we design, deploy, and govern AI:
We prioritize educational goals over shortcuts. We avoid AI that encourages dishonesty or automates high-stakes decisions without human oversight.
Through AI Nutrition Facts, we explain how AI works and when it is used in our products and services.
AI should enhance learning, not replace it. Our tools summarize and support, but won't write essays or complete assignments for learners.
As in all of our products, we design AI to be fair and prevent bias.
Customer data is not used to develop or train our AI systems unless your institution authorizes it. Data used with our AI features is not retained for secondary use.
AI can be a force for accessibility. Our products adhere to WCAG and ADA standards to support students with language barriers and differing learning needs.
AI governance
We want institutions to feel confident that when they choose Instructure, they’re choosing a partner that treats responsible AI as a core part of how we build and operate. Our approach is simple.
A cross-functional governance team you can trust
The center of our program is the AI steering committee, a group of leaders from product, engineering, privacy, legal, security, and our academics team. This team meets to guide our AI direction, review new initiatives, and ensure we align our products with our values and your institutional expectations.
Clear guardrails and thoughtful review
We follow a structured set of AI policies, from our AI governance policy to our AI acceptable use policy. These policies define how AI is designed, reviewed, and monitored at Instructure.
Every new AI capability is evaluated by our privacy; governance, risk, and compliance (GRC); security; and product teams to confirm data protections, responsible use, and alignment with laws and our contractual commitments. Every AI tool goes through the same rigorous review as all of our software.
Your role and how you can engage
Responsible AI works best when everyone plays a part. Here’s how institutions, school administrators, and partners can stay engaged and empowered:
Review our AI features
Review the options you have for AI features, and choose what works best for you.
Stay informed about our updates
Keep an eye on release notes, trust center updates, and AI-related communications.
Encourage human oversight
AI should support, not replace, your decision-making. Build internal practices that keep people in the loop for meaningful or high-impact decisions.
Promote responsible use in your organization
Share training materials, and help colleagues understand AI best practices and responsible use.
Provide feedback
Tell us what’s working or what could better support your workflows.
We see AI as a shared responsibility, and your voice, questions, and expectations help shape how we build it responsibly. Visit our Trust Center Library for more on privacy, security, compliance, and AI: Instructure Trust Center.