
Pinstripes, Socks, and Systems: How America’s Oldest Rivals Are Writing the AI Playbook for Schools

Image: a student uses a tablet on the floor of a school library

Last week, two of the most influential cities in the United States announced new AI initiatives that cement AI’s place in education and offer guidance on what responsible AI use looks like in the classroom.

New York City began the week by releasing comprehensive AI guidance for its teachers. Two days later, Boston announced a program to ensure every public high school graduate is AI-literate. Instead of banning AI, both cities are building a future where AI enables educators and is a key part of the curriculum.

While the approaches differ, the message is clear: AI is here to stay, and it’s time we prepare our students and empower our teachers.


Boston: AI literacy for all

Mayor Michelle Wu's announcement positions AI literacy and ethics alongside reading and math as core competencies students will need to succeed. And it acknowledges that students are entering a world where their ability to use AI, think critically about its outputs, and reason about its ethical application will all be central to their success. Instead of framing the decision as AI or not, it’s focused on teaching the kinds of skills we’ll all need to build the future we want.

And that decision stands to improve equity: teachers in every district receive better training, and students gain broader access to these tools. When we tell some students to avoid AI while others learn to leverage it, we widen the opportunity gap and create a new digital divide. Policies like Boston’s narrow that gap and stand to provide better access to personalized education.


New York City: Educator guidelines

New York City’s approach also acknowledges that AI isn’t going away and focuses on outlining its responsible use for teachers. The city's "traffic light" framework establishes clear boundaries for teachers, encouraging its use for “green light” tasks like lesson planning and content creation while banning it in high-risk “red light” use cases like grading.

This contextual approach highlights a key principle of all successful AI guidelines: they are never one-size-fits-all. Context matters for teachers as much as it does for students, and even when we bring AI into a classroom, we still need policies and systems that keep humans at the center of the process. New York’s framework respects that complexity by giving educators clear tools to make informed decisions.


No one-size-fits-all

Both of these announcements point to the same conclusion: there is no universal AI policy that works for every teacher, every student, and every classroom. What works in a ninth-grade English class in Brooklyn may look nothing like what works in a lecture hall at a research university.

This is why we built IgniteAI. From the beginning, our philosophy has been that the people closest to learning should decide how AI shows up in their classrooms.

IgniteAI’s granular permissions and flagging system lets schools enable or disable AI features at the institutional, departmental, or course level — giving teachers in Boston, New York, and beyond the tools they need to do what’s best for their students. And to help educators make informed decisions, every IgniteAI feature comes with transparent "AI Nutrition Facts" that detail which models are in use, which data is accessed, and how privacy is protected.

Whether you cheer for the Yankees or the Sox, we can all agree that data privacy and ethical use are non-negotiable. Without safety and trust, there is no place for AI or any other tool in the classroom.


What comes next

Best practices are still being discovered, and we all have the opportunity to teach and learn from each other as we decide what role AI should play in education. New York City’s guidance will evolve after public comment, and Boston's program will learn and adjust after its first year. And the broader conversation about equity, appropriateness, and what "AI literacy" actually means at different grade levels is just getting started.

But some things are clear: the most forward-thinking districts and institutions know that AI is here to stay, and they’re doing the hard work of building frameworks to responsibly deploy it and finding technology partners to support them on that journey.

At Instructure, we’re committed to building tools that preserve and protect educator choice. Because the future of AI in education shouldn’t be decided by technologists — it should be decided by the people in the classroom.

About the Author

Chief Architect

With over a decade of experience in education technology, Zach Pendleton acts today as Instructure's Chief Architect. Working closely with educators across the globe and with Instructure's engineering and product teams, he seeks to extend the reach of education and grow its impact. He has shared his passion at conferences across the world, and has been featured in a number of technology and education podcasts. Zach holds a BS in English Literature from Utah State, and a J.D. from BYU's J. Reuben Clark Law School.

