Strategies for Evaluating AI Tools in K12
States across the country are developing guidance on AI in education — but how are they evaluating whether these tools work for students? In this session, we were joined by Pati Ruiz, co-author of Digital Promise’s What States Say About Evaluating AI in Education report, and Vera Cubero, who led North Carolina’s K12 AI guidance initiatives.
Together, we unpacked findings from a review of guidance documents across 32 states and Puerto Rico and explored how states are moving from early efforts toward more rigorous, evidence-based evaluation of AI-enabled tools in K12 classrooms.
Key topics included:
The three stages of AI evaluation maturity — what each looks like in practice across specific states and where your school/district might fall on the spectrum
Why traditional evaluation approaches may fall short for AI-enabled tools, and what more rigorous, outcomes-focused evaluation actually requires
Why educators need to be intentional users of AI — and how centering teacher, student, and community voices makes evaluation more meaningful and effective
The role of co-design and feedback loops in building evaluation processes that include student, educator, and community voices
What education leaders can do now to advance their evaluation efforts and make more informed decisions about AI adoption
-
Evaluating AI in Education: An Analysis of State Guidance
State AI Guidance for K12 Schools
North Carolina K12 AI Guidance
The Education Technology Joint Powers Authority (Ed Tech JPA)
More on LAUSD's chatbot
Digital Promise AI Literacy Framework
US Department of Labor AI announcement
-
Amanda Bickerstaff
Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.
Pati Ruiz
Dr. Pati Ruiz is a senior researcher and Director of Learning Technology Research at Digital Promise. Pati leads efforts to ensure AI and emerging tools are adopted responsibly, equitably, and with educators, learners, and families at the center. Her research remains grounded in classroom realities through 16 years of experience teaching middle and high school Spanish and computer science, as well as serving in school administration roles. Through strategic partnerships with educators, researchers, policymakers, and learning communities, she influences how emerging technologies are conceptualized and implemented in formal and informal learning contexts. Dr. Ruiz is a member of the OECD-EU AI Literacy Framework Expert Group and was honored as a 2022 Brilliant Woman in AI Ethics, selected as a 2023 EdSAFE AI Fellow, and named a 2025 ASU+GSV Leading Woman in AI.
-
00:00
Amanda Bickerstaff
Hi everyone. We're really excited to have you all here today with us. I cannot be more thrilled to have two amazing leaders in AI and education who have been part of my life in one way or another since we started AI for Education. I think a lot of us started this work at exactly the same time, and it's really nice to be able to leverage all of this shared and collective knowledge. And so today we're going to be looking at strategies for evaluating AI tools themselves in K12. And it's really amazing to have Pati Ruiz with us today, who has had a really interesting career, and I'm sure you can talk about it, but has been leading some really interesting work at Digital Promise around AI adoption, especially now at the state level.
00:44
Amanda Bickerstaff
And then we have Vera Cubero, who is the architect of the North Carolina guidance. In fact, North Carolina was the fourth state to put out guidance, and Vera was the reason and the engine behind it. And so what we're going to do is we're going to start with a conversation about the research itself, and then we're going to bring Vera on to be able to kind of dive into the practical nature of what's happening. And as always, you know, we're a pretty amazing community. We have over 100 people already with us today. As always, say hello, wherever you are. But always know there are two ways to communicate. One is with your peers in the chat. So if you have resources or questions, please do that.
01:25
Amanda Bickerstaff
But if there's a question specifically for one of the panelists or myself, please use the Q and A function, and I'll be working at that all the way through. We'll also be dropping a lot of resources. But if you don't catch a resource, don't worry about it. It'll actually be part of the post-webinar information that we send out tomorrow. So just know you'll have that there. But I want to come off and talk a little bit about the work that brings us together. And so, I mean, I think, Pati, we met three years ago, right? And we were in a kind of think tank. I think the first time we ever met was out in Tempe, Arizona, with a lot of people kind of sitting and talking about AI and education.
02:05
Amanda Bickerstaff
And one of the things that really struck me from the very first time I met you is that you were thinking really strategically and tactically about AI. And I think at that point we were all talking in these very big general statements, and you were one of the first people that I met whose thinking felt grounded. And so I've seen all the work that you've done, and I know you've done quite a few things at Digital Promise since then, but I was really excited to see this new report that you have coming out. So I'd love for you to introduce yourself and then share a bit about this report, and we'll make sure to drop it in the chat as well. Great.
02:36
Pati Ruiz
Thank you, Amanda. And I'm really excited to be here with both you and Vera. For anyone who doesn't know Digital Promise, we are a global nonprofit and we work to expand opportunity for every learner. So we work with educators, with researchers, technology leaders, and communities to really design, investigate and scale innovations that support learners, especially those who have been historically excluded from these spaces. I'm Pati Ruiz. I'm a researcher on the learning sciences research team. I do this work, Amanda, to ensure that the tools and technologies that we're bringing into classrooms are responsibly and safely used in those spaces. I come from a teaching background, So I spent 16 years in the classroom. I taught computer science and Spanish at the middle school and high school level. I was also a school administrator.
03:37
Pati Ruiz
So that informs the work that I do and really helps me ground it in practice. Back in 2024, we reviewed initial guidance from seven states on AI in education. Those states at the time were California, North Carolina, Ohio, Oregon, Virginia, Washington State, and West Virginia. We identified themes across the states' approaches to artificial intelligence. There were similarities in terms of the focus: the needs for workforce development, the need for AI to be human-centered, an importance even in 2024 around professional learning, AI literacy, and structures that promote data privacy and security. And so we wondered what the guidance coming out now was saying. And so what we ended up doing is we followed up on our initial report, and since 2024, 25 additional states, including Puerto Rico, have developed guidance on the implementation of AI in education.
05:04
Pati Ruiz
And so what we did is we categorized state-level AI in education evaluation guidance. We really focused on the evaluation of these systems and tools, right? And we identified three developmental stages where states were at when it came to just the evaluation guidance. We narrowed in on the evaluation. We identified nascent and exploratory, emergent and piloting, and systematic and evidentiary. We analyzed all of these documents, and a majority of the states, we found, are primarily engaged in nascent and exploratory evaluation efforts, and fewer states are engaged in more systematic and large-scale assessment. And so our report provides this comprehensive analysis of how these 32 states and Puerto Rico are currently guiding the evaluation of AI in K12 classrooms specifically. And it can serve as a roadmap for understanding the different levels of AI evaluation maturity. Right.
06:16
Pati Ruiz
Categorizing the state efforts into those three stages. And we learned a few things that we thought were important as we encourage a move towards requiring more rigorous evaluation of AI-enabled tools. And of course we all know that this needs to be grounded in the improvement of student learning outcomes before we can widely adopt these tools responsibly. So several themes that we identified were the importance of evidence-based decision making through the use of co-design and feedback loops. And we believe educators need to be active users, not passive consumers, of these AI tools and incorporate their voices as well as the voices of students and parents in the design of these tools. And we saw states like Alabama and Massachusetts that have established dedicated work groups to ensure that this type of implementation is multi-directional and transformative.
07:24
Pati Ruiz
We also saw an important focus on piloting and monitoring, so ensuring that states are starting to become more structured in their evaluation, moving beyond satisfaction surveys to tracking specific qualitative and quantitative metrics. An example there is Wisconsin. They're piloting AI in specific subjects, including math, and comparing student progress against control classes to measure impact. And so that's exciting. Then we're starting to see more of a move towards evidence-based implementation. So we identified a critical need for states to move towards this systematic, evidence-based implementation. And there are a few states, Colorado and Louisiana stick out as ones that are tracking student progress and using metrics on AI access and engagement to evaluate educational outcomes from the use of these AI tools.
08:30
Pati Ruiz
And as we know, we are all advocating for more high quality AI literacy so that the use of AI in education can be more meaningful as it gets integrated into our learning environments. So that's an overview of that report and a little bit of background.
08:51
Amanda Bickerstaff
Absolutely. And so I think it's interesting because I think there is a big push towards tool adoption, especially GenAI edtech tool adoption at the district level. And so when you were doing your research, did you see that reflected? Because I know that Utah, for example, has a collective purchasing agreement where they've chosen some tools that their district partners can access after a vetting process. But did you see that tension between we need these frameworks, we need evaluation, we need piloting, and then schools going all in on not just one but maybe multiple GenAI edtech tools?
09:31
Pati Ruiz
So there needs to be more work done in this space, Amanda. I think that's the answer. And there are cooperative purchasing groups. We see one in California, the JPA, that do more evaluation. They don't make recommendations, right? They evaluate and provide evidence. And there are different mechanisms to do this across the country. And I think that it's important that we continue to develop models for evidence that are clear for educators. And I think that's what we're trying to arrive at, visualizing evidence via frameworks that make sense to educators. And so we all need to be talking to one another about what that looks like. And there are several emerging models.
10:33
Pati Ruiz
But we also have to ask the important question of evidence for whom: evidence that these tools work for whom and under what conditions, which, as we know as researchers, is an important question to be asking.
10:48
Amanda Bickerstaff
Yeah, I think one of the things that has always stuck out to me, whether it's GenAI edtech right now or consumer models like ChatGPT, Claude, Gemini that are being used and marketed towards schools and districts, even higher education institutions, is that there seems to just be a really big lack of data on usage. I think what's interesting is it's not even efficacy or outcomes. It's just how many kids are using it and in what ways, or how many teachers are using it and in what ways. And so it feels like there's a pretty large gap in just talking about the level to which this is being integrated into schools. Maybe it's because it's so new. I mean, you've been doing this longer than I have.
11:36
Amanda Bickerstaff
So this is my first real moment of education technology and trying to figure it out at this scale. But it does feel like the system, or not the system, but the tech itself is not quite ready to talk about what's actually happening in schools. I don't know if you feel that way.
11:53
Pati Ruiz
Yes. And an interesting observation: I was just out in schools observing learners using some AI-enabled technologies. And it's not just the question of usage, right, Amanda? It's the question of high-quality usage. Because you can have students sitting at a computer and just kind of biding their time. They know how to hack these systems, just clicking advance, not actually engaging. But we are seeing tools that are beginning to come up with metrics around quality of usage or quality of minutes. And so we are seeing some advances in the ability to track a little bit higher-quality usage. I mean, I'm trained as a learning scientist, and so there are so many factors that go into what high quality means. So defining that will vary from product to product. But I think we are seeing some progress there.
12:48
Amanda Bickerstaff
That's good. Yeah. And I think, you know, one of the things that we've heard a lot from partners on the tech side is that people desperately want data. Right. We want to understand what this means. We want to know. I mean, even though I know this is hard to sell to people, the good and the bad, the uncertain, the ugly, the opportunity. It's such a new field, and when we talk about AI, it's a big field. Right. So there are lots of incumbent AI, like machine learning-type testing software and adaptive learning, and then we've got the newer GenAI edtech tools that are very chatbot-based, or very much you use a prompt and you get an output.
13:29
Amanda Bickerstaff
And I think that when you look at the states, were there any states that really stuck out to you as being really progressive in the ways they are approaching tool adoption?
13:42
Pati Ruiz
There are many states that have promising adoption practices. We did see Colorado as being kind of in the lead. Louisiana is doing great work as well. And then there are some where we're still needing more information in order to make a determination.
14:05
Amanda Bickerstaff
Right.
14:05
Pati Ruiz
So we made some educated guesses based on the information that was provided in their documents. But we do need more information and more of a focus on moving away from kind of a permanent seal of approval of a product, right, towards more continuous improvement, rapid-cycle evaluation, and a basis for communication that really enables discussion about evidence across roles within our ecosystem.
14:36
Amanda Bickerstaff
Yeah, no, the educated guesses don't seem too surprising. I mean, I think that you'll start to see. Even I shared kind of all the guidance, if you want to look at the raw data, so to speak, from what.
14:48
Pati Ruiz
Yes, you're collecting all of that. Thank you for doing that.
14:51
Amanda Bickerstaff
It is interesting because, for example, even New York City finally put out AI guidance, and it does seem like a little bit of what we're doing right now is more in the generality. And so one of the reasons why we wanted Vera to be here, not only because she's retiring soon and it's my last time to get her on a webinar. And Vera, if you don't mind coming to stage. One of the things that really was interesting about the North Carolina guidance, even when it first came out, is that out of almost all of the guidance at that stage, and even now, it's one of the more tactical versions. And I would love for Vera to talk a little bit, and of course, introduce yourself.
15:29
Amanda Bickerstaff
You have a bit of a fan club, of course, within AI for Education, but also here. It's been such an amazing opportunity to learn and be with you these last three years. But you know, I think when you thought about pulling together the guidance for North Carolina, maybe talk about how you thought about it, and then we can dive into the tool adoption component directly. But yeah, do you mind introducing yourself and talking a little bit about the process?
15:53
Vera Cubero
I'm Vera Cabera. I'm the emerging technologies consultant currently for North Carolina and I have over the last three years kind of led the AI implementation work. But our guidance, when we first started, we did pull together a whole committee to make sure that we had all the voices. So we wanted to make sure we have our data privacy, our cybersecurity, exceptional children, several people from our office of Digital Teaching and Learning. And so everybody originally came together and kind of brainstormed and brain dumped. But we wanted to align it to our digital learning plan. Dr. Ashley McBride was kind of the chair and she was the leader of the digital learning plan. And it worked out really well because we have those five focus areas. So we kind of everybody kind of brain dumped.
16:37
Vera Cubero
But at that point, it was summer of 2023, and I'd been learning everything I could ever since ChatGPT came out and had been doing a lot of training for educators. So I'd created a lot of frameworks and tools and things I was using in training. And so after the original kind of brain dump, I kind of took on the role of editor, because when you have a lot of different voices, a lot of people putting things in, we had some repetition and just some flow that needed to be worked out. And I ended up including a lot of those frameworks and things because I wanted it to be really useful.
17:12
Vera Cubero
I wanted it so that somebody could take it if their district wasn't doing training, which at that time hardly any were, and a person or a district leader could actually take it and have kind of a 101 course for themselves. And so it got a little longer than originally intended. But when it went back to committee for the final approval, everybody agreed that it strengthened the document to leave those things in. And so that's kind of how it came to be the more tactical version, where we had a lot of the frameworks and things. And a lot of them, you know. Well, some of them were your work from AI for Education, some were mine, but most of them are still relevant.
17:50
Vera Cubero
You know, they've changed slightly over time, but we try to make it as timeless as possible, of course, in this era of very quick change. But hope that kind of answers your question.
18:02
Amanda Bickerstaff
It does. And so when you were thinking about. So first of all, we always suggest looking at the North Carolina guidance. It actually even led to Vera and us collaborating on the EVERY framework, which you can drop in the chat, which is about responsibility and taking responsibility, being an active participant. So to Pati's point, we're not just clicking buttons or accepting answers. We're actually leading the bot. And so when you thought about tool adoption, because you were super early. I mean, Adeel didn't start MagicSchool. Sorry, Adeel is the CEO of MagicSchool. He didn't start it until May of 2023. And so it was pre most of the tools we think about with GenAI edtech.
18:46
Amanda Bickerstaff
So how were you thinking about it then, and how has it shifted now, as there are more of these tools? How did that thinking shift?
18:55
Vera Cubero
In the document, as far as tool selection in particular? Is that what you're asking?
18:59
Amanda Bickerstaff
Yeah. Like the AI tools themselves? Yes.
19:01
Vera Cubero
Yeah. Well, I mean, at first, you know, there were not that many tools, and nobody was willing to adopt the ones that were there. And we don't really, as a state. When you're in a state agency, you have to be very careful about recommending vendors. And so we didn't want to come at it as making recommendations. I would love to be able to provide that list, because that's what our people really want: just a list of these are the safe tools. But we have, you know, a policy of statewide vision but local autonomy. And so they really want that autonomy to be with the districts. But as these tools, like you mentioned, MagicSchool, some of those came about as kind of safer options, we did kind of include those in our guidance.
19:50
Vera Cubero
Because, you know, people still today, even if they were kind of leaning towards some of the large language models, the news cycles of the last several months have certainly steered that conversation away from that. So we did want to kind of give people some alternatives. So we do list some of them, but we have a very strict policy that we don't recommend.
20:12
Amanda Bickerstaff
Right.
20:13
Vera Cubero
But since then we also came out with our third-party data integration policy, I guess it's been about a year and a half ago, a policy that districts have to go through themselves to validate tools. And one of our first AI collaborative committees was led by Marty Sharp, and his group kind of modified, I think it was Eric Kurtz originally who did it, but they created a modified AI evaluation framework that we included in there, because that really is one of the greatest needs for districts. They need that procurement guidance. So I would love, as a state, to be able to just say, hey, these are all the great tools that are still safe and you can use these. But our state just does not operate like that. And I think most states don't.
21:02
Vera Cubero
We are looking at some statewide pricing similar to what Utah has done. We're trying to get some of those things, just because, of course, the safer versions of the tools are the paid versions. And so, you know, funding.
21:17
Amanda Bickerstaff
They can be quite expensive. Right. Especially in the long run.
21:21
Vera Cubero
Yeah.
21:21
Pati Ruiz
So.
21:22
Amanda Bickerstaff
Yeah, definitely. Sorry, go ahead, Vera.
21:23
Vera Cubero
That's okay. So, you know, funding is always an issue. So we're looking at anything we can do to try to still give them the autonomy to choose, but choose from a few tools that we've kind of vetted independently. And we also are in support of the EdTech Index from ISTE and ASCD. And so as a supporting state, we recommend people kind of check on there and see if tools have been vetted that way. Common Sense Media, of course, does some tool vetting, so we also kind of recommend and steer them in that direction. But that's an area where I feel like I wish we could give more specific guidance, you know, rubber stamp some things, because I think that is what the districts need.
22:05
Vera Cubero
But as you both know, AI tools are different from other digital tools. They're just not static things. They change too often, and there are too many variables. So we've just not been able to do that.
22:18
Amanda Bickerstaff
No, absolutely. I mean, Pati, what's your reaction, of course, having looked at it in a more kind of macro way, to actually hearing the whys? What resonates with you about the kinds of questions? Because one of the things that we do. I mean, I think everyone would love all of us to come out and say these have the stamp of approval, best in class. And I think we are not saying it for the most part because I don't think we know. Right. But yeah, anything that you think about or want to respond to in what Vera just said about their approach would be amazing. Oh, you're on mute.
22:50
Pati Ruiz
Thank you. Thank you. I just dropped two links in chat. I had previously mentioned something related to what Vera just talked about: the Educational Technology Joint Powers Authority. I dropped a link to their work in chat, and their aim is to streamline procurement. So very important work. And then also the EdTech Index that Vera recommended from ISTE, which is an amazing resource. We've also learned a lot at the district level, specifically from Denver Public Schools. They allowed us to study their procurement process as they were getting ready to revamp it. And so we were able to study what they were doing. And they ended up establishing a cross-departmental district collaborative team that leveraged standardized evaluation processes to approve or reject the use of specific edtech tools, and it resulted in a shared district repository of approved tools.
24:06
Pati Ruiz
And we learned a lot about the work that they ended up doing to prioritize access within the district to high-quality and secure edtech products. And they had several requirements, and the district ended up actually saving around $200,000 while maintaining or increasing the number of purchased licenses. And that was an incredible cost savings, especially because they were able to uncover products that weren't used in classrooms, or products that were very similar to one another that hadn't been evaluated and that were just being trialed in classrooms and then not used. So going back to your original question about usage and how we identify usage, there's a lot of work to be done in this space for sure.
25:04
Amanda Bickerstaff
Yeah. Well, Denver is really interesting, right? Because they banned ChatGPT. They came out very strongly saying no ChatGPT, and they went all in on MagicSchool. Part of what's really interesting, because they're a Denver, Colorado-based company, is seeing how maybe that work that you got to see into led to making those decisions. I think what's fascinating for us, because we do most of our work at the district level, that's our primary work, is that even asking the CTO or the CIO or the assistant superintendent in charge of technology, there's a real lack of awareness of usage of any of these tools.
25:49
Amanda Bickerstaff
Not just the tools that everyone knows the name of, like a ChatGPT, a MagicSchool, a Brisk, or a Gemini, but the ones that kids are using, like Quillbot or Spinbot or Character AI, which are, you know, paraphrasing and, I will call them out, cheating tools, versus AI detectors like GPTZero and ZeroGPT that teachers use. It feels like there's a missing layer there of what is even being used at all. There's research that says school districts have 2,000 edtech tools, but the usage is so much smaller. Right.
26:28
Amanda Bickerstaff
And I think that it feels like now, with how much data we can have, and even using generative AI systems to help with that data analysis, we're still missing that layer. Again, it's not even talking about impact; it's what's being used, and then defining what impact looks like. And again, I'm new to this world, and so I think it's a little bit surprising to me, but I think it's a pattern, right, that we have these big moments and lots of technology that comes in, but then the connective tissue of how it's regulated, how it's procured, how it's vetted, how it's evaluated seems to fall apart. Does that sound right? Both of you have more experience than I do, so I don't know if that resonates.
27:08
Vera Cubero
I think it does. I mean, the shadow IT, we certainly have that going on. But a lot of times when you do a survey or you do an evaluation of all the tools that are used, some of them might only be used by a handful of people. One person sees it at a conference and, you know, tries it and hopefully does not upload all their students to the platform. So a lot of districts have kind of locked that down more over the last couple of years because of the increase in cybersecurity threats. But I think that's true in a lot of them. Up until a couple years ago especially, a lot of people were not aware of everything that was being used.
27:45
Vera Cubero
And so having that consistency. It's easy to get kind of, you know, starry-eyed about the latest flashy thing that comes out. But there have been thousands of tools that have come out in the last three years, and as soon as ChatGPT or Gemini or one of them updates, a lot of the startups are just kind of falling by the wayside. And so we've really cautioned people also about data. Any data that you provided these tools, you know, what's going to happen when that business suddenly goes out of business because ChatGPT rolled out a new update and it does exactly what they're doing?
28:25
Vera Cubero
And so I think there's a lot more awareness in our school districts over the last two or three years about bad actors and, you know, protecting privacy and data, because we've had data breaches and some things like what happened, I think, with LA Unified.
28:45
Amanda Bickerstaff
You know, with the Ed chatbot. Right. That was a very interesting, juicy AI show when that was launched, and it did expose a lot. If you don't know the story: LA went very strong into creating a generative AI chatbot for parents and for families, and the company that was the consultant on it ended up going bankrupt. And it uncovered servers in seven different countries, data that was being farmed, overselling of capabilities even. I mean, I think some of the overselling of what it could do two years ago is still overselling what it could do today.
29:29
Amanda Bickerstaff
And so I think it was a strong kind of wait-a-second moment: we don't have to be the fastest, and in fact, I think we hear a ton that we don't want to be the first mover. We want other people to help us figure this out before we add risk to our classrooms, our students. But on the other hand, there are other schools that are like, we will buy everything; it's going to be 5 to 10 GenAI edtech tools. And it seems like it's a little bit of an either-or. So there's a comment from Jessica: we have this very interesting movement right now that's both an anti-tech movement in state regulation and state bodies, as well as questions in and around data privacy, very specifically around FERPA and COPPA.
30:17
Amanda Bickerstaff
And so it's pretty interesting because generative AI specifically absorbs so much information. Right? The amount of information that is stored in, or accessed by, these tools, especially chatbot-based ones, is so much bigger. Can you talk a little bit, Pati and then Vera, about how you're seeing this kind of anti-edtech shift potentially impact generative AI and AI tools, but also where these tools can get pretty tricky, even without that pushback, around data privacy?
30:51
Pati Ruiz
Yeah, I mean, we want learners using high-quality and secure edtech products in our learning environments. And key to that is establishing standardized review processes at the national, state, and local levels, and evaluating the technology based on cybersecurity standards and a whole host of other standards. Right. Susan, in the chat or in the Q&A, actually asks about guidelines for making sure that AI tools are fully accessible for learners who are neurodiverse and have varying needs. And I did want to address that briefly, because part of that Denver Public Schools process did include requiring vendors to provide a Voluntary Product Accessibility Template, which is a VPAT, a template containing information about how a product conforms with Section 508 of the US Rehabilitation Act of 1973.
32:02
Pati Ruiz
And so the VPAT is one template that you're going to start to see more and more, and it includes reporting compliance with the Web Content Accessibility Guidelines, making sure products meet the 2.0 or 2.1 guidelines; the Section 508 standards, which I already talked about; and also the European Union. Right. There's a European Union international version as well. And so this is something that edtech vendors are going to need to start being able to provide. It does take some work to do this, and we did see in the Denver Public Schools process that there were vendors that chose not to get their VPAT, so they were taken out of consideration because of that. But to Susan's point, we will begin to see more products providing guarantees that they are accessible to all learners, and more inclusive and usable.
33:19
Pati Ruiz
And that is something, when it comes to AI and GenAI use, where we need to do a better job of vetting, because these tools, as we know, are constantly changing, and we do need to be developing more benchmarks to ensure compliance with those accessibility standards. So I kind of went off on a tangent, but I'll hand it over to Vera.
33:43
Amanda Bickerstaff
But I do think what's interesting, especially around generative AI specifically, is that these tools have bias built into them. GenAI tools. And if a kid says, I'm slow, I need help, it actually might change the quality of the help, so to speak, and potentially in a negative way. There are some fascinating places where we're going to see this play out. And it's good to see: we're finishing our student course right now. We have our free student course and we have our larger student course, and so we're doing the new ADA compliance.
34:16
Amanda Bickerstaff
And you know what, it is not the easiest thing, but it starts to make a lot of sense pretty quickly about learning design and ensuring that everyone does have access. It also changes the way you think about designing, right? You can do something that actually works, instead of something that is only for those who are readers, or only those who are watchers, or some combination of both. But I know, Vera, this is something that's happening in a lot of states, which is, you know, we have these anti-tech, anti-edtech bills that are happening. We've talked about Utah a couple times today, and Utah was the first to actually do this.
34:52
Amanda Bickerstaff
But there are 11 states that have some sort of regulation going through around lowering access to edtech and tech of any type, especially in elementary schools. Can you talk a little bit about how you see that playing out in North Carolina, but also how it can potentially affect GenAI tool adoption?
35:13
Vera Cubero
Yes, I mean, we certainly see it in North Carolina as well. I think it's a national trend. But I just feel strongly that all screen time is not equal. And I think that we really need to carry that message forward and really be intentional about screen time, because I have seen it used, and I'm sure you have as well, even before AI came about. All screen time's not equal. Sometimes it's babysitting, and we just have to be honest about that, and it's not really improving learning. And so we need to be very intentional about that and demonstrate ways that it can be used to really further learning, to really require human judgment. So that intentionality in design, I think, is very important. But I think innovation and safety have to progress together.
36:01
Vera Cubero
You know, we can't have kids growing up in analog schools and going into a very AI-rich world in two or three years, one where we're already starting to see the job impacts. That balance is difficult, but I think we really need to emphasize it. And as far as tool selection, as I said, innovation and safety kind of have to progress together, and I think AI tool evaluation is the bridge between them. If we're very intentional about evaluating and selecting the tools, then that can help us further that discussion, that it can be useful.
36:40
Vera Cubero
But if we just rush after every shiny tool that comes out, and we're adopting tools without really having a problem to solve, just adopting a tool for the tool's sake. We've been saying for a long time, no tech for tech's sake. We need to say the same thing about AI. There needs to be a problem to solve, and it needs to be evaluated carefully, and then followed up with: is it solving the problem? Because what's really going to hurt AI adoption and tech adoption is if people do adopt too quickly and we have these things that end up on the news, where there's danger or where students' privacy is infringed upon. That's going to make it worse and give more fuel for that discussion.
37:28
Amanda Bickerstaff
I think it's interesting, because one of the things that we're seeing is that there does seem to be, at least at the district level, more capacity and interest in buying tools and doing AI literacy, especially AI literacy for students. And we have a couple questions, but also, we can't do a webinar where we don't say AI literacy at least 25 times; I think that would be impossible. But I do think it's worth drawing some of the lines that we see that could be potentially concerning. SchoolAI and MagicSchool have chatbots that are used to brainstorm, historical chatbots, et cetera. You've got Gemini, which is now all ages, which is, let's not even go too deep into that because I'll be very sad. You've got ChatGPT that's starting up with parental controls.
38:17
Amanda Bickerstaff
But what we see is that for something like SchoolAI or MagicSchool, their chatbots are primarily used in elementary school settings. They're being used in K-5 settings where we know, qualitatively, there is the least AI literacy, the least foundational skill to be able to evaluate and lead a chatbot, and potentially the least AI literacy work done for teachers in those settings. We sometimes have to push to say it can't just be your middle or high school teachers that get this kind of training. There does seem to be this pretty big disconnect.
38:53
Amanda Bickerstaff
And one of the things that really is staggering to me is that the NCES data from the end of last year said that 67% of schools had done at least one training with teachers about the responsible use of AI, and only 14% had done any with students. I think there is this massive disconnect between preparing students to actually even know what these tools are and aren't, and some of the enormous risks: emotional dependence, bias, inaccurate information, poor pedagogy. These tools are not pedagogical, right? There is this incredible disconnect that we find. We had people come back to us and say, hey, we went all in on tool X and we realized that does not work, and now we have to go back. We wish we had done that first.
39:41
Amanda Bickerstaff
So how do we think about that? Because I know that in the state guidance, most of them talk about AI literacy, but very few have really structured AI literacy that goes all the way down to the point of change, which is that classroom and school level. Right. And I know that, Vera, you also did a lot of work with the collaborative summit, but again, it was like, how do we get all the way down? What are you seeing in terms of what we can do to start shifting the thinking from go get the tool, to build the knowledge about the tools and then decide what tools can be useful?
40:16
Vera Cubero
Well, in the guidelines, we've got a callout to make sure the AI literacy comes before the tool, and I know that doesn't always happen still. But our guidelines also have that timeline where we specifically recommend against the use of chatbots in PK-5 at all, and we don't see that changing. We did get some pushback on it a year or so ago, but I have noticed that after the recent news cycles, we've stopped getting pushback about it, because a lot of people are like, oh, okay, I understand now why we wouldn't. And so I think we'll continue to be firm on that. But I think I've lost track now and forgot what your question was.
40:56
Amanda Bickerstaff
No, I mean, in our framework, we come out very strongly saying no chatbot use before AI literacy, and beyond that, we do not recommend it. But let's talk about this: we were very lucky we got to hang out with you all around the summits and the collaboratives. So you're one of the states, I would say probably Ohio and North Carolina are the two states that have done the most of, we're going to do AI literacy in some way, shape, or form. But it did feel like even in that situation, we were never getting to the classroom. Right.
41:29
Amanda Bickerstaff
So how do you think about flipping that? How can we create a groundswell so that for the kids in the classroom, and the teachers supporting the classroom, AI literacy is a non-negotiable? Because that does not seem to be the case today.
41:42
Vera Cubero
Well, we're really trying from the state level, but again, we have that local autonomy. We've added some things to our guidelines recently, I don't know if you've seen, but we've really tried to move beyond that discussion of just AI literacy, too, because teachers need to know what this looks like in the classroom if we're going to actually make sure that our students are learning it. And so we've got a couple frameworks in there. One is about human judgment and agency: how do you redesign instruction? We're really focused on instructional redesign to make sure that students are employing their judgment and that they're not just accepting the output. And then we have another one on micro-PBL, kind of re-envisioning what teaching the standards might look like with AI-enabled tools.
42:27
Vera Cubero
And those are both primarily focused on grades 6 to 12. But it is time that we start thinking about that student experience. If we're just saving time, if it's just an efficiency tool for teachers, then we're not preparing students for the future. So those things have been included in our strategic plan, which also has very specific callouts from the superintendent. There are three different action items in there about continuing the responsible, safe adoption of AI, and one of them is also about ensuring that students have that collaborative problem-solving experience. And we recently had our first solve-a-thon, kind of like a prompt-a-thon, except we really wanted to focus on that solving piece. We had 50 teams across the state, and they were solving problems in their own communities with AI. And it was a really amazing experience.
43:19
Vera Cubero
And we had our top 10 winners announced recently, and the top three winners are going to be speaking on our webinar April 1st. So we're trying to model that, too, for our teachers, so they can see: okay, yes, AI can be used for good, students can actually use it responsibly. And I think we've made a lot of progress, but I can't say that every classroom, every school has an AI literacy plan. We worked a lot on making sure everybody had guidelines, but guidelines are not enough if you don't have that roadmap, the AI literacy roadmap, to go with them.
43:55
Vera Cubero
And so we pulled together, you know, we did one day in each district across the state, had them bring their whole teams, worked on those roadmaps, and got those out. We're scaling it in every way we can, and we're fortunate enough that we have eight wonderful regional consultants who can do the work in their regions, and a lot of states don't have that. So I think we're making good progress, but we certainly have not arrived.
44:22
Amanda Bickerstaff
I mean, like I said, I think Ohio and North Carolina are the two states that have done the most around AI literacy. But Pati, you guys have two of the major AI literacy frameworks. We're coming out with ours, so we'll be a third one. We've got Digital Promise, which you've done with the OECD. Do you see that there is capacity or interest at the state level? There were questions about standards being developed around AI literacy, or at least ideas of graduation requirements. Are you seeing that at all in the guidance, or in what you're hearing at the state level?
45:03
Pati Ruiz
We are seeing, of course, that what is being taught in classrooms really comes from the standards. And so we do need some guidance there about how to integrate AI literacy cumulatively and consistently across the curriculum. And I just want to highlight something that I really appreciate about what Vera said, and something that we highlight in our framework, which is the centering of human judgment in all of this.
45:33
45:37
Pati Ruiz
AI literacy needs to be developed by educators so that they can bring it into their classrooms. And there are opportunities for connections, and to provide starting points for teachers, through things like digital citizenship, media literacy, data literacy, and computational thinking, which have been existing efforts, because a lot of the skills, practices, and competencies that are important to develop for AI literacy are connected to those foundational practices and foundational skills. And so, of course, learners will be better prepared to apply these practices to AI tools if they've had opportunities to develop them early and often across grades and subject-area learning. So we do still have a lot to do there.
46:32
Pati Ruiz
But I'm thinking specifically of data privacy and security, digital communication and expression, misinformation, data analysis and inference, and algorithmic thinking, decomposition, and abstraction as ones that we really need to be focusing on across the curriculum. And there are opportunities, again, for districts doing the work to identify how these skills and practices fit into the curriculum across grade levels and subject areas. So that is incredibly important for us to do. When we developed our framework back in 2024, we hoped that folks would take it and modify it and make it work for their local context. And we understand that each community has unique needs and priorities. And we hope that folks keep adapting this framework, using the OECD framework, using your resources, and figuring out what works for them and what is high quality for them.
47:40
Pati Ruiz
So we hope that they continue to evaluate these tools and resources that are coming out, to make sure that it's a combination of resources that works in that community and context.
47:53
Amanda Bickerstaff
Yeah. And I think that it does feel very much still like an uphill battle. We've been in two large rooms recently with about 160 school leaders, one of them in a large district in New York State, and it's just the still-nascent state of that work, right, Pati? Like, maybe not even being familiar that there is an AI literacy framework that has been released by Digital Promise, or that their state even has guidance, or that there are resources out there. I think there does seem to be a need. One thing that seems to be a positive leading indicator is the Department of Labor's work recently on their AI literacy framework, which is so practical. I mean, I'm sure, Vera, you and I were like, they probably read some of our stuff, hopefully. But they did.
48:48
Amanda Bickerstaff
They launched that course yesterday, right? It's a free course, tech space for accessibility. We haven't had a chance to take it yet, but I think that if we could, we would love to see something like that in every kid's hands, especially every middle and high school kid, plus parents, teachers, leaders. We still have not seen a level of informed population yet to make the decisions, which Vera talked about: you can't shift curriculum, you can't shift instruction, unless you understand the why. Right? And people right now don't have the why. And I think that this is an opportunity. So what I would say is, we've come up on time. But first of all, how amazing is it to have such a rich community of lovely leaders like I do?
49:32
Amanda Bickerstaff
It's been an absolute pleasure to get to talk to you today. But I think the most important thing that all of us have talked about is that we're still getting started, right? As Pati said, there's not one shining star, there's not one way to do this. It's about finding the right context, it's about finding the right resources, but it's also about asking the right questions, right? We want to ask questions, and we want to make sure that we, like our panel has done today, but also our audience, share great resources.
50:01
Amanda Bickerstaff
But there's a lot of work to be done, because what we definitely want is for tools to be vetted not just to have a tool, but to actually impact young people, and also the adults who are supporting the young people. So I just want to say thank you, everybody, for coming. I want to say thank you to Pati and Vera; they are amazing. We will follow up with all the resources in the chat, so you do not need to worry about that. And we couldn't even talk about it, but AI Literacy Day is on Friday. So whether you enjoyed this or you have a resource, bring something back to your community on Friday. Celebrate with us.
50:39
Amanda Bickerstaff
So we will be doing a webinar on AI literacy planning, and there's going to be work in North Carolina. Pati, anything to say for Digital Promise?
50:48
Pati Ruiz
I'm at the Tech Interactive in San Jose. Come join us.
50:51
Amanda Bickerstaff
Perfect. So we have all these things. This is the moment. But I just want to say thank you, and I hope everyone has a lovely rest of your day or evening, wherever you are, and we will see you on Friday. Thanks, everybody.