Building Community-Wide AI Literacy at Your Organization
As AI reshapes the educational landscape, it's essential for leaders, educators, students, and caregivers to develop the knowledge, mindsets, and practices needed to prepare for an AI-driven future. As part of the third National AI Literacy Day, this session explored how to build community-wide AI literacy at your organization and introduced AI for Education’s SEE Framework, the first GenAI-literacy-specific framework in education.
Key topics included:
Importance of GenAI Literacy: Understand the crucial role of GenAI literacy for educational communities in an evolving educational and professional landscape.
The SEE Framework: Get an early preview of our GenAI literacy framework launching in April that includes:
The knowledge necessary to understand GenAI, its capabilities, and its limitations
The mindsets that underpin responsible GenAI in learning, work and life
The practices of Safe, Ethical, and Effective GenAI use
How to Approach an AI Literacy Plan for Your Organization: Learn from our work with institutions around the world about how to approach building AI literacy for all stakeholders and sizes of organizations
Check out this opportunity to prepare your organization’s community for the future of education and work, ensuring they can navigate, critically evaluate, and ethically engage with GenAI technologies in their academic, personal, and professional lives.
-
AI for Education Flagship Generative AI for Educators Workshop
The Rithm Project Study
5 Key Questions to Ask Teachers & Students About AI
Understanding Stakeholder Experience with Focus Groups on Generative AI
An Essential Guide to AI for Educators FREE course
GenAI Literacy 101 for Students FREE Course
Future Fluent: GenAI Literacy for Students
AI In Education: What Parents and Caregivers Should Know GUIDE
AI for Education Prompt Library for Educators
Free GenAI Student Literacy Lessons for Middle and High School Students
GenAI Literacy Trainer Essentials Course
-
Amanda Bickerstaff
Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.
Corey Layne Crouch
Corey is the Chief Program Officer and a former high school English teacher, school principal, and edtech executive. She has over 20 years of experience leading classrooms, schools, and district teams to transformative change focused on equity and access for all students. As a founding public charter school leader, she ensured that 100% of seniors were accepted to a four-year college. Her focus now lies in assessing the broader K-16 edtech ecosystem, uniting stakeholders at all levels to build a more equitable and abundant future for all. She holds an MBA from Rice University and a BA from Rowan University.
-
00:00
Amanda Bickerstaff
Hi everyone. Super excited to have you here for the third annual AI Literacy Day.
00:06
Corey Layne Crouch
Happy AI Literacy Day.
00:08
Amanda Bickerstaff
We are very excited here at AI for Education.
00:11
Corey Layne Crouch
Here with your coffee, because it's also Friday.
00:14
Amanda Bickerstaff
It's also a Friday. Hi, I'm Amanda, CEO and co-founder of AI for Education. And today I am with my wonderful partner in crime, Corey Layne Crouch, who is our Chief Program Officer here at AI for Education. Today we're going to really deep dive into building a community-wide AI literacy plan. We are just about to publish our first-ever generative AI literacy framework, and you guys will get the first preview. But I want to start by always saying hello to everyone. We are so lucky to have such an engaged community. So we'll go to the next slide, and as always, I know that we're going to have lots of good thinking about this. So please say hello and share resources. If they're great resources, we'll try to pull them out of the chat.
01:00
Amanda Bickerstaff
And as always, this is a time to be a community of practitioners really focused on this. We're going to have some help from Wendy and our team, who will drop some resources. And if you ever have a question that you want to ask Corey and me, I'm always in the chat, but feel free to put that in the Q&A. But we're really excited about this session. And if you don't know, here's a little bit of lore: I started AI for Education back in April 2023, and I met Erin Mote, the CEO of InnovateEDU, and they had just actually purchased EDSAFE, which was an existing alliance focused on AI safety, even back before generative AI.
01:48
Amanda Bickerstaff
And I was sitting there, and already I was like, AI literacy needs to be a national priority. I was just talking to Erin and I was like, why don't we create a national AI Literacy Day? And since we both work in Brooklyn, we came together and talked about it and were part of the steering committee that first launched this. So this is our third annual AI Literacy Day. And if you're not in the US, it's still for you too, because I see Greg and everyone from all over. But one of the things that we believe so strongly, and continue to believe so strongly, is that AI literacy, specifically generative AI literacy, is a non-negotiable skill for the future.
02:28
Amanda Bickerstaff
And we have been doing a lot of work over the last three years to help create that space, for educators first, and now more and more for students. So we'll go to the next slide, and what we're going to do today is talk a little bit about where we see things. And this is something that I feel is such a gift for the work that we do: I was just at a dinner last night with OpenAI and a bunch of other people, and OpenAI said, I'm pretty sure that your organization has trained more people on ChatGPT than we have. And I think that's right. What we've had is the distinct opportunity to go into districts and organizations and see just how people are responding to this. If you've ever been in one of our trainings, you might have seen this before.
03:13
Amanda Bickerstaff
And it's actually one of our oldest slides. What we have here is that there's still a strong banning action happening in some of our schools and organizations, and I would say it's pretty consistent. In fact, what we see is that banning can happen at any time: if there's an issue with a chatbot or something happens, we will see previously open technology banned very quickly. And this banning action really leads to a lot of hidden use. People are hiding their use, whether it's students, but also, and Corey, I'm sure you've heard this too, sometimes adults and leaders, saying, I felt like it was cheating, or I didn't feel like it was appropriate for me to use. So we see the banning action create these negative experiences for pretty much everybody within the education organization.
04:04
Amanda Bickerstaff
And then more and more, what we see is that most of our partners and most schools out there are in one of these two middle spaces. Either it's wait and see, as in, we're going to come to it, or maybe our state doesn't have guidance, or we're doing science of reading and all of our PD is devoted to that. So there's not been a no around generative AI, or a yes; it's just been, we'll come to it later. Or it's self-guided, where maybe you've had one or two PDs, probably mostly for teachers, and maybe you've talked a little bit about academic integrity. But what's happening is that in both wait and see and self-guided, you see a lot of uneven adoption.
04:47
Amanda Bickerstaff
Meaning that you have some teachers going all in on AI for instructional planning or even instruction, and other teachers going to pen and paper only. We have this really inconsistent approach. And you actually see young people in those situations really not knowing what's right: am I cheating? Am I not cheating? So we see, again, hidden use, and/or we see avoidance. I will say that even though there is more happening in self-guided, the cultural feeling is really the same: there's a lot of fear and uncertainty about what's the right way to go. And so everything that we have done over the last three years is to try to move people to an organization-wide approach.
05:30
Amanda Bickerstaff
And this is one of the reasons why Corey is the best partner in the world for this, because change management might as well be her middle name when it comes to really thinking about systemic approaches. But we believe so strongly that an organization-wide approach has to have AI literacy as its foundation. So Corey, do you want to talk a little bit about, when we think about organization-wide, why AI literacy is really that... oh, we just made confetti.
05:56
Corey Layne Crouch
Why not? It is AI Literacy Day.
06:02
Amanda Bickerstaff
But yes. Do you want to talk a little bit about that?
06:04
Corey Layne Crouch
Yeah, of course. I want to talk about systems change management, and really what I think about from my own work as a school leader and from working with districts around the country, and now around the world, on several school model and school program initiatives. We know that there are so many key components of an effective school model and an effective organization. And AI, as we all know here, or if you aren't thinking of it this way yet, you're going to change your mind in the next 45 minutes: we're not just talking about another technology tool. We are talking about a shift in the way that we work and that we learn, and even in the way that we are navigating relationships. It is impacting us beyond just, again, another tool.
07:03
Corey Layne Crouch
And so in order to set organizations and ultimately set our students and our young people up for success, we really believe that the only way to do that is to do it very intentionally in a way that is integrated into the broader strategic framework and model of your school and your organization.
07:25
Amanda Bickerstaff
Absolutely. And when we look at the next slide: our SEE Framework. First of all, I think I saw Marwa here from our women's group; we just got an embargoed copy out into the world to guests to start getting feedback. When we were thinking about starting to structure what AI literacy meant, we came up with the SEE Framework. And it's actually really interesting, because I still remember when we had a conversation, it was E-S-E, and we're like, that doesn't make much sense. And we were on a phone call and someone was like, why shouldn't it be SEE? And that worked out perfectly for us.
08:04
Amanda Bickerstaff
And for us, if you turn to the next page, it is completely founded on this generative AI literacy definition. What I will say is that this is one of the things we have seen: AI literacy can be so broad. We could do AI literacy focused on machine learning models, or AI literacy as part of general digital literacy. One of the things that we wanted to do is get incredibly tight on generative AI specifically. If you think about generative AI, this is where we start to have completely different types of interactions with technology than ever before. What's really fascinating here is that it requires a whole new set of skills and mindsets and practices, but they all relate back to how the tools actually work.
08:53
Amanda Bickerstaff
It's really interesting, because we don't ever try to beat people over the head with the technical aspect of these tools, but one of the most important things that we did, almost naturally, was dive into these knowledge bases through practical demonstrations. I don't know if anybody's been in our flagship workshop, but you've seen us do this piece where we actually show how the chatbots are always kind of making up new content, that they can be sycophantic and very pleasing, but they're also really bad at stuff. Like, in the case of the demo we do right now, it cannot draw an accurate map. Between Corey and me, we probably have like 500 bad maps. Generative AI has messed up hundreds of maps, because with its sycophantic and probabilistic nature, it's going to try to predict a map even if it cannot do so.
09:47
Amanda Bickerstaff
And so for us, when we thought about our SEE Framework, it really comes down to the ways in which we use the tools based on the knowledge of how they work. For us, those foundational knowledge pieces are things like being able to know the difference between AI as a large field and generative AI specifically, which is quite important, because people often conflate the two as if they're the same. In fact, there are videos out there from major organizations that talk about how AI, any type of AI, is trained on data from the Internet, and that's just not true. Google Maps and TikTok are not trained on the Internet; only generative AI models are. So that distinction is incredibly important to understand.
10:34
Amanda Bickerstaff
The second is, of course, the role of training data. You may have heard "garbage in, garbage out" or "bias in, bias out." Training data makes an enormous difference to how these tools work and how effective they are. And we want to understand that because these tools are probabilistic: they're based on what they see in their training data sets. The third is how GenAI learns and works, as in, how do these models actually function? When we talked about sycophancy: if you understand how these models are trained to be pleasing, it makes sense why they're sycophantic, because they're trying to get you to like the answer. And I'm anthropomorphizing a little bit, but it's the idea that they've been trained to be pleasing. And then the last one is what GenAI can and cannot do.
11:19
Amanda Bickerstaff
And this is probably the most difficult one to really nail, because of how quickly things are changing. But that's where those mindsets of starting to experiment, of learning and continuing to learn, are going to be very important. Let me hand it over to Corey, because once we think about these foundational knowledge pieces, they lead into the mindsets. And I will say, Corey, talk about this: how long have we been working on the SEE Framework document? Is it like a year and a half now?
11:48
Corey Layne Crouch
Yeah, I was going to say, are you sure that we want to disclose the iterations upon iterations, y'all?
I'm still in the document pretty much daily, and have been for a while. Sometimes I see comments in some of the versions from me from May of 2025, and I'm like, oh yeah, I've been thinking about that one for a while. But our evolution has been, and for those of you that have been working with us for that period of time as well, I know some of you are here, you perhaps have heard us say skills, mindsets, and knowledge, and you'll notice that we changed it. Amanda and I are even getting used to saying knowledge, mindsets, and practices. And what we've distilled further, as I was just saying earlier, is how this is simplicity.
12:48
Corey Layne Crouch
On the other side of complexity is taking those mindsets and boiling them down to these five things. One: we talk so much about being intentional, being an active, consistently thoughtful user of this technology. That intentionality goes all the way from, am I making the decision to engage with a GenAI tool for this task, or for this moment or thing that I'm grappling with, either relationally or in academics or work. But there's also intentionality in the tools that we're selecting, and in the way that we prompt and work back and forth. So it's intentionality versus being a little passive in the way that we interact with it: using the same tool all the time, taking the first response, and assuming that it is correct.
13:54
Corey Layne Crouch
So the first one is be intentional, and connected to that is stay critical. We have to be critical consumers of this technology, all the way from looking at an output and evaluating and refining it, to thinking critically about who is building these tools, what decisions they are making, who it is impacting, and who is being left out of those really consequential decisions, given how much the technology is integrated into our day-to-day. And then this is one that we've always had: the importance of transparency. We want to continue to open up this conversation and learn together. And like Amanda was saying, when there's a banning of the tools, there's a stigma attached, and people want to hide their use.
14:50
Corey Layne Crouch
But even beyond that, there's still some hesitation around how much should I share that I'm using Claude or ChatGPT for this thing in my work, or in my learning, or even in my day-to-day personal life. And we really encourage that transparency, because that's how we learn. It's how we learn together. It's how we keep ourselves and one another honest, if you will, and it really allows us to be intentional and stay critical in a way where shame and stigma aren't attached to using the tools. Then there's act responsibly. This one actually evolved from thinking about the potential for using this technology for harm.
15:45
Corey Layne Crouch
We of course want our young people, and ourselves, to act responsibly by not using the technology to manipulate, to create deepfakes, or to create any kind of outputs intended to harm others. But we also see act responsibly as an understanding of, again, what is the impact of this technology, and how am I making sure that I'm using it in a way that is aligned to my values, and ultimately in a way that maximizes the benefit and minimizes any of the risks or harms.
16:30
Corey Layne Crouch
And then finally, of course, we all know that we're learning a ton, even in this moment. It requires a growth mindset and a commitment to agility and curiosity: to keep using the tools, learning, evolving, and using all of these mindsets in concert, because we want to stay critical and intentional and keep learning. I can share for myself, and some of my team members know this, that I was even workshopping some of this deck, because our framework has evolved, and I was like, well, let me just see what Claude Cowork can do. So this is me learning yesterday afternoon. And I don't even think Amanda knows this.
17:22
Corey Layne Crouch
I spent some time, and let me tell y'all, and maybe you've figured this out: Claude Cowork has been really helpful in some things, but it is terrible in Canva. And I loved the idea of it. I'm like, oh, you know, reorganize our SEE Framework slides. Love the idea of it; the actual execution, we're not there yet. But I learned a lot, and I'm learning about the can-do and can't-do with Claude Cowork, which is relatively new. A new skill set for me.
18:00
Amanda Bickerstaff
Absolutely. And I think what's really nice about these mindsets is that they're all about my choice; they all come down to human agency. I was at this dinner last night, and there were a lot of parents and top-tier media, and there was this really big feeling of, you know, it's inevitable, it's going to be here. But what we really want to do with this framework is to take that agency back and say, actually, what is my space and place in using these tools: having not just an output machine, but something I can actually use in ways that are truly meaningful to me.
18:46
Amanda Bickerstaff
And I think that what you'll see is that even though we talk about safety, ethics, and effectiveness, which we'll get to in a moment, these mindsets run across everything. In fact, Josh asked a great question about safety and governance, and a lot of this comes down to this idea of, are you actually building a system that can do these things? I think that's a question we're still grappling with, to see whether schools are going to be able to shift this thinking around what generative AI is, to make it something that really comes down to choice. So if we go to the next slide: the practices.
19:27
Amanda Bickerstaff
We think of this as: me, my impact on the world, and then my role in the use of the bots. And so for us, safety is about me. I think this can be underrepresented in a lot of what we see elsewhere, because what ends up happening under safety is that it almost always becomes just data privacy and security. If you looked at this list, the two things that I think most schools are thinking about the most are protecting data privacy and maintaining academic integrity, and maybe you recognize that in your own school or organization. But those are the two loudest pieces, and they tend to obscure the fact that we actually need to dive in deeper.
20:18
Amanda Bickerstaff
Because data privacy is quite a bit different at this moment in time: these tools have access to the type of data that's never really been possible before. For example, we did a webinar with Common Sense Media about AI toys and companions, and there were three AI toy companies that were accepting and recording and training on young people's voices. We're talking about 5-year-olds and 7-year-olds. And by the time the paper was released, two of the three had had major breaches of that content. So we have to think about this differently, because often, the more data we give these tools, the better they work. It's a weird incentive structure.
21:10
Amanda Bickerstaff
So I think that safety is important, but we also have to actually evaluate the risk of these tools. DeepSeek had this very popular moment, becoming the most downloaded chatbot, without anyone really realizing that it was on Chinese servers. Or Claudebot or Multbot, whatever it's called now, had so many critical security issues, but because it was shiny and new, people went right to it, especially those that were excited. Then the last two pieces, I think, are more and more important, and this is where we have to do a lot of work around the research as well, because these are some of the areas we actually call out in the document as not yet having enough research. The first is maintaining human agency and choice: I am using the bot, instead of it driving my belief system and what I do.
22:02
Amanda Bickerstaff
The second is, am I maintaining a healthy balance with these tools, in two ways? One is around my cognitive work: am I bypassing the hard work of learning, or the important work? Maybe Claude should have been telling you, Corey, that you really needed to get in there and figure it out yourself. That's the idea of healthy balance, cognitively. The second is that we are seeing more and more that these tools are driving emotional dependence and attachment. I want to give an example. We work with an organization in Massachusetts that just did a massive survey across 2,000 students, and 10% of them use these tools for companionship in some way, shape, or form. And that's as low as grade five.
22:47
Amanda Bickerstaff
So there's the idea of these tools being designed in that way, and understanding that I need to be sure I am not over-relying on them or giving away my thinking or my emotional attachment. Which brings us to ethics. Of course, we talked about disclosing use and maintaining academic and professional integrity, but also understanding societal impacts. We see more and more people, and I say people, not just students, very concerned about climate impact, copyright, bias, how these tools are trained and deployed, and equity. For example, what happens when the best bot is a paid one? If you cannot pay for the bot, are you getting the same level of reliability or accuracy?
23:38
Amanda Bickerstaff
And the last one is what we've already talked about: avoiding harmful use. That's deepfakes, that's misinformation, and it's incredibly important. We think it's only going to become more important. One of the interesting things about the survey is that 60% of students said they could accurately identify something that was AI-generated, which we know from the research is just absolutely not true. But there's this misconception that "I will be able to tell the difference because I know enough." And then finally, effectiveness, which is why this is specifically a generative AI literacy framework.
24:16
Amanda Bickerstaff
In fact, you could most likely say that safe and ethical apply to AI writ large, and maybe really to digital products and services in general. But effectiveness really makes a huge difference, because so much of how the bot works comes down to how you use it, the knowledge you bring to it, and how you direct it. Research even shows that those who have AI literacy and are using these tools in meaningful ways are getting so much more out of the bots themselves. So: things like deciding when or when not to use AI, meaning even if it can do it, maybe I should do it myself instead, or generative AI can help me in a way that speeds me up without diminishing the value of what I'm doing.
25:00
Amanda Bickerstaff
Then using prompting and context-setting strategies, and evaluating and refining outputs. If you've ever been in one of our trainings, we make everyone pinky swear that you will never just cut and paste the first output of a generative AI bot. And also reflecting on use and avoiding cognitive offloading. So, back to the survey, because I think it's so interesting; there are two things I want to point out. One is that 60% of students say they either never or very rarely actually evaluate and review AI outputs before they use them. And this is a group of students that said they were pretty savvy, but there's this widespread attitude of, it's just an output and I use it.
25:41
Amanda Bickerstaff
The second thing is that across students, teachers, and leaders, across the community, the number one concern was a loss of critical thinking. Well, for students (sorry, students) number one was actually job security and replacement, and critical thinking was number two; but for parents and teachers it was number one. One of the things that we see so much in this is that through these practices, what we should be doing, and what we are trying to do, is create spaces in which we maintain and foster the thinking, the human agency, and the voice of those using these tools, all things the tools can easily obscure. So I'll hand it over to Corey for the next slide.
26:29
Corey Layne Crouch
Okay. Part of what we want to focus on here is establishing an AI literacy plan, or getting a plan in place. You all are responding to this understanding of the knowledge and the mindsets and the practices, but what we know we need within our organizations is to really think about our stakeholders and get a plan in place that feels doable within everything else that we're already doing. As a former school principal and teacher myself, I always say we have to remember that "I have space on my plate" or "I have a lot of extra capacity" is something that no teacher ever said. So a plan that is intentional and strategic is key to making sure that this is integrated organization-wide.
27:35
Corey Layne Crouch
So the first thing that we always encourage is to gather evidence in order to understand your community's needs. You want to understand the range of needs that teachers and students have, as well as their current dispositions and feelings about the technology: what do they understand, and what are they using on a regular basis? The data that Amanda was just sharing is one of our partners doing exactly that. There's always some data that comes out from these surveys or focus groups where partners say, okay, this resonates, this makes sense; and then there's always something where they say, oh, I didn't realize it was that many, or, I thought it was more of our students doing X, Y, and Z.
28:35
Corey Layne Crouch
Sometimes what's surprising is when people see that there's a set of students who are intentionally not engaging. We've often had the experience of people being surprised when young people share that they have an AI boyfriend or an AI girlfriend. A variety of things come out when you're gathering evidence, and this allows you to accomplish two things. One, of course, it allows you to customize your plan so that you're meeting your audience where they are, so that it is well received and they are able to really digest what you're sharing. But it also brings community voice into the plans that you're making. We always encourage including student voice and parent and caregiver voice, different roles as well as different perspectives.
29:34
Corey Layne Crouch
You want both the AI skeptic and the AI optimist in the room, and their perspectives taken into account, so that you can move your organization to a stronger place that's aligned with your goals and your vision together. So we have a present for you here.
29:57
Amanda Bickerstaff
We do. But before we go on, I just want to say, I know I'm the research person on the team, but I literally just put in our workshop planning channel that we should try to get every single partner doing this basic research, because I think you will absolutely be more thoughtful and intentional about what this means. And I promise you will be surprised. I promise there will be something in this that you haven't thought about, because the people that you hear from, to Corey's point, are the people knocking on your door saying, Corey, this is the worst thing that ever happened, or, this is the best thing that ever happened. And that's what we tend to think is the majority of people.
30:41
Amanda Bickerstaff
But there is such radical fragmentation and inconsistency of experience, both in terms of what people are using and how they're thinking about it. I know sometimes it's hard ("another survey, Amanda"), but it is something that I promise you will find value in if you make the time for it. And now to our gift.
31:02
Corey Layne Crouch
Yes, well, we have multiple gifts, but Wendy is putting it in the chat for you. We do have the five-question survey that you can copy, edit, and use for your community, as well as a guide on running focus groups if that is also of interest to you. I get to run a lot of focus groups with our partners, and I love doing it, especially with young people, high school students. I was a high school teacher and principal, and so I just miss them. And once they realize that, they can be pretty unfiltered with me. At first they're like, should we tell this lady the truth or not? You know, and then I nudge them. They always start with, well, my friend does this with ChatGPT.
31:52
Corey Layne Crouch
So I do kind of love getting them to warm up to me and sharing what's really going on, if you will. So we encourage you to check out those resources. And what we also have for you here is a sample plan. Something you will notice is that this is designed for one year, and it is a 6th through 12th grade school. This is all somewhat fictional, but here's some more AI for Education lore: this is exactly the shape of my school when I was a leader. I had about a thousand students, 100 staff members, etc. And yes, I was very lucky to have a very strong instructional leadership team.
32:51
Corey Layne Crouch
And the purpose of this plan is not for you to feel like you have to have every detail figured out like this. The purpose is to give you a sense of what it might look like, maybe over the course of a year, but more likely over the course of two or three years, depending upon what else is on your plate and the capacity of your team, to say, okay, we want 100% of our students, and we're moving the confidence of our teachers and our operational staff, etc., to this baseline of AI literacy that's aligned to our goals. So it has overall goals and then milestones. And what a lot of our partners find helpful, yes, is the goals and the milestones and the phases on timelines that make sense for them. But you could even start just here.
33:51
Corey Layne Crouch
Who are your audiences and stakeholders, and what training formats are you thinking about, or what are some of those first early experiences you can provide to them? You can see this is an example for 6th grade through 12th grade. So we have high school students and middle school students, who are in different places in their journey, so they are going to have slightly different plans. And then we have our non-instructional staff, so operations and front office and all of those components, of course our teachers and instructional staff, as well as parents and families. And then we have our components, and you can poke around the rest of the plan and feel free to take this, delete out the hypothetical-but-real school, and make it your own. What is the activity?
34:55
Corey Layne Crouch
What are you going to do before a new school year, before you come back from a break? And then what makes sense in each trimester or each semester? You'll notice this one is color-coded by audience, so by stakeholder type, so that we could track whether we've accounted for everybody, and then who is leading what and how we're building capacity across the team. So this is to help get you started. If it feels like a lot, my recommendation is to just start with your audiences and the format for what you could possibly do first. And of course we have some resources to help get you started there. So Amanda, do you want to say anything about this?
35:45
Amanda Bickerstaff
First of all, Wendy, do you mind resharing the document? Someone asked, and I want to make sure you all have this. Okay. So this is featured in our AI Adoption and Policy micro-credential. Corey had this great idea of building this out, and we've started to use it more and more. If we had our magic wand, and I know it's National AI Literacy Day, right, it would be that by the end of the 2026-27 school year, everyone within a school community will have done basic foundational AI literacy. And I will say we're starting to have partners that are going all in with us.
36:25
Amanda Bickerstaff
Because what we would say is we don't want to just do half of our teachers one year and half of our teachers the next year. We want this to be everyone. So if you want some support on this, we'll actually go to the next page. The next slide has the two ways that we're trying to help you do this; let's come back to that one. One is we have our two courses, and I want to update you that the short course will be updated for the summer, so look out for that. One of the things that we're trying to do is hypercharge this, where you can have your students taking the course along with your teachers. The student course is designed for grades 9 to 12, and you could probably do 8 to 12 as well.
37:08
Amanda Bickerstaff
But what we want is the fastest way possible to have the foundational AI literacy done and then go into what it really starts to mean within the school environment. Because we don't want AI literacy to be the ending place. We want AI fluency, right? We want people to be able to not just have safe, ethical, and effective practices, but to be able to consistently apply them over and over again. And one of the last things I'll say about this is that this two-hour course for students is already showing that two-thirds of students, in two hours, are saying they're going to be more ethical in their use. And you know what?
37:47
Amanda Bickerstaff
It is one course, and it is hyper-focused, in the sense that it's not the most comprehensive thing, but to have that type of response for something so small and positive? Why not give kids and teachers these baseline places to start if it's actually genuinely going to change their behavior in meaningful ways? And I think that's where we get so excited and want to see more and more, because sometimes there's this fear that if we give people more access to AI literacy, then it will almost be too hard to corral going forward, or kids will cheat more with these tools. And actually we found the opposite to be the case. It just creates a sense of shared understanding and common language.
38:38
Amanda Bickerstaff
And it creates a space of people talking to each other. Kids not hiding their work or hiding their use, but saying, hey Corey, you know, I was thinking about using it this way, or, I used GenAI and actually I thought it wasn't that helpful; is it because of what I was using it for? Can you show me how to do it? Or a high school teacher saying, hey, I just built this cool interactive element with Canva AI. Let's look at it together. How could I improve it? Is it actually something better? And modeling those things I think is going to be really important. And this is just a way to start. If we want to go back to the slide that I skipped, these contextual moments really matter.
39:20
Amanda Bickerstaff
And I'll hand it over to Corey in a moment, but I would say 80% of what we do is the same across any audience. Would you agree with that, Corey?
39:28
Corey Layne Crouch
Yes. Yes. Because foundational generative AI literacy is the same. Yeah.
39:36
Amanda Bickerstaff
I mean, I have nothing else to say.
39:37
Corey Layne Crouch
No, I'm kidding. I do have other things to say about that. But yeah, the knowledge, the mindsets, and the practices: we build those very similarly with school board members, superintendents, and assistant superintendents, in the same way that we build them with 8th and 9th graders when they're initially engaging, before a promptathon or just with one of our sessions.
40:03
Amanda Bickerstaff
Absolutely. And I think that you don't have to pull punches either. I will say, if it's a school leader, I will be a little bit more pushy: this is important, it needs to happen now. Because, everybody, if your school leader does not believe in it, what we see is that things don't happen. So whenever Corey and I are in those spaces, we really push that. But I think that realistically, students care just as much about bias. They care about accuracy. I will say that with kids, if you can get them in one moment, the richness of the discussion and the ways in which these become true anchor experiences are just really beautiful when they happen. I know it's happened with Corey.
40:47
Amanda Bickerstaff
Tell the story about Miles, the chatbot. You demoed Miles in Carrollton, where the kids, like... I can tell it, but.
40:55
Corey Layne Crouch
I know.
40:58
Amanda Bickerstaff
Again, it was in November, but I remember you coming back and saying that we did this demo with the Miles chatbot and the girls were like, oh my gosh. They wanted to talk to it; they were talking over each other. They were talking about bias, about the voice. They were.
41:12
Corey Layne Crouch
Oh yes.
41:13
Amanda Bickerstaff
But the level of huge thinking from a two-minute demo.
41:18
Corey Layne Crouch
Right.
41:19
Amanda Bickerstaff
Is just something. This is why you've got to have the kids in the room with you, because they're going to get this. This moment for them is going to be like a light coming on, right? You're giving them this knowledge. They're going to understand it so much more.
41:33
Corey Layne Crouch
Yeah, I was actually thinking about the moments when we're with students when I've been a little bit cheeky about it, to be honest with you all. Because again, you can do this too to build AI literacy for any of these audiences. Our go-to, as Amanda was saying, is generating a map of wherever we are, often in the United States. But one time I was doing it in Australia, and I'll be honest with you, I thought, okay, the states and the capitals and the territories in Australia are a little less complex, in the sense that there are just fewer of them than in the United States. And I was like, this is going to be interesting to see how accurate or not accurate the map is.
42:25
Corey Layne Crouch
But I was a little cheeky with the students, and I said, okay, so you all said you have experience, you know about prompting. So we're going to have a competition, and the first person that can get your tool to generate an accurate map of Australia is going to win. Well, of course, these are students that came the week before their school year started, so they're your kiddos that are eager to be high achieving. They somewhat quickly realized that I had given them an impossible task. But I can tell you something right now: they will never forget that AI hallucinates and that they have to evaluate the output. They have that foundational knowledge.
43:15
Amanda Bickerstaff
Absolutely. It's always so fun. Something as simple as that can make such a difference. We would just say, though, plan it for everyone. The most important thing is thinking about the use cases that are going to mean the most to your audience. For a leader, that might be the master schedule; I'll ask Wendy to drop our prompt library if you're not familiar with it, because it has a master schedule prompt. If it's students, maybe it's research, maybe it's where they learn how to use NotebookLM to help them study.
43:46
Amanda Bickerstaff
If it's teachers and staff, we love showing Canva AI vibe coding, where you can create interactive elements that are yours, that you created, that come down to your students' interests and your pedagogical approach. But we've learned that through enormous amounts of trial and error. We watch everybody, right? And we ask, what is the thing that's going to flag for them? I joke about jazz hands, and you know, guys, I love a jazz hand. But these are the moments that feel really powerful, like, I can go and do this right now. That is a way to build buy-in pretty quickly with AI literacy trainings.
44:26
Corey Layne Crouch
Absolutely. Yeah, absolutely. And then the pieces with these different audiences: like Amanda said, 80% of that foundation is really the same. And then you can see here how it plugs into that organization-wide approach. Because having that initial foundation allows each stakeholder to think, okay, I have a better understanding here, so what does this mean for me and my role, both in my day-to-day work and in how I'm supporting my community at large? School leaders thinking about their own use, but also policies and setting the conditions for responsible use, and the same with teachers and staff. And then it really allows us to charge students with being leaders in this and being a voice, using it safely, ethically, and effectively themselves.
45:27
Corey Layne Crouch
And also advocating for the role that they want GenAI to play in a healthy and successful future for themselves.
45:37
Amanda Bickerstaff
Absolutely. So we're coming up on time, so we have a couple other resources. We've got our family guide. It was really interesting: New York just published their family guide, but we have this one, and Corey, Wendy, Maria, and I have done multiple parent events lately. This is a great example of a document that can really help build and bridge not just AI literacy but communication between home and school. We know that kids are using these tools at two o'clock in the morning, on their own devices. But also, and this is my hot take, everybody, my futurist moment: there's a huge moment coming in terms of home, consumer use of AI.
46:25
Amanda Bickerstaff
It will be when Siri, Google Home, and Alexa are AI, generative AI specifically. That is where you're going to start to have enormous coverage: kids can have it on their phone, and they're asking Alexa a question. So this is something to consider: the more we can do now to build those safe, ethical, and effective practices that bridge home life and school, the more it's really going to matter. And then finally, the next slide is that we just really want you to take a step. I don't know, I think I may have said AI literacy more times than I've said my own name in the last three years. Honestly, it might not even be close.
47:14
Amanda Bickerstaff
But the idea is, okay, we just gave you great resources, and you're already doing impactful work. But what is the one thing you can do? Is it bringing a resource back? Is it leading a training? Is it showing a use case? Is it uncovering what you're doing with students? Whatever it is, even if it's only one thing, it will make a huge difference. And for some of you, advocacy will be the key, right? If you saw that first slide and you're still in a place of wait-and-see or self-guided, please, please move forward, go advocate, push.
47:47
Amanda Bickerstaff
Because I'm going to tell you, the more that you do that, the better chance we have of making good choices and decisions that really support our young people, that keep learning centered, that lower the risk of cognitive offloading and over-reliance. But also, it's just such an opportunity to learn together. Corey and I have done so much PD, and we have never walked out of a room and said that didn't work, or that wasn't good, or that people didn't engage. And that's not normal, everybody; that is not normal. It's because people have a genuine interest, even if it's a scared interest. But we just want you to get started. So I want to say thank you, everyone, and also Corey; I could not have a better partner. We have two webinars coming up.
48:33
Amanda Bickerstaff
We have one on the 8th that's going to be focused on the Brookings Institution report and our futurist piece, Beyond the AI Inflection Point, with Sari and Rebecca and me. And then, everyone, the official SEE Framework launch. You have to come back; I promise it will be different. We're going to have scenarios and look-fors and activity banks, and that is going to be on the 10th, that Friday, which is also the third-year anniversary of AI for Education. So please come and see that. But if nothing else, we just appreciate your time and attention, and happy AI Literacy Day.
49:10
Corey Layne Crouch
Yes, happy AI Literacy Day, everyone. I don't know how I did the confetti earlier; if I knew, I would do it again.
49:17
Amanda Bickerstaff
I'll do jazz hands instead. But we appreciate everyone. Thank you guys so much. Have a beautiful rest of your day or evening. Bye everyone.
Want to partner with AI for Education at your school or district? LEARN HOW