The SEE Framework: A Practical Guide to Building Generative AI Literacy

GenAI is already in classrooms, but most schools still lack a shared language, structured guidance, or a clear path forward for building the AI literacy learners need.

Join AI for Education for the launch of the SEE Framework, the first framework purpose-built for generative AI literacy. The framework is grounded in research and field-tested with educators and institutions across the country.

In this session, we walked through the framework in full: what it is, why it's structured the way it is, and how to start using it.

Built around three lenses, it gives educators, school leaders, and learning designers the shared language and practical guidance they need to build Safe, Ethical, and Effective GenAI literacy across every age group and learning context.

In this session, we:

  • Introduced the SEE Framework, a shared language for GenAI literacy built on three lenses: Safe, Ethical, and Effective, and explained what sets it apart from existing AI literacy frameworks

  • Explored the knowledge and mindsets that ground the framework, including what every learner needs to understand about how GenAI works, its risks and limitations, and the mindsets that shape responsible use

  • Walked through the SEE practices for each lens, with reflection questions and real-world scenarios illustrating how the framework applies across K–12 and adult learning contexts

  • Highlighted resources for putting the framework into practice, including free courses for educators and students, workshop options, and the full downloadable framework

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K–12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Corey Layne Crouch

    Corey is the Chief Program Officer and a former high school English teacher, school principal, and edtech executive. She has over 20 years of experience leading classrooms, schools, and district teams to transformative change focused on equity and access for all students. As a founding public charter school leader, she ensured that 100% of seniors were accepted to a four-year college. Her focus now lies in assessing the broader K-16 edtech ecosystem, uniting stakeholders at all levels to build a more equitable and abundant future for all. She holds an MBA from Rice University and a BA from Rowan University.

  • 00:00
    Amanda Bickerstaff
    Welcome to a very exciting day for us here at AI for Education. We are so excited to be able to finally share our SEE Framework, which we have been working on for a really long time. It is the core of everything that we've been doing, but it has taken an enormous amount of time and effort to get to this place. We're just really excited to have you here. I'm Amanda, I'm the CEO and co-founder of AI for Education, and I'm joined by Corey Layne Crouch, our Chief Program Officer. Do you want to introduce yourself, Corey?


    00:31

    Corey Layne Crouch
    Hi everyone, I'm Corey and we're so excited to have you here.


    00:37

    Amanda Bickerstaff
    And as always, whether you've never been to a webinar or you come all the time, it's important for us to continue to have such a strong community. So please feel free to drop into the chat, say hello, tell us where you are. If you've worked with us before, maybe you can say that too. Not only is this the launch of our SEE Framework, but it's also our three-year anniversary. We're actually having our party right after this. So as always, use the chat functions. But if you have a question specifically for Corey and me, you can drop that into the Q&A. And if you have any resources you want to share, I know there are other GenAI literacy frameworks out there, we're happy to have you do that.


    01:15

    Amanda Bickerstaff
    And I can see some friendly faces in the audience already. Thank you, Marwa.


    01:20

    Corey Layne Crouch
    So great to see you all. And as you're saying hello, make sure you change your chat to everyone, if you're toggled to host and panelists, so that everyone can see your wonderful thoughts.


    01:33

    Amanda Bickerstaff
    Absolutely.


    01:33

    Corey Layne Crouch
    So great to see you. There are a lot of really great names here. Hi, Deanne. Sorry, I'm not going to read everybody's names, but we see some.


    01:42

    Amanda Bickerstaff
    Very friendly faces. And if you're new to us too, we're happy to have you. I'm actually going to share my screen for a second, because I think it's important to contextualize what the SEE Framework means to us at AI for Education. If you're unaware of our lore, our story is that the first time I ever used ChatGPT, in April of 2023, it inspired me to build a website. In fact, the Wayback Machine showed me what our website looked like: Midjourney photos that were completely crazy, and a prompt library with 12 prompts, including the first prompt I ever used generative AI for, which was a rubric.


    02:24

    Amanda Bickerstaff
    What was really interesting is that even from the first moment of using generative AI, I don't know if I could help it or not, but I was just constantly thinking about what does this mean for people, educators and students. I couldn't even think of it for myself. In fact, I don't even know if I've ever really dug into my own use as much as I've dug in to try to help other people navigate this. And so it wasn't something that was incredibly formalized when thinking about generative AI literacy.


    02:52

    Amanda Bickerstaff
    In fact, at that stage, it was really just navigating the field, trying to figure things out through trial and error, and we came across this approach that really centered what we're going to talk about today, which is these safe, ethical, and effective practices that we want every generative AI user to feel comfortable doing. Okay, Corey, when was the first time we started writing on the SEE Framework? I think the first time we ever mentioned it was in a keynote in, I think, June of 2024. But when did we start writing the SEE Framework? Oh, you're on mute, my friend.


    03:28

    Corey Layne Crouch
    Yep, of course I'm muted. The memory isn't super crisp, but I do have this memory of us and our colleague and friend Mandy Dupriest having a conversation of, okay, what are the mindsets? What are the things that we do when we're using GenAI and doing our training very early on? And articulating those things in buckets. And then there was a point where I was like, okay, we're really talking about being safe and ethical and then using the tools for value. And that got us to safe, ethical, and effective. And we were doing it in our workshops. Right, Amanda? These were things that you and I were talking about over and over again. And I would say we first started to put pen to paper about what the framework was.


    04:29

    Corey Layne Crouch
    I want to say it was what, like, not last January, but the fall before. So what would that have been? 2024?


    04:37

    Amanda Bickerstaff
    Yeah, we've been working on this for.


    04:39

    Corey Layne Crouch
    A really long, long time. Lots of versions.


    04:42

    Amanda Bickerstaff
    It's been through many versions. But when we think about the need for generative AI literacy specifically, this is something that is pretty new to the field, in the sense that there are other great AI literacy frameworks out there, including TeachAI's, the OECD's, Digital Promise's, and EDUCAUSE's. But one of the things that we saw is that when people were thinking about AI, they were not thinking about AI, machine learning, and this bigger field. They weren't thinking about social media. They were thinking about ChatGPT or Gemini or Perplexity. And so we really have always been very focused on that, and on the urgent need for GenAI literacy. We set this out in the document. But it's just so important to know that young people are using these tools, and we have to be so aware.


    05:28

    Amanda Bickerstaff
    And we are aware, but the students are using these tools without any guidance. In fact, so few public school districts, only about a third, have any policies in place. There's another statistic that says while about two-thirds of all teachers have had some AI literacy training, only 14% of students have. There is this huge gap between usage and support, and even guidelines, around these tools. We also know that there's an equity gap starting to exist: we can see that the response to generative AI, in terms of policies, generative AI literacy training, and access to these tools, is actually starting to fall along the digital divide lines that we've always seen in technology adoption over the last 30 years. And then the final thing is that we just know that this is important.


    06:16

    Amanda Bickerstaff
    If you are a young person or an adult that says no to generative AI, that is totally okay. But you have to be aware that these tools are becoming common, and that's a skill for the future. That isn't 10 years from now; it's when you graduate high school, when you go into the workforce. This is something that's happening right now. We know that LinkedIn's rising skill last year was AI literacy, and this year it was AI strategy. In less than a year, it went from general literacy to strategy. And I think that just really shows us how important this is. And so here's what we did, everybody. It is wild, and look how beautiful it is. And I want to shout out my mom, who made me wear a color today.


    06:55

    Amanda Bickerstaff
    If you know me very well, you know I love black, but I'm matching our cover. This is our SEE Framework. It's a practical guide to building generative AI literacy. We had three really big core commitments when we came to this document, especially since it's been through so many iterations. The first is we really wanted to empower educators, students, leaders, and even education tech developers to have this set of knowledge and tools to lead AI adoption with confidence, but also care. We want this to be something that feels like it's coming from a place of learning, a place not of fear, but of care.


    07:33

    Amanda Bickerstaff
    And I think that's something that we try so hard to do, and we have seen it have such positive impacts in everything that we've done on the ground, or even through our webinars and other types of trainings. The second, as we talked about, is centering equity. This needs to be for all people. The free versions of the tools still have enormous value, but if people are not accessing these tools, or not finding paths, or not understanding how to use these tools in meaningful ways, that equity gap is just going to continue to get exacerbated and larger. And then finally, we designed with purpose: we grounded every resource in pedagogy, the emerging evidence base, developmental science, and also just real-world relevance.


    08:15

    Amanda Bickerstaff
    And so I want to say, if you've looked at the document, you will notice about 15 pages of citations. We have an enormous research base, running all the way up until last week, including some of the really good research on cognitive surrender, cognitive offloading, and sycophancy. We included that right up to publication. And so this is something that we're really excited about, and we're going to go through what we have. I'm going to hand this over to Corey, who's going to take us through the definition, but also through our brand new, beautiful Venn diagram that we hope you'll start to see in a lot of what we do.


    08:51

    Corey Layne Crouch
    Thanks, Amanda. Our definition is simple, and we did that on purpose: this is about the knowledge, mindsets, and practices that enable people to use generative artificial intelligence, not just AI broadly, safely, ethically, and effectively. And as you're digging into the document, if you saw our email this morning, and as Amanda mentioned, we were very intentional about saying that this is about generative artificial intelligence. And what is that? We'll talk more about knowledge in a moment. But what are those fundamental understandings of what the technology is, what it can and cannot do, its limitations and capabilities, and how it's designed? And what are the mindsets that guide our approach to the technology, that then underpin those safe, ethical, and effective practices?


    10:01

    Corey Layne Crouch
    And what you'll see in this, you know, fresh-off-the-press new visualization of the way that we think about it is that all of those components have to be working together in order for us to really use AI with literacy. We can be safe, but perhaps not effective or not ethical; or we could be ethical with our use, but perhaps doing some things that are unsafe for ourselves and for others. And so what you will see in the document as we dive in is that this is really about the application of all three sets of practices, grounded in the mindsets and the knowledge of what the technology is, that then empowers, like we were just saying and what was guiding us, empowers students and educators to really maximize the benefits of this technology while minimizing those risks and limitations.


    11:07

    Amanda Bickerstaff
    Absolutely.


    11:08

    Corey Layne Crouch
    What did I miss? Amanda?


    11:09

    Amanda Bickerstaff
    I would just say the one thing that's so unique to this, though, is that we found that starting with the knowledge of what GenAI is and what GenAI isn't has led to the biggest breakthroughs in generative AI literacy. As soon as you start to understand what these tools are and this idea of the black box model, these tools are not fully explainable, but they are explainable enough for us to start really situating our own use with these tools and understanding that they are not magic, they are not going to understand us. And we'll talk about, we have a beautiful misconceptions table in the document, but these are predictive engines that require user input, evaluation, and expertise.


    11:55

    Amanda Bickerstaff
    And I think that is something that's really interesting, because I was at a recent dinner where someone who is very well known in education told me I was wrong, that these tools could do that work without us. And I think it's still such a misconception; the knowledge base itself can hold us back from really understanding how to use these tools, but also from understanding their limitations, which really can lead to some major dangers for young people and adults alike. And so when we think about diving into this more, this is what we think. And I know it's not for the faint of heart. I will say we did cut it up. So we'll show you in the document: there's a first kind of deep dive into these eight key knowledge areas.


    12:37

    Amanda Bickerstaff
    And then if you're really nerdy, like Tanner on our team and I and others, you'll find at the very end we have an appendix that goes into this with really robust evidence bases. It's actually something that we're really proud of. And so what we have here is the idea of these foundational knowledges: most people conflate AI with generative AI or ChatGPT, and do not recognize that the first chatbot was built around 60 years ago, and that there are other types of AI that have been part of our lives with technology for 20 to 30 years. Understanding that context really brings us out of this moment, so it doesn't feel so new.


    13:14

    Amanda Bickerstaff
    But there's also an understanding that generative AI itself, as you look down this list, is something that's hyper-specific in terms of how it works, but also is this transformational technology that really has had this moment where we're all talking together. We then go into how GenAI works, what it can do, and what it can't do. I will say that the fourth and fifth ones are going to be versioned as we go, because we know that capabilities are changing. But you can understand really where we are today based on this foundational knowledge.


    13:46

    Amanda Bickerstaff
    And we also talk about how GenAI learns, especially around training data sets and the ways in which models are trained that actually prioritize sycophancy, this agreeableness that can lead to problems down the road when young people and adults are using these tools; the risks and limitations of the tools themselves, like bias in outputs and hallucinations or inaccuracies; and then finally the environmental and economic impacts. We are not skirting away from that. These tools are quite complex and have climate impact, especially around new model training, and they have economic impacts as well, in terms of access. We want to make sure that we're not keeping these knowledges in just this technical space; they also need to be applied to the world around us, because we want this generative AI literacy framework to be as human as possible.


    14:37

    Amanda Bickerstaff
    I hope that you see throughout the document that it's not the technology that is the center; it's the humans that are using it and getting value out of these tools. And so I'm going to hand this over to Corey, because this is probably the part of the work that we are the most excited for, because it is the newest. It also is something where we were able to finally get to a place where we could actually state these in memorable ways. I'll hand it over to Corey to talk about, once you have those knowledges, what are those mindsets that we've seen to be so effective.


    15:09

    Corey Layne Crouch
    Absolutely. And for those of you that have worked with us before, which I know many of you here have, you might notice that this articulation of the mindsets is an evolution, where we're pulling out the beliefs that were underpinning those safe, ethical, and effective practices that we've been talking about for years, and really saying what applies across all of them. So the first one that we talked about was intentionality. Being intentional, being an active, decisive user of the technology rather than a passive user, if you will.


    15:51

    Corey Layne Crouch
    And that means really being intentional about when you're deciding to use the technology, and when it's clear to you that you shouldn't be using it, as well as recognizing that when you do decide to use it, you have to be active in evaluating the outputs and all of those components, and knowing that your directing of the tool is what keeps you in a place where it is valuable and effective and safe and ethical. And then connected to that, and you'll notice that these overlap and are connected, is staying critical.


    16:36

    Corey Layne Crouch
    I would say the first one that we had here was this idea of engaging with healthy skepticism: consistently knowing that you have to evaluate, that you shouldn't trust any of the technology or the outputs at face value, and that you have to be actively critical in evaluating both the outputs as well as how that technology is using your data and your prompts, and what you're connecting it to and giving it access to, especially as these tools continue to have more and more features that leverage our files and our calendars and our email, etc. And then this line has been tried and true from the get-go with us. I would say, Amanda, right, perhaps in the very first deck of our flagship we had "be transparent," and that continues to hold: it is critical to always share your use.


    17:46

    Corey Layne Crouch
    And that aligns both with academic integrity and with professional integrity. But we also mean being transparent here about the way that you're thinking about the technology, and the assumptions or the beliefs that you have about where it should be used and where it shouldn't be used, especially in education and professional settings, so that it is more of a conversation that is expected, and there's an acknowledgement amongst a learning community or a professional community that we're all figuring this out together and we're all committed to doing this in a way that is ethical and valuable for us. But we have to talk about, and again be open about, when we are using the technology and what we're using it for. And then that also overlaps with, and brings us to, acting responsibly. And so for this one, again.


    18:49

    Corey Layne Crouch
    We actually workshopped a few ideas here, because acting responsibly is really about taking accountability for what you do with generative AI and honoring both your own dignity and integrity and that of others. One part is not engaging in things that could be harmful for others, like creating deepfakes or intentionally spreading misinformation. But this is also about believing that if you're going to use anybody's intellectual property or their likeness, you should be obtaining consent for that, and that you take responsibility for any outputs that you put into the world that may potentially be harmful or, you know, wrong or inaccurate. And then finally, we know this one to be true for all of us. We have to keep learning. We have to lean in to a growth mindset, which is not new in education, right?


    19:55

    Corey Layne Crouch
    Like, we talk about growth mindset, we talk about this belief that we can keep getting better and keep learning. And we want that to be true, because sometimes we come into rooms where, you know, we're training a room full of educators and I'll lean over to help somebody and they'll be like, I'm not good at technology. And we don't want people to have that mindset. Everybody can be effective in using generative AI. We want you to lean in to that growth mindset and to keep learning. And this needs to be true for our students too, because we know that this technology is going to continue to evolve.


    20:36

    Corey Layne Crouch
    And so we're going to have to keep learning, keep exploring, and be resilient in figuring out not just how does this work and how do I use these new tools, but then also really think about what does this mean more broadly for my work, for my community, and for our world at large. And so those mindsets you'll see underpin all of the practices that we outlined as well.


    21:06

    Amanda Bickerstaff
    And there is a great question by Hernan about what to do when these mindsets can be quite difficult for people to really latch onto. I think there are a couple of things that we've seen to be really effective. The first is to always voice over: if you're doing a training with generative AI, you are always voicing over these mindsets. We do not ever do any kind of work where we're using generative AI and not modeling these mindsets. I think it's incredibly important. The second thing is if you have education tech companies or tech companies coming to you and these mindsets are not supported by their tools, meaning that there are no opportunities to be anything but a passive recipient, or that transparency is not encouraged, whatever that may be.


    21:54

    Amanda Bickerstaff
    Push back on those companies too, because what we see is that the tools themselves can actually really negatively impact these mindsets, because of what a tool looks like versus what it is. And I think that's a really strong piece as well. The third thing about these mindsets is that some of it is just cultural, even if there's not a sense of urgency yet. If you start to be transparent in meaningful ways, we can start to have deeper conversations that are not fraught with fear and uncertainty, avoidance, or hidden use. And I think that is something that we've also seen shift the practice. I will say we have a couple of different types of ways we work with districts and schools, and some of them are ins and outs.


    22:36

    Amanda Bickerstaff
    And we do find that, like, the three hours gets people moving and thinking differently, but it does require embedded and consistent application of these mindsets, and spaces in which people can practice them in ways that feel safe and where they can make mistakes. I mean, all of us have made a mistake with generative AI and learned something from it. And hopefully the next time, we don't avoid usage in those ways, unless it actually is going to be a negative use case, but instead do it better. And I think even those learning moments can really lead to a lot of that mindset embedding, which we want to see.


    23:12

    Corey Layne Crouch
    Yeah, yeah, that modeling, and praising it when you see others starting to apply it and operate with that mindset as well: I didn't give up, you know, on a tool right away, or I really leaned into understanding what a new report on a study is saying about, you know, teen use, or whatever it may be.


    23:38

    Amanda Bickerstaff
    I do not know what I just said.


    23:39

    Corey Layne Crouch
    I do not know why they're.


    23:43

    Amanda Bickerstaff
    Apparently every webinar now is going to have some kind of weird effect, so apologies for the bubbles that just happened. I was trying to answer Cody in the chat, but yeah, today, not confetti, but bubbles. Cody asked about teachers not feeling comfortable enough with AI literacy. This is why this document exists. We're going to go through all of the key resources, but the entire goal of this is that a practitioner could read it, start to build their own knowledge, and then feel more confident talking to young people and learning alongside them. In fact, there's a whole section about how teachers have a unique opportunity to not just build GenAI literacy for themselves, but build it alongside their students.


    24:31

    Amanda Bickerstaff
    Because we're all learning together. But we're going to move into the practices. And this is the thing that's most familiar out of everything that we've done. You probably have seen this before, but these safe, ethical, and effective practices are really designed to feel like: okay, these are the lenses. My goal is safe, ethical, and effective use, and I'm going to try to do that every time. And you know what? If I do it every time, not only am I GenAI literate, but maybe I start to build fluency too. And what we see is, I think of these as three lenses. Safety is me: how am I keeping safe? This is all internal. Am I protecting my data privacy? Am I evaluating the risks of different tools? But also, most important:


    25:14

    Amanda Bickerstaff
    Am I prioritizing and also safeguarding my thinking, my cognitive work and effort, my voice, but also my healthy balance with the people around me, where I'm not over-relying on these tools for social-emotional purposes? We know that young people and adults are turning to these tools in ways in which they are actually bypassing the hard work of learning, or the judgment that is required to be a good teacher or a good professional, and/or really starting to isolate themselves based on their use of generative AI, and can be getting some pretty terrible feedback loops of negative and very sycophantic feedback that actually can cause psychosis, which is really something that we want to avoid. For ethics, that's the external.


    26:09

    Amanda Bickerstaff
    So if safe is me, ethics is external. This is where we're thinking about academic and professional integrity. You'll see all of these mindsets, right? About transparency. It's where we have "act responsibly," awareness of AI's other ethical issues, but also that do-no-harm mindset. It may seem so fun to build a deepfake. Like, I love our team, but I have to get permission to be able to create funny images and make sure that they're never externally shared, that they only use what's already publicly available. I'm using that kind of constant thinking about what I'm doing: is it going to cause harm?


    26:44

    Amanda Bickerstaff
    And then finally, the reason this has to sit in a generative AI literacy framework, versus any AI literacy framework that is larger in scope, is that this is the only technology in the world right now that requires so much intentionality, expertise, evaluation, and understanding to use effectively. This is what Corey said: if you can ask a good question,


    27:12

    Corey Layne Crouch
    If you give good feedback, if you.


    27:13

    Amanda Bickerstaff
    Can keep the mental stamina to keep reading and understanding the outputs and giving feedback and refining, you can get the most out of these tools. It is remarkable what you can create already if you take those mindsets and those practices and put them together. And this is everything from centering your own human originality, to the prompting and context-setting strategies, but also the critical evaluation. That "stay critical" mindset is probably the easiest place to get lulled into a false sense of security. Our brains say, oh my God, this beautiful lesson plan. And then we think the lesson plan is good because our brain sees a lesson plan.


    27:55

    Amanda Bickerstaff
    But to actually stay critical, and keep always in the back of your mind that these tools make mistakes and can be biased, incomplete, et cetera, has got to be the core of what we do. There is not a GenAI tool on the market that does not make mistakes, and I think that is something that you always have to keep centered. And so we really love that these lenses are ones that you can bring back into your classrooms and your schools. But we want to make it as easy as possible, so we're actually going to do a walkthrough of the document. I will say, everybody, last night at like 9 o'clock at night, we found out that the downloading was messed up because of an ombre.


    28:35

    Amanda Bickerstaff
    So it has been a wild journey to get this ready for you all. But one of the things that we wanted to do was to give you as many practical applications as possible. So we have the comprehensive research base; we have these framework-in-action scenarios, which we absolutely love; guidance on age-appropriate AI literacy; some activity banks for everything from early childhood to adolescents to adults; and then these generative AI look-fors about what it could look like at different developmental stages. So I'm actually going to stop sharing and hand it over to Corey, if you're comfortable, to start walking us through the document. And this is the first version; there will be more versions.


    29:17

    Amanda Bickerstaff
    So if you are inspired by this and you're like, AI for Education, we'd love to see X, you should tell us, because we would love to have that be part of this opportunity. It is something that is going to be a living document, and we really, absolutely want that to be the case. So Corey, you want to take it away?


    29:34

    Corey Layne Crouch
    Yes. And I'm chuckling because I have to make sure I pull up the one that doesn't have the crazy...


    29:41

    Amanda Bickerstaff
    Oh my gosh, the ombré. Guys, we're going to... Yes.


    29:48

    Corey Layne Crouch
    And my colleagues in the chat, just in case there are folks that have joined us since we first dropped it in, will drop in the link where folks can download it for themselves. So let's do a quick walkthrough of what we have. And you'll notice that this table of contents is exactly what Amanda was just talking about, and some of the data that we shared with you at the start of this as far as the urgent call for AI literacy. And I don't want to give anybody, you know, whiplash scrolling through too quickly, but you'll see how we've synthesized it here. And what I do want to point out is we were very intentional about citing where these components are coming from.


    30:43

    Corey Layne Crouch
    And when you go down to the appendices, we have all of that information there for you so that you can also reference that research and use it in your conversations with your colleagues and your community as well. So those components are here. And of course, we have to have the clear definition called out about how we're defining generative AI literacy. I do want to take a beat on this guidance on the age-appropriate use of tools. And you'll see this throughout the document, as well as hear us say this, that we spend really every day, right, Amanda, reading articles, reading the studies that come out, paying attention as the research base evolves. Because this technology in and of itself is so new, so nascent, and then our understanding of its impact on young people's development, both cognitively and social-emotionally, et cetera,


    31:52

    Corey Layne Crouch
    is evolving as well. And it's going to continue to evolve. And we know that there is a need for some clarity in that guidance. So based on our on-the-ground work and the evolving research base, here's what we recommend as far as those age ranges: in early elementary, we do not recommend independent use of tools. And we also know that young learners are being exposed. And so just because they're not on tools independently, what does it look like to still have that exposure and conversation that helps shape that knowledge and the mindsets for them even before they're using the tools?


    32:38

    Corey Layne Crouch
    And then depending upon where your organization is insofar as what tools you are adopting, whether it's in upper elementary, middle school, or high school, where might it make sense, with the understanding of safe and ethical, to start to have students engage more directly with tools? One thing that I note often, and you will see it noted here also in a callout box, is that we talk a lot about that age of 13. And you'll even see here in our document where we're assuming that it's above the age of 13 that learners are starting to use the technology independently. We recognize what's actually happening on the ground. And when young people have phones in their hands, there is a lot of use under the age of 13.


    33:40

    Corey Layne Crouch
    But we also want to name that the 13-year-old threshold is really based on data privacy compliance for the Children's Online Privacy Protection Act (COPPA), and it's not based in robust research about our understanding of the impact that this has on child development. And so if you're a decision maker in your school or in your district thinking about what tools we potentially want to purchase or use for academic reasons with students, we encourage you to really think, of course, about the data privacy and security components within that safety bucket, but also understand that there's still so much for us to learn about, you know, what is a positive impact and what mitigates any kind of... cognitive, what was the word, the phrase? Cognitive stunting.


    34:46

    Amanda Bickerstaff
    Yeah, there's cognitive surrender. Stunting. Stunting, yes. I will say that while we try to make this as accessible as possible, there are some shots fired in this. Like, I think that's one of them. We have always come out stronger against young children using these tools, or young people using these tools without the skill base to be able to evaluate the outputs and understand how these tools work. And we are doubling down. We're not stopping that. That is something that we still believe is incredibly important. But if we want to keep scrolling, you'll see this is a shortened version, but this is probably... this is, well, this is Amanda.


    35:29

    Corey Layne Crouch
    This is Amanda's favorite section, everyone. I'm calling it now.


    35:32

    Amanda Bickerstaff
    It's one of my favorites. And I think that it's important. It's actually quite funny, because I was like, no, we have extra time, we're going to get this done. These are the common misconceptions that we see that hold people back from building real generative AI literacy. Whether it's that AI was not here before ChatGPT, that it's like a search engine, that it learns from my chats, that I can teach ChatGPT not to use an em dash. You cannot. If you want to keep scrolling, Corey, we've got: GenAI outputs are limited to its training data. We know that now, with more and more research capabilities and us being able to add files, that is no longer true. That it's capable of human-level cognition and judgment, which these tools absolutely cannot do.


    36:17

    Amanda Bickerstaff
    That they don't make mistakes. That they understand me, like the people that say, oh, but you know, ChatGPT learns about me and understands me. Absolutely not the case. That it's unbiased. That it'll challenge you if you're wrong; in fact, it's actually the opposite in most cases. That your chatbot conversations are private. And finally, that it's just a tool and it doesn't actually reflect the different values of those that create them. What we hope is that we're already using this. So our new teacher courses that we're launching over the next couple of months actually have these misconceptions laid out. But what we also just love is that we try to make everything in here as usable as possible. So you could take this, put it on a chart, you can take it into the space.


    37:02

    Amanda Bickerstaff
    And one of the things that's so great, and I'll let Corey dive into these, is that when we did the safe, ethical, and effective practices, it's not just the description but the questions you can ask yourself to build those practices. So Corey, you want to talk a little bit about how we came up with these?


    37:17

    Corey Layne Crouch
    Absolutely. We talked about the mindsets a little bit ago, and some of these practices. But we really have the conversation of, how do we make sure this isn't, you know, concepts and ideas on a document, but it becomes an active part of the work and using the technology. And we've been using this language of a thinking routine. Like, what is it that we want going through educators' minds and students' minds that actually leads them to the practices? And that is what led us to including these reflection questions. So for the reflection questions, we have the description, for example, of evaluating GenAI risks. But think of these as: what should be going through my head as I'm doing that? Like, what are the stakes of this task? Am I reviewing this output as carefully as I should?


    38:20

    Corey Layne Crouch
    And then for maintaining human agency: Am I making my own judgment about this output or this topic, or am I allowing myself to be swayed by the, you know, assertive nature of generative AI? So of course, these are all questions, as you can see, that we're not necessarily going to have memorized, while some of you might, I don't know, let us know. But what we encourage you to do is start with yourself and then think about how you can use this with your colleagues and your students. And you might tweak some of the questions for students, but we encourage you to have this as a resource to reference when you're using a new tool, or you're using the tool for a new application or a new use case that you haven't tried before.


    39:17

    Corey Layne Crouch
    Just take a beat and read through the reflection questions to start to train your mind around the mindsets and what this actually looks like in practice. So I will say that, I mean, I love the misconceptions table as well. The reflection questions, as a new addition, are also one of my favorites.


    39:40

    Amanda Bickerstaff
    These are... we gave ourselves like an extra 10 days, and that's where the misconception table and the reflection questions got in. But I will say, okay, so we're talking about our favorites, not the whole document.


    39:51

    Corey Layne Crouch
    Yeah, it all fits together.


    39:53

    Amanda Bickerstaff
    It all fits together. But when we get down to the look-fors... so this is where... and I will say this is a space in which, caveating, for young learners we have the least information about what that looks like, both in practice but also in the research. And so I would take this with a grain of salt, just a little bit, knowing that we've actually identified very clearly that this is the area where the most research is necessary. And you can see how we actually have this disclaimer.


    40:26

    Amanda Bickerstaff
    But what we have here, and this is so great, is that you've got what it looks like for pre-K through early elementary: that they would be able to understand the difference between an AI and a person, that they would know that technology can create these things and to be aware of them. They've got the mindsets of curiosity about how they work. They're not just going to take everything as real. They're going to say, that might not be real. And then finally, can they actually start to apply this in practice? They understand: I'm not going to share all my private information. I'm also going to know to go to an adult if there are questions, and be able to evaluate.


    41:06

    Amanda Bickerstaff
    But also, you know what, even though this tool is available in, like, Snapchat or YouTube, I shouldn't be using it at my age because it could actually be dangerous. And so having those types of conversations. And if we scroll down, what we have is this for every age band. We have it for late childhood. And this is an area that was really worked on a lot by Corey, but also Mandy DePriest, who was with us for the first couple of years of the work. And so it really is something that has been built over time.


    41:36

    Corey Layne Crouch
    Yeah, yeah.


    41:38

    Amanda Bickerstaff
    And.


    41:38

    Corey Layne Crouch
    And we rock. I also want to acknowledge, we recognize there is a bit of a jump here. So we're... we are excited about continuing to build upon this understanding with you all, our community, because there's just so much to learn. Right? So there's the really early childhood, and then late childhood as they're approaching adolescence and approaching, you know, the cognitive development that allows them to be more discerning. What do we want them to know as they're then moving into... and you'll see after that framework-in-action that we're going to want to take a beat here... moving into some more independent, but scaffolded by educators and adults they trust, use of the technology. Okay, we can have lots of favorites. This section is also a favorite.


    42:39

    Amanda Bickerstaff
    Yeah, let's go a little bold, though. Let's go to the AI detection one. Let's go down. So it's so funny. Like, you know us, we have to have a practical moment. It cannot be just a bunch of these big ideas. And so for us, we love these. And this is going to become part of the way that we do workshops. In fact, we're going to create a version of this where you can fill it out yourself with scenarios that you come up with. But what we have is six framework-in-action scenarios. There could be hundreds. They take real things that we've heard on the ground and allow you to work through the safe, ethical, and effective questions to lead to an AI-literate response.


    43:20

    Amanda Bickerstaff
    So in the case of deciding whether to use an AI detection tool to evaluate student work, the questions are things like: what happens to student work once it's uploaded? Most of them take it as model training data, and so you're giving away student intellectual property. Does my school or district have a policy on using AI detection tools? If not, maybe I should ask. The next part is ethical. Are these detection tools accurate enough to make high-stakes claims about student integrity? We know the research says they are not. Will this unfairly target young people from neurodivergent or multilingual backgrounds? 100%. The research shows that. And finally, is just catching AI use the goal of what we should be doing? Is it helping or hurting?


    44:06

    Amanda Bickerstaff
    And also, should I just be using AI detection, or should I actually start to think through updating my own approach? And so we have those questions as a frame. The safe, ethical, and effective approach is: first of all, I went out and researched and found that these tools are pretty unreliable and can be biased against non-native English speakers. I also understand that the vendor policies are pretty unsafe for my young people, or potentially unethical. And then finally, like, man, you know what? I just don't want to be an AI catcher. I just don't want to be a person where that's their job. And I need to start thinking through the ways in which I think about academic integrity, discussing it with young people, and the assessments themselves.


    44:43

    Amanda Bickerstaff
    And so what you see is that you can actually start to apply this to almost any way in which you use this. Meaning, we have one on a student using a math tutor chatbot, we have one on research, we have one on a teacher grading. These are designed to be as meaningful as possible for this moment in time. And so we're gonna... I love that everyone's starting to get it. Isn't it great? Everyone's getting very practical. We love a practical moment. And so we're definitely going to have resources we want to do, and I hope you guys will help us out with this.


    45:19

    Amanda Bickerstaff
    We absolutely want to do kind of like a prompt library, but instead of the library of prompts, it'll be like these scenarios that you can work through with your teachers or students that you kind of come to us and work through. And so we absolutely love this. And I think that hopefully you guys are starting to think of how this will turnkey immediately into those sessions to build those strong practices.


    45:41

    Corey Layne Crouch
    Yeah, and I'm seeing some in the chat, we're getting there also, which is great. Thank you, Sarah. Because to Amanda's point, and some of you are saying it too, while this has been in the works for quite some time, we also see it as a starting point, a place where it's going to continue to evolve in how we do exactly this: building this generative AI literacy across our learning communities in a way that really empowers movement towards the goals and the vision that we have for our young people, and that keeps pace with the change of the technology. So we have resources here for you, and talking about what comes next. But I'm going to go ahead and scroll down, and here is the, you know, lengthier shout-out.


    46:33

    Amanda Bickerstaff
    Tanner.


    46:34

    Corey Layne Crouch
    Tanner, ultimate nerds. Yes. And really, what you have here as we're moving to articulate this framework: you also have a research base, a research database, to really use and build your own understanding, and that within your community. And then, as some of you are calling out, we were thinking a lot about, and I know Amanda and Tanner heard me say a few times, I want to make sure that this is... it's like, okay, I'm excited, I understand this, it's so much more clear what safe, ethical, and effective looks like. What do I, you know, what do I do next?


    47:15

    Corey Layne Crouch
    So we have next steps for building your own AI literacy, but then also starting to think about, with these activities, what could it look like to be building these concepts and mindsets and understanding of the practices with students, all the way from that, you know, pre-K and early elementary to secondary and adolescent learners. And then also, many of you are mentioning running your own PD or introducing this to your staffs and pre-service teachers. These are the student ones; it is meatier for that secondary audience because there's a lot to build in them as they're using it. But then we also have an activity bank here for building it with adults and peers that is very much based in the work that we're doing every day as well.


    48:10

    Amanda Bickerstaff
    Absolutely. Okay, so everybody, first of all, I know some people have to run because we're at the 45-minute mark. So just know that there are definitely, like, you know, more things coming. But let's talk about it. Number one, the most important thing is to take a step. And I think that we talk about this in two ways in the document. One is that this is a community practice. This needs to be a community goal and a community practice. That means that everyone has a role to play, whether that's school leaders and teachers, students themselves, families, and then the people building around them. Like, you know, we cannot hand all of this onus over to edtech providers or big generative AI companies.


    48:59

    Amanda Bickerstaff
    We have to make sure that they're part of this conversation. But then the second thing is that getting teachers to understand the scope of this moment, the importance of this moment, must happen. But if you cannot get your teachers on board, the reason why our free student course exists is because, you know what, if kids need to know this, there are other ways to start building that AI literacy in ethical, safe, and effective ways. And so we're going to drop that student course. We have over 1,000 students that have completed it since February. We're seeing that kids are more ethical, they feel more confident in the tools, they can make better decisions.


    49:36

    Amanda Bickerstaff
    But you know what, if you're finding that your teachers are just not on track, you can still have kids use these tools in more meaningful ways, because we know they're accessing them at home. We just don't think you can wait. And we want this to be literally as practical as possible. But we know that change is hard, mindsets take a long time to shift, and culture matters a lot. But we're hoping, absolutely hoping, that there are enough points in this that your skeptics, your all-ins that maybe need to be pulled back a little bit, and those that are unsure have this sense of place, of the importance and urgency. And I think that is our ultimate goal.


    50:17

    Amanda Bickerstaff
    But I know that I just want to say thank you, because literally today, after this, we're having our three-year anniversary from the moment in which I built the website, having no idea we'd come this far. But I want to say thank you to everyone that has helped with this. And so we have had so many amazing people that have been first readers, but I want to call out those that actually helped directly with the document. So we have Corey and Tanner, we've got Dan and Alex, who helped design it, we have got Maria and Wendy and Lena and Julie and Hannah. And we had a lunch-and-learn yesterday with Casey and Mike. Everyone on our team has had an enormous hand in this, and it is something that would not have been possible without them.


    51:05

    Amanda Bickerstaff
    We started this document a year and a half ago, but it was always something I was reticent to put out into the world until it felt right. And I think that all the things that we've learned on the ground, everything that we've learned through some of our partners here and those that have accessed our information in multiple different ways, finally got us to a point where we felt confident that another framework needed to exist. Because that was a question: there are a lot out there, and they're quite good, but we needed one that was specific and fit for purpose for generative AI. Number two is that we had something important to say, that we had learned enough and we had listened enough and we had tested enough to be able to do that and understand what people really need.


    51:44

    Amanda Bickerstaff
    And number three is that even though, you know, in some ways this is kind of bold in some of these places, we believe that being bold today, no matter if you're an individual or an organization, is the most important thing you can do. And so I think that, you know, if we think about all of these things together, what's been able to happen is just to see you all here with us, and the response. And we hope to see this really start to become even more available with the resources we'll create.


    52:12

    Amanda Bickerstaff
    We just want to say thank you to you as well, because coming here today and then hopefully turnkeying this in your organizations should absolutely lead to positive change around generative AI literacy that keeps us safe, ethical, and getting the most value from these tools. So that's it for me. I don't know, Corey, if you have a wrap that you'd like to say, but that's where this is all coming from, and it's all really based on the work that we've done with everyone that we've ever worked with.


    52:38

    Corey Layne Crouch
    Yeah, I don't know if I could really follow that, but I'm going to anyway. What I really will say is that, for me, this is always grounded in the students that are in our classrooms every day and the educators that are taking care of them. And there's a lot of people doing a lot of really important work in the space of developing frameworks and doing the research. And at the end of the day, I just again think about the students that are showing up every day wanting to be successful in the future, wanting to know what is in store for them. And as we talk to them, they do have, you know, wonderings and concerns, and want to know how they're going to be able to navigate the future with this technology.


    53:32

    Corey Layne Crouch
    And so I'm always grounded in that and I'm so excited similarly to share it with all of you so that you can go and start to integrate this in the work that you do with our young people. So thank you so much.


    53:46

    Amanda Bickerstaff
    And so look out for more ideas, and please share this widely. We will take some feedback from today. Some downloadable images, so that you can actually take this into your own PDs and workshops, we will be building out. I really think the scenario library is a nice place for us, but we're also really going to keep you all in mind. But we just appreciate you all. Have a beautiful day wherever you are in the world, and we will see you... we have a webinar coming up with CAST on UDL and AI, and we have a new resource for that as well, on the 21st. So that'll be our next webinar. But until then, just appreciate everybody. Thank you all so much.

      Transcribed by https://fireflies.ai/
