The Future of AI in Schools: What's At Stake & the Choices That Matter
Artificial intelligence is no longer a future consideration for schools. It is already shaping how students learn, how teachers design instruction, and how systems define academic success and prepare learners to thrive in the AI age.
The challenge is not whether to respond. It is how.
In the absence of clear models, many systems are defaulting to extremes—restricting AI use entirely or adopting tools without a clear instructional strategy. The real work lies in something harder: making deliberate choices about how AI fits into teaching, learning, and the purpose of schooling itself.
During this free webinar we heard recommendations from the authors of two recently published landmark reports:
Rebecca Winthrop of the Brookings Institution on A new direction for students in an AI world: Prosper, prepare, protect, and
Amanda Bickerstaff of AI for Education, and Sari Factor of Imagine Learning, on Beyond the AI Inflection Point.
They examined the decisions in front of education leaders and school communities right now, aiming to move the field beyond reaction toward intentional design.
Participants gained a clearer understanding of:
How current AI decisions are already shaping classroom practice
How policy, instruction, and technology must align to avoid fragmentation
What it takes to move from short-term response to long-term strategy
-
Tips for parents: Raising resilient learners in an AI world (Brookings)
A new direction for students in an AI world: Prosper, prepare, protect (Brookings)
Webinar with Common Sense Media on AI companions
Beyond the AI Inflection Point report
Cognitive Surrender research
protocol resource
-
Amanda Bickerstaff
Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.
Sari Factor
Sari’s vision and strategic direction guides Imagine Learning’s work to create effective, digitally enabled curriculum solutions. A lifelong teacher advocate, she works to ensure that every student has access to meaningful, educational experiences wherever learning takes place. Before joining Imagine Learning, Sari was a mathematics teacher and held leadership positions at several successful educational publishing and learning technology companies including Kaplan, McGraw-Hill, Houghton Mifflin, and Everyday Learning Corporation.
Rebecca Winthrop
Rebecca Winthrop is a leading global authority on education, the director of the Center for Universal Education at Brookings, co-author of The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, and an adjunct professor at Georgetown University. She is dedicated to ensuring that every child has the opportunity to thrive in life, work, and as an engaged citizen. She leads cutting-edge research and initiatives aimed at transforming education systems around the world to better support children's learning and development.
Rebecca is a trusted advisor to both school communities and national and international organizations. Her expertise is sought by many, including parent networks, schools, district education leaders, governmental agencies, the United Nations, and Fortune 500 companies. She currently leads the Brookings Global Taskforce on AI in Education and her work is centered on developing and advocating for evidence-based strategies that bring people together—families, educators, policymakers, and companies—to help children maximize their potential.
She holds a PhD from Columbia University’s Teachers College, an MA from its School of International and Public Affairs, and a BA from Swarthmore College.
-
00:00
Amanda Bickerstaff
Hi, everyone. We're very excited to have you here today. I'm with two spectacular colleagues, women in AI and education and the world writ large. I cannot tell you how excited we are to bring together the two reports that we've worked on. They were both launched in January, focused on the risks and opportunities of AI adoption in education. That is, I think, the key question we're all grappling with right now, and the research and report from Brookings and our fictional Beyond the AI Inflection Point piece come together in a really nice way to start thinking about next steps. So I'm going to be joined today. I'm Amanda.
00:44
Amanda Bickerstaff
I'm the CEO and co-founder of AI for Education, and I'm going to be joined today by Rebecca Winthrop, who is a senior fellow and director of the Center for Universal Education at the Brookings Institution, and then Sari Factor, my good friend and colleague who is everything. I feel like you have so many great titles: Vice Chair and Chief Strategy Officer of Imagine Learning. We're going to start, though, with a bit of context about the day, like always. This is your opportunity to not only listen and share with us, but also with each other. Please get involved, say hello in the Q and A, and make sure to drop any resources or questions that you may have.
01:18
Amanda Bickerstaff
The only thing we ask is that anything you want to come directly to Sari, Rebecca, or myself, please put in the Q and A so that we can make sure to see it. But also share resources; that's one of the things we love the most about our community in these webinars. To start, we're going to hear from Rebecca, and then from Sari and me, on the two reports. If you haven't had a chance yet to dive into the two pieces of work, we're going to give you that context at the beginning, starting with Rebecca.
01:51
Amanda Bickerstaff
And then we're going to have a deep discussion about what this really means in context, in schools and education systems around the country and also around the world. So I'm going to hand the mic over to Rebecca: if you don't mind introducing yourself, and then you're going to take us through this spectacular report that you all created.
02:10
Rebecca Winthrop
Hi. Thank you, Amanda. Hi, everybody. I'm Rebecca Winthrop. I direct the Center for Universal Education at the Brookings Institution. I am going to share my screen. Let's see. Can you guys see it?
02:30
Amanda Bickerstaff
Yes. You got it.
02:31
Rebecca Winthrop
Okay. Let's see here. I'm going to walk you through this report. It's a large policy report, a la Brookings. Though I really did love your and your colleagues' report, Amanda, because it was so clever, having this fictional school district that you walk through what they might be going through. Ultimately this report was centered on a question that frankly every parent I've ever talked to is worried about: how to protect and prepare our children. At the same time, I led a task force that was made up of global education leaders and had hundreds of people involved.
03:13
Rebecca Winthrop
We looked at hundreds of studies and at over 50 countries, including the US. The reason we did this task force, using a pre-mortem methodology, is ultimately because we've seen this movie before, and we don't want to repeat what happened with social media, where the people who knew about kids and child development really weren't at the table shaping things. What I'm going to do is give you my CliffsNotes version, or SparkNotes version, of what's in the report. High level, there are five key takeaways. The first takeaway is that it's super duper confusing. We looked at the risks, we looked at the benefits, we looked at generative AI and students' learning and development in and out of school, and we just asked: are we headed on the right track? And it's hard to know.
04:09
Rebecca Winthrop
First of all, the lines are utterly blurred. Gone are the days when technology use for education, for entertainment, and for communication were separate; it's all blended together. One high school student we talked to said, oh well, my school banned ChatGPT, but it's all right, we all use social media. We go to our AI friends in Snapchat and Meta AI, take pictures of our homework and ask it to do our homework there, or maybe run it through a humanizer, and we're good. We found that there were lots of benefits and lots of risks. The sense-making is what took us a really long time. You see here, orange is risks, blue is benefits. And we did find, takeaway number two, that there absolutely are benefits of generative AI use for students' learning and development.
05:00
Rebecca Winthrop
If AI is used very narrowly and strategically, vetted content with good teaching and learning practices can bring learning to life. I'm particularly partial to seeing where interactive VR gets us, especially if costs go down; it could be quite incredible. Helping neurodivergent kids learn: huge promise there. One of the most moving examples for me was kids with aphasia making synthetic copies of their voice and all of a sudden being able to communicate in the classroom. Of course educators love GenAI for efficiency purposes, but also for the ability to do novel forms of assessment. Being able to pick up where kids are stuck in the learning journey can be remarkably effective, as can the ability to adapt lessons to where kids are at. One of the most moving examples we found was from this organization, SOLAx, which works with girls in Afghanistan.
05:59
Rebecca Winthrop
As some of you may know, the Taliban regime has banned secondary school girls from going to school. A bunch of diaspora teachers who fled the country when the Taliban came to power are leading, using GenAI to take the Afghan secondary curriculum and make these little bite-size adaptive WhatsApp lessons that girls are accessing from their mobile phones at home, so they're able to keep up. It's incredibly powerful for inclusion. However, used inappropriately, AI can be quite harmful to kids; we saw real harm and the potential for real harm. Ultimately we found that this harm of diminishing students'
06:51
Rebecca Winthrop
learning really is often related to what we would refer to as wide AI use, which is often unmediated, unscaffolded, long-form direct discussion by kids with commercial chatbots or AI companions that are not safe for or designed for kids, and not designed for learning. The thing that makes my head explode is when anyone says (which usually doesn't happen much in the education community, but I give lots of talks to folks outside it), well, GenAI is just like the calculator, you educators are overreacting. Well, calculators automated and offloaded a slice of arithmetic, but they didn't do all of this:
07:40
Rebecca Winthrop
math, English, physics, chemistry, biology, history, social studies, poetry, music, art history, take the SAT, give relationship advice, and act really sad and guilt-trip you when you stop using it, which AI companions and friends do. I've stopped talking about cognitive offloading and started talking about cognitive stunting (there are lots of terms out there), because I'm likening it to when kids get stunted nutritionally and their bodies and brains can't develop. I think it's the same when they don't have the learning experiences they need: they won't be able to develop the critical thinking and creative, collaborative problem solving they need. Narrowing of ideas is another big worry. There's really interesting research out of Georgetown, a longitudinal natural experiment comparing high school students' essays pre- and post-ChatGPT.
08:31
Rebecca Winthrop
And all of a sudden, post-ChatGPT, the essays are all clustered around the same ideas; these were college application personal essays. We're very worried about undermining social development. One in three teens in the US, per Common Sense Media, say that they prefer talking to an AI friend over talking to a human, again because of the sycophantic design. And new research recently out, I believe in Nature, found that even one conversation (I was really surprised by that) actually diminishes people's ability to repair when harm is done interpersonally. I'm incredibly worried about this: how do you learn from your mistakes? How do you take feedback? How do you work collaboratively in groups if you're socialized to always be right? Amplifying bias: companies will say we're working on this, but it still exists, it's still there.
09:30
Rebecca Winthrop
These are studies by my colleague Punya Mishra and his colleagues, where they put the same student essay in and asked for feedback to make it better. The only thing ChatGPT knew about one student was that they liked rap music; the other, that they liked classical music. And of course, what happens? The kid with the classical music gets a full grade level higher critique and feedback, much more sophisticated. Reducing trust: this is something I worry a lot about, which is really damaging and hard to see, and you don't miss trust until it's gone. We really did find, in many ways (not everywhere, not every school, not every community), that how GenAI was rolled out is really hurting the trusting relationships and interactions in the instructional core, between educators, learners, and the content and material.
10:34
Rebecca Winthrop
And we add parents in there too. Of course, that instructional core, that interaction, is what makes learning happen. It's what makes schools improve, and it's very hard to advance if you don't have trusting relationships. We're also quite worried about inequality. I actually think inequality cuts both ways. With GenAI you have this access benefit, for example, the girls in Afghanistan. You also may have a benefit for late-adopter communities, who might leapfrog over the mistakes of the early-adopter communities, some of the risks I'm talking about now. But in the medium term we really do worry about leaving large groups of people out. Language is just one dimension where we risk exacerbating inequality in the medium term. You see the light blue is the percentage of languages spoken.
11:38
Rebecca Winthrop
The dark blue is the percentage of languages on the web. There are many languages spoken by millions of people that bring very little back when they interact with GenAI, just because those languages aren't large enough and aren't digitized very much. Ultimately, the thing I'm most worried about is kids undermining their motivation and engagement. So many students I talk to when I go out to schools, and in our focus groups with students and parents and teachers, said, you know, why am I here? This tech can do everything I'm doing in school. It was quite existential. A few students were using it to explore and going ahead much faster. What I worry about is a lot of kids getting into passenger mode.
12:25
Rebecca Winthrop
These are the four modes of engagement that my co-author Jenny Anderson and I found when we wrote our book, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better (ChatGPT came out halfway through writing it), and we're really worried about a lot of kids getting into passenger mode. So ultimately, what did we conclude when we did our sense-making? We concluded that, the way GenAI is being rolled out now, if we do nothing different, the risks overshadow the benefits, largely because the risks are of a very different nature: they're undermining students' ability to learn independently and to interact and have relationships with other people in a classroom environment, things they need in order to access the very benefits that GenAI brings. So what do we do?
13:15
Rebecca Winthrop
We would certainly say (and this is the entire reason we did a pre-mortem) that it is absolutely not too late to shift course. There is a lot that we all can do; it's an all-hands-on-deck effort to move from AI-diminished learning to AI-enriched learning. There are three main pillars, and each pillar has four big recommendations. I'm not going to go into those now, but we can talk through them in the discussion with Sari and Amanda. The three pillars are prosper, prepare, protect. The first pillar is really around shifting teaching and learning practices to be AI aware: if you have a homework assignment that can be easily hacked, don't give it. Design assignments to be AI assisted, which is using AI very strategically, and to be AI resistant.
14:07
Rebecca Winthrop
We need to make sure there are times in the day when learning is very human, and we need to make sure kids are learning to think before they learn to prompt, as my colleague Andreas Schleicher says. Prepare: we need everybody in the building to be AI literate, and we especially need to include families. I've started saying what we need is for people to learn how the online world works, because different people have different ideas of what AI literacy means. And then, of course, protect: we need to have a lot of safeguards in place. Lots of other countries around the world are beginning to do that, and the US nationally needs to catch up; there's lots of state-level action happening. But those are the three big things, and now you have the CliffsNotes version.
15:01
Rebecca Winthrop
So Amanda and Sari, over to you guys.
15:05
Amanda Bickerstaff
Yeah, well, Rebecca, I think one of the things we love so much about your report is just how evidence-based it is. I know we'll dig into that in a moment, but there was a question from Leslie about what you meant about the trust erosion with students. Can you talk a little bit about that?
15:24
Rebecca Winthrop
So we've done a lot of work on relational trust, which has seven core characteristics, and we have a relational trust scale we've developed. We know that when there are trusting relationships, for example between families and adults in the school building, or teachers and school leaders, those schools are 10 times more likely to be improving on all the outcomes we care about: learning, social, emotional. So trust is really integral to high-quality learning experiences. And what we found is a lot of educators don't trust that their kids' work is authentic; half the teachers in the US say, I don't trust my kids' work. We also found students not trusting teachers when the teachers are using AI, interpreting it as a lack of care for them. And we also found parents doing weird stuff, like regrading their
16:17
Rebecca Winthrop
kids' work with AI and rocking up to the school to tell the teacher they graded it wrong. And then we found people not trusting the AI content either. I interviewed one superintendent in the US who said, we're going back to all primary sources. He was in a purple district, and he said, it's taken me two years to negotiate the books and the curriculum, and now we let AI in and I can't trust anything; forget it, I'm just going back.
16:47
Amanda Bickerstaff
It is a fascinating piece. Just to pick up on what you talked about with students not trusting teachers: we were in a district recently (someone from Shumster is actually here) where we were doing guideline development, and there was a sixth grader in the room with the teachers and leaders. She got her essay back from a teacher known for being very concise, almost terse, and the feedback came in one block, one comment that was like five paragraphs long. Her first response was, I don't trust it. It's AI; this is not my teacher.
17:29
Amanda Bickerstaff
And I think that shows it really is multi-directional. We talk so much about students taking the shortcut or not using judgment, but it really is happening at every level. So I just really appreciate how much thought and effort went into this. We find that talking to people right now, whether it's focus groups or surveys or just sitting down with them, is easily the best and most beneficial way to actually understand what's happening, so I appreciate how much of that is in your report. I'm going to take over just a little bit. For those of you that are familiar, Sari Factor (come on, Sari, come back on) is my partner in crime here. It's almost a year since our ideation process happened. So I guess, Rebecca, when did you start this process?
18:18
Amanda Bickerstaff
What was the kind of moment you started?
18:19
Rebecca Winthrop
We kicked off the task force in September 2024. We basically did a year sprint of deep research, interviewing students, parents, educators, experts, and technologists all over the world, plus lots of other research methods. We finalized it in fall 2025 and launched in January 2026.
18:42
Amanda Bickerstaff
So you guys were even earlier on, I think. For Sari and me, we were sitting down at ASU GSV (where we'll all be next week; if you're in San Diego, come say hi), and we were just looking around this exhibition hall of all these AI vendors, and it felt like we were stuck in amber, like nothing had really changed over a year in terms of what was on the floor. The conversations hadn't really shifted or become more sophisticated, and there were no truly novel applications of tools, in a way that really made us go: this is the opportunity. So what we did is, I had recently read a piece called AI 2027, which is a futurist piece about what it would take if superintelligence were possible by 2027.
19:27
Amanda Bickerstaff
What I really liked about that piece is that it made this very complex time very digestible by giving it a story. So we brought together about 20 people over the summer, in July; Sari and the team out at Imagine were amazing, with all the free snacks, and also over-100-degree weather. We sat down with people from Stanford (represented a couple of times), one of the very first people to ever build AI in education, two high school students, and practitioners who were leaders. Talking about parents, Rebecca: we had Jason B. Allen from the National Parent Council. And we grappled with this big question of what the possible futures of AI in schools are.
20:14
Amanda Bickerstaff
Both if we don't make choices today and if we do. So this report (we dropped it in the chat) is designed to take you through that. I will say we picked a mid-sized district, but we believe very strongly that this scales up and down. If you look at the way we did this, we follow the Central district through essentially where we are today. One of the things that was so important to us is that we wanted this to feel as uniquely of the moment as possible, up until the point at which we don't know what will happen. Rebecca talked about how this proliferation of AI started to happen; we see that in 2023, and this district was relatively progressive: they came together and built AI guidelines.
20:59
Amanda Bickerstaff
But even with those guidelines, there was a lot of pressure: students using AI for their work, or being frustrated when they were accused of AI cheating, and chatbot use already happening with young people. That leads into a choice, and the choice here is something we would love to start seeing: essentially saying, okay, if this is all happening, if GenAI is coming so significantly into the school day and is changing the skills of the future in real time, what can we do to respond? They created an innovation lab. Think a skunkworks: rapid iteration (rapid in education is not that rapid), but one designed intentionally not just to live within this special school, but to come back into the district in age-appropriate ways. That happens in 2026.
21:49
Amanda Bickerstaff
And then we start to see pushback, because those of you who have been in education and systems change know that this is really difficult, and we will definitely talk about that in our discussion. Pushback starts to happen in really significant ways, more and more, until the end of the 2026-27 school year; so think a year and a half from now. The drivers, those big pressures coming in, are things that we see literally today. We have a divided community: those who believe that generative AI is the path forward, and those who believe it is something that's really going to negatively impact the social fabric, and not just learning, but our society.
22:34
Amanda Bickerstaff
You have questions around assessment, which I know Sari is going to want to talk about, with some work she's been doing on accountability and assessment. Rebecca, it was so interesting that you talked about AI-resistant assessments: there's no such thing as an AI-resistant traditional assessment. Unless you are in a place with no technology at all and kids are locked down with pen and paper, these tools can be used to do any traditional assessment. And it's something we have to think about, because we're still expecting young people to show their learning in ways that are at risk of AI use, but also at risk of becoming outdated. The other piece is scaling: how do you take innovative practices and actually scale them across a system?
23:19
Amanda Bickerstaff
That is going to be a big question that we always have. And then finally, that uncertain future. We are seeing this in real time right now: the number of young people in college changing their majors, for example, because they don't know what's going to happen when they graduate. That quantum of change being three to four years out is causing so much change already. These are the pressures we often see in our work. So we wanted to have this moment of provocation where you follow Central. We've heard it called a choose-your-own-adventure, so you get to pick what you want. There are three futures here, and I'm going to say there are more than three futures; there are many, many futures that could happen.
24:02
Amanda Bickerstaff
But we wanted to give three straw-man arguments, the first being this return to fundamentals. You talked about that superintendent: only primary sources, traditional curriculum, no technology in schools. It's this idea that we can re-entrench and essentially create boundaries around this technology, so that learning would be the same as it was 30 years ago, or 20 years ago, or five years ago. What we see a lot is that while the school is cutting down the use of technology, students are still accessing tools at home, and they're also not being prepared for this AI-literate future that we're seeing become more and more important.
24:42
Amanda Bickerstaff
The second is this idea of going all in on the technology: making a choice to go in on AI tools in meaningful ways for teaching and learning, and even tech-optimized surveillance. This is where we start to see human judgment erode, where decisions are being made more and more by AI and less by people. And the last is this kind of messy middle, this idea that education really needs to change. This is the place where we find the opportunity really lies: what if we build enough AI literacy, essentially what Rebecca is talking about with prosper, prepare, protect?
25:19
Amanda Bickerstaff
You do all of these wonderful pieces and you get into a space in which you actually can look at changing the system of accountability measures and assessment, of how we're training young people not just to come out with skills, but to become lifelong learners with durable skills, and also the ability to know their own judgment and believe in it. That is something we see coming into question even more as these tools become more authoritative, to the point that Rebecca shared. We've always wanted this to be a way for us to navigate what we've experienced as educators and leaders, but then also to force us into that deep questioning and intent about what happens next.
26:10
Amanda Bickerstaff
I'll take this down now so we can come together for our conversation. One thing I can say, and I believe it very strongly, is that the worst thing we could do today is not make any choices: just let it happen, let it roll over us. And that is something we are still seeing very significantly everywhere we go. We're very lucky; we're in schools and systems and conferences pretty much every day, and to see that some schools and education systems are not making any changes yet is a pretty big worry. So I'll hand it over to Rebecca and Sari, but the first question I'd love to ask is: why now?
26:58
Amanda Bickerstaff
It was so funny: we literally changed our publication date because it was going to be the same day as yours, and you guys are a little bit bigger than us in terms of scope. You started in 2024, but why do you think today, this moment, is the time to have something so substantial about the risks and opportunities of AI in education?
27:22
Rebecca Winthrop
Well, I think for.
27:23
Sari Factor
For me, we were seeing questions from our customers. Imagine Learning is one of the largest providers of curriculum solutions for schools; we're in about 50% of the districts across America, domestic only. They were asking us how they should move forward, and there was this moment in time, as you say, stuck in amber; that was the description you gave, and we felt it last year at ASU GSV. I think there was this worry that the kids are cheating. That was the first thing we heard: the kids were cheating, and districts were frozen in time. They weren't sure what to do about it. They were waiting for guidance from the federal government, the state governments, the districts, and there was inconsistency.
28:11
Sari Factor
We had two young women in our group, Amanda, talking about their frustration going from one classroom where a teacher was encouraging the use of AI to another where the teacher was prohibiting it. The cognitive dissonance around that was creating challenges for the students themselves. So we looked at this as: we have to do something, because as you said, no decision, no movement is a decision in and of itself. We have to be proactive here to do the kinds of things that are going to protect kids and promote learning, because ultimately that's what it's about. There was a question in the Q and A about balancing protecting students' cognitive development and learning trajectory without hindering their readiness for life and their careers.
29:07
Sari Factor
Because we know when they graduate, they have to be ready for the workforce, the military, or college. And every employer I know is looking for these skills. But beyond that, they're looking for discernment in how you use AI. They're looking for adaptability. Those are the skills people are hiring for. I'd love to know about your timing, Rebecca, why it felt so urgent to you and the council.
29:38
Rebecca Winthrop
Well, first, I agree with everything you said, Sari. In terms of our timing, to your question about how it came about: this actually came out of a big symposium we had in Silicon Valley in February 2024 that HP hosted. It was a whole day, and we spent time in the Hewlett-Packard garage thinking about the invention of the chip and how it changed the world. It was great. Anyway, at the end of it, a colleague said, you know,
30:16
Rebecca Winthrop
we knew what social media could do. He made the point that when social media came out, the discourse was really similar to AI: it's going to connect people, they're going to be more creative, kids are going to have friends all over the world, we're at world peace. The discourse was very lofty, and the discourse in February 2024 was very lofty around generative AI and all its wonderful possibilities. And he said, what if we just made a list of all the potential things that could go wrong and all the potential things that could go right, stacked them up, and then figured out how we act from there?
31:10
Rebecca Winthrop
And so we had this big discussion around all the things that we, as educators, child development specialists, parents, teachers, and curriculum designers, know about how kids learn and develop. If we had been at the table, been aware, and had some eyes on social media as it rolled out, maybe it could have been a different story. For example, we knew a decade ago that social comparison in adolescence can be very toxic if done poorly and could cause mental health harms. We didn't need to wait for all these kids to be harmed. And we also know there are parts of social media that are good, even though I would say the risks far outweigh the benefits at the moment, especially around citizenship development and political discourse.
32:06
Rebecca Winthrop
Like, if you're super interested in learning taekwondo or jiu-jitsu, or learning to bake, you can go on there and figure it out, for the really engaged, motivated learner who can stay focused, and those are few and far between. So we also knew it's complicated, this idea of entanglement. The Center for Humane Technology talks a lot about entanglement, where the good is all mixed up with the bad. So that was really our genesis. And then, of course, it takes a while to put together a big, massive global task force, get your methodology set up, and get your 500-participant focus groups all lined up. So that's why we kicked off in September.
32:48
Amanda Bickerstaff
Well, I think it's interesting, though, because I genuinely feel like we're almost always doing action research every time we go into an organization. Even though your research was done at the beginning of the school year, it still feels so reflective of what's really still happening; we read things and see them immediately. So I think these are such core principles and feelings from people of all different types that it really is a nice place to build off of. There's someone in the chat, Leslie, again, who asked, how do I build urgency?
33:28
Amanda Bickerstaff
This report is a great way to do it; pull out some of those pieces. But also go back to your own organization and take a similar approach. Ask your students, ask your teachers, ask parents; dive in. Run a focus group. We actually have a focus group protocol that we can drop in the chat. I always give a curveball to my team on these. Doing action research like Rebecca did can be incredibly easy in your own context. It doesn't have to be 500 people; it could literally be six to eight from a couple of different demographics. But it can start to make this feel urgent to your community. Because that's the goal, right?
34:05
Amanda Bickerstaff
It's not just urgency writ large, but what this means specifically for your community, what we need to do to manage change and really have the buy-in to move forward.
34:15
Sari Factor
And in the Beyond the AI Inflection Point report there's a resources section that includes many of the same activities we put our group through, which you can use in your communities. That was very intentional. Running those activities with people who are in charge of AI in states and so on elevated some of our own concerns about the risks and our own thinking about opportunities, like the things Rebecca articulated for her group. So I would encourage you to dig into that section, pull those activities out, and use them with your groups. And I would say engage parents, and engage local employers if you can, because they need to hear each other as part of this growing community to build what you want for your communities. Getting that buy-in is really critically important.
35:09
Amanda Bickerstaff
Absolutely.
35:10
Sari Factor
Can we talk a little bit about the prosper piece? I see the concern coming up a lot in the chat: cognitive offloading. I love the word stunting. I've been talking for two years about letting a stunt double write your paper; it doesn't give you the experience of having to struggle with something you don't understand. Talk to us a little bit about prosper in the context of cognitive offloading.
35:38
Rebecca Winthrop
Sure. There's an analogy that I really like that a colleague uses: watching weightlifting doesn't mean you're going to get stronger. It's the same thing with gen AI. If you have gen AI do the work for you, you're not going to get smarter. Passively consuming or interacting with gen AI is not the same thing as learning. To me that's the clearest analogy we can all use. And there's just so much research. Again, we were really asking the question of where we are headed, because it was early. It was a very rigorous methodology, and we found a lot more: wow, this is not just what potentially could happen based on what we know about children's learning and development; this is really beginning to happen.
36:34
Rebecca Winthrop
So we know that when kids use gen AI to do their thinking for them, to have less friction, less effort, less struggle in their learning, they might do better on assignments where they're able to collaborate with it; you say it in your report too. But if you actually ask them a week later what they learned, they either don't remember a single thing they wrote, or, if you assess them on an in-class test, they do much worse when you take the AI away. So that is basically the issue with cognitive offloading.
37:19
Rebecca Winthrop
And in the report we used the term cognitive offloading because it was the term everyone was using. We debated coming up with a new term, but we were too late; it takes a lot of time to fact-check and do all our peer reviews. So we just went with it. But to me, the issue with cognitive offloading is that we are naturally, as a human species, prone to do it. We have evolved to cognitively offload. It is why, and I don't know if I can speak for you, Sari, or Amanda, I cannot be dropped in the forest and know which berries and mushrooms to pick to survive. But my great-great-ancestors somewhere, all of ours, knew that.
37:59
Rebecca Winthrop
But now we have grocery stores, and we don't need to remember that stuff; we do higher-order math instead. So humans have evolved to cognitively offload. We will do it naturally; it is baked into us. So you can't just dismiss it, and I've had arguments with some of the big commercial AI providers about this. They say, but we have the AI-for-learning tab right here. And I say, it's right next to the give-me-all-the-answers tab.
38:32
Amanda Bickerstaff
And practically, sometimes you can also guilt it, you know, ask enough times in the learning mode and it will still give you the answer.
38:38
Rebecca Winthrop
And let's give them credit: eventually it'll be quite good, right? Because they're working on it. But it's not practical to imagine that people are just going to choose the let-me-learn hard mode unless they have an incentive to do so, like performing on an in-class test without AI, and they want to learn the material. So we worry a lot about that. The other reason I've decided I don't like the term cognitive offloading is that it really means taking a skill you have developed and passing it on to a tool. For example, I'm very old; I used to know how to read a map. I can't read a map anymore. I use Google Maps all the time. I've offloaded the spatial reasoning that I had developed.
39:24
Rebecca Winthrop
But now Google Maps gets me everywhere. Kids are offloading things they have never developed. They haven't developed critical thinking or deep reasoning.
39:35
Sari Factor
Right.
39:37
Rebecca Winthrop
So that is why I'm likening it much more to cognitive stunting.
39:41
Amanda Bickerstaff
I think we're getting into semantics here, though, because there's cognitive offloading, cognitive bypass, stunting, potentially surrender, cognitive debt. There are so many different terms, and sometimes we talk about them as a monolith. But offloading, in my mind, also brings with it choice: I'm choosing, like you said, not to look at a map; I'm going to look at Google Maps. One of the things that's really interesting about cognitive surrender, which is becoming a more common term based on some research that's come out, and we're going to drop something in the chat, is what we noticed really early on, Rebecca and Sari: the more authoritative a response looks from a generative AI tool, the harder it is to actually get our brains to evaluate it.
40:26
Amanda Bickerstaff
And now there's research showing we have a confirmation bias toward long, authoritative responses, or just responses in the right format. One of the things we struggle with is that if I ask for a lesson plan and a terrible lesson plan comes out, but in a lesson plan format I can easily recognize, that schema takes over. And I think with young people, especially when they're really young, it's almost more about cognitive surrender: it looks right, so why would I need to learn this? It could do it for me. Even more than offloading, it's not necessarily a choice at that point. My brain is saying this is actually better than what I could do, or this is right, and that becomes something quite a bit more dangerous.
41:08
Amanda Bickerstaff
Because if you look at a lot of the AI literacy frameworks, they talk so much about evaluation of outputs, and most humans, not just kids, are not able to consistently evaluate AI outputs, because of the ways our evolution and our brains work. And I think that's where it gets really interesting and tricky: a learning mode doesn't fix this. My big bugbear is that I don't know who decided Socratic questioning was the right way for all generative AI chatbot learning to happen. But it is something we have to really dig into when you talk about protect, because this is impacting so much more than just the choice not to use it.
41:56
Amanda Bickerstaff
Because another thing we think a lot about is what happens when the tools actually become super reliable. What if you decide you're going back to the artisanal way of reading that map, that you're going to eschew Google Maps even though you know it will get you there faster, because you want to make your own decisions about the path you take? That, I think, is a really new piece too.
42:18
Rebecca Winthrop
Amanda, on your point: I did a very quick, superficial, SparkNotes version earlier, but we pulled out our top six benefits and our top six risks. And one of the risks I didn't talk about here, but which is exactly what you're describing, is dependence.
42:37
Amanda Bickerstaff
Yeah.
42:38
Rebecca Winthrop
Which is what I think you're getting at. I'm curious, Sari, whether that classification makes sense to you. We found that a lot, kids saying, and teachers too, frankly, I can't start anything anymore without it. This crutch, this dependence: I can't actually get going without it. And there was both a false sense of self-efficacy, I put two sentences in and an essay came back, and they're like, I did that, I really did, and a diminished sense of self-efficacy, which is, I'm not creative enough, I can't do it, I need this AI.
43:20
Sari Factor
It becomes a crutch, like much technology does. Not just generative AI; many technology tools have become a crutch for us humans and are diminishing our innately human skills. And I think this is really what you were saying with prosper and what we were saying with Path C: how do we ensure that we really focus on the human skills, the things that distinguish us from the computers, and ensure that we are directing the computers, not the computers directing us? We have to continue to push there. And this is where you get back to redefining the purpose of schooling: what are we really teaching in the first place?
44:10
Sari Factor
And I feel like, after two decades of No Child Left Behind, because we couldn't evaluate complex work efficiently and inexpensively, we've boiled down what we want kids to learn into some sort of multiple-choice test. Here's a great opportunity for AI to help take a first stab at scoring a much more complex student project or portfolio, where the teacher eventually weighs in. So how do we start to build up, when it comes to prosper, the kind of AI-resistant activities that put the onus on kids to take agency for their learning and become really capable humans?
45:03
Amanda Bickerstaff
So this is a perfect big transition, but an important one: the purpose of schooling. The idea of human skills, what we're assessing. Our system still assesses the same way it did when I was in school, and I'm sure there are things that look the same as when we all were in school, whatever time period that was.
45:28
Amanda Bickerstaff
One of the things we hear most about why we can't start changing assessment structures or adopting more AI-resistant pedagogy is that these kids are still going to have to present their learning in the traditional way: coming in on one day of one year and giving you the knowledge in one fashion, whether it's an essay, a multiple-choice test, or short answers. So what do we tell people who say, yes, we want to make a change? You're so positive in the pieces about dual evidence of assessment, where you have traditional assessment alongside portfolios and project-based work. But this is a huge question.
46:11
Amanda Bickerstaff
Do we think the powers that be, so to speak, the structures around schooling, will be flexible enough, quickly enough, for us to actually change school in a meaningful way for this generation of kids?
46:26
Sari Factor
Well, I am seeing more flexibility than ever from states, and even the federal guidance has said, come on now, we need some creativity, we need to look for alternative methods. I think the portrait-of-a-graduate push that many states and districts are moving towards is starting to open things up. But it's a huge change management process. The system is just so wound around the current assessment system, so I think it's going to take some time. And the way teachers are taught to teach is going to take time to change, because the colleges of education aren't caught up yet. I see project-based learning models and so on in the chat.
47:19
Sari Factor
I think that is ultimately where we should be going. But the question is, do we have what it takes to get there?
47:27
Amanda Bickerstaff
Rebecca, what do you think?
47:30
Rebecca Winthrop
Yeah, I've done a lot of research on education innovation, and my conclusion is that yes, there are lots of headwinds, but the biggest headwind is mindset, through and through. The reason we focus so much on the instructional core is that if you shift the instructional core, you shift outcomes. That is the thing. A lot of ed reform is pretending you're way up at the policy level, and it doesn't trickle down to the classroom educator. The instructional core is the interactions between content (a curriculum, a lesson plan, whatever that content is), an educator, and a learner. Of course you've also got families interacting, but the core is content, educator, learner. So much curriculum change never actually shifts what's happening in the classroom.
48:29
Rebecca Winthrop
So it doesn't actually matter. I would start bottom-up, really thinking about mindsets and support. I think teachers are squeezed from above and from below: pressure from state standards, school leaders, and high-stakes assessment, and then pressure from families saying, you'd better get my kid into college or get them the scholarship. So I really think we have to start at the instructional core with educators, look at mindset shift, and then think about the conditions around them, because you actually don't have to change that much structurally. We've seen people do it. Look at every Big Picture Learning school; there are 1,400 of them across the United States. They're public schools with unionized teachers.
49:15
Rebecca Winthrop
They didn't massively shift the architecture; it's the mindset within, and a lot of rigor, relationships, and experiential learning, whatever form that takes. It could be project-based learning, service learning, or career-connected learning; I don't really care, as long as it's done with quality. But that real-world relevance piece, I think, is key. And then, for me, the purpose piece people don't talk enough about is the in-person hub of building community. Schools are really some of the only places in the United States, for example, where to this day kids will meet other kids who are not from their family or neighborhood and not like them. And that is how you build community. That is how you build strong social fabric.
50:06
Rebecca Winthrop
That's how you build citizenship. Work is really important, but it's not the only thing; you also have to think about citizenship and students' own self-actualization. Centering those visions, which a lot of portraits of a graduate do, I think is really important.
50:28
Amanda Bickerstaff
Absolutely. We're just a little over time, but imagine how great it is to be able to listen and engage with this work with Rebecca and Sari. I hear all of this, but I also see how frustrating this moment is for so many people on the ground. When we talk about mindsets or systems, people are experiencing this in a very real way right now: how do I become future-ready in a world that's vastly changing? How do I support students to learn when I'm not sure the skills I'm teaching them will still matter? What is the role of foundational skill building? What's the role of having kids do all these different tasks?
51:11
Amanda Bickerstaff
We've asked them to do these things for a long time. So I want to make the ending of this as practical as possible. If you were speaking to someone on the ground right now, whether it's someone in this room working directly in classrooms or with teachers, or leaders in higher education, what can they do right now to support the change that needs to happen? And where do you push, where are you ambitious? If you had to pick your ambitious moment, what would it be?
51:45
Rebecca Winthrop
Our report has a ton of really practical stuff. Sari, I'll leave you the closing word. But if I had to pick just one thing: one of our recommendations is to create student AI councils. If you're a classroom teacher in a school, or a school leader in a district, that doesn't want to create a student AI council, create one yourself in your classroom. I think every school needs one, and it really is about co-designing with students what you should be doing. We had a student co-author on our report, and we interviewed a ton of students.
52:20
Rebecca Winthrop
Student agency is essential here, and students are learning alongside educators. They say: we can beta test the products better, we can look at the protocols, it's our data. We can tell you what assignments we're going to easily get around. We can tell you the things that motivate us and how to use it well. I wouldn't say don't use it. For me, AI-resistant pedagogy means making sure there is plenty of time in the day when kids are on their own doing effortful thinking with other people, and that is hard to do with technology in the mix, frankly; any technology, perhaps. But then, of course, you want times in the day when you are using AI tools, maybe to forecast sea level rise.
53:10
Rebecca Winthrop
If you have a middle school science project where you're tracking sea level measures, AI lets you do more rigorous and sophisticated work, and you can learn machine learning and how AI works at the same time. That, to me, is what you need to focus on, and that's what I would do: those student AI councils.
53:31
Amanda Bickerstaff
Great. And then, Sari?
53:32
Sari Factor
If I had to do one thing: we focus on the teacher so much. Your comments earlier, Rebecca, about really helping the teacher make shifts from an older pedagogy to using high-quality instructional materials to ensure rigorous instruction, those shifts put more emphasis on students directing their own learning. And that's a hard shift for a lot of people, whether you've been teaching for many years or for a minute. There's an unsettledness about a lot of noise in the classroom that we have to get over, because we want those kids to help direct their own learning, and that becomes really important. So I think you're absolutely right.
54:25
Sari Factor
The classroom is the center of change and we have to really support teachers with high quality professional learning about AI, but also about how AI can be used in their curriculum area. So your examples are very much appreciated.
54:41
Amanda Bickerstaff
I'm going to put a fine point on this. I think there is a really negative rhetoric right now that teaching students how to use AI will make them better cheaters. It's one of the reasons we see AI literacy for students not being taken up nearly as much as it should be. So, Rebecca, I totally agree with the student AI councils and all those pieces, but if there's one thing to do, even more important at this stage than supporting teachers who are finding this on their own, it's that students should be taught intentional generative AI literacy. They should understand how these tools work and how they don't. Teaching the impact of cognitive bypass, surrender, stunting, offloading, and dependence is such an opportunity today.
55:29
Amanda Bickerstaff
And we have seen through our work and others' that it doesn't take 20 hours or three weeks to do this. You can do it in two hours. After our free course for students, almost 70% of students say they feel more ethical; they're using the tools more judiciously, and they're making decisions. That's just two hours. But the most recent data shows that about two-thirds of teachers have had some sort of AI literacy training, while only one in ten students have. That is a stark gap, and it's one we could close so quickly.
56:08
Amanda Bickerstaff
But without that, I think we're leaving kids in a world they're just not prepared for, in all places, whether entertainment, social life, work, or schooling. For us, that is the non-negotiable. And it will not make them better cheaters; some of them are going to be better cheaters no matter what.
56:28
Sari Factor
And there was plenty of cheating before gen AI.
56:31
Amanda Bickerstaff
Absolutely. But it's about giving people more, I feel.
56:34
Rebecca Winthrop
Yeah, I feel like we should change the term or something. Maybe we could do it together.
56:39
Amanda Bickerstaff
Let's do it together. Let's rebrand.
56:40
Rebecca Winthrop
Because people hear AI literacy and think web literacy, or they think of companies going out and saying, here's how to use my tool. And that's not what we're talking about. I wrote a piece with my colleague Jenny in the Washington Post, which I should send around; I'll send it to Amanda and she can share it. We likened approaching AI literacy as only teaching how to use the tool to teaching biology by only teaching how to use a microscope. You're missing the whole thing. And your courses are great, Amanda; I really love the AI literacy framework: how do you use it, how do you manage it, how do you create new things with it, and how do you design it?
57:21
Rebecca Winthrop
Because students need to know that this is changeable. They can design it any way they want; it's not the laws of physics. And there are all the ethical implications. Every student I talk to, every time I talk to schools or school districts and get lucky enough to talk to students, which is my favorite part, they have so many opinions, and they're hungry. They're hungry for this.
57:44
Amanda Bickerstaff
Absolutely. And not only are they hungry for it, they have big questions and they're engaged. How rare is it to have a moment in time where everyone has an opinion, everyone has a question, if you approach it the right way? The worst thing is when people are bored, and no one is bored with this. Everyone brings something with them, even if it's just caution. I think we are so encouraged. I just want to say thank you to everyone for joining us today, and thank you, Rebecca and Sari. It has been a gift for me to have colleagues like you both.
58:18
Amanda Bickerstaff
And while these two reports take two very different angles, what we want is for these types of conversations to happen on the ground, in spaces that include students, families, and leaders. Wherever you are in this world, there's a choice here: to talk about things, to build knowledge, to apply new practices, to innovate, to release some things we've done forever that we don't need anymore. We can be like Marie Kondo. We can remove some things that don't...
58:54
Rebecca Winthrop
I like that. Marie Kondo.
58:55
Sari Factor
Marie Kondo the education system.
58:56
Amanda Bickerstaff
Marie Kondo. Exactly. If it doesn't give us, what, maybe not pleasure, but if it doesn't actually help, this is our opportunity to make those reframing moments. I just appreciate you both. This recording will be available. We want to say thank you to all of the audience, wherever you are in the world; we appreciate you. And I look forward to continued work and learning from you both.
59:15
Rebecca Winthrop
Thank you. Good night, guys.
59:18
Amanda Bickerstaff
Bye, everyone.