From Passive to Active: Teaching Students to Critically Engage with AI Feedback
In this webinar we explored how to teach students to critically engage with AI feedback, addressing a common concern: that AI turns students into passive consumers rather than active thinkers and writers.
In this session, educators learned strategies to help students question, challenge, and strategically use AI feedback - building essential skills for navigating AI in all areas of life.
Rather than replacing teacher and peer feedback, AI becomes a tool that helps students become more intentional about their writing choices and build confidence in their voice, as they learn to use AI to stimulate, not replace, their thinking.
Key topics covered:
The Peer & AI Review + Reflection (PAIRR) feedback prompt: Designed by educators and grounded in core writing pedagogy, this prompt helps students see their work through readers' eyes—identifying strengths to build on and areas for revision.
Scaffolding critical engagement: Seven practical strategies for teaching students to iterate with AI feedback, including how to push back on suggestions, explore uncertainties, and seek contradictory perspectives.
Protecting human connection: Why teacher and peer feedback remain essential for meaningful writing as human communication, and how AI feedback can supplement rather than replace these interactions.
Hands-on practice: Work with real examples of AI feedback to test out iteration strategies and evaluate the feedback prompt in action.
-
Anna Mills’ Substack: Getting the most out of AI feedback
Anna Mills’ AI Feedback Slides
-
Anna Mills
Anna Mills has taught writing in California community colleges for 20 years. She is the author of two open educational resource textbooks: AI and College Writing: An Orientation and How Arguments Work: A Guide to Writing and Analyzing Texts in College. Her writing on AI appears in The Chronicle of Higher Education, Inside Higher Ed, Computers and Composition, AIPedagogy.org, and TextGenEd: Continuing Experiments. She serves on the Modern Language Association Task Force on Writing and AI. As a volunteer advisor, she has helped shape the pedagogical approach of MyEssayFeedback.ai, and she currently serves as co-Principal Investigator on the Peer and AI Review & Reflection project funded by the California Education Learning Lab.
Amanda Bickerstaff
Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.
-
00:02
Amanda Bickerstaff
Hi, everyone. It takes a little bit of time for everyone to come in, although I have to say, hey, Zoom has made a little bit of an effort: you're almost all in already. Well, really excited to have you here today. I'm Amanda, the CEO and co-founder of AI for Education. It's been a couple of months since we've done a webinar; we've been really busy on the ground. But I cannot tell you how excited I am to talk with Anna Mills today. I was just saying to Anna, it's very rare for us to essentially cold outreach, being like, we love your work, and we'd love you to join us on a webinar.
00:35
Amanda Bickerstaff
But I just was really struck, and our whole team was struck, by the really thoughtful, practical approach that Anna's thinking through in her own work around really having students be active, kind of collaborators with AI, especially around feedback, which is one of the areas that we feel generative AI is just so well suited to in this moment in time, of having an immediate impact on young people and their learning. And so I'm really excited about that. First thing is always, if you've never been with us before, and I see some of you have, you're already starting to say hello, and what you're going to notice is everyone is from all types of places. We already have Columbia, and I see New York already in the audience, a couple of people. But if you want to say hello, where are you from?
01:21
Amanda Bickerstaff
Got Arkansas, North Carolina, Spain. I feel like some people are up really late or up really early, but say hello. It's a big group. As always, you all are a community of practice, so drop in anything that you'd like to share in terms of resources. If you have great examples of how you've used feedback in the past or currently with your students, drop those in. And then if you have a question specifically for Anna or myself, please use the Q&A function. What you'll notice is the Q&A comes to us directly, and that way I can make sure that we answer the questions, because this is really meant to be a conversation. And then, as always, drop in any resources that you have. But we're really excited.
02:04
Amanda Bickerstaff
So, you know, one of the things about AI for Education is that we're always in schools, right? I don't think, Anna, we're ever more than a day away, or six hours away, from talking to teachers or leaders or students. And one of the things that I think we've always felt from our own experience is that one of the ways in which AI can be used in a really low-risk but high-reward way is around feedback. And I think that when we saw your piece, and I know it's in your book as well, and the research that you're doing, it just struck us so much, and me particularly: this is something that educators can do today to start making a difference around AI fluency and adoption.
02:47
Amanda Bickerstaff
So I'd love you to introduce yourself, Anna, and maybe talk a little bit about what you're doing. We'll drop in the link to your wonderful blog post that really brought us to you. But yeah, I'd love to know a little bit more about you and what brought you to this work.
03:02
Anna Mills
Great. So I've been teaching community college English in the San Francisco Bay Area for around 20 years, and I wrote an open educational resources textbook on argument. And I kind of grew up in Silicon Valley. So when I discovered AI in June 2022, I really felt a calling that this was something that I wanted to engage with and think about how this affects writing education. And I really jumped into social media conversations and sharing resources openly. And so I'm just grateful I've been able to be in a lot of conversations, on a task force with the Modern Language Association, and give a lot of faculty workshops, and now I'm developing my own kind of open educational resources AI orientation textbook. And I focused on feedback from early on.
04:01
Anna Mills
So I got to be a volunteer advisor to a nonprofit app for AI feedback, MyEssayFeedback. And I've been using that. And then I teamed up with researchers from the University of California, Davis, who had also been doing a parallel approach with AI feedback and had some great data on that. And we're now doing a grant where we're developing that approach and just sharing our materials and inviting input and people to adapt them. And it's great to be in dialogue with Amanda and find a kindred approach here. So that's, yeah, that's a little about me right now.
04:47
Amanda Bickerstaff
Well, I think what's really interesting, Anna, which I really like (because I think I started, you know, around the same time, maybe a little bit earlier, like April of 2023, maybe late March), is that you couch your discussions of AI and feedback in a very personal place, where you say, I found this useful. That, I think, is such a beautiful on-ramp to educators getting more comfortable with AI use, especially with young people: when you start to see value for yourself. Can you talk a little bit about what was that lightbulb moment of, here's this new chatbot and I'm going to just ask it for feedback? What brought you to that question?
05:31
Anna Mills
I had a moment in, I think, June 2022, when I was playing around with OpenAI's very technical platform at that time for interacting with GPT-3. And I gave it an essay and, I think, asked it for feedback, and it pointed something out that I hadn't noticed. And it was like, this is not earth-shattering, you know, I don't like all of the things it's saying, but it did direct my mind to something I wouldn't have stumbled upon. And so I could see that it could be an assistant for thinking; it could help support my own thinking process. And I've since used it a lot for feedback on my own writing. Of course, I'm still asking humans for feedback as well. But, you know, it is really important to me.
06:22
Anna Mills
I'm glad you said that about being very practical and coming from a personal place, because I want to be really grounded in what I'm doing as a teacher of writing right now, and how I'm sharing with my students where I'm coming from, how these practices are emerging, the good and the bad and the big questions and uncertainties that I still have around it. I think, you know, with all the craziness of AI, it feels much better to me to come from that place.
06:56
Amanda Bickerstaff
I mean, it's so interesting, though, because what I find really fascinating is that if you go to the Internet, everybody, the Internet, LinkedIn or wherever you are, it's going to be like, English teachers, writing teachers hate AI. And what's really interesting is that I think some of the most innovative practitioners and thinkers of today are those that are working in writing and ELA instruction, English instruction, or writing and reading, because it has been so disruptive, right, to your practice. And some are rejecting this, right? I'm sure some of your colleagues are: no one use AI ever, it's only pen and paper, or blue books in college if you have those where you are.
07:41
Amanda Bickerstaff
But then at the other end, there's this really amazing moment where those that are willing to lean into the uncertainty, the innovation, the opportunity, are doing things like you're doing. And so I think it's a really important thing to prop up that all the noise that we hear isn't always true. A lot of the people that are innovating are those whose practice has been the most disrupted. And I think that's where I really enjoy the work that you're doing. Have you found any pushback from your colleagues on this approach, or are you finding them to be more open to thinking of AI as a potential tool for students, even around feedback or other types of assistance?
08:20
Anna Mills
Well, I think there's a lot of really important discussion that's happening and real concerns that I share about both AI itself and how it's been built and the ethics of it and about protecting some space for students to develop their own voices, for us to have transparency about what's AI and what's not so we can understand where the student's coming from and whether they're learning. I share those concerns and I don't think it has to be one or the other, as you're saying. I think that really where I'm headed is the idea of guiding students in this moment and helping them to engage with AI in a more empowered way and to understand how they might not want to use it, how they might want to use it, what their choices are, what people around them are doing.
09:13
Anna Mills
And so, you know, I think that's practical. I think that helps us work with AI in the future in workplaces, and it also can be critical at the same time. So I've been having great discussions with my colleagues, and I find that more people are kind of exploring that uncertainty, a middle ground of some criticism and some exploration.
09:42
Amanda Bickerstaff
Absolutely. And so one of the things that we talk about is that balance. Before we maybe dive into the why of AI feedback being a good approach: you talk a little bit in your blog about this time where you used the tool to help you write, and you thought you'd built this beautiful output, and Claude was, like, raising you up, saying, yeah, you got this, Anna, you're so smart, this is the best conference description. And then you submitted it to people and they were like, Anna, this is very list-heavy and not really how we would write this. And it took you a little bit of time to kind of come back and realize what had happened.
10:29
Amanda Bickerstaff
Can you talk a little about that? Because I thought it was such a beautiful example, again, of the practical nature of this. Until you experienced that, I'm sure you weren't even thinking that this was going to happen. But it did, right?
10:41
Anna Mills
It did, yeah. And I'm susceptible to a little of the praise and validation that I'm craving. And I think that was really interesting, because this came up as we were developing our feedback prompt in the PAIRR project that I'm part of. We found that, you know, we wanted it to be validating, but maybe sometimes it would be better for the student to start over with the draft and really take a different approach. And so we don't want it to over-praise. And I think the AI companies are really struggling with this tendency of chatbots to be sycophantic, to be fawning, to maybe encourage things that we need some pushback on.
11:31
Anna Mills
And so I think part of AI literacy, and part of learning to respond to the feedback and how we can support students to do that, is kind of that self-monitoring: the awareness of how I'm responding, what I need in the moment, how I might be susceptible to being misled, and why I shouldn't trust the chatbot too much. I should keep a little distance and think about different ways I could respond. Maybe I could say, okay, but what's another perspective on that? What would you say if you were going to gently critique me a little bit? Right. So.
12:11
Amanda Bickerstaff
Or even, be brutal: pretend that you are, like, you know, give me the hard feedback. I do, yeah. So everyone, the sycophancy thing. If you don't know what sycophancy is: we call these the yes bots. Instead of yes men or women, they're yes bots. They're designed and trained to be pleasing. And so if you have ever been asked by ChatGPT or Gemini or Claude to pick which of the two responses you like better, it doesn't ask you which is more accurate; it's what's better or what you like the best. And so those companies have actually prioritized pleasing over being direct.
12:57
Amanda Bickerstaff
And so what ends up happening is, you know, ChatGPT or Claude or Gemini will be like, you are brilliant, this is the smartest thing, this is the thing that no one's ever thought about, this is the best conference abstract that's ever existed, or, oh my gosh, you're right, even when you are not right. And so this is where that sycophantic piece has to be something that you really spend time on with young people, to see when they need to put pressure on these systems to be more direct, to give better feedback, to act as a professor. Because I'm going to tell you right now, if you ask ChatGPT and give it the role of your hardest professor, that sycophancy will actually decline, because you've given it more structure.
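As a rough illustration of the role-prompting Amanda describes, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, draft text, and prompt wording are illustrative, not taken from the webinar.

```python
# Hypothetical sketch: assigning a demanding-professor role to reduce sycophantic praise.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Example draft text; in practice this would be the student's own writing.
draft = "My essay argues that community colleges deserve more state funding because..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat model could be substituted
    messages=[
        {
            "role": "system",
            "content": (
                "Act as my hardest writing professor. Do not praise me. "
                "Identify the three weakest parts of this draft, quote the "
                "sentences you mean, and explain why each one falls short."
            ),
        },
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```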
13:40
Amanda Bickerstaff
So what I'm going to say is, there's such a great conversation happening in the chat, which is so great. And a lot of this is actually what we're going to bring up. I know that Dave and others have talked about how you use this: AI is almost like one of the tools in your toolkit, if that makes sense. Meaning AI feedback is not on its own; it must happen within the context of human judgment from the individual student, the teacher, the tutor, et cetera. So maybe, Anna, do you want to talk a little bit about your three principles of how AI feedback can be really helpful? Because I think that would be a nice contextual piece to what people are navigating in the chat. And if you want to share your screen, you're more than welcome to.
14:20
Amanda Bickerstaff
You should be able to share your screen.
14:22
Anna Mills
Sure. One second. So I really want to encourage students to be skeptical, to be critical of AI outputs, to recognize things that are not quite right about them. One second. It's not letting me share. Okay. It's not letting me share the right thing.
14:47
Amanda Bickerstaff
Sorry. Do you want me to share it? And then we can go. But, like, yeah, I think I'll just share your slides and you can take it from there.
14:54
Anna Mills
There we go. We fixed it.
14:56
Amanda Bickerstaff
You got it. Go ahead.
14:58
Anna Mills
So, helping students build that skepticism: we've got to question these systems to work well with them, looking for things like inaccuracy and bias. So even before I invite students to engage with feedback, I do have them do some basic AI literacy readings that cover so-called hallucinations and bias. And I also want a use of AI that supports existing learning goals, a sweet spot where this kind of critical AI literacy and support for what we're already trying to do can go together.
15:41
Anna Mills
And then, you know, I want to help students build confidence in relation to AI and sort of self-awareness, metacognitive awareness of how they're using it, the choices they're making, to develop a sense of agency and of the value of their own judgment, their own voice, their own ideas, so that they're not worshiping AI, they're not looking to it as an authority, but they have a very different relationship with it. So the question is, you know, I think, how can I support them to engage with writing feedback in this way? And I think it's kind of a natural fit. I don't know if you want to.
16:22
Amanda Bickerstaff
Yeah, I think that the metacognitive component of this is really appropriate. One of the things that's really interesting is, what do I take and what do I leave? And I think that is maybe easier to do with AI than with a human. Meaning: Anna knows me and everything, but I don't agree with Anna, and maybe I don't feel confident to stand behind my own reasoning of why, even though Anna's perfect in all ways. But you could train students to have that reflective process and to build that reasoning of why the feedback should be rejected.
17:10
Amanda Bickerstaff
And some of it might be easier with AI, because the AI's feedback is going to be potentially more superficial, or it could be sycophantic, the creativity could be limited; it knows nothing about me, really. But you can almost train students to say why they didn't take the feedback. And then the next time they get feedback from a human that they don't agree with, or aren't sure about, or want clarification on, they're actually going to be better at advocating for their voice. And I think that's something that's so interesting here. The reason why, Anna, I agree with you is that feedback is a relatively safe place, because it's subjective. Excuse me, feedback is subjective.
17:56
Amanda Bickerstaff
And it really should be a path for guiding us instead of replacing our thinking. We wouldn't want to have anyone tell us there's a better place for us to go; our voice should be the one that's prioritizing our thinking. You can practice with these systems and then have advocacy, metacognition, reflection, and voice development be even stronger. And I think that's really interesting. So can you talk a little bit about how you've used AI feedback as a component of the larger process? There are some questions about this in the chat, like, where does the AI live? Is it something that's, like, first-pass feedback?
18:39
Amanda Bickerstaff
I know you're training students to be better prompters, but when and how are they actually using AI in comparison to your feedback or other pieces of feedback that are human-led?
18:51
Anna Mills
So it's really important to me, and to the PAIRR project, that we're sharing a way to invite AI feedback as part of a human-centered writing process where teachers and peers are still responding to student work. Because writing is communication: it's social, it's rhetorical, that's where the meaning lies. So I'm having students do peer review and also meet with a tutor before they engage with AI feedback. And then the PAIRR project has a really nice, simple structure: they are asked to chat back to the feedback at least twice, and then they're asked to reflect on how it compared to the peer feedback. They're asked to answer questions like, you know, what resonates with you, what doesn't, what do you think you'll actually do now?
19:49
Anna Mills
So that really emphasizes their agency as writers and their own exploration of their ideas, which should be the point of assigning writing: to sort of help the thinking process, right? Help them figure out what it is they want to say more clearly.
20:06
Amanda Bickerstaff
So, yeah, I have a question about the two times, because we kind of joke with people that, like, you have to pinky swear us that you will never accept the first output from an AI system, that you always have to give feedback. How did you come up with two? Like, two times that they have to do that turn-taking and push the bot on its feedback. How did you guys decide on that?
20:31
Anna Mills
Well, we just wanted to keep it, you know, not feeling onerous, but also not just being a one-off. So, you know, we share these multiple strategies for responding that are really very different ways to get more out of AI feedback. And so we want there to be a little bit of experimentation and exploration there. And we also didn't want it to become, like, you know, ten, or a huge long requirement. And so far it seems like it's worked really well. Some students appreciate a little support as to how to chat back, and some of them just launch in, and it feels very authentic, and sometimes more frank than they might be with a tutor or a peer.
21:28
Amanda Bickerstaff
And can we go back to your slides and share some of those strategies that you're using? Someone asked about critical AI literacy, so maybe you can talk a little bit about that. But there are the seven strategies for being really active in working with the bots around feedback. So do you want to share those? Sure.
21:54
Anna Mills
So the first strategy is really to be frank and to push back. I really want to encourage that boldness and that sense of confidence. And it's kind of fun that we can be more frank. So, you know: I'm not convinced that I need to make my thesis more specific, or, I just don't think I can do that, or, I could do that, but it would be boring. Right. And so we can see what comes out of that when we're really frank. Then, asking for clarification: asking the bot to give examples of what it's talking about, or more specifics about where that is in the draft, quotes from the draft that support what it's saying.
22:48
Anna Mills
And then, you know, I think it's kind of liberating that we can just ask it to do it again: try again, give me a new version of the feedback. And we can ask for the flavor that we're interested in, maybe a more casual approach, like a friend would give me feedback, or in the style of someone we admire. I just tried it with Trevor Noah, and it was very interesting how it represented his voice. Or we can ask for a reasonable perspective that goes against what it just gave me, right? A contradiction to what it said before. And again, that's raising awareness that we can't just take what it gives us, that it might say the opposite the next time, and we have to make our own judgment.
23:37
Amanda Bickerstaff
I just want to, maybe for number three: if you know us, you know we're very much around the ethics. I would be a bit careful about James Baldwin and Trevor Noah and others, especially around bias and also around familiarity. Some of our students are using these tools for, like, therapy or friendship, and anthropomorphizing them like this could start to drift into something that is maybe unintended. But also, having a bot replace someone that's a real person is not going to be, maybe, a best ethical practice. But I mean, it's an interesting way. Maybe instead of it being Trevor Noah, it's, like, you know, a comedian that talks
24:28
Amanda Bickerstaff
a lot about race and ethnicity, or a famous author who lived in the '50s; just a little bit less specific, because then it's not imitating a real person. It almost teaches them how to be more ethical users and to understand what the bots really can do, because it might feel very real but could be misrepresenting someone, or creating a false kind of interaction, if that makes sense. So I think it's really interesting. But again, these are the ways in which you build critical AI literacy; someone asked about that. Critical AI literacy is not just what you can do, but how you should do it, really understanding the complexity of how the tools work, but also how it's best for me to interact with them. So I think these are really interesting places to dig into.
25:15
Amanda Bickerstaff
Because even that, Anna, I would assume, would be an amazing conversation with your students about what the right approaches are for getting that different perspective in without potentially introducing bias or something along those lines.
25:32
Anna Mills
Yeah, I think you're right to push back on me on that, especially since I did that experiment with Trevor Noah and I thought that it was sort of stereotyping his voice, and I did have a moment of, like, maybe I shouldn't be doing this.
25:50
Amanda Bickerstaff
But you don't know until you do it, though, right? It's one of those things. But think about that teachable moment, Anna. Think about that moment where, like, I'll have someone drop it in the chat; from our trainings we have these critical analysis prompts and also, like, a one-pager. And one of the things is comparing: what you could do is say, give me feedback like you're someone in my life, like you're my parent, and then have your parent do it and compare what you get. You can even look into how these bots are essentially probability machines that are taking these stereotypical ideas of what it could be, and then actually compare it. Or even compare the bot tutor who takes this approach to my real tutor. Those are
26:37
Amanda Bickerstaff
just these rich places to build such significant critical AI literacy. It isn't necessarily what you're trying to do, but it could be a wonderful offshoot from it. Yeah.
26:52
Anna Mills
And I think that we have tried to push back against the anthropomorphization and too much sort of relationship-building that's not quite right, by making the language precise: it's not that the bot is James Baldwin, it's "what might this person say?" And maybe that's still a little too far over the line. But even when we talked about how it should talk about their writing, you know, we told it to write in the style of a coach who is curious, respectful, all these things, but don't actually express emotion, because it was saying things like, "I'm so excited to read your next version."
27:37
Amanda Bickerstaff
Yeah, right.
27:38
Anna Mills
And so we said, no, don't say that; say, "I'm here to respond to further questions." Right, right. So a really precise kind of approach: we can build on that emotional dynamic a little bit, but we also want to be really careful about it and distinguish what a human can bring.
27:58
Amanda Bickerstaff
Absolutely. And there are a couple of things in the chat that are interesting, like talking about building a feedback Gem or a custom GPT or a Playlab bot, actually pre-prompting or system prompting it to be less friendly, to be more direct, to be encouraging but also to find real places of opportunity to understand. And I think that can be a way. But I will say, though, the world is going this way; for example, Copilot is releasing avatars where you can pick your avatar to talk to you, the same way a character AI would as well.
28:36
Amanda Bickerstaff
So the more we can teach our kids, our students, whoever they are, that these things are not real on the other side, the better off they're going to be as the tools become more anthropomorphized, because they're stickier, right? Sticky in the sense that the tech companies know that if these tools are pleasing, if they give you the answer that you want, if they feel like real life, you're going to use them more. And I think that is something that you can really have a nice conversation about with your young people.
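For readers who want to try the pre-prompting Amanda mentions, here is a hypothetical set of custom instructions of the kind you might paste into a ChatGPT custom GPT, a Gemini Gem, or a Playlab bot. It is not the PAIRR prompt; it simply sketches the constraints Anna describes (a curious, respectful coach that is direct, quotes the draft, avoids expressed emotion, and does not rewrite).

```python
# Hypothetical feedback-bot instructions; NOT the PAIRR prompt, just an illustrative sketch
# of the kind of system prompt you could paste into a custom GPT, Gem, or Playlab bot.
FEEDBACK_BOT_INSTRUCTIONS = """\
You are a writing feedback coach. Be curious, respectful, and specific.
- Be direct: name strengths to build on and concrete areas for revision.
- Do not flatter the writer, and do not express emotions such as "I'm so excited";
  instead, close with "I'm here to respond to further questions."
- Quote the draft whenever you point something out.
- Never rewrite the student's sentences; ask questions that help the student
  decide what to change.
"""

if __name__ == "__main__":
    # Print the instructions so they can be copied into a bot's configuration.
    print(FEEDBACK_BOT_INSTRUCTIONS)
```

Pasting something like this into a bot's configuration is what "pre-prompting" or "system prompting" refers to in this exchange.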
29:05
Anna Mills
So.
29:05
Amanda Bickerstaff
Okay, let's keep rolling, though, because I know we have four more suggestions, and I just want to say, everybody, two things. One, I think you're comfortable sharing your slides.
29:12
Anna Mills
Is that correct?
29:13
Amanda Bickerstaff
Correct. Yes. So we'll share the slides in the follow-up. And secondly, the chat: I think we can do this, I'll let Dan answer, but we'll definitely try to share the chat where we can; the only thing is around transparency and privacy. But just let us know, and we will try our best to share all the good thinking that's happening. But I'm going to turn it back to you, Anna. Talk about your fourth kind of strategy.
29:41
Anna Mills
Yeah, it's just really simple: ask about something it didn't address yet, whether that's from the assignment or something the student has heard before that they're concerned about, and we give them examples. The fifth is to actually get it to help us explore our own uncertainties. Sort of tell it: ask me questions about what I want to do with the conclusion. Get it to sort of draw out our ideas. Maybe say, I don't like one of its suggestions, but I'm not really sure why; can you help me explore that? So again, it implies that the writer has to sort of trust themselves, and they're not looking for the bot's opinion, they're looking for it to help them figure out their own opinion.
30:33
Amanda Bickerstaff
See, I love this, though: learning how to trust yourself. Because, again, some of us are not confident writers, and if anyone gives us feedback, we immediately will over-prioritize that feedback because we don't believe in ourselves, right? And I think that is a really interesting one, because you could even potentially use AI also to be like, here's my update, this is why I made the change, and then look at it again and get feedback on the update. I don't know, I find it really interesting, because think of how few young people today feel confident about their writing. Is there a place for them to really use this to build confidence, and to do it in a structured way? It seems like such a really thoughtful, like, durable...
31:22
Amanda Bickerstaff
Talk about durable skills, Anna; a durable skill. But also something that could really help bring them into the classroom again.
31:31
Anna Mills
Absolutely. And then I think we can also sort of be honest with it about where we're stuck and where we need some ideas as to writing strategies, not just it telling us what to do. But, you know, what could I do besides outlining to help me organize? Or how can I tell if it's time to take a break or keep pushing on this draft?
31:58
Amanda Bickerstaff
Right. Oh, man. Okay. I mean, this is the thing. I will say that we're not talking about this so much, but, like, how funny: sometimes you've got to just get stuff down. Just get stuff down. Like, here are all my crazy notes, does any of this make sense? Because one of the cool things, everybody, is that if you're a handwriter, or your kids are handwriting people, or they need to take notes, or it's voice notes, you can use computer vision where you can upload those and have it help you kind of find a path and give feedback on your thinking. You could use it with a voice memo to transcribe it and give you feedback that way.
32:40
Amanda Bickerstaff
There are some really fun ways to use multimodality too, which I think is really fun, because you can start to get feedback in ways that are impossible from a person. Okay, I guess this is my one push, Anna, too: are there ways, now that you've built this, and we should come back to this, where you can start to think of feedback that isn't possible from a human? Meaning, would a human be able to take in, in one sitting, your voice notes, your handwriting, your thoughts, and help you organize them in real time? That's something a tutor would really struggle with and probably not have time to listen to you. But think of the fun ways that you can build on this wonderful framework and start thinking about the possibilities of now. Because I still think...
33:28
Amanda Bickerstaff
I don't know if you agree, Anna, but sometimes it's like we're still thinking in terms of the moment we're in, of where we've been, of what feedback has been. But even pushing further into what's possible, I think, starts to get really interesting as a next step.
33:42
Anna Mills
I love that. Yeah. And that kind of segues into this last approach, which is to ask for things that a tutor probably wouldn't do, or wouldn't do very easily, not for ethical reasons, but just because it's kind of a stretch. So, you know, what if we ask it: what possible connection does my essay have to octopi? Why should that be in my conclusion? And so there can be this playful element that also helps us build that habit of having a little distance and a little sense of humor about how these systems are different from us and how they lack that real intention or purpose. They're just, you know, spinning words according to what we're asking for.
34:29
Amanda Bickerstaff
Right.
34:31
Anna Mills
And so, you know, that helps us have that little bit of distance that also returns us to our own sense of purpose and sense of, well, I am going to say something with intention, right?
34:42
Amanda Bickerstaff
Yeah, absolutely. And we can come off the screen share. I would say that one thing that's really nice about this, Anna, is that it is really practical, and people are really starting to latch on to certain parts of it. And just remember, everyone, that sometimes these ideas and approaches are the things that we already learn and know, applied with a new lens. And I think what you've done here, Anna, is a lot of that: applying to this new lens and opportunity, leaning into the potential negatives, but then finding the path where even the negatives can be turned into positives. Because, for example, someone's asking in the chat, how do you keep kids from using it to rewrite?
35:25
Amanda Bickerstaff
Well, if they do, give them the opportunity to recognize: I need to reframe my prompts; this is down to me too. If I understand that the bot's first instinct, its first response, will be to give me the answer, I can say in my prompt: avoid giving me the answer, I want to take it step by step, ask me questions about my writing. That is critical AI literacy. The thing here is that if students see that this is what happens, they learn that just because something is possible doesn't mean it's what we want to do.
36:04
Amanda Bickerstaff
But even those small pieces of, oh, I need to reframe how I ask and direct the bot to better serve my needs in this moment, are going to carry over to all of the work that they do, whether it's feedback or brainstorming or creativity or all of these different pieces. I do think that this is something that really can be quite helpful. So we have about seven minutes left. First of all, Anna, I hope you've enjoyed this; it's been very impactful. And I think that we definitely have some ideas of how to think about this, and some questions.
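To make the "reframe your prompt" move concrete, here is a minimal sketch of a chat-back exchange, again assuming the OpenAI Python SDK; the model name, draft text, and follow-up wording are illustrative, and the two follow-ups simply echo the PAIRR idea of chatting back at least twice.

```python
# Hypothetical sketch of reframing the prompt and then chatting back twice.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; all wording here is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name

messages = [
    {
        "role": "system",
        "content": (
            "Give feedback on my draft, but avoid giving me the answer and do not "
            "rewrite my sentences. Go step by step and ask me questions about my writing."
        ),
    },
    {"role": "user", "content": "Draft: Community colleges deserve more state funding because..."},
]

# Two follow-up turns, echoing the requirement to chat back at least twice.
follow_ups = [
    "I'm not convinced my thesis needs to be more specific. What's a reasonable "
    "perspective that goes against your suggestion?",
    "I don't like your second suggestion, but I'm not sure why. Can you ask me "
    "questions that help me figure out my own opinion?",
]

for turn in follow_ups:
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    print(reply.choices[0].message.content)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": turn})

# Get the bot's response to the final follow-up as well.
final = client.chat.completions.create(model=MODEL, messages=messages)
print(final.choices[0].message.content)
```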
36:38
Amanda Bickerstaff
And so I think one of the things that people are really interested in is how you make this more tactical, in the sense of: if you are going to use AI feedback at the beginning of a school year right now, how would you start, Anna? Would you start with all human feedback, and then they learn how to use AI feedback? What's the trajectory of how to build towards this critical approach? Because we know it won't happen all at once.
37:09
Anna Mills
Yeah. With the PAIRR project, what we do is we have some really short, curated AI literacy readings, and then we assign peer review on a draft, and then we assign reflection on the AI feedback. And we find that it's easier to do that through our nonprofit partner, MyEssayFeedback, because that guides them through the process: it has the prompt that we've developed and tested intensively, it has the teacher's assignment and rubrics so that the feedback is customized to that, and then it prompts the student with ideas for the follow-up chat, prompts them to reflect on the feedback, and gives some credit in the learning management system for that participation. So it kind of guides them through that process in this structured way.
38:07
Anna Mills
And it's really an additional process assignment for each essay. So it's sort of tucked in there after the peer review, maybe with the tutoring, and before the final draft is due. And we also offer an alternative assignment for students who don't want to engage with the AI feedback. Yeah, that's kind of the basic structure we're using.
38:31
Amanda Bickerstaff
Yeah. And I think this is where it's deliberate: A, it's pedagogically sound; B, you're using bespoke tools, which is something that a lot of people have been sharing; and you're doing it in a way that really builds towards mastery.
38:48
Anna Mills
Right.
38:48
Amanda Bickerstaff
And I think that is something that's really interesting. And I'm sure, as you're working on it, you're seeing connections outside of class, and how students are starting to think about AI just differently and more thoughtfully. I will say people are asking about AI literacy. For us, we would always suggest that intentional AI literacy is a part of this too, alongside the teachable moments. One of the things I think we sometimes believe is that students will just be able to do this because they already use technology. I think Helena asked about, if we remove Trevor Noah or James Baldwin, would students know to engage with bias? But if you've actually talked to students about bias as part of their foundational AI literacy, then they should be able to identify that for themselves.
39:35
Amanda Bickerstaff
I think that's where some of that foundational AI literacy can come first. We have our AI literacy framework that we're hopefully going to launch externally, even though it's taken us forever, which is our C framework, which is around the knowledge, skills, and mindsets for using AI safely, ethically, and effectively. For example, the way this would fit, Anna, is: effectively, they would actually know how to ask for feedback and when to ask for feedback; safety would be when to share my intellectual property and how; and ethics would be how to avoid bias or other types of ethical issues. But what you can see is that you can start to frame these as much larger conversations that lead to critical literacy, but also extend potentially outside your classroom.
40:23
Amanda Bickerstaff
I'm sure your students have to be so excited, Anna, about how much this is helping them. Especially since, everybody, the thing we have not talked about is that Anna is not available at 2:00 a.m. You know what's available at 2:00 a.m.? Generative AI. And it will not be mad at you for calling it up. And I think this is really just so important, because not every student has a tutor at home, or a parent who's available; they may have multiple jobs.
40:52
Amanda Bickerstaff
And to equip them with these tools, which you're doing, Anna, is going to support them not just in your classroom, but in all the ways in which, if they're using it meaningfully, intentionally, and ethically, just imagine: knowledge gaps could be closed, assistive technology opportunities opened up, in ways that have not been done before. So many opportunities. But I'm going to give you the final word, Anna, because I just really appreciate it. First of all, I just want to say thank you. We had over 400 people today, and you guys have been such a beautiful group, as always. But Anna, do you want to leave us with a final statement, as your cat walks by, before we say goodbye?
41:29
Anna Mills
Well, I love what you brought up, because I think there is an equity argument for introducing AI feedback. I know that I had peer review, I had my parents helping me, giving me feedback as a student. And most students don't have that additional support. And especially low-income students who are working full time, community college students, many times don't have access to, you know, multiple iterations any time of night. And I think that AI feedback can be a piece that they find really valuable, and it doesn't mean it has to take over. They will also see value in the peer feedback, in the human feedback, at the same time.
42:15
Anna Mills
And as teachers, I think it's important to see how we can try to be wise about guiding students, not throwing out all of our writing pedagogy, not throwing out the human meaning of the writing process, but making a place for AI that's limited within that, within a vision of writing as a practice that helps them develop their own thinking and confidence. Absolutely.
42:43
Amanda Bickerstaff
And you know what? I got to my PhD program in college and I was so self-directed, but, I'm a good writer, but I struggle with writing, and one of the ways that I use generative AI the most now is to get the feedback I didn't get when I didn't finish my PhD. I genuinely find it to be something where, if it's done the right way, with the right AI literacy, the right intention, the right support, and it's given a space for us to experiment in safe places that allow for calculated risk-taking too, of "this might not work" or "we might have to reframe what we do," I think the outcomes can be really meaningful. So thanks, everybody, for hanging out.
43:22
Amanda Bickerstaff
I can't believe it's been like, two months. So we'll do more of these now that I'm not on the road every day. But, Anna, you were so thoughtful and such a pleasure to get to know. We appreciate your time and effort and just thanks everybody. Go to bed if it's late. Otherwise, have a good afternoon and we appreciate you and we'll see you next time. Thanks, everybody.