Generative AI & Assessment

This webinar featuring international AI consultant and author Leon Furze explored how recent advances in GenAI have exposed the vulnerabilities in existing assessment methods, and offered proactive ideas and practical strategies for addressing the technology in classrooms.

Since the release of ChatGPT in 2022, Generative Artificial Intelligence has developed rapidly, including in mathematical reasoning, image recognition and generation, and other domains. This makes online assessment problematic in many contexts.

In 2023, Furze first introduced the AI Assessment Scale (AIAS) to help educators and students work together in understanding appropriate use beyond simple "use it or don't use it" policies. It has since been adopted by schools worldwide and featured by UNESCO. The session included a discussion of the AIAS and examples for different disciplines. 

Key Takeaways:

  • The AI Assessment Scale is a design tool, not a security measure - You can't just tell students what AI level to use; you must design assessments where AI use (or non-use) naturally serves the learning objectives

  • Start with learning goals, not AI policies - Always begin by asking "What knowledge and skills am I trying to assess?" before determining where AI fits in the assessment process

  • Faculty-level implementation works better than institutional mandates - AI means different things to math teachers vs. English teachers; discipline-specific approaches are more effective than one-size-fits-all policies

  • Students need foundational AI literacy before using the scale effectively - This includes understanding AI limitations, verification skills, ethics, and bias awareness - not just how to write prompts

  • Educator expertise is essential for quality AI integration - Without domain knowledge and understanding of AI capabilities, it's difficult to design meaningful AI-enhanced assessments or catch AI errors

  • Transparency and trust between students and educators enables better outcomes - When students can honestly communicate about their AI use, it leads to improved assessment design and learning opportunities

  • Leon Furze

    Leon Furze is a consultant, bestselling author, and PhD candidate with over fifteen years' experience in secondary and tertiary education. His PhD is focused on the implications of Generative Artificial Intelligence for teachers of writing.

    Corey Layne Crouch

    Corey is the Chief Program Officer and a former high school English teacher, school principal, and edtech executive. She has over 20 years of experience leading classrooms, schools, and district teams to transformative change focused on equity and access for all students. As a founding public charter school leader, she ensured that 100% of seniors were accepted to a four-year college. Her focus now lies in assessing the broader K-16 edtech ecosystem, uniting stakeholders at all levels to build a more equitable and abundant future for all. She holds an MBA from Rice University and a BA from Rowan University.

  • 00:02
    Corey Layne Crouch
    My name is Corey Crouch and I am the Chief Program Officer here at AI for Education. And I'm so excited to have my friend Leon Furze here to chat with us today about assessment. And we know that this webinar and today's session has been a long time coming. Some of you that are joining and saying hello in the chat, I know, registered. Gosh, when was it, Leon? March, April, I don't know, 2023. As fate would have it, we've, you know, had to reschedule for a variety of reasons. So we are so excited to be here this evening and actually to have this conversation. There's so much great content and things for us to chat about, and we look forward to hearing all of your thoughts in the chat as well. So really quickly, let me just set the scene.


    01:04

    Corey Layne Crouch
    If you have been here at our webinars before, this is not new to you and I know some of you are already engaging in the chat, but we want you to get involved and talk with one another and react to what we're saying. As you can see, we do expect a good number of participants and the chat can get busy, which we encourage. But also, if you have a specific question for Leon and myself, please use the Q and A function so that we can see your question and do our best if we have time to get to those.


    01:43

    Corey Layne Crouch
    Also, we know that this topic, assessment and assessment practices and, you know, AI misuse-resistant assessment and AI-enhanced and empowered assessment, all those things that we're going to talk about, is a topic that you all are curious about and I'm sure diving into, and have tried some strategies around as well. So please share resources in the chat as we go too. All right, well, let's go ahead and dive in. Leon, please share more about yourself and something that we all in the space have been using and referencing for what, at least two years now. You tell us how long it's been out in the space, but I know it's inspired a lot of our work: the AI Assessment Scale.


    02:36

    Leon Furze
    Yeah, yeah, it's been a bit of a journey to get to this point. And I'll just say hi, I've got the chat here and I can see people from near me in Australia, I see a few names that I recognize, but there are people from Prague just popped into the chat, and Spain, which is where my dad lives actually, he's been there for 25, 30 years, the UK, America. So hi everyone. And for me, the AI Assessment Scale has been a really interesting part of my work since 2023 or thereabouts. And it started with a conversation with some universities here in Australia around needing more than just a binary kind of use it or don't use it approach to AI. And back in the day, I know that was the same in the States.


    03:22

    Leon Furze
    In New York, for example, there were a lot of jurisdictions trying to ban ChatGPT as a sort of knee-jerk reaction to this technology coming out of nowhere and kind of surprising everyone. There were a few others around who'd sort of seen it coming. So my PhD started actually back in July 2022, before the release of ChatGPT, which gave me about a six-month runway. I was reading about AI ethics, reading about the implications of all of this stuff, and then ChatGPT came out and really blew everything up. So I was on the front foot. I finished my school role as a Director of Teaching and Learning at the end of 2022, and it sort of overlapped with the beginning of my PhD.


    04:07

    Leon Furze
    And when ChatGPT came out and the Atlantic started publishing articles like "The End of High School English" and "The Death of the Essay" and all of these things, I thought, well, hold on, I mean, I'm an English teacher, I don't feel like this is going to completely kill my subject area, but there's got to be another way to talk about this stuff. So I published a blog post about a five-point scale, from no AI to full AI originally. And then Mike Perkins and Jason MacVaugh at British University Vietnam and Jasper Roe at James Cook University Singapore, he's now at Durham University in the UK, picked it up. They emailed me and said, can we adapt this for higher ed?


    04:46

    Leon Furze
    We produced version one, which was the original publication in the Journal of University Teaching and Learning Practice, the traffic-light-colored one, which I know has been adapted all over the world by now. Then we released version two after about a 12-month period of getting feedback, and we did a pilot study with a whole bunch of students, which we've published, and have been working with the community. That's the version that you're seeing on the slide here. I'll quickly run through that in a moment. But Corey, we might jump in and ask any questions before we launch into that.


    05:27

    Corey Layne Crouch
    Yeah, I mean, as you're talking through it, Leon, my background is that I was an English teacher as well. And so similarly, the, you know, the death of high school English just really was something that spoke straight to my heart too. But as you're going through this, tell us also what components come from feedback that you've heard, of course, as you walk through it. But you're sharing with us this example of really, you know, community and practitioner driven frameworks and understanding of what is truly happening in the classroom and what is the impact of the technology from the early days. Right. Sounds like you really dove into this immediately. I'd also just love to hear the kind of input and feedback that influenced the evolution of the scale.


    06:28

    Leon Furze
    Yeah, I mean, so we've heard from probably thousands of educators around the world now about different adaptations of the scale and how they've used it. And we've seen it used in K12, vocational education, adult education, higher ed. And what's happened along the way is that we've also seen, obviously, the technology changing really rapidly, and then policies, government, state and federal level policies, changing around AI. So we've tried to adapt a lot of the features around that. I mean, the first change that we made obviously was the movement from the traffic light colors to this kind of nice pastel color arrangement. And that was to step away from the idea of kind of stop-or-go approaches to AI, because realistically we don't think there is a way to stop AI outside of a controlled environment.


    07:23

    Leon Furze
    When we say controlled, we're talking maybe like a tech-free environment, not necessarily an exam, because they're not the best mode of assessment for many reasons. But, you know, tech-free discussion, debates, performance, practical work, all of that stuff that we've been doing for many years before AI sits at level one. And then the other big change that we made was we dropped full AI down to level four and added in this AI exploration stage. And that was to acknowledge that a lot of educators were finding that students were coming to them with really good ideas. And so we wanted to have a space in there for kind of co-design. You know, a great example I had of this was at an Australian university where an accounting and finance lecturer contacted me.


    08:08

    Leon Furze
    They put all of their work for the semester onto the learning management system ahead of time, as they do. And one student contacted them and said, yeah, thanks for uploading all of the work, I've done it all over the weekend and, you know, I absolutely smashed it with AI, and I'm just a bit concerned that I'm going to be sitting on my hands for the rest of the semester. And the lecturer said, well, thanks for being honest, first of all, many students wouldn't bother. What exactly did you do? And the student showed them: you know, they'd used a bit of Copilot in Excel for some spreadsheet stuff, they'd used ChatGPT and Claude for a bunch of stuff. And the lecturer said, actually, the way that you've done this is really interesting. Could you share that with the rest of the group?


    08:49

    Leon Furze
    And I might amend some of the assessments along the way. So that was just a great example of an educator being really responsive and a student being really honest about how they were using it, and that honesty and that transparency allowed for an improved assessment overall. So we've shifted some of our language around how the scale is used. We talk about it now more as an assessment design tool. It's not an assessment security tool. You can't just wave this at a student and hope that they're going to do the right thing. You actually have to design your assessments so that it doesn't really matter if a student uses AI, or so that using the AI is actually advantageous, because we know that you can't say to a student, oh, you're going to do an essay, please just use AI for the planning stage.


    09:42

    Leon Furze
    But then stop, and cross your fingers and hope that they do that. You know, students, they're not all horrible, nefarious cheats, but many of them will do the most expedient, most efficient thing. And if I as an English teacher set an essay and say, go away for a week, complete this essay, submit it via Turnitin, 80, 85% of those students are probably going to use ChatGPT at this point. I've made it too easy for them to use ChatGPT. So yeah, I mean, those conversations about transparency and trust, and, you know, I can see people commenting in the chat about the relationships between students and educators, all of that is really the most important piece here.


    10:23

    Corey Layne Crouch
    Yeah, I love that. And I also want to say, as an educator, you know, a former middle school and high school leader myself, I know that some of these things are easier said than done, but the relationship is so key, which is why anytime we hear this wondering, is AI going to replace teachers? No, because of the trust, the honesty, the relationship, the knowing, and the relevance of the learning. Sure, young people are going to cheat, and we have studies that show they've been cheating way before ChatGPT existed, as we know. And in fact, the overall percentage of high schoolers that self-report cheating behavior didn't change drastically after the release of ChatGPT. And maybe my colleague Dan can find that study and share it in the chat. We reference it often, out of Stanford.


    11:25

    Corey Layne Crouch
    But the relevance piece, the exploration that you added and this idea of co-designing new approaches, one, to me, and I haven't been a high schooler or, you know, a college student for a long time, I'll say that, but to me that feels more exciting and more engaging because it gives you choice and direction. And the other thing about that that I often think about, because this is work that we're doing in this current moment in time, right, but there is also the reality of how things evolve, and we have a responsibility as educators to set our students up for the future, for success in the future, as much as possible.


    12:14

    Corey Layne Crouch
    And my theory is that the more students have the space to be designers, to think in an entrepreneurial way, not just to start a business or anything, right, but to say, here is the thing I'm trying to solve for, here is the totality of resources that I have available to me, what makes the most sense for how I can solve for that thing. That's just a general skill set, which is what I see in this AI exploration, this creativity and openness, like, let's see what you can come up with. It really aligns with that type of more durable, future-facing skill.


    12:57

    Leon Furze
    Yeah, absolutely. And I think what we've got to do is come back to the core business, which is, what are the knowledge and skills that we're trying to assess?


    13:09

    Corey Layne Crouch
    Yeah.


    13:09

    Leon Furze
    And how do I make a valid judgment that the student has that knowledge or those skills? And like once we keep that conversation really tightly focused on those questions, it doesn't really matter in some instances if they are or are not using artificial intelligence. I think we have to look for ways to design assessments and teaching and learning tasks where it really doesn't matter if a student is using artificial intelligence. If I think about a level two.


    13:39

    Corey Layne Crouch
    There, brainstorming. Give us an example.


    13:44

    Leon Furze
    It's pretty common that I'll hear back from teachers, oh, I don't want students to outsource their creative thinking to AI, I don't want them to do brainstorming, you know, that's really good creative work. And that's true sometimes, okay, but it's not true all of the time. So if we take brainstorming as the example: I'm an English and media teacher primarily, and I might have students in my media class thinking about making short films or doing a bit of storyboarding or whatever. Now, I can spend an hour with a class of Year 9 students having them brainstorm, and at the end of that hour they will have produced a three-legged spider diagram with a couple of ideas that they've copied off the person sitting next to them. You know, brainstorming isn't a particularly effective or engaging activity all of the time.


    14:32

    Leon Furze
    And the point of that assessment really is I want to get to the end point where I'm showing them how to use cameras and how to actually produce film and how to do the work in front of the screen and behind the screen. So the brainstorming is a very kind of secondary or even tertiary activity. I don't mind if they use AI for some of that work. It's where we draw the line. We can't make blanket statements like, oh, well, I always want students to do the brainstorming for themselves because that's creative. Because sometimes, frankly, brainstorming is not creative. It's a function of an activity which then leads to something else. The same with planning and idea development. We can look for ways that students can use AI intelligently with their own ideas. I work a lot with vocational education.


    15:19

    Leon Furze
    I've got some agriculture studies students nearby because I live out on a farm in the middle of regional Australia. And a lot of those students now they've started to use AI and voice transcription tools and they'll have conversations in groups and they'll transcribe bits of that and then they'll use AI to refine that into their initial notes and planning. And so the AI is still not doing the work for them, but that part of the task can be assessed separately. And I can say, right, you know, level two, we can use AI in this planning part here. And then later on maybe we're going to do some draft work and we'll go, no AI for that.


    15:54

    Leon Furze
    So moving back and forth across the levels and using AI where it's appropriate, breaking down the tasks a little bit further so that we're really clearly articulating to the students: in this instance you can use AI, and in this one you can't. And if you can't, it's because it's not a good use of the technology or it's not a good use of your brain.


    16:17

    Corey Layne Crouch
    Agree. And some of the conversation in the chat or comments is around the concern about cognitive offloading and over-reliance, which is a real risk and concern. And what you're articulating, Leon, you know, to me is actually clarity on the instructor's, the educator's, side and their role: having clarity about what is the learning goal, the objective, the standard, right, and what is the evidence that students are making progress on that particular standard. Right. And so we can start to separate where they are doing the most important cognitive, creative, evaluative work, etc., and where they can use their tools, AI included, to, you know, further themselves along that way or get to that heavier cognitive lift more quickly. And so there's the teacher clarity, or the instructor clarity, of that learning goal.


    17:26

    Corey Layne Crouch
    But two, it also makes me think of, and it goes back some to this, you know, the relevance and the relationship and thinking about what students are going to need. And again, as a former high school educator and principal and all of those fun things, I know, easier said than done. But I also think that underpinning this is that we need a renewed focus, or a new focus, on student metacognition, and on students even being really clear about what they need, like what needs to be driven by their own original thought and their creativity, and what it feels like to know that they're making that cognitive development and getting better at something, versus just, you know, handing it off or over-relying on AI or a chatbot to do it.


    18:27

    Corey Layne Crouch
    And, I mean, I think even as adults, some folks that I've talked to have kind of caught themselves being a little over-reliant or a little, excuse the word, cognitively lazy, because it's so easy. So I do think, to your point, it's human nature in some ways, but also, if there is this self-commitment to cognitive development and awareness of what you need to be able to do on your own, I believe students want to use it appropriately, and we can set them up to do that by being even more clear with them, like you're saying. Yeah, yeah.


    19:16

    Leon Furze
    I mean, that clarity for students is what they're looking for as well. They keep telling us that. We've done research now in K12 and in universities; there's been three years' worth of research trying to hear from students what they think of AI. And the thing that they keep telling us is, we want the educators to help us understand how to use it, because students aren't experts in their domains.


    19:39

    Corey Layne Crouch
    And we can bring the scale back up if you want to reference it after dropping the scale. Yeah, we're mostly chatting, and I think we threw the scale in the chat so folks could have it up.


    19:50

    Leon Furze
    Yeah, I'll just answer, I can see there's a question from Holly in the Q and A which relates directly to the scale, so I'll just answer that before we bounce onto something more broad. Holly's question is around, are we using it to guide teacher thinking of what the teacher wants to see, or are we using it to tell the students what they should be doing? And there are the little practical, yeah, well, there are the little bold statements on the scale as well, which is language directly to students. And this is an interesting point, because obviously the scale has four authors, and Mike Perkins, Jasper Roe, and Jason MacVaugh work in different university contexts. So Mike tends to be the lead author on our academic publications, and he speaks a lot about the scale in higher education.


    20:35

    Leon Furze
    He really likes those student statements because he's able to give those to his students at British University Vietnam, for example, and say, right, here's a really clear, distinct message to you about how I want to see you using AI and not. And in my practice, I tend not to use them, because I use the scale more with teachers than with students. I use it more in my workshops with faculties and things. So one thing we've been really conscious of is having options in there for how people use the scale.


    21:08

    Leon Furze
    We don't want to say, this is a set-in-stone policy document that you need to adhere to and we want you to put it into institutional policy, because the reason we released everything under that really flexible CC BY-NC-SA license is so that people can play around with it and change it and, you know, translate it and mix up the levels and rebrand it and all of that work. So my answer to Holly would be just to be flexible, and if you see it serving teachers more than students or vice versa, use it in the way which suits that community, that cohort, better. Yes, that's a great question.


    21:49

    Corey Layne Crouch
    Yeah. Yeah. Well, I wonder if you could give us an example, or talk us through, when you are working with teachers on this scale, supporting them and starting to think about assessment design in this way. Say it's a science teacher, or pick whatever subject or content area, what's some of the guidance that you give them to help make it practical for themselves, and how do you move them along the assessment skill journey?


    22:25

    Leon Furze
    Yeah. I've got a bit of a process now, having done this for a little while. I was a Head of English and then I was a Director of Teaching and Learning, which is kind of like an AP curriculum role. So my job for over 15 years has been curriculum design and assessment. And I think my approach is always start with heads of faculty, start with, you know, curriculum leaders, and build up the understanding of what the technology can do within that discipline.


    22:58

    Leon Furze
    So I've published something recently around three dimensions of expertise, and I've spoken about the need for domain expertise, subject knowledge; technical expertise, in AI in this case; and then a situated expertise, which I think comes from, you know, time on the job, time spent with colleagues, time adapting content to different students, and all of the interpersonal and situated kinds of experiences that teachers develop over time. When you combine those three things, you can make really great stuff happen with the technology, and you also know where it doesn't work. Okay? So, you know, you don't fall into the trap of thinking, okay, I'm just going to throw AI at the problem and it'll be solved.


    23:45

    Corey Layne Crouch
    You think?


    23:45

    Leon Furze
    Well, based on my X number of years' experience as a literature teacher, I know that this class here would be best served well away from AI, whereas for this one here it might be useful. Right, so those. And Maureen in the chat's just said, can I repeat them?


    24:04

    Corey Layne Crouch
    Yeah.


    24:04

    Leon Furze
    So I'm just going to chuck a link to the blog in, because there was a blog post about it a few weeks ago. But domain, technical, and situated expertise is what I've called them. And then on a practical level, what I do with that with faculties is an approach that I call attack your assessments, which sounds pretty blunt, but we start with all the assessments on the table from, you know, a selection of year levels or whatever. And we say, okay, so first of all, is this a valid assessment? Is it valid? Are we assessing what we want to assess?


    24:42

    Corey Layne Crouch
    That's such a great question, because it does force... it's hard to interrupt you, but I will say, part of this is, let's really be honest about the quality of assessment to date too. And that's not to say that everybody's assessment practices have been irrelevant and invalid, but that question, is this still a valid assessment, is it authentic, is one we as educators should probably be asking ourselves more than we do.


    25:13

    Leon Furze
    Absolutely. I mean, over here in Australia, and I don't know if it's the same in the States, but in senior school particularly, a lot of our assessments just tend to be miniaturized exams. So if I take my state here in Victoria, we have a senior certificate called the VCE, and many of our VCE classes have coursework. But the way the coursework is assessed is basically through a replica of the end-of-year exam, like a little tiny in-class exam where the students do something which is very much a duplicate of content that they'll see in the exam. And we all know, we know it's teaching to the exam, you know, we know what game we're playing here, but it doesn't have to be that.


    25:53

    Leon Furze
    And I would argue that's not a very valid form of assessment, because there's probably, you know, a decent chunk of our students who can't do their best work under those conditions. So we have to ask, before we even start talking about AI, you know, are we assessing what we want to assess, and are we using the best mode of assessment possible? You know, business studies is a great example here. You see so many business studies classes where they're producing like a market research report, but the way that they're doing it is under exam conditions, with no access to the Internet. What business in the real world would be doing market research with pen and paper in a two-hour time limit?


    26:32

    Corey Layne Crouch
    You probably wouldn't have the job for very long if you were doing it like that, right? Like, if you worked in marketing and business, you wouldn't keep your job if you did it like that.


    26:45

    Leon Furze
    And once you get those conversations out on the table, then you can introduce AI. Okay. And you have to bring up the faculty's understanding of what AI can do, because there's still a lot of misconception. I find people saying, I used ChatGPT when it first came out and it was rubbish, and I never used it again. And we have to acknowledge that it's moved along a lot since then, and that, you know, if we look at something like o3, for example, or Gemini Pro, they are really good at mathematics, certainly up to senior high school level. You know, 18 months ago, pretty hopeless, you can't do mathematics with next-word prediction. They don't do that anymore. Okay. And they might use a bit of code under the hood.


    27:30

    Leon Furze
    They might write a little script in Python and crunch the numbers if it's particularly hard. But the mathematical reasoning has improved such that if I throw in a Specialist Mathematics exam, which is our highest level in Year 12, for 18-year-old students, that kind of exam can probably be 80 to 95% completed in a couple of seconds by a decent AI model. So people have to understand what the technology can do, and, as Desi said, improved but still not 100% accurate. Yes, we acknowledge that it's not getting everything right. OpenAI released its study mode in the last 48 hours.


    28:11

    Corey Layne Crouch
    We have a question.


    28:12

    Leon Furze
    Yeah, jump into that question. Throw that question at me, Corey, because.


    28:16

    Corey Layne Crouch
    Oh, well, it's actually Jorge here. Thank you. Jorge is asking, you know, what do you think about it? What do you think about the new study mode?


    28:25

    Leon Furze
    I wrote an article yesterday when it first came out. In Australia, I was just on my way in to run a PD webinar, and just prior to that I saw that study mode had been released. So I quickly jumped in and tested it out and wrote an article about it over a coffee, and I wasn't impressed, frankly. There's a system prompt, a set of instructions in there that says, you know, don't answer the question directly, break it down step by step, use Socratic questioning. These things are educational buzzwords. There's nothing inspiring happening there. And why would a student use it? Like, I would just turn it off, right? I wouldn't even bother turning it on. Dan's just posted a link to that article in the chat.


    29:12

    Corey Layne Crouch
    There you go. Yeah, we got it. Go ahead.


    29:17

    Leon Furze
    The interesting thing for me was that I posted a little video of it doing a maths problem, and I'm not a maths teacher. My observation was, I don't think this explanation is very helpful, because I still don't understand this problem. And I also noticed that o3's response was much better than 4o's response, so the paid version was much better than the free version. And when I shared it online, straight away two maths teachers, Mike Abicena and Shanti, over here on LinkedIn, commented and pointed out all of the errors: the answer was wrong, the methodology was wrong, the step-by-step process was wrong. It was terrible.


    29:56

    Corey Layne Crouch
    I mean, that just makes me nervous about how it's marketed towards students, clearly, and what that potentially means for, you know, how they are practicing or not adequately practicing. And yeah, I see some of the comments about why they may or may not have done that. I will plug for us, there we go, as we threw it in the chat, that even before the study mode we did some work with Student Achievement Partners, and in fact we have a webinar that we did on that too, about a guide to integrating generative AI into deeper math learning, really getting into thinking about what are the appropriate tools and approaches. And very foundational in that work is that you can't do it like that with ChatGPT. It's not effective for a variety of reasons.


    30:54

    Corey Layne Crouch
    The inaccuracies, and it doesn't build strong conceptual knowledge if you're already lost as a student, and we really want students to learn math procedurally and conceptually, and you're just trying to get to the answer anyway. I also was not a math teacher, but similarly, from directing instruction, I get why that might be the case. And I'll plug our webinar.


    31:19

    Corey Layne Crouch
    I don't know, Leon, if you had a chance to see it, but we had the opportunity to have Kristen DiCerbo, the Chief Academic Officer from Khan Academy, with us in our last webinar, and she really pulled back the curtains for us and talked about developing Khanmigo, and how they really need to give the step-by-step and all of the background on practice problems in order for it to work effectively, and that their tool, with all of the background tuning and designing that they're doing, is still not as effective as they want it to be. Ultimately, I'm definitely paraphrasing her words, but it's not as effective as they want it to be with problems that aren't ones that they had, you know, previously put into their practice set, because the tool doesn't have that background scripted step-by-step answer.


    32:19

    Corey Layne Crouch
    Anyway, all that to say, what you're seeing and what the math teachers are seeing with this new feature that OpenAI released just reiterates that the tool has not been drastically improved for that kind of learning.


    32:37

    Leon Furze
    Yeah, and I mean, there's a whole host of tools, applications, products out there now which claim to do X, Y, and Z for teachers and for students. And I think really this study mode and the partnership with Canvas LMS, both of those things just indicate to me that OpenAI has recognized that there are a lot of smaller third-party people building exactly those products through the OpenAI API. They're using the GPT models to build these products. And now OpenAI has said, oh, we might as well do this ourselves. We recognize that students are our biggest user base. Students notoriously don't really pay much money for stuff. So let's just kind of consolidate all of this and maybe push some of those third-party applications out of the window now. So yeah, it is what it is.


    33:30

    Leon Furze
    Like, I'm pretty cynical, and I recognize a lot of names of people in the chat, so I'm sure people have read my blog posts and things. You know, I'm not super enthusiastic all of the time about the tech companies and what they put out there, but my main interest is, you know, how can teachers actually use this stuff within the boundaries of their own expertise?


    33:54

    Corey Layne Crouch
    Yeah.


    33:55

    Leon Furze
    And how can we do that in a way which values that professionalism and the expertise of the educators in the room? And I think a lot of these tech companies, they don't really understand that.


    34:07

    Corey Layne Crouch
    Yes, yes, I agree. And for anybody that's been in our workshops or webinars before, we say this as often as we can, at least I know I do when I'm facilitating: as educators and leaders and teachers, we have got to lean into our expertise, because, to your dimensions of expertise, we know what good instruction looks like. We know what developing understanding of our content should look like for our students. We know our students and their context. And without that expertise, I feel strongly you cannot get high-quality instructional materials or support for students out of any of these tools.


    34:59

    Corey Layne Crouch
    And I know sometimes we don't make friends when we say that, but even with the tools that are designed for the classroom and, you know, specifically empowered by the APIs, I still feel strongly that if you aren't leaning on your own expertise, or continuing to be coached to build it if you're new to the field as a teacher, which, welcome if you are new to the field, these tools don't nudge you toward understanding better practice unless you're intentionally looking for that.


    35:32

    Leon Furze
    Yeah, and that's exactly that kind of expertise paradox. I've had a few conversations with Punya Mishra, the TPACK guy, and, you know, I share a lot of ideas around the fact that if you don't have the expertise in whatever it is, then it's really hard for AI to help you get there, because you don't know what's wrong, you don't know what's right. I saw a lot of people commenting in the chat about hallucinations, even in supposedly less hallucination-prone products like Google NotebookLM. These models are very sophisticated predictive text. They don't have any ground truth, they don't have any accessible data set beyond the ability to connect to the Internet. And so as they're predicting text, a certain amount of that is going to be totally fabricated. And we can't have that in education.


    36:20

    Leon Furze
    That's not a good model for learning. It's right most of the time, except for when it isn't right. So, you know, that's a huge problem. I'm sort of scrolling back up through the chat because I noticed a comment from Calvin as well around the assessment scale, and Calvin said, you know, there's a problem that if we take a top-down approach and say, as an institution, everybody has to use AI this way, then it misses out on the fact that an LLM to a maths teacher is very different from an LLM to an English teacher. And I would say, Calvin, that's exactly the point that I've made in that expertise article, that we have to start at a disciplinary level.


    37:02

    Leon Furze
    We've got to say, okay, so to a literature teacher an LLM means this, but to a math teacher it might work this way, and to a science teacher it might work this way. And when we take that kind of more bottom-up approach, faculty by faculty, and lean into the expertise of the teachers, that's much more effective than trying for a system-wide or even an institution-level kind of policy. Because as we know, and you and I have both been in educational leadership positions, if you try a one-size-fits-all, institutional or statewide or federal-level policy in education, there are so many places where it will fall apart.


    37:42

    Corey Layne Crouch
    Yes.


    37:42

    Leon Furze
    It really just becomes a waste of everyone's time. So I say spend the time up front working with AI and assessments at a disciplinary or a faculty level and that pays off huge in the long run.


    37:57

    Corey Layne Crouch
    Right, right. And to the topic that we were talking about just before, when you do it in a way where it is community driven like that, you're validating that educator expertise and that their agency in the work they're doing with their students matters. And we actually have a question, and it's one of my themes today, I guess, I recognize it's easier said than done because system change is hard, but we have a question around advice on encouraging faculty to give AI a chance. You know, part of that is creating the safe space and giving them agency to continue to opt in and figure out where it is going to work for them and in their content.


    38:51

    Leon Furze
    Yeah, I mean, I think that there are a lot of good reasons that some educators would resist AI. And again, you know, I wrote an article about this on Monday, sort of picking up a theme here: every time I pick up on a question I say, oh well, I wrote an article about this on whenever.


    39:07

    Corey Layne Crouch
    You have all of this time to.


    39:09

    Leon Furze
    The keyboard, hammering out articles. But I do think that there are genuine reasons why some people might resist the technology. And this has been misinterpreted in some cases as, like, a call for a ban, which it absolutely isn't, because, you know, obviously I use the technology a lot myself. But you can imagine, you know, a visual arts teacher resistant to the way that generative AI image generation has been produced, because of the copyright concerns, the intellectual property concerns. Now, if that teacher is resistant to using GenAI themselves, that's a perfectly understandable stance on the technology. Ditto, you know, the literature teacher who's also an author, who is offended by the way that Meta and co have constructed their LLMs.


    39:55

    Leon Furze
    And beyond copyright and the obvious, there are, you know, the environmental concerns, the biases, the centralization of power, all of those really complex ethical concerns. So there are good reasons to resist. But what I would say is, if a faculty member or a teacher is resisting because they don't like it, or because they're scared of it, or it's based on third-party information about what they think it is, right, that's an opportunity to work with them and educate them about the technology. So I think resistance is tricky territory, and we don't want to see institution-wide bans and things like that as a knee-jerk reaction to academic integrity, because that doesn't work. But within institutions, I think it's perfectly acceptable to have a few staff who are resisting their own personal use of the technology.


    40:44

    Leon Furze
    And I mean, I don't really hold with this idea that every teacher has to teach students AI literacy now, because, I mean, I'm a literature teacher, I could teach 90% of my course without a computer, let alone without AI. Like, you know, we could have a pile of books on the table, discuss those texts, and have a really rich series of lessons. But we have to negotiate where it is appropriate to teach students to use the technology as well.


    41:10

    Corey Layne Crouch
    Yeah, and we cannot finish this conversation, Leon, without touching on that point. We know it's not an AI for Education webinar if we don't talk about the importance of the foundation of AI literacy, both for teachers and for students. And when we talk about that AI Assessment Scale and we're exploring those different levels where AI is involved, part of the underpinning, I don't want to say assumption, is that students do have a foundational understanding of what generative AI is and of safe, ethical, and effective use. Now, of course, there is some initial introduction to that, but what we've been spending a lot of time on is really talking to educators and collaborating with them about what it looks like to lay that foundation of AI literacy.


    42:08

    Corey Layne Crouch
    And then you're using things like the assessment scale and, you know, AI-integrated student learning experiences to continuously spiral in AI literacy. So the conversation around hallucinations: well, one, you need to know how to verify information. Say you're preparing for a debate or something like that, and you're using AI tools to prepare for that. You're both using critical analysis and really thinking about the sources that you're going to use for your evidence for whatever you're arguing, with this understanding that AI hallucinates, and even with things like Perplexity and, you know, generative search tools that cite sources, you've got to click in and make sure those sources are, one, real, and two, that they're citing them appropriately, because they will mess up data points as well. Anyway, all of that to say.


    43:09

    Corey Layne Crouch
    I'm sure if you're here today and this is not your first AI for Education webinar, you have heard us really talk about the importance of AI literacy for students and for educators before. But it's important to remember that you can't do this AI assessment scale, or any of this, you know, AI-integrated learning design, without that component too. And to your excellent point, there are very valid reasons for resistance, but a lot of it is also fear of the unknown. A lot of what we really work to do with educators is to demystify what is actually going on here and what are the edges of the technology, what it is like to start to plan or collaborate with AI, and then start making some decisions based on that.


    44:07

    Leon Furze
    Yeah, absolutely. And I just think, you know, a good point to end on is that AI literacy isn't just how to prompt ChatGPT. And, you know, we're seeing now some schools and universities and even entire regions, jurisdictions, publishing AI literacy programs that are very focused on just technical, I don't know, jumping through hoops. If we separate out the ethical concerns, and we separate out the biases and the media literacy or the critical literacy parts, it's not really AI literacy. It's just teaching kids how to use ChatGPT. It's teaching them how to be better consumers of products. And we don't want that. That's not the job of schools. Kids can learn how to use ChatGPT on TikTok. You know, that's the primary way that they are learning to use AI.


    44:55

    Leon Furze
    So I think we can do a better job than TikTok as educators, and we can educate around the whole system of AI.


    45:04

    Corey Layne Crouch
    Yeah, yeah. What a point to cap on: we can do a better job than TikTok. But, you know, to your point, the AI literacy component is so much more than the quick, I don't even know what the time limit on TikTok videos is, I'm not on there, but I know it's short. This is much deeper. And the AI-empowered, AI-integrated, and AI-exploration aspects of the assessment scale really go hand in hand with that deeper understanding of AI safety and ethics, and using it effectively in a way that enhances rather than replaces or undermines cognitive development and student learning. Yes. So great. All right, well, let's see. I know we really are just...


    46:14

    Corey Layne Crouch
    Are there any other big questions, Leon, that you saw here that we didn't quite get to?


    46:23

    Leon Furze
    There are a couple of questions in the chat just around how we use the assessments and how I use the scale in some of those workshops and things like that. A lot of those processes are in various places on the blog; I've kind of written down exactly what I do in those workshops, and I'll drop a link into the chat now. This is a subscription, like a mailing list thing, so what I normally tell people is grab the free stuff and then leave if you want to. I mean, it's great if you stick around. But on that link that I've just shared there's an ebook that I made of all of the pieces of writing that I've done on the blog about assessment. It's about a 200-page ebook on everything.


    47:04

    Leon Furze
    But within that there's a step-by-step thing about that attack your assessments process. Literally, like, do this, then do this. So if you're a faculty leader and you're looking for a little workshop that you can run yourselves to review assessments, all of that work is in there. And just great conversations, I think, in the chat, you know, way too many conversations to keep on top of.


    47:27

    Corey Layne Crouch
    I know, it's so much.


    47:30

    Leon Furze
    If there's anything we've missed, I'm always happy for people to jump onto LinkedIn where I spend way too much time and DM me or send me a message through the website.


    47:41

    Corey Layne Crouch
    So, yeah, hang on. Actually, can you share it? It looks like... sorry, I've shared it.


    47:47

    Leon Furze
    To the host. I've shared it to Corey and not to anyone else.


    47:50

    Corey Layne Crouch
    Yep, there we go. Share it to everyone. Yeah, and I too am seeing, you know, a few questions about differentiation, like using GenAI for differentiation, and for, you know, how to further avoid plagiarism and academic integrity concerns. So I'll say to everyone, number one, definitely go to Leon's resources. Leon, thank you so much for joining us this evening. Just incredible material, and we're so thankful to be in collaboration on this topic and conversation with you. And then also, if you're here this evening or this morning or this afternoon, you will receive a recording of this webinar as well as all of the resources, and it'll send you to our site, so check out the other webinars that we've done.


    48:41

    Corey Layne Crouch
    The differentiation one, and also the webinar about math instruction and our conversation with Kristen from Khan Academy, are all there too. So thank you everyone. Thank you, Leon. Have a great rest of your day, your evening, wherever you are in the world, and we will see you all at our next session.
