Surveys show that, outside of in-class instruction, feedback and planning are the most time-intensive tasks in a teacher's work week. Many educators spend at least an hour per day giving feedback.

Learning science tells us that timely feedback is key. Yet with growing class sizes and limited teacher time, it can take days for a student to receive feedback on an assignment. Feedback quality suffers as well, since it is hard to give every student the same level of feedback on every assignment.

How can AI help solve these problems?

  • AI can quickly generate formative feedback to help teachers be their best.

  • With prompting, teachers can tailor AI feedback to rubrics, growth areas, and next steps. It can pose questions that stimulate higher-order thinking.

  • Equity: more immediate, abundant feedback ensures all students have opportunities for growth.

  • Teacher-in-the-loop AI feedback allows tailored, high-quality feedback based on learning goals and students' needs.

  • But AI should only provide formative feedback, not scores or summative evaluation.
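The teacher-in-the-loop approach above can be made concrete as a prompt that front-loads the teacher's context (rubric, grade level, learning goal) before the student work, and explicitly forbids scoring. A minimal Python sketch; the helper name, rubric text, and sample essay are hypothetical illustrations, not part of any specific tool:

```python
def build_feedback_prompt(essay, rubric, grade_level, learning_goal):
    """Assemble a formative-feedback prompt that front-loads teacher context.

    The model is asked for formative comments only -- no scores or grades --
    mirroring the teacher-in-the-loop guidance above.
    """
    return (
        f"You are helping a teacher give formative feedback to a grade "
        f"{grade_level} student.\n"
        f"Learning goal: {learning_goal}\n"
        f"Rubric:\n{rubric}\n\n"
        "Give 2-3 specific, encouraging comments tied to the rubric, and one "
        "'How might you...' question to prompt higher-order thinking. "
        "Do not assign a score or grade.\n\n"
        f"Student essay:\n{essay}"
    )

# Hypothetical usage with placeholder content:
prompt = build_feedback_prompt(
    essay="The Great Gatsby shows that wealth cannot buy happiness...",
    rubric="- Thesis clarity\n- Use of textual evidence\n- Organization",
    grade_level=9,
    learning_goal="Support a claim with textual evidence",
)
```

The point of the template is ordering: context first, task second, constraints ("no score") stated explicitly rather than assumed.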

In this hands-on webinar, we discussed these strengths and weaknesses, explored practical strategies for using AI to help with feedback, and walked through live prompts and examples to show how tools like ChatGPT and Brisk can support the process.

Presented as part of our AI Launchpad: Webinar Series for Educators.

Formative Feedback with AI: Opportunities & Risks

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Alex Muscat

    Alex is the Head of Data and AI at Brisk Teaching. He got his start in EdTech on the teacher support team for the Summit Learning Program in 2016, serving 5,000+ teachers and 80,000+ students. Since then, he's maintained his passion for helping teachers learn how to effectively use technology in the classroom. A literature geek at heart, he saw the advent of Large Language Models as a perfect opportunity to combine his expertise in language, data, and education.

  • Amanda Bickerstaff: Hi, everyone. I'm Amanda, the CEO and co-founder of AI for Education. You're here as part of our AI Launchpad webinar series that we started back in June. I can't believe it's now fall weather, but here we are at our Wednesday night webinar. Today we're talking about formative feedback with AI. We know from the ChatGPT for Teachers Facebook group and the educators we talk to every day that there's a really strong desire to use AI for feedback. But we wanna situate our conversation, as always, in a balanced approach, so we're gonna look at both the opportunities and risks of using AI to support the feedback you're giving to students. I'm really excited to be joined today by Alex. Alex is the Head of Data at Brisk Teaching. He's an amazing technologist, and he's been building these tools for Brisk. So we're really gonna talk about what feedback has been, why it's important, and how we can make it better.

    Amanda Bickerstaff: To start, what I wanna say is: this is your time, not just ours. So whether it's the middle of the morning, the evening with us, or somewhere in the middle of the night (which has happened), we want you to get involved. I already see some people adding where you're from, so say hello: where you're from, even what you do. Esther from Taiwan, hello! I know it's definitely the morning there. Morocco! We see people from all over the world. Engage with your peers, because this is the opportunity to really connect and share best practices and resources, and I think it's a chance for you all to build your own networks. We also want you to prompt with us; we'll be doing some work a little bit later with ChatGPT, as well as looking at Brisk.

    Amanda Bickerstaff: So we want you to get hands-on. This is meant to be not just a conversation between Alex and me, but an opportunity for you to get hands on keyboard and understand what the possibilities are. I wanna say thank you for joining us again; we're so lucky to have you here. This is, I think, our fourteenth webinar, and we have a couple of really great ones coming up as well. We have "Writing a Generative AI PD at Your School." I actually get to do that a lot with both school districts and schools, so I'll be sharing our best practices, including some resources and some PD itself.

    Amanda Bickerstaff: We're also gonna do a prompt engineering for educators piece that's gonna be very hands-on. And for the final one, I'm gonna be joined by Harry Pickens, who is one of the moderators of the ChatGPT for Teachers Facebook group that has almost 400,000 people in it, including some people, I'm sure, that are here. We'll be talking about the top mistakes that educators make with AI. This is really based on a presentation I was doing just a couple of weeks ago, where I watched a teacher make three mistakes with ChatGPT in two minutes, because it's not that intuitive. So welcome, we're so excited to have you here, and now I get to hand it over to my wonderful guest, Alex. Alex, I always ask the same question: please take some time to introduce yourself to our wonderful group of people that are watching, and then tell me about your first experience with generative AI.

    Alex Muscat: Sure, yeah, thanks so much for having me, Amanda. My name's Alex. I'm the Head of Data at Brisk Teaching, which is a Chrome extension that uses generative AI to help teachers in a variety of their workflows. I've been working in education technology for about 7 years now, and a lot of that time has actually been spent building feedback tools; with Brisk, we're adding generative AI into those feedback tools. In terms of my first experience with generative AI: I was trying to look back through my ChatGPT chat history, and I'm pretty sure my very first one was when I was about to go on a trip with two of my friends, and I asked ChatGPT to write a funny poem about the trip we were gonna take to Vancouver, to get us excited for it. It was very cheesy, but especially when ChatGPT first launched, there were a lot of people using it for poems and all these creative tasks, which was a lot of fun. Of course, since then I've used it for a lot of other things, but that was my first experience.

    Amanda Bickerstaff: That's great, I always love it. Like we were saying, this is kind of the test of what you use it for, and a lot of people use it for their own things; a trip is a really common one. I use it to build rubrics, because I hate rubrics. So we all have our own things that we focus on. But I think it's really great to have you here, because you're a really great example of someone who was building formative feedback tools, and then there's been this explosion of possibility with generative AI, especially around this ability to generate new things from text, right? So I think it's gonna be really interesting to get your perspective on how that has grown. But I wanna start with: what do we know about why feedback is important in the first place? What do we know about what good feedback is?

    Alex Muscat: Yeah, of course. So the learning science tells us that the really big, important things with feedback are that it's timely and that it's targeted. By timely, I mean that after a student submits work (and not just students: if I give a proposal to someone I'm working with and they're able to give me feedback really quickly), it's much more likely that I'll be able to take in that feedback and actually put it toward a better next product. And targeted means being specific: giving me real, concrete ideas for how to improve and get better. So those are the first things that come to mind for me: timely and targeted.

    Amanda Bickerstaff: I'll be honest, I'll be the proxy for a lot of educators here: I don't know how timely and targeted I was, especially as a high school teacher when I sometimes had 125 to 150 lab reports to grade. What ended up happening is that I was timely, but I definitely wasn't targeted. So I would either do the kind of check-check-check, and if it was there, it was enough, or I would do the deep dive, and then it would take me a really long time. I think that's a natural thing we see happen over and over again, especially when we have more and more types of assessments to grade. And formative assessment is even more interesting, because that's something that's supposed to build toward a student being able to do a summative assessment, right? So it's even more important, because these are incremental progressions that students should be making. And I think it's so much more important now to focus on the smaller pieces, because summative assessments are so easy to complete with AI. When I talk to educators (and I'd love to get your perspective on this), we see that the five-paragraph essay, or the lab report I just talked about, or an argumentative essay, or even a portfolio project: it's a lot easier for students to use generative AI tools to do that formal work, and it's very hard to understand where they've actually come from and grown, and what skills they've learned.

    Alex Muscat: Mmhm, yeah. I think that's a really big question, and for the schools that have started partnering with us at Brisk, it's definitely on all of their minds: how do we make summative assessments more AI-resistant? How do we assess students' knowledge and the skills they're building? I also wanna touch on the piece you were talking about, of those formative pieces being so important. Another thing I've seen from my work on building feedback tools is that giving students more at-bats to improve their skills, and more of those feedback cycles, is really important. We've definitely talked to teachers who have been really grateful to have ways of speeding up the feedback process on those formative assessments, so that when the student actually gets to that final product, they have better material to share because they've gone through more iterations. So yeah, that formative piece is really critical.

    Amanda Bickerstaff: Yeah, I think it's really interesting, because what I see a lot when I'm talking to EdTech founders is that they're building these feedback tools for teachers as the major user: okay, a teacher is going to want to use this feedback tool. What's actually happening is, students are using the feedback tool. I've talked to Mote, talked to CoGrader and others, and it's really interesting to see how quickly students are adopting this to get maybe not perfect feedback (we'll talk about that in a moment), but timely feedback, so they can improve their writing or their output. So I think that's a really interesting point of view. Let's talk about the things you were building in the past. If you were creating a technology that was going to provide feedback, there were essay graders, right? I think the SAT, or standardized tests, had some automatic grading. What was that like, how has it changed, and what's the difference between that and generative AI now?

    Alex Muscat: Yeah, for sure. So with the tools I was working on, I was on a team that worked on something called the Summit Learning Program, and actually a couple of the people on the Brisk team were working together on that project as well. A lot of the tools we were building were focused on trying to help teachers give better feedback: things like being able to tag a comment with a specific rubric dimension, to help connect the feedback the teacher is giving to the standards-aligned rubric attached to the project, and also things like comment banks. What really excites me about the potential with generative AI is that it's much more adaptive and responsive compared to a comment bank, which is static and also takes a lot of time to populate at the beginning. I think generative AI can make that workflow like 10 times faster than ever before.

    Amanda Bickerstaff: Yeah. For those that may not know, a comment bank was this earlier opportunity: a lot of AI before was rule-based, and it still is. The idea was that you'd build a whole bunch of broadly applicable comments and tag them to different types of feedback: this is passive voice, this is active voice. Then, if something was seen as active or passive voice, the comment bank would say, here are some comments you can use. That's an example of an older-school look at AI or machine learning. But now, what you're talking about is that it can be significantly more customized, on the fly. The comment bank is still important, I think, for training the tool, meaning it's seeing great comments before it generates new types of comments. But the fact that feedback can be directly tied to someone's experience, to something they've actually written, instead of something generic, even if targeted: I think that's really where it starts to get pretty interesting. So let's talk about what's possible right now, and what the limitations are, before we talk about what we think will be possible in a couple of years. If I was gonna say, hey Alex, I'm a brand-new teacher, or I'm a 25-year teacher, and I want help grading: what should I know? What can I do, and what can I not do right now?

    Alex Muscat: Yeah. And I think, I mean, we're all kind of experimentalists here as part of this webinar, so a lot of this is trial and error, and seeing what you're personally comfortable with. From my perspective and the testing I've done, I think generative AI can be really useful when given all of that context you have as a teacher: your knowledge about the students in your class, your knowledge about the learning objectives for the assignment, using that to guide the AI toward potential feedback or comments it can leave on a paper. Where you really want to be careful, as always, is with hallucinations, or incorrect information. If it's leaving feedback saying there's a grammar error, but there's not actually a grammar error, that's definitely a big thing to watch out for. The other thing is trusting that the AI will sort of magically pull out the most significant misconceptions or the most significant pieces of feedback for the student. I don't think the systems are actually there yet, so relying on it to find the most important thing isn't something you should do.

    Amanda Bickerstaff: Oh, yeah. I think this is that idea of magic, right? That these new computer programs are thinking, and so they can use all of that context and knowledge, and that art and science of teaching, to go: okay, I know Alex, and Alex does a great job of construction, but I need to really help him improve on X, so I'm gonna target my feedback to that. That is not something an AI can do. It also can't do a forced ranking of what the most important thing to look for is, unless you tell it. And even if you tell it, there's a really good chance that if that thing doesn't exist in a paper, instead of the bot saying, hey, that doesn't exist, it'll make it up. That's a really common form of hallucination: something is in the rubric, or in the list of things you're asking about, that doesn't exist in the paper you're asking it to grade, and instead of saying it's not there, the model (which is a predictive engine) says, it could be there, and if it were there, it would look like this. And it can be very convincing. I think this is where we really have to be careful. When we think about feedback in this manner, whether formative or summative, you never want a bot (or even peer feedback, if you had students giving feedback) to be the final draft. You always want to take it as an opportunity to get a second eye, a place to start, something maybe more targeted, or in a friendlier tone. But you'd never wanna say, here's this thing, let's grade it, without looking at it. The way I think about it, to your point earlier: let's say we're grading, and the tool says there are lots of grammatical errors. But actually, the student used a really interesting, unique construction of language that was still grammatically correct. You submit that feedback to the student, they get upset and ask you why, and you don't know the answer, because you didn't really grade the paper. To me, that's the exact correlate of a student using AI for a final draft and then being asked what was in the paper, and not being able to say. That's where I see the concern: even if you got really comfortable with a tool like Brisk, you can't rely on it a hundred percent. That false sense of security may make you wanna rely on it, but it only takes one time to lose trust, whether you're a student using it incorrectly or a teacher. I think that's a really important point.

    Alex Muscat: Yeah. With all of these different uses of AI, we're talking about the 80/20 rule, or the 95/5 rule: you can let AI do a portion of the work for you, and it can be a lot faster than doing it manually, but you still need to be that critical piece in between before showing it to a student, for example, to make sure it's relevant, to make sure you're not losing trust anywhere. Adding on to what you were saying about limitations, and this distinction between grading and feedback: something we're definitely very against at Brisk is using AI for actually giving scores, like numeric scores on essays. I know we're talking about formative feedback here, but that's again a place where it's easy to think this AI is extremely intelligent and would be able to objectively score a paper. But then you run the same paper through the AI multiple times and get different scores; you change the language slightly and get a very different score. So we think that for scoring in particular it's very unreliable, and it shouldn't be used for that.

    Amanda Bickerstaff: Yeah, and I think that's a very interesting point: these tools are designed to be variable, to be interesting and creative, and so you see a prompt work one day and not the next because of that variation, that temperature. So I think it's really interesting to look at. And someone just said, assessment is never objective. I think this is a really good point: it isn't, and it hasn't been. But there is an opportunity, I think, with AI, to create less biased feedback, because that bot isn't tired; it doesn't know you went to the dean's office; it doesn't know that you can be quite loud in the classroom. So I think there are some opportunities for consistency of approach, especially with tools like Brisk, which are designed to have that kind of consistent, more boring approach where you more likely get a good outcome. So we're gonna actually show some stuff; we talked about this being hands-on.

    So what we're gonna do is two things. Alex is gonna show how to use ChatGPT, to show some of the limitations and capabilities of grading with that tool, and then look at Brisk. Then I'm gonna show some examples of how to use our prompt library to create some formative assessments on the other end that could be a little more engaging for your students. So I'm gonna hand it over to you, Alex. I'm really excited to see this. I love kind of breaking ChatGPT, because I think it's such a great opportunity, and I haven't seen Brisk live, so this is going to be really exciting.

    Alex Muscat: Yeah, okay, I will share my screen. Okay, great. So I'm gonna start in ChatGPT with 3.5, just the free version of ChatGPT, and I have some demo student essays here that I've prepared. So maybe to start, I'll pull this one in, and we can start with something really basic, like: "Please give feedback on this student essay." What we're gonna do is iterate from here and try to get to a better prompt that gives us more useful feedback, and then I'll also show you what the feedback in Brisk looks like.

    So I'll submit this, and okay, let's look at it. As you can see, it's pretty generic feedback. It's certainly timely, because we put it in and it came out immediately. It's maybe somewhat targeted: it does reference some of the topics explored in the essay, but it's not particularly helpful, I would say. And the formatting is a little hard to go through; I don't know if a student would be able to look at this and make reasonable revisions based on just copy-pasting it. So I think, to start, one thing you can think about is the format of the output from ChatGPT. For example... really quickly, one thing: can we make it a little bit larger, so that people can see?

    Amanda Bickerstaff: Yeah. And I think this is a good example: this prompt is intentionally a bad prompt, but prompts in that same format are used a lot, with the expectation that context doesn't matter. I think if you hit Command-Plus, if you're on a Mac, it'll make it much bigger.

    Alex Muscat: Yeah, great. So now we can scan through this feedback a little bit. But yeah, as Amanda said, that was intentionally not a good prompt, though I think a common starting point for this kind of thing. One thing you can think about next is the format the output is generated in. Right now it's a numbered list; you could turn it into a bulleted list if you want. You could also do something like this: let me copy it in here. What I'm doing is asking ChatGPT to format this pretty differently, so it's pulling out quotes from the paper and then leaving a comment for each of those quotes, in a way that's a little more like how you might make line edits on a paper. So let's try this in the same context window. As you can see, we get a really different output. And it also messed up, actually, because this is not even the topic of the essay. That is a great example of hallucination, because it's talking about the Industrial Revolution now, which is not what this paper is about. So let me copy the paper back in and paste it there, and I think now it should do a better job. Okay, great.

    Alex Muscat: So now we have this different format. This is again just a way you can iterate on prompts and get the output into more of the format you're looking for, by giving examples to ChatGPT.

    Alex Muscat: And then the last, and probably most important, thing I wanna talk about is tone, style, and content. This is where all of your expertise as a teacher comes in: everything you know about your students and your community, how these comments should be phrased, what the learning objective of this assignment was, and what students should be striving for. Because, as you can see from this initial prompt, it's just asking for feedback, with no context on what the student was actually trying to do in this assignment. A simple thing you could do with tone and style would be saying something like: please use "How might you..." questions in your comments to engage the student in the learning process.

    Amanda Bickerstaff: I really like that, because it's really specific about what you want it to do. And I think, if we go back up, this is a really good example of where you can start with a better prompt, but then the more specific you get in your own language, or the pedagogical language of your school or system, the faster you can really get there. And the good part is, I don't think we're suggesting you don't read the paper. There was actually a good question in the chat: is this more work?

    Amanda Bickerstaff: What I would say is, reading the paper probably takes you maybe two to five minutes, depending on the length, and that gives you a good base to then use this tool, because you can very quickly see what's good and what's not, and start to pull at what you're doing. It's the same thing we would want students to do: getting to a better thing quicker, or in fewer steps, and getting to something you can then refine to be even better than what you would do on your own. It's that collapsing of effort, but without removing effort. This is similar to what we say about students: this is cognitive load. We're not saying it's all-or-nothing, that you can just click through like that same check mark. What it is saying is that I can get to something more quickly and better, so that by the time I get to the hundredth paper, instead of getting tired and giving up, I've kept some level of consistency. I think this is a really good example of how you can do that. And can we do something, Alex? John McCormick put a super cool prompt into the chat, for using the free version. Maybe we can start a new context window and use The Great Gatsby with his prompt, a little bit up the screen. I love having people from the audience share. And Dan, if you don't mind sharing the I5S framework with everybody as well, I'd really appreciate that.

    Alex Muscat: Yeah, let me... okay. The prompt's a little long, so maybe I'll put it into a doc so everyone can see it here, and then we can try it with the Gatsby essay.

    Amanda Bickerstaff: Awesome. And we'll make sure to share that cool prompt in the follow-up as well.

    Alex Muscat: Great, yeah, let's try this. So we have this prompt here, and then I'll paste in the Gatsby essay, and we can see how this works.

    Amanda Bickerstaff: I will give an anecdote: I read a lot, but don't remember a lot of what I read, and so I did not do great on my AP exam essay, because I picked The Great Gatsby and then couldn't remember the character names. I only got a 3, which was not good enough for Emory. I wish I could have had a little more support along the way, and remembered what Daisy's name was. I think we've all been there. So this is great. Let's see.

    Alex Muscat: Yeah, so we have the strengths here that are outlined, and then some areas of improvement. It looks like a lot of it is about writing mechanics and structure.

    Amanda Bickerstaff: Yeah, that's really interesting. For this one, the prompt asked for feedback on paragraph structure, sentence flow, sentence variety, and depth, so it looks like it's done a decent job on those areas. I think this is a really good example of why prompt libraries are so good. John's done a great job here; we have our own prompt library, which I've shown a bit. This is a really good example of a great, customizable prompt that can get you a bit closer. But now we're actually gonna look at Brisk. I've never seen Brisk live; I know Alex, and Arman and I are in all the same places, so I'll see him in person next week. But I would love to see how you do this, because one of the cool things I know about Brisk is that it's a Chrome extension, and a lot of schools, about 70% of schools in the US, are covered by Google, like Google Classroom. That means a lot of work is being done in Google Docs, so I'm really interested to see how you collapse the amount of effort a little bit more.

    Alex Muscat: Yeah, definitely, let's transition to Brisk. And actually, I think that last prompt is a nice transition, because it kind of mirrors the basic prompt we have in the free version of Brisk. So here I am on a Google Doc, and I have my Brisk icon here in the right-hand corner. Whenever I'm on a Google Doc, I can click on Brisk and use it for a variety of things, but in this case I'll click "Give Feedback."

    Alex Muscat: and actually you already have a little prompt in here.

    Alex Muscat: But to start, I'll choose the grade level here, maybe ninth grade. I think that's usually the grade level students are at when they read The Great Gatsby,

    Alex Muscat: and then maybe I'll give similar guidance as the prompt that was submitted, around providing feedback on paragraph structure and sentence flow.

    Alex Muscat: So I'll just do a really simple prompt,

    Alex Muscat: and then I'll click this pink button here that says Brisk It.

    Alex Muscat: And what Brisk will do in the side panel is act as a thought partner for me in giving feedback on this essay. So by default, we

    Alex Muscat: structure the feedback and glow grow and wondering. And we've actually heard a lot of

    Alex Muscat: teachers really like this, and like start to incorporate this into sort of their school language and using glow and grow

    Alex Muscat: outside of just the context of using brisk but we like this as like a starting point, because it helps to make sure that we're providing kind of some balanced balanced starting point for the feedback that you're gonna give in this essay. We have some kind of positive comment, more constructive comment, and then a some pondering questions that are engaging the student in in thinking about sort of where they could go next with this.

    Amanda Bickerstaff: Yeah, I'm muted, of course. Is there a possibility to change up that format, if Glow, Grow, and Wondering isn't the format the teacher or the school uses?

    Alex Muscat: Yeah, let me show you. Maybe I can quickly demo

    Alex Muscat: what it looks like when also we could incorporate a rubric into the feedback, too. So let me open up a new doc

    Alex Muscat: and I'll use Brisk again to create a rubric this time. Actually, it looks like I'm already on a rubric, so

    Alex Muscat: I have some options here in terms of like grade level and the point scale of the rubric. I think I'll do a 3 point scale, because these rubrics can get pretty long. I do not want to be that student or that teacher. Oh, goodness, yeah,

    Alex Muscat: and I'll do something really basic for this demo. But of course, like, be as specific as possible. For your actual classroom. So we'll do something like, create a literary

    Alex Muscat: analysis rubric with. And let's just do 3 rubric criteria. You could also be more prescriptive about

    Alex Muscat: the types of rubric criteria you want whether it's like grammar conventions or style or organization. So it starts out looking a little wonky, but in just a moment it will be formatted into a real table. And then we could use this rubric to

    Amanda Bickerstaff: give more advanced feedback. It looks like that feature goes beyond the Glow, Grow, Wondering type format. And can you actually edit this now and change this rubric? You can't use the generative AI, though, to keep changing it, right? You're going to have to do it by hand at this stage.

    Alex Muscat: So we're actually going to release a feature pretty soon that lets you keep riffing on that same prompt you were working on. So, coming soon.

    Alex Muscat: Let's copy in this rubric that we have in Google Docs now. I'll open up Brisk back in this Gatsby essay and click Give Feedback,

    Alex Muscat: and I'll toggle on this specific feedback, and we'll see in just a moment what that does. You can just paste in the rubric like this. And it'll be able to parse the different rubric criteria, and all of that when giving feedback.

    Alex Muscat: So I'll get feedback now. And

    Alex Muscat: with this more advanced feature it actually drafts the comments as Google Doc comments.

    Alex Muscat: So obviously, you should read through all these comments, and you have the ability to dismiss the comments to personalize them. In any way that you want

    Alex Muscat: But, as you can see, it's a lot more targeted and specific in terms of actually pulling out

    Alex Muscat: specific quotes from the essay, and also

    Alex Muscat: being tied to that rubric that we that we generated earlier.

    Amanda Bickerstaff: Can we go up, Alex, to the one above?

    Alex Muscat: sure, yeah, that one.

    Amanda Bickerstaff: Yeah, I think it's really interesting, because with that symbolism piece, they identify symbols, and then there's an example of them, right?

    Amanda Bickerstaff: which is really good. I think it. I think it's really interesting, though, because I think that

    Amanda Bickerstaff: there's potentially like the possibility of it being like maybe really positive or overly positive or like, I could see that this is something you still would need to look at like, and I think it's pretty interesting. And you have a lot of interest from people. Alex. I know you're not the boss, but they would like this for free and not in the premium features. But so I'm guessing it's not free at this stage. And does this work outside of Google Docs at this stage, are you really native and Google Chrome? Could it work in a word, document or something else?

    Alex Muscat: Yeah, so right now, it's really

    Alex Muscat: just within Google, docs. Although we are we have some features. That sort of you can start from outside of Google docs like changing the reading level of articles across the web. And we'll we'll be adding some more features soon. That will allow you to kind of generate text from anywhere or generate text into a new Google Doc, even if your starting point isn't to Google, Doc.

    Amanda Bickerstaff: Great. And I want to say to everyone, generative AI is brand new; it's a brand new baby. So there's a reason why it's in Google Docs, and it's not because they don't want to support other avenues. You have to make choices as a startup to figure out the best place to start, and I'm assuming, because so many schools are actually using Google Classroom, that's a great place to help as many people as quickly as possible, which is really cool to see.

    Amanda Bickerstaff: This is great. Can you show us an example of when it's messed up, though? I hate to ask, but is there something that you know it can struggle with a bit? One of the things I really want us to always couch this with is that you just have to always read it, you have to always be cognizant. So is there something you know to be a little careful about?

    Alex Muscat: Yeah, let me find a good example here.

    Alex Muscat: so I think kind of as you were. You were saying earlier. The definitely a big thing to watch out for is hallucinations so

    Alex Muscat: either that could be like things that are overly positive or overly negative. Because you're you're asking for a positive comment. You're asking for a negative comment, and so it will. It's just trying to be like as helpful as Po possible to you. And so it'll find something to say.

    Alex Muscat: And I think another place where that can happen is, if you, for example, ask it to correct the grammar in an essay where there's no grammar mistakes, it'll start saying that things are grammar mistakes. So I could show an example. Kind of like that. Like

    Alex Muscat: again. This is not a great prompt, but a starting point. Like, please correct the grammar in the student

    Alex Muscat: assignment?

    Amanda Bickerstaff: And does Brisk have that same issue, too? Does it sometimes try to correct grammar that's fine, if the grammar is really good?

    Alex Muscat: I actually haven't noticed that. We do use GPT-4 with Brisk feedback, and so it is better in general than just using GPT-3, for example.

    Alex Muscat: but II yeah, I think, with any of these AI tools that you're using, whether it's Chachi Bt or Bard or Claude, you should always be aware that you can kind of leave the AI in the wrong direction, or it could pull out things like

    Alex Muscat: saying that the grammar is incorrect when it's not actually incorrect. And obviously, you don't want

    Alex Muscat: to share that with a student, because that could really confuse them or break trust. As you were talking about earlier.

    Amanda Bickerstaff: Absolutely. I know that everyone has access to Brisk, and we saw some of the limitations, how formative feedback could be a little bit tricky, but with better prompting I think you can get to some good stuff, especially as a first draft. But I'm just going to share my screen to show some of our resources, since a big part of what we try to do is make sure that...

    Amanda Bickerstaff: and we have quite a few around like feedback, and like creating spaces in which feedback happens. And one of my one of my favorite things that happened recently, and a session that we had was with a teacher using our exit tickets. There's a very common form of formative feedback. It's really how we kind of catch people like add the way out. What do they know? What do they need to know? And so we've got this exit to ticket prompt and the way our prompt libraries designed...

    Amanda Bickerstaff: it's much less advanced. It's it's it's the opposite of the great work that Brisk has done. But it's meant to be a little bit more just like kind of, you know. Control your thing, but also see like what's possible. So we have like a prompt that you can change. We have an example prompt, and then we have kind of how it works for you, so you can kind of push and pull it in a lot of ways...

    Amanda Bickerstaff: and so for this one, it's a fourth grader. I'll go to Chat Gbt, and I can put this in, and I'll I'll make this bigger so people can see it. And so you're an expert educator. So like the 5 best is that we have is set the scene. So who do we want to get? You're built? You're skilled at creating engaging hands on lessons and activities for students. And then we're as specific as possible. We want to create 5 exit ticket ideas I can use in my fourth grade math class at the end of a dramatary lesson on identifying, labeling and measuring angles...

    Amanda Bickerstaff: and then the agency activity. Add questions should be very brief, and so to serve as a form of assessment to gauge students, understanding the material covered. And so one of the things that's really great about Chatbot and other tools is, it's like a brainstorming partner, and so like it would take me forever to come up with 5, and I can ask Chatbot to come up with 70, and it's gonna keep doing it...

    Amanda Bickerstaff: and so I think that this is really where we have an opportunity to start using these tools to to to broaden our idea and our opportunity around formative assessment, and just like Brisk, has their rubric. You can then ask to create a rubric for these as well like create a, you know, a checklist or a rubric that can do that. And so here's an example, though, of a way in which...

    Amanda Bickerstaff: we can broaden our ability to create engaging formative assessments, because we often think of an assessment isn't engaging, or it's just something that kids have to do. But actually, what we could do is we can create spaces of bunch students can show their work and their understanding and ways that matter to them...

    Amanda Bickerstaff: so I know that it's in the morning, in the middle of morning, the night for some people. So we're gonna stop here in terms of content. But we're gonna have some QA. And I see some great questions. But what we'd love to do is Alex. I met Armon through all kinds of things, but like we love connecting with people. And so I'm here on on Linkedin. I'm very on Linkedin...

    Amanda Bickerstaff: and then we've got Alex. So please connect with us. We really love this conversation. And I'm sure you're gonna have some great questions for Alex coming up. But we just really appreciate those that have to leave for being here with us. And then those that can stay. Let's keep talking. So we have some questions already, Alex, so I'm gonna I'm gonna throw some questions to you...

    Alex Muscat: Yeah. So I think for the time being, at least, we're really just focused on...

    Alex Muscat: expanding the tools that were integrated with within, like the Google suite. So staying within the Google world...

    Alex Muscat: but as I mentioned a little earlier, where we have some cool features coming out soon, that will allow you to sort of work with any piece of text that you come across on the Internet, or like in an Lms so not Microsoft office, specifically. But...

    Amanda Bickerstaff: yeah, we have some exciting things coming soon. Those those Microsoft people. I don't know about them. But now I think that's really interesting. And II think it's a fascinating time. So do you guys have an AI detection tool. I didn't know this. But you have a question about it. You guys do have an AI detection tool. Does it work?...

    Alex Muscat: Yeah, so we do have an AI detection tool. I mean, I think that with all of these AI detectors, it is always giving you a probability. It's never a...

    Alex Muscat: for sure. Yes, or for sure. No. And so I actually, in addition to the AI detection we, we think that's like one important data point that can help you in making a decision about...

    Amanda Bickerstaff: Yeah, I'll be honest: I'm not that comfortable with AI detection tools. I'm sure it does work, but I'm sure it also has false positives and false negatives, because there's no watermarking, and people get better at this; there are ways to hijack pretty much any kind of technology. But I like the idea of shifting towards more of what we've seen in other organizations, and we did a session with one of them

    Amanda Bickerstaff: a while ago about proof of effort or proof of originality and actually talked to a a learning designer today, who is talking about how they were trying to think about creating AI resistant and AI assisted assessments. And they were just using Google Docs to like, look at version history...

    Amanda Bickerstaff: and that is not what it's been designed for. In fact, unless there's massive change it doesn't actually record. And you're only going to see, like the in the like, the the outcome, so to speak. And so I think it's interesting that you all are moving that way. I really like that idea of, instead of trying like, what are we trying to do? We're trying to catch students, are we trying to teach students?...

    Amanda Bickerstaff: and like, if our job is to catch students and the problems and the cheating, or the plagiarism, or or something that is...

    Amanda Bickerstaff: you know, problematic, then like we'll keep going down that pathway. But if it's really more about, like, how do we teach? And we shift towards these AI resistant ways of thinking about education, then it opens up a whole new world of opportunities. So I like that...

    Amanda Bickerstaff: and, uh, yeah. So then the second question is from Jerry or Gerry about the math. So what's the breakdown? Do you see English and maths being the top two subjects, or is it other subjects that people are using your tool for?...

    Alex Muscat: Yeah, that's a good question. I think in terms of subject areas, we see a pretty broad spread...

    Amanda Bickerstaff: I have seen more history. Yeah, I think it's fascinating to see. I always thought one thing that would be really interesting is economics and social studies, because there's not, I mean, there's obviously...

    Alex Muscat: and then, like I mentioned earlier, we have we have some like anatomy and physiology, teachers and students that are using Brisk as well. So yeah, I think I think it's it's definitely like a broad, broad spread, and I think it's one of the things that makes our job fun, is that we get to work with folks in a lot of different areas and with a lot of different sort of ideas and perspectives...

    Amanda Bickerstaff: Yeah, absolutely. So then Paul has a question. So, Paul says, are the results. Of course, not. Yeah. This is this is this is the this is the hardest question. And so, yeah. So we have a student who comes into class. Let's just say they're a sixth grader, and they're performing at like a second grade level. What's interesting is that actually happens a lot. And this is one of the things that we...

    Amanda Bickerstaff: notice when we were in classrooms is that we have students who are not functioning at the at the in the grade level that they're in...

    Amanda Bickerstaff: and so I'll tell you. I think we have it's not. It's not a big percentage. But if we had, let's just say, let's let's just do a little thought experiment and say we have 5% of our students that are performing 2 grade levels below, which is actually probably close to accurate...

    Amanda Bickerstaff: you have two choices. You can either teach a second grade curriculum, and by the end of the year, they're gonna be they're gonna know they're gonna be you know, maybe performing where they are at that level...

    Amanda Bickerstaff: or you can teach them at the sixth grade level, but understand that like the progress that they're going to make is going to be slower, and it's going to be sometimes more difficult. It's not to say it's more difficult because they can't get there. It's just more difficult because they're gonna have to work a lot harder. And so, you know...

    Amanda Bickerstaff: Paul's question is like it's really complex. Because, you know, we want to always be moving kids forward. We want to be making sure that they're making progress. But we also don't want to short change them. Like if they are a sixth grader, they deserve a sixth grade curriculum...

    Amanda Bickerstaff: so like it's it's kind of it's it's a really complex question. It's one of the things that I think about a lot, and one of the things that I think that there is no great answer to. It's it's a lot of it's a lot of conversation. It's a lot of working with families and students, and it's a lot of individualization. So you can do things like create these adaptive pathways where students are maybe you can accelerate them in some areas...

    Amanda Bickerstaff: but then slow them down in others. So you know that we did this with some kids and I'd actually love to talk to you guys more about it. We did it with some kids that were that were moving really quickly through math. So like, let's say that you have a student that's like a sixth grade math, but like they're doing eighth or ninth grade math. You can actually slow them down...

    Amanda Bickerstaff: in the things that they know, and then focus on the things that they don't know. And so like you can you can build out these pathways for them to make sure that they're being challenged in the areas where they're doing really well, but you're also supporting them and building up their weaknesses. And so I know it's a complex answer...

    Amanda Bickerstaff: to a complex question, and I think that that's why like sometimes when people are like, what's your favorite thing about teaching? I'm like, oh, the hardest questions, because that's like, where all the fun happens. And I know Alex has has I'm sure thoughts about this as well. And and in terms of...

    Amanda Bickerstaff: that's you know, in terms of what we're doing with Brisk, and I think that one of the things that's really powerful about Brisk is it gives you data that allows you to start to ask these questions. And when you're working with it at scale and you can see the...

    Amanda Bickerstaff: patterns of students and you can start to have conversations, like as an example, like in a math classroom. And Alex, I'm sure you've seen this before. But in a math classroom, it's not uncommon that students struggle with fractions...

    Amanda Bickerstaff: and then if you see that like the majority of your students are struggling with fractions, it might be like, what's the grade before fractions? What's the grade after fractions? You might be like, what is the concept that's coming into it? And you can actually have conversations about like...

    Amanda Bickerstaff: instructional shifts that are necessary. So that's where I think the data is really powerful is, it allows you to see these things at scale, and you can start to make shifts, so that you're not having to make...

    Amanda Bickerstaff: the same mistake and see the same results over and over again. So I think I think that that's where the that's where it's helpful. It's it's certainly not perfect and it's not like, yes, no, but it gives you a lot of data to start to have these complex questions, and then start to build up more more and more targeted approaches...

    Amanda Bickerstaff: I think that's a big part of what we do here is we say like it's it's it's not about just knowing that a student is behind. It's knowing where they're behind. What's going to help them move forward. And then like, how do we support that. And so I know that we're going to have to end shortly, but I just really wanted to thank you, Alex, for being here and being able to share with...

    Amanda Bickerstaff: you know, all the folks here on the call about what you're doing. And I think this is just really exciting work and I and it's always a pleasure to connect with you and the rest of the team at Brisk...

    Amanda Bickerstaff: And if you have questions for Alex and you didn't get to ask them, like, please feel free to to shoot them an email. You know, we're we're all teachers here. We're all educators and we're always excited to support each other and the work that we're doing. So thank you, Alex...

    Alex Muscat: Yeah, no, thank you, Amanda, and thanks, everyone, for having me on today. It's been a pleasure. If anyone does have any questions or wants to chat more about Brisk or just education in general, feel free to reach out. My email is alex@brisklearning.com. And I'm happy to chat and connect. Thanks again.

    Amanda Bickerstaff: Awesome. Thank you, Alex. And have a great day, everyone. And we'll see you at the next session. Take care.

    [End of Recorded Session]