How is AI changing the future of work?

Today we're going to talk about a really important question: How is AI changing the future of work? We're reviewing a recent study from Ethan Mollick that talks about centaurs, cyborgs, and the Jagged Frontier! All of these ideas can be quite confusing, and we’ll get to them too – but most importantly, I’m here to take you through what the study has shown about the impact of AI on work.

Transcript:

Hi, I'm Amanda from AI for Education. Today I'm going to talk about a really important question: How is AI changing the future of work?

While this study talks about centaurs, cyborgs, and the Jagged Frontier – which can be quite confusing, and we’ll get to that later – I’m here to take you through what the study has shown about the impact of AI on work.

Ethan Mollick and his team have done a fascinating study. I love evidence, so I was really excited to see this study on the impact of generative AI, specifically ChatGPT-4, on consultants at BCG. About 7% of all consultants at BCG agreed to participate in the study, and they were given the opportunity to complete a couple of tasks either with AI or without.

What they found is that using AI actually increased quality and speed across the board. That was really interesting in the sense that it had an immediate impact on both the quality of and the speed with which these consultants were able to complete very common tasks that are part of their workday.

The first piece of really interesting new jargon is this idea of the Jagged Frontier. If you listen to me, or talk to or follow other thinkers in this space, you’ll have heard that AI is actually really good at some things and really bad at other things, but it's incredibly hard to tell the difference. Ethan Mollick's team describes this as the Jagged Frontier: the idea that there are some tasks ChatGPT does very, very well – like ideation, writing, and other pieces that are very common in consultants' work – and other things it does very poorly. If you asked it to create a fifty-word short blurb for a consulting agreement, it wouldn't be able to do that, because it doesn't actually count words. There are some things that seem like they would be very easy for AI but are actually outside the realm of its capabilities, and it's really hard to tell the difference between the two.

There were two tasks: one for which AI capability was very high, and another for which it wasn't. They asked people to complete these tasks without telling them in advance that there was any difference. What they found is that those who did not use AI not only had lower-quality responses across the two tasks, but also took longer to complete them.

Then there were those who used AI – and these are the ones who got training too (I love this as an AI in education person). Here we actually saw that with digital literacy training the outputs were of higher quality, and their speed also increased. People were able to do things not only better in terms of quality but also faster, meaning they could be more productive and efficient. This is really interesting to me.

What happens though when you give these consultants a task that AI is not very good at?

What they found is that the quality did increase, but the accuracy of the response did not. Those who did not use AI and actually dug into the data themselves found, through some very nuanced comments in the qualitative data, that the recommendation had to be different from what the AI would have said.

It's really interesting to think about what that means for us: as we get really comfortable and see real gains in quality and speed, we can also have these subtle experiences in which the content itself is not accurate, or not done very well, or includes hallucinations where the AI makes up information that looks correct. This is really important when we train people around digital literacy: you can't just trust it blindly. You can't just take the content, cut and paste it, and put it in. The more comfortable you get with AI, the less able you may become to critically evaluate these outputs and recognize that they can be wrong. That was something I thought was really interesting.

Another piece I just want to point out is that low performers who used AI actually did better on the task by pretty significant margins. There's an opportunity here to really collapse the skill gap, so that we have a lot more people performing at a high level.

The final piece is that I promised to talk about cyborgs and centaurs – you know, I love both science fiction and mythology – but these can be kind of complicated terms. A “Centaur” is someone who uses AI as an extension of what they can do – the more traditional approach of “I started a project and then I used AI for a specific task to help.”

A “Cyborg” is someone who uses AI all the time, in really deep and meaningful ways – there's really no part of their workflow where they're not using it. There are very big differences between those approaches. I think that's also something we could talk about going forward: which approach actually leads to better results.
