After a former colleague recommended it, I listened to the “You can learn with AI” episode of the Change, Technically podcast (which was really good, check it out) and it got me thinking:
As educators, what do we really want to get out of generative AI (or even AI in general)? I tinker with things all the time because I like understanding where the edges are, but in an ideal world (setting aside the considerable ethical, technical, accuracy and fabrication issues) what would we want tools like this to do?
Would I want them to:
- Just create things for me wholesale (e.g. lesson plans, assessments)?
- Completely automate aspects of my job (e.g. feedback, emails)?
- Make ME better at some aspect of my job (e.g. by giving constructive feedback on things I create myself)?
- Replace some aspect of my job entirely (e.g. take over asynchronous teaching, like the AI bros seem to want)?
…something else I haven’t considered? Some combination of things?
It made me think about what I really consider to be my work, and what AI tools out there actually seem to do. A lot of the education-focused AI I see tries to sell itself as a resource creation tool. As a teacher I feel that my job is not resource creation, it’s not assessment writing, it’s not marking assessments - it’s understanding my students to help them develop. The problem with replacing time-consuming tasks is that, to me at least, every time the machine makes something for me, I miss out on really thinking about how my students might understand it, engage with it, question it, and learn from it. It’s not atrophy, it’s lost opportunity.
I would cheerfully accept generative AI reading and writing every email for me ever, and avoiding all the overthinking, wondering why this was an email, questioning why someone used Reply-All, and transactional back and forth of scheduling. That actually sounds pretty rad. Or we could just not use email for 90% of what schools use it for. That would be good too.
I think if I were an AI booster, my answer would be that this is just a context problem: give the machine enough context about the students, and next time it’ll surely work. It’ll surely create something with a facsimile of understanding of who that student is and what motivates them. Maybe something containing a sports analogy and lots of emoji. Yeah, that’ll do it. If only we could get those pesky kids to tell the machine everything.
…which reminds me of another excellent podcast episode, from CoRecursive: From Hacker News to TikTok. Wouldn’t it be great if, in order to use technology, our data weren’t abused at every turn to maximise attention? Maybe that’s how we should be running education instead…