ChatGPT in Higher Education: a curse, a blessing or an inevitability?
“We should trust our students as the default and not assume that they’re all going to go running off.”
“The education about how to use it is much more important than trying to ban it or whatever. That’s totally useless.”
These are just two of the thoughts expressed in our special LLInC podcast on ChatGPT.
The episode features:
- Michiel Musterd, Manager Data, AI & Media at LLInC
- Martin Compton, Associate Professor at University College London
- Ludo Juurlink, Associate Professor at Leiden University
In our forward-looking conversation, we discuss:
- How universities are dealing with ChatGPT;
- Ways that ChatGPT can help us to look critically at information;
- Ethical and accessibility issues related to AI text generators;
- The need for those working in higher education to collaborate in their response to ChatGPT.
Hello, and welcome to this podcast on the hottest topic in education, if not society, at the moment: ChatGPT. As you just heard, there’s both fear and hope around what ChatGPT and similar models bring. I am your host, Michiel Musterd, Manager Data, AI & Media at the Leiden Learning and Innovation Centre. And today, I will be interviewing Martin Compton from University College London and Ludo Juurlink from our own Leiden University on ChatGPT and similar models in higher education.
We should trust our students as the default and not assume that they’re all going to go running off.
I think the education about how to use it is much more important than trying to ban it or whatever. That’s useless. It’s totally useless.
For the listeners not that deep into the subject, ChatGPT is a conversational AI that can generate human-like text in response to questions written in natural language. The release of ChatGPT to the public at the end of November last year has ignited quite the frenzy. And that’s why I’m here to talk to my guests: is ChatGPT a curse, a blessing or an inevitability?
Okay, so welcome both. Before we dive in, it’s probably good to have you introduce yourself a bit. Martin, can I ask you to go first?
Hi, yeah, thanks for having me. I’m Martin Compton. I work at University College London, UCL for short. We are a global university disrupting education, we say to the world. And sometimes we do and sometimes we don’t. We’re massive. We’re huge. We’re a big player in the research arena. And I work in a department called Arena, which does research-based education and faculty development. My own disciplinary background is history. But in my academic role here it’s about bridging the gaps that exist, I suppose, between pedagogy, assessment, feedback, and all the digital approaches and tools that are available. And so, you know, inevitably my interest was piqued when a lot of this generative AI stuff started to manifest in the ways normal people use things. And obviously, it’s a big, big issue and interest for the university. And I’m keen to focus on positives as well as addressing some of the issues.
Thank you. Ludo?
Ludo Juurlink. I’m a chemist, and I work at Leiden University in the Netherlands. Leiden is, like UCL, a large university with broad interests and a broad range of academic programs. I’m in chemistry myself, I do chemical research. But on the side, I also focus on pedagogical research, and I’m involved in various programs related to that. My recent interest in ChatGPT is, amongst others, for similar reasons as Martin just mentioned. I’m also actually a member of the University Council, which clearly is involved in and highly interested in all the developments with AI being introduced in universities as well, and particularly in their bachelor’s and master’s programs.
Yeah. Okay. Thank you, Ludo. Yeah, so I’m curious. ChatGPT has been around for about four months now. And maybe starting with you Ludo. Can you tell me a bit about what has happened within the university?
Yeah, sure. What has happened? It’s mostly information. I guess our university is still trying to figure out what it needs to do. So initially, I think, discussions were set up centrally across all of the different departments and faculties. This is led in part by the centre that hosts learning development and innovation. And with them and some people, for example, from the computer science departments, there are sessions in different departments where people are getting acquainted with what’s happening: what ChatGPT means, whether they can use it, how they can use it. But at central level, there’s not really a policy yet: how to use it, how not to use it. So from that point of view the University Council was also still getting informed. So we’re really pretty early on in the stage of figuring out what to do with this development.
Okay. And how’s that for you, Martin? Is that very different or similar?
It’s pretty similar. I think possibly we’re a little bit further along the road. You know, we’re so big. We’ve got 11 faculties, 40,000 students. You know, we’re massive. We’ve had all the responses, and a lot of those came to me. So we’ve had people saying ‘we’ve got to ban it’, you know, like they’re doing in parts of France and New York schools. People who love it, already integrating it into assessments. People who want to detect it. You know, so we’ve got the whole range of possible responses here. And I think, you know, there was quite some significant initial panic, because we have a lot of online exams, which are short answer, knowledge based. ChatGPT and other generative AI text based tools are really good at that kind of stuff. But I think we’ve had long enough now for people to realize that the fluency, the linguistic fluency, masks some of the inadequacies in terms of what it puts out, the hallucinations and all that kind of stuff. So I think there’s a little bit more understanding and balance happening now. We’ve issued one staff briefing. It’s not really policy, but it’s guidance for staff. And I’m very happy, I’m on the working group for that, and I’m very happy that we’re not going down the ‘ban it, detect it’ route. We are, as much as we can, as an institution trying to roll with it. We’ve also issued a student briefing on this that we’re encouraging all students to engage with, because I think that literacy is going to be fundamental: understanding that we understand what it is, and that there are ways that you might use this perhaps to cut corners. But let’s focus on the positives. That’s kind of our approach at UCL.
Okay. So you’re actually not that concerned in terms of students misusing this, then?
I wouldn’t say that. I’ve just come from another meeting with the faculty that I work most closely with, which is Life Sciences. And, you know, I put to them in my presentation that we should trust our students as the default and not assume that they’re all going to go running off generating their essays and assignments. But I think, you know, particularly with large international cohorts, perhaps with students who are multilingual and worried about their ability to express themselves clearly in English, there is a more generalised concern that this will be utilised to shroud some of the difficulties some of our students may have in expressing themselves in English. So there is more of a concern, I think, across the faculties, but centrally we’re trying to put a positive spin on this.
Okay, that sounds good. At least, I think it can be a positive thing in many cases, but I understand the concerns as well. So Ludo, maybe a question about your particular field. You mentioned you’re in chemistry, and ChatGPT is a language model. Chemistry, to some degree, you could call a language as well, with some stretch of the imagination. Are you worried that it’s also applicable to your field, particularly?
In terms of, say, chemical knowledge? A model like ChatGPT is not very good yet. I recently read an article that was just published in the Journal of Chemical Education. Actually, the last issue had two articles just on the use of ChatGPT in the chemistry curriculum. So you do see that it’s slowly evolving into a really hot topic. In one of these articles, people describe how they actually gave it a standardised test. It’s a very large test that people take in India in order to get into chemistry programs at universities. And ChatGPT just failed that test. So the amount of chemical knowledge it has, or how well it can describe chemical things, is still extremely limited. However, on the other side, you also read in these articles that people are very concerned about students using it to, for example, write a little bit of a lab report. Students could easily give it an assignment to write a bit of it, using, for example, the numbers that they actually also have in their tables and whatnot. So there is concern. But with regards to actual chemical knowledge, it turns out that it’s really not as good as it is, for example, in sciences that do not have their own specific language, like the chemical language with symbols. It looks a little bit like math, but it’s not. So in that respect, I think for chemistry content, I’m not so worried. I am mostly worried, say, about the use of ChatGPT in general, and then more in studies where people need to write essays much more than within chemistry, for example.
Yeah, in different programs I can imagine that. In linguistics, it’s a different ballgame altogether.
Yeah, there’s clearly a need to write significant amounts of text. At some point our bachelor’s or master’s students need to do this as well. At the end of the bachelor’s, they need to write a thesis. At the end of the master’s program, they need to write a thesis. So there are concerns, clearly, about people copying information from anybody else, regardless of whether it’s from an artificial intelligence system or a human intelligence system. So I think it’s mostly about educating people on what to do and what not to do. That’s where I think the biggest issue is, and the same thing goes for testing as well: how to use ChatGPT for testing and how not to use it for testing. I think the education about how to use it is much more important than trying to ban it or whatever. That’s useless. It’s totally useless.
And Martin, I see you nodding. You already mentioned something about it, but can you…
Yeah, I mean, the banning thing is just not going to happen at all. You’re not going to be able to work with that. It’s unworkable. You know, Microsoft are talking about a pilot now, and I’ve tested some generative text based AI tools already where, as you type, it suggests completions of sentences. I think, you know, we’re going to have to work with that being integrated into Microsoft Word. Once that’s there, you know, how much do we talk about spell checkers anymore? How much do we worry about grammar checking anymore? How much do we worry about people using calculators to do both basic and complex mathematical functions? You know, the labour needs to be pushed to one side but the understanding must still be developed. And what we’ve got to do as educators, in my view, is look for those positions where these tools can be great. But if we don’t enable the students to develop that foundational knowledge, understanding what is happening, then you’re going to come up with some really, really weird outcomes. So you know, if a student doesn’t know what happens when you multiply one number by another, and only knows how to do it on a calculator, then if they press the wrong buttons and get a ridiculous answer, they don’t know that it’s a ridiculous answer. I do, because I know what happens when you multiply one number by another. I’ll have a rough ballpark figure. And it’s the same, I think, with generative text based things. If it just churns out some old garbage, and I hand that in as my thesis, you know, then more fool me when I fail that thesis, because I haven’t actually grasped the fundamentals that are appearing there. And I think it’s literacy. It’s a literacy for us, which implies a resource, actually, in terms of the time to understand and develop that, and it’s fast moving as well. So it’s an ongoing development, but also developing that literacy in students. Absolutely critical.
Yeah. I think you have a nice comparison there, connecting it to the calculator and the spellchecker. You could say it’s a new type of those kinds of developments, where the initial reaction is a sort of knee jerk: we need to do something about it. And then slowly the thinking becomes: okay, this is just the new reality that we have to work with and, in this case, redesign education for. I’m just curious: do you also use it personally?
Yeah, I use it all the time. Actually it’s quite remarkable. And interestingly, the thing that makes people go ‘wow’, you know, the jaw dropping stuff where you put your short answer question in and it bungs out loads of stuff, that’s the thing that, from my point of view, I’m least likely to use it for. Particularly when you’re starting to look at other tools like Bing Chat, which is pretty good at pushing out recent news articles and journal articles. ChatGPT famously hallucinates that kind of thing. All of this kind of stuff is evolving. What I’ve used it for is to suggest structures for presentations, for documents, for policies. I’ve been experimenting with it to create basic marking rubrics, and I’m looking at ways in which it might be able to assist with marking and feedback on assessments. So you know, there are a bunch of things that you can use it for that, in all the excitement and anxiety, are being missed. So I’m trying to encourage my colleagues to think about other ways that you can use these tools. And I’m doing it as I go, to try and say: what about this? Have you thought about this? Have you tried that? So yeah, experimenting for the sake of showing, but also actually using it for myself. Just one quick example, if you can give me the time. My wife said to me the other day: I’ve got to do this intro to a conference thing. You know, it’s a platform for students. I don’t really know how to start it. It’s got to be upbeat, it’s got to be motivational, but I don’t want it to sound too cheesy. I said, have you thought of putting that through ChatGPT? And she went, ‘no, okay, I’ll give it a go’. So she put something in and it generated a motivational speech. She said, ‘this is brilliant, I could use this’, and then went at it and modified it. She didn’t really use any of what was generated, but it got her over that hump. It got her across a line, that sort of writer’s block thing. And I think it’s going to be brilliant in that domain. Yeah.
Yeah, I’ve heard that type of example a lot more: to basically, to speak in Ludo’s terms, get over that activation energy to get started, and then develop something yourself in the end. But you have something on paper already.
A nice thing, I think, is that when you actually use it for that purpose, you immediately start being critical about what it puts out. Right? You immediately start going: ‘Yeah, but no, no, no, this is not good enough. Now I need to add something in between there. And the order of that is wrong. And no, that’s not really the right type of phrasing, I should use something else’. So in that respect, I think it is really a helpful tool, if these are the types of things that you would like students to learn or to practice with. So in that respect, I think it’s just an extra gadget that we could possibly use to help our students develop things like critical thinking, like rephrasing in your own words. There are a lot of options that we should explore. Let me put it like that.
And do you already have any concrete examples of using it in that way?
So far, in our university, I’ve only heard of one teacher actually using it. But he used generated text from ChatGPT in an exam, and asked students to compare what ChatGPT actually put out for this question to the texts that they had already read from the literature that was part of the class, and then compare and contrast and write something about that in the exam. They can’t use OpenAI at that point. They just really have to compare and contrast. So it’s critical thinking skills that are being tested at that point, which is, I think, quite interesting. I have been trying to figure out internally within Leiden University whether we actually can ask students to use it. Because at this point, as far as I know, you are still required to create an account. And that probably means that you have to, if I recall correctly, at least provide a personal email address, and that’s personal data. So I’m not sure whether I can ask students to actually actively engage with it without infringing the GDPR.
Yeah, there’s a GDPR concern there, which I think is something that maybe people miss because of the hype around it. And have you had any luck figuring out what the answer to that question is?
So far I’ve only had unofficial responses, so I’m not going to repeat them here.
Okay. Well, what about you, Martin? GDPR applies to you as well.
Yeah, you never know what’s going to become inapplicable as a consequence of Brexit, but for the moment it’s still there. And, yeah, it’s a major issue. We can’t ask the students to do things that might have a cost. And, you know, some of these tools are being released on a drip-feed basis. So some people have got Bing but some haven’t. And ChatGPT, you know, it’s unreliable. You can’t guarantee access to it. And it’s not internally supported. So we can’t require students to do these kinds of things. I’ve got a colleague in engineering who is asking his students to structure a large piece of work, which includes a contextual statement and a literature review. And what he’s done, quite opportunistically and creatively, I think, is suggest to students that they try out generative text tools like ChatGPT, not specifically naming a tool, as a starting point. His wording is this, I’ve got it here. He says:
‘I’d like everyone to include a section in their coursework report with the title Artificial Intelligence Resources. In there, I would like you to explain how you used ChatGPT or similar resources. And I’d like you afterwards to add your comments on how useful you found them. It would be especially useful to see examples of work before and after AI intervention.’
So it’s about acknowledging the existence, about a little bit of opportunistic action research, but also nudging students towards thinking a little bit differently about how they generate ideas, and about how they create the texts that they are ultimately going to be assessed on. So I think that’s really nice. But you know, I said it before and I’ll say it again: in six months’ time, we’re not going to be worried about this, because it’ll all be integrated into the tools that we use already, assuming we’re using Office or Google products.
Yeah, exactly. So the privacy concern basically merges into the privacy concerns that we might already have with big tech companies; it shifts into that discussion. Okay. Thank you. Switching gears a little bit here: using these kinds of technologies in an ethical way. I’m curious what your thoughts are on what universities should be doing in that respect. And maybe building on that: are universities the ones that should be doing this? Or should secondary schools and primary schools already be concerned with educating our students?
Obviously there are major ethical issues. I mean, there are sustainability issues around carbon footprints. You know, just the training that these large language models undergo takes so much processing power, so much computing power. That’s a big ethical issue. Privacy implications, of course, which you’ve already mentioned, Ludo. But also around the way in which the language models are reinforced through human interventions; major ethical issues there. I think we have to acknowledge these but not use that as an opportunity to say we’re not touching it. So it’s about acknowledgement, about surfacing it, about being strident in our responses and resisting where we can, but at the same time acknowledging that this is part and parcel of how we move forward. We’re trying to shape and change that landscape wherever possible. In terms of the schools, some people in the UK might argue that we’re in a good position, relatively speaking, because we’ve recently shifted a lot of our coursework-based assessments at ages 16 and 18 (the two critical points of assessment in the UK) to exam-based. I think that’s a retrograde step myself; it avoids some of the issues around this, actually, but it’s also missing an opportunity. I think schools have exactly the same issues as we do in higher education. And dealing with the ethical side of this is major, because kids with access to the technology at home have an advantage. And if you don’t acknowledge that, and accept that, and work with it in the schools, then you’re just widening that digital divide ever further.
What about you, Ludo? I think you started to respond as well.
I was actually kind of surprised to find a letter from the director of the high school where my kids go, already a month ago, regarding the use of ChatGPT in coursework. And it basically stated more or less what I think we should do: educate these kids early on about what is fraud and what is not. Using something or somebody else’s ideas and presenting them as your own is fraudulent, but using somebody else’s ideas, referencing them appropriately, and then adding what you believe or think about it, is perfectly fine. Actually, that’s what we should train these kids to do. So trying to avoid the idea of using this new technology in our education is just ridiculous. It’s going to happen anyway. And so, in line with what Martin said earlier, I think this type of technology will be everywhere in a very short amount of time. It’ll be in our cars, in our houses, it’ll probably be in our mobile phones. We’ll start talking to this mobile phone, and it’ll start talking back to us. So just ignoring it is futile, it’s useless. And for sure universities should embrace this technology, I think, in the best possible way we can, and move forward. But one of the ethical things regarding this technology is availability. Currently, it already costs you 20 bucks per month if you want not the standard free version but the advanced one, which I think is actually running version four already. For 20 bucks a month you have unlimited access all the time. So this is going to start making differences in accessibility. And that is, I think, at least in the application in education, an issue. So my point of view is that universities should actively engage with the companies that are creating this and start thinking about how to actively pull it into their education, instead of just looking from a distance and waiting to see what happens. I think we should be more active and proactive about this.
And take steps to connect whatever the backend of it is to our, say, digital learning environment, to the options that teachers have to create tests, or whatnot.
So you see a big concern in accessibility, essentially, and the divide in who has access and who does not?
Exactly. And it’s the idea that these chatbots effectively have such a vast amount of knowledge, if you want, whether biased or unbiased, and whether it spits it back at you in a biased or unbiased sense. Just learning about anything that you would like to know is so much easier right now. You can go to Wikipedia with a question and look for something, and then kind of hop from item to item in order to learn something about a field that you know little about at that point. But by proactively just asking questions and getting feedback, asking questions again, it goes much faster. So you can learn much faster if you use the bots appropriately. And then, if there’s a divide between students who have access to better bots or faster bots, or have more access to them, then you again start discriminating between students based on financial resources, which I am extremely against.
Yeah, so there’s clearly a topic there for universities, but maybe actually more for society at large, to make sure that there’s equal, or at least more equal, opportunity as well.
That’s exactly why I think universities should actively step in, so that they basically pull the resources into the university within a safe digital learning environment, so that all of their students have access at the same level, to the same content, to the same bots.
My mind immediately starts racing in terms of running these kinds of models in your own cloud, as universities. But maybe let’s not go into the technical side there. Just hopping on to something else that you mentioned in your answer: using this for sort of training and learning purposes. ChatGPT, or GPT models more generally, are known to hallucinate. Sometimes they just produce factual nonsense in a way that sounds like fact. How do we avoid that either students or, actually in many cases, also teachers completely start mistrusting this? Or completely start trusting it? I mean, there are two sides to it. Do you have any thoughts there?
It’s critical thinking skills, right? There are also teachers who are hallucinating. Recently within the Netherlands somebody at a different university than our own was basically suspended because apparently he was teaching all kinds of nonsense from a scientific point of view. So in that respect, I mean, people are also just algorithms, right? A very, very highly complex biological algorithm that has also accumulated a bunch of information as a function of time and development. And these bots are not very different in that respect. I actually had quite an interesting discussion with ChatGPT about itself and its comparability to humans. But yeah, we have to go along, get into it, and make sure that students start using it in the appropriate sense, develop their critical skills, and use it. It may actually be very useful exactly for that reason, right? To help students develop critical skills. Get their information quickly, from different sources. Check what your teachers told you. Also the resources that are provided: a textbook in chemistry is also not always correct. It also simplifies or oversimplifies things sometimes. So I’m not too concerned about it. Wikipedia was often wrong at the beginning as well. And it’s not anymore.
There’s some self correcting in there, as well.
I think that what you say is aspirational. And I agree, I think there are loads of opportunities here. But we’ve got to accept that we live in a society where populism is on the rise, and people believe all sorts of garbage that they read on the internet. I type, you know, ‘Einstein said Britain is great’ or whatever. It’s a quote, it’s done as a meme. Everybody’s sharing it, nobody checks these things. And that is the thing, it’s instantaneous: I’m fed something, I feed it back to the world without really thinking about it, and this, for a lot of people, might exacerbate that. I did a little experiment with it. I got it to write an essay on something that was easily verifiable. So, for my sins, I’m a Tottenham Hotspur football club supporter. And I got it to write about Tottenham Hotspur’s FA Cup wins. That’s, you know, the signature trophy in England. And it said Spurs had won it six times, not eight. It got a lot of the scores wrong. But it wrote it with such conviction that anybody reading it would think this is the Wikipedia article. I got 600 words out of it: eleven factual errors and half a dozen errors by omission. In 600 words. This is the kind of thing we’re dealing with. And the more that kind of stuff is generated and put onto people’s blogs and what have you, that then becomes the material that future language models are learning from. So we’re not only getting to a plateau of all the knowledge that these models are learning from, the data set that’s there; we’re also actually diminishing that dataset by introducing errors into it that have been created by language models themselves. We could potentially be in a real mess of checkability and all of that. And so I think what Ludo says about criticality and developing criticality, this is where we need to go with all of this.
I could even imagine that you’d want to start teaching the language models this critical thinking skill as well. I don’t know how that would work, but making a selection of what’s factual and what is not.
Exactly. I mean, the whole thing about education, about academic education, I think, is about these academic skills. So that’s developing a critical attitude towards information, whether it’s generated by yourself or, in my case, through experiments, chemical experiments. You have to learn to be critical about the results that you get, and also know when to dismiss results because something may have gone wrong while you were not looking, or having a coffee while the machine was running, or whatever. So that’s being critical towards information produced by yourself, but also by others. Fact checking capabilities, I think, are extremely important. Self reflection too: get your information from different sources and come up with your own ideas from that, and then still, even then, do self reflection based on how you ultimately got to that point. And unbiased writing skills; I think that’s also an academic skill. And again, I think the output from chatbots is, in this respect, a very useful source.
So implementing this is, I think, very good. And Martin, you said something about the biases, right? If I have concerns about this, this is one of my two major concerns: the inherent bias. There are two types of bias that I’m worried about: bias in the data used for the training set, and bias introduced by the humans involved in the training.
But if you currently look at the output from GPT, people would say it’s rather woke. It’s very politically correct, everything that it’s saying. And that may be acceptable, it may be wanted; in some cases it may also not be wanted. So that’s one of my two major concerns: the inherent biases, both from the data as well as from the people involved in training. The other concern is the non-free availability: subscriptions, stuff like that. That’s a major concern as well, I think. Otherwise, I really don’t worry too much about this.
Okay, nice. Maybe to wrap this up, circling back to the original question. I think I can guess the answer based on what you’ve been talking about over the course of this interview. ChatGPT: do you consider it a curse, a blessing or an inevitability?
I think for some people it’s definitely going to be a curse, because it’s going to require a change not only of practice but of the way that you think about things, and what we value in terms of the purpose of education. And one of the things about chatbots is that these large language models can be trained. That’s what you do with them, you train them. People can also be trained. But what you can’t do to the chatbot is educate it. You can do that to people. And educating people is what we need to be doing more of. And that’s exactly the critical thinking aspect, and so on, and so forth. The ability to study, the ability to synthesise, the ability to bring different sources together and make rational, reasoned judgments about them. What we need to do is understand that when we’re training technology to do things, we’re passing on some of that activity to it, but we have to understand what is happening there. If we lose sight of that, we’ve got a problem. So for people it might be a curse, because it requires time to learn about these things.
I don’t know what it’s like in Holland, but in the UK we still have a few, only a few, lecturers who will go into a lecture hall and won’t use any of the technology. Some of them might even wheel in an overhead projector and put their acetates on, or read aloud from their hand-written notes, like the traditional lecture of mediaeval times. It takes people a long time to change in higher education. Although we’re about discovery, we are very conservative, and we don’t actually like change. We showed in the pandemic that we can change quickly, that we have the potential to do things very swiftly and reactively. And that’s fantastic. But we are, by our nature, slow to change and respond to pedagogic mores, assessment needs, changing demographics, a changing employment market. This is pushing us to really rethink the pace at which we change, the nature of the changes that we make, and the reasons why we’re preparing those changes: for the employment market, for the skills that students need. So we need to educate our students about these tools for a better education, and let the tools that we can train to do things do what they do well. So yeah, embrace it, love it, worry about it because it’s going to require a heck of a lot of resource. But stuff like this is good, because it means we’re collaborating a little bit. More collaboration can only be good in terms of the response.
Nice, thank you Martin. Ludo for you the same question. Curious to hear your thoughts.
I wholeheartedly agree. This is clearly inevitable, and it will cause concerns for some people. But education should embrace this and see that there is only one way forward, and that’s a responsible one: try to incorporate this type of generative artificial intelligence technology into our education and make the best of it. For these kids it’s never going to go away, so better to learn to deal with it and use it appropriately.
Thank you. Thank you both for your time. And maybe we can follow up some time in half a year or so, to see where we are at that time.
Perhaps you can have a discussion with a bot at that point!