Ensuring a measured response to ChatGPT
Science fiction has often presented new technology as something with both great promise and the capacity for menace. Whether it was HAL 9000 from 2001: A Space Odyssey, the replicants in Blade Runner or the Terminator, we have often seen depictions of what might happen when artificial intelligence grows to match the capacity of humans.
In futurology, there is a concept called the ‘Singularity’. This is the point where technological advancement stretches beyond existing human capabilities and is therefore able to self-develop in ways that humans could not manage. This hypothetical situation would trigger cycles of advancement so rapid that the resulting superintelligence, far beyond human intelligence, would render the future unimaginable.
While I find this idea slightly terrifying, there has been a small step forward in Artificial Intelligence (AI) with the recent release of ChatGPT and Google Bard. Over the past weeks there has been something of a moral panic regarding the potential misuse of this technology. This has been particularly evident in schools and universities, which have developed a range of responses including attempts to block access to the technology on local networks and the rapid adoption of new policies delineating consequences for misuse of the technology.
While the technology is new, the conversation is not. In around 370 BCE, Socrates raised concerns about the new practice of writing and stated that “this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practise their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.” In 2007, many schools globally blocked access to Wikipedia due to concerns about the veracity of information and worries regarding students cutting and pasting others’ ideas and passing them off as their own.
Similar concerns have been raised regarding the use of scanners which utilise OCR (Optical Character Recognition) to convert printed text to editable documents.
In each of these cases what proved to ultimately be the most advantageous response in an educational setting was to find ways to utilise the technology to enhance learning.
There is widespread concern that ChatGPT will lead to student cheating. Without teacher adjustment of assessment practices, this could undoubtedly be the case. But this does not necessitate the banning of the technology.
Regardless, banning would be a futile exercise for a number of reasons. Firstly, ready access to proxy servers, VPNs (Virtual Private Networks) and mobile internet tethering, all of which allow users to bypass local networks, makes restrictions ineffective. Further, while ChatGPT could be blocked locally, there is no capacity to do so in students’ homes, so there would still be doubt regarding the legitimacy of work undertaken there.
A sounder educational approach is to consider how we can utilise the new technologies to improve student learning. There is a principle often mentioned in contemporary teacher education courses – if you can Google the answer, you are asking the wrong question. This is also true of ChatGPT. The main concerns regarding the use of the technology are not so much related to learning – they are around the assessment of learning. Just as with the Google example, our teachers may need to change some assessment practices knowing that this new technology exists. I predict that in the short term this will lead to more handwritten responses, or annotated essay plans which identify where ideas were derived from.
Just as has been the case with Wikipedia and Google, I believe that ultimately the new AI technology can help us to advance student learning once teachers find novel ways to incorporate it into classroom activities and instruction.
It is highly likely that as the technology is refined and new generations emerge, AI will impact many aspects of daily life. My hope is that judicious use and monitoring of the technology in our classes will equip our students with the requisite boundaries and skill sets to use this in ways that enhance their experiences and learning.
New technology has not led to the dystopian world that science fiction tends to predict as a direct consequence of its implementation. What is certain is that, just like the Terminator, the moral panic around such advancements will “be back!”
Shabbat Shalom,
Marc Light