From groceries to academia: A critical lens on ChatGPT

By Aiyana Vittoria Amplatz and Sol Zeev-Ben-Mordehai

The relationship between humans and AI is a complex one. Whilst AI has the potential to revolutionise many aspects of human life, concerns around job displacement, privacy, and ethics also exist. As AI continues to evolve, it will be important for society to ensure that it is used in ways that benefit humanity as a whole. 

Captivating introduction, right? This grammatically perfect, formal paragraph was written by ChatGPT when asked to write about the relationship between humans and robots in 50 words. ChatGPT, launched last November by the American AI company OpenAI, is an artificial intelligence (AI) language system. It is available 24/7 to its users, providing answers to their questions. It relies on language models that analyse vast amounts of data, ranging from books and articles to social media posts. The model was trained on a fixed set of data and therefore does not have access to current external information. Machine learning is used to help the AI predict what information would make the most sense in the generated reply. This happens without knowing whether the generated statement is true or false, and without knowing whether it is the answer the user wanted to hear. Machine learning uses data and algorithms that aim to mimic how humans learn and improve. It creates a “neural network”: a computer system organised in a fashion that imitates the network of neurons in the human brain. 
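To make the idea of next-word prediction concrete, here is a deliberately simplified sketch in Python. This is not how ChatGPT works internally, as it relies on large neural networks rather than word counts, but it illustrates the core task: given the text so far, produce a plausible next word from patterns seen in training data, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "language model": it predicts the next word by counting which word
# most often followed the current one in its training text. Like ChatGPT,
# it knows nothing beyond its training data and has no concept of truth.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Count, for each word, how often every other word follows it.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    if word not in follower_counts:
        return None  # the model is blind outside its training data
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on"
```

The sketch makes the article's point visible in miniature: the prediction is driven entirely by what the training text happened to contain, so any bias or error in that text is reproduced in the output.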

ChatGPT is not the only chatbot in circulation that utilises these modern programming techniques. Since ChatGPT’s success, Google and Microsoft have also launched their own chatbots. These tools are hard to avoid in the new reality of 2023, and their implications are numerous, ranging from science and politics to student assignments. It is crucial to understand that although the chatbot is helpful in carrying out daily tasks, it also raises numerous ethical concerns. 

While ChatGPT may help us with trivial daily tasks and editing our essays, the negative effects of such AI tools cannot be ignored. Firstly, while the AI language model has been trained on a large collection of data, users are unable to view or critically assess the data used to provide them with an answer. This raises ethical questions regarding credibility and plagiarism. Machine learning does not distinguish between generated text that is factually correct and backed by scientific evidence and generated text based on conspiracy theories and tabloid articles. There is a risk that as our dependency on these online tools grows, we forget how to be critical and simply accept AI-generated text without assessing whether the answer is factually correct. This concern was amplified when several academic articles cited ChatGPT as a co-author. Many academics voiced their concerns, leading Holden Thorp, editor-in-chief of the leading US journal Science, to announce that ChatGPT could not be listed as a co-author. Additionally, the language model’s inability to credit the articles used in its generated answers raises concerns regarding plagiarism. 

A less dangerous downside of ChatGPT, but one nonetheless worth mentioning, is its knowledge gap. ChatGPT was trained on data from before 2021, which means that events that happened after 2021 are not taken into account when the algorithms generate answers. That said, this limitation can be addressed whenever a new, updated version of ChatGPT is released. The knowledge gap does, however, represent an overarching limitation of AI language systems as a whole: because their functioning relies on training on past data, they will always have a limited understanding of recent events. Still, new developments in the field of artificial intelligence show promising signs for the future. For example, new models may be released that can generate their own training data to improve themselves, or that can fact-check themselves. 

The aforementioned knowledge gap can be linked to an additional concern that has been raised regarding the political nature of ChatGPT. While ChatGPT is officially not allowed to exhibit political bias, recent research suggests that it expresses a left-libertarian orientation. Machine learning systems rely on neural networks, which are not explicitly programmable. However, it is suggested that the program was trained on internet sources from Google and social media feeds. If these sources indeed exhibited progressive language and worldviews, it is very possible that the AI will reflect them. Just as children often mirror their parents’ beliefs, it is completely natural that machine learning systems project what they have been taught. 

The final implication that this article will explore is the impact of AI language systems on students. Increasingly, ChatGPT is used for writing school and university assignments. Whether the AI only helps with the introduction or writes an essay in its entirety, there is no question that AI has infiltrated learning. While no studies have yet researched the cognitive effects ChatGPT has on students, there should be some concern regarding students’ ability to think critically, write independently and search the internet. We asked ChatGPT to rewrite the sentence “the cat sat on the mat” in an academic way. The response was: “A feline companion comfortably settled itself upon the fabric-covered flat surface commonly referred to as a mat”. This demonstrates how invaluable ChatGPT could become for students writing essays. There is little incentive for students to learn the craft of writing when an AI system can do it a hundred times more eloquently. 

While AI language systems have numerous negative effects, some positive elements are worth noting. Most essentially, ChatGPT can function as an indispensable tool in daily tasks such as making packing or grocery lists. An additional advantage of AI language models is their availability in multiple languages. Whilst this does not completely eliminate language barriers, it does provide valuable aid to non-native speakers, who can benefit greatly from these tools to review grammatical, spelling and formatting mistakes. However, a dependency on AI language systems may not help non-native speakers acquire long-term language skills. Furthermore, the AI language model is not limited to language questions. For example, the chatbot can help users with coding problems, providing simple, well-rounded answers that explain the solution step by step. 

In conclusion, ChatGPT will inevitably be part of the future. Just like other platforms developed in past years – namely PowerPoint, YouTube or even Google – the AI language model will accompany students, workers and policy-makers. It would be imprudent not to take the potential advantages of AI language systems seriously, and foolish to think that a few limitations will keep people from using them in the best way they see fit. However, the ethical dilemmas these systems raise are still a cause for concern, as UNESCO has noted. In the academic context, for example, universities are fearful of what AI language systems mean for essays and other academic efforts by students. Worried about academic integrity, many universities are exploring ways to limit the involvement of AI language systems in assignments. However, ChatGPT is a tool that everyone has access to, and it would be extremely difficult to ban completely. An alternative solution could be to enact partial regulations on its use, for instance regulating it in written assignments but not in the research stages, as multiple British universities have done. As with anti-plagiarism algorithms, multiple services are now developing or already offering AI content checkers to universities. Unfortunately, moderation in the use of AI does not happen in reality due to a collective action problem: it is in everyone’s self-interest to use these tools to perfect their assignments. 

Things are changing, and fast. Technology has developed swiftly over the last decade, and there is no way of stopping it. Once upon a time, students struggled to memorise as much information as possible to pass a test. But technology and how we think are complementary; one does not develop without the other. With the current technological trend, we should therefore also adapt the way we approach education. This means that the focus should shift from a need to memorise facts to improving critical thinking and teaching constructive ways of using such digital tools.


Edited by Uilson Jones, artwork by Teresa Valle