ChatGPT: Cheating or contrivance?

A new artificial intelligence (AI) chatbot has taken the internet by storm: ChatGPT. OpenAI released ChatGPT in November 2022, and it attracted the attention of millions of people within the first few days of its release. From explaining the laws of thermodynamics to coding a Tetris game in Java, this powerful AI system can act as a free, seemingly all-knowing personal assistant, so its immense popularity is hardly surprising.

The function of ChatGPT may appear similar to that of search engines like Google, in that users type in a query and expect a mostly accurate response, but ChatGPT differs from these search engines in important ways. First, ChatGPT returns one cohesive answer rather than the millions of blue links and articles that conventional search engines return. Second, ChatGPT can generate new material such as project proposals, articles, cover letters, and even emails to send to one’s manager when sick! Finally, the system has an extremely conversational tone: it answers in complete sentences and remembers previous questions in the same conversation, which lets it tailor later responses to the user’s needs.
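That conversational memory is not magic: each new question is sent to the model together with the earlier turns of the dialogue. The sketch below illustrates the idea using OpenAI’s Python client for its chat API; the model name and prompts are placeholders chosen for illustration, and the ChatGPT website handles this bookkeeping for you behind the scenes.

```python
# Minimal sketch of a multi-turn conversation, assuming the `openai`
# Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# The "memory" is simply the running list of messages that gets
# resent with every request.
history = [{"role": "system", "content": "You are a helpful assistant."}]

for question in [
    "Explain the laws of thermodynamics in one paragraph.",
    "Now summarize that in a single sentence.",  # refers back to the first answer
]:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```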

That being said, ChatGPT does have its limits, and OpenAI openly acknowledges these limitations on its website. On occasion, ChatGPT provides an inaccurate response that nonetheless sounds plausible and is well structured. Additionally, the same question phrased in two slightly different ways may produce two entirely different answers.

So how does ChatGPT work? ChatGPT doesn’t actually know anything by itself; it relies on patterns learned from a vast amount of text gathered from the internet to generate its responses. In total, an estimated 300 billion words were first fed into the system. Next, the system needed to be trained to provide an accurate answer to a given question, a step accomplished using a method called reinforcement learning from human feedback. Human AI trainers played both the AI and the user, writing both sides of example dialogues, and then ranked candidate responses from best to worst. Multiple iterations of this process trained and fine-tuned the model to compare different responses and return the best one.
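A rough intuition for the ranking step: the human rankings are used to train a “reward model” that scores responses, and the chatbot is then tuned to produce responses that score highly. The toy sketch below, assuming PyTorch and made-up response embeddings, shows only that middle piece, a reward model learning from pairwise human preferences; real systems use large transformer networks and far more data.

```python
import torch
import torch.nn as nn

# Toy reward model: scores a response embedding with a single number.
# In practice the reward model is a large transformer; a linear layer
# stands in for it here purely for illustration.
class RewardModel(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

# Fake "embeddings" of two candidate responses to the same question,
# where human trainers ranked the first one higher.
dim = 16
preferred = torch.randn(8, dim)  # batch of responses ranked better
rejected = torch.randn(8, dim)   # batch of responses ranked worse

model = RewardModel(dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Pairwise ranking loss: push the preferred response's score above
    # the rejected one's, mirroring how a reward model learns from
    # human rankings before the chatbot is fine-tuned against it.
    loss = -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once a reward model like this exists, the chatbot itself is fine-tuned with reinforcement learning to maximize the scores the reward model assigns, which is how the human rankings ultimately shape its answers.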

The development of ChatGPT has created many waves within education. ChatGPT raises concerns about how to maintain academic integrity among students, since the system makes answers extremely accessible and can write an essay on virtually any topic in a matter of seconds. In fact, ChatGPT performed well enough to pass a final exam for the Master of Business Administration program at the University of Pennsylvania’s Wharton School. Professor Christian Terwiesch, who administered the exam, stated that this system has a “remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates.” This leads to questions about whether current teaching methods still work and how they may need to change given the rise of powerful AI chatbots like ChatGPT. For example, educators may place more emphasis on students asking the right questions instead of simply giving the right answers.

Regardless, ChatGPT can be a very useful tool for students to learn from if used correctly. Its deployment and its shocking capabilities prompt us to change our thinking while also giving us a peek at what’s to come in the world of AI.

Image courtesy of Wikimedia Commons