ChatGPT and University Assessments

Chat Generative Pre-Trained Transformer (ChatGPT) is an artificial intelligence tool that has been trained using deep learning algorithms to generate conversational responses to user prompts. It is a conversational chatbot that answers questions and provides information, rapidly generating a typed response. According to the OpenAI website, the trained model can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

I became aware of ChatGPT in early December 2022, although I didn’t get a chance to properly play with it until the day before my holidays, when I had a quieter inbox. The technology was released for public use on November 30, 2022, and during this research preview, ChatGPT is free for public use. In recent days, the number of users has reached capacity, so it has not always been available.

I am an advocate of incorporating technology in learning. I use polling tools such as Mentimeter in the classroom to promote active learning. I have flipped the student role using PeerWise, where students create multiple choice questions, write explanations, and evaluate the work of their peers. My research evidence supports this technology-enhanced learning. My curiosity was naturally piqued by ChatGPT. As an academic, I had questions: Can ChatGPT answer assessment questions? If yes, what is the quality like? Can it be detected by the plagiarism software Turnitin? If several users use the same prompt, will ChatGPT generate very similar responses?

Here’s my perspective from investigating ChatGPT. I wanted to better understand its functionality. My discipline is chemistry, and the examples I share are chemistry assessment questions.

Can ChatGPT answer assessment questions?

In some instances, yes, but not all assessment questions. For questions that focus on knowledge and understanding, with “describe” and “discuss” verbs, ChatGPT can generate responses. For questions that focus on application and interpretation of knowledge, ChatGPT reaches a limitation. Questions that refer to information presented, for example, in a figure or graph cannot be answered.

Its knowledge cut-off is 2021, which became apparent in a question that referred to a medicine approved in 2021.

Generated responses were observed not to exceed 600 words. A prompt asking for a 2000-word answer did not produce the requested word count: the responses were 519 and 549 words for two different requests.

What is the quality of ChatGPT responses?

Generated responses tend to have a good structure and are well written. The quality of answers varies, and one answer was found to contain an error. This was a surprise, as a Google search for the information returned the correct answer. Questions that required more complex analysis or interpretation were poorly answered. The ChatGPT response shown would not meet the pass criteria.

Can ChatGPT include academic references?

Yes, it can, when the prompt requests references. The examples show the iterations in ChatGPT where “give appropriate references” is included in the prompt; this produces weblinks as the reference sources. In the third iteration, “do not use Wikipedia” and a specific reference style format were included in the prompt, and the quality of references improved from weblinks to academic texts.

It does require the user to understand the objectives of the task and to have the ability to critically evaluate the outputs.

Will Turnitin detect ChatGPT responses?

Turnitin is a web-based text matching service that compares assessment work submitted by students against electronic sources, databases and other student submissions to identify duplication or cheating. Turnitin did not produce a high percentage matching score for the ChatGPT-generated responses. For questions that requested structures to be drawn, only text answers were generated, which is not the typical format for some disciplinary topics, and this would alert me as the assessor. Another interesting outcome is that responses generated from different user accounts were unique.


My Thoughts

There has been a lot of ChatGPT hype in recent days. I don’t fear this technology. I can foresee educational benefits with ChatGPT, particularly as a developmental tool for learners. The most important aspect to consider as educators is our assessment design. What exactly are we assessing? This is something we have questioned, particularly during the COVID lockdown and the pivot to online teaching and assessment. We needed to produce “take away” exams that could be completed online, in open book format and during an appropriate time window. Reframing our assessments from recall-based tasks to questions that required students to demonstrate how they use information was key.

Application and interpretation of knowledge are not well handled by ChatGPT. Using problem-solving, data interpretation or case-study based questions are ways to redesign assessment beyond knowledge-based questions. This disruptive technology will help educators to question why we are doing things this way. That has to be a good thing in my book! Or maybe I will ask ChatGPT 🙂

(Date 19th January 2023)

Are Learning Styles a myth?

When I started my academic career, over 16 years ago, I was introduced to learning styles. As a group of new lecturers, we completed a purchased questionnaire within our multidisciplinary teams.

“I’m a reflector”, “I’m a pragmatist”, “I’m an activist”.

I was a novice to learning theory but eager to understand anything that might help my teaching. Learning styles sounded like something tangible to use and apply. The key message from that session – students are not the same and learn differently.

However, there was no “what next”, no critical discussion or evaluation from practice. Distributing a questionnaire to all my students was not practical, and even if I did, what would I change in my teaching? I was somewhat confused…

Learning styles refer to the belief that different people learn information in different ways.

There are at least 71 different learning style models described by Coffield et al. (2004).

The premise behind learning styles is that if the teaching approach matches the preferred learning style, this will lead to improved performance. So for a “visual learner”, information should be presented visually to match, or mesh with, the learning style.

Pashler et al. (2009) provide an excellent critical review including their methodology to test the learning style hypothesis.
Any research study must:

    1. Divide learners into two or more groups (e.g. visual and auditory)
    2. Within each learning-style group, learners must be randomly assigned to one of at least two different instructional methods.
    3. The same test must be used with all learners.
    4. The results must demonstrate that the learning method providing optimal test performance for one learning-style group is different from the learning method that optimizes the performance of a second learning-style group.

Pashler et al. illustrate acceptable evidence with crossover interactions, where the learning method with the highest mean test score for one category of learners is different from the learning method producing the highest mean test score for the other category of learners. In other words, if the same learning method optimizes the mean test score of both groups, the result does not provide evidence to support the learning style hypothesis.
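The crossover criterion can be sketched in a few lines of code. This is a minimal illustration, not part of Pashler et al.’s paper: the mean test scores below are invented for the example, and the function simply checks whether the best-scoring method differs between the two learning-style groups.

```python
# Hypothetical mean test scores for each (learning-style group, method) pair.
# These numbers are invented purely to illustrate a crossover pattern.
scores = {
    "visual":   {"method_A": 78.0, "method_B": 64.0},
    "auditory": {"method_A": 61.0, "method_B": 75.0},
}

def shows_crossover(scores):
    """Return True if the best-scoring method differs between groups,
    i.e. the crossover pattern required as evidence for the hypothesis."""
    best = {group: max(methods, key=methods.get)
            for group, methods in scores.items()}
    return len(set(best.values())) > 1

print(shows_crossover(scores))  # prints True: method_A wins for visual, method_B for auditory
```

If the same method produced the highest mean for both groups, the function would return False, which is exactly the outcome Pashler et al. say fails to support the learning style hypothesis.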

They found only one study that potentially met the criteria, and even it offered questionable evidence, with methodological issues including the removal of outliers and no reporting of mean scores for each final assessment.

So “what’s your learning style” has no supporting evidence. There are, however, evidence-based strategies that support learning. I use these, and if you are interested in understanding how to study smarter, then my book Study Smarter: a lecturer’s inside guide to boost your grades will be a useful resource. It’s easy to read and aimed at learners.