Generative AI has challenged universities’ academic misconduct units. Hundreds of Sydney University students were accused of using AI to cheat in 2023, while there were 166 substantiated AI cheating cases at UNSW the same year.
Universities are concerned AI detection tools are ineffective and easy to get around.
Engineering and commerce student Angad Chawla, 20, and recent pharmacy graduate Helia Nateghi Baygi, 27, are part of a Sydney University AI working group and regularly use AI in their coursework.
“I do believe there are issues of overreliance – just walking through the library, everyone has that [ChatGPT] tab open,” Chawla said.
But he said students ultimately realised they could not succeed by relying on AI tools, as their understanding would eventually be tested, either in an exam or a job interview.
Academics had also become savvier at designing assessments, Chawla said. “There are times when the AI doesn’t cut it, plain and simple,” he said.
Baygi said some students were confused about when AI use was allowed, with different rules for different units. She welcomed a university-wide approach.
“Calculators never killed mathematics skills,” she said. “It’s better that we embrace this technology and empower students rather than banning it altogether.”
Sydney University is one of the first Australian universities to expressly allow the use of AI in non-secure assessments.
Melbourne University has not banned AI but requires students to disclose its use; representing generated material as their own ideas could be considered misconduct.
At UNSW, teachers set a level of acceptable AI use for each assessment.
The Sydney University changes will require a major shake-up of the institution’s assessments, all of which will be categorised as either “lane one”, held under exam-like conditions, or “lane two”, treated as open book and permitted to integrate AI.
Bridgeman said there would be parts of degrees, such as program majors, in which students would be required to complete secure “lane one” assessments where AI use was controlled or banned. All other marked work would be considered “lane two”.
Secure assessments could include tasks such as interactive oral assessments and supervised pen and paper examinations.
“I think it’s actually strengthening our position on integrity – it’s certainly not giving up,” Bridgeman said.
“It’s realising that AI is here to stay but that we expect our students to do the things they say they can do.”
Bridgeman said the changes to assessment, requiring key skills be assessed in person, would help fight contract cheating, in which students paid others to do their work.
Deakin University cheating detection expert Professor Phillip Dawson said Sydney University’s new policy showed an acceptance that you couldn’t control a student’s AI use unless you were watching them directly.
“[The policy] is probably where the sector is going to go in the long term, but it’s going to take a while to get here,” he said.
“Everyone needs to look at what restrictions on AI they’re setting and how feasible they are to enforce. Students know when we’re setting pretend rules.”
Dawson said people wrongly assumed they could accurately spot when someone used AI, or that AI detectors would find students using the tools.
He said Sydney University’s new policy needed to be carefully implemented, so that students were tested in a secure environment, at important moments in their degrees, on whether they had key skills.
- Main image has been digitally altered.