Universities might be missing out on money due to Artificial Intelligence (AI).
Research containing AI-generated mistakes is being accepted for publication.
And people relying on AI to do their jobs are starting to get caught.
ChatGPT
OpenAI is the company that gave us ChatGPT.
ChatGPT is a large language model: it uses algorithms trained on text from the internet to generate writing.
The writing it produces is artificial intelligence at work.
You could ask ChatGPT to write something, and it will give you an answer.
It has passed medical, law, reading and writing exams.
And ChatGPT is not just passing; on some exams it has scored around the 90th percentile.
So it's good. Really good.
Australian Research Council
The Australian Research Council (ARC) gives money to universities.
Academics and researchers receive more than A$800 million in funding each year.
ARC assessors review proposals to decide which projects get funded.
But there is evidence AI has been used to assess proposals.
One assessor saw "Regenerate response" in a report: text that appears to have been accidentally copied from a ChatGPT response.
The assessor said:
"I think it’s a sign of someone being overworked and trying to cut corners"
And ARC said:
"the ARC advises that peer reviewers should not use AI as part of their assessment activities"
But AI is good and can be helpful.
The issue seems to be misuse of AI. Not AI itself.
Misuse of AI in Research
AI causes harm when it is used badly.
You could use AI to write faster. But you could also use it to copy others, to avoid being caught, or to make things up.
And making things up is not good for academic integrity.
Some people even use AI to impersonate others. And AI is now so good that it's "becoming increasingly difficult to distinguish between writing produced by AI and writing produced by a person."
Using AI to detect AI is one option.
But AI detectors can be easily tricked, sometimes by simply adjusting punctuation.
AI detectors also make mistakes, such as flagging human writing as AI-generated.
Meanwhile, some scientists using AI are getting their work published despite obvious errors.
But as technology gets better, people will use it more.
And more AI use, means we need to be better at spotting misuse.
Spotting misuse
There was a paper published to highlight this misuse issue.
After the paper’s conclusion, it says:
“As the alert reader may already have guessed, everything up to this point in the paper was written directly by ChatGPT,”
Peter Cotton, one of the authors, said they wanted to surprise the readers.
So they gave ChatGPT prompts to generate parts of the paper.
After adding subheadings and some references to the responses, they had a paper.
But ChatGPT had fabricated references and used other people's words to write the article.
The example image above shows an obvious use of AI that was not caught: something people in the review process will need to think about.
But it is hard to tell the difference between human and AI writing.
And if AI writing can get through peer review, it could open up another way to game the publishing system.
Although some misuse is being noticed, much remains undetected.
It is a developing issue that could affect science, research, education and beyond.