ChatGPT Releases New Tool to Identify AI Writing
A new tool from the maker of ChatGPT aims to help deal with concerns
about how artificial intelligence (or AI) can be used to cheat in school.
The tool is called AI Text Classifier. It is designed to identify
writing that was produced not by students but by AI programs.
The tool was launched by OpenAI, an AI technology company based in San
Francisco. The company is the maker of ChatGPT, an AI system that can
produce any kind of writing on demand. Many education officials are
concerned that ChatGPT could fuel academic dishonesty and harm learning.
But OpenAI warns that its new AI Text Classifier tool – like others
already available – is not perfect. The method for detecting AI-written
text “will be wrong sometimes,” said Jan Leike of OpenAI.
“Because of that,” he added, “it shouldn’t be solely relied upon when
making decisions.”
In the U.S.
Teenagers and college students were among the millions of people who
began experimenting with ChatGPT after it launched on November 30. The
tool is a free service on OpenAI’s website.
Many people have found ways to use it creatively and harmlessly. Still,
some educators are concerned about the ease with which it could answer
take-home test questions or do other assignments.
School districts around the country report they are seeing discussions
about ChatGPT change quickly.
By the time schools opened for the new year, New York City, Los Angeles,
and other big public school districts in the United States had begun to
block its use in classrooms and on school devices.
The Seattle Public School district blocked ChatGPT on all school devices
in December but then opened it to educators. District spokesman Tim
Robinson said teachers wanted to use ChatGPT as a teaching tool.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing expanding the use of ChatGPT to
classrooms to let teachers use it to teach students critical thinking.
Students could also use the service as a “personal tutor” or to help
create ideas when working on an assignment, Robinson said.
OpenAI recently wrote about the limitations of its detection tool in a
blog post. But the company added that, in addition to catching
plagiarism, the tool could help find disinformation campaigns and misuse
of AI to mimic humans.
The longer a piece of writing is, the better the tool is at detecting
whether an AI system or a human wrote it. AI Text Classifier can examine
any piece of writing, whether it is a college admissions essay or a
literary study of Ralph Ellison’s Invisible Man. The tool then rates the
writing as “very unlikely, unlikely, unclear if it is, possibly, or
likely” AI-created.
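The five labels describe how confident the tool is that a piece of writing was AI-created. As a rough illustration only, here is a minimal Python sketch of how a score-to-label scheme like this might work; the function name and the threshold values are assumptions made for the example, not details OpenAI has published.

```python
# Hypothetical sketch: map a 0.0-1.0 "AI-written" confidence score to the
# five labels named in the article. Thresholds are illustrative assumptions,
# not OpenAI's actual, undisclosed method.
def label_ai_likelihood(score: float) -> str:
    if score < 0.10:
        return "very unlikely"
    if score < 0.45:
        return "unlikely"
    if score < 0.90:
        return "unclear if it is"
    if score < 0.98:
        return "possibly"
    return "likely"

# Example: a score of 0.93 would be reported as "possibly" AI-created.
print(label_ai_likelihood(0.93))
```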
But, much as with ChatGPT itself, it is not easy to say how AI Text
Classifier arrives at a result, Leike said.
There is a lot about the tool that is still not well understood. He said,
“There’s really not much we could say at this point about how the
classifier actually works.”
International opinions
Colleges around the world have also begun debating responsible use of AI
technology. The Paris Institute of Political Studies, or Sciences Po,
one of France’s most famous universities, recently banned its use.
Sciences Po warned that anyone found using ChatGPT and other AI tools to
produce written or spoken work could be banned from the school and other
institutions.
To answer criticism, OpenAI said it has been working for several weeks
to create new recommendations to help educators.
France’s digital economy minister Jean-Noël Barrot recently met in
California with OpenAI leaders, including CEO Sam Altman. A week later,
Barrot told people gathered at the World Economic Forum in Davos,
Switzerland, that he was hopeful about the technology. But the government
minister said there are also difficult moral questions that will need to
be dealt with.
“So if you’re in the law faculty, there is room for concern because
obviously ChatGPT, among other tools, will be able to deliver exams that
are relatively impressive,” he said. “If you are in the economics
faculty, then you’re fine because ChatGPT will have a hard time finding
or delivering something that is expected when you are in a graduate-level
economics faculty.”
He said it will be increasingly important for users to understand how
these systems work so they know what biases might exist.