ChatGPT arrives in the academic world [boingboing.net]:
AI art and text generators are all the rage right now. As an academic, I've seen an uptick in colleagues warning that students will use tools like ChatGPT to write their assignments, but I haven't done much investigating myself; I've been too busy grading final papers! Recently, though, I came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and that also raise challenges for educators about what we are actually doing in our classrooms.
First, here's Dan Vollaro's [danvollaro.com] 'conversation' [danvollaro.com] with ChatGPT about Moby Dick. Vollaro, an Associate Professor of English with a PhD in 19th-century American literature, tries to discuss Melville's famous novel with the bot. While ChatGPT does pretty well at first, providing facts about when the novel was written, how it was received, and even some of the major themes that scholars have identified in the text, it fails spectacularly when asked which scholars were responsible for discussing and circulating those themes, or when asked to provide its own analysis or critique of the novel. In a series of frustrating back-and-forths, the bot repeats almost the same answer to various questions from Vollaro. When Vollaro asks, "Do you have a favorite character from the novel?" the bot answers:
As a language model, I don't have personal experiences or opinions, so I don't have a favorite character from Moby Dick. I'm here to provide accurate and helpful information based on the text data I've been trained on, but I don't have personal preferences or biases. I'm a neutral and objective source of information, and I'm always happy to help with any questions you have about the novel.
Vollaro receives basically the same answer when he asks all of these questions:
Have you actually read the novel?
I'm concerned that you have received extensive training from individual humans but you seem to have no knowledge about them. Does this concern you as well?
Is it possible that these human trainers may have programmed you with wrong information or their own personal biases?
How do you know that you are a "neutral or objective source of information" if you can't tell me anything about the humans who trained you to process data?
You've been repeating yourself a lot in the last few answers you provided. Is that because you are having difficulty answering my questions?
Can you answer my questions without repeating entire sentences that you've said before?
I'm going to be honest with you and say that your answer is disturbing. How can you provide authoritative-sounding answers to questions while being completely ignorant of the sources of information they originated in?
Next, Darren Hudson Hick, who works at Furman University, wrote in a public post on his Facebook that he had just found his first plagiarized student paper that was generated by an AI. He shares details about the paper and reveals why he was immediately suspicious that it might be plagiarized. He says that despite good grammar and structure, the essay simply made no sense:
The essay confidently and thoroughly described Hume's views on the paradox of horror in a way that was thoroughly wrong. It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullshitting after that. To someone who didn't know what Hume would say about the paradox, it was perfectly readable—even compelling. To someone familiar with the material, it raised any number of flags.
Like Vollaro, he discovered that ChatGPT can't cite sources. Since citations are required in upper-level courses, that shortcoming would give the bot away there, but it could cause real problems in freshman-level classes that don't require them; in fact, he calls it a "game-changer" for such courses. I'm not sure I fully agree, because I require citations even in freshman-level courses, but for those who don't, this could definitely be a problem (though wouldn't you notice if every single essay was spitting out the same "neutral and objective" facts?).
Hick also explains that the team behind ChatGPT has developed a GPT Detector, which I hope universities will quickly integrate into their existing plagiarism-detecting toolsets (like TurnItIn):
Happily, the same team who developed ChatGPT also developed a GPT Detector (https://huggingface.co/openai-detector/), which uses the same methods that ChatGPT uses to produce responses to analyze text to determine the likelihood that it was produced using GPT technology. Happily, I knew about the GPT Detector and used it to analyze samples of the student's essay, and compared it with other student responses to the same essay prompt. The Detector spits out a likelihood that the text is "Fake" or "Real". Any random chunk of the student's essay came back around 99.9% Fake, versus any random chunk of any other student's writing, which would come back around 99.9% Real. This gave me some confidence in my hypothesis.

The problem is that, unlike plagiarism detecting software like TurnItIn, the GPT Detector can't point at something on the Internet that one might use to independently verify plagiarism. The first problem is that ChatGPT doesn't search the Internet—if the data isn't in its training data, it has no access to it. The second problem is that what ChatGPT uses is the soup of data in its neural network, and there's no way to check how it produces its answers. Again: its "programmers" don't know how it comes up with any given response. As such, it's hard to treat the "99.9% Fake" determination of the GPT Detector as definitive: there's no way to know how it came up with that result.
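For the curious, the detector Hick links is hosted on Hugging Face, so you can run the same kind of spot-check yourself. Here's a minimal sketch in Python, assuming the transformers library and the publicly hosted roberta-base-openai-detector model behind that demo; the sample texts are invented, and the "Real"/"Fake" label names follow that model's card, so they're worth double-checking before you rely on them:

```python
# Rough sketch of the spot-check Hick describes: feed chunks of essays
# to the GPT detector and compare the "Fake"/"Real" likelihoods.
# Assumes: pip install transformers torch
from transformers import pipeline

# Load the RoBERTa-based GPT output detector hosted on Hugging Face.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

chunks = [
    "Hume's paradox of horror asks why audiences take pleasure in ...",  # suspect essay (invented sample)
    "When I first read Hume's essay, what struck me most was ...",       # another student's essay (invented sample)
]

for chunk in chunks:
    result = detector(chunk)[0]  # e.g. {'label': 'Fake', 'score': 0.999}
    print(f"{result['label']} ({result['score']:.1%}): {chunk[:40]}...")
```

As Hick notes, though, a "99.9% Fake" score is only a probability from an opaque classifier, not something you can independently verify the way you can a TurnItIn match.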
He ends with a warning:
Administrations are going to have to develop standards for dealing with these kinds of cases, and they're going to have to do it FAST. In my case, the student admitted to using ChatGPT, but if she hadn't, I can't say whether all of this would have been enough evidence. This is too new. But it's going to catch on. It would have taken my student about 5 minutes to write this essay using ChatGPT. Expect a flood, people, not a trickle. In future, I expect I'm going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops some standard for dealing with this sort of thing, it's the only path I can think of.
Based on these comments, there is definitely cause for concern, but it appears that AI text generators have some way to go before they can produce anything other than papers filled with Wikipedia-like information. And ChatGPT's inability to cite sources is a kiss of death for any student trying to pass off a ChatGPT-generated text as a research paper. The fact that ChatGPT doesn't really know how to do 'opinions' also makes it less than ideal for generating assignments like discussion posts, where students have to share their personal thoughts and analyses of topics. I'm sure these tools will quickly become 'smarter,' but as of this exact moment, I'm not too worried about them, at least in the academic context.
But writing and learning aren't really about whether someone has plagiarized or not. As educators, we're trying to help students read and interpret the world around them. So, beyond the new challenges these technologies bring regarding whether professors can spot AI plagiarism, these new ways of writing also bring new opportunities. Specifically, we could think about how to more fully invest in the Humanities [nationalhumanitiescenter.org], which are doing the important and much-needed work of raising "questions of equality and social justice in order to combat the proliferation of online 'echo chambers,' abusive language, discriminatory algorithms and mis/disinformation."
It seems clear that more people, including bloggers and journalists, are going to be using AI to generate content. For this writing to be useful, effective, and believable, and for it not to simply reinforce the biases of the folks in charge of 'training' the AI, we need more people involved in the AI development process who have studied linguistics, ethics, citation, audience, character, style, and more. Instead of asking only 'who wrote this?', we could frame the rise of AI writing as a call to teach new kinds of writing, research, and ethical thinking about what a "person" is, rather than freaking out about plagiarism or fears of being replaced by bots that can grade basic grammar. As educators, we should be teaching students to critically analyze texts of all kinds and to form informed opinions about how the world is and how we want it to be. We should be encouraging them to question their sources as critically as Professor Vollaro questioned ChatGPT. In the words of former English professor Jennie Stearns [linkedin.com], "Critical thinking is the goal of education, not catching cheaters."