posted by janrinok on Tuesday December 20 2022, @09:11PM   Printer-friendly
from the he-gets-it dept.

ChatGPT arrives in the academic world:

AI art and text generators are all the rage right now. As an academic, I've seen an uptick in colleagues issuing warnings about students using tools like ChatGPT to create assignments, but I haven't yet really done too much investigation—I've been too busy grading final papers! But I recently came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and also raise challenges for educators about what we are actually doing in our classrooms.

First, here's Dan Vollaro's 'conversation' with ChatGPT about Moby Dick. Vollaro is an Associate Professor of English with a PhD in 19th Century American literature, and he tries to discuss Melville's famous novel with the bot. While ChatGPT does pretty well at first—providing facts about when the novel was written, how it was received, and even some of the major themes that scholars have identified in the text—it fails spectacularly when asked which scholars were responsible for discussing and circulating those themes, or when asked to provide its own analysis or critique of the novel. In a series of frustrating back-and-forths, the bot repeats almost the same answer to various questions from Vollaro. When Vollaro asks, "Do you have a favorite character from the novel?" the bot answers:

As a language model, I don't have personal experiences or opinions, so I don't have a favorite character from Moby Dick. I'm here to provide accurate and helpful information based on the text data I've been trained on, but I don't have personal preferences or biases. I'm a neutral and objective source of information, and I'm always happy to help with any questions you have about the novel.

[...] Next, Darren Hudson Hick, who works at Furman University, wrote in a public post on his Facebook that he had just found his first plagiarized student paper that was generated by an AI. He shares details about the paper and reveals why he was immediately suspicious that it might be plagiarized. He says that despite good grammar and structure, the essay simply made no sense:

The essay confidently and thoroughly described Hume's views on the paradox of horror in a way that was thoroughly wrong. It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullshitting after that. To someone who didn't know what Hume would say about the paradox, it was perfectly readable—even compelling. To someone familiar with the material, it raised any number of flags.

Like Vollaro, he also discovered that ChatGPT can't cite sources. Citations are required in upper-level courses, so the omission is a giveaway there, but it could cause real problems in freshman-level classes that don't require them. In fact, he says it's a "game-changer" for such courses. I'm not sure I fully agree, because I require citations even in freshman-level courses, but for those who don't, this could definitely be a problem (but wouldn't you notice if every single essay was spitting out the same "neutral and objective" facts?).

Hick also explains that the team behind ChatGPT has developed a GPT Detector, which I hope universities will quickly integrate into their existing plagiarism-detection toolsets (like TurnItIn):

Happily, the same team who developed ChatGPT also developed a GPT Detector (https://huggingface.co/openai-detector/), which uses the same methods that ChatGPT uses to produce responses to analyze text to determine the likelihood that it was produced using GPT technology. Happily, I knew about the GPT Detector and used it to analyze samples of the student's essay, and compared it with other student responses to the same essay prompt. The Detector spits out a likelihood that the text is "Fake" or "Real". Any random chunk of the student's essay came back around 99.9% Fake, versus any random chunk of any other student's writing, which would come around 99.9% Real. This gave me some confidence in my hypothesis. The problem is that, unlike plagiarism detecting software like TurnItIn, the GPT Detector can't point at something on the Internet that one might use to independently verify plagiarism. The first problem is that ChatGPT doesn't search the Internet—if the data isn't in its training data, it has no access to it. The second problem is that what ChatGPT uses is the soup of data in its neural network, and there's no way to check how it produces its answers. Again: its "programmers" don't know how it comes up with any given response. As such, it's hard to treat the "99.9% Fake" determination of the GPT Detector as definitive: there's no way to know how it came up with that result.
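Hick's chunk-by-chunk comparison—scoring random chunks of the suspect essay against chunks of other students' essays—can be sketched as below. The real detector is a fine-tuned classifier hosted on Hugging Face; the `score` callable here is a stand-in for whatever model you plug in (anything that maps a passage to an estimated probability of being machine-generated), so this is only the aggregation workflow, not the detector itself:

```python
from typing import Callable, List


def chunk_text(text: str, words_per_chunk: int = 50) -> List[str]:
    """Split an essay into fixed-size word chunks for independent scoring."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]


def fake_likelihoods(text: str, score: Callable[[str], float],
                     words_per_chunk: int = 50) -> List[float]:
    """Score every chunk; consistently high values across chunks is what
    gave Hick confidence, versus consistently low values for human essays."""
    return [score(chunk) for chunk in chunk_text(text, words_per_chunk)]
```

With a real classifier supplied as `score`, you would compare the per-chunk likelihoods of the suspect essay against those of classmates' essays on the same prompt, as Hick describes—keeping in mind his caveat that the number is a statistical hint, not independently verifiable evidence.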

[...] He ends with a warning:

Administrations are going to have to develop standards for dealing with these kinds of cases, and they're going to have to do it FAST. In my case, the student admitted to using ChatGPT, but if she hadn't, I can't say whether all of this would have been enough evidence. This is too new. But it's going to catch on. It would have taken my student about 5 minutes to write this essay using ChatGPT. Expect a flood, people, not a trickle. In future, I expect I'm going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops some standard for dealing with this sort of thing, it's the only path I can think of.


Original Submission

Related Stories

A Watermark for Chatbots can Expose Text Written by an AI 5 comments

The tool could let teachers spot plagiarism or help social media platforms fight disinformation bots:

Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we're reading are written by a human or not.

These "watermarks" are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.

For example, since OpenAI's chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they're released could help address such problems.

In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta's open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that's yet to be peer-reviewed, and the code will be available for free around February 15.
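The scheme the Maryland researchers describe works roughly like this: at each generation step, the previous token seeds a pseudorandom partition of the vocabulary into a "green" list and a "red" list, and the model is nudged toward green tokens; a detector then counts green tokens and checks whether the count is statistically improbable for ordinary text. A toy sketch of the detection side (the hash-based partition and the always-pick-green generator here are illustrative simplifications, not the paper's actual algorithm):

```python
import hashlib
import math
import random


def is_green(prev_token: int, token: int) -> bool:
    """Toy green-list membership: hash the (previous token, token) pair so
    that roughly half the vocabulary is 'green' at each step (gamma = 0.5)."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0


def watermark_z_score(tokens, gamma=0.5):
    """Count green tokens and compare the count to the chance rate gamma.
    A large z-score means the text is very unlikely to be unwatermarked."""
    n = len(tokens) - 1
    green = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    z = (green - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
    return green / n, z


def generate_watermarked(length=200, vocab=1000, seed=1):
    """Toy 'watermarking' generator: always emit a green token."""
    rng = random.Random(seed)
    tokens = [rng.randrange(vocab)]
    while len(tokens) < length:
        candidates = [t for t in range(vocab) if is_green(tokens[-1], t)]
        tokens.append(rng.choice(candidates))
    return tokens
```

Watermarked output comes back with a green-token fraction near 1.0 and a huge z-score, while ordinary token sequences hover around the chance rate—which is why detection can work "with near certainty" without ever consulting the model that produced the text. It also makes the article's limitation concrete: if the generator never biased toward green tokens, there is simply nothing for the detector to count.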

[...] There are limitations to this new method, however. Watermarking only works if it is embedded in the large language model by its creators right from the beginning. Although OpenAI is reputedly working on methods to detect AI-generated text, including watermarks, the research remains highly secretive. The company doesn't tend to give external parties much information about how ChatGPT works or was trained, much less access to tinker with it. OpenAI didn't immediately respond to our request for comment.



Original Submission

Some Teachers Are Now Using ChatGPT to Grade Papers 68 comments

https://arstechnica.com/information-technology/2024/03/some-teachers-are-now-using-chatgpt-to-grade-papers/

In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?
[...]
"Make feedback more actionable with AI suggestions delivered to teachers as the writing happens," Writable promises on its AI website. "Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores." The service also provides AI-written writing-prompt suggestions: "Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs."
[...]
The reliance on AI for grading will likely have drawbacks. Automated grading might encourage some educators to take shortcuts, diminishing the value of personalized feedback. Over time, the augmentation from AI may allow teachers to be less familiar with the material they are teaching. The use of cloud-based AI tools may have privacy implications for teachers and students. Also, ChatGPT isn't a perfect analyst. It can get things wrong and potentially confabulate (make up) false information, possibly misinterpret a student's work, or provide erroneous information in lesson plans.
[...]
there's a divide among parents regarding the use of AI in evaluating students' academic performance. A recent poll of parents revealed mixed opinions, with nearly half of the respondents open to the idea of AI-assisted grading.

As the generative AI craze permeates every space, it's no surprise that Writable isn't the only AI-powered grading tool on the market. Others include Crowdmark, Gradescope, and EssayGrader. McGraw Hill is reportedly developing similar technology aimed at enhancing teacher assessment and feedback.

Amid ChatGPT Outcry, Some Teachers are Inviting AI to Class 3 comments

Under the fluorescent lights of a fifth grade classroom in Lexington, Kentucky, Donnie Piercey instructed his 23 students to try and outwit the "robot" that was churning out writing assignments:

The robot was the new artificial intelligence tool ChatGPT, which can generate everything from essays and haikus to term papers within seconds. The technology has panicked teachers and prompted school districts to block access to the site. But Piercey has taken another approach by embracing it as a teaching tool, saying his job is to prepare students for a world where knowledge of AI will be required.

"This is the future," said Piercey, who describes ChatGPT as just the latest technology in his 17 years of teaching that prompted concerns about the potential for cheating. The calculator, spellcheck, Google, Wikipedia, YouTube. Now all his students have Chromebooks on their desks. "As educators, we haven't figured out the best way to use artificial intelligence yet. But it's coming, whether we want it to or not."

The article goes on to describe different exercises Piercey uses and comments from other teachers who are using ChatGPT to enhance their lessons.

[...] The fifth graders seemed unaware of the hype or controversy surrounding ChatGPT. For these children, who will grow up as the world's first native AI users, their approach is simple: Use it for suggestions, but do your own work.



Original Submission

What to Expect When You're Expecting ... GPT-4 11 comments

Although ChatGPT can write about anything, it is also easily confused:

As 2022 came to a close, OpenAI released an automatic writing system called ChatGPT that rapidly became an Internet sensation; less than two weeks after its release, more than a million people had signed up to try it online. As every reader surely knows by now, you type in text, and immediately get back paragraphs and paragraphs of uncannily human-like writing, stories, poems and more. Some of what it writes is so good that some people are using it to pick up dates on Tinder ("Do you mind if I take a seat? Because watching you do those hip thrusts is making my legs feel a little weak.") Others, to the considerable consternation of educators everywhere, are using it to write term papers. Still others are using it to try to reinvent search engines. I have never seen anything like this much buzz.

Still, we should not be entirely impressed.

As I told NYT columnist Farhad Manjoo, ChatGPT, like earlier, related systems is "still not reliable, still doesn't understand the physical world, still doesn't understand the psychological world and still hallucinates."

[...] What Silicon Valley, and indeed the world, is waiting for, is GPT-4.

I guarantee that minds will be blown. I know several people who have actually tried GPT-4, and all were impressed. It truly is coming soon (Spring of 2023, according to some rumors). When it comes out, it will totally eclipse ChatGPT; it's a safe bet that even more people will be talking about it.

Dishonor Code: What Happens When Cheating Becomes the Norm? 20 comments

Students say they are getting 'screwed over' for sticking to the rules. Professors say students are acting like 'tyrants.' Then came ChatGPT:

When it was time for Sam Beyda, then a freshman at Columbia University, to take his Calculus I midterm, the professor told students they had 90 minutes.

But the exam would be administered online. And even though every student was expected to take it alone, in their dorms or apartments or at the library, it wouldn't be proctored. And they had 24 hours to turn it in.

"Anyone who hears that knows it's a free-for-all," Beyda told me.

[...] For decades, campus standards have been plummeting. The hallowed, ivy-draped buildings, the stately quads, the timeless Latin mottos—all that tradition and honor have been slipping away. That's an old story. Then Covid struck and all bets were off. With college kids doing college from their bedrooms and smartphones, and with the explosion of new technology, cheating became not just easy but practically unavoidable. "Cheating is rampant," a Princeton senior told me. "Since Covid there's been an increasing trend toward grade inflation, cheating, and ultimately, academic mediocrity."

Now that students are back on campus, colleges are having a hard time putting the genie back in the bottle. Remote testing combined with an array of tech tools—exam helpers like Chegg, Course Hero, Quizlet, and Coursera; messaging apps like GroupMe and WhatsApp; Dropbox folders containing course material from years past; and most recently, ChatGPT, the AI that can write essays—have permanently transformed the student experience.

[...] On January 2, a Princeton University computer science major named Edward Tian—who may be the most hated man on campus—tweeted: "I spent New Years building GPTZero—an app that can quickly and efficiently detect whether an essay is ChatGPT or human written."

So now it's nerd vs. nerd, and one of the nerds is going to win—probably whoever gets more venture funding. Everything is up in the air.



Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2) by looorg on Tuesday December 20 2022, @10:43PM (2 children)


I have yet to see one in person, as far as I know. I would like to think I could spot one a mile away, but you never know.

So while some students might think ChatGPT doing all the work for them sounds great and all, they will soon find that things start to suck hard for them. Even though these papers are clearly subpar at the moment, one would assume they will just get better and better with each version and iteration of the software. Clearly there are some big faults at the moment. But it will learn to cite its sources, which at first will be bad since those sources will be garbage akin to "Joe Website told me so."

To counter this, all, or most, assignments will be changed and adapted to reflect this. All assignments will now be to analyze and compare different perspectives and ideas, all while also backing up your opinions. Something ChatGPT apparently is crap at for the moment. But it clearly works better for some topics and subjects than others.

    When that doesn't work anymore, or the software becomes better, this will just be ramped up one stage and all smaller papers and assignments will have to be written on site at the university under supervision. Just like normal test taking. Here enjoy your weekend with a 2x8h writing session at the uni library with all your classmates! You can't bring anything and you can't take anything with you when you leave for the day.

Larger papers and such that take months to produce will, in theory, have to involve some kind of revision tracking: since they are normally not written in one stage, you'll be expected to hand in revisions during the time allotted so that the stages of development can be followed. Add in that more and more things, not just your thesis, will have to be defended in person and discussed among staff and students, which is really bad for shy and socially awkward people. ChatGPT won't be there to hold your hand, not that it would help. Someone will have to bring popcorn to the seminars now.

    • (Score: 2) by stormreaver on Wednesday December 21 2022, @12:09AM (1 child)


      ...one would assume they will just get better and better with each version and iteration of the software.

      Why would you think that? ChatGPT is hardly any better than ELIZA of 57 years ago. It's more verbose, has a larger vat of bullshit, but is largely the same.

      • (Score: 2) by looorg on Wednesday December 21 2022, @01:41AM


        ...one would assume they will just get better and better with each version and iteration of the software.

        Why would you think that? ChatGPT is hardly any better than ELIZA of 57 years ago. It's more verbose, has a larger vat of bullshit, but is largely the same.

Wishful thinking? I don't know. You might be right. It might just be parroting from its ever deeper pool of data. But I would hope, at least for an academic project, that if things it can't do get pointed out, the people behind it might be inclined to try and improve in those areas. I'm sure there is some team somewhere, if it has not been done to death already, trying to get a proper AI paper submitted to some journals, and not just as a gag or a big reveal afterwards. So they would need to get its citations in order, and then have the "AI" also be able to analyze and argue in text form and not just parrot things it found online.

  • (Score: 2) by VLM on Wednesday December 21 2022, @01:04PM


    The next battle will be over "social media" closing its APIs because people are writing their own clients that filter the stream thru the "GPT Detector" linked above.

Some social media sites won't have much content to sell advertising next to if the bots are taken out. Follow the money: the sites are going to fight third-party clients pretty hard.
