from the ai-will-teach...your-children-well dept.
Under the fluorescent lights of a fifth grade classroom in Lexington, Kentucky, Donnie Piercey instructed his 23 students to try to outwit the "robot" that was churning out writing assignments:
The robot was the new artificial intelligence tool ChatGPT, which can generate everything from essays and haikus to term papers within seconds. The technology has panicked teachers and prompted school districts to block access to the site. But Piercey has taken another approach by embracing it as a teaching tool, saying his job is to prepare students for a world where knowledge of AI will be required.
"This is the future," said Piercey, who describes ChatGPT as just the latest technology in his 17 years of teaching that prompted concerns about the potential for cheating. The calculator, spellcheck, Google, Wikipedia, YouTube. Now all his students have Chromebooks on their desks. "As educators, we haven't figured out the best way to use artificial intelligence yet. But it's coming, whether we want it to or not."
The article goes on to describe different exercises Piercey uses and comments from other teachers who are using ChatGPT to enhance their lessons.
[...] The fifth graders seemed unaware of the hype or controversy surrounding ChatGPT. These children, who will grow up as the world's first native AI users, take a simple approach: use it for suggestions, but do your own work.
Previously:
- Seattle Public Schools Bans ChatGPT; District 'Requires Original Thought and Work From Students'
- ChatGPT Arrives in the Academic World
Related Stories
ChatGPT arrives in the academic world:
AI art and text generators are all the rage right now. As an academic, I've seen an uptick in colleagues issuing warnings about students using tools like ChatGPT to create assignments, but I haven't yet investigated much myself—I've been too busy grading final papers! I recently came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and that also raise challenges for educators about what we are actually doing in our classrooms.
First, here's Dan Vollaro's 'conversation' with ChatGPT about Moby Dick. Vollaro, an Associate Professor of English with a PhD in 19th-century American literature, tries to discuss Melville's famous novel with the bot. While ChatGPT does pretty well at first—providing facts about when the novel was written, how it was received, and even some of the major themes that scholars have identified in the text—it fails spectacularly when asked which scholars were responsible for discussing and circulating those themes, or when asked to provide its own analysis or critique of the novel. In a series of frustrating back-and-forths, the bot repeats almost the same answer to various questions from Vollaro. When Vollaro asks, "Do you have a favorite character from the novel?" the bot answers:
As a language model, I don't have personal experiences or opinions, so I don't have a favorite character from Moby Dick. I'm here to provide accurate and helpful information based on the text data I've been trained on, but I don't have personal preferences or biases. I'm a neutral and objective source of information, and I'm always happy to help with any questions you have about the novel.
Seattle Public Schools is joining a growing number of school districts banning ChatGPT, the natural language chatbot from OpenAI that has sparked widespread attention in recent weeks.
ChatGPT has garnered praise for its ability to quickly answer complex queries and instantly produce content.
But it's also generating concern among educators worried that students will use the technology to do their homework.
SPS blocked ChatGPT on all school devices in December, said Tim Robinson, a spokesman for Seattle Public Schools, in an email to GeekWire.
"Like all school districts, Seattle Public Schools does not allow cheating and requires original thought and work from students," he said.
The district also blocks other "cheating tools," Robinson said.
In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?
[...]
"Make feedback more actionable with AI suggestions delivered to teachers as the writing happens," Writable promises on its AI website. "Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores." The service also provides AI-written writing-prompt suggestions: "Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs."
[...]
The reliance on AI for grading will likely have drawbacks. Automated grading might encourage some educators to take shortcuts, diminishing the value of personalized feedback. Over time, leaning on AI may leave teachers less familiar with the material they are teaching, and cloud-based AI tools raise privacy questions for teachers and students alike. ChatGPT also isn't a perfect analyst: it can get things wrong, confabulate (make up) false information, misinterpret a student's work, or introduce errors into lesson plans.
[...]
There's a divide among parents regarding the use of AI in evaluating students' academic performance. A recent poll of parents revealed mixed opinions, with nearly half of the respondents open to the idea of AI-assisted grading.

As the generative AI craze permeates every space, it's no surprise that Writable isn't the only AI-powered grading tool on the market. Others include Crowdmark, Gradescope, and EssayGrader. McGraw Hill is reportedly developing similar technology aimed at enhancing teacher assessment and feedback.
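Mechanically, a "rubric-aligned draft score" feature like the one Writable advertises might be assembled along the lines below. This is a guess at the general shape, not Writable's implementation: the rubric, prompt wording, and score-parsing format are all invented for illustration, and the model call is stubbed out with a canned reply.

```python
# Hypothetical sketch of rubric-aligned AI draft scoring.
# Every name here is invented; the LLM call is stubbed out.

RUBRIC = {
    "thesis": "Clear, arguable thesis statement (0-4)",
    "evidence": "Specific supporting evidence (0-4)",
    "mechanics": "Grammar, spelling, punctuation (0-4)",
}

def build_grading_prompt(essay: str) -> str:
    """Assemble a prompt that asks for per-criterion scores and feedback."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        "Score this student essay against each rubric item and suggest "
        "one concrete improvement per item.\n"
        f"Rubric:\n{criteria}\n\nEssay:\n{essay}"
    )

def parse_draft_scores(reply: str) -> dict:
    """Parse lines like 'thesis: 3' from the model's reply into a dict,
    ignoring anything that isn't a known rubric item with a numeric score."""
    scores = {}
    for line in reply.splitlines():
        name, _, value = line.partition(":")
        if name.strip() in RUBRIC and value.strip().isdigit():
            scores[name.strip()] = int(value.strip())
    return scores

# Stubbed model reply -- a real system would call an LLM here, and the
# teacher would review these draft scores before they reach the student.
reply = "thesis: 3\nevidence: 2\nmechanics: 4"
print(parse_draft_scores(reply))  # → {'thesis': 3, 'evidence': 2, 'mechanics': 4}
```

The human-review step at the end is the design point the drawbacks above turn on: the AI output is a draft, not a grade, until a teacher signs off.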
FYI: SWOT = Strengths, Weaknesses, Opportunities, Threats.
Orit Hazzan at ACM.org says:
Over the past year, I have published a series of CACM blogs in which I analyzed the introduction of generative AI in general, and of ChatGPT in particular, to computer science education (see ChatGPT in Computer Science Education, January 23, 2023; ChatGPT in Computer Science Education: Freshmen's Conceptions, co-authored with Yael Erez, August 7, 2023; and ChatGPT (and Other Generative AI Applications) as a Disruptive Technology for Computer Science Education: Obsolescence or Reinvention, co-authored with Yael Erez, September 18, 2023).
One message of these blogs was that computer science high school teachers and computer science freshmen clearly see the potential contribution of ChatGPT to computer science teaching and learning, weighting the opportunities it opens for computer science education over the potential threats it poses. Another was that generative AI, and specifically LLM-based conversational agents such as ChatGPT, may turn out to be a disruptive technology for computer science education and should therefore be treated as an opportunity for computer science education to stay relevant.
In this blog, we address high school teachers' perspective on the incorporation of ChatGPT into computer science education. [...]
The author then presents the SWOT analysis, concluding:
With respect to the adoption of generative AI, it seems that the chasm in its adoption process has already been crossed and that, due to the simplicity of using the various generative AI applications available, a huge population, either with or without a technological background, has already adopted them.
Based on the SWOT analysis presented above, the meaningful question for our discussion is: With respect to the community of computer science teachers, what stage of the adoption process of innovation is generative AI at? Has the chasm already been crossed?
Related: Amid ChatGPT Outcry, Some Teachers are Inviting AI to Class
Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.
The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.
In a demonstration video, Microsoft shows robots—apparently running code written by ChatGPT in response to human instructions—arranging blocks into a Microsoft logo with a robot arm, flying a drone to inspect the contents of a shelf, and finding objects with a vision-equipped robot.
To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
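The workflow the paper describes—teach the model an API, let it generate code for a natural-language task, and gate execution on human review—can be sketched in miniature. Everything below (the arm API, the "generated" snippet, the function names) is invented for illustration and is not the API or code from the Microsoft paper; the LLM call itself is stubbed out.

```python
# Minimal sketch of the human-in-the-loop pattern: API docs go into the
# prompt, the model's code comes back, and nothing runs until approved.

class MockArm:
    """Stand-in for a real robot arm; records the calls it receives."""
    def __init__(self):
        self.log = []
    def move_to(self, x, y, z):
        self.log.append(("move_to", x, y, z))
    def grasp(self):
        self.log.append(("grasp",))
    def release(self):
        self.log.append(("release",))

API_DOCS = """\
arm.move_to(x, y, z) -- move the gripper to a position
arm.grasp()          -- close the gripper
arm.release()        -- open the gripper
"""

def build_prompt(instruction: str) -> str:
    """Step 1: describe the API to the model, then state the task."""
    return (f"You control a robot arm with this API:\n{API_DOCS}\n"
            f"Write Python code for the task: {instruction}")

# Step 2 (stubbed): a stand-in for code the model might return
# for build_prompt("pick up the ball").
generated_code = """\
arm.move_to(0.3, 0.1, 0.0)
arm.grasp()
arm.move_to(0.0, 0.0, 0.2)
"""

def run_after_review(code: str, arm: MockArm, approved: bool):
    """Step 3: execute only after a human has inspected the code."""
    if not approved:
        raise RuntimeError("code not approved by human reviewer")
    exec(code, {"arm": arm})

arm = MockArm()
run_after_review(generated_code, arm, approved=True)
print(arm.log[0])  # → ('move_to', 0.3, 0.1, 0.0)
```

The `approved` gate is the whole point of the paper's caveat: the model drafts the control code, but a person decides whether it runs.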
Students say they are getting 'screwed over' for sticking to the rules. Professors say students are acting like 'tyrants.' Then came ChatGPT:
When it was time for Sam Beyda, then a freshman at Columbia University, to take his Calculus I midterm, the professor told students they had 90 minutes.
But the exam would be administered online. And even though every student was expected to take it alone, in their dorms or apartments or at the library, it wouldn't be proctored. And they had 24 hours to turn it in.
"Anyone who hears that knows it's a free-for-all," Beyda told me.
[...] For decades, campus standards have been plummeting. The hallowed, ivy-draped buildings, the stately quads, the timeless Latin mottos—all that tradition and honor have been slipping away. That's an old story. Then Covid struck and all bets were off. With college kids doing college from their bedrooms and smartphones, and with the explosion of new technology, cheating became not just easy but practically unavoidable. "Cheating is rampant," a Princeton senior told me. "Since Covid there's been an increasing trend toward grade inflation, cheating, and ultimately, academic mediocrity."
Now that students are back on campus, colleges are having a hard time putting the genie back in the bottle. Remote testing, combined with an array of tech tools—exam helpers like Chegg, Course Hero, Quizlet, and Coursera; messaging apps like GroupMe and WhatsApp; Dropbox folders containing course material from years past; and most recently, ChatGPT, the AI that can write essays—has permanently transformed the student experience.
[...] On January 2, a Princeton University computer science major named Edward Tian—who may be the most hated man on campus—tweeted: "I spent New Years building GPTZero—an app that can quickly and efficiently detect whether an essay is ChatGPT or human written."
So now it's nerd vs. nerd, and one of the nerds is going to win—probably whoever gets more venture funding. Everything is up in the air.
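The excerpt doesn't say how GPTZero actually works. Detectors in this family are commonly described as looking for statistical fingerprints of machine text, one of which is low "burstiness"—little variation in sentence length. A toy illustration of that single signal (this is not GPTZero's algorithm, and every name below is invented):

```python
# Toy "burstiness" measure: variance of sentence lengths.
# Human prose tends to mix short and long sentences; flat, uniform
# rhythm is one (weak, easily fooled) hint of machine generation.
import statistics

def sentence_lengths(text: str) -> list:
    """Split text into rough sentences and count the words in each."""
    normalized = text.replace("!", ".").replace("?", ".")
    return [len(s.split()) for s in normalized.split(".") if s.strip()]

def burstiness(text: str) -> float:
    """Population variance of sentence lengths; higher = more varied."""
    lengths = sentence_lengths(text)
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat here. The dog sat here. The man sat here."
varied = "Stop. The old sailor walked slowly along the endless gray shore. Rain."
print(burstiness(uniform))  # → 0.0
print(burstiness(varied))   # → 18.0
```

A single statistic like this is trivially defeated by light editing, which is part of why the detector-vs-generator race described above is unlikely to be settled by either side.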
Previously:
- Amid ChatGPT Outcry, Some Teachers are Inviting AI to Class
- Seattle Public Schools Bans ChatGPT; District 'Requires Original Thought and Work From Students'
- ChatGPT Arrives in the Academic World
(Score: 5, Funny) by mhajicek on Wednesday February 22 2023, @04:10AM (1 child)
I've heard of teachers having ChatGPT write essays and then having the students mark up the errors.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 4, Insightful) by Beryllium Sphere (r) on Wednesday February 22 2023, @05:25AM
Yes! That teaches critical thinking, that teaches double-checking sources, that teaches that unreliable technology is unreliable. I'm finding it professionally useful but lordie am I constantly correcting it.
(Score: 3, Interesting) by ilsa on Wednesday February 22 2023, @10:56PM
I think this is a good approach to technologies like ChatGPT. We need to teach kids how unreliable and untrustworthy it is.
ChatGPT can do amazing things when you provide narrowly confined requirements, but using it for anything else is outright dangerous because it is the ultimate echo chamber. It will tell you exactly what you want to hear, and will generate exactly what you want it to generate.
Clarkesworld is just the first high-profile casualty thanks to how easy ChatGPT is to abuse, and things are going to get a hell of a lot worse. I'm waiting for when critical scientific journals are flooded with garbage papers. It's impossible to have reasonable discourse when truth itself can be DDoS'ed, and all generations up till now are not equipped to deal with this massive reality shift. Critical thinking skills have now become even more important than ever.
Unfortunately I'm pretty sure this kind of teaching will get banned in regions that are offended by critical thinking, because it's impossible to learn this kind of approach and not apply it generally (as it should be). Corrupt politicians need to be able to manipulate the masses, after all.