Wednesday, September 18, 2024

Kids who use ChatGPT as a study assistant do worse on tests

You may not recall this, but one of the difficult things to master is the multiplication table, mastery of which allows you to run complex calculation algorithms in your head. Just as annoying is learning how to spell.

Having someone else run an algorithm does not internalize that algorithm for you. Certainly the tutor model is much superior, since it invites your mind to take over.

We cannot fix mental speed, but an internalized algorithm can be used even by a naturally slow mind, which is how the slow-witted are often able to make useful headway.

I do think we need to see if slow-wittedness can be fixed.

AI is not a learning tool, but it can be a teaching tool. Ladies, our brains have to be trained! Chess matters, just as soccer and basketball matter.



Kids who use ChatGPT as a study assistant do worse on tests

Researchers compare math progress of almost 1,000 high school students.

Posted on Sep 11, 2024 8:00 AM EDT


[Image: ChatGPT also seems to produce overconfidence. Credit: DepositPhotos]

https://www.popsci.com/technology/kids-who-use-chatgpt-as-a-study-assistant-do-worse-on-tests/

This story was produced by The Hechinger Report, a nonprofit, nonpartisan news outlet focused on education.



Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

The researchers titled their paper, “Generative AI Can Harm Learning,” to make clear to parents and educators that the current crop of freely available AI chatbots can “substantially inhibit learning.” Even a fine-tuned version of ChatGPT designed to mimic a tutor doesn’t necessarily help.

The researchers believe the problem is that students are using the chatbot as a “crutch.” When the researchers analyzed the questions that students typed into ChatGPT, they found that students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves.

ChatGPT’s errors also may have been a contributing factor. The chatbot answered the math problems correctly only half of the time. Its arithmetic computations were wrong 8 percent of the time, but the bigger problem was that its step-by-step approach for how to solve a problem was wrong 42 percent of the time. Because the tutoring version of ChatGPT was directly fed the correct solutions, these errors were minimized.
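The article does not publish the study’s actual prompt, but the safeguard it describes, feeding the tutor the worked solution while instructing it to give hints only, is easy to sketch. Below is a minimal, hypothetical illustration using the OpenAI Python client; the prompt wording, function name, and model choice are assumptions, not the researchers’ configuration.

# Hypothetical sketch of a hint-only tutor in the spirit the article
# describes; the prompt wording and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_hint(problem: str, worked_solution: str, student_message: str) -> str:
    """Return a hint for the student without revealing the final answer."""
    system_prompt = (
        "You are a patient math tutor. You are given a problem and its "
        "worked solution. Guide the student one step at a time with hints "
        "and leading questions. Never state the final answer.\n\n"
        f"Problem: {problem}\n"
        f"Worked solution (reference only, do not reveal): {worked_solution}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; the article says only "ChatGPT"
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

# Even when the student asks for the answer outright, the system prompt
# steers the model toward a hint instead.
print(tutor_hint(
    problem="Solve for x: 2x + 6 = 14",
    worked_solution="Subtract 6 from both sides (2x = 8), then divide by 2 (x = 4).",
    student_message="Just tell me the answer.",
))

Note that, per the article, it was feeding in the correct solutions that minimized the tutor’s errors; instructions alone do not make the model’s math reliable.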

A draft paper about the experiment was posted on the website of SSRN, formerly known as the Social Science Research Network, in July 2024. The paper has not yet been published in a peer-reviewed journal and could still be revised.

This is just one experiment in another country, and more studies will be needed to confirm its findings. But this experiment was a large one, involving nearly a thousand students in grades nine through 11 during the fall of 2023. Teachers first reviewed a previously taught lesson with the whole classroom, and then their classrooms were randomly assigned to practice the math in one of three ways: with access to ChatGPT, with access to an AI tutor powered by ChatGPT or with no high-tech aids at all. Students in each grade were assigned the same practice problems with or without AI. Afterwards, they took a test to see how well they learned the concept. Researchers conducted four cycles of this, giving students four 90-minute sessions of practice time in four different math topics to understand whether AI tends to help, harm or do nothing.

ChatGPT also seems to produce overconfidence. In surveys that accompanied the experiment, students said they did not think that ChatGPT caused them to learn less even though they had. Students with the AI tutor thought they had done significantly better on the test even though they did not. (It’s also another good reminder to all of us that our perceptions of how much we’ve learned are often wrong.)

The authors likened the problem of learning with ChatGPT to autopilot. They recounted how an overreliance on autopilot led the Federal Aviation Administration to recommend that pilots minimize their use of this technology. Regulators wanted to make sure that pilots still know how to fly when autopilot fails to function correctly.

ChatGPT is not the first technology to present a tradeoff in education. Typewriters and computers reduce the need for handwriting. Calculators reduce the need for arithmetic. When students have access to ChatGPT, they might answer more problems correctly, but learn less. Getting the right result to one problem won’t help them with the next one.
