Researchers gave ChatGPT a 2,400-year-old math puzzle, and its surprising answer echoed the mistakes of Greek philosopher Socrates’ student

More than two millennia ago, Plato recorded Socrates posing a riddle to a student: how to double the area of a square. The student tried the obvious—doubling each side—but stumbled. The true solution lies in using the diagonal. Fast forward 2,400 years, and researchers from the University of Cambridge and the Hebrew University of Jerusalem decided to pose the same puzzle to ChatGPT, asking whether artificial intelligence could navigate the same intellectual trap.
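
To see why the diagonal works, take a square with side a. Its area is a², and its diagonal measures √(a² + a²) = a√2. A square built on that diagonal has area (a√2)² = 2a², exactly double the original. Doubling each side instead gives (2a)² = 4a², four times the area, which is the trap the student fell into.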

Why This Puzzle Matters

The “doubling the square” problem has long been a philosophical touchstone. Since Plato’s time, scholars have debated whether mathematical knowledge is something humans are born with or something we acquire through experience. For scientists today, the question carried a modern twist: could a chatbot trained on words, not geometry, figure out a problem that requires spatial reasoning?

ChatGPT’s Attempt and a Twist in the Tale

As reported in the International Journal of Mathematical Education in Science and Technology, ChatGPT initially handled the square challenge. But when researchers expanded the task to doubling the area of a rectangle, the system faltered. It claimed no solution existed in geometry because a rectangle’s diagonal “cannot be used” to double its area—an error that surprised even the research team.
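
For context, a rectangle’s area can in fact be doubled by purely geometric means: for sides a and b, scaling each side by √2 yields a rectangle of area (a√2)(b√2) = 2ab, and a segment of length a√2 is itself constructible as the diagonal of a square with side a.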

Visiting scholar Nadav Marco explained that the chances of such a mistake being directly copied from training data were “vanishingly small.” In other words, ChatGPT appeared to be improvising, using fragments of past reasoning to generate new answers.

An AI That “Learns Like a Student”?

The researchers compared this behavior to the way human learners hypothesize and sometimes fail before reaching understanding. “When we face a new problem, our instinct is often to try things out based on our past experience,” Marco said. “In our experiment, ChatGPT seemed to do something similar.”

This pattern reminded the team of the “zone of proximal development,” an education theory that describes the gap between what someone knows and what they can achieve with guidance. If ChatGPT can spontaneously operate within such a zone, it may open new possibilities for how AI systems assist in learning.

Implications for Education and AI

The findings underline both opportunity and risk. While ChatGPT sometimes generates creative reasoning paths, it also produces errors that resemble those of Socrates’ confused student as Plato portrayed him. Professor Andreas Stylianides, a co-author of the study, cautioned that “students cannot assume that ChatGPT’s proofs are valid,” arguing that understanding and checking AI-generated solutions should become part of modern math education.

The researchers stress that the experiment does not prove AI “thinks” like humans. But it does highlight how AI can mimic the process of learning, complete with mistakes, exploration, and flashes of insight.


