
Are machines about to become smarter than humans?
If you take the leaders of artificial intelligence companies at their word, their products mean that the coming decade will be quite unlike any in human history: a golden era of “radical abundance”, where high-energy physics is “solved” and we see the beginning of space colonisation. But researchers working with today’s most powerful AI systems are finding a different reality, in which even the best models are failing to solve basic puzzles that most humans find trivial, while the promise of AI that can “reason” seems to be overblown. So, whom should you believe?
Sam Altman and Demis Hassabis, the CEOs of OpenAI and Google DeepMind, respectively, have both recently claimed that powerful, world-altering AI systems are just around the corner. In a blog post, Altman wrote that “the 2030s are likely going to be wildly different from any time that has come before”, speculating that we might go “from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year”.
Hassabis, in an interview with Wired, also said that in the 2030s, artificial general intelligence (AGI) will start to solve problems like “curing terrible diseases”, leading to “much healthier and longer lifespans”, as well as finding new energy sources. “If that all happens,” said Hassabis, “then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy.”
This vision relies heavily on the assumption that large language models (LLMs) like ChatGPT get more capable the more training data and computing power we throw at them. This “scaling law” seems to have held true for the past few years, but there have been hints of it faltering. For example, OpenAI’s recent GPT-4.5 model, which likely cost hundreds of millions of dollars to train, achieved only modest improvements over its predecessor GPT-4. And that cost is nothing compared with future spending, with reports suggesting that Meta is about to announce a $15 billion investment in an attempt to achieve “superintelligence”.
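For context, the scaling “law” is an empirical fit rather than a theorem. A rough sketch of its compute form, paraphrased here from OpenAI’s 2020 scaling-law paper (Kaplan et al.) rather than from anyone quoted in this article, and with an exponent that varies by model and dataset, looks like this:

```latex
% Approximate empirical relationship between training compute and test loss,
% paraphrased from Kaplan et al. (2020); the exponent is model- and data-dependent.
\[
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
\]
% L is the model's test loss, C is training compute and C_c is a fitted constant.
% Because the exponent is small, each tenfold increase in compute buys only a
% modest drop in loss: diminishing returns are built into the curve itself.
```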
Money isn’t the only attempted solution to this problem, however – AI firms have also turned to “reasoning” models, like OpenAI’s o1, which was released last year. These models use more computing time and so take longer to produce a response, feeding their own outputs back into themselves. This iterative process has been labelled “chain-of-thought”, in an effort to draw comparisons to the way a person might think through problems step by step. “There were legitimate reasons to be concerned about AI plateauing,” Noam Brown at OpenAI told New Scientist last year, but o1 and models like it meant that the “scaling law” could continue, he argued.
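To make that feedback loop concrete, here is a minimal Python sketch of the idea. The generate() function is a hypothetical placeholder for any language-model call, not a real API from OpenAI, DeepSeek or anyone else, and the loop is a deliberate simplification of how production “reasoning” models work:

```python
# Minimal sketch of chain-of-thought-style iteration, for illustration only.
# generate() is a hypothetical placeholder for a call to a language model,
# not a real API from any provider mentioned in this article.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("swap in a real model call here")

def chain_of_thought(question: str, steps: int = 3) -> str:
    # The model's intermediate output is appended to the prompt, so each
    # pass "reasons" over what it produced in the previous one.
    context = f"Question: {question}\nThink step by step.\n"
    for _ in range(steps):
        thought = generate(context)
        context += thought + "\n"
    # A final pass is asked to commit to an answer.
    return generate(context + "Therefore, the final answer is:")
```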
Yet recent research has found these reasoning models can stumble on even simple logic puzzles. For example, researchers at Apple tested Chinese AI company DeepSeek’s reasoning models and Anthropic’s Claude thinking models, which work like OpenAI’s o1 family of models. The models, they wrote, have “limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles”.
The team tested the AI on several puzzles, such as a scenario in which a person has to transport items across a river in the fewest possible steps, and Tower of Hanoi, a game where you must move rings one by one between three poles without placing a larger ring on top of a smaller one. Though the models could solve the puzzles at their easiest settings, they struggled as the number of rings or items to transport increased. And while a person would spend longer thinking about a more complex problem, the researchers found that the AI models used fewer “tokens” – chunks of information – as the complexity of the problems increased, suggesting that the “thinking” time the models displayed is an illusion.
“The damaging part is that these are tasks easily solvable,” says Artur Garcez at City, University of London. “We already knew 50 years ago how to use symbolic AI reasoning to solve these.” It is possible that these newer systems can be fixed and improved to eventually be able to reason through complex problems, but this research shows it’s unlikely to happen purely through increasing the size of the models or the computational resources given to them, says Garcez.
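Tower of Hanoi is a textbook example of the kind of exact, symbolic procedure Garcez is alluding to. The classic recursive solution, shown below as an illustrative Python sketch (it is not code from the Apple study or from any of the models tested), solves any number of rings in the provably minimal number of moves:

```python
# Classic recursive Tower of Hanoi solver -- included purely to show that
# an exact algorithm exists; not code from any study discussed here.

def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Move n rings from source to target without ever placing a larger ring on a smaller one."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest remaining ring
    hanoi(n - 1, spare, target, source, moves)   # restack the rest on top of it

moves = []
hanoi(5, "A", "C", "B", moves)
print(len(moves))  # 2**5 - 1 = 31 moves, the provable minimum for five rings
```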
It is also a reminder that these models still struggle with scenarios that fall outside their training data, says Nikos Aletras at the University of Sheffield. “They work quite well actually in many cases, like finding, collating information and then summarising it, but these models have been trained to do these kinds of things, and it appears magic, but it isn’t – they have been trained to do this,” says Aletras. “Now, I think the Apple research has found a blind spot.”
Meanwhile, other research is showing that increased “thinking” time can actually hurt an AI model’s performance. Soumya Suvra Ghosal and his colleagues at the University of Maryland tested DeepSeek’s models and found that longer “chain of thought” processes led to decreased accuracy on tests of mathematical reasoning. On one mathematical benchmark, for example, tripling the number of tokens used by a model increased its performance by about 5 per cent, but using 10 to 15 times as many tokens dropped the benchmark score by around 17 per cent.
In some cases, it appears the “chain of thought” output produced by an AI bears little relation to the eventual answer it provides. When testing DeepSeek’s models on the ability to navigate simple mazes, Subbarao Kambhampati at Arizona State University and his colleagues found that even when the AI solved the problem, its “chain of thought” output contained mistakes that weren’t reflected in the final solution. What’s more, feeding the AI a meaningless “chain of thought” could actually produce better answers.
“Our results challenge the prevailing assumption that intermediate tokens or ‘chains of thought’ can be semantically interpreted as the traces of internal reasoning of the AI models, and caution against anthropomorphising them that way,” says Kambhampati.
Indeed, all of these studies suggest that the “thinking” or “reasoning” labels attached to these AI models are misnomers, says Anna Rogers at the IT University of Copenhagen in Denmark. “For as long as I’ve been in this field, every popular technique I can think of has been first hyped up with some vague cognitively-sounding analogy, which [was] then eventually proved wrong.”
Andreas Vlachos at the University of Cambridge points out that LLMs still have clear applications in text generation and other tasks, but says the latest research suggests we may struggle to ever make them tackle the kind of complex problems Altman and Hassabis have promised will be solved in just a few years.
“Fundamentally, there is a mismatch between what these models are trained to do, which is next-word prediction, as opposed to what we are trying to get them to do, which is to produce reasoning,” says Vlachos.
OpenAI disagrees, however. “Our work shows that reasoning methods like chain-of-thought can significantly improve performance on complex problems, and we’re actively working to expand these capabilities through better training, evaluation, and model design,” says a spokesperson. DeepSeek didn’t respond to a request for comment.
Source link : https://www.newscientist.com/article/2484169-is-superintelligent-ai-just-around-the-corner-or-just-a-sci-fi-dream/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&utm_content=home
Publish date : 2025-06-13 14:30:00
Copyright for syndicated content belongs to the linked Source.