Future Tech

Maths test stumps AI models: which number is bigger, 9.90 or 9.11?

Tan KW
Publish date: Thu, 18 Jul 2024, 07:49 PM

The wave of artificial intelligence chatbots approved for public use in mainland China lets users create new content - including audio, code, images, simulations, videos and grammatically correct text - to entertain and to help with everyday tasks.

That demand has led to the local development of more than 200 large language models (LLMs), the technology underpinning generative AI (GenAI) services like ChatGPT. LLMs are deep-learning AI algorithms that can recognise, summarise, translate, predict and generate content using very large data sets.

In spite of the resources behind these chatbots, AI models were shown this past weekend to struggle with basic maths, after a ranking on the Chinese reality show Singer 2024 - a singing competition produced by Hunan Television - sparked debate.

Mainland artist Sun Nan received 13.8% of online votes to edge out US singer Chanté Moore, who received 13.11% of votes. Some local netizens poked fun at the ranking, claiming that the latter number was larger. Ask AI, one commenter suggested. The results they got were mixed.

Both Moonshot AI’s chatbot Kimi and Baichuan’s Baixiaoying initially gave the wrong answer. Both corrected themselves and apologised after the user who made the query adopted a so-called chain-of-thought approach - a reasoning method in which an AI application is guided step by step through a problem.
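Chain-of-thought prompting changes nothing about the model itself; it simply asks for the intermediate steps before the final answer. Below is a minimal sketch of the two prompt styles, assuming a hypothetical ask() helper wired to whichever chatbot is being tested (nothing here is taken from Kimi or Baixiaoying):

```python
# Sketch of direct vs chain-of-thought prompting for the 9.9 vs 9.11 question.
# ask() is a hypothetical placeholder for a call to whichever chatbot is tested.

def ask(prompt: str) -> str:
    raise NotImplementedError("connect this to the chatbot API you are testing")

# Direct prompt: the model answers in one shot.
direct_prompt = "Which number is bigger, 9.9 or 9.11?"

# Chain-of-thought prompt: the model is walked through the comparison step by step.
cot_prompt = (
    "Which number is bigger, 9.9 or 9.11? Think step by step: "
    "compare the integer parts first, then pad the decimals to the same length "
    "(9.90 vs 9.11) and compare them digit by digit before stating the final answer."
)
```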

Alibaba Group Holding’s Qwen LLM used a Python Code Interpreter to calculate the answer, while Baidu’s Ernie Bot worked through six steps to reach the correct result. (Alibaba owns the South China Morning Post.) ByteDance’s Doubao LLM, by contrast, generated a direct response with an example: “If you have US$9.90 and US$9.11, clearly US$9.90 is more money.”
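Handing the comparison to code sidesteps the weakness, because a programming language compares numeric values rather than tokens. A rough sketch of what such a code-interpreter answer reduces to (a generic illustration, not Qwen’s actual output):

```python
# The comparison a code interpreter would run: 9.9 is larger than 9.11.
a, b = 9.9, 9.11
print(a > b)       # True
print(max(a, b))   # 9.9

# The same check done with exact decimal arithmetic, avoiding float subtleties.
from decimal import Decimal
print(Decimal("9.90") > Decimal("9.11"))  # True
```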

“LLMs are bad at maths - it’s very common,” said Wu Yiquan, a computer science researcher at Zhejiang University in Hangzhou.

GenAI does not inherently possess mathematical capabilities and can only predict answers based on training data, according to Wu. He said some LLMs perform well on maths tests possibly because of “data contamination”, which means that the algorithm memorised the answers because similar questions were already in its training data.

“The world of AI is tokenised - numbers, words, punctuations and spaces are all treated the same,” Wu said. “Therefore, any change in the prompt can affect the result significantly.”
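Wu’s point is that a model sees “9.11” as a short sequence of tokens rather than a single numeric value, so the comparison can drift towards version-number-style reasoning, where 9.11 does come after 9.9. A hypothetical illustration of that failure mode (the parsing shown is an assumption for demonstration, not any model’s actual behaviour):

```python
# Hypothetical illustration of the failure mode Wu describes: the model works on
# tokens, not numeric values, so "9.11" can be treated like a software version
# or a date rather than a decimal fraction.

a, b = "9.9", "9.11"

# Correct reading: compare the strings as decimal numbers.
print(float(a) > float(b))  # True  (9.90 > 9.11)

# Wrong reading: compare the pieces after the dot as whole numbers,
# the way software version numbers are ordered (9.11 comes after 9.9).
frac_a, frac_b = int(a.split(".")[1]), int(b.split(".")[1])
print(frac_a > frac_b)      # False (9 < 11), which points to the wrong answer
```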

The maths stumble shows that AI technology is still evolving, not only on the mainland but around the world.

That state of flux may lead China to hold off on establishing unified national AI legislation, according to Zheng Ge, a professor of public law at Shanghai Jiao Tong University, who spoke on the sidelines of a conference in Beijing last Friday.

“The vast majority of experts believe the timing to craft unified national AI legislation may not yet be right since the technology is evolving so rapidly,” Zheng said.

The “number comparison testing” of AI models went viral after Allen Institute researcher Bill Yuchen Lin and Riley Goodside, a prompt engineer at tech firm Scale AI, highlighted the technology’s basic maths inadequacies on social media platform X.

When asked which number was bigger, 9.9 or 9.11, advanced LLMs such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet and models from Mistral AI answered 9.11.

In a post on X, Goodside said he does not intend to undermine LLMs, but aims to help understand and fix their failures.

“Previously well-known issues in LLMs (e.g., bad maths) are now mitigated so well the remaining errors are newly shocking to users - any reduction in frequency is also a delayed increase in severity,” he wrote. “We should be ready for this to keep happening across many task domains.”

 - SCMP