

Backed by Terence Tao! An AI Math Olympiad prize is here, offering $5 million for a large model that can win IMO gold.

2024-07-21 Update From: SLTechnology News&Howtos



Here comes an International Mathematical Olympiad for AI--

with a prize pool of $10 million!

The competition is billed as a "new Turing test." What does that entail?

AI models will go head-to-head with humanity's brightest young math prodigies and be judged to the same gold-medal standard.

Don't underestimate this competition: even math heavyweight Terence Tao has endorsed it on the official website:

This competition provides a set of benchmarks for identifying AI problem-solving strategies, which is exactly what we need right now.

As soon as the news came out, netizens were very excited.

As the IMO President put it: which large model can hold its own against the smartest young people in the world?

As the saying goes, "where there is a great reward, brave men will step forward." Watching AI systems show off their skills is genuinely something to look forward to.

The competition that pits AI against the IMO is called AI-MO, with a top prize of $5 million.

Its stated aim is to advance the mathematical reasoning abilities of large language models and to encourage the development of new AI models that can match the highest level of human mathematics (the IMO).

Why choose IMO as the benchmark?

IMO problems generally fall into four categories: algebra, geometry, number theory, and combinatorics. They do not require knowledge of higher mathematics, but contestants need the right way of thinking and solid mathematical literacy.

Statistics show that IMO gold medalists are 50 times more likely to win the Fields Medal than typical Cambridge PhD graduates.

In addition, half of all Fields Medal winners have participated in the IMO.

Modeled on that competition, the AI-MO contest for AI will open in early 2024.

The organizing committee requires participating AI models to receive problems in the same format as human contestants and to produce human-readable final answers, which an expert panel will then grade against the IMO standard.

The results will be announced alongside the 65th IMO, to be held in Bath, England, in July 2024.

In the end, any AI that reaches gold-medal level will receive a $5 million grand prize.

The remaining AI models that achieve "key milestones" will share progress prizes totaling the other $5 million.

Notably, to qualify for an award, participants must abide by the AI-MO public sharing protocol: award-winning models must be open source.

As for the specific rules, the organizing committee is still deliberating. Officials are also recruiting advisory committee members (especially mathematicians and AI/machine learning experts) and a director to lead the competition; all of these are paid positions and can be fully remote. It remains to be seen which big names will join.

It should be noted, however, that AI-MO is not an official IMO competition.

Its actual sponsor is XTX Markets, a London-based non-bank financial firm that does machine-learning-driven quantitative trading.

Nor is this XTX Markets' first show of generosity:

last year it also set up a scholarship with Oxford University to encourage female students to study mathematics.

As for the competition itself, netizens have already started speculating: which AI model has the best shot?

GPT-4 with the Wolfram plug-in was the first to be nominated, and also the first to have cold water poured on it.

Still, OpenAI, the company behind it, remains a favorite (even though big tech companies are not the competition's target audience).

Some pessimistic netizens directly asserted:

The competition is cool, but nobody will pull it off within five years.

At the same time, some people also think:

Training such a model is not the hard part; obtaining and processing the data is, since these problems involve not just text but also many images and symbols with complex meanings.

Everything will be announced in 2024.

It is worth mentioning that AI-MO is not the first attempt to pit AI against the IMO.

Back in 2019, researchers from OpenAI, Microsoft, Stanford University, Google, and other institutions launched a competition called the IMO Grand Challenge.

The IMO Grand Challenge, which no one has yet won, was likewise set up to find an AI capable of earning an IMO gold medal.

Let's take a look at the five rules this math competition sets for AI:

About the format. To ensure the rigor and verifiability of proofs, both problems and proofs must be expressed formally, i.e., in machine-verifiable form.

In other words, each IMO problem is translated into a statement in the Lean programming language and given to the AI, which must in turn produce its proof in Lean so that the Lean theorem prover can check it.
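To give a flavor of what "machine-verifiable" means here, below is a toy statement and proof in Lean 4 syntax. This is a trivial illustration only, not an actual IMO problem:

```lean
-- Toy example: a statement and proof the Lean kernel can mechanically
-- verify. Real IMO problems are formalized the same way, just at
-- vastly greater difficulty.
theorem toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term is wrong anywhere, Lean rejects it outright, which is exactly why the rules can score proofs as simply right or wrong.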

About scoring. Each AI proof is judged simply right or wrong within 10 minutes, matching the time IMO judges spend grading. Unlike humans, the AI earns no partial credit.

About resources. Like human contestants, the AI gets 4.5 hours per day to solve three problems (over a two-day competition); there is no limit on computing resources.

About reproducibility. The AI must be open source, with the model published by the end of the first day of the IMO, and reproducible; it may not connect to the Internet.

About the challenge itself. The biggest challenge is for the AI to win a gold medal 🏅, just as a human would.

The competition was launched by seven AI researchers and mathematicians:

OpenAI's Daniel Selsam, Microsoft's Leonardo de Moura, Imperial College's Kevin Buzzard, the University of Pittsburgh's Reid Barton, Stanford University's Percy Liang, Google AI's Sarah Loos, and Radboud University's Freek Wiedijk.

Now, four years later, it has continued to draw attention from a steady stream of contestants.

However, although many AI and math researchers have taken a crack at this field, or at smaller goals within it, they remain far from the ultimate goal of IMO gold.

Some have even suggested adding an "easy mode" to the challenge:

For example, researcher Xi Wang tried several existing SMT solvers on past IMO problems, with mediocre results.

At the time, existing AI could prove some of the easier problems, such as Napoleon's theorem (erect an equilateral triangle outward on each side of an arbitrary triangle; the centers of those triangles always form an equilateral triangle).
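Napoleon's theorem is easy to sanity-check numerically. The following sketch (a spot check with complex-number arithmetic, not a formal proof) verifies the claim for one arbitrary triangle:

```python
import cmath

def napoleon_centers(a: complex, b: complex, c: complex):
    """Centers of the equilateral triangles erected outward on each side
    of triangle abc (vertices given counterclockwise as complex numbers)."""
    rot = cmath.exp(-1j * cmath.pi / 3)   # rotation by -60 degrees
    def center(p: complex, q: complex) -> complex:
        apex = p + (q - p) * rot          # outward apex over side pq
        return (p + q + apex) / 3         # centroid of the erected triangle
    return center(a, b), center(b, c), center(c, a)

# Spot-check: the three centers should themselves form an
# equilateral triangle, i.e. all pairwise distances are equal.
c1, c2, c3 = napoleon_centers(0 + 0j, 1 + 0j, 0.3 + 1j)
sides = [abs(c1 - c2), abs(c2 - c3), abs(c3 - c1)]
assert max(sides) - min(sides) < 1e-9
```

A theorem prover must do far more than this: it has to establish the statement for every triangle at once, which is what makes even "easy" problems nontrivial to prove automatically.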

However, on other past problems, such as the geometry problem from IMO 2019, the existing solvers either failed outright or timed out after half an hour.

Similarly, OpenAI researchers Dan Selsam and Jesse Michael Han (at Microsoft at the time) spent a period studying AI approaches to IMO geometry problems and wrote up their findings in a blog post.

The post describes how they devised a geometry solver and the steps of its design, including:

geometric representation, constraint solving, algorithm selection, solver architecture, and challenges and their solutions.

Geometric representation, for example, means encoding geometry problems in a format a computer can understand and process, and the reverse as well: using the geometry solver to automatically turn its programmatic representation into diagrams that are easy for humans to read:

The post also explains how to choose an appropriate algorithm for different types of IMO geometry problems, among other topics.

Even so, the post offers no concrete solution, concluding only that "the solver may achieve the goal of winning the IMO gold medal."

Moreover, the geometry problems these challengers targeted make up only about a quarter of IMO questions (the rest being algebra, combinatorics, and number theory).

Four years after its launch, there is still no true AI "IMO all-rounder," but as the originator of the idea, the IMO Grand Challenge has made plenty of waves in the field.

Alex Gerko has said the IMO Grand Challenge is what prompted him to launch AI-MO:

It's time to bring some excitement to AI's pursuit of the IMO!

Of course, the AI-MO prize money has indeed attracted the attention of the IMO Grand Challenge organizers and many challengers:

Whether, driven by money 💰, an AI will really emerge that can crack hard math problems and outdo the bulk of humanity to claim an IMO gold medal remains to be seen.

Judging from current capabilities, which AI do you think is most likely to take the lead?


This article comes from the WeChat official account QbitAI (ID: QbitAI); authors: Fengse and Xiao Xiao.





© 2024 SLNews company. All rights reserved.