

Mistral and Microsoft set off a "small language model" trend: code ability beats GPT-4, at only 1/3 the cost

2024-12-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)12/24 Report--

The trend toward small models has gained momentum recently, with both Mistral and Microsoft making moves. Netizens have also found that Mistral-medium's code ability even surpasses GPT-4's, at less than 1/3 the cost.

Recently, "small language models" have suddenly become a hot topic. On Monday, the French AI startup Mistral, which had just raised $415 million, unveiled its Mixtral 8x7B model.

Although the open-source model is small enough to run on a computer with more than 100 GB of memory, it is on a par with GPT-3.5 on some benchmarks, so it quickly won plaudits among developers.

Mixtral 8x7B is so named because it combines eight smaller expert models, each trained to handle specific tasks, which improves efficiency.

This "sparse expert mix" model is not easy to implement, and it is said that OpenAI had to abandon the development of the model earlier this year because it could not make the MoE model work properly.

Then, the very next day, Microsoft released a new version of its Phi-2 mini model.

Compared with Mistral's 7 billion parameters, Phi-2, at only 2.7 billion parameters, is small enough to run on a phone. GPT-4, by contrast, is reported to have on the order of 1 trillion parameters.

Phi-2 was trained on carefully curated, high-quality datasets, which helps the model produce accurate results even with a phone's limited computing power.

Although it is not yet clear how Microsoft or other software makers will use small models, the most obvious benefit is that they reduce the cost of running AI applications at scale, greatly broadening where generative AI technology can be applied.

It's a big deal.

Mistral-medium code generation beats GPT-4

Recently, Mistral-medium has opened for beta testing.

A blogger compared the code-generation abilities of Mistral-medium and GPT-4, and the results show that Mistral-medium writes better code than GPT-4 while costing only about 30% as much!

Overall, the blogger's verdict was:

1) Mistral always finishes the job, with a high degree of completion.

2) Tokens are not wasted on lengthy explanatory output.

3) The suggestions provided are very specific.

The first question: "Write CUDA-optimized code for generating a PyTorch dataset of Fibonacci primes."

The code generated by Mistral-medium is well organized and complete.

The code generated by GPT-4 is less than satisfactory.

Many tokens are wasted, yet no useful information is produced.

GPT-4 gives only skeleton code, without the concrete implementation.
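For reference, the non-CUDA core of the first task can be sketched in plain Python. This is our illustrative sketch, not either model's answer; a real solution would wrap the list in a `torch.utils.data.Dataset` and add the CUDA-optimized generation the prompt asks for.

```python
# Sketch of the data side of the first test question: generate Fibonacci
# numbers, keep the prime ones, and expose them as a dataset-like list.
# The torch.utils.data.Dataset wrapper and CUDA kernels are omitted here.

def is_prime(n):
    """Trial-division primality test -- fine for small illustrative values."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def fibonacci_primes(limit):
    """Return all prime Fibonacci numbers below `limit`."""
    primes, a, b = [], 1, 1
    while a < limit:
        if is_prime(a):
            primes.append(a)
        a, b = b, a + b
    return primes

dataset = fibonacci_primes(10_000)  # [2, 3, 5, 13, 89, 233, 1597]
```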

The second question: "Write efficient Python code to ingest a roughly 1-billion-line Apache HTTP access log into a SQLite database, and use it to generate histograms of accesses to sales.html and product.html."

Mistral's output is excellent; although it does not parse the log as CSV, that is easy to modify.

GPT-4 still falls short.
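Setting aside the billion-line scale, the core of the second task can be sketched with Python's standard library alone. This is our illustrative sketch, not either model's output; it assumes the log is in Apache Common Log Format, and the sample lines and table name are made up.

```python
# Sketch of the second test question: parse Apache access-log lines into
# SQLite, then count hits for the two pages named in the prompt.
import re
import sqlite3

# Common Log Format: host ident user [timestamp] "METHOD path proto" status size
LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" \d+ \d+')

def ingest(lines, db):
    db.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT)")
    rows = [(m.group(1),) for m in map(LOG_LINE.match, lines) if m]
    # executemany inside one transaction is the key trick for bulk ingestion
    db.executemany("INSERT INTO hits VALUES (?)", rows)
    db.commit()

def page_counts(db, pages):
    """Histogram data: hit count per requested page path."""
    q = "SELECT path, COUNT(*) FROM hits WHERE path IN (%s) GROUP BY path" % \
        ",".join("?" * len(pages))
    return dict(db.execute(q, pages))

sample = [
    '1.2.3.4 - - [10/Dec/2023:13:55:36 +0000] "GET /sales.html HTTP/1.1" 200 512',
    '1.2.3.5 - - [10/Dec/2023:13:55:37 +0000] "GET /product.html HTTP/1.1" 200 128',
    '1.2.3.6 - - [10/Dec/2023:13:55:38 +0000] "GET /sales.html HTTP/1.1" 200 512',
]
db = sqlite3.connect(":memory:")
ingest(sample, db)
counts = page_counts(db, ["/sales.html", "/product.html"])
```

At real scale one would stream the file in batches rather than building one list, but the SQLite ingestion-plus-aggregation shape stays the same.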

Previously, the blogger had tested many code-generation models, and GPT-4 always topped the list. Now a formidable rival has finally emerged to knock it off the throne. Although only two examples were posted, the blogger tested several more questions with similar results.

He suggested that, given Mistral-medium's better code-generation quality, it should be integrated into local code copilots.

Someone calculated the input and output cost per 1,000 tokens and found Mistral-medium to be 70% cheaper than GPT-4!

Indeed, a 70% saving on token costs is no trivial matter, and the less verbose output saves even more.
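As a sanity check on that arithmetic, here is how such a comparison is computed. The per-1,000-token prices below are hypothetical placeholders, not the providers' actual rates; plug in current list prices to reproduce the blogger's figure.

```python
# Back-of-the-envelope check of the "~70% cheaper" claim.
# All prices here are HYPOTHETICAL placeholders, chosen only for illustration.
def cost(n_in, n_out, price_in, price_out):
    """Total cost for n_in input and n_out output tokens (prices per 1K tokens)."""
    return (n_in / 1000) * price_in + (n_out / 1000) * price_out

gpt4 = cost(10_000, 2_000, price_in=0.03, price_out=0.06)       # hypothetical rates
mistral = cost(10_000, 2_000, price_in=0.009, price_out=0.018)  # hypothetical rates
savings = 1 - mistral / gpt4  # 0.70 with these placeholder numbers
```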

Reference:

https://www.theinformation.com/articles/the-rise-of-small-language-models-and-reinforcement-learning

https://twitter.com/deliprao/status/1734997263024329157



