Chinese AI firm releases DeepSeek V3, a new leader in open-source AI models


Chinese firm DeepSeek has released a new open-source model, DeepSeek V3, which outperforms existing leading open-source models, as well as closed models like OpenAI’s GPT-4o, on several benchmarks. With 671 billion parameters, the AI model can generate text and code and perform related tasks.

The team used a mixture-of-experts (MoE) architecture, which comprises multiple smaller expert networks, each specialised for different types of tasks. This reduces hardware costs: for each token of a prompt, only the relevant experts are activated, not the entire large language model. Of the 671 billion total parameters, about 37 billion are activated per token.
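The routing idea described above can be illustrated with a minimal sketch. This is not DeepSeek V3's actual architecture (which uses shared and routed experts at far larger scale); the expert count, top-k value, and toy gating function below are all hypothetical, chosen only to show how a router activates a few experts per input.

```python
# Minimal mixture-of-experts (MoE) routing sketch. Everything here is a
# toy stand-in: experts are scalar functions, and the gating rule is
# arbitrary. Only the general pattern (score -> pick top-k -> run just
# those experts) mirrors how MoE models save compute.
import random

NUM_EXPERTS = 8   # hypothetical small expert count for illustration
TOP_K = 2         # activate only the top-k experts per input

def make_expert(seed: int):
    """Each 'expert' is a tiny function standing in for a neural network."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)
    return lambda x: w * x

experts = [make_expert(i) for i in range(NUM_EXPERTS)]

def router_scores(x: float) -> list[float]:
    """Toy gating function: assign each expert a score for this input."""
    return [((i + 1) * x) % 1.0 for i in range(NUM_EXPERTS)]

def moe_forward(x: float):
    scores = router_scores(x)
    # Select the top-k scoring experts; the rest stay inactive,
    # so most of the model's parameters are never touched.
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    total = sum(scores[i] for i in top)
    # Combine only the selected experts' outputs, weighted by score.
    output = sum((scores[i] / total) * experts[i](x) for i in top)
    return output, top

out, active = moe_forward(0.37)
print(f"activated experts: {active}")
```

The design point is that compute per input scales with `TOP_K`, not with `NUM_EXPERTS`, which is why total parameter count can grow far beyond what any single forward pass pays for.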

Notably, DeepSeek has said that training the AI model took about 2.788 million H800 GPU hours, or an estimated $5.57 million, assuming a rental price of $2 per GPU hour. This is a fraction of what Big Tech companies in the U.S. have reportedly been spending to train comparable LLMs.

According to a technical paper released alongside the announcement, the model surpassed open-source models including Llama 3.1 405B and Qwen2.5 72B on most benchmarks. It also beat GPT-4o on most benchmarks, barring SimpleQA, which focuses on English, and FRAMES.

Only Anthropic’s Claude 3.5 Sonnet beat DeepSeek V3 on most benchmarks, including MMLU-Pro, IF-Eval, GPQA-Diamond, SWE-bench Verified and Aider-Edit.

The code is currently available on GitHub, and the model can be accessed under the company’s model license.


