Technology

How China’s new AI model DeepSeek is threatening U.S. dominance


A little-known AI lab out of China has ignited panic throughout Silicon Valley after releasing AI models that can outperform America’s best despite being built more cheaply and with less-powerful chips. 

DeepSeek, as the lab is called, unveiled a free, open-source large language model in late December that it says took only two months and less than $6 million to build, using reduced-capability chips from Nvidia called H800s. 

The new developments have raised alarms on whether America’s global lead in artificial intelligence is shrinking and called into question big tech’s massive spend on building AI models and data centers. 

In a set of third-party benchmark tests, DeepSeek’s model outperformed Meta’s Llama 3.1, OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 in accuracy on tasks ranging from complex problem-solving to math and coding. 

DeepSeek on Monday released R1, a reasoning model that also outperformed OpenAI’s latest o1 in many of those third-party tests.

“To see the DeepSeek new model, it’s super impressive in terms of both how they have really effectively done an open-source model that does this inference-time compute, and is super-compute efficient,” Microsoft CEO Satya Nadella said at the World Economic Forum in Davos, Switzerland, on Wednesday. “We should take the developments out of China very, very seriously.” 

DeepSeek also had to navigate the strict semiconductor restrictions that the U.S. government has imposed on China, cutting the country off from access to the most powerful chips, like Nvidia’s H100s. The latest advancements suggest DeepSeek either found a way to work around the rules, or that the export controls were not the chokehold Washington intended.

“They can take a really good, big model and use a process called distillation,” said Benchmark General Partner Chetan Puttagunta. “Basically you use a very large model to help your small model get smart at the thing you want it to get smart at. That’s actually very cost-efficient.”
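The distillation process Puttagunta describes can be sketched as training a small "student" model to match a large "teacher" model's softened output distribution. The following is a minimal illustrative sketch in NumPy; the function names and the specific loss formulation are our assumptions for illustration, not a description of DeepSeek's actual training code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # A temperature above 1 softens the distribution, exposing the
    # teacher's relative confidence across non-top answers.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: the student is rewarded for matching the teacher's
    # soft targets, not just its single top answer.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

# Toy example: the loss is smallest when the student's logits already
# agree with the teacher's, and grows as they diverge.
teacher = np.array([3.0, 1.0, 0.2])
aligned = distillation_loss(teacher, teacher)
misaligned = distillation_loss(np.array([0.2, 1.0, 3.0]), teacher)
```

In practice this loss is minimized by gradient descent over many training examples; the cost efficiency Puttagunta points to comes from the student being far smaller than the teacher while inheriting much of its behavior.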

Little is known about the lab and its founder, Liang Wenfeng. DeepSeek was born of a Chinese hedge fund called High-Flyer Quant that manages about $8 billion in assets, according to media reports.

But DeepSeek isn’t the only Chinese company making inroads. 

Leading AI researcher Kai-Fu Lee has said his startup 01.ai trained its model for only $3 million. TikTok parent company ByteDance on Wednesday released an update to its model that it claims outperforms OpenAI’s o1 in a key benchmark test. 

“Necessity is the mother of invention,” said Perplexity CEO Aravind Srinivas. “Because they had to figure out work-arounds, they actually ended up building something a lot more efficient.”

