Breaking news

DeepSeek Expands Open-Source AI Strategy With New Code Release

Chinese AI startup DeepSeek is doubling down on open-source innovation, announcing plans to publicly release five new code repositories next week. In a post on social media platform X, the company described the move as “small but sincere progress” toward greater transparency in AI development.

“These humble building blocks in our online service have been documented, deployed, and battle-tested in production,” the company stated.

DeepSeek made waves last month when it unveiled its open-source R1 reasoning model, a system that rivaled Western AI models in performance but was developed at a fraction of the cost. Unlike many AI firms in China and the U.S. that guard their proprietary models, DeepSeek has positioned itself as a leader in open-source AI.

The company’s elusive founder, Liang Wenfeng, reinforced this philosophy in a rare interview last July, emphasizing that commercialization was not DeepSeek’s primary focus. Instead, he framed open-source development as a cultural movement with strategic advantages.

“Having others follow your innovation gives a great sense of accomplishment,” Liang said. “In fact, open source is more of a cultural behavior than a commercial one, and contributing to it earns us respect.”

The newly released repositories will provide infrastructure support for DeepSeek’s existing open-source models, enhancing their capabilities and accessibility. This follows the company’s Tuesday launch of Native Sparse Attention (NSA), a new algorithm designed to optimize long-context training and inference.

DeepSeek’s influence is growing rapidly. Since last month, its user base has surged, making it China’s most popular chatbot service. As of January 11, the platform had 22.2 million daily active users, surpassing Doubao’s 16.95 million, according to Aicpb.com, a Chinese analytics site.

With its latest commitment to transparency and collaboration, DeepSeek continues to challenge the AI industry’s dominant closed-source model, reshaping the future of artificial intelligence on a global scale.

The AI Agent Revolution: Can the Industry Handle the Compute Surge?

As AI agents evolve from simple chatbots into complex, autonomous assistants, the tech industry faces a new challenge: Is there enough computing power to support them? With AI agents poised to become integral in various industries, computational demands are rising rapidly.

A recent Barclays report forecasts that the AI industry can support between 1.5 billion and 22 billion AI agents, potentially revolutionizing white-collar work. However, the increase in AI’s capabilities comes at a cost. AI agents, unlike chatbots, generate significantly more tokens—up to 25 times more per query—requiring far greater computing power.

Tokens, the fundamental units of generative AI, are small chunks of text, whole words or word fragments, that models read and generate one at a time. The surge in token generation is driven by reasoning models, such as OpenAI’s o1 and DeepSeek’s R1, which break tasks into smaller, manageable steps and produce intermediate text for each one. As AI agents take on more complex tasks, token counts multiply, driving up demand for AI chips and computational capacity.
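To make the notion of a token concrete, here is a minimal illustrative sketch in Python. The whitespace-and-punctuation splitter below is a toy stand-in for the learned subword tokenizers (such as byte-pair encoding) that production models actually use; it is not any vendor’s real implementation.

```python
import re

def toy_tokenize(text):
    """Naive tokenizer: splits on word boundaries and punctuation.
    Real models use learned subword vocabularies (e.g. BPE), which
    often split rare words into several tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("AI agents break tasks into smaller, manageable chunks.")
print(tokens)
# ['AI', 'agents', 'break', 'tasks', 'into', 'smaller', ',',
#  'manageable', 'chunks', '.']
print(len(tokens))  # 10
```

Because every intermediate reasoning step an agent writes out is itself made of tokens, chains of such steps are what multiply per-query token counts.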

Barclays analysts caution that while the current infrastructure can handle a significant volume of agents, the rise of these “super agents” might outpace available resources, requiring additional chips and servers to meet demand. OpenAI’s ChatGPT Pro, for example, generates around 9.4 million tokens annually per subscriber, highlighting just how computationally expensive these reasoning models can be.
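The scale of that token growth can be sketched with back-of-envelope arithmetic from the figures above. The per-query chatbot baseline of 500 tokens is an assumed illustrative number, not from the Barclays report; only the 25x multiplier and the 9.4 million annual tokens are reported figures.

```python
# Reported figures from the article
agent_multiplier = 25             # agents generate up to ~25x more tokens per query
pro_tokens_per_year = 9_400_000   # ChatGPT Pro tokens per subscriber per year

# Assumed illustrative baseline: a plain chatbot reply of ~500 tokens
chatbot_tokens_per_query = 500
agent_tokens_per_query = chatbot_tokens_per_query * agent_multiplier
print(agent_tokens_per_query)     # 12500 tokens per agent-style query

# Under these assumptions, the reported annual Pro volume corresponds to
# roughly this many agent-level queries per subscriber per year:
queries_per_year = pro_tokens_per_year / agent_tokens_per_query
print(round(queries_per_year))    # 752
```

The point of the arithmetic is that a 25x per-query multiplier turns even modest usage into a large annual token bill, which translates directly into chip and server demand.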

In essence, the tech industry is at a critical juncture. While AI agents show immense potential, their expansion could strain the limits of current computing infrastructure. The question is, can the industry keep up with the demand?
