NVIDIA CEO Jensen Huang Teams Up with DDN on Extreme Computing; DeepSeek R1 Model Accelerates AI Development

Nvidia founder and CEO Jensen Huang and Alex Bouzari, CEO of DDN, the world's largest privately held storage systems company, held an in-depth discussion on February 21 about the future of AI, exploring its impact across fields ranging from high-performance computing (HPC) to enterprise applications and digital twins.

In 2017, Nvidia set out to build a new supercomputing architecture but needed a more efficient way to handle data. Bouzari argued that the traditional data access model was no longer viable and called for a new AI storage architecture that was scalable, low-latency, distributed, and multi-cloud, and that could even minimize data movement by working on metadata instead of raw data. The idea was initially considered crazy, but after seven years of effort it became a reality.

With the explosion of AI applications, many companies are no longer focused solely on model training; they also need fast access to information once models are deployed. Huang pointed out that AI should not rely only on massive training datasets but should be able to retrieve "useful information," rather than raw data, at the point of use. This is precisely the problem DDN's Infinia product addresses: its Data Intelligence Layer transforms raw data into key information, enabling more efficient AI operations.

The key to this architecture is metadata, the labels and descriptions attached to the data. Huang explained that metadata is far more compact than the raw data it describes, can move quickly between systems, and significantly reduces computational cost and storage requirements. This not only makes AI run more smoothly but also lets companies extract value from their data faster, improving competitiveness.
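To make the idea concrete, here is a minimal sketch of metadata-first access (not DDN's actual Infinia API; the class names, fields, and query below are hypothetical): an application filters and selects on compact metadata records and only fetches the few large raw objects it actually needs, which is what keeps data movement and compute low.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: compact metadata records stand in for large raw objects.
@dataclass
class ObjectMeta:
    object_id: str
    tags: set = field(default_factory=set)  # labels describing the raw data
    size_bytes: int = 0                     # size of the raw object (moved only if needed)

class MetadataIndex:
    """Toy metadata layer: queries touch only small records, never the raw data."""
    def __init__(self):
        self._records: list[ObjectMeta] = []

    def add(self, meta: ObjectMeta) -> None:
        self._records.append(meta)

    def find(self, required_tags: set) -> list[str]:
        # Filtering happens entirely on metadata; the raw bytes stay where they are.
        return [m.object_id for m in self._records if required_tags <= m.tags]

index = MetadataIndex()
index.add(ObjectMeta("scan-001", {"mri", "patient-a"}, 2_000_000_000))
index.add(ObjectMeta("scan-002", {"ct", "patient-b"}, 1_500_000_000))

# Only matching object IDs come back; the multi-gigabyte raw scans are fetched
# later, and only if the application actually needs them.
print(index.find({"mri"}))  # -> ['scan-001']
```

The same pattern scales in the other direction too: because the metadata records are tiny, they can be replicated across clouds and sites cheaply, while the heavy raw data stays put.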

As Moore's Law slows, accelerated computing becomes crucial for AI development. Huang stated that Nvidia's GPU parallel computing architecture delivers extreme computing power, enabling AI to develop at scale. DDN's Infinia combines accelerated computing with AI learning mechanisms, so that data can be learned from automatically and transformed into usable information. In fields such as healthcare, finance, and smart cities, this technology can help companies obtain critical data quickly and further enhance AI decision-making.

Having moved from high-performance computing to enterprise AI, AI is now entering the era of digital twins, thanks to Nvidia's Omniverse platform. Huang gave the example of drug development, which previously required billions of dollars and years of time. With Omniverse, scientists can create digital twins of drug candidates in a virtual world and simulate various formulations and effects, greatly shortening research and development. Omniverse can also be applied in manufacturing, smart cities, and other fields, letting enterprises run simulations and tests in the digital world and significantly improving efficiency and accuracy. Bouzari added that the key to Omniverse's success is the intelligent data layer: companies need AI to turn large volumes of data into valuable information in order to fully exploit the advantages of digital twins.

Huang pointed out that, in the future, companies will build their own AI agents, which will become experts in individual departments, able to analyze data, make recommendations, and even collaborate with one another. For example, a supply chain management agent can exchange information with a finance agent to keep cash flow and production plans synchronized. DDN's Infinia plays a crucial role in this era of AI agents by letting them access and analyze critical data quickly, so that agents can make optimal decisions.
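A rough sketch of what such agent-to-agent coordination could look like (the agent classes and message shape below are hypothetical and not a specific NVIDIA or DDN product): the supply chain agent asks the finance agent for its current cash position before committing to an order, so the production plan never outruns the available cash.

```python
from dataclasses import dataclass

# Hypothetical departmental agents exchanging structured messages.
@dataclass
class CashPosition:
    available: float  # cash the finance agent is willing to commit

class FinanceAgent:
    def __init__(self, available_cash: float):
        self._cash = available_cash

    def report_cash(self) -> CashPosition:
        return CashPosition(available=self._cash)

class SupplyChainAgent:
    def __init__(self, finance: FinanceAgent):
        self._finance = finance

    def plan_order(self, unit_cost: float, desired_units: int) -> int:
        # Consult the finance agent before committing, so production plans
        # and cash flow stay synchronized.
        cash = self._finance.report_cash()
        affordable = int(cash.available // unit_cost)
        return min(desired_units, affordable)

finance = FinanceAgent(available_cash=50_000.0)
supply = SupplyChainAgent(finance)
print(supply.plan_order(unit_cost=120.0, desired_units=600))  # -> 416 units, capped by cash
```

In a real deployment each agent would sit behind a model and a data layer rather than a hard-coded number, but the coordination pattern is the same: structured queries between specialists instead of one monolithic model.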

Recently, DeepSeek released the R1 open-source AI reasoning model, attracting significant market attention. Huang said this does not mean demand for AI computing will fall; rather, it is a key driver accelerating AI progress. In the past, AI workloads were divided mainly into pre-training and inference, but post-training has now become more important and itself demands large amounts of computing resources. The R1 model lets AI perform reasoning more efficiently and improves the decision-making capabilities of AI agents. Bouzari added that Nvidia's CUDA platform and NIM microservices are driving AI adoption across industries, including life sciences, finance, and autonomous driving; in the future, AI agents will be ubiquitous.
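For teams that want to try an R1-style reasoning model, one common route is an OpenAI-compatible chat API. The sketch below assumes DeepSeek's publicly documented hosted endpoint and model name (a self-hosted open-source R1 checkpoint behind a compatible server would work the same way) and requires an API key; treat both identifiers as assumptions to verify against current documentation.

```python
import os
from openai import OpenAI  # pip install openai

# Sketch only: the base URL and model name are assumptions based on DeepSeek's
# public, OpenAI-compatible API; adjust them for your own deployment.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 reasoning model
    messages=[
        {"role": "user", "content": "Our supplier raised prices 8%. "
                                    "Should we pre-buy three months of inventory?"},
    ],
)

# The reasoning model returns its final answer as ordinary chat content.
print(response.choices[0].message.content)
```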

Should companies build their own AI or use cloud-based AI? Huang's answer was both. Companies can start with cloud-based AI, but if they want a competitive advantage in specific areas they still need to develop their own. Nvidia, for example, has built its own AI for chip design and supply chain management, because knowledge in these fields cannot be obtained directly from public cloud AI. This is also where DDN's Infinia plays a critical role, allowing companies to establish their own AI intelligence layer and strengthen AI decision-making.

Nvidia and DDN have long collaborated in HPC; now they are bringing AI into enterprise applications and pushing digital twin technology further. Huang thanked DDN for its contribution to AI development, saying that without DDN, Nvidia's supercomputers would not have been possible. Bouzari said Nvidia is leading AI into a new era and that the two companies will continue to deepen their cooperation, advancing AI in enterprise and digital twin applications.
