Shaking up the AI industry: Yang Yuanqing and Huang Renxun join forces to expand their partnership (full conversation transcript)

Zhitongcaijing · 10/16 02:49

The Zhitong Finance App learned that on October 16, Lenovo Group (00992) hosted its annual tech event, Tech World, in Seattle, USA. The conference brought together top global AI leaders including Jensen Huang (Huang Renxun), Lisa Su, Mark Zuckerberg, Satya Nadella, Pat Gelsinger, and Cristiano Amon. Among the announcements, the multiple collaborations between Lenovo and NVIDIA drew particular industry attention: the two sides announced a further expansion of their cooperation and jointly released the Lenovo Hybrid AI Advantage suite along with a GB200 liquid-cooled AI server billed as the world's "coolest."

The following is a transcript of the press conference conversation:


Yang Yuanqing: Good afternoon! Thank you for your continued attention and support for our enterprise AI solutions. As a quick recap, today we shared our understanding of a future defined by hybrid artificial intelligence, showcased our innovations in personal AI and enterprise AI, and, most importantly, reaffirmed our vision of "Smarter AI for All." None of this would be possible without a very important partner. Please welcome NVIDIA CEO Huang Renxun!

Huang Renxun: Thank you, Yuanqing! I'm very happy to be here.

Yang Yuanqing: We discussed hybrid artificial intelligence at Tech World last year, and since then both sides have made a lot of progress. The audience would love to hear your thoughts: what's next for hybrid AI, and how is customer adoption going?

Huang Renxun: First, I'm very happy to be here again to announce a series of important new initiatives with our partner Lenovo. Yuanqing and I have known each other for a long time; you could even say we grew up together. Together we have lived through several computing revolutions: first the PC revolution, then the Internet revolution, then the mobile-cloud revolution. Now we are reshaping the entire architecture of computing on an unprecedented scale.

What we used to call "programming" is now "machine learning." Programming is done on the CPU, while machine learning is done on the GPU. Remarkably, programming gave rise to software and fueled an entire industry; now machine learning is creating artificial intelligence, which will be the biggest industrial revolution we have ever seen. Over the past 12 months we have seen amazing progress across every field. Every business, every industry, and every country has realized that its data can be refined into digital intelligence for that country, company, or industry.

Of course, one of the biggest recent developments is the large model Llama 3 that Mark Zuckerberg just mentioned, which truly changed the rules of the game. With Llama 3, every company now has access to AI, provided it has the necessary infrastructure and architecture. With AI computers, AI infrastructure, and a critical software stack, we can turn large models into artificial intelligence. AI is closely tied to large models: we need large language models, and very sophisticated ones at that, but remember, a large language model is only one component of artificial intelligence. As you just saw in the announcement, the complete architecture our two companies developed together as partners exists precisely to establish the required infrastructure, software stack, best practices, and blueprints. That is how we turn large language models into intelligence that actually helps us get work done.

Yang Yuanqing: So what are your views on agentic AI and physical AI?

Huang Renxun: On agentic AI: when it comes to artificial intelligence, ladies and gentlemen, we hope to produce millions or even billions of such AI agents. At NVIDIA we call them "Toy Jensen" dolls. They will run around helping you complete all kinds of tasks; whatever you need done, they will handle it. Broadly speaking, an artificial intelligence is essentially a robot.

In the future we will have digital robots, which we call agents. They can understand your instructions, grasp what the instructions mean, break them down into specific actions, use tools, retrieve proprietary information or whatever information they have access to, complete tasks, and act when necessary. They can therefore perceive, reason, and act, and this basic loop is the core loop of robotics. As a result, we will have information robots, which we call agents.

We will also have physical robots, which are essentially AI that understands the physical world, alongside agents that understand the world of information. These two kinds of artificial intelligence, agentic AI and physical AI, will become the cornerstone of industries worldwide. We will have AI "colleagues." These AI "colleagues" may be good at marketing or, in our company, at designing chips, programming software, running verification, or building agents that help us manage the supply chain.

These agents work alongside all of our employees, greatly increasing our productivity. What we are ultimately after is "superhuman" productivity, right? Every employee will get timely support from these agents, and productivity will rise accordingly. We will keep doing the same ourselves. The biggest opportunity among these is industrial AI, which is closely tied to robotics, and we are partnering with Lenovo in this field.

Yang Yuanqing: I see the same trends in agentic AI and physical AI. To seize this opportunity, Lenovo and NVIDIA have a major launch today: the "Lenovo Hybrid AI Advantage" suite.

It is an end-to-end AI platform for developing and deploying AI in this new era. It is built on industry-leading infrastructure spanning AI devices, AI servers, storage, edge computing, and public and private clouds, giving enterprises a place to store, clean, and organize their data. Data, algorithms, and accelerated computing together drive the adoption of enterprise AI, optimizing processes, improving decisions, and raising productivity.

Across all of these elements, services play an important role: managing the entire lifecycle of the infrastructure, including design, deployment, scaling, and maintenance; improving data analysis and governance capabilities; consulting on and tuning AI models; and expanding the AI software ecosystem for customers. On top of the hybrid AI factory concept, we also provide the Lenovo AI application library. Our strategy combines modularization with customization so we can respond quickly to customer needs and tailor solutions for them.

Together, all of these pieces form the complete "Hybrid AI Advantage" system built in collaboration between Lenovo and NVIDIA, and we will keep working with you to make it more mature and complete. So, would you like to say more about how NVIDIA's technology helped build this platform?

Huang Renxun: Yuanqing, when we first met, it was during the PC revolution. The PC architecture back then was simple: a CPU, an operating system, and applications. That computing model was revolutionary at the time, but compared with today it was very simple. It has taken the entire industry a very long time to give birth to artificial intelligence because it is extremely complex. Its architecture includes a true supercomputer as the computing infrastructure, running algorithms and distributed computation over fabrics such as NVLink/NVSwitch, InfiniBand, or Spectrum-X networks. This distributed computing power is realized, and made efficient, through very sophisticated software.

So that is the computer. On top of it sits the large language model, which may well be the new operating system for the world of artificial intelligence. And on top of the large language model are many application libraries. A simple way to understand these libraries is as the process of onboarding a new AI colleague: the AI colleague is hired to perform a task, and your job is to create the data that corrects it and helps it onboard. It is your AI agent, and you need to teach it very specific skills.

You provide them with the information they need to do their work, and you evaluate them the same way you evaluate employees. You deploy them, put guardrails around them, protect them, and make sure they perform the functions they were taught to perform. And Blackwell makes these agents better and better over time. This complete application library is fully accelerated and runs on GPUs.

This makes it possible for every company to bring in large language models and turn them into its own agents that perform the specific tasks you want them to perform, such as customer service, AI database search, and many other kinds of applications.

Yang Yuanqing: In this regard, NVIDIA is indeed delivering a great deal of value, with a corresponding platform at every layer. We want to meet our customers' higher goals: they expect not only more accelerated computing capability but also to hit their energy-efficiency targets. So how can NVIDIA help meet this kind of customer demand?

Huang Renxun: In the early stages of a new computing revolution, your best choice is to accelerate your own roadmap. Here is why: when performance can be doubled or tripled (and this is an annual increase), we effectively reduce the cost of AI, reduce AI's workload and energy consumption, and at the same time increase AI's ability to generate revenue. These new computers are like efficient factories.

Unlike ordinary data centers that store files, these data centers are AI factories that produce tokens. You want those tokens, which are in effect the artificial intelligence itself, to be produced as fast as possible.

If we can double or triple the speed every year, we can help you cut costs and grow revenue, so this matters a great deal. We have this ability because we have built the entire AI factory end to end: CPUs, GPUs, NVLink/NVSwitch, InfiniBand switches, and networking chips through to Ethernet switches and networking technology.

We can build an entire data center and AI factory end to end, and we can also develop the software end to end. Because of this, we are able to build new AI factories every year that double performance while reducing cost, advancing and democratizing AI as quickly as possible.

Yang Yuanqing: All of our R&D teams are on a mission to design for sustainability.

Huang Renxun: Today, speed is sustainability, speed is performance, and speed is energy efficiency.

Yang Yuanqing: There's no doubt that NVIDIA is doing a great job on sustainability. Lenovo brings core technology too. We have led in liquid cooling for ten years, and we are now on the sixth generation of our Neptune liquid-cooling technology. So let's now show a product that uses the sixth-generation Neptune liquid-cooling system.

Huang Renxun: Of course, please. You know, Yuanqing, Lenovo's years of effort building high-performance computers have been worth it, haven't they?

Yang Yuanqing: Of course. We have been researching liquid cooling for over a decade. This is the ThinkSystem SC777 system, designed by Lenovo and equipped with the NVIDIA Grace Blackwell GB200. It is 100% liquid-cooled, so it requires no fans and no dedicated data-center air conditioning. The system installs in a standard rack and uses a standard power supply, so customers can purchase as little as one tray at a time. It also includes NVIDIA NVLink interconnect technology and supports NVIDIA networking and NVIDIA AI Enterprise software. What do you think?

Huang Renxun: Yes, it's really beautiful. For an engineer, this is very sexy.

Yang Yuanqing: Thank you, Jensen, and thank you for this important partnership. Beyond our collaboration on enterprise data centers, we have also partnered with NVIDIA in automotive computing to launch a platform built on the NVIDIA DRIVE AGX Thor with the new Blackwell architecture. Now let's join Jensen in showcasing Lenovo's smarter, more powerful DCU for AI computing.

Huang Renxun: Yuanqing, do you know how amazing this is? This computer has exactly the same architecture as that one, and that computer was used to create the "brain" that runs on this one. The car is the most-produced robot in the world.

Yang Yuanqing: Yes, that's right, a robot on wheels.

Huang Renxun: It can perceive the world, plan its actions, and control the steering wheel, just like a robot. These will be the first robots produced at the highest volume in the world. In the future we will have all kinds of robotic systems, and this specialized computer can serve them all. So yes, it's very exciting. Thank you.

Yang Yuanqing: Anything else to add?

Huang Renxun: Go Lenovo! Thank you.

Yang Yuanqing: We are excited to partner with NVIDIA. Thank you, Jensen.