Jensen Huang's Speech at Caltech, June 2024 (Part 1)

It really makes me cringe listening to all that. Thank you for that kind introduction, but I hate hearing about myself. And just as a show of hands, well, maybe if you could just applaud: how many of you know who NVIDIA is? And how many of you know what a GPU is? Okay, good, I don't have to change my speech.


Ladies and gentlemen, President Rosenbaum, esteemed faculty members, distinguished guests, proud parents, and above all, the 2024 graduating class of Caltech. This is a really happy day for you guys. You got to look more excited. You know you're graduating from Caltech. This is the school of the great Richard Feynman, Linus Pauling, and someone who's very influential to me and our industry, Carver Mead.


Yeah, this is a very big deal. Today is a day of immense pride and joy. It is a dream come true for all of you, but not just for you because your parents and families have made countless sacrifices to see you reach this milestone. So let's take this moment and congratulate them, thank them, and let them know you love them.


You don't want to forget that because you don't know how long you're going to be living at home. You want to be super grateful today. As a proud parent, I really loved it when my kids didn't move out, and it was great to see them every day, but now that they've moved out, it makes me sad. So hopefully you guys get to spend some time with your parents.


Your journey here is a testament to your character, your determination, and your willingness to make sacrifices for your dreams, and you should be proud. The ability to make sacrifices, to endure pain and suffering, you will need these qualities in life. You and I share some things in common.


First, both chief scientists of NVIDIA were from Caltech. And one of the reasons why I'm giving this speech today is because I'm recruiting. And so I want to tell you that NVIDIA is a really great company, I'm a very nice boss, universally loved, come work at NVIDIA. You and I share a passion for science and engineering, and although we're separated by about 40 years, we are both at the peaks of our career.


For all of you who have been paying attention to NVIDIA and myself, you know what I mean. It's just that in your case, you'll have many, many more peaks to go. I just hope that today is not my peak. Not the peak. And so I'm working as hard as ever to make sure that I have many, many more peaks ahead.


Last year, I was honored to give the commencement address at Taiwan University, and I shared several stories about NVIDIA's journey and the lessons that we learned that might be valuable for graduates. I have to admit that I don't love giving advice, especially to other people's children. And so my advice today will largely be disguised in some stories that I've enjoyed, and some life experiences that I've enjoyed.


I'm the longest-running tech CEO in the world today, I believe. Over the course of 31 years, I managed not to go out of business, not get bored, and not get fired. And so I have the great privilege of enjoying a lot of life's experiences, starting from creating NVIDIA, from nothing, to what it is today.


And so I spoke about the long road of creating CUDA, the programming model that we dedicated over 20 years to inventing and that is revolutionizing computing today. I spoke about a quite public, canceled Sega game console project we worked on, where intellectual honesty, something that I know Richard Feynman cared very deeply about and spoke quite often about, and humility saved our company. And I spoke about how a retreat, a strategic retreat, was one of our best strategies.


All of these are counterintuitive lessons that I spoke about at the commencement. And I encouraged the graduates to engage with AI, the most consequential technology of our time. I'll speak a little bit more about that later, but all of you know about AI. It's hard not to be immersed in it, surrounded by it, and surrounded by a great deal of discussion about it.


And of course, I hope that all of you are using it and playing with it, with results that are sometimes magical, sometimes disappointing, and sometimes surprising. But you have to enjoy it, you have to engage with it, because it's advancing so quickly. It is the only technology that I've known that is advancing on multiple exponentials at the same time. And so the technology is changing very, very quickly.


So I advised the students at Taiwan University to run, don't walk, and engage with the AI revolution. And yet, one year later, it's incredible how much it's changed. And so today, what I wanted to do is share with you my perspective, from my vantage point, of some of the important things that are happening that you're graduating into.


And these are extraordinary things that are happening that you should have an intuitive understanding of, because it's going to matter to you, it's going to matter to the industry. And hopefully, you take advantage of the opportunity ahead of you. The computer industry is transforming from its foundations, literally from the studs. Everything is changing from the studs on up, across every layer, and soon, every industry will also be transformed.


The Importance of Computing
And the reason for that is quite obvious, because computers today are the single most important instrument of knowledge. And it's foundational to every single industry and every field of science. If we are transforming the computer so profoundly, it will, of course, have implications in every industry. And I'll talk about that in just a little bit.


And as you enter industry, it's important you know what's happening. Modern computing traces back to the IBM System/360. That was the architecture manual that I learned from. It's an architecture manual that you don't need to learn from; a lot of better documentation and better descriptions of computers and architecture have been presented since.


But the System/360 was incredibly important in its time, and in fact, the basic ideas of the System/360, its principal ideas, architecture, and strategy, are still governing the computer industry today. And it was introduced a year after my birth. In the 80s, I was among the first generation of VLSI engineers who learned to design chips from Mead and Conway's landmark textbook.


And I'm not sure if it's still being taught here. It should be. Introduction to VLSI Systems, based on Carver Mead's pioneering work here at Caltech on chip design methodologies, was the textbook that revolutionized IC design. It enabled our generation to design supergiant chips and ultimately the CPU.


The CPU led to exponential growth in computing. The performance gains, the incredible technology advances known as Moore's Law, fueled the information technology revolution. The industrial revolution that we are part of, that my generation was part of, saw the mass production of something the world had never seen before: the mass production of something that was invisible and easy to copy, the mass production of software.


And it led to a $3 trillion industry. When I sat where you sat, the IT industry was minuscule, and the concept that you could make money selling software was a fantasy. And yet today, it's one of the most important commodities, technologies, and product creations that our industry produces.


However, the limits of Dennard scaling, of transistor scaling, and of instruction-level parallelism have slowed CPU performance gains. And this slowdown is happening at a time when computing demand continues to grow exponentially. If this exponentially growing gap between the demand for computing and the capabilities of computers is not addressed, the resulting inflation in computing energy consumption and cost would eventually stifle every industry. We see very clear signs of computing inflation as we speak.


And after two decades of advancing NVIDIA's CUDA, NVIDIA's accelerated computing offers a path forward. That's the reason why I'm here. Because finally, the industry realized the incredible effectiveness of accelerated computing, at precisely the time that we're witnessing computing inflation after several decades. By offloading time-consuming algorithms to a GPU that specializes in parallel processing, we routinely achieve 10, 100, sometimes 1,000-fold speedups, saving money and energy.
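
To make that offloading concrete, here is a minimal sketch, not NVIDIA's code or anything from the speech, of what moving a data-parallel loop onto the GPU looks like in CUDA. The kernel name and sizes are purely illustrative; the point is that a loop a single CPU core would walk through a million times is instead executed by thousands of GPU threads at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical example kernel: each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);          // unified memory, visible to both CPU and GPU
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Offload: launch enough 256-thread blocks to cover all n elements in parallel.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);           // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiled with nvcc and run on a CUDA-capable GPU, the same arithmetic is spread across 4,096 blocks of 256 threads; the large speedups described above come from applying this pattern to far heavier algorithms than this toy one.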


We now accelerate application domains from computer graphics and ray tracing, of course, to gene sequencing, scientific computing, astronomy, quantum circuit simulation, SQL data processing, and even data science with pandas. Accelerated computing has reached a tipping point. That is our first great contribution to the computer industry, our first great contribution to society: accelerated computing.


It now gives us a path forward for sustainable computing, where cost will continue to decline as computing requirements continue to grow. A hundred-fold savings, a hundred-fold of anything in time or cost or energy, that accelerated computing opened up would surely trigger a new development somewhere else. We just didn't know what it was until deep learning came to our consciousness.


A whole new world of computing emerged. Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever used NVIDIA CUDA GPUs to train AlexNet and shocked the computer vision community by winning the 2012 ImageNet Challenge. This was the big moment, the big bang of deep learning, a pivotal moment that marked the beginning of the AI revolution.


Our decisions after AlexNet are something worth taking note of; they transformed our company and likely everything else. We saw the potential of deep learning and believed, just believed through principled thinking, through our own analysis of the scalability of deep learning, that the approach could learn other valuable functions. That maybe deep learning is a universal function learner. And we thought about how many problems are difficult or impossible to express using fundamental first principles.


And so when we saw this, we thought this is a technology we really have to pay attention to, because it is potentially limited only by model and data scale. However, there were challenges at the time. This was 2012, or shortly after 2012.


How could we explore the limits of deep learning without building these massive GPU clusters? At the time we were a rather small company, and building these massive GPU clusters could cost hundreds and hundreds of millions of dollars. And even if we did, there was no assurance that it would be effective when we scaled.


However, no one knew how far deep learning could scale, and if we didn't build it, we'd never know. This is one of those "if you build it, will they come?" situations. Our logic was: if we don't build it, they can't come.


And so we dedicated ourselves, based on our first-principles beliefs and our analysis. And we got ourselves to the point where we believed this was going to be so effective, and when the company believes something, we should go act on it.


So we dove deep into deep learning, and over the next decade, we systematically reinvented everything. We reinvented every computing layer, starting with the GPU itself. We invented the modern GPU, which is very different from the GPU of the past that we invented in the first place, and we went on to reinvent just about every other aspect of computing: the interconnects, the systems, the networking, and of course, the software.


We invested billions. We invested billions into the unknown. Thousands of engineers worked for a decade on advancing and scaling deep learning, without really knowing how far we could take the technology. And we designed and built supercomputers to explore the limits of deep learning and AI.
