
2019 Keynote Discussion: Sam Altman and Vinod Khosla

Transcript

[00:00:02] Vinod Khosla: Now that we're finished with the opening, I'd like to start with the goals of OpenAI. First, tell people what you're doing — some people may not be totally familiar with it.

[00:00:13] Sam Altman: Okay. So, unlike most companies trying to build artificial intelligence, where people take deep learning and apply it to narrow ideas — and it works sometimes astonishingly well, usually at least pretty well — we are trying to build one intelligence that is smarter and more capable than humans in every way. We're trying to use this to solve the biggest problems facing the world. I think we will enable hundreds of billions, trillions of dollars of new businesses, because we'll have such powerful artificial intelligence. And someday, I think we will build the descendants of humanity and launch them off to colonize the universe. We're trying to do this in a way where we make it go really well for humanity, where a large part of the value we create gets shared with the world. This is such a fundamental technology that we had to design a new corporate structure so that our investors can make a great return, and our employees too, but also so that we figure out a way that the whole world wins. I think this is different. If we're successful in this quest — and it's very hard — I think it will be the most significant technological transformation in human history. I think it will eclipse the agricultural revolution, the Industrial Revolution, and the Internet revolution all put together. And so we're trying to think about how we want the world to go.

[00:01:32] Sam Altman: The fundamental driver of all of this has been this incredible increase in computing power and a few big breakthroughs in algorithms. Every year for the last six, the biggest neural network the industry can train has grown by a factor of 10. That'll continue for the next six, at least, at which point we'll be near one human brain.
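[Editor's aside: a minimal back-of-the-envelope sketch of the arithmetic behind that claim. The 10x-per-year growth factor is the figure quoted in the talk; the GPT-2 starting size and the synapse count used as a stand-in for "one human brain" are illustrative assumptions, not figures from the talk.]

```python
# Back-of-the-envelope sketch of the scaling claim above -- illustrative only.
start_params = 1.5e9     # ~GPT-2 scale as a 2019 starting point (assumption)
growth_per_year = 10     # the 10x-per-year figure quoted in the talk
brain_synapses = 1e14    # commonly cited rough synapse count (assumption)

params = start_params
year = 2019
while params < brain_synapses:
    params *= growth_per_year
    year += 1
print(f"At 10x per year, ~{params:.0e} parameters by {year}")

# Six 10x steps give 10**6 = 1,000,000x overall growth, which is the sense in
# which the talk says we'd be "near one human brain" in about six more years.
```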

[00:01:55] Vinod Khosla: Great. Best guess: where do you think we'll be in five years, or 10 years?

[00:02:04] Sam Altman: I think in 10 years we will have systems that sound today like they're impossible. I think they'll be smarter than humans in every way. Machines will actually feel to most people like they can think subjectively. I think there will be a number of businesses started — hopefully some spun out of OpenAI — that look like they're on a path to be bigger than any corporation that currently exists in the world. I think we will have systems that we can talk to in natural language that do complicated tasks. An example that really strikes me is when I see a very small child — a one-and-a-half- or two-year-old, or even younger sometimes — pick up a magazine and try to treat it like an iPad, with touch gestures, because that's how they think the world is supposed to work. I think that shows how quickly we adapt to a new world. And today you see these kids try to talk to everything like it's Alexa. I think 10 years from now, children born then will assume that all systems are actually intelligent. They will treat computers and humans very similarly.

[00:03:24] Vinod Khosla: Well, that's exciting. Let's say you're being optimistic. Let me narrow the definition of AGI, so we don't need an AGI philosopher or an AGI Steven Pinker. If you talk only about the economically valuable functions humans do — work on assembly lines, being an oncologist, being a doctor — how much easier does the problem get? How much more certain are you that we'll get there in 10 years?

[00:04:03] Sam Altman: I think more than 50% of work today is repetitive, non-creative work that does not require the deep emotional connection you want in some scenarios. And in those cases, my confidence — let's say out 20 years — is extremely high that AI will be able to do all of that. That gets much simpler. We don't need any more technological breakthroughs for that; we just need to scale the systems and the work that we've already done at OpenAI as it propagates out into the world, and we figure out how we're going to build business models around it — other people will, too. That alone, I think, is enough to get there. I think the easiest layup right now in all of startup investing is to take narrow AI, apply it to every vertical, and just do what humans do — generally the repetitive parts. It's not the exciting part of the work; it's not the creative fire.

[00:05:04] Vinod Khosla: Mmm. So—

[00:05:06] Sam Altman: I think this will be the biggest trend in startups in the next decade, let's say.

[00:05:10] Vinod Khosla: So, to bridge what I said: the last 10 years were about building everything mobile-first; the next 10 years will be about building everything AI-first. Now, very few people are actually doing that today. Everybody who talks about AI in their startup—

[00:05:31] Sam Altman: It's almost always bullshit. It's become the buzzword. There's always this buzzword that, I think, sort-of-mediocre founders use because they think it's going to get them funding. It was "Facebook apps," it was "blockchain," it was "big data," and a whole bunch of other things. And I'm very skeptical when people say, you know, "We're AI for X" — usually, if they really believe it, they don't tell anyone, and if they say it, that usually means they're not. But I always pay a lot of attention to what the smartest college freshmen are going to spend their time learning, and they're all very into AI — they're all very into, sort of, applying machine learning to existing problems. And so I do think you can see this over the horizon.

[00:06:14] Vinod Khosla: So I'm going to flash up a slide — hopefully it pops up; somebody in the back there. I have a slide, which you can see in front of here, of the top 10 employer categories in the United States in 2017. It's the usual list: retail salespersons, cashiers, office clerks, food prep.

[00:06:44] Sam Altman: Yeah, no surprises there.

[00:06:46] Vinod Khosla: No surprises there. With relatively high certainty, which of these categories will not be replaceable in the next 10 years? I should have given you a heads-up, but—

[00:06:59] Sam Altman: No problem. So, a general statement first: the rate of technological job change is actually higher than most people intuit. It's a little bit spiky, but it averages out, over the last few centuries, to roughly 50% of jobs turning over every 75 years. And we as a society, although we're always anxious about it, always find a way around it. Now, there are two cases with AI. One is that this is technology: it will eliminate existing jobs and we'll find new ones. The other is that this is a new life form, and it will do to us what the Industrial Revolution did to horses, or whatever. I'm sympathetic to both arguments, and I can't say anything with more certainty than "I don't know." But I believe the human desire for status and for feeling superior to each other seems to be endless, so I assume we will find new things to do. But they will look very different from work today — more different than work on one side or the other of the Industrial Revolution. I think we can handle the job changeover. The question is: if it all comes in, like, 10 or 20 years, can we handle that? That has not been tested yet. Each major technological revolution has compressed the time frame in which it happens, but it's never compressed inside of one generation, and that feels like something new — if people actually, in one lifetime, in one career, have to change what they do, not just society changing. I would guess that some version of teachers is pretty safe, because there is something about the human connection, and until we get to real AGI, that one feels pretty good.

[00:08:42] Sam Altman: One that I would say feels pretty bad is cashiers.

[00:08:46] Vinod Khosla: Yes. And if you think about it — if most of the jobs in most of the top 20 employment categories are replaceable, or plausibly replaceable, if not already replaced, within the next 10 years — the economic implications for our society are very large. Talk about your personal motivation for why you're working on OpenAI. You've given up all the other things.

[00:09:16] Sam Altman: Look, when I was 18, I made this list of five problems I wanted to help contribute to in my life. And AI was at the top then, and it's been at the top for a while. But until more recently, I didn't believe that I was actually going to be able to meaningfully work on it. And I had this very great job at YC. I always want to work on the most highly leveraged thing I can do, where I feel like I contribute the most to innovation and to improving the world. And for a long time, I thought that was YC — which makes sense, right? Because we fund hundreds of companies a year, many of them go on to do great things, and we have a significant impact on the startup movement in general. But when I realized that I actually, truly believe AGI is going to happen in the not-distant future, it was very difficult for me to be motivated to spend the majority of my time on anything else, because I think this is the platform through which we will solve all the problems that I care about. If you can truly invent AGI, it's the last thing you need to invent.

[00:10:21] Vinod Khosla: Just elaborate on that — the last thing we need to invent.

[00:10:24] Sam Altman: Yeah — because then it can do anything. Like, I think—

[00:10:26] Sam Altman: We will figure everything out.

[00:10:28] Sam Altman: We will use AGI, I hope, to solve healthcare, solve climate, educate every kid on earth.

[00:10:36] Vinod Khosla: There's a fallacy, in my view — and I don't know whether you agree. AI people say, "Well, we'll automate the low-skill jobs." I would argue an AI judge or an AI oncologist is far easier to build than a warehouse worker.

[00:10:56] Sam Altman: Yeah. I think people are surprised by this: something like 80% of the brain goes to processing sensory input and controlling the body. Only 20% is for thinking as we think of it. The hardware has had much longer to evolve. It's incredible.

[00:11:17] Vinod Khosla: Our fingers can sense a lot of things. Yeah — we'll come back to that. It's hard for people to imagine that you could have AI judges, AI personal lawyers for every person on the planet, an AI primary care physician for every person on the planet, 24/7, no appointments needed. Here's a place where people get surprised: we have a startup doing art, and a year ago, in the lobby, we had five pieces done by artists — purchased or being sold in galleries for more than $10,000 — and five pieces done by AI. Nobody could guess which were the AI artists and which were the human artists.

[00:12:04] Sam Altman: You know, people love to say that art is the thing that will stay human, that AI can't do that. We recently released something called MuseNet. It took the same technology we used for our language model at OpenAI and applied it to music. So we have this language model that's pretty impressive and getting more impressive every month, that can do unsupervised language modeling. And one person said, "What if I do that for music?" She put it together, trained it on a bunch of music from the Internet, and got incredible results, and we made it available for some period of time. And I heard from a number of people that they would rather listen to that than human music. It got really good, and it was endless. You know, if you love Rachmaninoff, you could hear as many Rachmaninoff-style pieces as you ever wanted — it never ran out; it was new every time. And I think we're going to learn something about these things that we consider magically human.

[00:13:10] Vinod Khosla: You know, one of the things people don't realize is, if you have 10 different styles of music you like, the AI could figure out the features of the music you like and actually custom-synthesize music — a personal musician for every person — because it's what their brain responds to emotionally. Would you agree?

[00:13:32] Sam Altman: Yeah, I think everyone is going to have customized music. We've seen this already with some online services, where everyone gets a customized version, and I think that trend is just going to keep going.

[00:13:43] Vinod Khosla: I suspect we're surprising quite a few people, but let me go a little further into the surprise. Let's talk about AGI doing far more than what humans can do. So, the flip question: what is it that humans cannot do that AI — that AGI — will be able to do? Give me some examples.

[00:14:05] Sam Altman: A meta-comment first about how to think about intelligence. It's very hard to think about how much smarter we really are than other primates. It feels like a lot, right? It feels like we walked out of the trees, or wherever you want to say, and we're just unbelievably more intelligent than our nearest relatives, because we can discover physics and all this stuff we've built on. We dug stuff out of the ground and figured out what to do with it, and at some point we got computers and phones and buildings and everything. At this point, we have this sort of intelligence outside of biology: we've created a society, a body of knowledge, and a set of tools that every generation gets to build on. It's this incredible exponential curve, and we feel so smart. I suspect we will learn that the limits of intelligence — although I expect they exist somewhere, because of the speed of light in a computer system, if nothing else — are very far away. We feel much smarter than a chicken or something like that, but relative to the systems we will someday build — the children of humanity — we're probably not very smart at all, in the same way that that chicken has a hard time thinking about what we're capable of.

[00:15:33] Sam Altman: It's probably very difficult to explain to the chicken the concept of leaving Earth and going to the moon. I think it's very hard for us to sit here and talk about what the systems we build will be capable of. But it is my genuine belief that, long after we've created incredible economic value and improved human lives, the system will someday become truly awake. And either we destroy humanity before we get there, or this will be the moment where humanity's biological evolution successfully boot-loads digital intelligence, and digital intelligence leaves Earth on von Neumann probes and colonizes the universe until the heat death at the end. I can't quite articulate why I should care about that so much, but it does make me much happier to think that the universe will continue to observe itself, rather than the light of consciousness going out.

[00:16:28] Vinod Khosla: So, the critical question in all of this is "when," and I'm going to get another slide — can we leave the slides up, please? Okay. So this is a chart, from 2016, of years versus the probability of high-level machine intelligence, showing experts' predictions. They asked 100 supposed experts. You can see there's no agreement.

[00:17:00] Sam Altman: Look, I think this stuff is always crap. I could make a reasoned argument; I could make a lot of cases against it. But I think it's a dumb debate — a very small-minded, short-term debate. If we can accept that there's a 75% chance of getting to this most important moment in human history in the next 100 years, that should be enough for a worldwide effort and focus on this. I believe it's much shorter, but whether you think it's 10 years or 20 years — there's so much energy that goes into debating that. And I think, if it's within a few decades, there's nothing more important in the world to work on.

[00:17:37] Vinod Khosla: Well, not only that, I'd add: it depends on how much computing power we put at the problem, and when certain breakthroughs happen that are not predictable.

[00:17:49] Sam Altman: Yeah. One way that we talk about it: let's say backpropagation is a 10 — the quality, the importance of that idea in deep learning, is a 10 — and let's say something like the Transformer is a seven. My guess is that we need, like, one more 10 or about 10 more sevens, algorithmically, and that might be it. We do need much more compute, which is why OpenAI has to raise so much money. But we know how to do it. I mean, there's no new physics, no miracles required there.

[00:18:20] Vinod Khosla: The flip side of AGI doing far more than humans can is all the stuff today's AI does in a silly way, and I'm going to put up some examples. This is the old picture that's used often in AI, about telling the difference between a muffin and a chihuahua.

[00:18:40] Sam Altman: You know, sometimes when I wake up in the middle of the night, I will look up at my ceiling in a sort of semi-awake, semi-asleep, edge-of-consciousness state, and I will see the lights and the fire sprinklers and stuff on my ceiling, and they look like human faces. So even humans, at a very—

[00:18:59] Vinod Khosla: Let me give you another example.

[00:19:02] Sam Altman: Like, we can get tricked if we're even a little off from our normal state. Vision is tough; optical illusions happen. People that are really tired, or on some medicine, or in some sort of altered state — people make mistakes like this, too. And honestly, if I look very quickly at that, I'm not sure I could tell you which is which; I have to take a second. It doesn't come in the first layer of the network, or the second. So it is true that you can trick AI systems, and people love to talk about that — but you can trick humans, too. And I think a lot of the work that we're doing now, as we make more progress with unsupervised learning — for the first time, I think we're actually having systems get to some semblance of conceptual understanding. And it is my hope that in the next few years we will have a system that never makes this kind of mistake as a visual classifier, and that, I think, will make AI feel truly closer to people.

[00:19:58] Vinod Khosla: So, back to this question of what today's AI does poorly. More interesting: what are the two or three things — the two or three technical breakthroughs, other than just more computing power — that would cause the switch from some level of stupidity in today's AI systems to something more robust, or that at least matches human standards?

[00:20:28] Sam Altman: So, one year ago, I would have said the biggest, most important piece in front of us that was missing was unsupervised learning. And now, with our GPT-2 result from earlier this year, I believe we have something pretty important figured out there. We have longer to go, but the fact that we can train these models — the same model can generate a story and then be state of the art in almost every text task without being specifically trained for it — it's the first time I felt like the machine is a little bit conscious. You know, these systems that you don't train to do translation, or even—

[00:21:06] Vinod Khosla: Tell them about GPT-2, to sum up. You should look at it — it's public, right?

[00:21:12] Sam Altman: It's public. We haven't shared the latest versions — we haven't shown the latest.

[00:21:17] Vinod Khosla: Trained on all of the body of text in Reddit?

[00:21:20] Sam Altman: Not all. No, no — three basis points of Reddit. So actually not even that much.

[00:21:26] Vinod Khosla: And the answers — when I looked at the answers, they sounded like Fox TV experts: the same language, the same phrases. It just blew my mind how I couldn't tell the experts from—

[00:21:41] Sam Altman: Yeah. And then the down—

[00:21:42] Vinod Khosla: —the talking heads you see on TV.

[00:21:43] Sam Altman: That downstream performance — using that same model to solve all the other language tasks it wasn't even trained for — was surprising. A big thing in front of us now is reasoning. How can we teach a system to take some data and keep thinking, so that the more it thinks, the better it does? How can we build a system that can prove unproven mathematical theorems? We're working a lot on that. We're also interested in how we can rerun evolution. So how can we build these very large simulations and have agents with long memories and a lot of autonomy that have to interact with each other and develop a sort of social intelligence? Actually, an interesting question: why did evolution endow humans with such big brains? They're an incredible waste of energy; they make us, in our very early months and years, easy prey for other animals; they require a huge parental investment; and there's this ongoing tax of 25% of all the food you eat just to run them. We don't need that to outrun a lion; we don't need it to run down an antelope. We have it to deal with other humans. And I think this idea — that you generate intelligence by interaction with other agents — is going to turn out to be quite important.
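[Editor's aside on the GPT-2 discussion above: the sketch below is not OpenAI's code, just a minimal illustration of the "one model, many tasks" point — prompting the publicly released GPT-2 checkpoint to do a task it was never fine-tuned for. It assumes the Hugging Face `transformers` library and its hosted "gpt2" weights as tooling; the "TL;DR:" cue is the zero-shot summarization trick described in the GPT-2 paper.]

```python
# Minimal sketch (not OpenAI's code): prompt the public GPT-2 checkpoint to
# perform a downstream task it was never fine-tuned for, using only the
# next-word prediction it learned during unsupervised pretraining.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "A large language model was trained purely to predict the next word on "
    "web text. Without any task-specific training, the same model can be "
    "prompted to continue stories, answer questions, or summarize passages."
)

# Appending "TL;DR:" nudges the model toward a summary-like continuation.
prompt = article + "\nTL;DR:"
output = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(output[0]["generated_text"][len(prompt):])
```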

[00:22:57] Sam Altman: So we're doing a lot of work—

[00:22:58] Vinod Khosla: —agents learning from other agents, and the class of networks called GANs? Is that the big name for that?

[00:23:07] Sam Altman: Yeah, we have some amazing results there from watching the agents in these simulations, because you get this continually escalating curriculum when you have to deal with the other agents in an environment. So that's cool.

[00:23:19] Vinod Khosla: Switching topics — and we are running out of time — China seems to be positioning for a global race in AI. Comment on that.

[00:23:37] Sam Altman: I mean, I have much too much self-confidence, but I think we're going to do pretty well.

[00:23:45] Vinod Khosla: On the flip side, one of the key tenets at OpenAI is safety. Talk about safety and the need for regulation.

[00:23:56] Sam Altman: Safety means a lot of things to us. There's accidental misuse, where the system just does something that's not what we meant it to do — that is awful. There's intentional misuse, where a bad guy uses it to, sort of, conquer the world. And there's policy failure, where it's not regulated and we end up doing something that most of the world doesn't want, or where most of the world doesn't get input. So when we say safe AGI, we really just mean beneficial AGI, where we sort of maximize human preference and happiness.

[00:24:32] Vinod Khosla: We are out of time, but thank you — thank you very much.

[00:24:35] Sam Altman: My pleasure.