
Jeff Hawkins: Thousand Brains Theory of Intelligence | Artificial Intelligence (AI) Podcast

Transcript

[00:00:00] Lex Fridman The following is a conversation with Jeff Hawkins. He's the founder of the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005. In his 2004 book titled On Intelligence, and in the research before and after, he and his team have worked to reverse-engineer the neocortex and propose artificial intelligence architectures, approaches, and ideas that are inspired by the human brain. These ideas include hierarchical temporal memory (HTM) from 2004, and the new work, the Thousand Brains Theory of Intelligence, from 2017, '18, and '19. Jeff's ideas have been an inspiration to many who have looked for progress beyond the current machine learning approaches, but they have also received criticism for lacking a body of empirical evidence supporting the models. This is always a challenge when seeking more than small, incremental steps forward in AI. Jeff is a brilliant mind, and many of the ideas he has developed and aggregated from neuroscience are worth understanding and thinking about. There are limits to deep learning as it is currently defined. Forward progress in AI is shrouded in mystery. My hope is that conversations like this can help provide an inspiring spark for new ideas.

[00:01:11] Lex Fridman This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Jeff Hawkins. Are you more interested in understanding the human brain, or in creating artificial systems that have many of the same qualities, but don't necessarily require that you actually understand the underpinning workings of our mind?

[00:01:41] Jeff Hawkins So there's a clear answer to that question. My primary interest is understanding the human brain, no question about it. But I also firmly believe that we will not be able to create fully intelligent machines until we understand how the human brain works. So I don't see those as separate problems. I think there are limits to what can be done with machine intelligence if you don't understand the principles by which the brain works. And so I actually believe that studying the brain is the fastest way to get to machine intelligence.

[00:02:11] Lex Fridman And within that, let me ask the impossible question: how do you, not define, but at least think about what it means to be intelligent?

[00:02:19] Jeff Hawkins So I didn't try to answer that question first. We said, let's just talk about how the brain works. Let's figure out how certain parts of the brain work, mostly the neocortex, but some other parts too, the parts of the brain most associated with intelligence. And let's discover the principles by which they work, because intelligence isn't just some mechanism, and it's not just some capability. It's like, okay, we don't even know where to begin on this stuff. And so now that we've made a lot of progress on how the neocortex works, and we can talk about that, I now have a very good idea of what's going to be required to make intelligent machines. I can tell you today, you know, some of the things that I believe are going to be necessary for creating intelligent machines.

[00:03:03] Lex Fridman We'll get there. We'll get to the neocortex and some of the theories of how the whole thing works. And you're saying that, as we understand more and more about the neocortex, about our own mind, we'll be able to start to more specifically define what it means to be intelligent. It's not useful to really talk about that until...

[00:03:21] Jeff Hawkins I don't know if it's not useful. Look, there's a long history of AI, as you know, and there have been different approaches taken to it. And who knows, maybe they're all useful, right? So, you know, good old-fashioned AI, the expert systems, current convolutional neural networks, they all have their utility. They all have a value in the world. But I would think almost everyone agrees that none of them are really intelligent in the deep way that humans are. And so it's just a question of how you get from where those systems were, or are today, to where a lot of people think we're going to go, and there's a big gap there, a huge gap. And I think the quickest way of bridging that gap is to figure out how the brain does that. And then we can sit back and look and say, oh, which of these principles that the brain works on are necessary, and which ones are not? We don't

[00:04:15] Lex Fridman have to build those. Intelligent machines aren't going to be built out of, you know, organic living cells. But there's a lot of stuff that goes on in the brain that's going to be necessary. So let me ask, before we get into the fun details, maybe a depressing or difficult question: do you think it's possible that we will never be able to understand how our brain works? That maybe there are aspects of the human mind, like we ourselves cannot introspectively get to the core of it, that there's a wall you eventually hit?

[00:04:48] Jeff Hawkins Yeah, I don't believe that's the case. I've never believed that's the case. There's not been a single thing humans have ever put their minds to where we've said, oh, we've reached a wall, we can't go any further. People keep saying that. People used to believe that about life, you know, élan vital, right? It was like, what's the difference between living matter and non-living matter? Something special you could never understand. We no longer think that. So there's no historical evidence to suggest this is the case, and I just never even consider that possibility. I would also say that today we understand so much about the neocortex, and we've made tremendous progress in the last few years, that I no longer think of it as an open question. The answers are very clear to me, and the pieces we know we don't know are clear to me. But the framework is all there, and it's like, okay, we're going to be able to do this. This is not a problem anymore. It just takes time and effort. But there's no big mystery anymore.

[00:05:44] Lex Fridman So then let's get into it. For people like myself, who are not very well versed in the human brain, except for my own, can you describe to me, at the highest level, what the different parts of the human brain are, and then, zooming in on the neocortex, the parts of the neocortex, and so on? A quick overview.

[00:06:05] Jeff Hawkins Yeah, sure. The human brain, we can divide it roughly into two parts. There are the old parts, lots of pieces, and then there's the new part, the neocortex. It's new because it didn't exist before mammals; only mammals have a neocortex, and in humans and primates it's very large. In the human brain, the neocortex occupies about 70 to 75% of the volume of the brain. It's huge. And the old parts of the brain are, well, there are lots of pieces there. There's the spinal cord, and there's the brain stem, and the cerebellum, and the different parts of the basal ganglia, and so on. In the old parts of the brain, you have autonomic regulation, like breathing and heart rate. You have basic behaviors, so things like walking and running are controlled by the old parts of the brain. All the emotional centers of the brain are in the old parts of the brain, so when you feel anger or hunger or lust, things like that, those are all in the old parts of the brain. And we associate with the neocortex all the things we think of as high-level perception and cognitive functions: anything from seeing and hearing and touching things, to language, to mathematics and engineering and science, and so on.

[00:07:16] Jeff Hawkins Those are all associated with the neocortex, and they're certainly correlated. Our abilities in those regards are correlated with the relative size of our neocortex compared to other mammals. So that's the rough division. And you obviously can't understand the neocortex completely isolated, but you can understand a lot of it with just a few interfaces to the old parts of the brain, and so it gives you a system to study. The other remarkable thing about the neocortex, compared to the old parts of the brain, is that the neocortex is extremely uniform. It's not visibly or anatomically very different from place to place. I always like to say it's like the size of a dinner napkin, about two and a half millimeters thick, and everywhere you look in that two and a half millimeters is this detailed architecture, and it looks remarkably the same everywhere. And that's across species: a mouse versus a cat and a dog and a human. Whereas if you look at the old parts of the brain, there are lots of little pieces doing specific things. So the old parts of the brain evolved like, this is the part that controls heart rate, and this is the part that controls this, and this is this kind of thing, and that's that kind of thing. And these evolved over eons, a long, long time, and they have their specific functions. And all of a sudden, mammals come along, and they get this thing called the neocortex, and it gets large by just replicating the same thing over and over and over again. It's like, wow, this is incredible.

[00:08:42] Jeff Hawkins So all the evidence we have, and this is an idea that was first articulated in a very cogent and beautiful argument by a guy named Vernon Mountcastle in 1978, is that the neocortex all works on the same principle. So language, hearing, touch, vision, engineering, all these things are basically built on the same computational substrate. They're really all the same problem.

[00:09:11] Lex Fridman So the low-level building blocks all look similar.

[00:09:14] Jeff Hawkins And they're not even that low-level. We're not talking about, like, neurons. We're talking about this very complex circuit that exists throughout the neocortex and is remarkably similar. It's like, yes, you see variations of it here and there, more of this cell type, less of that one, and so on. But what Mountcastle argued was, you know, if you take a section of neocortex, why is one a visual area and one an auditory area? And his answer was: it's because one is connected to eyes and one is connected to ears.

[00:09:45] Lex Fridman Literally? You mean just whichever is closest in terms of number of connections to the sensor?

[00:09:49] Jeff Hawkins Literally. If you took the optic nerve and attached it to a different part of the cortex, that part would become a visual region. This experiment was actually done by Mriganka Sur in developing, I think it was ferrets, I can't remember, it was some animal. And there's a lot of evidence for this. You know, if you take a blind person, a person who is born blind at birth, they are born with a visual neocortex. It may not get any input from the eyes because of some congenital defect or something, and that region becomes used for something else; it picks up another task. So, it just is this very complex thing. It's not like, oh, they're all built of neurons, no; they're all built of this very complex circuit, and somehow that circuit underlies everything. So this is what's called the common cortical algorithm, if you will. Some scientists just find it hard to believe, and they just can't believe that's true, but the evidence is overwhelming in this case. And so a large part of what it means to figure out how the brain creates intelligence, and what intelligence is in the brain, is to understand what that circuit does. If you can figure out what that circuit does, as amazing as it is, then you understand what all these other cognitive functions

[00:11:09] Lex Fridman are. If you were to, outside of your book On Intelligence, write a giant tome, a textbook on the neocortex, and you looked at it maybe a couple of centuries from now, how much of what we know now would still be accurate two centuries from now? How close are we in terms

[00:11:28] Jeff Hawkins of understanding? Well, I can only speak from my own particular experience here. So I run a small research lab here; it's like any other research lab where I'm the principal investigator. There are actually two of us, and there's a bunch of people, and this is what we do: we study the neocortex, and we publish our results, and so on. About three years ago, we had a real breakthrough in this field, a tremendous breakthrough, and we've now published, I think, three papers on it. And so I have a pretty good understanding of all the pieces and what we're missing. I would say that almost all the empirical data we've collected about the brain, which is enormous, if you don't know the neuroscience literature, it's just incredibly big, is for the most part all correct. It's facts and experimental results and measurements and all kinds of stuff. But none of that has been really assimilated into a theoretical framework. It's data without, in the language of Thomas Kuhn, the historian, it would be sort of a pre-paradigm science: lots of data but no way to fit it together. I think almost all of it is correct; there are just going to be some mistakes in there. And for the most part, there aren't really good cogent theories about how to put it all together. It's not like we have two or three competing good theories, and which ones are right and which ones are wrong; it's like, people are just scratching their heads, you know. Some people have given up on trying to figure out what the whole thing does. In fact, there are very, very few labs that do what we do, that focus really on theory, on all this unassimilated data, and trying to explain it. So it's not that we've got it wrong; it's just that we haven't got it at all.

[00:13:11] Lex Fridman So it's really, I would say, pretty early days in terms of understanding the fundamental theories of the way our mind

[00:13:20] Jeff Hawkins works. I don't think so. I would have said that's true five years ago. But as I said, we've had some really big breakthroughs on this recently, and we've started publishing papers on this. So, we'll get there. I'm an optimist, and from where I sit today, most people would disagree with this, but from where I sit, from what I know, it's not super early days anymore. You know, the way these things go is not a linear path, right? You don't just start accumulating and get better and better and better. No, you get all this stuff you've collected, and none of it makes sense, all these things are just sitting around, and then you get to a breaking point where all of a sudden, oh my God, now we've got it, right? That's how it goes in science, and I personally feel like we passed that big thing a couple of years ago. So we can talk about that.

[00:14:09] Jeff Hawkins Time will tell if I'm right, but I feel very confident about it. That's why I'm willing to say it on tape like this.

[00:14:15] Lex Fridman At least you're very optimistic. So, before those few years ago, let's take that back to HTM, the hierarchical temporal memory theory, which you first proposed in On Intelligence, and which went through a few different generations. Can you describe what it is, and how it evolved through the three generations since you first put it

[00:14:34] Jeff Hawkins on paper? Yeah, so one of the things that neuroscientists just sort of missed, for many, many years, and especially people who were thinking about theory, was the nature of time in the brain. Brains process information through time. The information coming into the brain is constantly changing. The patterns from my speech right now, if you're listening to it at normal speed, would be changing on your ears about every 10 milliseconds or so; you'd have this constant flow. When you look at the world, your eyes are moving constantly, three to five times a second, and the input's changing completely. If I were to touch something, like a coffee cup, as I move my fingers, the input changes. So this idea that the brain works on time-changing patterns was almost completely missing from a lot of the basic theories, like theories of vision. It was like, oh no, we're going to put this image in front of you and flash it and say, what is it? Convolutional neural networks work that way today, right? Classify this picture.

[00:15:34] Jeff Hawkins But that's not what vision is like. Vision is this sort of crazy, time-based pattern that's going all over the place, and so is touch, and so is hearing. So the first part of hierarchical temporal memory was the temporal part. It's to say, you won't understand the brain, nor will you understand intelligent machines, unless you're dealing with time-based patterns. The second thing was the memory component of it. It was to say that we aren't just processing input; we learn a model of the world, and the memory stands for that model. The point of the brain, the part of the neocortex, is that it learns a model of the world. We have to store things, our experiences, in a form that leads to a model of the world, so we can move around the world, we can pick things up and do things and navigate and know what's going on. So that's what the memory referred to. And many people were thinking about certain processes without memory at all, just processing things. And finally, the hierarchical component was a reflection of the fact that the neocortex, although it's just a uniform sheet of cells, has big parts that project to other parts, which project to other parts, and there is a sort of rough hierarchy in terms of them.

[00:16:43] Jeff Hawkins So hierarchical temporal memory is just saying, look, we should be thinking about the brain as time-based, memory-based, and hierarchical processing. And that was a placeholder for a bunch of components that we would then plug into that. We still believe all those things I just said, but we now know so much more that I'm stopping using the phrase hierarchical temporal memory, because it's insufficient to capture the stuff we know. So again, it's not incorrect, but I now know more, and I would rather describe it more accurately.
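To make the temporal idea concrete, here is a minimal, purely illustrative sketch of a predictor that learns time-based patterns, in the spirit of, but vastly simpler than, HTM's sequence memory. All names are hypothetical, and this is not Numenta's actual algorithm:

```python
from collections import defaultdict

class ToySequenceMemory:
    """Toy predictor for time-based patterns (far simpler than HTM).

    It counts observed transitions between consecutive inputs and
    predicts the most frequent successor of the current input.
    """

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def observe(self, pattern):
        # Learn from the stream: strengthen the (previous -> current) link.
        if self.previous is not None:
            self.transitions[self.previous][pattern] += 1
        self.previous = pattern

    def predict(self):
        # Predict the most frequently seen successor of the latest input.
        successors = self.transitions.get(self.previous)
        if not successors:
            return None
        return max(successors, key=successors.get)

memory = ToySequenceMemory()
for note in ["do", "re", "mi", "do", "re", "mi"]:  # a repeating "melody"
    memory.observe(note)
print(memory.predict())  # -> "do": after "mi" it expects the melody to restart
```

The real cortical mechanism learns high-order sequences over sparse distributed activity, but even this toy shows why a flashed, static image is the wrong framing: prediction only makes sense over a stream.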

[00:17:16] Lex Fridman Yes. So we can basically think of HTM as emphasizing that there are three aspects of intelligence that are important to think about, whatever the eventual theory converges to. So, in terms of time, how do you think of the nature of time across different time scales? You mentioned things changing, sensory inputs changing every 10 milliseconds or so. What about every few minutes, every few months?

[00:17:41] Jeff Hawkins Well, if you think about it as a neuroscience problem, a brain problem, neurons themselves can stay active for certain periods of time. In parts of the brain they're active for minutes; you can hold a certain perception or activity for a certain period of time, but most of them don't last that long. And so if you think about your thoughts as the activity of neurons, if you're going to want to invoke something that happened a long time ago, even just this morning, for example, the neurons haven't been active throughout that time, so you have to store that. If I ask you, what did you have for breakfast today, that is memory that you've built into your model of the world. You remember that, and that memory is in the synapses; it's basically in the formation of synapses. And so you're sliding into, you know, two different time scales. There are time scales at which we are, like, understanding my language and moving about and seeing things rapidly over time; that's the time scale of the activity of neurons. But if you want to get to longer time scales, then it's more memory, and we have to invoke those memories to say, oh yes, now I can remember what I had for breakfast, because I stored it someplace. I may forget it tomorrow, but I have it stored for now.

[00:18:58] Lex Fridman So memory also needs to have... so the hierarchical aspect of reality is not just about concepts, it's also about time? Do you think of it that way?

[00:19:09] Jeff Hawkins Yeah, time is infused in everything. It's like you really can't separate it out. If I ask you, how does the brain learn a model of this coffee cup here, I would say, well, time is not an inherent property of the model I have of this cup, whether it's a visual model or a tactile model. I can sense it through time, but the model itself doesn't really have much time in it. If I asked you, what is the model of my cell phone? My brain has learned a model of this cell phone. If you have a smartphone like this, well, it has time aspects to it. I have expectations: when I turn it on, what's going to happen, in what order, how long it's going to take to do certain things, what happens when I bring up an app, what sequences follow. So it's like melodies in the world, you know; a melody has a sense of time. So many things in the world move and act, and there's a sense of time related to them.

[00:20:03] Jeff Hawkins Some don't, but most things actually do. So it's sort of infused throughout the models of the world. You build a model of the world, you're learning the structure of the objects in the world, and you're also learning how those things change through time.

[00:20:20] Lex Fridman Okay, so it really is just the fourth dimension that's infused deeply, and you have to make sure that your models of intelligence incorporate it. So, like you mentioned, the state of neuroscience is deeply empirical, a lot of data collection; that's where it is, you mentioned Thomas Kuhn, right? And then you're proposing a theory of intelligence, which is really the next step, the really important step to take. But why is HTM, or what we'll talk about soon, the right theory? Is it more backed by intuition, backed by evidence, backed by a mixture of both? Is it kind of closer to where string theory is in physics, where there's a mathematical component which shows that, you know, this seems to fit together too well for it not to be true, which is where string theory is? Is that where

[00:21:29] Jeff Hawkins it is? It's a mixture of all those things, although definitely where we are right now is much more on the empirical side than, let's say, string theory. The way this goes about, we're theorists, right? So we look at all this data, and we're trying to come up with some sort of model that explains it, basically. And unlike string theory, there are vastly larger amounts of empirical data here than, I think, most physicists deal with. And so our challenge is to sort through that and figure out what kind of constructs would explain this. And when we have an idea, when you come up with a theory of some sort, you have lots of ways of testing it. First of all, there are 100 years of assimilated and unassimilated empirical data from neuroscience. So we go back and read papers, and we say, oh, did someone find this already? We can predict X, Y, and Z, and maybe no one's even talked about it since 1972 or something, but we go back and find it, and we say, either it supports the theory or it invalidates the theory. And okay, we have to start over again; oh no, it's supported, let's keep going with that one.

[00:22:38] Jeff Hawkins So the way I kind of view it, when we do our work, we look at all this empirical data, and it's what I call a set of constraints. We're not interested in something that's just biologically inspired; we're trying to figure out how the actual brain works. So every piece of empirical data is a constraint on a theory. If you have the correct theory, it needs to explain every one, right? So we have this huge number of constraints on the problem, which initially makes it very, very difficult. If you don't have many constraints, you can make up stuff all day: you know, here's an answer, I can do this, you can do that, you can do this. But if you consider all of biology as a set of constraints, all of neuroscience as constraints, even if you're working on one little part of the neocortex, there are hundreds and hundreds of constraints. These are empirical constraints, and it's very, very difficult initially to come up with a theoretical framework for them. But when you do, and it solves all those constraints at once, you have high confidence that you've got something close to correct. It's just mathematically almost impossible not to be. So that's the curse and the advantage of what we have. The curse is we have to meet all these constraints, which is really hard. But when you do meet them, then you have great confidence that you've discovered something. In addition, we work with scientific labs. So we'll say, oh, there's something we can't find; we can predict something, but we can't find it anywhere in the literature. Then the people we collaborate with will sometimes say, you know, I have some collected data which I didn't publish, but we can go back and look at it and see if we can find that, which is much easier than designing a new experiment. You know, new neuroscience experiments take a long time, years, although some people are doing that now, too. So between all of these things, I think it's actually a very, very good approach. We are blessed with the fact that we can test our theories out the yin-yang here, because there's so much unassimilated data, and we can also falsify our theories very easily, which we do often.

[00:24:41] Lex Fridman That's kind of reminiscent of the ah-ha moment with Copernicus. You know, when you figure out that the Sun is at the center of the solar system, as opposed to Earth, the pieces just fall into place.

[00:24:54] Jeff Hawkins Yeah, I think that's the general nature of ah-ha moments. With Copernicus, and you could say the same thing about Darwin, and you could say the same thing about the double helix, people had been working on a problem for so long, and had all this data, and they couldn't make sense of it. But when the answer comes to you, and everything falls into place, it's like, oh my gosh, that's it. That's got to be right. I asked both Jim Watson and Francis Crick about this. I asked them, you know, when you were working on trying to discover the structure of the double helix, and when you came up with the structure that ended up being correct, but it was sort of a guess, you know, it wasn't really verified yet, I said, did you know that it was right? And they both said, we absolutely knew it was right. And it doesn't matter whether other people believed it or not; we knew it was right. They'd get around to thinking about it and agreeing with it eventually, anyway.

[00:25:59] Jeff Hawkins And that's the kind of thing you hear a lot from scientists who are really studying a difficult problem. And I feel that way too, about our work.

[00:26:07] Lex Fridman Have you talked to Crick or Watson about the problem you're trying to solve, of finding the DNA of the brain?

[00:26:15] Jeff Hawkins Yeah. In fact, Francis Crick was very interested in this in the latter part of his life. In fact, I got interested in brains by reading an essay he wrote in 1979 called "Thinking About the Brain," and that is when I decided I was going to leave my profession of computers and engineering and become a neuroscientist, just from reading that one essay from Francis Crick. I got to meet him later in life. I spoke at the Salk Institute, and he was in the audience, and then I had tea with him afterwards. You know, he was interested in a different problem: he was focused on consciousness. The easy problem, right? Well, I think it's a red herring, and so we weren't really overlapping a lot there. Jim Watson, who is still alive, is also interested in this problem, and when he was director of the Cold Spring Harbor Laboratories, he was really sort of behind moving in the direction of neuroscience there, so he had a personal interest in this field. And I have met with him numerous times, and in fact, the last time was a little bit over a year ago. I gave a talk at Cold Spring Harbor Labs about the progress we were making in our work.

[00:27:34] Jeff Hawkins And it was a lot of fun, because he said, well, you wouldn't be coming here unless you had something important to say, so I'm going to go attend your talk. So he sat in the very front row, and next to him was the director of the lab, Bruce Stillman. So these guys were in the front row of this auditorium, right? Nobody else in the auditorium wanted to sit in the front row, because Jim Watson and the director were there. And I gave a talk, and I had dinner with him afterwards. But there's a great picture my colleague Subutai Ahmad took, where I'm up there sort of describing the basics of this new framework we have, and Jim Watson is on the edge of his chair. He's literally on the edge of his chair, like, intently staring up at the screen. And when he discovered the structure of DNA, the first public talk he gave was at Cold Spring Harbor Labs, and there's a famous picture of Jim Watson standing at the whiteboard, pointing at the double helix with a pointer, and it actually looks a lot like the picture of me. So it was funny: there I am talking about the brain, and there's Jim Watson staring intently at it, and of course, you know, whatever, 60 years earlier, he was standing there pointing at the double helix. And

[00:28:44] Lex Fridman it's one of the great discoveries in all of, you know, biology, all of science. So it's funny that there are echoes of that in your presentation. Do you think, in terms of the evolutionary timeline and history, the development of the neocortex was a big leap, or was it just a small step? So, like, if we ran the whole thing over again from the birth of life on Earth, how likely is it that we'd develop the mechanism of the neocortex?

[00:29:15] Jeff Hawkins Okay, well, those are two separate questions. One, was it a big leap, and one, how likely it is, okay? They're not necessarily related. Maybe correlated, maybe not; we don't really have enough data to make a judgment about that. I would say it definitely was a big leap, and I can tell you why: I don't think it was just another incremental step at that moment. I don't really have any idea how likely it is. If we look at evolution, we have one data point, which is Earth, right? Life formed on Earth billions of years ago. Whether it was introduced here or it was created here, we don't really know, but it was here early. It took a long, long time to get to multicellular life, and then from multicellular life it took a long, long time to get to the neocortex, and we've only had the neocortex for a few hundred thousand years, so that's like nothing, okay? So is it likely? Well, it certainly isn't something that happened right away on Earth.

[00:30:13] Jeff Hawkins And there were multiple steps to get there, so I would say it's probably not going to be something that happens instantaneously on other planets that might have life; it might take several billion years on average. Is it likely? I don't know, but you'd have to survive for several billion years to find out, probably. Was it a big leap? Yeah, I think it's a qualitative difference from all other evolutionary steps, and I can try to describe why, in which way. Let me start with a little preface. Many of the things that humans are able to do do not have obvious survival advantages preceding them. You know, we create music; is there really a survival advantage to that? Maybe, maybe not. What about mathematics? Is there a real survival advantage to mathematics? Yeah, if you stretch, you can try to figure these things out, right? But through most of evolutionary history, everything had immediate survival advantages. So I'll tell you a story, which I like, but which may not be true, and the story goes as follows.

[00:31:26] Jeff Hawkins Organisms have been evolving since the beginning of life here on Earth, adding this sort of complexity onto that sort of complexity, and the brain itself evolved this way. In fact, there's an old part, and an older part, and an older, older part of the brain, and it kind of just keeps adding on new things, and we keep adding capabilities. When we got to the neocortex, initially it had a very clear survival advantage, in that it produced better vision and better hearing and better touch, and so on. But what I think happened is that evolution discovered, and this is in our recent theories, it took a mechanism that evolved a long time ago for navigating in the world, for knowing where you are, these are the so-called grid cells and place cells of the old part of the brain, it took that mechanism for building maps of the world, for knowing where you are on those maps, and for knowing how to navigate those maps, and it turned it into a sort of slimmed-down, idealized version of it.

[00:32:29] Jeff Hawkins And that idealized version could apply to building maps of other things: maps of coffee cups, maps of phones, maps of concepts, yes, almost exactly. And it just started replicating this stuff, right? Just more, more, more. So we went from being sort of dedicated-purpose neural hardware that solves certain problems that are important to survival, to general-purpose neural hardware that can be applied to all problems. And now it's escaped the orbit of survival. We're now able to apply it to things in which we find enjoyment, you know, but which aren't really clearly survival characteristics, and that seems to have happened only in humans, to a large extent. And so that's what's going on: we've sort of escaped the gravity of evolutionary pressure, in some sense, in the neocortex. And now we do things which are just really interesting, like discovering models of the universe which may not really help us survive; it doesn't matter. How does knowing that there might be multiverses, or the age of the universe, or how various stellar things occur, help us survive? It doesn't help us survive at all. But we enjoy it, and that's what happened.

[00:33:50] Lex Fridman Or at least not in the obvious way. Perhaps it is required, if you look at the entire universe in an evolutionary way: it's required for us to do interplanetary travel, and therefore survive past our own Sun. But, you know, let's not get into that.

[00:34:04] Jeff Hawkins Well, evolution works at one time frame, and it's survival, if you think of survival of the phenotype, survival of the individual. What you're talking about there spans well beyond that. So there's no genetic, I'm not transferring any genetic traits to my children that are going to help them survive better on Mars,

[00:34:25] Lex Fridman right? It's a totally different mechanism. So let's get into the new ideas you've mentioned, this idea, I don't know if you have a nice name for it. The Thousand... call it the Thousand Brains Theory of Intelligence? I like it. So can you talk about this idea of a spatial view of concepts and so on?

[00:34:44] Jeff Hawkins Yeah. So, can I just describe it? There's an underlying core discovery, and then everything comes from that. It's very simple; this is really what happened. We were deep into problems about understanding how we build models of stuff in the world and how we make predictions about things. And I was holding a coffee cup just like this in my hand, and my index finger was touching the side, and I moved it to the top, and I was going to feel the rim at the top of the cup. And I asked myself a very simple question. I said, well, first of all, I know that my brain predicts what it's going to feel before it touches it. You can just think about it and imagine it. So we know that the brain's making predictions all the time. So the question is, what does it take to predict that? And there's a very interesting answer. First of all, it says the brain has to know it's touching a coffee cup; it has to have a model of a coffee cup. And it needs to know where my finger currently is on the cup, relative to the cup, because when I make a movement, it needs to know where the finger is going to be on the cup after the movement is completed, relative to the cup.

[00:35:50] Jeff Hawkins And then it can make a prediction about what it's going to sense. So this told me that the neocortex, which is making this prediction, needs to know that it's touching a cup, and it needs to know the location of my finger relative to that cup, in a reference frame of the cup. It doesn't matter where the cup is relative to my body, it doesn't matter what its orientation is; none of that matters. It's where my finger is relative to the cup. Which tells me that the neocortex has a reference frame that's anchored to the cup, because otherwise I wouldn't be able to say the location, and I wouldn't be able to predict my new location. And then very quickly, almost instantly, you can say, well, every part of my skin could touch this cup, and therefore every part of my skin is making predictions, and every part of the skin must have a reference frame that it's using to make predictions. So the big idea is that, throughout the neocortex, everything is being stored and referenced in reference frames. You can think of them like XYZ reference frames, but they're not like that; we know a lot about the neural mechanisms for this. But the brain thinks in reference frames. And if you're an engineer, this is not surprising. You'd say, if I wanted to build a CAD model of a coffee cup, well, I would bring it up in some CAD software, and I would assign some reference frame, and say this feature sits at this location, and so on. But the idea that this is occurring throughout the neocortex, everywhere,

[00:37:12] Jeff Hawkins was a novel idea, and then a zillion things fell into place after that. So now we think about the neocortex as processing information quite differently than we used to. We used to think about the neocortex as processing sensory data and extracting features from that sensory data, and then extracting features from the features, very much like a deep learning network does today. But that's not how the brain works at all. The brain works by assigning everything, every input, to reference frames, and there are thousands, hundreds of thousands, of them active at once in your neocortex. It's a surprising thing to think about, but once you sort of internalize it, you understand that it explains almost all the mysteries we've had about this structure. So one of the consequences is that every small part of the neocortex, say a millimeter square, and there are about 150,000 of those, if you take every little square millimeter of the cortex, it's got some input coming into it, and it's going to have reference frames which it assigns that input to, and each square millimeter can learn complete models of objects. So what do I mean by that? If I'm touching the coffee cup in just one place, I can't learn what this coffee cup is, because I'm just feeling one part. But if I move around the cup, and touch it in different areas, I can build up a complete model of the cup, because I'm now filling in that three-dimensional map which is the coffee cup: I can tell, what am I feeling at all these different locations? That's the basic idea; it's more complicated than that.
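One way to picture what a single column might be doing, as a minimal sketch rather than the actual neural mechanism, is an object model stored as (location, feature) pairs in the object's own reference frame, learned by moving a sensor over the object. Everything here, the class, the location tuples, the feature strings, is hypothetical and only illustrates the bookkeeping:

```python
class ToyColumn:
    """Toy model of one column: objects as (location, feature) maps."""

    def __init__(self):
        # object name -> {location in object's reference frame: sensed feature}
        self.models = {}

    def learn(self, obj, location, feature):
        # Store what was sensed at this location on the object.
        self.models.setdefault(obj, {})[location] = feature

    def candidates(self, location, feature):
        # Which known objects are consistent with sensing this feature here?
        return {obj for obj, model in self.models.items()
                if model.get(location) == feature}

column = ToyColumn()
column.learn("coffee cup", ("side", "middle"), "smooth ceramic")
column.learn("coffee cup", ("top", "edge"), "rounded rim")
column.learn("soda can", ("top", "edge"), "sharp metal rim")

# One touch already narrows the candidates to whatever matches here:
print(column.candidates(("top", "edge"), "rounded rim"))  # {'coffee cup'}
```

Sensing one (location, feature) pair narrows the candidates; moving and sensing again narrows them further, which is the "move the finger around the cup" story in miniature.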

[00:38:42] Jeff Hawkins But through time, and we talked about time earlier, through time, even a single column, a single part of the cortex which is only looking at a small part of the world, can build up a complete model of an object. Then, if you think about the part of the brain which is getting input from all my fingers, they're spread across the top of the head here, this is the somatosensory cortex, there are columns associated with all the different areas of my skin. And what we believe is happening is that all of them are building models of this cup, every one of them. Or models of things in general: not every column, not every part of the cortex, builds models of everything, but they're all building models of something. And so, when I touch this cup with my hand, there are multiple models of the cup being invoked. If I look at it with my eyes, there are again many models of the cup being invoked, because each part of the visual system, well, the brain doesn't process an image; that's a misleading idea.

[00:39:38] Jeff Hawkins It's just like your fingers touching the cup: different parts of my retina are looking at different parts of the cup right now, and thousands and thousands of models of the cup are being invoked at once, and they're all voting with each other, trying to figure out what's going on. That's why we call it the Thousand Brains Theory of Intelligence: because there isn't one model of the cup, there are thousands of models of this cup. There are thousands of models of your cell phone, and of cameras and microphones, and so on. It's a distributed modeling system, which is very different from what people have thought about

[00:40:04] Lex Fridman it. And so that's a really compelling and interesting idea. I have two first questions. One, on the ensemble part of everything coming together: you have these thousand brains; how do you know which one has done the best job of forming the

[00:40:18] Jeff Hawkins model? Great question. Let me explain. There's a problem that's known in neuroscience called the sensor fusion problem. And the idea is something like: oh, the image comes from the eye, there's a picture on the retina, and it gets projected to the neocortex, and by now it's all spread out all over the place, and it's kind of squirrelly and distorted, and pieces are all over, you know, it doesn't look like a picture anymore. When does it all come back together again? Right? Or you might say, well, yes, but I also have sounds or touches associated with the cup, so I'm seeing the cup and touching the cup; how do they get combined together? So this is called the sensor fusion problem, as if all these disparate parts have to be brought together into one model someplace. That's the wrong idea. The right idea is that you've got all these guys voting. There are auditory models of the cup, visual models of the cup, tactile models of the cup.

[00:41:09] Jeff Hawkins Within the visual system, there might be ones that are more focused on black and white, and ones focused on color; it doesn't really matter. There are just thousands and thousands of models of this cup, and they vote. They don't actually come together in one spot. Literally, think of it this way: imagine you have these columns, which are about the size of a little piece of spaghetti, okay? Like two and a half millimeters tall and about a millimeter wide. They're not physically distinct like that, but you can think of them that way. And each one's trying to guess what this thing is that it's touching. Now, they can do a pretty good job if they're allowed to move over time. I could reach my hand into a black box and move my finger around an object, and if I touch enough places, it's like, okay, now I know what it is. But often we don't do that. Often I can just reach and grab something with my hand all at once, and I get it. Or, if I had to look at the world through a straw, so I'm only invoking one little column, I could only see a part of something, and I'd have to move the straw around. But if I open my eyes, I see the whole thing at once. So what we think is going on is that all these little pieces of spaghetti, if you will, all these little columns in the cortex, are all trying to guess what it is that they're sensing.

[00:42:08] Jeff Hawkins They'll make a better guess if they have time and can move over time; so, if I move my eyes or move my fingers. But if they don't, they have a poor guess; it's a probabilistic guess of what they might be touching. Now, imagine they can each post their probability at the top of their little piece of spaghetti. Each one of them says, I think... and it's not really a probability distribution. It's more like a set of possibilities; in the brain, it doesn't work as a probability distribution, it works more like what we call a union. One column says, I think it could be a coffee cup, a soda can, or a water bottle. Another column says, I think it could be a coffee cup, or, you know, a telephone, or a camera, right? And all these guys are saying what they think it might be. And there are these long-range connections in certain layers in the cortex, where some cell types in each column send projections across the brain, and that's where the voting occurs. And so there's a simple associative memory mechanism, we've described this in a recent paper and we've modeled it, that says they can all quickly settle on the one best answer for all of them,

[00:43:14] Jeff Hawkins if there is a single best answer. They all vote and say, yeah, it's got to be the coffee cup. And at that point, they all know it's a coffee cup, and everyone acts as if it's the coffee cup. We know it's a coffee cup, even though I've only seen one little piece of this world: I know it's a coffee cup I'm touching, or I'm seeing, whatever. And so you can think of it as all these columns looking at different parts, in different places, with different sensory input, at different locations; they're all different. But this layer that's doing the voting solidifies; it just crystallizes and says, oh, we all know what we're looking at. And so you don't bring these models together into one model; you just vote, and there's a crystallization of the vote.
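A minimal way to picture that voting, as an illustration rather than the associative-memory mechanism described in the paper: each column proposes a union (a set) of objects consistent with its own input, and the long-range connections let everything settle on whatever survives every column's union. The object names and column guesses below are, of course, made up:

```python
def vote(column_guesses):
    """Settle on the object(s) consistent with every column's union.

    column_guesses: one set per column, holding the objects that
    column considers possible given its current input.
    """
    possibilities = set(column_guesses[0])
    for guesses in column_guesses[1:]:
        possibilities &= guesses  # keep only mutually consistent objects
    return possibilities

columns = [
    {"coffee cup", "soda can", "water bottle"},  # one finger's guess
    {"coffee cup", "telephone", "camera"},       # another finger's guess
    {"coffee cup", "soda can"},                  # a patch of retina's guess
]
print(vote(columns))  # {'coffee cup'}: the vote "crystallizes"
```

Each column keeps its own model; nothing is fused into a master copy. The intersection is what settling on one answer looks like in this toy.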

[00:43:49] Lex Fridman Great. That's at least a compelling way to think about the way you form a model of the world. Now, you've talked about a coffee cup, but as far as I understand, you're proposing that this extends to much more than coffee cups, or at least the physical world. It extends to the world of concepts.

[00:44:14] Jeff Hawkins Yeah, it does. And, well, first, the prima facie evidence for that is that the regions of the neocortex that are associated with language, or high-level thought, or mathematics, or things like that, look like the regions of the neocortex that process vision, hearing, and touch. They don't look any different, or they look only marginally different. And so one would say, well, if Vernon Mountcastle, who proposed that all the parts of the neocortex do the same thing, is right, then the parts that are doing language or mathematics or physics are working on the same principle: they must be working on the principle of reference frames. So that's a little odd thought. But of course, we had no prior idea how these things happened, so let's go with that. And in our recent paper, we talked a little bit about that. I've been working on it more since, and I have better ideas about it now. Sitting here, I'm very confident that that's what's happening, and I can give you some examples to help you think about it. It's not that we understand it completely, but I understand it better than I've described it in any paper so far. So we did put that idea out there and said, okay, this is

[00:45:20] Jeff Hawkins a good place to start, you know; the evidence would suggest this is how it's happening. And then we can start tackling that problem one piece at a time: what does it mean to do high-level thought? What does it mean to do language? How would that fit into a reference-frame framework?

[00:45:34] Lex Fridman Yes. So, I wonder if you could tell me if there's a connection here, but there's an app called Anki that helps you remember different concepts, and people talk about a memory palace, which helps you remember completely random concepts by putting them in a physical space in your mind, next to each other. The method of loci. For some reason, that seems to work really well. Now, that's a very narrow kind of application, just remembering facts, but...

[00:46:00] Jeff Hawkins No, but that's a very, very telling one, okay?

[00:46:03] Lex Fridman Yeah, exactly. So it seems like you're describing the mechanism of why this seems to

[00:46:08] Jeff Hawkins work. So, basically, what we think is going on is that all the things you know, all concepts, all ideas, words, everything you know, are stored in reference frames. And so if you want to remember something, you have to basically navigate through a reference frame, the same way a rat navigates through a maze, and the same way my finger navigates to this coffee cup: you are moving through some space. And so if you have a random list of things you were asked to remember, by assigning them to a reference frame you already know very well, say your house, right, the idea of the method of loci is, okay, in my lobby I'm going to put this thing, and then in the bedroom I put this one, and I go down the hall and put this thing. And then, when you want to recall those facts, or recall those things, you just walk mentally through your house. You're mentally moving through a reference frame that you already had. And that tells you two things that are really important: it tells us the brain prefers to store things in reference frames, and that the method of recalling things, or thinking, if you will, is to move mentally through those reference frames. You could move physically through some reference frames, like I could physically move through the reference frame of this coffee cup. I can also mentally move through the reference frame of the coffee cup, imagining I'm touching it. But I can also mentally move through my house. And so now you can ask yourself, are all concepts stored this way? There was some recent research using human subjects in fMRI, and I'll apologize for not knowing the names of the scientists who did this,

[00:47:36] Jeff Hawkins but what they did is they put humans in an fMRI machine, one of these imaging machines, and they gave the humans tasks to think about birds. So they had different types of birds: big and small, long necks, long legs, things like that. And what they could tell from the fMRI, it was a very clever experiment, is that when the humans were thinking about the birds, the knowledge of birds was arranged in a reference frame similar to the ones you use when you navigate in a room. These are called grid cells, and there were grid-cell-like patterns of activity in the neocortex when they did this. So it's a very clever experiment, you know, and what it basically says is that even when you're thinking about something abstract, and you're not really thinking about it as a reference frame, the brain is actually using a reference frame, and it's using the same neural mechanisms. These grid cells, which exist in the old part of the brain, the entorhinal cortex, that mechanism, or a similar mechanism, is used throughout the neocortex; nature preserved this interesting way of creating reference frames. And so now they have empirical evidence that when you think about concepts like birds, you're using reference frames that are built on grid cells. So that's similar to the method of loci, but in this case the birds are related, so they create their own reference frame, which is consistent with bird space.
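The method of loci itself is easy to caricature in code: store arbitrary items at locations in a reference frame you already know well, then recall them by walking a fixed path through that frame. The rooms and items below are invented for illustration:

```python
# A toy "memory palace": assign items to locations in a familiar
# reference frame, then recall by navigating the frame in order.

house_path = ["lobby", "hallway", "bedroom", "kitchen"]   # frame you know well
items = ["passport", "7 pm meeting", "birthday gift", "milk"]

palace = dict(zip(house_path, items))  # each item gets a location

# Recall is navigation: mentally walk the path and read off what's there.
for room in house_path:
    print(f"{room}: {palace[room]}")
```

The point of the analogy is that recall here is a movement through the frame, not a lookup by content.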

[00:49:01] Jeff Hawkins And when you think about something, you move through that space. You can make the same example with mathematics, right? Let's say you want to prove a conjecture. What is a conjecture? A conjecture is a statement you believe to be true but haven't proven. It might be an equation: I want to show that this is equal to that. And you have some places you start with. You say, well, I know this is true, and I know this is true, and I think that maybe, to get to the final proof, I need to go through some intermediate results. What I believe is happening is that, literally, these equations, or these points, are assigned to a reference frame, a mathematical reference frame. And when you do mathematical operations, a simple one might be multiply or divide, but it might be some other kind of transform, those are like movements in the reference frame of the math. And so you're literally trying to discover a path from one location to another location in a space of mathematics. And if you can get to these intermediate results, then you know your map is pretty good, and you know you're using the right operations.

[00:50:02] Jeff Hawkins Much of what we think of as solving hard problems is designing the correct reference frame for that problem: figuring out how to organize the information, and what behaviors I want to use in that space to get me there.
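One loose way to render that analogy in code, purely as an illustration of "statements as locations, operations as movements": treat each statement as a node, each allowed operation as an edge, and search for a path from a starting fact to the conjecture. The statements and operations below are trivially made up:

```python
from collections import deque

# Hypothetical illustration: statements are "locations", operations are
# "movements" between them, and a proof is a path from a known fact
# to the conjecture.
moves = {
    "x = 2":       [("square both sides", "x^2 = 4")],
    "x^2 = 4":     [("add 1 to both sides", "x^2 + 1 = 5")],
    "x^2 + 1 = 5": [],
}

def find_proof(start, goal):
    """Breadth-first search for a sequence of operations reaching the goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        statement, path = queue.popleft()
        if statement == goal:
            return path
        for operation, result in moves.get(statement, []):
            if result not in seen:
                seen.add(result)
                queue.append((result, path + [operation]))
    return None

print(find_proof("x = 2", "x^2 + 1 = 5"))
# -> ['square both sides', 'add 1 to both sides']
```

Choosing which moves exist at all is the "designing the correct reference frame" part: organize the space badly and no short path exists.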

[00:50:16] Lex Fridman Yes. So if you dig in on the idea of this reference frame, whether it's the math one, where you start with a set of axioms to try to get to proving the conjecture, can you try to describe, maybe taking a step back, how you think of the reference frame in that context? Is it the reference frame that the axioms live in? Is it a reference frame that might contain everything? Is it a changing thing?

[00:50:40] Jeff Hawkins You have many, many reference frames. I mean, the way the theory, the Thousand Brains Theory of Intelligence, says it is that every single thing in the world has its own reference frame. So every word has its own reference frame. And we can talk about this: the mathematics works out; it's no problem for neurons to do this.

[00:50:55] Lex Fridman But how many reference frames does the coffee cup have? Well, it's on a table...

[00:51:00] Jeff Hawkins Ah, you could ask, how many reference frames could a column in my finger that's touching the coffee cup have? Because there are many, many models; there are many, many models of coffee cups. So there is no one model of the coffee cup; there are many models of a coffee cup. And you could say, well, how many different things can my finger learn? That's the question you want to ask. Imagine that every concept, every idea, everything you've ever known, anything you can say "I know that thing" about, has a reference frame associated with it. And what we do when we build composite objects is assign reference frames to point to other reference frames. So my coffee cup has multiple components: it's got a rim, it's got a cylinder, it's got a handle, and those things have their own reference frames, and they're assigned to a master reference frame, which is this cup. And now I have this Numenta logo on it. Well, that's something that exists elsewhere in the world; it's its own thing, so it has its own reference frame. So we have to ask, how can I assign the Numenta logo's reference frame onto the cylinder, or onto the coffee cup? We talked about this in the paper that came out in December of last year,

[00:52:06] Jeff Hawkins The idea of how you can assign reference frames to reference frames, and how neurons could do this.

[00:52:10] Lex Fridman So my question is, even though you mention reference frames a lot, I almost feel it's really useful to dig into how you think of what a reference frame is. It was already helpful for me to understand that you think of reference frames as something there is a lot of. Okay...

[00:52:26] Jeff Hawkins So let's just say we're going to have some neurons in the brain, not many actually, 30,000, and they're going to create a whole bunch of reference frames. What does that mean? What is a reference frame? First of all, these reference frames are different from the ones you might be used to. You know lots of reference frames. For example, we know Cartesian coordinates, x, y, z; that's a type of reference frame. We know longitude and latitude; that's a different type of reference frame. If I look at a printed map, it might have columns A through M and rows one through 20; that's a different type of reference frame, a kind of Cartesian coordinate reference frame. The interesting thing about the reference frames in the brain, and we know this because they have been established through neuroscience by studying the entorhinal cortex, so I'm not speculating here, this is known neuroscience in an old part of the brain, is that the way these cells create reference frames, they have no origin. It's more like this: you have a point, a point in some space, and you give it a particular movement; you can then tell what the next point should be, and you can then tell what the next point would be after that, and so on.

[00:53:35] Jeff Hawkins You can use this to calculate how to get from one point to another. So how do I get from one place in my house to another, or how do I get my finger from the side of my cup to the top of the cup?
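A minimal way to picture an origin-free reference frame like this is path integration: a location is just a state, and the only defined operation is applying a movement to it. The sketch below is illustrative only; the class name and the two-dimensional vector encoding are my assumptions, not Numenta's model.

```python
import numpy as np

class RelativeFrame:
    """An origin-free reference frame: locations are states, and the only
    defined operation is 'apply a movement to a location'."""

    def __init__(self, start=(0.0, 0.0)):
        # The starting point is arbitrary; only displacements are meaningful.
        self.location = np.asarray(start, dtype=float)

    def move(self, displacement):
        """Predict the next location from the current one plus a movement."""
        self.location = self.location + np.asarray(displacement, dtype=float)
        return self.location

    def displacement_to(self, other_location):
        """The movement that would carry us from here to another location."""
        return np.asarray(other_location, dtype=float) - self.location

# A finger moving over an (imagined) cup: up the side, then toward the handle.
finger = RelativeFrame()
finger.move([0.0, 4.0])                    # move 4 units up the side
print(finger.displacement_to([2.0, 4.0]))  # movement needed to reach the handle
```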

[00:53:46] Lex Fridman How do we get...

[00:53:47] Jeff Hawkins ...from the axioms to the conjecture? Exactly. So it's a different type of reference frame, and if you want, I can describe it in more detail; I can paint a picture of how you might want to think about it.

[00:53:59] Lex Fridman That would be really helpful. I think of it as something you can move through. Yeah. But is it helpful to think of it as spatial in some sense, or is there something...

[00:54:09] Jeff Hawkins It's spatial in a mathematical...

[00:54:13] Lex Fridman ...sense. How many dimensions? Can it be a crazy number of them?

[00:54:16] Jeff Hawkins Well, that's an interesting question. In the old part of the brain, the entorhinal cortex, they studied rats, and initially it looked like, oh, this is just two-dimensional. The rat is in some box or a maze or whatever, and they know where the rat is using these two-dimensional reference frames that tell where it is in the maze. Okay, but what about bats? That's a mammal, and they fly in three-dimensional space. How do they do that? They seem to know where they are, right? So this is a current area of active research, and it seems like somehow the neurons in the entorhinal cortex can learn three-dimensional space. Two members of our team, along with Ila Fiete from MIT, just released a paper, literally last week, it's on bioRxiv, where they show that, if you know the way these things work, and I won't get into the details unless you want them, grid cells can represent any n-dimensional space. It's not inherently limited. You can think of it this way: the way it works is you have a bunch of two-dimensional slices. There's a whole bunch of two-dimensional models, and you can slice up any n-dimensional space with two-dimensional projections. And you could have one-dimensional models too. So there's nothing inherent about the mathematics, about the way the neurons do this, which constrains the dimensionality of the space, which I think is important.
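The "slicing an n-dimensional space with two-dimensional projections" idea can be sketched in a few lines. This is a loose illustration, not the encoding from the bioRxiv paper he mentions; the random projections, the module count, and the lattice scales are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "module" watches a random 2-D projection of the n-D space and
# encodes position as a phase modulo its own lattice scale.
N_DIM, N_MODULES = 5, 8
projections = [rng.standard_normal((2, N_DIM)) for _ in range(N_MODULES)]
scales = rng.uniform(1.0, 3.0, size=N_MODULES)

def encode(point):
    """Population code: one 2-D phase per module for an n-D point."""
    point = np.asarray(point, dtype=float)
    return [(P @ point) % s for P, s in zip(projections, scales)]

a = encode([0.3, 1.2, -0.7, 2.0, 0.1])
b = encode([0.3, 1.2, -0.7, 2.0, 0.1])
# The same n-D location yields identical phases in every module.
print(all(np.allclose(x, y) for x, y in zip(a, b)))   # True
```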

[00:55:41] Jeff Hawkins So obviously I have a three-dimensional model of this cup, maybe even more than that, I don't know, but it's clearly a three-dimensional model of the cup; I don't just have a projection of the cup. But when I think about birds, or when I think about mathematics, perhaps it's more than three dimensions.

[00:55:56] Lex Fridman So in terms of each individual column building up more and more information over time: do you think that mechanism is well understood in your mind? You've proposed a lot of architectures there. Is that a key piece, or is the big piece the Thousand Brains Theory of Intelligence, the ensemble of it all?

[00:56:17] Jeff Hawkins Well, I think they're both big. I mean, clearly, as a theorist, the concept is the most exciting part, right? It's a high-level concept, a totally new way of thinking about how the neocortex works. So that is appealing and has all these ramifications, and with that as a framework for how the brain works, you can make all kinds of predictions and solve all kinds of problems. Now we're trying to work through many of those details. Okay, how do the neurons actually do this? Well, it turns out, if you think about grid cells and place cells in the old part of the brain, there's a lot that's known about them, but there are still some mysteries; there's a lot of debate about exactly how these work, in detail. And we're at that same level of detail, the same level of concern, in the neocortex. What we spend most of our time doing here is trying to make a very good list of the things we don't understand yet.

[00:57:02] Jeff Hawkins That's the key part here: what are the constraints? It's not like, oh, this seems to work, we're done. No, it's like, okay, it kind of works, but these are the other things we know it has to do, and it's not doing those yet. I would say we're well on the way here, but we're not done yet. There's a lot of trickiness to the system, but the basic principles about how different layers in the neocortex are doing much of this, we understand. But there are some fundamental...

[00:57:28] Lex Fridman ...parts that we don't understand as well. So what would you say is one of the harder open problems, one of the ones that has been bothering you, keeping you up at night the most?

[00:57:38] Jeff Hawkins Well, right now, this is a detailed thing that wouldn't apply to most people, okay? But you want it? Yeah, please. We've talked about how, to predict what you're going to sense on this coffee cup, I need to know where my finger is going to be on the coffee cup. That is true, but it's insufficient. Think about when my finger touches the edge of the coffee cup. My finger can touch it at different orientations; I can rotate my finger around here, and that doesn't change; I can still make that prediction somehow. So it's not just the location; there's an orientation component to this as well. This is known in the old part of the brain too; there are these things called head direction cells, which represent which way the rat is facing. It's the same kind of basic idea. With my finger, in three dimensions, I have a three-dimensional orientation and a three-dimensional location. If I were a rat, I would have, think of it as, a two-dimensional location and a one-dimensional orientation.

[00:58:33] Jeff Hawkins Which way is it facing? So how the two components work together, how it is that I combine the orientation of my sensor with the location, is a tricky problem, and I think I've made progress on it.

[00:58:52] Lex Fridman So there's a bigger version of that. That perspective is super interesting but super specific. There's a more general version of it: do you think context matters? The fact that we're in a building in North America, that we're in a day and age where we have mugs. I mean, there's all this extra information that you bring to the table about everything else in the room that's outside of just the coffee cup. How does it get connected?

[00:59:25] Jeff Hawkins Yeah, and that is another really interesting question. I'm going to throw that under the rubric, or the name, of attentional problems. First of all, we have this model... I have many, many models.

[00:59:37] Lex Fridman And also, does the context even matter? Because...

[00:59:40] Jeff Hawkins It matters for certain things, of course it does. Maybe what we think of as a coffee cup, in another part of the world, is viewed as something different. Maybe our logo, which is very benign in this part of the world, means something very different in another part of the world. So those things do matter. I think the way to think about it is the following. We have all these models of the world, okay, and we've modeled everything. And as I said earlier, I snuck it in: our models are actually composite structures we build. Every object is composed of other objects, which are composed of other objects, and they become members of other objects. So this room is chairs and a table and walls and so on. Now, arrange those things a certain way and you go, that's the Numenta conference room. And what we do when we go around the world, when we experience the world, is this: I walk into a room, for example, and the first thing I do is say, oh, I'm in this room; do I recognize the room? Then I can say, oh, look, there's a table here. By attending to the table, I'm then assigning the table a position in the context of the room. Then: on the table, there's a coffee cup. Oh, on the cup, there's a logo; and in the logo, there's the word "Numenta"; and in the logo, there's a letter E; and look, it has an unusual serif. It doesn't actually, but pretend it does.

[01:00:59] Jeff Hawkins So the point is, your attention is drilling deep into and out of these nested structures, and I can pop back up and pop back down, pop back up and pop back down. When I attend to the coffee cup, I haven't lost the context of everything else; it's a nested structure.

[01:01:18] Lex Fridman So the attention filters the reference frame information for that particular period...

[01:01:24] Jeff Hawkins ...of time. It basically happens moment to moment. You attend to a subcomponent, and then you can attend to a subcomponent of that subcomponent.

[01:01:30] Lex Fridman And you can move up and down, up...

[01:01:31] Jeff Hawkins ...and down. We do that all the time. Now that I'm aware of it, I'm very conscious of it. But most people don't even think about this. You just walk into a room; you don't say, oh, I looked at the chair, and I looked at the board, and I looked at that word on the board, and I looked over here; what's going on, right?

[01:01:47] Lex Fridman What percent of your day are you deeply aware of this, and in what part can you actually relax and just be Jeff?

[01:01:52] Jeff Hawkins You mean in my personal day? Yeah, unfortunately, I'm afflicted with too much of the former. Fortunately, or unfortunately...

[01:02:03] Lex Fridman You don't think it's useful?

[01:02:04] Jeff Hawkins It is useful, totally useful. I think about this stuff almost all the time. And one of my primary ways of thinking is when I'm asleep at night. I always wake up in the middle of the night, and then I stay awake for at least an hour with my eyes shut, in a sort of half-sleep state, thinking about these things. I come up with answers to problems very often in that half-sleeping state. I think about it on my bike rides, I think about it on walks, I'm just constantly thinking about it. I have to almost schedule time to not think about this stuff, because it's very mentally taxing.

[01:02:37] Lex Fridman When you think about this stuff, do you think introspectively, almost taking a step outside of yourself and trying to figure out, what is your mind doing?

[01:02:45] Jeff Hawkins I do that all the time, but that's not all I do. I'm constantly observing myself. As soon as I started thinking about grid cells, for example, and getting into that, I started saying, oh, well, grid cells place a sense of where you are in the world; that's how you know where you are. And it's interesting: we always have a sense of where we are, unless we're lost. So at night, when I got up to go to the bathroom, I would try to walk the whole way with my eyes closed, and I would test my sense of self-location. I would walk five feet and say, okay, I think I'm here; am I really there? What's my error? And then I'd correct and see how the errors accumulate. So even in something as simple as getting up in the night to go to the bathroom, I'm testing these theories out. It's kind of fun, and the coffee cup is an example of that too. So I find that this sort of everyday introspection is actually quite helpful.

[01:03:32] Jeff Hawkins It doesn't mean you can ignore the science. I spend hours every day reading ridiculously complex papers. That's not nearly as much fun, but you have to build up those constraints, that knowledge about the field, and who's doing what, and what exactly they think is happening. And then you can sit back and say, okay, let's try to piece this all together. In this group here, people know I do this all the time: I come in with the introspective ideas and say, well, have you ever thought about this? Now watch, let's all do this together. And it's helpful. It's helpful as long as that's not all you do. If all you did was that, then you're just making stuff up. But if you're constraining it by the reality of the neuroscience, then it's really helpful.

[01:04:17] Lex Fridman So let's talk a little bit about deep learning and the successes in the applied space of neural networks: the ideas of training a model on data, and these simple computational units, artificial neurons, that, with back propagation, are statistical ways of being able to generalize from the training set onto data that's similar to that training set. Where do you think are the limitations of those approaches? What do you think are their strengths, relative to your major efforts of constructing a theory of human intelligence?

[01:04:55] Jeff Hawkins Well, I'm not an expert in this field; I'm somewhat knowledgeable. So...

[01:05:00] Lex Fridman So it's just your intuition here?

[01:05:02] Jeff Hawkins Well, I have a little bit more than intuition. You know, one of the things you asked me was whether I spend all my time thinking about neuroscience. I do. That's to the exclusion of thinking about things like convolutional neural networks, but I try to stay current. So look, I think it's great, the progress they've made; it's fantastic, and as I mentioned earlier, it's highly useful for many things. The models that we have today are actually derived from a lot of neuroscience principles: they are distributed processing systems and distributed memory systems, and that's how the brain works. They use things that we might call neurons, but they're really not neurons at all, so we can just say they're distributed processing systems. And the notion of hierarchy, that came also from neuroscience. And there's a lot in the learning rules, you know, not backprop itself, but other, Hebbian kinds of things...

[01:05:52] Lex Fridman I'd be curious: you say they're not neurons at all; can you describe in which way? I mean, some of it is obvious, but I'd be curious if you have specific ways in which you think the biggest differences lie.

[01:06:02] Jeff Hawkins We had a paper in 2016 called "Why Neurons Have Thousands of Synapses," and if you read that paper, you'll know what I'm talking about here. A real neuron in the brain is a complex thing. Let's just start with the synapses on it, the connections between neurons. Real neurons can have anywhere from 5,000 to 30,000 synapses on them. The ones near the cell body, the ones that are close to the soma, the cell body, those are like the ones people model in artificial neurons. There are a few hundred of those; they can affect the cell, they can make the cell become active. Ninety-five percent of the synapses can't do that; they're too far away. If you activate one of those synapses, it just doesn't affect the cell body enough to make any difference.

[01:06:48] Lex Fridman Any one of them individually?

[01:06:50] Jeff Hawkins Any one of them individually, or even if you activate a mass of them. What real neurons do is the following. If you get 10 to 20 of them active at the same time, meaning they're all receiving an input at the same time, and those 10 to 20 synapses are within a very short distance of one another on the dendrite, like 40 microns, a very small area, so if you activate a bunch of them right next to each other at some distant place, what happens is it creates what's called a dendritic spike. The dendritic spike travels through the dendrites and can reach the soma, the cell body. Now, when it gets there, it changes the voltage, which is sort of on the way to making the cell fire, but it's never enough to make the cell fire. We say it depolarizes the cell: you raise the voltage a little bit, but not enough to do anything. It's like, what good is that? And then the voltage goes back down again.

[01:07:44] Jeff Hawkins So we proposed a theory, and I'm very confident in the basics of it, that what's happening there is that those 95% of the synapses are recognizing dozens to hundreds of unique patterns, about 10 to 20 synapses at a time, and they're acting like predictions. So the neuron is actually a predictive engine on its own. It can fire when it gets enough of what we call proximal input, from those synapses near the cell body, but it can get ready to fire from the dozens to hundreds of patterns it recognizes from the other ones. And the advantage of this to the neuron is that when it actually does produce a spike, an action potential, it does so slightly sooner than it would have otherwise. And what good is slightly sooner? Well, all the excitatory neurons in the brain are surrounded by these inhibitory neurons, and they're very fast, these inhibitory neurons, these basket cells. If I get my spike out a little bit sooner than someone else, I inhibit all my neighbors around me, right? And what you end up with is a different representation: you end up with a representation that matches your prediction. It's a sparse representation, meaning only a few neurons are active, but it's much more specific.
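A stripped-down sketch of that "neuron as a predictive engine" idea: distal dendritic segments each store one pattern, and any segment that sees roughly 10 to 20 of its synapses active at once puts the cell into a depolarized, predictive state. The sizes and threshold here are illustrative assumptions, not Numenta's actual HTM implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_INPUT = 1000
SEGMENT_THRESHOLD = 15   # ~10-20 coactive synapses trigger a dendritic spike

class PredictiveNeuron:
    def __init__(self, n_segments=20, synapses_per_segment=30):
        # Each distal dendritic segment samples a small random subset of
        # the input; each segment stores one learned pattern.
        self.segments = [rng.choice(N_INPUT, synapses_per_segment, replace=False)
                         for _ in range(n_segments)]

    def is_predicted(self, active_inputs):
        """Depolarized ('predictive') if any one segment sees enough of its
        own synapses active at the same time."""
        active = set(active_inputs)
        return any(sum(s in active for s in seg) >= SEGMENT_THRESHOLD
                   for seg in self.segments)

neuron = PredictiveNeuron()
# A pattern overlapping one of its own segments puts it in the predictive state.
print(neuron.is_predicted(neuron.segments[0][:20]))   # True
print(neuron.is_predicted(rng.choice(N_INPUT, 20)))   # almost surely False
```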

[01:08:57] Jeff Hawkins And so we showed how networks of these neurons can do very sophisticated temporal prediction. So, to summarize: real neurons in the brain are time-based prediction engines, and there's no concept of this at all in artificial point neurons. I don't think you can model the brain without that; I don't think you can build intelligence without it, because it's where a large part of the time dimension comes from. These are predictive models, and the time is in there: there's a prediction, then an action, and it's inherent in every neuron in the neocortex. So I would say the point neuron models a piece of that, and not very well, either. For example, synapses are very unreliable, and you cannot assign any precision to them, so even one digit of precision is not possible. The way real neurons work is that they don't adjust these weights accurately, like artificial neural networks do; they basically form new synapses. What you're always trying to do is detect the presence of some 10 to 20 active synapses at the same time, and they're almost binary, because you can't really represent anything much finer than that. These are the kinds of things, and I think that's actually another essential component, because the brain works on sparse patterns, and all of that mechanism is based on sparse patterns. I don't actually think you can build real brains, or machine intelligence, without incorporating some of those ideas.

[01:10:30] Lex Fridman It's hard to even think about the complexity that emerges from the fact that the timing of the firing matters in the brain, the fact that you form new synapses, and everything else you just mentioned in the past few minutes.

[01:10:44] Jeff Hawkins Trust me: if you spend time on it, you can get your mind around it. It's no longer a mystery to me.

[01:10:49] Lex Fridman No, but sorry, as a function, in a mathematical way, can you start getting an intuition about what gets it excited, about what it's representing?

[01:11:00] Jeff Hawkins There are many other types of neural networks that are more amenable to pure analysis, especially very simple networks: you know, four neurons doing this, that can be described mathematically, everything about it. Even the complexity of convolutional neural networks today is sort of a mystery; they can't really describe the whole system. So it's no different. My colleague Subutai Ahmad did a nice paper on this, you can get all this material on our website if you're interested, about the mathematical properties of sparse representations. So what we can do is understand mathematically, for example, why 10 to 20 synapses to recognize a pattern is the correct number, the right number you'd want to use, and by the way, that matches biology. We can show mathematically some of these concepts about why the brain is so robust to noise and error and loss of cells; we can show that mathematically as well as empirically in simulations. But the system can't be analyzed completely; any complex system can't, that's out of the realm. But there are mathematical benefits and intuitions that can be derived from mathematics, and we try to do that as well. Most of our papers have a section about that.

[01:12:23] Lex Fridman So I think it's refreshing and useful for me to be talking to you about deep neural networks, because your intuition basically says that we can't achieve anything like intelligence with artificial neural networks...

[01:12:35] Jeff Hawkins ...well, not in their current form. Not in their current form. In their ultimate form, sure.

[01:12:40] Lex Fridman So let me dig into that and see what your thoughts are a little bit. I'm not sure if you've read the little blog post called "The Bitter Lesson" by Rich Sutton recently; he's a reinforcement learning pioneer, I'm not sure if you're familiar with him. His basic idea is that, of all the stuff we've done in AI in the past 70 years, and he's one of the old-school guys, the biggest lesson learned is that all the tricky things we've done benefit in the short term, but in the long term what wins out is the simple, general methods that just rely on Moore's law, on computation getting faster and faster.

[01:13:21] Jeff Hawkins So that's what he's saying has worked up to now.

[01:13:23] Lex Fridman That's what has worked up to now, if you're trying to build a system, and the thing we're talking about is not concerned with intelligence, it's concerned with a system that works, in terms of making predictions, on applied, narrow AI problems, right? That's what the discussion is about. You just try to go as general as possible and wait years or decades for the computation to make it actually...

[01:13:50] Jeff Hawkins Is that a criticism, or is he saying this is the prescription of what we ought to be doing?

[01:13:54] Lex Fridman Well, it's partially a prescription. He's saying this is what has worked, and yes, it's a prescription, but it's a difficult prescription, because it says that all the fun things you guys are trying to do, we are trying to do, and he's part of the community, are only going to yield short-term gains. So this all leads up to a question, I guess, about artificial neural networks, and maybe our own biological neural networks: do you think, if we just scale things up significantly, so take these dumb artificial neurons, the point neurons, I like that term, if we just have a lot more of them, do you think some of the elements that we see in the brain may start emerging?

[01:14:38] Jeff Hawkins No, I don't think so. We can do bigger problems of the same type. I mean, it's been pointed out by many people that today's convolutional neural networks aren't really much different from the ones we had quite a while ago; they're just bigger and trained more, and we have labeled data and so on. But I don't think you can get to the kinds of things I know the brain can do, and that we think of as intelligence, by just scaling it up. So it may be a good description of what's happened in the past, of what's happened recently with the reemergence of artificial neural networks; it may be a good prescription for what's going to happen in the short term; but I don't think that's the path. I've said that earlier: there's an alternate path. I should mention to you, by the way, that we've made sufficient progress on our whole cortical theory in the last few years that last year we decided to start actively pursuing how we get these ideas embedded into machine learning. That's again being led by my colleague Subutai Ahmad, and he's more of a machine learning guy than a neuroscience guy.

[01:15:46] Jeff Hawkins So this is now our new, well, it's not our only focus, but it is now an equal focus here, because we need to proselytize what we've learned, and we need to show how it's beneficial to the machine learning world. So we have a plan in place right now; in fact, we just did our first paper on this, and I can tell you about that. But, you know, one of the reasons I wanted to talk to you is that I'm trying to get more people in the machine learning community to say, I need to learn about this stuff; maybe we should just think a bit more about what we've learned about the brain, and what have those Numenta people done, is that useful for us?

[01:16:24] Lex Fridman Yes. So are there elements of the cortical theory, the things we've been talking about, that may be useful in the short term? That is, sorry to interrupt, the open question. It certainly feels, from my perspective, that in the long term some of the ideas we're talking about will be extremely useful. The question is whether they'll be useful in the short term.

[01:16:46] Jeff Hawkins Well, this is what I would always call the entrepreneur's dilemma. You have this long-term vision: oh, we're all going to be driving electric cars, or we're all going to have computers, or whatever. And you're at some point in time, and you say, I can see that long-term vision, I'm sure it's going to happen; how do I get there without killing myself, without going out of business, right? That's the challenge. That's the really difficult thing to do. So we're facing that right now. Ideally, what you'd want to do is find some steps along the way, so that you can get there incrementally; you don't have to throw it all out and start over again. The first thing we've done is focus on sparse representations. So, just in case you don't know what that means, or some of the listeners don't: in the brain, if I have, say, 10,000 neurons, what you would see is maybe 2% of them active at a time. You don't see 50%; you don't see 30%; you might see 2%.

[01:17:41] Jeff Hawkins And it's always like that.

[01:17:42] Lex Fridman For any set of sensory inputs?

[01:17:44] Jeff Hawkins Doesn't matter what the input is, and it's true of just about any part of the brain.

[01:17:47] Lex Fridman But which neurons differ? Which neurons are active?

[01:17:52] Jeff Hawkins Yeah. So take 10,000 neurons that are representing something, sitting in a teeny little block together, a little block of 10,000 neurons. They're representing a location; they're representing a cup; they're representing input from the senses; I don't know, it doesn't matter. It's representing something. The way the representations occur, it's always a sparse representation. It's a population code: which 200 cells are active tells me what's going on. The individual cells are not important at all; it's the population code that matters. And when you have sparse population codes, all kinds of beautiful properties come out of them. So the brain uses sparse population codes, and we've written about and described these benefits in some of our papers. They give tremendous robustness to the systems. Brains are incredibly robust: neurons are dying all the time, and spasming, and synapses are falling apart all the time, and it keeps working. So what Subutai and Luiz, one of our other engineers here, have done is shown how to introduce sparseness into convolutional neural networks. A number of other people are thinking along those lines, but we're going about it in a more principled way, I think.

[01:19:00] Jeff Hawkins And we're showing that when you enforce sparseness throughout these convolutional neural networks, in both which neurons are active and the connections between them, you get some very desirable properties. So one of the current hot topics in deep learning right now is adversarial examples. You know, you can give me any deep learning network, and I can give you a picture that looks perfect, and you're going to call it, you know, you're going to say the monkey is an airplane. That's a problem, and DARPA just announced some big thing, they're trying to run some contest around this. But if you enforce sparse representations, many of these problems go away; the networks are much more robust, and they're not easy to fool. So we've already shown some of those results, just literally in January or February, just last month. It's on bioRxiv right now, or arXiv, you can read about it. So that's like a baby step, okay? Let's take something from the brain. We know about sparseness; we know why it's important; we know what it gives the brain. So let's try to enforce that onto these networks.
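One common way to enforce that kind of activation sparseness in a network layer is a k-winners-take-all step, which is in the spirit of what he describes; the sketch below is a generic NumPy version written for illustration, not Numenta's released code.

```python
import numpy as np

def k_winners(x, k):
    """Keep only the k largest activations in each row; zero the rest.
    With k much smaller than x.shape[1], every layer output becomes
    a sparse population code."""
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k, axis=1)[:, -k:]   # indices of the k winners
    rows = np.arange(x.shape[0])[:, None]
    out[rows, idx] = x[rows, idx]
    return out

acts = np.random.randn(4, 1000)    # a batch of dense layer outputs
sparse = k_winners(acts, k=20)     # ~2% of units stay active, as in the brain
print((sparse != 0).sum(axis=1))   # [20 20 20 20]
```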

[01:20:09] Lex Fridman What's your intuition for why sparsity leads to robustness? Because it feels like it would be less robust.

[01:20:14] Jeff Hawkins Why would you feel it's less robust?

[01:20:19] Lex Fridman It's just that the fewer neurons that are involved, the more fragile the representation feels.

[01:20:26] Jeff Hawkins But I didn't say there would be a few. I said 200. That's a lot.

[01:20:31] Lex Fridman A lot, is it?

[01:20:31] Jeff Hawkins Yes. So here's an intuition for it. This is a bit technical, so for engineers and machine learning people this will be easy, but for other listeners maybe not. If you're trying to classify something, you're trying to divide some very high-dimensional space into different pieces, A and B, and you're trying to create some dividing line where you say, all these points in this high-dimensional space are A, and all these points are B. And if you have points that are close to that line, it's not very robust. It works for all the points you know about, but it's not robust, because you just move a little bit and you've crossed over the line. When you have sparse representations, imagine I pick 200 cells to be active out of 10,000. Okay, so I have 200 cells active. Now suppose I pick, at random, another, different representation of 200. The overlap between those two is going to be very small, just a few cells.

[01:21:27] Jeff Hawkins I can pick millions of samples of 200 neurons at random, and not one pair of them will overlap by more than just a few. So one way to think about it is: if I want to fool one of these representations into looking like one of those other representations, I can't move just one cell, or two cells, or three cells, or four cells; I have to move 100 cells. And that makes them robust.
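That overlap claim is easy to check numerically: with 200 active cells out of 10,000, the expected overlap of two random codes is 200 × 200 / 10,000 = 4 cells. A quick simulation (the sizes come from the conversation; the code itself is mine):

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, TRIALS = 10_000, 200, 10_000

overlaps = np.empty(TRIALS, dtype=int)
for t in range(TRIALS):
    a = rng.choice(N, K, replace=False)   # one random sparse code
    b = rng.choice(N, K, replace=False)   # another, independent one
    overlaps[t] = np.intersect1d(a, b).size

# Expected overlap is K*K/N = 4; large overlaps are vanishingly rare.
print(overlaps.mean())   # ~4.0
print(overlaps.max())    # typically low teens, far below K = 200
```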

[01:21:52] Lex Fridman So, in terms of further steps: you mentioned sparsity...

[01:21:56] Jeff Hawkins ...what will be the next thing? Yeah, okay. So we've picked one; we don't know if it's going to work well yet. Again, we're trying to come up with incremental ways of moving from brain theory into the current machine learning world, adding pieces one step at a time. The next thing we're going to try to do is incorporate some of the ideas of the Thousand Brains Theory: that you have many, many models and that they're voting. Now, that idea is not new; mixtures of models have been around for a long time. But the way the brain does it is a little different, and the way it votes is different, and the way it represents uncertainty is different. So we're just starting this work, but we're going to try to see if we can incorporate some of the principles of voting, some of the principles of the Thousand Brains Theory, like lots of simple models that talk to each other in a very particular way, and see whether we can build machine learning systems that learn faster and also, mostly, are multimodal...

[01:23:03] Lex Fridman ...and robust, handling multimodal types of issues. So one of the challenges there is, you know, the machine learning and computer vision communities have certain sets of benchmarks, sets of tests, on which they compete. And I would argue, especially from your perspective, that those benchmarks aren't that useful for testing the aspects that the brain is good at, or intelligence. They're testing in directions that, fine, have been extremely useful for developing specific mathematical models, but they're not useful in the long term for creating intelligence. Do you think you also have a role in proposing better tests?

[01:23:46] Jeff Hawkins Yeah, you've identified a very serious problem. First of all, the tests they have are the tests they want, not tests of the other things we're trying to do, right? The second thing is, to be competitive in these tests, you have to have huge data sets and huge computing power, and we don't have that here; we don't have it the way the big teams at companies do. So there are numerous issues there. Our approach to this is based, in some sense, you might argue, on elegance. We're coming at it from a theoretical base that we think is so clearly elegant: oh my God, this is how brains work, this is what intelligence is. But the machine learning world has gotten into this phase where they think it doesn't matter what you think, as long as you do 0.1% better on this benchmark; that's all that matters. And that's a problem. We have to figure out how to get around that. That's a challenge for us; that's one of the challenges we have to deal with. So I agree, you've identified a big issue, and it's difficult, for those reasons. But, you know, part of the reason I'm talking to you today is that I hope I'm going to get some machine learning people to say, let me read those papers, those might be some interesting ideas; I'm tired of doing this 0.1% improvement stuff, you know?

[01:25:08] Lex Fridman Well, that's why I'm here as well, because I think machine learning, as a community, is now at a place where the next step needs to be orthogonal to what has received success in the past.

[01:25:21] Jeff Hawkins And you see other leaders saying this, machine learning leaders: you know, Geoff Hinton with his capsules idea. Many people have gotten up and said, we're going to hit a roadblock; maybe we should look at the brain, things like that. So hopefully that thinking will occur organically, and then we'll be in a nice position for people to come and look at our work and say, well, what can we learn from these guys?

[01:25:43] Lex Fridman Yeah, MIT is just launching a billion-dollar computing college that's centered around this idea, so...

[01:25:49] Jeff Hawkins On this idea of what, exactly?

[01:25:50] Lex Fridman Well, on the idea that, you know, the humanities, psychology, and neuroscience all have to work together to get to building the...

[01:25:58] Jeff Hawkins And Stanford just did this Human-Centered AI initiative. Yeah, I'm a little disappointed in these initiatives, because, you know, they focus on the human side of it, and it could very easily slip into how humans interact with intelligent machines, which, there's nothing wrong with that, but that is orthogonal to what we're trying to do. We're trying to ask: what is the essence of intelligence? I don't care... I want to build intelligent machines that aren't emotional, that don't smile at you, that aren't trying to tuck you in at night.

[01:26:32] Lex Fridman There is that pattern: when you talk about understanding humans as being important for understanding intelligence, you start slipping into topics of ethics, or, like you said, the interactive elements, as opposed to saying, no, no, no, let's zoom in on the brain and...

[01:26:51] Jeff Hawkins ...study what the brain does, and then we can decide which parts of that we want to recreate in some system. But until you have a theory about what the brain does, what's the point? You're just going to be wasting time.

[01:27:03] Lex Fridman Just to break it down on the artificial neural network side, and maybe you can speak to this on the biological side too: the process of learning versus the process of inference. In artificial neural networks, there's a difference between the learning stage and the inference stage. Do you see the brain as something different? One of the big distinctions people often make, and I don't know how correct it is, is that artificial neural networks need a lot of data; they're very inefficient learners. Do you see that as a correct distinction from the biology, from the human brain, that the human brain is very efficient? Or is that just something we deceive ourselves about?

[01:27:44] Jeff Hawkins No, it is efficient, obviously. We can learn new things almost instantly.

[01:27:47] Lex Fridman And so what elements do you think...

[01:27:50] Jeff Hawkins I can talk about that. You brought up two issues there. Remember, I talked earlier about the constraints we always feel. Well, one of those constraints is the fact that brains are continually learning. That's not something we said we could add later; that's something that was up front, that had to be there from the start, and it made our problems harder. But we showed, going back to the 2016 paper on sequence memory, how that happens, how brains infer and learn at the same time. And our models do that; they're not two separate phases, or two separate sets of time. I think that's a big, big problem in AI, at least for many applications, not for all. So I can talk about that. It gets detailed, but there are some parts of the neocortex, of the brain, where what's actually going on is cycles, cycles of activity in the brain, and there's very strong evidence that you're doing more inference on one part of the phase and more learning on the other part of the phase. So the brain can actually sort of alternate, with different populations of cells going back and forth like this. But in general, I would say that's an important problem, and all of the networks we've come up with do both.

[01:29:06] Jeff Hawkins They're continuous learning networks. And you mentioned benchmarks earlier; well, there are no benchmarks for exactly that. So we have to, like, get on our little soapbox: hey, by the way, this is important, and here's a mechanism for doing it. But until you can prove it to someone in some commercial system or something, it's...

[01:29:25] Lex Fridman ...harder. So one of the things I wanted to linger on there is that, in some ways, to learn the concept of a coffee cup, you only need this one coffee cup and maybe some time alone in a room with it.

[01:29:37] Jeff Hawkins Well, the first thing is continuous learning. Imagine I reach my hand into a black box, and I'm reaching in, trying to touch something. I don't know up front if it's something I already know or if it's a new thing, and I'm figuring out both at the same time. I don't say, oh, let's first see if it's a new thing; oh, now let's see if it's a known thing. I don't do that. As I go, my brain says, oh, it's new, or it's not new. And if it's new, I start learning what it is, and, by the way, it starts learning from the get-go, even if it never ends up recognizing it. So they're not separate problems. That's the first thing. The other thing you mentioned was fast learning. I was just talking about continuous learning, but there's also fast learning. Literally, I could show you this coffee cup and say, here's a new coffee cup, it's got the logo on it, take a look at it. Done. You're done.

[01:30:23] Jeff Hawkins You can predict what it's going to look like in different positions. So I can talk about that too. In the brain, the way learning occurs, I mentioned this earlier, but I'll mention it again: imagine I'm a section of a dendrite of a neuron, and I'm going to learn something new. It doesn't matter what it is; I'm just going to learn something new; I need to recognize a new pattern. So what I'm going to do is form new synapses, new synapses, that's rewiring the brain, onto that section of the dendrite. Once I've done that, everything else the neuron has learned is not affected by it, because it's isolated to that small section of the dendrite. They're not all being added together, like in a point neuron. So if I learn something new on this segment here, it doesn't change anything the neuron has learned anywhere else. I can add something without affecting previous learning, and I can do it quickly.

[01:31:20] Jeff Hawkins Now, about the quickness, how it's done in real neurons: you might say, well, doesn't it take time to form synapses? Yes, it can take maybe an hour to form a new synapse. But we can form memories quicker than that, and I can explain how, too, if you want. But it's getting a bit neuroscience-y.

[01:31:39] Lex Fridman That's great. But is there an understanding of these mechanisms at every level, from the short-term memories to the forming of...

[01:31:47] Jeff Hawkins Yes. So this idea of synaptogenesis, the growth of new synapses, that's well described, well understood, and that's an essential part of learning. That is learning, that is learning, okay? Going back many, many years, there was a psychologist named Donald Hebb. He proposed that learning was the modification of the strength of a connection between two neurons. People interpreted that as the modification of the strength of a synapse. He didn't say that; he just said there's a modification of the effect of one neuron on another. So synaptogenesis is totally consistent with what Donald Hebb said. But anyway, there's this mechanism, the growth of new synapses; you can go online and watch a video of a synapse growing in real time, this little thing growing, it's pretty impressive. So that mechanism is known. Now there's another thing that we've speculated on and written about, which is consistent with known neuroscience but less proven, and it addresses this question: how do I form a memory really, really quickly, like instantaneously? If it takes an hour to grow a synapse, that's not instantaneous.

[01:32:56] Jeff Hawkins So there are types of synapses called silent synapses. They look like a synapse, but they don't do anything; they're just sitting there. An action potential comes in, and the synapse doesn't release any neurotransmitter. Some parts of the brain have more of these than others; for example, the hippocampus has a lot of them, and that's the part we associate most with short-term memory. So what we speculated, again in that 2016 paper, is that the way we form very quick memories, very short-term memories, is that we convert silent synapses into active synapses. It's like taking a synapse from a zero weight to a one weight. But the long-term memory has to be formed by synaptogenesis. So you can remember something really quickly by just flipping a bunch of these guys from silent to active. It's not like going from a weight of 0.1 to 0.15; it's going from "does nothing" to "releases transmitter." If I do that over a bunch of them, I've got a very quick short-term memory.
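A crude sketch of that speculated mechanism: synapses are near-binary, and a short-term memory is formed by flipping silent synapses (weight 0) onto currently active inputs to active (weight 1) in one shot. The data structure and names below are my own illustration, not the 2016 paper's model.

```python
import numpy as np

class Segment:
    """A dendritic segment with binary synapses: some active, some silent."""
    def __init__(self, presynaptic_cells, active_mask):
        self.cells = np.asarray(presynaptic_cells)
        self.active = np.asarray(active_mask, dtype=bool)   # silent = False

    def fast_learn(self, pattern):
        """Instant 'memory': flip silent synapses onto currently active
        inputs to active (weight 0 -> 1). No slow structural growth needed."""
        self.active |= np.isin(self.cells, list(pattern))

    def response(self, pattern):
        """Count how many *active* synapses see input; silent ones do nothing."""
        return int((self.active & np.isin(self.cells, list(pattern))).sum())

seg = Segment(presynaptic_cells=[3, 7, 12, 19, 42], active_mask=[1, 0, 0, 1, 0])
print(seg.response({7, 12, 42}))   # 0 -> silent synapses don't respond
seg.fast_learn({7, 12, 42})        # one-shot flip: silent -> active
print(seg.response({7, 12, 42}))   # 3 -> an immediate short-term memory
```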

[01:33:56] Jeff Hawkins So I guess the lesson behind this is that most neural networks today are fully connected: every neuron connects to every neuron in the next layer. That's not correct in the brain, and we don't want that; we actually don't want it at all. You want very sparse connectivity, so that any neuron connects to some subset of the neurons in the other layer, and it does so on a dendrite-segment-by-dendrite-segment basis. So it's a very parcellated kind of thing. And then learning is not adjusting all these weights; learning is saying, okay, connect to these 10 cells here, right now.

[01:34:30] Lex Fridman And in that process, you know, with artificial neural networks, it's a very simple process, back propagation, that adjusts the weights. The process of synaptogenesis is not...

[01:34:42] Jeff Hawkins It's even easier.

[01:34:42] Lex Fridman It's even easier?

[01:34:43] Jeff Hawkins It's even easier. Back propagation requires something that really can't happen in brains: this backward propagation of an error signal really can't happen. People are trying to make it happen in brains, but it doesn't. This is pure Hebbian learning; synaptogenesis is pure Hebbian learning. It's basically saying, there's a population of cells over here that are active right now, and there's a population of cells over here that are active right now; how do I form connections between those active cells? It's literally saying, these 100 neurons here became active before this neuron became active, so form connections to those ones. That's it. There's no propagation of error, nothing. All the networks we build, all the models we have, work almost completely on Hebbian learning, but on dendritic segments, and with multiple synapses at the same time.
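A minimal sketch of that rule, under my own naming: a segment grows new synapses onto a sample of the cells that were active just before the neuron fired, with no error signal anywhere.

```python
import numpy as np

rng = np.random.default_rng(7)

def hebbian_grow(segment, prev_active, n_new=10):
    """Pure Hebbian synaptogenesis: add synapses onto a sample of the cells
    that were active just before this neuron fired. No error propagation."""
    candidates = np.setdiff1d(list(prev_active), segment)  # not yet connected
    picks = rng.choice(candidates, min(n_new, candidates.size), replace=False)
    return np.union1d(segment, picks)

segment = np.array([5, 81, 302])   # existing synapses on one dendritic segment
prev_active = set(rng.choice(10_000, 100, replace=False))  # cells active at t-1
segment = hebbian_grow(segment, prev_active)
print(segment.size)   # grew by up to 10 synapses, all onto recently active cells
```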

[01:35:33] Lex Fridman Now let me return to a question you've already answered, and maybe you can answer it again. If you look at the history of artificial intelligence, where do you think we stand? How far are we from solving intelligence? You've said you're very optimistic; can you elaborate on that?

[01:35:48] Jeff Hawkins Yeah, you know, it's always a crazy question to ask, because no one can predict the future, absolutely. So I'll tell you a story. I used to run a different neuroscience institute, the Redwood Neuroscience Institute, and we would hold these symposiums, and we'd get, like, 35 scientists from around the world to come together. And I used to ask them all the same question. I would say, well, how long do you think it'll be before we understand how the neocortex works? And everyone went around the room; they'd introduce themselves, and they had to answer that question. The typical answer was 50 to 100 years. Some people said 500 years. Some people said never. I said, then why are you in neuroscience? "It's good pay." You know, it's interesting. But it doesn't work like that. As I mentioned earlier, these are step functions: things happen, and then, bingo, it happened. You can't predict that.

[01:36:43] Jeff Hawkins I feel like we've already passed a step function. So if I can do my job correctly over the next five years, meaning I can proselytize these ideas, I can convince other people they're right, and we can show machine learning people that they should pay attention to these ideas, then we're definitely in an under-20-year time frame. If I can do those things. If I'm not successful in that, and this is the last time anyone talks to me, and no one reads our papers, and I'm wrong, or something like that, then I don't know. But it's not 50 years. You know, it's the same thing with electric cars: how quickly will they populate the world? It'll probably play out over about a 20-year span. It'll be something like that. But I think if I can do what I said, we're starting it.

[01:37:31] Lex Fridman And of course there could be other kinds of step functions. It could be that everybody gives up on your ideas for 20 years, and then all of a sudden somebody picks them up again: wait, that guy was onto something.

[01:37:43] Jeff Hawkins So that would be a failure on my part, right? Think about Charles Babbage. Babbage, he's the guy who invented the computer back in the 1800s, and everyone forgot about it until, you know, a hundred years later: hey, this guy figured this stuff out a long time ago. But he was ahead of his time. As I said, I recognize this is part of any entrepreneur's challenge, and I use "entrepreneur" broadly in this case: I don't mean I'm building a business and trying to sell something; I mean I'm trying to sell ideas. And this is the challenge: how do you get people to pay attention to you? How do you get them to give you positive or negative feedback? How do you get people to act differently based on your ideas? So, you know, we'll see how we do on that.

[01:38:30] Lex Fridman So, you know, there's a lot of hype behind artificial intelligence currently. As you look to spread the ideas that are part of your cortical theory, the things you're working on, do you think there's some possibility we'll hit an AI winter once again?

[01:38:47] Jeff Hawkins Yeah, it's certainly a possibility. Do I worry about it, though? Hmm. I haven't decided yet whether that would be good or bad for my mission.

[01:38:59] Lex Fridman That's true, that's very true, because it's almost like you need the winter to refresh the palate.

[01:39:04] Jeff Hawkins Yeah. Here's what you want to have happen. On one extreme, everyone is so thrilled about the current state of machine learning and AI that they don't imagine they need anything else, and that makes my job harder. On the other extreme, if everything crashed completely, and every student left the field, and there was no money for anybody to do anything, and it became an embarrassment to talk about machine intelligence and AI, that wouldn't be good for us either. You want the soft-landing approach, right? You want enough senior people in AI and machine learning to say, we need other approaches; we really need other approaches; maybe we should look at the brain. Okay, let's look at the brain. Who's got some brain ideas? Okay, let's start a little project on the side here, doing brain-related stuff. That's the ideal outcome we would want. So I don't want a total winter, and yet I don't want it to be sunny all the time...

[01:39:56] Lex Fridman ...either. What do you think it takes to build a system with human-level intelligence, where, once demonstrated, you would be very impressed? Does it have to have a body? Does it have to have the C word we used before, consciousness, as an entirety, in a holistic sense?

[01:40:18] Jeff Hawkins First, I don't think the goal is to create a machine that is human-level intelligent. I think it's a false goal. Back to Turing: I think that was a false framing. We want to understand what intelligence is, and then we can build intelligent machines of all different scales, all different capabilities. You know, a dog is intelligent, and it could be pretty good to have something at the level of a dog. But what about something that doesn't look like an animal at all, that operates in different spaces? So my thinking is this: we want to define, or agree upon, what makes an intelligent system. We can then say, okay, we're going to build systems that work on those principles, or some subset of them, and we can apply them to all kinds of different problems. It's just like the idea of computing. If I take a little one-chip computer, I don't say, well, that's not a computer, because it's not as powerful as this big server over here. No, because we know the principles of computing, and I can apply those principles to a small problem or a big problem. Intelligence needs to be the same way: we have to say, these are the principles; I can make a small one, a big one; I can make them distributed; I can put them on different sensors; they don't have to be human-like at all. Now, you did bring up a very interesting question about embodiment: does it have to have a body?

[01:41:27] Jeff Hawkins It has to have some concept of movement. It has to be able to move through those reference frames I talked about earlier, whether it's physically moving, like, if I'm going to have an AI that understands coffee cups, it's going to have to pick up the coffee cup, and touch it, and look at it with its eyes and hands, or something equivalent. If I have a mathematical AI, maybe it needs to move through mathematical spaces. I could have a virtual AI that lives in the internet, and its movements are traversing links and digging into files, but it's got a location: it's traveling through some space. You can't have an AI that just takes a flash of input; we call that flash inference: here's a pattern, done. No. It's movement, pattern, movement, pattern, movement, pattern; attention, digging in, building structure; figuring out the model of the world. So some sort of embodiment, whether it's physical or not, has to be part of it.

[01:42:25] Lex Fridman So, self-awareness, in the sense of being able to answer "where am I"?

[01:42:29] Jeff Hawkins Now you're bringing up self-awareness; that's a different topic.

[01:42:31] Lex Fridman I mean a very narrow definition of it: having a sense of self enough to know "where am I" in this space...

[01:42:40] Jeff Hawkins Basically, the system needs to know its location, or each component of the system needs to know where it is in the world at that point in time.

[01:42:48] Lex Fridman So self-awareness and consciousness: do you think, from the perspective of neuroscience and the neocortex, these are interesting topics, solvable topics? Do you have any ideas why the heck it is that we have a subjective experience at all? Is it useful, or is it just a side effect?

[01:43:07] Jeff Hawkins It's interesting to think about, but I don't think it's useful as a means to figure out how to build intelligent machines. It's something that systems do, and we can talk about what it is, like: well, if I build a system like this, then it would be self-aware, and if I build it like this, it wouldn't be self-aware. That's a choice I have. It's not like, oh my god, it's self-aware! I heard an interview recently with a philosopher from Yale, I can't remember his name, I apologize for that, but he was talking about how, if these computers were self-aware, then it would be a crime to unplug them. And I'm like, come on, I unplug myself every night; I go to sleep. Is that a crime? I plug myself back in again in the morning, and there I am. So people get kind of bent out of shape about this. I have very definite, very detailed opinions about what it means to be conscious and what it means to be self-aware, and I don't think it's that interesting a problem. You've talked with Christof Koch; he thinks that's the only problem.

[01:44:10] Jeff Hawkins I didn't actually listen to your interview with him, but I know him, and I know that's his thing.

[01:44:15] Lex Fridman But he also thinks intelligence and consciousness are disjoint, so it's not one or the other; you disagree with that. So what are your thoughts on consciousness, the word, and where it emerges from? Because it is...

[01:44:28] Jeff Hawkins Well, then we have to break it down into two parts, okay? Because consciousness isn't one thing; that's part of the problem with the term. It means different things to different people, and there are different components of it. There is the concept of self-awareness, okay? That can be explained very easily. You have a model of your own body: the neocortex models things in the world, and it also models your own body. And then it has a memory; it can remember what you've done. So it can remember what you did this morning, remember what you had for breakfast, and so on. And so I could say to you, okay, Lex, were you conscious this morning when you ate your bagel? And you'd say, yes, I was conscious. Now, what if I could take your brain and revert all the synapses back to the state they were in this morning? Then I'd say to you, Lex, were you conscious when you ate the bagel? And you'd say, no, I wasn't. And if I showed you a video of you eating the bagel, you'd say, I wasn't there; that's not possible, because I must have been unconscious at that time. So we can make this one-to-one correlation between the memory of your body's trajectory through the world over some period of time, plus the ability to recall that memory, and what you would call consciousness: I was conscious of that. That's self-awareness.
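[Editor's note: a toy Python sketch of the one-to-one correlation Hawkins proposes above, treating "was I conscious of X?" as "can I recall X from my own stored trajectory?". The class and event names are invented for illustration.]

```python
from datetime import datetime

# Toy version of the definition above: self-awareness as the ability to
# recall one's own recent trajectory through the world.

class SelfModel:
    def __init__(self) -> None:
        self.trajectory: list[tuple[datetime, str]] = []

    def act(self, event: str) -> None:
        """Do something and remember having done it."""
        self.trajectory.append((datetime.now(), event))

    def was_aware_of(self, event: str) -> bool:
        """The recall test: aware of it iff it can be remembered."""
        return any(e == event for _, e in self.trajectory)

agent = SelfModel()
agent.act("ate bagel")
print(agent.was_aware_of("ate bagel"))   # True

# "Reverting the synapses" amounts to erasing the memory: the recall,
# and with it the reported awareness, disappears.
agent.trajectory.clear()
print(agent.was_aware_of("ate bagel"))   # False
```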

[01:45:38] Jeff Hawkins And any system that can memorize what it has done recently, recall it, and invoke it again would say, yeah, I'm aware; I remember what I did. All right, that's an easy one. Although some people think that's the hard one. The more challenging part of consciousness is the one that often goes by the word qualia, which is: why does red feel like red, or what is pain, and why does pain feel like something? Why do I feel redness, or why do I feel pain-ness? And then I could ask, why does sight seem different than hearing? It's the same problem; these are all just neurons. So how is it that looking at you feels different than hearing you? It feels different, but they're all neurons in my head doing the same thing. So that's an interesting question. The best treatise I've read about this is by a guy named Kevin O'Regan; he wrote a book called Why Red Doesn't Sound Like a Bell. It's not a trade book, not an easy read, but it's an interesting question.

[01:46:46] Jeff Hawkins Take something like color. Color really doesn't exist in the world; it's not a property of the world. The property of the world that exists is light frequency, and that gets turned into... we have certain cells in the retina that respond to different frequencies differently than others. So by the time the signal enters the brain, it's just a bunch of axons firing at different rates, and from that we perceive color. But there is no color in the brain; there's no color coming in on those synapses. It's just a correlation between some axons and some property of frequency, and that isn't even color itself; frequency doesn't have a color. It's just what it is. So then the question is, why does the world even appear to have color at all?
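[Editor's note: a toy Python sketch of the point about color. Light of one wavelength becomes nothing more than a vector of cone firing rates; "red" is not in the signal. The Gaussian sensitivity curves and peak wavelengths are rough stand-ins, not a physiological model.]

```python
import numpy as np

# All the cortex receives from the retina is relative firing rates from
# three cone types; the curves below are rough approximations.

CONE_PEAKS_NM = {"S": 440.0, "M": 540.0, "L": 570.0}  # approximate peaks
CONE_WIDTH_NM = 50.0

def cone_firing_rates(wavelength_nm: float) -> dict[str, float]:
    """Map a light wavelength to approximate relative cone responses."""
    return {
        cone: float(np.exp(-((wavelength_nm - peak) ** 2)
                           / (2 * CONE_WIDTH_NM ** 2)))
        for cone, peak in CONE_PEAKS_NM.items()
    }

# 650 nm light is what we *call* red, but the brain only gets this:
print(cone_firing_rates(650.0))
# {'S': ~0.0002, 'M': ~0.09, 'L': ~0.28} -- just rates, no "redness"
```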

[01:47:27] Lex Fridman Just as you're describing it, there seems to be a connection to those ideas of reference frames. It feels like consciousness, the subject assigning the feeling of red to the actual color, or to the wavelength, is useful for something.

[01:47:47] Jeff Hawkins Yeah, I think that's a good way of putting it. It's useful as a predictive mechanism, or useful as a generalization, a way of grouping things together: it's useful to have a model like this. Think about the well-known syndrome that people who have lost a limb experience, called phantom limbs. What they claim is that their arm has been removed, but they still feel the arm. Not only do they feel it, they know it's there; they'll swear to you that it's there. And they can feel pain in the arm, pain in a finger, and if they move the non-existent arm behind their back, they feel the pain behind their back. So this whole idea that your arm exists is a model in your brain. It may or may not really exist, but it's useful to have a model of something that correlates to things in the world, so you can make predictions about what would happen when those things occur. It's a bit of a rough answer, but I think you're getting right toward the answer there: it's useful for the model to express things in certain ways that we can then map into these reference frames and make predictions about them.

[01:48:55] Jeff Hawkins I need to spend more time on this topic. It doesn't bother me, though.

[01:48:58] Lex Fridman Why do you need to spend more time on it? It does feel special that we have subjective experience, but I'm yet to know why.

[01:49:07] Jeff Hawkins I'm just personally curious. It's not necessary for the work we're doing here; I don't think I need to solve that problem to build intelligent machines. Not at all.

[01:49:15] Lex Fridman But there is that notion you described briefly, which doesn't seem so silly to us humans: if you're successful at building intelligent machines, it feels wrong to then turn them off. Because if you're able to build a lot of them, it feels wrong to be able to just turn them off.

[01:49:38] Jeff Hawkins Well, let's break it down a bit. As humans, why do we fear death? First of all, the state when you're dead doesn't matter at all, okay? You're dead. So why do we fear death? We fear death for two reasons. One is because we are genetically programmed to fear death; that's a survival, propagating-the-genes thing. And we're also programmed to feel sad when people we know die. We don't feel sad for someone we don't know who dies; people are dying right now, and I don't feel bad about them because I don't know them. But if I knew them, I'd feel really bad. So again, these are old-brain, genetically embedded things: we fear death. But outside of those uncomfortable feelings, there's nothing else to worry about.

[01:50:25] Lex Fridman Wait a second. Do you know The Denial of Death by Becker? No? Well, there's a thought that our whole conception of our world model kind of assumes immortality, and that death is this terror that underlies it all.

[01:50:47] Jeff Hawkins It might underlie some people's world models. Not mine.

[01:50:50] Lex Fridman But okay, what Becker would say is that you're just living in an illusion, an illusion you've constructed for yourself because the terror is so great. You're still not coming to grips with...

[01:51:04] Jeff Hawkins The illusion of what, that death is going to happen? Like it's not going to...

[01:51:09] Lex Fridman That you're actually operating as if... even though you say you've accepted it, you haven't really accepted the notion that you die, is what he would say. So it sounds like you disagree with that notion.

[01:51:21] Jeff Hawkins Yeah, totally. Every night I go to bed, it's like a little death. And if I didn't wake up, it wouldn't matter to me. Only if I knew that was going to happen would it be bothersome, and how would I know it's going to happen? No one knows. Then I would only worry about my wife. So imagine I was a loner living in Alaska, way out there, with no animals around; nobody knew I existed, and I was just eating roots all the time. And one day I didn't wake up. What pain would there exist in the world?

[01:51:57] Lex Fridman So most people who think about this problem would say that you're either deeply enlightened or completely delusional. I would say it's a very enlightened way to see the world, that it's the rational one. But the fact is, we really don't have an understanding of why the heck it is we're born and why we die, and what happens after.

[01:52:25] Jeff Hawkins Well, maybe there isn't a reason, and maybe there is. I'm interested in those big problems too, right? You interviewed Max Tegmark; there are people like that who are interested in the big problems as well. In fact, when I was young, I made a list of the biggest problems I could think of. First, why does anything exist? Second, why do we have the laws of physics that we have? Third, is life inevitable, and why is it here? Fourth, is intelligence inevitable, and why is it here? I stopped there, because I figured if you could make a truly intelligent system, that would be the quickest way to answer the first three questions. I'm serious. And so, you asked me earlier: my first mission is to understand the brain, but I feel that's the shortest way to get to machine intelligence, and I want to get to machine intelligence because, even if it doesn't occur in my lifetime, other people will benefit from it. I do think it will occur in my lifetime, maybe 20 years, but you never know. That would be the quickest way for us to... you know, we can make super mathematicians, we can make super space explorers, we can make super physicist brains that do these things, that can run experiments we can't run. We don't have the abilities to manipulate things and so on, but we can build intelligent machines that do all those things, with the ultimate goal of finding the answers to the other questions.

[01:53:48] Lex Fridman Let me ask you another depressing and difficult question: once we achieve that goal of creating... no, of understanding intelligence, do you think we would be happier, more fulfilled as a species? Understanding intelligence, and understanding the answers to the big questions?

[01:54:06] Jeff Hawkins Oh, totally. It would be a far more fun place to live. Why not? I mean, just put aside the Terminator nonsense; we can talk about the risks of AI if you want. But I think the world would be far better off knowing things; we're always better off knowing things. Is the world I live in a better place because I know that our planet is one of many in the solar system, and the solar system is one of many in the galaxy? I think so. I sometimes think, what would it have been like living 300 years ago? I'd be looking at the sky thinking, God, I can't understand anything. I'd be going to bed every night wondering, what's going on here?

[01:54:50] Lex Fridman Well, in some sense I agree with you, but I'm not exactly sure. I'm also a scientist, so I share your views, but it's like we're rolling down the hill together.

[01:55:02] Jeff Hawkins What's down the hill? I feel like we're climbing a hill. We're getting closer to enlightenment, or whatever.

[01:55:10] Lex Fridman Climbing, fine. We're getting pulled up the hill...

[01:55:12] Jeff Hawkins By our hair? No, our curiosity is pulling; we're pulling ourselves up the hill by...

[01:55:16] Lex Fridman Curiosity, yeah. Sisyphus was doing the same thing with his rock. But okay, our happiness aside, do you have concerns about what Sam Harris and Elon Musk talk about: existential threats from AI?

[01:55:31] Jeff Hawkins No, I'm not worried about existential threats. There are some things we really do need to worry about; even today's AI has things we have to worry about. We have to worry about privacy, and about how AI impacts false beliefs in the world. We have real problems and things to worry about with today's AI, and that will continue as we create more intelligent systems. There's no question that the whole issue of making intelligent armaments and weapons is something we have to think about carefully. But I don't think of those as existential threats; those are the kinds of threats we always face, and we'll have to face them here and decide how to deal with them. We could talk about what people think the existential threats are, but when I hear people talking about them, they all sound hollow to me. They're based on ideas from people who really have no idea what intelligence is. If they knew what intelligence was, they wouldn't say those things. Those are not experts in the field.

[01:56:28] Lex Fridman So there are two, right? One is superintelligence: a system that becomes far, far superior in reasoning ability to us humans.

[01:56:43] Jeff Hawkins How is that an existential threat, then?

[01:56:46] Lex Fridman So there are a lot of ways in which it could be. One way is that us humans are actually irrational and inefficient, and we get in the way of, not happiness, but whatever the objective function is that it's maximizing. The paperclip problem and things like that, but with a superintelligence.

[01:57:09] Jeff Hawkins Yeah, so we already face that threat, in some sense; they're called bacteria. These are organisms in the world that would like to turn everything into bacteria, and they're constantly morphing, constantly changing to evade our protections, and in the past they have killed huge swaths of the human population on this planet. So if you want to worry about something that's going to multiply endlessly, we already have it. And I'm far more worried, in that regard, that some scientist in a laboratory will create a super virus or a super bacteria that we cannot control. That is the more existential threat. Putting an intelligent thing on top of that actually seems to make it less of an existential threat to me: it limits its power, limits where it can go, limits the number of things it can do. In many ways bacteria are worse; they're something you can't even see. So that's one of those problems.

[01:58:04] Lex Fridman Yes, exactly. So the other one, just your intuition about intelligence: if you look at intelligence on a spectrum from zero to us humans, do you think you can scale it to something far superior? All the mechanisms we've talked about...

[01:58:24] Jeff Hawkins Let me make another point here, Lex, before we get there. Intelligence is in the neocortex; it is not the entire brain. The goal is not to make a human. The goal is not to make an emotional system. The goal is not to make a system that wants to have sex and reproduce. Why would I build that? If I want a system that wants to reproduce and have sex, I'd make bacteria, I'd make computer viruses. Those are bad things; don't do that. Those are really bad, don't do those things. Regulate those. But if I just say, I want an intelligent system, why does it have to have any of the human-like emotions? Why does it even care if it lives? Why does it even care if it has food? It doesn't care about those things. It's just in a trance thinking about mathematics, or it's out there just trying to build a space fort on Mars. Those are choices we make: don't make human-like things, don't make replicating things, don't make things that have emotions. Just stick to the neocortex.

[01:59:20] Lex Fridman So that's a view that I actually share, but not everybody shares, in the sense that you have faith and optimism about us as engineers of systems, humans as builders of systems, to not put in the stupid stuff...

[01:59:34] Jeff Hawkins Well, look, this is why I mentioned the bacteria example. You might say, well, some person is going to do that. Well, some person today could create a bacteria that's resistant to all antibacterial agents. So we already have that threat; we already know this is going on. It's not a new threat. So just accept that, and then we have to deal with it, right? My point has nothing to do with intelligence. Intelligence is a separate component that you might apply to a system that wants to reproduce and do stupid things. Let's not do that.

[02:00:07] Lex Fridman In fact, it's a mystery why people haven't done that yet. My dad is a physicist, and he believes the reason, for example, that nuclear weapons haven't proliferated amongst evil people is twofold. One belief, which I share, is that there are not that many evil people in the world who would use, whether it's bacteria or nuclear weapons or maybe future AI systems, to do harm; that fraction is small. And the second is that it's actually really hard, technically, so the intersection between evil and competent is small.

[02:00:44] Jeff Hawkins And to really annihilate humanity, you'd have to have, you know, the nuclear winter phenomenon, which is not one person shooting off a bomb, or even ten bombs. You'd have to have some automated system that detonates a million bombs, or however many thousands we have.

[02:01:00] Lex Fridman Extreme evil combined with extreme competence.

[02:01:03] Jeff Hawkins And building some stupid system that would do it automatically, a Dr. Strangelove type of thing. Look, we could have a bomb go off in a major city in the world; I think that's actually quite likely, even in my lifetime. I don't like to think about it, and it would be a tragedy, but it won't be an existential threat. It's the same as, you know, the virus of 1918, whenever it was, the influenza. These bad things can happen, the plague and so on. We can't always prevent them; we always try, but we can't. But they're not existential threats until we combine all those crazy things together at once.

[02:01:41] Lex Fridman So on the spectrum of intelligence from zero to human: do you have a sense of whether it's possible to create something several orders of magnitude beyond, or at least double, human intelligence, built on these neocortex principles?

[02:01:55] Jeff Hawkins I think it's the wrong way to frame it, to say double the intelligence. Break it down into different components. Can I make something that's a million times faster than a human brain? Yes, I can do that. Can I make something that has a lot more storage than a human brain? Yes: more columns, more copies of columns. Can I make something that attaches to different senses than a human brain? Yes, I can do that. Can I make something that's distributed? We talked earlier about the columns in the neocortex voting; they don't have to be co-located, they could be spread all around the place. I can do that too. Those are all levers I have. But is it more intelligent? That depends on what I train it on and what it's doing.
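[Editor's note: a toy Python sketch of the column "voting" Hawkins mentions here, from the Thousand Brains theory. Each simulated column holds an uncertain belief about the object being sensed, and the columns converge by combining votes. This is an illustrative simplification, not Numenta's actual algorithm; the objects and likelihoods are invented.]

```python
import numpy as np

OBJECTS = ["coffee_cup", "stapler", "phone"]

def column_belief(likelihoods: list[float]) -> np.ndarray:
    """One column's normalized belief over the known objects."""
    b = np.asarray(likelihoods, dtype=float)
    return b / b.sum()

# Three columns, each sensing a different part of the object, each
# ambiguous on its own:
votes = [
    column_belief([0.5, 0.4, 0.1]),  # feels a curved surface
    column_belief([0.5, 0.1, 0.4]),  # feels a rim
    column_belief([0.6, 0.2, 0.2]),  # feels a handle
]

# Voting: combine the independent beliefs (product rule), renormalize.
combined = np.prod(votes, axis=0)
combined /= combined.sum()
print(dict(zip(OBJECTS, combined.round(3))))
# {'coffee_cup': 0.904, ...} -- the consensus sharpens even though no
# single column was sure, and the columns need not be co-located.
```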

[02:02:35] Lex Fridman So here's the thing: say we have a larger neocortex, or whatever size allows for higher and higher hierarchies to form, and we're talking about what we train it on...

[02:02:49] Jeff Hawkins Then could I have something that's a super physicist or a super mathematician?

[02:02:52] Lex Fridman Yes. And the question is, once you have a super physicist, will we be able to understand them? Do you have a sense that they'd be orders of magnitude beyond us, like us compared to...

[02:03:03] Jeff Hawkins Will we ever understand it? Most people cannot understand general relativity. It's a really hard thing to get. You know, you can get the basic picture, the stretchy-space analogy, but the field equations, and the deep intuitions, are really, really hard, and I've tried; I'm unable to do it. It's easy to get special relativity, but general relativity, man, that's too much. So we already live with this to some extent: the vast majority of people can't understand what the vast majority of other people actually know. Either we don't put in the effort, or we don't have the time, or we're just not smart enough, whatever. But we have ways of communicating. Einstein spoke in a way that I can understand; he's given me analogies that are useful, and I can use those analogies in my own work and think about concepts that are similar.

[02:04:00] Jeff Hawkins It's not like he existed on some plane that has no connection with my plane in the world. So that will occur; it already has occurred. That's my point about this story: it already has occurred, and we live with it every day. One could argue that when we create machine intelligences that think a million times faster than us, they will be so far beyond us that we can't make the connections. But at the moment, everything that seems really, really hard to figure out in the world, once it's actually figured out, isn't that hard. Almost everyone can understand the multiverse, almost everyone can understand quantum physics, almost everyone can understand these basic things, even though hardly any people could have figured those things out.

[02:04:39] Lex Fridman Yeah, but to really understand it...

[02:04:40] Jeff Hawkins Do you need to really understand? Really, only a few people do.

[02:04:43] Lex Fridman To understand, you only need the projections, the sprinkles of the...

[02:04:48] Jeff Hawkins The useful insights, yeah. That was my example of Einstein: his general theory of relativity is something that very, very few people can get. And what if we just said, those few people are also artificial intelligences? How bad is that? In some sense they already are. I mean, Einstein wasn't really a normal person; he had a lot of quirks, and so did the other people who worked with him. So maybe they already were sort of on this astral plane of intelligence that we live with already. It's not a problem. It's still useful.

[02:05:20] Lex Fridman So do you think we are the only intelligent life out there in the universe?

[02:05:24] Jeff Hawkins I would say that intelligent life has existed and will exist elsewhere in the universe. There's a question about contemporaneous intelligent life, which is hard to even answer when we think about relativity and the nature of spacetime: I can't say exactly what time it is someplace else in the universe. But I do worry a lot about the filter idea, which is that perhaps intelligent species don't last very long. We haven't been around very long, and as a technological species we've been around for almost nothing, what, 200 years, something like that? We don't have any good data points on whether it's likely that we'll survive or not. So, do I think there has been intelligent life elsewhere in the universe? Almost certainly, in the past, and in the future, yes. Does it survive for a long time? I don't know. This is another reason I'm excited about our work, and by our work I mean the general world of AI: I think we can build intelligent machines that outlast us.

[02:06:31] Jeff Hawkins You know, they don't have to be tied to Earth. I'm not saying we're recreating ourselves; I'm just saying, and this might be a good point to end on, if I ask myself, what's special about our species? We're not particularly interesting physically: we don't fly, we're not good swimmers, we're not very fast, we're not very strong. It's our brain; that's the only thing. And we are the only species on this planet that has built a model of the world that extends beyond what we can actually sense. We're the only ones who know about the far side of the moon, and about other universes, other galaxies, other stars, and about what happens in the atom. That knowledge doesn't exist anywhere else; it exists only in our heads. Cats don't do it, dogs don't do it, monkeys don't do it. That's what we've created that's unique. It's not our genes, it's knowledge. And if you ask me, what is the legacy of humanity, what should our legacy be? It should be knowledge. We should preserve our knowledge in a way that it can exist beyond us.

[02:07:30] Jeff Hawkins And I think the best way of doing that, in fact the only way to do it, is for it to go along with intelligent machines that understand that knowledge. It's a very broad idea, but we should be thinking about it; I call it estate planning for humanity. We should be thinking about what we want to leave behind when, as a species, we're no longer here. And that will happen, sooner or later; it's going to happen.

[02:07:52] Lex Fridman And understanding intelligence and creating intelligence gives us a better chance to prolong it.

[02:07:58] Jeff Hawkins It does give us a better chance at a long life, yes. It gives us a chance to live on other planets. But even beyond that, our solar system will disappear one day, given enough time. I don't know, I doubt humans will ever be able to travel to other star systems, but we could send intelligent machines to do that.

[02:08:17] Lex Fridman So you have an optimistic, hopeful view of our knowledge, of the echoes of human civilization living on through the intelligent systems we create?

[02:08:29] Jeff Hawkins Well, I think the intelligent systems we create are in some sense the vessel for bringing that knowledge beyond Earth, and for making it last beyond humans themselves.

[02:08:40] Lex Fridman How do you feel about that, that they won't be human, quote unquote?

[02:08:43] Jeff Hawkins Okay, so it's not human. What is human? Our species is changing all the time. Human today is not the same as human just 50 years ago. What is human? Do we care about our genetics? Why are those important? As I point out, our genetics are no more interesting than a bacterium's genetics, no more interesting than a monkey's genetics. What we have that's unique, and it's a fairly bright star, is our knowledge, what we've learned about the world. That is the rare thing; that's the thing we want to preserve. It's not our genes, it's knowledge.

[02:09:16] Lex Fridman That's a really good place to end. Thank you so much for talking. It was fun.