No, no, no. There’s a definite threshold. If you look at a system and all it does is stay constant over time, it started in some state and just stays that way, nothing exciting is going on there. There are plenty of systems that, for example, just repeat: what they do is repeat predictably over and over again. Or, you know, they make some elaborate nested pattern, but it’s a very predictable pattern. As you look at different kinds of systems, there’s this definite threshold that gets passed, and it’s related to this thing I call the ‘principle of computational equivalence’, which is basically the statement that, beyond some very low level of structural complexity, a system will typically be capable of a certain level of sophisticated computation, and all such systems are capable of that same level of sophisticated computation. One facet of that is the idea of universal computation: that everything can be emulated by a Turing machine, and can emulate a Turing machine. But there’s a little bit more to this principle of computational equivalence than the specific feature of universal computation. Basically, the idea is that it could have been the case that things worked differently.
If we’d been having this conversation a hundred years ago, people had mechanical calculators at that time: ones that did one kind of operation, and others that did another kind. We might have been having a discussion along the lines of, “Oh, look at all these different kinds of computers that exist. There will always be different kinds of computers that one needs.” Turns out that’s not true. It turns out all one needs is this one kind of thing, a universal computer, and that one kind of computer covers all possible forms of computation. And so then the question is, if you look at other kinds of systems, do they do computation at the same level as things like universal computers, or are there many different levels, many different incompatible kinds of computation that get done? And the thing that has emerged, both from general discoveries that have been made and specifically from a lot of stuff I’ve done, is that, no, anything that we can seriously imagine being made in our universe seems to have this one kind of computation, this one level of computation that it can do. There are things that are below that level of computation, and whose behavior is readily predictable, for example by a thing like a brain that is at this kind of uniform sophisticated level of computation. But once you reach that sophisticated level of computation, everything is kind of equal. And, in fact, if that weren’t the case, if for example there were a whole spectrum of different levels of computation, then we could expect that the top computer, so to speak, would be able to say, “Oh, you lower, lesser computers, you’re wasting your time. You don’t need to go through and do all those computations. I, the top computer, can immediately tell you what’s going to happen: the end result of everything you’re doing is going to be this.” It could be that things work that way, but it isn’t, in fact, the case.
Instead, what seems to be the case is that there’s this one, kind of, uniform level of computation, and in a sense it’s that uniformity of level of computation that has a lot of consequences that we’re very familiar with. For example, if nature mostly consisted of things whose level of computational sophistication was lower than the computational sophistication of our brains, we would readily be able to work out what was going to happen in the natural world, sort of, all of the time. And when we looked at some complicated weather pattern or something, we would immediately say, “Oh, no, we’re a smarter computer; we can just figure out what’s going to happen here. We don’t need to let the computation of the weather take its course.” What I think happens is that it’s this sort of equality of computation that leads to a lot of things that we know are true. For example, that’s why it seems to us that the weather has a mind of its own. The weather almost seems to be acting with free will; we can’t predict it. If a system is readily predictable by us, then it will not seem to be, kind of, free in its will. It will not seem to be free in the actions it takes. It will seem to be just something that is following some definite rules, like a 1950s sci-fi robot type thing.
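As an editorial aside, the kinds of behavior described here (constant, repetitive, nested, and then past-the-threshold complex) can be illustrated with elementary cellular automata. The sketch below is not from the conversation; it is a minimal Python illustration using Wolfram’s standard rule numbering, contrasting rule 250, which stays predictably repetitive, with rule 30, which from an equally simple rule produces behavior too intricate to shortcut.

```python
def step(cells, rule):
    """Apply one step of an elementary cellular automaton.

    `rule` is the Wolfram rule number (0-255). Each cell's next value
    is the bit of `rule` indexed by its 3-cell neighborhood (with
    wraparound at the edges).
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Evolve from a single black cell; return the list of rows."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# Rule 250 makes a simple, predictable expanding checkerboard;
# rule 30 crosses the threshold: its pattern is irregular enough
# that, in practice, you find out what it does by running it.
for rule in (250, 30):
    print(f"Rule {rule}:")
    for row in run(rule, steps=8):
        print("".join("#" if c else "." for c in row))
```

Printing the rows makes the contrast visible at a glance: rule 250’s triangle of alternating cells versus rule 30’s jagged, aperiodic growth.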
This whole area of what purpose is, and how we identify it, is, I think, in the end a very critical thing to discuss in terms of the fundamentals of AI. One of the things people ask is, “OK, we’ve got AI, we’ve got increasing automation of lots of kinds of things. Where will that end?” And I think one of the key places it will end is that purpose is not something that is available to be automated, so to speak. It doesn’t make sense to think about automating purpose. The reason is the same one I’ve been giving: just as you can’t look at these different things and say, “That’s a purpose; that’s not a purpose,” purpose is the kind of thing that is, in some sense, tied to the bearer of that purpose, in this case humans, for example.
When I read your writings, or when I talk to you, you frequently say this thing: that people keep thinking there’s something special about them, that they keep coming up with things a machine can’t do, that they don’t want to grant the machine intelligence because… you come across as being really down on people. I would almost reverse it and say, surely there isn’t some kind of equivalence between a hurricane, an iPhone, and a human being. Or is there? And if there isn’t, what is special about people?
What’s special about people is all of our detailed history.
That’s just different from other things. The iPhone has a detailed history too, and so does the hurricane. That isn’t special, that’s just unique.
Well, what’s the difference between special and unique? It’s kind of ironic because, as you know, I’m very much a person who’s interested in people.
That’s what I’m curious about: why is that? Because you seem to take this perverse pride in saying, “Oh, people used to think computers could never do this, and now they do it. And then they said they could never do that, and they do it.” I just kind of wonder; I try to reconcile that with the other part of you, which is clearly a humanist. It’s almost bifurcated, like half of your brain has intellectually constructed this model of moral equivalence between hurricanes and people, and the other half of you kind of doesn’t believe it.
You know, one of the things about doing science is that, if you try to do it well, you kind of have to go where the science leads. I didn’t come into this believing that that would be the conclusion. In fact, I didn’t expect that to be the conclusion. I expected that I would be able to find some sort of magnificent bright line. In fact, I expected that these simple cellular automata I studied would be too simple for physics, too simple for brains, and so on. And it took me many years, actually, to come to terms with the idea that that wasn’t true. It was a big surprise to me. Insofar as I might feel good about my efforts in science, it’s that I have tried to follow what the science actually says, rather than what my personal prejudices might be. It is certainly true that, personally, I find people interesting; I’m a big people enthusiast, so to speak.
Now, in fact, what I think is that the way things work, in terms of the nature of computational intelligence in AI, is actually not anti-people in the end. In fact, in some sense it’s more pro-people than you might think. Because what I’m really saying is that computational intelligence is sort of generic. It’s not like we have the AI as a competitor, as in, “There’s not just going to be one intelligence around, there are going to be two.” No, that’s not the way it is. There’s an infinite number of these intelligences around. And so, in a sense, we can think of the non-human intelligence as almost a generic mirror that we imprint in some way with the particulars of our intelligence. In other words, eventually we will be able to make the universe, through computation and so on, do our bidding more and more. So then the question is, “What is that bidding?” And what we’re seeing here is, if anything, an amplification of the role of the human condition rather than its diminution, so to speak. In other words, we can imprint human will on lots and lots of kinds of things. Is that human will somehow special? Well, it’s certainly special to us. If we go into a competition over who’s more purposeful than whom, that degenerates into a meaningless question of definition. As I say, I think we will certainly seem to ourselves to be the most purposeful, because we’re the only things about which we can actually tell that whole story about purpose. In other words, I actually don’t think it’s an interesting question. It maybe was not intended this way, but my own personal trajectory in these things is that I’ve tried to follow the science to where the science leads. I’ve also tried, to some extent, to follow the technology to where the technology leads. You know, I’m a big enthusiast of personal analytics and of storing all kinds of data about myself and about all kinds of things that I do, and so on.
I certainly hope and expect one day to be able, increasingly, to make a bot of myself, so to speak. My staff claims, maybe flattering me, that my attempt to make the SW email responder will be one of the last things that gets successfully turned into a purely automated system, but we will see.
But the question is, when one is looking at all this data about oneself, and turning what one might think of as a purely human existence, so to speak, into something that’s full of gigabytes of data and so on, is that a dehumanizing act? I don’t think so. One of the things one learns from that is that, in a sense, it makes the human more important rather than less. Because there are all these little quirks of, “What was the precise way I was typing keystrokes on this day as opposed to that day?” One might say, “Who cares?” But when one actually has that data, there’s a way in which one can understand more about those detailed human quirks, and recognize more about them, in a way that one couldn’t without that data, if one was just, sort of, acting like an ordinary human, so to speak.
So, presumably you want your email responder to live on after you. People will still be able to email you in a hundred years, or a thousand years and get a real Stephen Wolfram reply?
I know that you have this absolute lack of patience anytime somebody seems to talk about something that tries to look at these issues in any way other than just really scientifically.
I always think of myself as a very patient person, but I don’t know, that may look different from the outside.
But, I will say, you do believe consciousness is a physical phenomenon, like, it exists. Correct?
What on Earth does that mean?
So, alright. Fair enough. See, that’s what I mean exactly.
Let me ask you a question along the same line. Does computation exist?
What on Earth does the word ‘exist’ mean to you?
Is that what it is? It’s not the ‘consciousness’ you were objecting to, it’s the word ‘exist’.
I guess I am; I’m wondering what you mean by the word ‘exist’.
OK, I will instead just rephrase the question. You could put a heat sensor on a computer, and you could program it so that if you hold a match to the heat sensor, the computer plays an audio file that screams. And yet we know that with people, if you burn your finger, it’s something different. You experience it; you have a first-person experience of a burned finger. The computer can only sense it; you feel it.
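As an editorial aside, the thought experiment can be sketched in a few lines, which is part of its force: the “scream” is nothing but a threshold check and a playback call. This is a hypothetical illustration; `read_heat_sensor`, `play_audio`, and the threshold value are stand-ins, not any real device’s API.

```python
PAIN_THRESHOLD_C = 60.0  # an arbitrary, assumed temperature threshold

def read_heat_sensor() -> float:
    """Stand-in for a real sensor driver; returns degrees Celsius."""
    return 25.0  # room temperature, for illustration

def play_audio(path: str) -> None:
    """Stand-in for a real audio API; here it just reports playback."""
    print(f"playing {path}")

def react(temperature_c: float) -> bool:
    """'Scream' (play the file) when sensed heat crosses the threshold."""
    if temperature_c >= PAIN_THRESHOLD_C:
        play_audio("scream.wav")
        return True
    return False

react(read_heat_sensor())  # below threshold: nothing happens
react(300.0)               # a lit match: the computer "screams"
```

The question in the conversation is whether anything beyond this comparison-and-response is going on when a person burns a finger.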