Voices in AI – Episode 45: A Conversation with Stephen Wolfram

No. There’s a question of how many neurons, how much accuracy, what’s the cycle time, etc. But we’re probably coming fairly close, and we will, in coming years, get decently close to being able to emulate, with digital electronics, the important parts of what happens in brains. You might always argue, “Oh, it’s the microtubules on every neuron that are really carrying the information.” Maybe that’s true; I doubt it. And that’s many orders of magnitude further than what we can readily get to with digital electronics over the next few years.

But either you can model a brain, and know what I’m going to say next and know that I felt pain, or you can’t, and you can preserve some semblance of free will.

No, no, no. Both things are true. You can absolutely have free will even if I can model your brain at the level of knowing what you will say next. If I do the same level of computation that your brain is doing, then I can work out what you will say next. But the fact is, to do that, I effectively have to have a brain that’s as powerful as your brain, and I have to be just following along with your brain. Now, there is a detail here, which is this question of levels of modeling and so on: how much do you have to capture, and do you have to go all the way down to the atoms, or is it sufficient to just say, “Does this neuron fire or not?” And yes, you’re right, that’s a sort of footnote to this whole thing. When I say, “How much free will?” Well, free enough that it takes a billion logical machine operations to work out whether you will say true or false. If it takes a billion operations to tell whether you are going to say true or false, should one say that you are behaving as if you have free will, even though, were you to do those billion operations, you could deterministically tell whether you’re going to say true or false? As a practical matter, in interacting with you, I would say you’re behaving as if you have free will, because I can’t immediately do those billion operations to figure out the answer. In a future world where we are capable of doing more computation more efficiently, for example, we may eventually have ways to do computation that are much more efficient than brains. And, at that point, we have our simulated brains, and we have our top-of-the-line computers made at an atomic scale or whatever else. And, yes, it may very well be the case that, as a practical matter, the atomic-scale computers out-compute simulated brains by factors of trillions.
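
To make that point concrete, here is a minimal sketch in Python (my illustration, not code from the interview) using Rule 30, the simple cellular automaton Wolfram is well known for studying: the rule is completely deterministic, yet the only known way to find out, say, the center cell after many steps is to run every one of those steps, so any predictor ends up doing roughly as much computation as the system it predicts. The function names are just for this example.

```python
# A minimal sketch (illustration only) of prediction requiring the computation itself.
# Rule 30 is a deterministic cellular automaton, yet the only known way to learn the
# center cell after N steps is to run all N steps: the predictor does essentially as
# much work as the system it is predicting.

def rule30_step(cells):
    """Apply one Rule 30 update to a tuple of 0/1 cells, wrapping at the edges."""
    n = len(cells)
    return tuple(
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # left XOR (center OR right)
        for i in range(n)
    )

def center_cell_after(steps, width=101):
    """'Predict' the center cell after `steps` updates, but only by running every step."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells[width // 2]

if __name__ == "__main__":
    # Deterministic, but with no known shortcut past doing the work.
    print(center_cell_after(10_000))
```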

I’ll only ask one more question along this line, because I must be somewhat obtuse. I’m not a very good chess player. If I download a program on my iPad, I play at level four out of ten, or something. So, say I flip it up to level five. I don’t know what move the computer is going to make next, because it’s going to beat me. I don’t have a clue what it’s going to play next; that’s the whole point. And yet, I never think, “Oh, because I don’t know, it therefore must have free will.”

That’s true. You probably don’t think that. Depends on what the computer is doing. There’s enough of a background of chess playing that that’s not an immediate question for you. If the computer was having a conversation with you, if suddenly, in 2017, the computer was able to have a sort of Turing Test complete conversation with you, I think you would be less certain. I think that, again, there is a progression of — an awful lot of what people believe and how people feel about different things, does the computer have consciousness, does it blah blah? An awful lot of that, I think, ends up coming about because of the historical thread of development that leads to a particular thing.

In other words, imagine — it’s an interesting exercise — imagine that you took a laptop of today back to Pythagoras or something. What on earth would he think of it? What would he think it was? How would he describe it? I wondered about this at some point. My conclusion is he’d start talking about, “What is this thing? It’s like disembodied human souls.” Then you explain, “Well, no, it’s not really a disembodied human soul.” He says, “Well, where did all these things that are on the computer come from?” “Well, they were actually put there by programmers, but then the computer does more stuff.” And it gets very complicated. I think it’s an interesting thought experiment to imagine at a different time in history. Pythagoras is a particularly interesting case because of his thinking about souls and his thinking about mathematics. But the exercise is to imagine what somebody at a different time in history would think the technology of today actually was. And that helps us to understand the extent to which we are prisoners of our particular time in history. Take the thought experiment of: what if we have a computer that can, in some sense, predict what our brains do, a trillion times faster than our brains actually do it? How will that affect our view of the world? My guess is that if that happens, and it presumably will in some sense, we will by that time have long outsourced much of our thinking to machines that just do it faster than we do. Just like we could decide that we’re going to walk everywhere we want to go, but actually we outsource much of our transportation to cars and airplanes and things like that, which do it much faster than we do. You could say, “Well, you’re outsourcing your humanity by driving in a car.” We don’t think that anymore, because of the particular thread of history by which we ended up with cars. Similarly, you might say, “Oh my gosh, you’re outsourcing your humanity by having a computer think for you.” In fact, that argument comes up when people use the tools we’ve built to do their homework or whatever else. But, in fact, as a practical matter, people will increasingly outsource their thinking processes to machines.

And then the question is, and that sort of relates to what I think you are going to ask about, should humans be afraid of AIs, and so on. That sort of relates to, well, where does that leave us humans when all these things, including the things that you still seem to believe are unique and special to humans but I’m sure are not, when all of those things have been long overtaken by machines, where does that leave us? I think the answer is that you can have a computer sitting on your desk, doing the fanciest computation you can imagine. And it’s working out the evolution of Turing machine number blah blah blah, and it’s doing it for a year. Why is it doing that? Well, it doesn’t really have a story about why it’s doing it. It can’t explain its purpose, because if it could explain it, it would be explaining it in terms of some kind of history, in terms of some kind of past culture of the computer, so to speak. The way I see it is, computers on their own simply don’t have this notion of purpose. In a sense, one can imagine that the weather has a purpose that it has for itself. But this notion of purpose that is connected to what we humans do, that is a specifically human kind of thing, and that’s something that nobody gets to automate. It doesn’t mean anything to automate that. It doesn’t mean anything to say, “Let’s just invent a new purpose.” We could pick a random purpose. We could have something where we say, “OK, there are a bunch of machines and they all have random purposes.” If you look at different humans, in some sense there’s a certain degree of randomness and there are different purposes. Not all humans have the same purposes. Not all humans believe the same things, have the same goals, etc. But if you say, “Is there something intrinsic about the purpose for the machines?” I don’t think that question really means anything. It ultimately reflects back on the thing I keep saying about the thread of history that leads humans to have, and think about, purposes in the ways that they do.

But if that AI is alive, and you began by taking my question about what life is, if you get to a point where you say, “It’s alive,” then we do know that, for living things, their first purpose is to survive. So, presumably, the AI would want to survive; its second purpose is to reproduce; its third purpose is to grow. Those all naturally flow out of the quintessence of what it means to be alive. “Well, what does it mean for me to be alive?” It means for me to have a power source. “OK, I need a power source. OK, I need mobility.” And so it creates all of those just from the simple fact of being alive.

I don’t think so. I think that you’re projecting that onto what you define as being alive. I mean, it is correct that there is, in a sense, one zeroth-level purpose, which is, you have to exist if you want to have any purpose at all. If you don’t exist, then everything is off the table. But the question of whether a machine, a program, or whatever else has a desire, in some sense, to exist, that’s a complicated question. I mean, it’s like saying, “Are there going to be suicidal programs?” Of course. There are right now. Many programs, their purpose is to finish, terminate, and disappear. And that’s much rarer, perhaps fortunately, for humans.

So, what is the net of all of this to you then? You hear certain luminaries in the world say we should be afraid of these systems, you hear dystopian views about the world of the future. You’ve talked about a lot of things that are possible and how you think everything operates, but what do you think the future is going to be like, in 10 years, 20, 50, 100?

What we will see is an increasing mirror on the human condition, so to speak. That is, what we are building are things that essentially amplify any aspect of the human condition, and then that, sort of, reflects back on us. What do we want? What are the goals that we want to have achieved? It is a complicated thing, because certainly AIs will, in some sense, run many aspects of the world. For many kinds of systems, there’s no point in having people run them; they’re going to be automated in some way or another. Saying it’s an AI is really just a fancy way of saying it’s going to be automated. Another question is, well, what are the overall principles that those automated systems should follow? For example, one principle that we believe is important right now is the ‘be nice to humans’ principle. That seems like a good one; given that we’re in charge right now, better to set things up so that it’s like, “Be nice to humans.” Even defining what it means to be nice to humans is really complicated. I’ve been much involved in trying to use the Wolfram Language as a way of describing lots of computational things and an increasing number of things about the world. I also want it to be able to describe things like legal contracts and, sort of, desires that people have. Part of the purpose of that is to provide a language, understandable both to humans and to machines, that can say what it is we want to have happen, globally, with AIs. What principles, what general ethical principles and philosophical principles, should AIs operate under? We had Asimov’s Laws of Robotics, which are a very simple version of that. I think what we’re going to realize is, we need to define a Constitution for the AIs. And there won’t be just one, because there isn’t just one set of people; different people want different kinds of things. And we get thrown into all kinds of political-philosophy issues about, should you effectively have an infinite number of countries in the world, each with their own AI constitution? How should that work?

One of the fun things I was thinking about recently is: in current democracies, one just has people vote on things. It’s like a multiple-choice answer. One could imagine a situation, and I take this mostly as a thought experiment because there are all kinds of practical issues with it, in a world where we’re not just natural-language literate but also computer-language literate, and where we have languages, like the Wolfram Language, which can actually represent real things in the world, in which one would not just vote “I want A, B, or C,” but effectively submit a program that represents what one wants to see happen in the world. And then the election consists of taking X million programs and saying, “OK, given these X million programs, let’s apply our AI Constitution to figure out how we want things to happen in the world.” Of course, you’re thrown into the precise issues of the moral philosophers and so on, of what you then want to have happen: whether you want the average happiness of the world to be higher, or whether you want the minimum happiness to be at least something, or whatever else. There will be increasing pressure on what the law-like things, which are really going to be, effectively, the programs for the AIs, should look like. What aspects of the human condition and human preferences should they reflect? How will that work across however many billions of people there are in the world? How does that work when, for example, a lot of the thinking in the world is not done in brains but is done in some more digital form? How does it work when there is no longer… Right now, the notion of a single person is a very clear notion. That won’t be such a clear notion when more of the thinking is done in digital form. There’s a lot to say about this.
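
As a rough illustration of that last idea, and purely a sketch of mine rather than anything described in the conversation, here is a toy version in Python: each “ballot” is a small program scoring how happy its author would be with each candidate outcome, and the “constitution” is just the rule used to aggregate those scores. The outcomes, scores, and function names are all hypothetical; the point is only that choosing the aggregation rule (average happiness versus minimum happiness) can change which outcome wins.

```python
# A toy sketch of the "vote with a program" thought experiment.
# Everything here is hypothetical: each ballot is a small program that scores
# how happy its author would be with a candidate outcome, and the
# "constitution" is simply the rule used to aggregate those scores.

from statistics import mean

# Hypothetical outcomes an automated system could enact.
outcomes = ["plan_a", "plan_b"]

# Each ballot maps an outcome to a happiness score in [0, 1].
ballots = [
    lambda o: {"plan_a": 0.9, "plan_b": 0.5}[o],
    lambda o: {"plan_a": 0.9, "plan_b": 0.5}[o],
    lambda o: {"plan_a": 0.0, "plan_b": 0.5}[o],  # one voter is badly hurt by plan_a
]

def elect(ballots, outcomes, constitution):
    """Return the outcome that the given aggregation rule scores highest."""
    return max(outcomes, key=lambda o: constitution([ballot(o) for ballot in ballots]))

# The two moral-philosophy choices mentioned above give different winners:
print(elect(ballots, outcomes, mean))  # maximize average happiness -> plan_a
print(elect(ballots, outcomes, min))   # maximize the minimum happiness -> plan_b
```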

That is probably a great place to leave it. I want to thank you, Stephen. Needless to say, “mind-expanding” would be the most humble way to describe it. Thank you for taking the time to chat with us today.

Sure. Happy to.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
