Voices in AI – Episode 45: A Conversation with Stephen Wolfram

Well, what do you mean by ‘can’?

Well, ‘can’ as in, I mean, surely we can coerce, but, I mean, ethically ‘can’.

I don’t know the answer to that. Ethics is defined by the way people feel about things. In other words, there is no absolute ethics.

Well, OK. Fair enough. I’ll rephrase the question. I assume your ethics preclude you from coercing other entities into doing your bidding. At what point do you decide to stop programming computers to do your bidding?

And at what point do I let them do what they want, so to speak?

Right.

When do I feel that there is a moral need to let my computer do something just because? Well, let me give you an example. I’ve often had computers do complicated searches for things that take months of CPU time. How do I feel about cutting the thing off moments before it might have finished? Well, I usually don’t feel like I want to do that. Now, do I not want to do that purely because I want to get the result? Or do I feel some kind of feeling, “Oh my gosh, the computer has done so much work. I don’t want to just cut it off”? I’m not sure, actually.

Do you still say thank you to the automatic ticket thing when you leave the parking garage?

Yes, to my children’s great amusement. I have made a principle of doing that for a long time.

Stephen, I don’t know how to say this, but I think maybe you’ve been surrounded by computers so much that you kind of have Stockholm Syndrome and identify with them.

More to the point, you might say I’ve spent so much time thinking about computation that maybe I’ve become computation myself as a result. Well, in a certain sense, yes, absolutely, that’s happened to me, in the following sense. We think about things, and how do we form our thoughts? Well, some philosophers think that we use language to form our thoughts. Some think thoughts are somewhat independent of language. One thing I can say for sure: I’ve spent some large part of my life doing computer language design, building the Wolfram Language system and so on, and absolutely, I think in patterns that are determined by that language. That is, if I try to solve a problem, I am, both consciously and subconsciously, trying to structure that problem in such a way that I can express it in that language, and so that I can use the structure of that language as a way to help me understand the problem.

And so absolutely it’s the case that, as a result of basically learning to speak computer, as a result of the fact that I formulate my thoughts in no small way using the Wolfram Language, this computational language, I probably think about things in a different way than I would if I had not been exposed to computers. Undoubtedly, that kind of structuring of my thinking is something that affects me, probably more than I know, so to speak.

But do I think about people, for example, the way I think about computational systems? Actually, most of my thinking about people is probably based on gut instinct and heuristics. I think the main thing I might have learned from my study of computational things is that there aren’t simple principles when it comes to looking at the overall behavior of something like people. If you dig down and ask, “How do the neurons work?”, we may be able to answer that. But the very fact that this phenomenon of computational irreducibility happens is almost a denial of the possibility that there is going to be a simple overall theory of, for example, people’s actions, or certain kinds of things that happen in the world, so to speak.

People used to think that when we applied science to things, it would make them very cut and dried. I think computational irreducibility shows that that’s just not true: there can be an underlying science, one can understand how the components work and so on, but it may not be that the overall behavior is cut and dried. It’s not like the kind of 1950s science-fiction robots where the thing would start having smoke come out of its ears if it detected some logical inconsistency in the world. That simple view of what can happen in computational systems is just not right. Probably, if there’s one thing that’s come out of my science, in terms of my view of people at that level, it’s that, no, I doubt that I’m really going to be able to say, “OK, if this then that”, you know, kind of apply very simple rules to the way that people work.
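The idea Wolfram is invoking here is computational irreducibility: for many simple programs there is no known shortcut for predicting what they will do other than running every step. His canonical example is the Rule 30 cellular automaton. The following minimal Python sketch is purely illustrative (it is not from the interview, and helper names like center_cell_after are invented for this example):

```python
# Rule 30 elementary cellular automaton: Wolfram's standard example of
# computational irreducibility. To learn the center cell's value after n
# steps, no shortcut formula is known; you have to simulate all n steps.

RULE = 30  # the rule number's binary digits encode the output for each
           # of the 8 possible 3-cell neighborhoods

def step(cells):
    """Advance one generation; cells is a tuple of 0/1 values (zero boundary)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right  # 0..7
        nxt.append((RULE >> neighborhood) & 1)  # look up output bit in RULE
    return tuple(nxt)

def center_cell_after(steps):
    """Value of the center cell after `steps` generations, starting from a single 1."""
    width = 2 * steps + 1  # wide enough that the zero boundary never matters
    cells = tuple(1 if i == steps else 0 for i in range(width))
    for _ in range(steps):  # no known closed form: simulate every step
        cells = step(cells)
    return cells[steps]

if __name__ == "__main__":
    # The center column of Rule 30 looks effectively random, even though
    # the underlying update rule is trivially simple.
    print("".join(str(center_cell_after(t)) for t in range(20)))
```

The update rule fits in one line, yet the only known way to find the center cell’s value at step n is to do roughly n² cell updates. That practical impossibility of shortcutting the computation is exactly what the conversation turns to next, in the discussion of free will.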

But hold on a second. I thought the whole way you got to something that looked like free will was: “No, there isn’t, but the number of calculations you would have to do to predict the action is so many that you can’t do it, so it’s effectively free will, but it isn’t really.” Do you still think that?

That’s correct. Absolutely, that’s what I think.

But the same would apply to people.

Absolutely.

With a sufficiently large computer you would be able to…

Yes, but the whole point is, as a practical matter in leading one’s life, one isn’t doing that. That’s the whole point.

But to apply that back to your example of Byron’s brain feeling pain, couldn’t that be the same sort of thing? It’s like, “Well, yeah, maybe that’s just calculation, but the amount of calculation that would have to happen for a computer to feel pain is just not practically achievable.”
