AI, Man & God | Prof. John Lennox

Transcript of an Interview Between John Anderson and Professor John Lennox, Emeritus Fellow and Professor of Mathematics at Oxford University

Wrighter
29 min read · Nov 13, 2023

As a keen learner about all things Industry 4.0 related, I wanted to do my part in helping raise awareness of the ethical aspects. So, I created a word-for-word transcript of this fascinating interview with Professor John Lennox, Emeritus Fellow and Professor of Mathematics at the University of Oxford.

It took a while to eliminate all the errors in the automated transcript. But it is my hope that it will be useful to others interested in learning about the relationship between AI, God and human beings.

The source video is here:

Transcript

Introduction: This is, perhaps, one of the scariest aspects of it. What we’re talking about here is facial recognition by closed-circuit television. Well, it starts with facial recognition, but we’ve now got to the stage where, in China in particular, they can recognize you from the back, by your gait, by all kinds of things.

John Anderson: It’s an extraordinary privilege for me to be in Oxford and able to talk personally to Professor John Lennox, Emeritus Fellow and Professor of Mathematics at the University of Oxford, and for years before that a professor of mathematics at the University of Wales in Cardiff. He’s lectured extensively all over the world. He’s written widely. Interestingly, he’s spent a lot of time in Russia and Ukraine after the collapse of Communism, and he’s deeply aggrieved to see what is happening there, and at the idea that young men on both sides, whom he and others have taught and mentored, may now be fighting one another into the dust, in these dangerous times in which we live. But amongst his many writings, he’s gifted us a very useful book — he tells me he’s already updating it — on artificial intelligence and the future of humanity, called ‘2084’, which says a lot in the sense that we all know about ‘1984’. I think you’re telling us that there are some troubling things coming up. John, thank you so much for your time.

Professor John Lennox: It’s my pleasure to be with you.

John Anderson: Can we begin with a phrase we’ve heard a lot over the past two years, during the COVID pandemic, but also with climate change? We hear it a lot in Australia, and it seems internationally too: ‘trust the science’. It strikes me that in our allegedly secular age, trust and faith are still seen as pretty important. We haven’t walked away from them. Do you think those who are accused of not trusting the science are frequently seen as somehow rationally and even morally deficient? In an age of crisis, is science becoming a new “saviour”?

Professor John Lennox: Well, trusting the sciences is fine if it’s kept to the things at which science is competent. But unfortunately, over the past few years, there has developed a trust in science that we now call scientism, where science is regarded essentially as the only way to truth, the only option for a rational thinking person, and everything else is fairy stories and all the rest of it. And I take great exception to that because it’s plainly false. It’s false logically because the very statement ‘science is the only way to truth’ is not a statement of science, and so if it’s true, it’s false. So, it’s logically incoherent to start with. But going a little bit more into it, it has had huge influence because of people like the late Stephen Hawking, for example, who wrote in one of his books that philosophy is dead and it seems now as if scientists are holding the torch of truth, and that’s scientism. The irony of it is, of course, that he wrote it in a book that is all about philosophy of science. And it’s pretty clear that Hawking, brilliant as he was as a mathematical physicist, really is a classic exemplar of what Albert Einstein once said: ‘The scientist is a poor philosopher.’ And my response would very much be couched in the kind of attitude that Sir Peter Medawar, a Nobel Prize winner here in Oxford, once expressed. He said, ‘It’s so very easy to see that science, meaning the natural sciences, is limited in that it cannot answer the simple questions of a child: Where do I come from? Where am I going to? And what is the meaning of life?’ And it seems to me immensely important that we recover that. And what Medawar went on to say is we need literature, we need philosophy — and we need theology as well, in my view — in order to answer the bigger questions. Now, the late Lord Sacks, a brilliant philosopher, who was the Chief Rabbi of the UK and the Commonwealth and so on.

John Anderson: And one of the guests on this series.

Professor John Lennox: And one of the guests on this series? Well, I’m delighted to hear it. He once wrote a very pithy statement that I found very helpful. He said, “Science takes things apart to understand how they work,” and I suppose to understand what they’re made of, and “religion puts them together to see what they mean.” And I think that encapsulates the danger in which we’re standing. Science has spawned technology, and we’ve become addicted to technology, particularly the more advanced forms of it like AI and, as I discuss in my book, virtual reality, the metaverse, all this kind of stuff. We’ve become addicted to it, but we’ve lost a sense of real meaning, and in particular, we’ve lost our moral compass. Einstein, to quote him again, made the point long ago. He said, “You can speak of the ethical foundations of science, but you cannot speak of the scientific foundations of ethics.” Science doesn’t tell you what you ought to do. It will tell you, of course, that if you put strychnine in your granny’s tea, it will give her a very hard time; in fact, it’s a killer. But it can’t tell you whether you ought to do it or not to get your hands on her property. And so, we’re left in a scientistic moral vacuum. And therefore, I feel very strongly that as a scientist of sorts, I need to challenge this. Science is marvelous, but it’s limited in the questions it can handle. And let’s realize, it does not deal with the most important questions of life, and they’re the questions of ‘Who am I?’, ‘What does life mean?’ and ‘Where do we get a moral compass?’

John Anderson: Before we come to artificial intelligence then, I’d just like to explore what you’ve been talking about a little bit with reference to Britain. I love history. I’ve always massively admired Britain, and I know Britain seems to be engaged in self-flagellation on just about every issue you can think of at the moment, the decrying of its own cultural roots. But to my way of thinking, Britain’s been a force for unbelievable good in the world in many ways. I really do believe that. I mean, as an Australian, I would not live in a free country if it hadn’t been for the Prime Minister of this country standing up when no one else did in 1939, just one minor example. But I come here now, and I wonder just what the British people believe in. This is a country so massively shaped by Christian faith, by arguments, sometimes very ugly, over a long period of time, but nonetheless profoundly shaped. The Times reported just a couple of years ago that we’ve reached the point where 27% of Britons believe in God, with an additional 16% believing in a higher power. Among the British as a whole, 41% say they believe there is neither a God nor a higher power. Interestingly, among young people in the UK, the number who said they believe in God rose a little. Nonetheless, what you’ve got here is one of the most secular societies on Earth, which not so very long ago was one of the more Christian. What’s responsible? Is it tied to a sort of false faith in science, amongst other things, or is it just that it’s too hard? Or is it that the wars convinced people, when they saw two Christian nations fighting, with people praying to the same God for victory? How did it morph so badly into a state of unbelief, do you think, in the country in which you’ve lived your life?

Professor John Lennox: I find this a complex and difficult issue because I see different strands in it. If you pick up the science side, you go back to Isaac Newton, who gave us what’s called a clockwork picture of the universe: the universe running on fixed laws that were, according to Newton, originally set in place by God, but a universe that essentially now ran on its own. And you can see that in the 18th century, people particularly favored what’s called deism, that is, there is a God, but He’s hands-off. He started it running, and now it runs, and it runs very well. And you can see how, with that in the collective psyche, particularly in the academy, it very rapidly led to the question, ‘Is God really necessary?’ Now, you add to that what was happening on the continent with the Enlightenment and the corrupt Church, professing Christianity but utterly corrupt, and the reaction against that, which was fuel to the fire of a rising secularism and atheism. And then you add what was happening around the time of Charles Darwin, where you had Huxley, who was an atheist, and who resented these clergymen, some of whom were actually very good natural philosophers. Wilberforce, actually, was a much brighter man than many people think, as Darwin pointed out. But Huxley, in the UK, wanted a church scientific; he wanted to turn the churches into temples to Sophia, the goddess of wisdom, that kind of idea. So, you’ve got all of that. And then you add the vitriolic anti-God sentiments, not just atheism but anti-God feeling, led for quite a long time by Richard Dawkins and other people. And that’s had huge influence on young people’s minds; it’s one of the reasons I entered the fray, actually. The media then come into this as well, making it even more complicated, because within the media the dominant view (and I think the BBC actually stated this at one time) is that they favor naturalism, the philosophy that Nature is all there is: there’s no outside, there’s no transcendence, there’s no God. So, you’ve got all of that. And against it, you have a group of people who are often cowed into letting their faith in God become private. This is the tragedy of secularism. And you add to that the cancel culture, the culture wars, all this kind of stuff, where I’ve got to affirm everything and everything’s equally valid. You’ve got relativism and postmodernism, and the idea that truth doesn’t really matter. Yet you never meet a postmodern businessperson who goes to a bank manager and says, ‘I’ve got USD 5,000 in the bank,’ and, when the bank manager says, ‘Well, actually, you owe the bank USD 10,000,’ replies, ‘Oh, that’s only your truth.’ No, that doesn’t work in the business world. But still, you’ve got this pressure of relativism. And so, you end up, as Michael Burke put it a few years ago, talking about faith in God in Britain, with the first generation that doesn’t have a shared worldview. Now, there’s still a Christian influence, as even atheists recognize, but we’ve gone a long way in rejecting and abandoning God. And then there’s the entertainment industry that will fill everybody’s vacuum with noise, and we entertain ourselves to death. So, your question is extremely complex, and it would need a more observant person than me to give you a full answer. It’s a huge mix of stuff, and any individual person may be affected by it in completely different ways.

John Anderson: The reason it’s important, I think, to set that up is that we now come to what I really wanted to hear your views on: artificial intelligence. Science is giving us extraordinary capabilities, but will we simply be seduced by them? Artificial intelligence is rapidly creating things that are marvelous, that we want to enjoy, that may satiate us, may dull us, while aspects of its emergence could be very dangerous. But before we start to explore that, for ordinary people in the street like me, who are not living with this stuff — well, I am living with this stuff, but don’t know where it might go — we need to define some terms. What is AI? What’s narrow AI, of the sort that we’re quite familiar with, limited intelligence but highly focused on narrow areas? What is artificial general intelligence, and where might that go? There’s a whole range of issues. Then there’s the whole issue of transhumanism. So, can we start very broadly: AI is what, John? How would you explain it to a layman? We’ve all heard the term.

Professor John Lennox: No, sure. Well, the first thing to realize is that the word ‘artificial’ in the phrase artificial intelligence is real. And that’s not due to me; it’s due to one of the pioneers of the subject, who happens to be a Christian. I will take a narrow AI system first, because it’s much easier to explain. A narrow AI system is a system involving a high-powered computer, a huge database, and an algorithm that does some picking and choosing, whose output is something that normally requires human intelligence to produce. That is, if you look at the output, you would normally say that it took an intelligent person to do that. So, let’s take an example that is very important these days in medicine, and that’s interpreting X-rays. Suppose we have a database of, let’s say, one million X-rays of lungs that are infected with various diseases, say related to COVID-19. They are labelled in the database by the world’s top experts. Then, they take an X-ray of your lungs or my lungs, and the algorithm compares the X-ray of your lungs with the million, very rapidly, and it produces an output which says, ‘John Anderson has got that disease.’ Now, at the moment, that kind of thing, which is being rolled out not only in radiology but all over the place, will generally give you a better result than your local hospital will, and that’s hugely important and hugely valuable. But the point is, the machine is not intelligent; it’s only doing what it’s programmed to do. The database is not intelligent; the intelligence is that of the people who designed the computer, who know about X-rays, and who know about medicine. But the output is what you would expect from an intelligent doctor. So, it’s in that sense artificial. It’s a system that is narrow in the sense that it only deals with one thing. And endless kinds of such systems are being rolled out around the world, and some of them, as you mentioned, are extremely beneficial. Narrow AI has been used in the development of vaccines, and the spin-off from that technology is enormous in drug development, and on and on it goes. I can give you dozens of examples, and they’re in my book. So, that’s where we start. Now, we are familiar with it, and it’s worth giving a second example, because most of us, voluntarily, are wearing a tracker. It’s called a smartphone. It knows where we are. It could even be recording what we’re saying, but what it does do, of which we’re all aware, is this: if we, for example, buy a book on Amazon, we very soon get little pop-ups that say, ‘People that bought that book are usually interested in this book.’ What’s happening there is that the AI system is creating a database of your preferences, your interests, your likes, your purchases, and is using that to compare with its vast database of available things for sale, so that it predicts what you might like. So, this is of huge commercial value, and it leads to something else which most of us don’t know about, and we can come to it later, but I’ll mention it now: it’s called surveillance capitalism. There’s a book on it, ‘The Age of Surveillance Capitalism’, by Shoshana Zuboff, an emerita professor at Harvard Business School, and it’s regarded as a very serious book because the point she’s making is that global corporations are using your data and, without your permission, are selling it off to third parties and making a lot of money out of it. And that raises deep privacy issues. So now, you’re straight into the ethics. So, that’s narrow AI.
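[Transcriber’s note: as someone who works with software, I found it helpful to sketch what Professor Lennox describes here. The toy nearest-neighbour matcher below is my own illustration, not anything from the interview or from a real diagnostic product; the feature vectors, labels, and distance measure are all invented, and real systems are far more sophisticated. It simply shows where the “intelligence” sits: in the expert labels and chosen features, while the matching step itself is mechanical.]

```python
import math

# Hypothetical database of (feature_vector, expert_label) pairs.
# In the example above, this stands in for ~1 million expert-labelled X-rays.
DATABASE = [
    ([0.9, 0.1, 0.3], "COVID-19 pneumonia"),
    ([0.2, 0.8, 0.5], "bacterial pneumonia"),
    ([0.1, 0.1, 0.1], "healthy"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diagnose(xray_features):
    """Return the expert label of the closest match in the database.

    The machine is not intelligent: it mechanically compares the new
    X-ray against labelled examples. The intelligence lives in the
    experts who labelled the data and designed the features.
    """
    best = min(DATABASE, key=lambda item: distance(item[0], xray_features))
    return best[1]

print(diagnose([0.85, 0.15, 0.35]))  # -> COVID-19 pneumonia
```

[The Amazon example has the same shape: swap the X-ray features for a profile of your purchases, and the expert labels for “people who bought this also bought…”.]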

John Anderson: Okay, so let’s stay on narrow AI and extend the road a little bit further down towards broader use. You’ve just talked about us being unaware of how we’re being surveilled. And it was right here in Oxford, I think, and it may have been you, in a talk that I heard, who made the point that what’s happening in China, using artificial intelligence to surveil people, is astonishing. But in many ways, all that information has been collected in the West as well; it’s just not collated in the same way.

Professor John Lennox: That’s correct. And this is, perhaps, one of the scariest aspects of it. What we’re talking about here is facial recognition by closed-circuit television. Well, it starts with facial recognition, but we’ve now got to the stage where, in China in particular, they can recognize you from the back, by your gait, by all kinds of things. And you can see the positive benefit: police want to arrest criminals, or thugs, or rowdies, even in a football crowd, and so, using facial recognition technology, they can pick a person out and arrest them. Well, okay. But while it can be used for good purposes in that sense, in keeping law and order, it can also become, particularly in an autocratic state, a common instrument of control. And here’s the huge dilemma which people try to solve: how much of your privacy are you prepared to sacrifice for security? There’s a tension between those two things. Now, in China, you mentioned (and you’re probably thinking about Xinjiang), you’ve got a Muslim minority of Uyghur people, and the surveillance level on them is unbelievable. Every few hundred meters down the street, they have to stop; they have to hand in their smartphones. The smartphones are loaded with all kinds of stuff by the government. Their houses have QR codes outside them, recording how many people live there, and all this kind of thing. And I don’t know how many — it’s way over a million, I believe — are being held in re-education centres as a result of what is being picked up by artificial intelligence systems. And the suspicion is that the culture is being destroyed and eradicated. That’s on the one hand, in one particular province. But elsewhere in China, we now have the social credit system, which apparently will be rolled out across the entire country. Say you and I were each given, to start with, let’s say 300 social credit points. And we’re being tracked. If we fail to put our trash can out at night, there’ll be marks against us. If we go to some dubious place or mix with someone whose political loyalties are suspect, we’ll get more negative points. On the other hand, if we pay our debts on time and go green, so to speak, and all this kind of thing, we will amass more credit points. And then, if we go negative, the penalties kick in. We’ll discover we can’t get into our favorite restaurant. We’ll discover we don’t get that promotion, or don’t even get that job we apply for. Or that we can’t travel, or that we can’t even have a credit card. And this is being rolled out, and the list of penalties and things that have actually been recorded is very serious. Now, what amazed me when I first came across this was the fact that many people welcome it. They think it’s wonderful. ‘Oh, I’ve got a thousand points, so how many have you got?’ And they don’t realize that the whole of life is becoming controlled, ostensibly in the interest of having a healthy society. So, talk about ‘1984.’ This is not futuristic speculation; this is already happening. George Orwell, you mentioned him; he wrote ‘1984’ and talked about Big Brother watching you, and the technology is now actually doing it. This is narrow AI. It’s not futuristic in any way; it’s what’s actually happening at the moment. And you mentioned briefly the fact that all this stuff exists in the West, except — and the point has been made forcibly — it’s not quite yet under one central authority and control, but it is coming. We have credit searches, we have all kinds of stuff that is beginning to creep in, in the US and in the UK, and I presume also in Australia. And we even have police forces here, I believe, who want the whole caboodle and want to be able to exert a much more serious level of control. And it is frightening, because what it does to human rights is, well…
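[Transcriber’s note: the sketch below is my own illustration of the scoring mechanics Professor Lennox describes, starting from 300 points with deductions and penalties. Every rule, point value, and penalty here is invented for illustration; it is not a description of any real social credit implementation.]

```python
STARTING_POINTS = 300

# Invented behaviour -> point adjustments, mirroring the examples above.
ADJUSTMENTS = {
    "missed_trash_collection": -10,
    "visited_dubious_place": -30,
    "mixed_with_suspect_person": -50,
    "paid_debts_on_time": 20,
    "went_green": 15,
}

# Invented penalty ladder: the deeper the negative balance, the more kick in.
PENALTIES = [
    "favourite restaurant refuses booking",
    "promotion or job application blocked",
    "travel banned",
    "credit card denied",
]

def score(behaviour_log):
    """Tally a citizen's balance from a log of observed behaviours."""
    return STARTING_POINTS + sum(ADJUSTMENTS.get(b, 0) for b in behaviour_log)

def penalties_for(points):
    """Return the penalties that apply once the balance goes negative."""
    if points >= 0:
        return []
    rungs = min(len(PENALTIES), -points // 100 + 1)
    return PENALTIES[:rungs]

log = ["missed_trash_collection"] * 5 + ["mixed_with_suspect_person"] * 6
balance = score(log)  # 300 - 50 - 300 = -50
print(balance, penalties_for(balance))
```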

John Anderson: So, it occurs to me that, you know, I love history, as I’ve mentioned. Authoritarian regimes have collapsed under their own weight. Typically, the people have risen up one way or another, and there’s been an overturning. But we’ve never had autocratic regimes with this surveillance capacity. There are, you know, an estimated 400 million CCTV cameras in China. That’s one for about every three people. I mean, it’s mind-boggling.

Professor John Lennox: Oh, it is mind-boggling. And even here in the UK, what I’m told is that you’re on a closed-circuit TV camera every five minutes when you’re moving around. So, it is very serious. And of course, the irony is, as I hinted at earlier, here we are with our smartphones that have got all these capacities, certainly at the audio level, and we’re voluntarily wearing them. So, we’re voluntarily ceding part of our autonomy, and our rights really, to these machines, when we don’t really know what is being done with all the information. So, we have a huge problem. And someone has said we’re sleepwalking into all of this, so that we’re captured by it, we’re imprisoned by it, and we wake up too late, because the central authority has got so much control that we cannot escape anymore.

John Anderson: So, let’s go back to where I started. Science is blessing us with incredible technology and capabilities; a lot of these things are fantastic. You’ve alluded to some of the useful ones. I mean, I love the way in which I can, in my car, say, ‘Hey Siri, call my wife.’ These things are just fantastic. But my question about what we now believe goes to the heart of who we think we are. What is our status? On what basis will we be alert enough to recognize we need to make tough decisions? And then, on rapid AI development, on what basis will we make the ethical decisions around how far this goes? I know it’s a complicated question, but there’s another element to it, because we haven’t even got to General Artificial Intelligence yet. We’re still talking, as I understand it, about Narrow Artificial Intelligence, just masses of it. Those surveillance cameras, and the people at their desks in Beijing collating the information and what have you. There might be a lot of information and a lot of capability, but those cameras can’t think of another task, you know, like how to go and bring my boss a cup of coffee. It’s still narrow.

Professor John Lennox: That’s absolutely right. And we haven’t yet got to Artificial General Intelligence. We’ve got to realize several things. First of all, the speed of technological development outpaces ethical understanding by a huge factor, an exponential factor. Secondly, some people are becoming acutely aware that they need to think about ethics. And some of the global players, to be fair, do think about this, because they find the whole development scary. Is it going to get out of control? And someone made a very interesting point. I think it was a mathematician who works in artificial intelligence, and she was referring to the Book of Genesis in the Bible. She said, ‘God created something, and it got out of control: us.’ We are now concerned that our creations may get out of control. And I suppose, in particular, one major concern is autonomous or self-guiding weapons. And that’s a huge ethical field. Here’s a man sitting in a trailer in the Nevada desert, and he’s controlling a drone in the Middle East. It fires a rocket and destroys a group of people. And of course, he just sees a puff of smoke on his screen, and that’s it, done. There’s a huge distance between the operator and the operation of that lethal mechanism. And we only need to go up one more level from that, to where these lethal flying bombs, so to speak, control themselves. We’ve got swarming drones, and we’ve got all kinds of stuff. Who’s going to police that? And of course, every country wants them, because they want to have a military advantage. So, we’re trying to police that and to get international agreement, which some people are trying to do now. I don’t think we should be too negative about this, and I’m cautious here, but we did manage, at least temporarily — who knows what’s going to happen now — to get nuclear weapons at least controlled and partly banned. So, some success. But with what’s happening in Ukraine at the moment, with Putin and so on, whether he might fire a tactical nuclear weapon, or whether such a weapon could be controlled autonomously and make its own decision. And then where do we go from there? These things are exercising people at a much lower level too, but it’s still the same question: how do you write an ethical program for self-driving cars?

John Anderson: Yeah, so that if there’s an accident that can’t be avoided.

Professor John Lennox: Yes, it’s the switched-tracks dilemma. For students of ethics, it’s very interesting to see how people respond. The switched-tracks dilemma is simply that you have a train hurtling down a track, and there’s a switch point where it can be directed down either the left-hand or the right-hand track. Down the left-hand track, there’s a crowd of children stranded in a bus on the line. On the right-hand track, there’s an old man sitting in his cart with a donkey. And you are holding the lever. Do you direct the train to hit the children or the old man? That kind of thing. But we’re faced with that all the time, and it’s hugely difficult, without going near AGI yet.
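[Transcriber’s note: the fragment below is my own illustration of why “writing an ethical program” is so hard. The moment you implement the choice, someone has to assign explicit numeric weights to outcomes; the weights here are arbitrary, and that arbitrariness is exactly the dilemma being discussed.]

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_casualties: int
    moral_weight: float  # a value judgment, not an engineering fact

def choose(outcomes):
    """Pick the outcome with the lowest weighted harm.

    The algorithm is trivial; the ethics are smuggled in through
    whoever sets the weights: a programmer, a regulator, a vote?
    """
    return min(outcomes, key=lambda o: o.expected_casualties * o.moral_weight)

left = Outcome("left track: bus full of children", expected_casualties=30, moral_weight=1.0)
right = Outcome("right track: old man with his donkey cart", expected_casualties=1, moral_weight=1.0)
print(choose([left, right]).description)
```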

John Anderson: Yet. And let’s come to AGI. What is AGI? Because, up until now, we’ve been talking about intelligence that’s not human. It can’t make judgments, it can’t switch tasks, it can’t multitask. It can just be built up to do an enormous thing.

Professor John Lennox: One thing.

John Anderson: One thing. Even though that might be massively intrusive, as we’ve talked about with surveillance technology.

Professor John Lennox: Correct.

John Anderson: But now, we’re talking about something different altogether: General intelligence. It means …?

Professor John Lennox: Yeah, well, it means several things. The rough idea is to have a system that can do everything that human intelligence can do, and more. Do it better, do it faster, and so on. A kind of superhuman intelligence, which you could think of, at least in its initial stages, as being built up out of a whole lot of separate narrow AI systems. And that will surely be done to a large extent. But research on AGI goes further, and of course, it’s the stuff of dreams, it’s the stuff of science fiction, so people absolutely love it. And interest in it moves in two very distinct directions. There’s, first of all, the attempt to build machines to do it, machines based on silicon, computers, plastic, metal, all that kind of stuff. And then, there is the idea of taking existing human beings and enhancing them with bioengineering, drugs, all that kind of thing, even incorporating various aspects of technology, so that you’re making a cyborg, a cybernetic organism, a combination of biology and technology, to move into the future. So that we move beyond the human. And this is where the idea of transhumanism comes in: moving beyond the human. And of course, the view of many people is that humans are just a stage in the gradual evolution of biological organisms, which have developed in no particular direction through the blind forces of nature. But now we have intelligence, so we can take that into our own hands and begin to reshape the generations to come and make them according to our specification. Now, that raises huge questions. The first one, of course, concerns identity. What are these things going to be, and who am I in that kind of a situation? Now, AGI, I mentioned, is something that science fiction deals with a lot. The reason I take it seriously is that it’s not only science fiction writers who take it seriously. For example, one of our top scientists, possibly the top scientist, our Astronomer Royal, Lord Martin Rees, takes this very seriously. He says that some generations hence, we might effectively merge with technology. Now, that idea of humans merging with technology is again very much in science fiction, but the fact that some scientists are taking it seriously means, in the end, that the general public are going to be filled with these ideas. Speculative on the one hand, but with serious scientists espousing them on the other, so we need to be prepared and get people thinking about them, which is why I wrote my book. And in particular, in that book, I engaged not with a scientist but with Yuval Noah Harari, an Israeli historian.

John Anderson: Can I interrupt for a moment?

Professor John Lennox: Yes, of course, you can.

John Anderson: To quote something he said that frames this so beautifully, because I’m glad you’ve come to him. He actually said this: ‘We humans should get used to the idea that we are no longer mysterious souls. We’re now hackable animals.’ Everybody knows what being hacked means now. And once you can hack something, you can usually also engineer it. Let’s put that in for our listeners as you go on.

Professor John Lennox: That’s a typical Harari remark. And he wrote two major best-selling books, one called ‘Sapiens’ (homo sapiens, human beings) and the other ‘Homo deus’. And it’s with that second book that I interact a great deal, because it has huge influence around the world. What he’s talking about in that book is re-engineering human beings and producing ‘Homo deus’, spelt with a small ‘d’. He says, think of humans turning into gods, something way beyond their current capacities and so on. Now, I’m very interested in that, from a philosophical and from a biblical perspective, because that idea of humans becoming gods is a very old idea, and it’s being revived in a very big way. To be more precise, Harari sees the 21st century as having two major agendas. The first is, as he puts it, to solve the technical problem of physical death, so that people may live forever. They can die, but they don’t have to. And he says technical problems have technical solutions, and that’s where we are with physical death. That’s number one. The second agenda item is to massively enhance human happiness. Humans want to be happy, so we’ve got to do that. How are we going to do it? By re-engineering them from the ground up: genetically, with drugs, in every other way, adding technology, implants, all kinds of things. We move humans on from the animal stage, which he believes we reached through no plan or guidance, and, with our superior brain power, we turn them into superhumans. We’ll turn them into little gods. And of course, then comes the massive range of speculation: if we do that, will they eventually take over, and so on and so forth? So, that is transhumanism, connected with artificial intelligence, connected with the idea of the superhuman. And people love the idea. And you probably know, there are people, particularly in the USA, who’ve had their brains frozen after death, in the hope that one day their contents can be uploaded onto some silicon-based thing that will endure forever, and that will give them some sense of immortality. Now, if you notice, John, those two things, solving the problem of physical death and re-engineering humans to become little gods, have all to do with wanting immortality. And as a Christian, I have a great deal to say about that, because what’s happening in the transhumanist desire, I believe, is a parody of what Christianity is actually all about.

John Anderson: Doesn’t it, to some extent, reflect that? I think the very great majority of us are conscious that deep down, we don’t want to think we’ll come to an end.

Professor John Lennox: Oh no, we don’t.

John Anderson: I’m an individual who actually has no great aspiration to live to an advanced old age.

Professor John Lennox: Well, I’m the same.

John Anderson: Not to say I don’t enjoy life. Doesn’t mean that at all. Just means I don’t aspire to a great physical old age, frailty, and what have you. And I have a different perspective on what happens after that. But deep down, I don’t want to think it all ends with physical death. And I think that’s pretty much hardwired into all of us.

Professor John Lennox: I think it is hardwired. And that’s important, this business of what’s hardwired into human beings from version 1.0, so to speak. I think it’s vastly important. Many years ago, I came across that idea in the moral sense, with C.S. Lewis talking about it in his book ‘The Abolition of Man’, and it’s relevant to what we’re talking about at the moment. There’s an appendix at the end, where he points out that all around the world, if you look at every culture, they may differ, but they’ve got certain moral rules in common. It looks as if morality is hardwired. I believe it is, by a benevolent Creator. But now we come up to this, and we see that there’s hardwiring again at this particular level: ‘God has set eternity in the human heart.’ Now, of course, that’s a theistic perspective. But if you take the atheistic take on it, then you’ve got to explain where it comes from. And again, I found C.S. Lewis, as always, right on the money, so to speak. He makes the point, and I’m going to paraphrase it slightly: “It would be very strange to find yourself in a world where you got thirsty and there was no such thing as water.” I think that’s a very powerful thing, that longing. And C.S. Lewis has written a great deal about it, including a brilliant essay called ‘The Weight of Glory.’ That longing for another world implies — these are not his words, but they’re his sentiments — that we were actually made for another world. Now, I feel that the transhuman quest is an expression of the fact that we’re hardwired with a longing for something transcendent, and it’s trying to fulfil it. And I have reasons for thinking it won’t do that, but you may want to ask about that later.

John Anderson: Well, I think we’re probably coming in to land. The thing that I want to explore with you for a moment is that I think a lot of people are at the point where they won’t make the effort; it requires a lot of energy, quite a bit of anguish, to say, ‘I’m going to make some tough decisions about what I really believe.’ And it seems to me that this whole area of artificial intelligence, and the chance that we may reach the capacity to literally destroy ourselves, requires us to think long and hard, and to make judgments that will have to be based, if you like, on faith. You can’t know exactly what’s going to happen. So you see, if you want to say, ‘Well, it requires a lot of faith to work through whether I believe in a God,’ I would have thought this whole area presents just as great a challenge. ‘Who am I? How am I going to work this out? Do I put some ethical framework down, or do I just sit in the pot and let the water gradually boil until it’s too late?’

Professor John Lennox: Yes, I think this is a very important issue we’ve come to. There’s such confusion in the world about what faith is, and that’s mainly the fault, I would say, of people like Dawkins and Hitchens, who actually didn’t know what they were talking about, because they redefined faith as a religious word that means believing where there’s no evidence. And what they failed to see is that that’s a definition of blind faith, which only a fool would get involved with. The word ‘faith’ comes, in English, from the Latin ‘fides’, from which we get ‘fidelity’, and it conveys the whole idea of trustworthiness. And trustworthiness comes from having a backup in terms of evidence. A bank manager will only have faith in you if you prove you’ve got the collateral. You have to bring the evidence. We’d be foolish to trust people without evidence. So, evidence-based faith is something everyone understands, but what they don’t realize is that it’s essential to science, and it’s essential to a genuine Christian faith in God. I get leery these days, John, of using the word ‘faith’ on its own, because people think you’re talking about religion. Sometimes they say to me, ‘Will you give a talk on faith and science?’ I say, ‘Do you want me to talk about God?’ ‘Oh yes.’ ‘Well,’ I say, ‘it’s not in your title. I could talk about faith in science without even mentioning God, because scientists have got a basic credo, things they believe. They’ve got to believe that science can be done. They’ve got to believe that the universe is rationally intelligible. That is their faith, and, as Einstein once said, no scientist could be imagined without it. If you want to talk about faith as faith in God, please call it faith in God, or else we’re going to get very confused.’ Now, coming back to this, you are absolutely right. This is going to force us, whether we like it or not, to do some hard thinking, and to re-inspect and recalibrate our worldview. Because our attitude to these things depends on our worldview, our set of answers to the big questions of life: What is reality? Who am I? What’s going to happen after death? And all those kinds of things. They’re coming out in this area; we’re being forced to think about them. And as you say, we can sit like the toad in the kettle when the water is boiling and pretend that nothing’s happening, but we can’t afford that. That isn’t a luxury; that’s suicidal. There is a book called ‘The Suicide of the West’, and the trouble is, we’re just not thinking enough. And I feel, and I know you’re doing this, and I feel called to do it too, that we should put issues out into the public space so that people can really see that they can think about them and come to conclusions about them. And as you say, we’re nearly landing this discussion. It seems to me, focusing on what’s going on, that I read Harari and I read other books like this, and I say, ‘You know, I can understand what you’re looking for. You’re looking for something that’s very deep and hardwired in us.’ And I make people smile sometimes when I meet these transhumanists, and I say, ‘Guys, I respect what you’re after, but you’re too late.’ And they say, ‘What? Too late? Of course, we’re not too late.’ I say, ‘You actually are too late. Take your two problems: one, physical death. Now, I believe there’s powerful evidence that that was solved 20 centuries ago. It was actually solved before that, but 20 centuries ago, there was a resurrection in Jerusalem.
We celebrate it at Easter; we’re just after Easter now. And as a scientist, I believe it, for various reasons that we can discuss. But the point is that if Jesus Christ broke the death barrier, that puts everything in a different light. Why? Because it affects you and me. How does it affect you and me? Because if that is the case, then we need to recalibrate and take seriously His claim to be God become human.’ I say, ‘Isn’t that interesting? What are you trying to do? You’re trying to turn humans into gods. The Christian message goes in the exact opposite direction. It tells us of a God who became human. Do you notice the difference?’ And, of course, that actually gets people fascinated. I say, ‘You are actually taking seriously the idea that humans can turn themselves into gods by technology and so on. Why won’t you take seriously the idea that there is a God who became human? Is that any more difficult? And once you’ve got that, then I think, arguably, you need to take seriously what Jesus says. And what he says — and that is the Christian message — is that he is God become human in order to do what? To give us his life. If you like, to turn us into what you want to be. Because the amazing thing is that the central message of the Christian faith to you and me is the answer to the transhumanist dream. One, Christ promises eternal life, that is, life that will never cease, and it begins now, not in some mystical, uncertain transhuman future, but right now. Secondly, because he rose from the dead, he promises that we will one day be raised from the dead to live with him in another transcendent realm that’s perhaps even more real, probably more real, than this one. And that’s going to be the biggest uploading ever, you see. So, your hope for the future of humanity, changing human beings into something more desirable, living forever and happier: all of that is offered. But the difference between the two is radical. Because your idea is to use human intelligence to turn humans into gods, bypassing the problem of moral evil. You’re never going to do it. No utopia has ever been built. And you’re not thinking straight, because there have been attempts to re-engineer humanity, crude ones of course — the Nazi program of eugenics, the Soviet attempts to make a new man. And what did they lead to? Rivers of blood. The 20th century was the bloodiest century in history. Mind you, what’s happening now might make this a very bloody century.’ But what I’m saying, John, is that I believe even more strongly than ever that, as Christians, we’ve got a brilliant answer and a message to speak into this that ticks all the boxes. But it means facing moral reality, which is exactly at the heart of the scariness with which some people approach these issues.

John Anderson: John, I think we should land the plane there. You couldn’t have more clearly articulated the reality of the changes before us, the challenges before us, and the need for people to get off the fence and not allow themselves to be satiated by false comfort. The world doesn’t give us that option anymore. In my view, if we don’t make decisions now, individually and corporately, we’re sunk. I don’t want to subtract from or add to that remarkable overview of what we’re facing. So, I’ll land the plane and thank you very much indeed.

Professor John Lennox: Happy landing.
