Dennett toys with the idea, as philosophers are wont to do, and explains that in fact you can build a kind of comprehension out of the competence of a basic computer. “Does it understand? Well, not really, but it’s as good as. It’s a sort of understanding,” he says.
One thought experiment proposed by AI experts involves replacing biological neurons in our brains with electronic ones. At what point would electronic consciousness supersede natural consciousness, if at all? But that’s not much of a dilemma for Dennett.
“The idea that you couldn’t do that is the idea that some part of the brain is - to use a nice term from economics - non-fungible. But we’ve no reason to believe that at all.” Still, he adds, what is possible in principle is not necessarily possible in practice, at least for the time being.
Dennett has long been a follower of the latest research in AI. The final chapter of his book focuses on the subject. There has been much talk recently about the dangers posed by the emergence of a superintelligence, when a computer might one day outstrip human intelligence and assume agency. Although Dennett accepts that such a superintelligence is logically possible, he argues that it is a “pernicious fantasy” that is distracting us from far more pressing technological problems. In particular, he worries about our “deeply embedded and generous” tendency to attribute far more understanding to intelligent systems than they possess. Giving digital assistants names and cutesy personas worsens the confusion.
“All we’re going to see in our own lifetimes are intelligent tools, not colleagues. Don’t think of them as colleagues, don’t try to make them colleagues and, above all, don’t kid yourself that they’re colleagues,” he says.
Dennett adds that if he could lay down the law he would insist that the users of such AI systems were licensed and bonded, forcing them to assume liability for their actions. Insurance companies would then ensure that manufacturers divulged all of their products’ known weaknesses, just as pharmaceutical companies reel off all their drugs’ suspected side-effects. “We want to ensure that anything we build is going to be a systemological wonderbox, not an agency. It’s not responsible. You can unplug it any time you want. And we should keep it that way,” he says.
In spite of his extended replies, which pour out in perfectly formed paragraphs, Dennett has succeeded in polishing off his lamb far faster than I have dispatched my cod.
A chocolate pot soon arrives, which he also tackles with relish, while I spoon up luxuriant vanilla rice and poached quince. Dennett, a frequent visitor to London, declares he has always liked the restaurant because of its lightness, views, and convenience.
We plunge straight back into one of his other great preoccupations: the impact of digital technology on our societies. In a 2015 essay co-written with Deb Roy, a professor at the Massachusetts Institute of Technology, Dennett compared our times with the Cambrian explosion, an era of extraordinary biological innovation that occurred half a billion years ago. One hypothesis had it that the world was suddenly flooded with light, forcing animal life rapidly to evolve or - in most cases - die.
Pressing the Cambrian explosion into service as an analogy, he suggests that the blinding light of transparency from digital technologies is having a similar effect on life today. “Every human institution, from marriage to the army to the government to the courts to corporations and banks, religions, every system of civilisation is now in jeopardy because of this new transparency.”
The “membranes” protecting these institutions have been permeated and we are emerging into a world where it is near-impossible to keep secrets. While some ideologues may consider this to be a good thing, Dennett argues it is having terrible consequences. “People haven’t really come to grips with the fact that it’s not just personal privacy that matters, it’s also institutional privacy,” he says.
To take just one relatively innocuous example, he says it’s vital that everyone receives the employment numbers from the Department of Labor at the same time, rather than through a series of leaks. “It’s harder to protect your reputation for reliability than to damage it. It turns out that, as usual, offence is cheaper than defence,” he says. “Everybody who cares about the preservation of any institution has to stop everything, ring the alarm bell, and start thinking about how to preserve that ‘membrane’ in a way that is morally permissible.”
Even worse, from Dennett’s viewpoint, is that the US has elected a president who is accelerating that erosion of trust in institutions, starting with the presidency itself. “He’s undermining the credibility of himself, the courts, Congress, the media. He’s a one-person cultural vandal,” he says.
Dennett is so concerned by the political situation that he is devoting much of his time to exploring ways to protect the truth in societies and restore trust. “The arms race between deliberate deception and our capacity to protect ourselves from it is hugely unbalanced and we’re in danger of losing,” he says.
I suggest that philosophers are straying into contentious territory whenever they start talking about truth, a concept that has been furiously debated for millennia. He acknowledges that politics involves normative judgments but insists that decisions must be grounded in objective facts. He rails against those philosophers who forget that they rely on objective truth thousands of times a day. “Even postmodernists get furious if their health insurance is misrepresented to them. They don’t say: ‘Oh, that’s just one of those conversations, ha, ha.’ They say: ‘Damn it! You just told me a lie, now you fix it!’ ”
But how is it possible to protect truth in such a glaringly open world? Dennett replies that if he knew the answer he wouldn’t be sitting in a restaurant talking to me. But he says that he and some like-minded colleagues are working on the problem. He refers to another thought experiment that a billionaire once threw at him. What would Dennett do if he were given $1bn?
Dennett replied that he would try to set up an international, self-policing, co-operative “truth source”. It would be a kind of combination of the Reuters news agency, the Wikipedia online encyclopedia, and the Snopes website for debunking urban myths. “It would be the place to go to check out your hunch when something is too good to be true. That’s what I’d do with $1bn, I would endow that,” he says.
Dennett did not persuade that billionaire to part with his money. But if there are any others out there, he is awaiting your call. He stresses the urgency of rebuilding “islands of trust” in communities before building out from there.
Over a double macchiato (for him) and tea (for me), we discuss the role of philosophers in society. He is fervently of the view that philosophers should not retreat to their ivory towers but immerse themselves in the real world.
He draws a sharp distinction between those who do philosophy and those who do philosophy appreciation. “In some places, you learn to identify and classify all the different ‘isms’. Forget it! Imagine you discovered stunning, incontrovertible evidence of an attempted coup d’état and you went to the FBI and they looked at it and said: ‘This is a very interesting example of early 21st-century conspiratorialism.’ But it’s bloody true!
“What’s really important is: do I believe it, is it true, does it matter? If you lose sight of that then you’ve sort of abandoned the whole point of philosophy.”