Episode 45: Naval Ravikant | End Games (part two) | Click to Listen
You can subscribe to this podcast from any podcast player by typing "After On" in the search window. If you're on an iPhone and use Apple's podcast software, just click here. If you're on a computer, click on that same link – then click the blue “View on iTunes” button (under the After On image on the left side of the page), then click “Subscribe” (in a similar location) in the iTunes window.
After On Podcast #45: End Games (part two)
First – a transcript of this interview (with certain interesting points highlighted) can be obtained by clicking on the large button immediately above these words. You are welcome to post it, excerpt it, and generally distribute it in any way you see fit. Also - the folks at Podcast Notes wrote up an amazing summary of this conversation, which is right here.
If the things Naval & I said in these two episodes resonate with you at all, I hope you'll consider joining a long, slow, steady campaign to preclude the arrival of an actual Ender. Yes, this is something we get to do – as a society – if we put our minds to it early enough, and in a level-headed way.
But this won't be easy. Nor will it just happen on its own. At least a thousand suicidal mass murderers detonate every year – and technology will continue to arm them with evermore powerful weapons.
No amount of wishful thinking will cause them to suddenly vanish. Despite that, some well-meaning people discouraged me from having this conversation in public, lest we give would-be Enders "ideas." But I'm afraid they already HAVE ideas. Ones they’ve had from time immemorial. And they're acting upon them – every year, on every continent, and in every type of society. All that’s changing, in a relentless & quite structural way, is the technology that some subset of them can access. If ever there was a situation in which hope is not a strategy, this is it.
So what’s a more adult way to confront this than wishful thinking? Step One is simply spreading level-headed awareness of the dangers, on a very broad level. A widespread infantilizing notion has it that only certified experts have a right to an opinion, or a seat at the table, when these sorts of risks are contemplated. Naval mentioned this when describing certain self-styled AI experts, who believe that no one outside their own narrow clique should have any voice in how AI is developed.
But to quote Upton Sinclair again, “it is difficult to get a man to understand something, when his salary depends upon his not understanding it.” Antique and overly gender-specific as the language may be, this is an important and powerful point. We left it to experts with highly distorted personal incentives to both run and watch over our financial system. Much more recently, we decided that only experts at places like Facebook could truly understand and safeguard our digital privacy. A reasonable conclusion is that experts with upside shouldn’t be granted monopoly powers over the governance of anything. And definitely not when billions of lives could be on the line.
Now, this certainly doesn’t mean there’s NO role for experts in conversations like this. Obviously, there’s an enormous one. It’s just not a monopoly role. For them, or for anybody.
As we face the bedeviling risks that an ancient glitch in human nature will pose when it joins forces with near-future technology, what advantages lie on our side? For starters, we sure have the numbers. Suicidal mass murderers are a minuscule fringe of society – literally, less than one person in a million in any given year. And they’re no more likely than anyone else to be geniuses. So the odds of one of them conjuring up a masterfully diabolical plan that the rest of us are too thick to foresee are basically nonexistent. That said, our huge numerical advantage will mean nothing if we fail to broadly engage with this risk, and instead delegate it to a conference room of “experts with upside” in a think tank someplace.
The second thing we have on our side is that the real dangers are at least a decade away – and maybe quite a bit more than that. This gives us a huge jump on a diabolical actor who might strike when the time has truly come. Many of those people may not even have been born yet. But again – to use time to our huge advantage, we have to put our imaginations and analytical horsepower to work now, and not the day after a first horrifying near miss has been inflicted on the world. Because who’s to say the first strike will miss at all?
This means a simple thing anyone can do to help interdict these dangers is to spread the word. Tweet, blog, or otherwise discuss this episode, if the arguments resonated with you. If you’re a podcaster yourself, and would like to literally rebroadcast this conversation as one of your own episodes, please reach out to me on Twitter, or through this website, and I'll be delighted to enable that. This is a worthy conversation to have with your own listeners (plus, borrowing this episode could get you a week off from production – which I know from personal experience can be a godsend).
If Naval & I just didn’t do it for you, there are plenty of other resources out there to disseminate instead. Countless people are now discussing these risks in public forums – so join the conversation, and spread the word in whatever manner is most natural for you. If these risks become as widely understood, analyzed and dreaded as nuclear winter was during the Cold War, we will have come a long, long way toward precluding them.
And if you’re really ambitious and scientifically minded, you could take it a giant step further, and become a bioengineer. Or work in some other field connected to existential dangers. Truly: the more bioengineers we have, the safer we are – because the crushing majority of people are good guys, not bad guys. So consider joining this amazing field. Maybe you'll help deliver on some of its extraordinary promise for humanity. Or become a hero who helps build a critical piece of protective infrastructure, which one day saves innumerable lives.