There are several kinds of risk management. Traditional risk management, which goes back more than 400 years and was the essence of my job at UC Berkeley, focuses on insurable risks. Financial risk management arose in the 19th century and is supposed to protect large fortunes from losses. Enterprise risk management, which arose in the 1990s, addresses risks that might prevent an organization from achieving its goals.

And then, at the field’s cutting edge, there’s existential risk management, which receives scant attention because it has no business purpose.

That’s because all it deals with is humanity’s survival.

Nick Bostrom, a philosophy professor at Oxford and head of that university’s Future of Humanity Institute, defines an existential risk as a threat that could cause human extinction or destroy humankind’s potential. Some existential risks are natural and have always been around: epidemics, supervolcano eruptions, asteroid strikes, etc. Increasingly, however, humanity’s collective well-being and survival are threatened by risks of our own making (anthropogenic risks): nuclear war, climate change, and emerging applications of biotechnology and artificial intelligence, to name a few.

Anthropogenic existential risks have significantly raised the chance of catastrophe. In 2001, Bostrom put the likelihood of human extinction this century at 25%. An older colleague, Canadian philosopher John Leslie, has put it as high as 50%. And in his final book, Brief Answers to the Big Questions, Stephen Hawking deemed it “almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next thousand years.”
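To see how estimates on different time horizons relate, here is a back-of-the-envelope calculation in Python. It assumes, purely for illustration, a constant annual probability of catastrophe consistent with Bostrom’s 25%-per-century figure; the derived numbers are mine, not drawn from any of the sources above.

```python
# Back-of-the-envelope comparison of risk horizons. Assumption (for
# illustration only): a constant annual probability of catastrophe that
# compounds to a 25% chance of disaster over one century.

p_century = 0.25                                   # P(catastrophe within 100 years)
annual_hazard = 1 - (1 - p_century) ** (1 / 100)   # implied constant yearly risk
p_millennium = 1 - (1 - annual_hazard) ** 1000     # implied risk over 1,000 years

print(f"Implied annual risk:     {annual_hazard:.3%}")   # ~0.287%
print(f"Implied 1,000-year risk: {p_millennium:.1%}")    # ~94.4%
```

On that simplistic assumption, a one-in-four risk per century compounds to better-than-90% odds over a millennium, which is roughly the spirit of Hawking’s remark.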

How do we prevent the bad thing from happening?

For starters, we have to begin treating existential risk management as an integral human endeavor. We are Homo sapiens, not Homo economicus, and it’s long past time we considered human survival and welfare before profit. Even within academia, which is slightly less money-mad than business, Bostrom laments that “there is more scholarly work on the life-habits of the dung fly than on existential risks.” Bostrom himself is better known for his speculations about whether our reality is someone else’s Matrix-like computer simulation than for his broader work in existential risk.

Next, we have to start applying a concept Bostrom calls Maxipok to any new technology with potentially global impact. Maxipok requires that we “maximize the probability of an okay outcome” for that technology, with okay modestly defined as “any outcome that avoids existential disaster.”
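For readers who like to see a decision rule written down, here is a minimal sketch of the maxipok idea in Python; the candidate policies and their probability estimates are hypothetical, invented only to show the shape of the rule, not taken from Bostrom.

```python
# Toy sketch of the maxipok rule: among candidate courses of action for a new
# technology, choose the one with the highest estimated probability of an
# "okay" outcome, i.e. an outcome that avoids existential disaster.
# The policy names and numbers below are hypothetical.

candidate_policies = {
    "deploy immediately":          0.90,  # estimated P(okay outcome)
    "deploy after safety review":  0.97,
    "delay pending more research": 0.99,
}

def maxipok(policies: dict[str, float]) -> str:
    """Return the policy that maximizes the probability of an okay outcome."""
    return max(policies, key=policies.get)

print(maxipok(candidate_policies))  # -> "delay pending more research"
```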

Last, we have to go back to something out of fashion: trusting trained scientific and ethical experts to identify existential risks, quantify those risks to the degree possible, and recommend actions most likely to result in okay outcomes.

I know: try selling that to a global population awash in delusion and fake news. But we don’t have much choice. Existential risks are not like the traditional risks I dealt with at UC Berkeley. When a bad thing happened on campus, I had a chance to minimize its impact and to reduce the likelihood of it happening again. But by definition, an existential risk can’t be minimized after it occurs — and we don’t get a second chance.

[Photo: Bikini Atoll, 1946.]

Fun post to start the new year, right? What can I say, I’ve been sick. For even grimmer detail, here are links to a couple of Nick Bostrom’s key academic papers: Existential Risks: Analyzing Human Extinction Scenarios (2001) and Existential Risk Prevention as a Global Priority (2013). More accessible but less rigorous is this recent 23-minute BBC podcast.

Former Risk Manager at UC Berkeley, author of four books, ectomorphic introvert.
