Toby Ord has written a book that any human who cares about the future should read. The Precipice is about all the things that could end humanity for good.

And if you thought we only needed to worry about climate change and nuclear war, you were wrong.

There are so many more nightmares out there that should be keeping you up at night. Ask most people and I doubt that supervolcanoes, asteroids, and stellar explosions come top of mind, but an asteroid got the dinosaurs! And according to Toby there’s a 1 in 1000 chance that one of these things gets us in the next century.

The newest kid on the block is unaligned AI. “Unaligned” is a fancy term for AI that won’t do our bidding and get in line with the idea that we should be kept alive. What a euphemism.

Toby puts anthropogenic risks like AI, climate change, and war at a 1 in 6 chance of ending us in the next century.

To put that in perspective, we have a 1 in 100 chance of dying in a car crash in our lifetimes, so a 1 in 6 chance feels like a bit much. Hell, I don’t even like a 1 in 1000 chance for non-anthropogenic risks.

What are we currently doing about this?

Nothing really. Toby talks about the puny investments we are making in solving these risks.

And do we have humanity’s best people working on this? Of course not. They are, in fact, helping to build one of the things most likely to kill us in the next 100 years.

It’s all very sobering, so what are we to do about it?

Toby takes a stab at it.

First, he says, we need to focus on the existential risks. We need to get international institutions working on this. We need to use technology. We need to ramp up research. In essence, when your ship is sinking, you need to focus on plugging the hole before you can worry about whether the ship is sailing in the right direction.

Second, if we survive this precipice, he says, we need to take a break and reflect. He calls this the “long reflection.” This is the eat, pray, love part.

The reason we need to do all this is that we, as humans, have great potential. If we die out, we will never know what that potential could have been. It will be stolen from us, like the future of a child who died too soon.

It’s this part of the book that I like least.

While Toby does give some ideas about how to solve these problems, and even suggests things we can do as individuals, I find his approach both too sober given the stakes and underdeveloped.

For example, he cautions people not to be too emotional lest we turn others away from the cause. But a 1 in 6 chance should be emotional. If your house were on fire, you would be screaming to make sure everyone got out in time. I’ve been in a fire twice! I was in fact calm and sober the first time, and I should not have been! Fortunately a German man in a Speedo saved my life by knocking on my hotel door. Behind him were giant flames. A sight, literally, burned into my mind to this day.

Toby also wants to play nice with all the people who could be donors but who are also driving us to extinction. The system, and its major players, are behaving irresponsibly. Who are they? Why are we too weak to stop them? Why aren’t more people capable of understanding these risks? How should we change these systems?

We will never know.

Also, let’s talk about inspiration.

Toby’s book tries to talk about our “great potential,” but no one sacrifices their life for an abstract idea of great potential. So Toby highlights all the great things we have done as humanity and what’s left to do.

Imagine if we could solve hunger and disease. What if we could solve injustice?

And what if we could take all the things we like about life: love, making babies, happiness, joy, and have even more of them?

At the end of the day, his argument sounds dangerously close to that of techno-optimists like Marc Andreessen: technology and science will solve all our problems, and humans are necessarily going to be upgraded into something better.

But his vision rings hollow to me. The same as that of the techno-optimists.

What should humanity be doing with itself? What should be our purpose?

This is a question that everyone punts on.

Toby argues that we should deal with that in our “long reflection” - after we’ve vanquished our risks. He might be right, but perhaps the biggest risk of all is not AI or supervolcanoes, but rather the fact that we haven’t set a shared goal and purpose that we are all willing to fight and live for. Because humans want more than mere survival - we want meaning and purpose - and when we don’t have it, what’s the point of living?

All that said, I love this book and even if it only covered the existential risks and nothing more, it would still be worth reading by every human on the planet.