When it comes to the aesthetics of the apocalypse, human beings have always dreamed vividly about the end of the world. I like to think that this is because we are a world unto ourselves. Awareness of our own mortality — or our ability to transcend the flesh, whatever is more important to you — is coded into us on the molecular level.
On the other hand, people fantasize about what the apocalypse will look like without understanding, or at least without directly admitting, the hand they themselves may play in bringing it about. For example, I recently found out that tech CEO Sam Altman is a survival prepper.
Prepping by itself is not bad, frankly. Do whatever makes you happy, you know? Keep valuable items on hand. Learn valuable skills. You don’t know when you might need them.
I don’t like the outright dismissal of prepping. Just like the screechy dismissals you hear of hunting and fishing — important skills to have! Ask me about how my father fed us when I was a small child and store shelves were empty in the Soviet Union! — it’s very privileged and foolish to assume you’ll simply never need a water purifying tablet or even a good flashlight.
Yet Sam Altman is a prepper because, as the article linked above states, one of his worries is AI becoming sentient and turning against us down the line.
What does Altman work on? That’s right, AI. In fact, Altman is one of the co-founders of OpenAI, which brought us ChatGPT.
If he thinks that a full-on AI rebellion is eventually possible, he probably realizes that the gas masks he got from the Israel Defense Forces aren't likely to be helpful for long.
Cosplaying as a cool guy in a post-apocalyptic wasteland is not the only thing you should be doing when faced with any potential dangers from AI. There are moral imperatives at work. Or — there should be.
To be clear, people much smarter than I am will tell you that ChatGPT is not sentient. It is learning, however, and who knows — it may be a sign that we're edging closer to the technological singularity.
What's that going to look like? I don't know. I think it's OK to say that. I think lack of knowledge can be just as scary as doom itself.
What I do know is that human beings are still going to need each other.
Consider a typical scenario of societal breakdown: altruism will benefit us just as much as prepping. A group of survivors will need to protect the town dentist, for example, because guns and gold won't help them if they're dying from a blood infection caused by a rotten tooth. People have valuable skills that can be leveraged as needed, and those skills are just as important as gear and supplies. This points to a greater truth about us.
Working together has gotten humans very far. When we imagine the apocalypse freeing us from the grasping bonds of society so we can strike out bravely on our own, we forget that part. Yet cooperation is one of our greatest assets.
This is precisely why we should be working together to prepare collective responses to the development of AI at this point in its epic journey.
We need to have better frameworks in place for how AI is bound to change the world. We need them now, not whenever our tech bros stop quarreling with each other and stockpiling antibiotics.
Image: Daniel Ramos