If the 19th century was about physical colonization of places on the planet, the 20th century was about continued fighting over the spoils of that colonization, plus the development of new weapons of domination, like chemicals, nuclear fission, and industrial machine technology.
Late in the 20th century, the computer revolution kicked off, and a new era of domination began to unfold: the Algorithm Age. It's been quite something to live through this accelerated period of human history, in which we have created tools that now outperform us at certain mental tasks, with no end in sight.
All tools can be used for good or for ill—to enhance life or to end life. You can build a home with a hammer, or you can smash a head in—it all depends on the intentions and ethics of the person wielding the implement.
This crude analogy holds true for computers as well. AI has the potential to save our lives or to engineer our extinction—depending on its programming.
It turns out that the potential heroes of our moment are the programmers. Are they going to blindly follow the orders of their corporate capitalist masters, which command them to program the computers for maximum individual profit? Or are they going to listen to the ethical promptings of their own hearts and program the AIs to provide for maximum benefit for the Earth community writ large?
Apparently software engineers are alert to the risks and moral murkiness of their work. A recent op-ed piece in the New York Times noted that "It is not just our own lack of understanding of the internal mechanisms of these technologies but also their marked improvement in mastering our world that has inspired fear. A growing group of leading technologists has issued calls for caution and debate before pursuing further technical advances. An open letter to the engineering community calling for a six-month pause in developing more advanced forms of A.I. has received more than 33,000 signatures....
"At a White House meeting with President Biden, seven companies that are developing A.I. announced their commitment to a set of broad principles intended to manage the risks of artificial intelligence. In March, one commentator published an essay in Time magazine arguing that 'if somebody builds a too-powerful A.I., under present conditions,' he expects 'that every single member of the human species and all biological life on Earth dies shortly thereafter.'"
The author of this op-ed essay, Alexander Karp, cites these cautions only as straw men to be demolished, however. As the CEO of a software engineering firm that services the US military, Karp is a 100% hawk:
"We must not...shy away from building sharp tools for fear they may be turned against us," he says. "Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed. This is an arms race of a different kind, and it has begun," he concludes ominously. "We must not grow complacent. The ability of free and democratic societies to prevail requires something more than moral appeal. It requires hard power, and hard power in this century will be built on software."
The rise of dominator AI, against a backdrop of global climate chaos and political instability, presents us with a frightening scenario, ripe for sci-fi dystopia.
But my sense is that we are still teetering on the edge of that crater of total disaster. We can still pull ourselves back, regroup, and—using our tools ethically, with mental and emotional intelligence—turn the 21st century into a time of regenerating the ecological web of our planet, and reimagining our own place and role within that web.
Obviously, education is key.
Our best and brightest minds are flocking to the STEM fields, correctly recognizing that this is where they can have the most impact on humanity's future. STEM programs have a responsibility to give these young people the most complete education possible.
I am grateful to the thousands of software engineers who are actively warning about the dangers of unregulated AI. Their willingness to speak out underscores the importance of a broad, interdisciplinary education, one that sets technical skill building in the context of the kinds of ethical, philosophical, social and political questions raised in discussion-based humanities classes.
Yes, engineers should take classes in literature, history and philosophy! Yes, engineers should regularly be encouraged to get away from their screens and commune with the natural world that fuels their virtual realities. Yes, engineers need social skills, especially in ordinary communication, so that they can convey to non-specialists the ramifications of their technical achievements.
Education must move away from the siloed "department" system, in which members of the arts faculty rarely interact with their colleagues in the sciences. Those silos need to be opened up and recreated as interconnected nodes, so that students are not just forced to take a random class outside their major to fulfill a requirement, but encouraged to consider the big picture: what kinds of classes will best prepare them to be active, informed, discerning human beings at this crucial stage of human history?
Every young person is a potential "Noah," able to contribute a vital piece to the construction of the philosophical and technical Ark that will allow the Earth community to rebalance and continue to thrive. Educators today have a sacred responsibility to nurture these young minds with the utmost care, respect and steadiness of vision.
Are we up to the task? This is certainly a moment to give it our all.