tl;dr: The promise of AI is too great to be left to warring factions with totalizing metanarratives. Skeptical optimists — those who believe that potential problems posed by rapid AI development are solvable with effort and coordination — have a vital role to play and are more likely to change minds, build coalitions, and get things done. They recognize that safety enables speed and want to secure the best possible AI-enabled future for the most people.
Doomers, anti-hypers, and accelerationists. If you’ve spent any time reading about artificial intelligence lately, you’ve likely encountered these three groups. Each of them has data to support its arguments, neat and internally consistent theories, and extremely vocal supporters willing to sea-lion in replies and comments. Unfortunately, as with most online phenomena, the loudest and most extreme positions gain a disproportionate amount of screen and cognitive real estate, leaving more measured positions to wither in obscurity. After all, who wants to retweet a debate where everyone agrees that everyone else is a little bit right and the answer lies somewhere in the middle? This debate has become factionalized and polarized. Even using the language of one of these groups is seen as a concession to their overall beliefs; acknowledging well-made points is a sign of weakness.
There is a real danger of nuanced positions being thrown out with the bathwater. Acknowledging that current approaches to AI have uncovered serious issues and present causes for concern places you in the anti-hyper camp. Being excited about, and wishing to build for, potentially historic economic growth puts you in the accelerate-at-all-costs camp.
Here are the three positions, in all of their straw-manned glory:
Doomers: Artificial intelligence is going to change everything, affecting hundreds of millions of jobs and maybe even killing everyone on earth. We need to pause development for at least six months (or forever). Guardrails on current models that prohibit the generation of political commentary or racial slurs are a distraction from this existential risk. Many doomers are found in the "AI safety" and "alignment" communities.
Anti-hypers: Actually, current AI models are just fancy autocomplete, stochastic parrots that merely memorize their training data, and are being endlessly hyped by "techbros" who want to accelerate tech at all costs or, alternatively, by "techbros" who have read too much sci-fi and are in a doomsday cult. The fact that AI models are coming out of the US tech industry should be a warning sign that they are fundamentally without merit. Instead, the anti-hypers argue, we should focus on current issues with models: racism, sexism, and other forms of bias. Anti-hypers tend to be found in the "AI ethics" communities.
Accelerationists: All of the above people are wrong, they argue. AI is going to usher in a new future for humanity, where all work but the most fulfilling has been automated away, everyone's needs are taken care of, and GDP per capita is roughly infinite. Any safety precautions are literally preventing a transhumanist utopia. And, for some reason I can't quite understand, this **also** includes the free generation of racial slurs by large language models; apparently the horseshoe theory of AI criticism runs through racial slurs.
The truth is that no one knows for sure where we’re heading right now, and anyone who claims to know confidently is likely engaged in motivated reasoning, is overconfident in their predictions, or is selling something (including membership in their in-group). This has led to some fairly deep rifts in the AI community, with multiple factions sparing no invective for the others, and has left adrift many of us who are excited by the rapid development we’re witnessing in AI, concerned by some of its potential effects on society, and unconvinced by both rampant boosterism and out-of-hand dismissal. Outside of certain pockets of Twitter, I don’t think the accelerationist position is gaining much traction, so I won’t spend much time on it here. Nor am I interested in debunking the specific points of either the doomers or the anti-hypers; rather, I see them as poles on the same axis whose tactics are often similar.
Of course, the truth likely lies somewhere in the middle: the exclusion of and discrimination against underrepresented and marginalized groups by machine learning models is well documented, and the current trajectory of AI progress could lead to a future of increased abundance. Just because “techbros” are working on AI does not mean it is a technological trajectory without promise. If we want to minimize marginalization and maximize abundance, it’s absolutely crucial that skeptical optimists not remain on the sidelines.
Each of these groups, by adopting the norms and conventions of online political debates, is moving us further away from a future where AI helps humans fulfill their potential, helps marginalized groups break down bias and barriers, and increases abundance and prosperity. I’m not claiming that this future is guaranteed (or even likely!), contra the accelerationist position; I am claiming that we are not on a path towards that future and won’t be unless skeptical optimists step up and get involved in building innovative and responsible AI systems.
Much has been written on the relative merits of optimism, including the pithy observation that “pessimists sound smart, optimists make money,” but the concept of optimism is still widely misunderstood and assumed to mean blind faith that things will work out or get better. Rather, optimism is the belief that things can improve and that problems are solvable; for that to happen, work and luck will be required, and success is not guaranteed. (See also Hannah Ritchie of Our World in Data on this topic.)
I’m defining skeptical optimists as an extension of this: people who believe that the problems we face are solvable, but who are skeptical that they will be solved without a great deal of energy and coordination. We are not Cassandras or Pollyannas. We believe the following things are true (though this is not a totalizing ideology with fixed beliefs):
- Enthusiasm should be embraced, with guardrails. The progress we’re seeing in AI is real, has great potential, and represents a pivotal moment in human history.
- Regulation is necessary but requires broad and diverse coalitions of AI researchers, social scientists, and policy experts to be successful. Regulation hastily drafted in response to the ideological narratives of doomers, anti-hypers, or accelerationists is likely to have unforeseen side effects and is unlikely to achieve its stated goals.
- Rapid acceleration without precaution is likely to lead to accidents, uncaptured externalities, and Matthew effects, rather than a rising tide that lifts all boats.
- Sci-fi scenarios where a superhuman intelligence exterminates humanity (or worse) are fantastical and not deserving of serious consideration. Entertaining them risks losing the enormous benefits that AI could provide.
- Real problems of racism, sexism, and other forms of bias and toxicity exist in today’s models and are likely to be amplified by future models. Addressing these problems equitably is extremely difficult.
- Safety enables speed. Formula 1 cars can go 300 km/h thanks to the remarkable investments made in safety technology. Commercial airplanes go faster than the fastest car and are orders of magnitude safer.
Skeptical optimists must have a seat at the table in deciding the future of AI. Totalizing ideologies crowd out diverse viewpoints and make building cross-cutting coalitions difficult or impossible. Groups lacking diversity of thought perform worse in their decision-making than groups with a broad range of perspectives and backgrounds. The path to changing minds is long and winding, and it requires empathy, serious consideration of other viewpoints, and a willingness to compromise in pursuit of a common goal. Totalizing ideologies offer no aid here.
In a future post, I’ll discuss how I’ve focused on these principles in building a responsible AI team and offer some thoughts on how others might use this model. Until then, let’s focus on building an abundant and amazing future.
Disclaimer: I lead a responsible AI organization during work hours, but this post was written in my capacity as a private person, a machine learning practitioner, and a social scientist. These views are not offered as those of my employer.