Prof. Arun Sundararajan
Professor of Entrepreneurship and of Technology at NYU Stern School of Business, New York, USA
AI systems do not have the same drive toward concentrating power that humans do. They can be designed so that they do not dominate humanity. I see AI's greatest long-term opportunity in its becoming the core of a more equitable economic system, one that could strengthen democracy and bypass the classic concentration of power in the hands of a few that every economic system has historically experienced. But the time to act is now. The biggest current risk AI poses to democracy is one we have known from digital platforms for a decade: AI makes a shared understanding of truth harder to reach, posing a serious challenge to democratic discourse.
A second significant risk is that AI will force a large portion of the working population into mid-life career transitions. If this transition fails, it will further undermine democratic stability. In countries that manage this transition poorly, citizens will become more susceptible to 'tribal messages'. In the 21st century, the infrastructure for mid-career transitions is central to the stability of democracy, and AI is accelerating this development.
In the near term, AI harbors the potential to elevate the democratic process by informing citizens more comprehensively. For instance, AI can demystify complex legislative topics like the U.S. debt ceiling or weapons aid to Ukraine, thereby fostering a well-informed and engaged electorate. However, the current deployment of AI has, paradoxically, led to information silos. Algorithms designed to maximize user engagement in pursuit of advertising revenue have restricted people to monochromatic perspectives on multi-faceted issues. This short-term tunnel vision is concerning but reversible.
Over the last five years, I’ve been studying the future of capitalism and democracy. After examining a range of economic systems throughout history, from capitalist to socialist, from authoritarian to ancient Sumerian and Chinese, a pattern emerges. Inevitably, market power gets concentrated in the hands of a few, leading to alienation of the masses and the eventual destabilization of institutions.
Heterogeneity in human desires for power invariably skews any economic system, be it socialist, communist, or capitalist. Therefore, alternatives to capitalism are not inherently superior; they too would succumb to power imbalances and societal instability.
The biggest opportunity presented by AI, then, is more profound than mere information dissemination. It extends to genuine systemic transformation.
AI systems do not currently possess the same hunger for accumulating power that humans have displayed in the past. We can design AI systems to avoid going down a path where we worry about AI taking over humanity. Rather, the biggest long-term opportunity I see from AI is to be the center of a more equitable economic system, one which will keep democracy strong and bypass the usual accumulation and concentration of market power by a small subset of humans that every economic system has historically experienced.
The impact of technology is very much shaped by the political economy it is embedded in. If the political economy is geared towards furthering democracy, that will be the effect. If it’s designed to strengthen authoritarian control or allow greater inequality, that’s what will happen. Technology isn’t an independent catalyst of change; it’s an accelerator of existing systems.
I believe the time to act is now. The most important message is that the consequences of a technology are what you make them. Collectives of human beings shape what AI does. If furthering democracy is a goal, a democratic government can set the country on that path. So, let’s understand that we have a hand in shaping AI, not the other way around.
The current biggest risk from AI isn’t new; it’s the same risk we’ve seen from platforms for a decade. AI exacerbates the difficulty of reaching a common understanding of truth, posing a substantial challenge to democracy. While governments and civil society can leverage AI to counteract this, AI’s widespread availability means misinformation can be rapidly and convincingly tailored to individuals. There are no purely centralized technological or regulatory solutions; the only real antidote is at the fringes, an informed populace with incentives to believe the truth.
A second significant risk is that AI will require a large portion of the workforce to transition careers mid-life. Failure to manage this transition will lead to greater societal polarization and undermine democratic stability. In countries that mismanage or ignore this transition, citizens will be susceptible to tribal messages and greater fissures in their democracy. Much of the inequality and susceptibility to polarizing messages we see today can be traced back to mismanagement of transitions brought about by automation in the past.
Universities provide a complex set of resources—skills, for sure, but also critical thinking, networking, clubs, credentials and branding, a rite of passage—that help students transition from high school to their first adult career. We need a similarly comprehensive approach for mid-career transitions. The focus needs to shift from boasting about top undergraduate educational institutions to taking pride in world-class mid-career transition infrastructure, which is ultimately far more important. In the 20th century, the four-year college was at the heart of economic progress. In the 21st century, mid-career transition infrastructure will be central to the stability of democracy, and AI is accelerating the need for it.