
Society

Kay Firth-Butterfield

CEO, Good Tech Advisory; former Head of AI and Member of the Executive Committee at the World Economic Forum; Austin, USA

In 2024, elections will be held in countries representing more than half of global GDP. The urgency of addressing questions about AI and democracy, and about the biases hidden in AI data, could not be greater. Moreover, access to AI is not universal: around 3 billion people do not even have internet access. This situation deepens the digital divide and raises geopolitical questions of power. Countries with diverging views on democracy and human rights, such as the U.S., Europe, and China, are competing for AI supremacy, which could change democracy worldwide for better or worse. In ten years, our everyday lives could look entirely different. The question is therefore not only how we govern AI today, but what vision of society we have for tomorrow. Governments have a duty both to maximize the benefits of AI and to establish safeguards against its risks. In these challenging times, the younger generation must engage with the core questions of human identity. A profound understanding of history, literature, and art is as indispensable as technological expertise.

Full English original:

In 2024, countries representing more than half of the world’s GDP will be holding elections. The urgency to address issues around AI and democracy could not be higher. Additionally, we must confront the biases embedded in the data AI utilizes. A vast majority of data, especially in healthcare and social contexts, is skewed toward specific demographics, for example the bulk of data available about heart attacks relates to white American men over 55.
 
Generative AI in particular risks perpetuating existing biases. Because it trains on historical data, the AI could entrench patriarchal norms, reinforcing a skewed vision of democracy and contributing to a more polarized and less inclusive discourse. 
 
Moreover, access to AI is not universal. About 3 billion people worldwide don’t even have internet access, leaving them excluded from both the potential benefits and risks of AI. This contributes to an increasing digital divide and raises geopolitical questions around influence. Countries with differing views on democracy and human rights, like the U.S., Europe, and China, are all in the race to dominate the AI landscape, which could significantly impact global democracy for better or worse.
 
This issue of restricted access goes beyond the digital divide. It permeates the fabric of our society where many people, even when they have access to information, do not see themselves represented in the dialogue. This lack of representation can be discouraging, often preventing these individuals from engaging in democratic processes, simply because they can’t envision themselves in roles such as politicians.
 
In the United States, this issue is further complicated by legislative moves in many states that make it significantly harder to vote. Such actions underscore the importance of equitable access to reliable information on voting procedures.
 
Artificial Intelligence will fundamentally transform society. Ten years from now, the fabric of our daily lives could be radically different. This change extends from how our children are educated to job security and elderly care. Therefore, the question isn’t only about how we govern AI today but what society we envision for tomorrow. Governments have an inherent social contract to ensure the well-being of their citizens. To honor that, political leadership needs to erect guardrails for AI’s potential harmful impacts while nurturing its benefits.
 
Europe’s forthcoming AI Act is promising; it sets a precedent other nations may follow. Just recently, California began formulating its own guidelines on the governmental procurement of AI, aiming to shield citizens from AI-related risks. But government initiatives should also be forward-thinking, not just reactive. As AI’s role in education and work grows, for instance, we need to consider the kinds of data that shape these AI systems. Should a toy company have unrestricted access to our children’s data? Or should we have stringent rules to protect that information?
 
Let’s not be solely consumed with today’s problems. We must also consider the future, asking crucial questions about the societal structure we desire in 2045 or 2050. Only then can we truly say that we’re maintaining our end of the social contract in the age of AI. In our rapidly evolving landscape dominated by AI and a host of other complex challenges, including ecological crises, it’s crucial for the next generation to grapple with what makes us fundamentally human. A deep understanding of history, literature, and the arts is just as important as technological know-how.
 
Reflecting on what preeminent AI scientist Stuart Russell said a few years ago, as machines do more work, our most significant task will be to understand and be there for one another, fostering compassion and connection. It’s this very essence of humanity that will enable us to leverage AI in a manner that augments our societal good, rather than diminishes it. So my advice to young people navigating this challenging world is to not let the speed of information consumption rob you of the depth of human experience. The ability to think deeply, to feel profoundly, and to interact meaningfully takes time — and that time is well spent.


human is the leading medium for strategic decision-makers and for everyone who will work and live with AI. We put people at the center. For a future worth living, for all.

Follow our invitation to dialogue – become part of an exciting journey…
