Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive', and Expert Member at the Global AI Ethics Institute, Bengaluru, India
Technologies like deepfakes can blur the lines between reality and fabrication. They can serve as potent tools for disseminating misinformation, fostering divisiveness, and even skewing political landscapes, as demonstrated by incidents like the Cambridge Analytica scandal, in which users' data were exploited to manipulate voting behavior, thereby undermining the core tenets of democratic societies.
However, I view these challenges not as insurmountable obstacles but as pivot points to inspire robust policy frameworks. Europe, for example, is at the forefront with initiatives like the EU AI Act that aim to regulate AI's societal impact. This surge in regulatory consciousness is not merely a protective response but also a way to streamline how AI integrates with democratic structures globally. Countries are gradually awakening to the need for regulations that could harmonize AI's use, from autonomous weaponry to healthcare.
One underexplored dimension that could bridge the gaps is spirituality, a concept I approach through a Buddhist lens. The universal ethics of compassion and interconnectedness could inform AI governance, transcending national and individual biases. When ethics are fragmented across nearly 8 billion people, each with divergent views, a universal spiritual touchstone offers a cohesive narrative.
Democracy, in its essence, thrives on the plurality of thought and the collective wellbeing of its citizens. And Buddhism emphasizes the same: the notion of 'collective interbeing.' If AI is built and regulated on these tenets, it could not only deter divisive applications but also empower democratic institutions. For instance, AI systems could be designed to prioritize collective welfare, making political processes more transparent and inclusive.
Buddhism champions right mindfulness, right view, and right effort. These principles aren't exclusive to individual spiritual growth; they hold powerful implications for the way we develop, use, and understand AI technologies. An algorithm designed with 'right effort' in mind would prioritize minimizing harm, whether in its data collection methods or the way it impacts user behavior. In the spirit of 'right view,' it would avoid contributing to the cycle of misinformation and instead aim to enlighten and facilitate informed decisions.
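To make this concrete, here is a minimal sketch of how such a priority could be encoded in a ranking step: content is scored not by engagement alone but with explicit penalties for predicted harm and misinformation. Everything in it, the ContentItem fields, the scores, and the weights, is a hypothetical illustration, not a description of any existing system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContentItem:
    text: str
    engagement: float  # predicted engagement, e.g. click probability (hypothetical)
    harm: float        # predicted likelihood of causing harm, in [0, 1] (hypothetical)
    misinfo: float     # predicted likelihood of being misleading, in [0, 1] (hypothetical)

def rank_feed(items: List[ContentItem],
              harm_weight: float = 2.0,
              misinfo_weight: float = 3.0) -> List[ContentItem]:
    """Rank content by engagement minus explicit penalties for predicted
    harm ('right effort') and misinformation ('right view')."""
    def value(item: ContentItem) -> float:
        return item.engagement - harm_weight * item.harm - misinfo_weight * item.misinfo
    return sorted(items, key=value, reverse=True)

feed = rank_feed([
    ContentItem("balanced explainer", engagement=0.6, harm=0.05, misinfo=0.02),
    ContentItem("outrage bait", engagement=0.9, harm=0.40, misinfo=0.60),
])
print([item.text for item in feed])  # the explainer outranks the more 'engaging' outrage bait
```

The point is not the specific numbers but the design choice: harm and misinformation enter the objective function itself, rather than being handled as an afterthought.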
These principles extend beyond developers and researchers to include policy-makers and even laypeople impacted daily by AI technologies. In our digitally interwoven society, we often forget that every action has a reaction—every piece of code, every algorithm, affects someone, somewhere. A choice made in a Silicon Valley boardroom may well reverberate in a remote village halfway across the world.
It’s tempting to think that the benefits of AI accrue only to those who can afford it—the developed nations that pioneer these technologies. But herein lies a troubling irony. While AI has made life more convenient for those in developed countries, it often does so at a staggering environmental and human cost, shouldered disproportionately by underrepresented groups and developing nations. In the mad rush for AI-fueled conveniences, we end up accumulating waste, both material and digital, and exploiting resources and people in far-flung corners of the globe. But remember, we’re all interconnected. Ultimately, the harms we indirectly cause will find their way back to us.
Buddhist philosophy also offers a lens through which to view the contentious issue of autonomous weapons. Principles of non-harm directly contradict the very idea of a machine designed to kill. It’s not just about the developers or the nations that deploy these systems; it’s about the collective ethical foundation on which we choose to build our future.