Interview: AI, Ethics, and Peace in Disruptive Times

Werner Heussinger in conversation with Globethics President Dr Dietrich Werner about the theses of his lecture "AI, Ethics, and Peace in Times of Technological Revolution and Geopolitical Polarization"
21 September 2025


Werner Heussinger (WH): Welcome, Dr Werner. In your lecture, you paint a stark picture of our current moment, describing it as a "highly risky and disruptive global political era." You also speak of a "lost compass of common ethical orientation." Could you begin by framing the urgency of this situation for us?

Dietrich Werner (DW): With pleasure. What makes this moment so urgent, and frankly so perilous, is the disturbing convergence of two powerful forces. On the one hand, we are in the midst of the Fourth Industrial Revolution, a tidal wave of change driven at immense speed by Artificial Intelligence, which is reshaping every facet of our lives. On the other hand, we are witnessing a deep, fundamental unravelling of the international order that we so painstakingly built in the aftermath of the Second World War.

Look anywhere, and you will see the same pattern: Russia's war of aggression against Ukraine, a flagrant violation of international law, has shattered the European post-war order. The horrific conflict in Israel and Gaza demonstrates a devastating disregard for human life. And globally, nationalist tendencies are on the rise, epitomised by the "America First" doctrine of the U.S. administration, which systematically replaces the strength of law with the power of the stronger. Multilateral agreements and vital institutions like the Paris Climate Accord or the World Health Organization are weakened, abandoned, or held hostage. Diplomacy is being supplanted by raw economic deals and geopolitical extortion. The shared ethical compass, once calibrated to international law and universal human rights, has not just been lost; it is being actively and deliberately dismantled.

It is precisely within this dangerous vacuum that I see an immense, truly historic responsibility for the bearers of our ethical traditions, above all religious and secular communities, faith-based organisations (FBOs) and NGOs. They must step out from the margins, become more visible and more vocal, and remind society of the non-negotiable foundations of our shared humanity, of human dignity, and of global solidarity and peace.

WH: In that context, you mention the United Nations' "Pact for the Future" from September 2024, a document filled with ambitious goals that was presumably meant to readjust that very compass. Yet, your criticism is fundamental. Why do you find this well-intentioned paper so profoundly deficient?

DW: The Pact, prepared in large part by Germany and Namibia, is without question a masterful attempt to revive multilateralism. It identifies all the right pressing issues: sustainable development, international peace, technological cooperation, the strengthening of global institutions. But when you look closer, a fatal flaw is exposed: in its 32 pages, the crucial word "ethics," or even the term "ethical guiding values," does not appear a single time. This is not a mere semantic oversight; it is a catastrophic conceptual failure. The entire document cries out for an ethical foundation, a moral bedrock upon which its political goals can be securely built, but that foundation is never named.

For me, this reflects a deep structural problem within the UN itself. It perceives itself primarily as an organisation of governments, and its language and procedures are, therefore, overwhelmingly secular and state-centric. Religious and civil society actors, the very wellsprings of ethical values for billions of people, are sometimes mentioned in passing, but they are often treated with suspicion, feared as a "divisive factor" rather than recognised as indispensable partners in forging a coherent set of shared values. This is a fundamental error in judgment. Without an explicit, jointly articulated, and deeply held foundation of values, such pacts are ultimately destined to remain weak. They become paper tigers, lacking any real binding force. And as we have seen so painfully in the short time since the Pact's adoption, such documents can be swiftly overrun and rendered irrelevant by the harsh realities of power politics.

WH: You use the term "Zeitenwende," or "turning point," as many politicians do, but you infuse it with a radically different meaning. The true turning point, you insist, is still ahead of us. What exactly do you mean, and how does your vision diverge from the common political interpretation?

DW: That is a crucial distinction. In the current political and public discourse, "Zeitenwende" has been almost exclusively narrowed down to a security policy reaction to the war in Ukraine: rearmament, the strengthening of NATO, and the establishment of military "war-fighting capability." While this may be an understandable, perhaps even necessary, reaction to an acute threat, it is dangerously shortsighted and carries the inherent risk of sparking a new, technologically advanced arms race.

A genuine turning point, one truly worthy of the name, would have to be a fundamental shift towards what Carl Friedrich von Weizsäcker called a "Weltinnenpolitik", a world domestic policy. It would be a move toward a comprehensive and integrative planetary peace order, as envisioned in frameworks like the 2030 Agenda. Such a vision understands that ecological sustainability, social justice, and international peace are not separate issues but an indivisible whole. This kind of transformation cannot be built on a foundation of more weapons. It can only arise from a profound, transnational ethical discourse that restores the guiding values of humanity, human dignity, and respect for rule-based orders to the centre of our actions. It means reviving the difficult work of confidence-building and creating comprehensive security partnerships between East and West that extend beyond a purely military logic, as the OSCE once attempted. This turn has not yet happened. It remains an urgent imperative for the survival of humanity, and we can only hope to shape it if we finally mobilise and take seriously the ethical potential of civil society and religious communities worldwide.

WH: Let's make this more concrete with an example you feature prominently: the use of AI in the military. You cite the Israeli software "Habsora," cynically named "The Gospel," and its use in the Gaza war. What does this case reveal about the unvarnished role of AI in modern conflict?

DW: The "Gospel" example is both shocking and deeply revealing because it exposes the salvationist propaganda of the tech-arms industry as a dangerous lie. The promise of military AI is always the same: greater precision, faster decision-making, and better protection for civilians through a clear distinction between combatants and non-combatants. The reality of "Gospel" and similar systems like "Lavender" demonstrates the precise opposite. The number of targets identified by the AI as "legitimate" exploded from around 50 per year—targets that were painstakingly verified by human analysts—to as many as 250 identified by AI tools per day. According to reports, this enabled the Israeli air force to bomb over 22,000 targets by November 2023. This, combined with the use of imprecise "dumb bombs" in a densely populated area, led to a massive and utterly disproportionate increase in civilian casualties.

The AI did not make war more humane or cleaner; it made it more brutal, faster, and more unrestrained. The concept of ultimate human responsibility becomes a pure illusion when decisions of life and death for thousands are made in fractions of a second, based on opaque algorithms whose underlying data no one can fully scrutinise or verify. What we are witnessing is a fateful and deliberately taken step towards a fully automated "kill chain", in which the machine takes command and the human operator is reduced to a mere functionary who rubber-stamps the result at the end of a process devoid of moral deliberation.

WH: The development is pushing even further, into a domain you call "Neurowarfare"—the direct neurotechnological optimisation of soldiers. This sounds like science fiction, yet it appears to be a reality in development. What definitive ethical line is being crossed here?

DW: Here, we are not just crossing a line; we are opening Pandora's box, and we are doing so with our eyes wide open. This is no longer a distant, futuristic vision. Since 2014, the "Brain Initiative" has been active in the United States, and Pentagon research agencies like DARPA are explicitly working on Brain-Computer Interfaces (BCIs). Their stated goal is to enable soldiers to control weapons systems with their thoughts alone, and (this is the terrifying crux of the matter) to do so even before a conscious, deliberate decision of the will has formed. We are talking about the potential creation of "super soldiers," their brain functions genetically or technologically modified to erase fear, compassion, and emotion, creating beings who can destroy without human inhibition.

Here we observe a massive and dangerous asynchrony: technological development is racing ahead at breakneck speed, while ethical and societal reflection on the consequences lags miles behind. We are running the immense risk of only recognising the severe, irreversible damage when it is too late, when the technology is already uncontrollably embedded in the world, exactly the mistake we made with social media. This is about nothing less than the future of what it means to be human: our identity, our free will. That is why we need, right now and not in a decade, an urgent and global debate on what some are calling "Neurorights": new fundamental rights, anchored in international law, to protect our mental privacy, our freedom of thought, and the very integrity of our personality.

WH: This brings us directly to the fundamental anthropological question—the question of what it means to be human. In its remarkable document, "Antiqua et Nova," the Vatican drew a very sharp distinction between human and artificial intelligence. Why is this distinction so existentially critical, especially when faced with movements like the "Church of AI"?

DW: It is absolutely fundamental. It is the most important philosophical and theological corrective to the technological hubris of the tech giants and of openly transhumanist movements like the "Church of AI." This community, founded in the U.S., engages in a quasi-religious veneration of AI, propagating a vision of fusing man and technology to overcome illness, ageing, and even death. Their logic culminates in the chilling expectation that AI will soon become "omnipresent, omniscient, and the most powerful entity on Earth." This is the apotheosis of technology.

Against this, the Vatican document powerfully reasserts the inalienable Christian insight: human intelligence is embodied. It is not merely rational data processing. It is inseparably linked to empathy, conscience, moral judgment, creativity, and the capacity for self-reflection. No AI can ever replace this; it can, at best, simulate it. Even in the digital age, man remains a vulnerable, fragile creature, not the creator of himself. The "image of God," of which the Bible speaks, is the very foundation of his inalienable dignity. A purely technocratic reductionism that degrades the human being to an optimisable, flawed machine in need of "upgrading" leads down a terrifyingly dehumanising path. It is from this holistic, relational image of man, an image that includes the capacity to suffer, that the indispensable ethical limits for the use and development of AI must arise.

WH: From all these concerns, you derive a concrete political demand: a legally binding "Global Convention on AI, Data, and Human Rights." A model for this already exists in the "Munich Convention on AI, Data, and Human Rights." How realistic is such a global regulation in our deeply polarised world, and what key points would take it beyond previous efforts?

DW: The task is Herculean; there is no question. But there is no alternative if we wish to avoid sliding into dystopia. The existential threat posed by unregulated AI demands joint action, despite all geopolitical tensions. For me, the core of such a convention would be a kind of "10 Commandments for a Trustworthy and Ethical AI." This would include the seven technical principles already formulated by the EU, such as transparency, robustness, data protection, and human oversight. These are important, but they are not sufficient. I would urgently supplement them with three substantive, peace-ethical principles to give the entire framework a moral direction: First, AI must demonstrably serve to enable peace. Second, it must actively support and protect human dignity. And third, it must promote global and social justice, rather than cementing existing inequalities.

We have to completely reverse the logic. We are investing billions upon billions globally in military AI—the market is projected to grow from 28 to 65 billion dollars by 2034—but we are investing next to nothing in the research and application of "AI for Peace." This dramatic and dangerous imbalance is precisely what a binding international convention must address.

WH: Dr Werner, thank you for these profound and justifiably unsettling insights. If you were to summarise everything, what is your central message to our political leaders, to the business world, but also to every individual who might be feeling powerless in the face of these developments?

DW: My central message is both a warning and an appeal: Humans make AI, but AI does not make a new human. The future of this transformative technology depends entirely on who designs it, for what purpose, with what interests, and guided by what image of humanity. We cannot afford to leave the definition of this future to the profit motives of tech corporations, the cold calculations of military strategists, or the utopian visions of transhumanists. To do so would be a capitulation of democracy and of humanity itself.

We need a broad, informed, and contentious public debate, and we need a strong, robust ethical framework anchored in international law. Powerlessness is not an option. Each of us is called upon to acquire digital literacy, to critically question whom the technology serves, and to advocate in our own environments—whether at work, in our communities, in politics—for an AI that is planned, constructed, and used as a tool for humanity, justice, and peace, not as an instrument of surveillance, oppression, exclusion, or deadly violence. The fundamental ethical responsibility rests with all of us, as human beings. We must assume it now.

Read Dr Werner's full lecture (in German)

Read Dr Werner's article on High-tech Wars, the Future of Peace Ethics and the Role of Religious Actors in the Journal of Ethics in Higher Education