International experts map path forward for AI safety in Melbourne
Globethics played a central role at the International AI Cooperation and Governance Forum 2025, chairing a key panel titled “From Risks to Responses: Collective Action for AI Safety.” The forum was hosted by the Centre for Artificial Intelligence and Digital Ethics (CAIDE) of the University of Melbourne, Australia, in collaboration with the National University of Singapore and Tsinghua University.
Held under the theme “Inclusive AI: Who Builds – Who Benefits,” the event brought together leading academics, industry experts, and regulatory authorities to advance global cooperation in the governance of artificial intelligence.
Paola Galvez Callirgos, Globethics' AI Ethics Manager, moderated the roundtable, which focused on moving from real-world AI safety risks to practical, co-created mitigation strategies. The session examined technical insights, pressing safety challenges, and sector-specific case studies before opening into a collaborative discussion of solutions.
The expert panel featured:
- Ratna Malar Selvaratnam, Manager of Learning Technologies & Innovation, Edith Cowan University, and member of the Globethics Pool of Experts
- Tianhao Chen, Associate Professor, School of Public Policy and Management, Tsinghua University
- Kate Seward, Director of Corporate, External and Legal Affairs, Microsoft
- Dr. Zena Assaad, Senior Lecturer, School of Engineering, The Australian National University
Together, the speakers contributed to developing a shared map of priority AI safety risks and a set of proposed actions aimed at strengthening global AI safety efforts.
The Forum underscored the importance of international collaboration as a complement to national regulation and industry self-governance, affirming that inclusive, cross-border engagement is essential to ensuring that AI's benefits are equitably distributed and its development proceeds responsibly.