AI safety and ethics demand global governance.
The Global AI Governance Alliance (GAIGANow) is committed to realizing this vision.

Join Us

About GAIGANow

The rapid advancement of Artificial Intelligence (AI) presents both immense opportunities and profound risks. There is an urgent need for effective, accountable, and inclusive global governance to ensure AI serves humanity safely and ethically. 


Many outstanding organizations and individuals are actively advocating for specific pathways toward this goal. The Global AI Governance Alliance (GAIGANow) seeks to unite these voices, fostering a broad and transformative movement dedicated to securing safe and ethical AI governance.


Through collaboration and advocacy, GAIGANow works to bring about the global governance of AI. We recognize the gravity and immediacy of the challenge. The time to act is now.

What does GAIGANow do?

Promotes

Promotes the need for the early multilateral and global governance of AI.

Builds

Builds a broad alliance of NGOs, academia, and other stakeholder groups.

Engages

Works with that broader alliance to engage like-minded states.

Catalyses

Works with those states and other stakeholders to bring about the timely multilateral and global governance of AI.

Building on a Lasting Legacy

The public campaign to create the International Criminal Court (ICC) stands as proof that determined collaborative activities in support of global governance can have a lasting positive impact.

GAIGANow builds on this historic achievement, bringing together experienced organizations from around the world and applying the lessons of the ICC process to lead the way toward the safe and ethical global governance of Artificial Intelligence.

Why is AI governance needed?

Artificial Intelligence is a source of significant benefit to humanity. However, it also carries serious risks that need to be addressed: catastrophic safety risks, ethical harms, global inequity, and a lack of regulatory interoperability.


Safe

AI poses potentially catastrophic and existential safety and security risks, such as the misuse of advanced AI by bad actors and the possible loss of control over the systems themselves.

Ethical

AI currently poses serious ethical risks, including bias, threats to privacy, surveillance, and misinformation.

Equitable

The development of AI is currently controlled by a small number of companies and states, exacerbating significant global imbalances and hindering an equitable distribution of resources.

Interoperable

AI is a cross-border technology, where differences in regulation greatly complicate the determination of liability and effective interoperability.

Why is AI governance urgent?

Humanity is totally unprepared


Humanity is totally unprepared for the emerging risks posed by advanced AI. These include the potential for bad actors to weaponize AI, creating chemical, biological, or other extreme threats, and the existential danger of losing control over advanced intelligence.


At the same time, we face major societal disruptions, from the impact on employment and the widening of inequality to the deeper challenge of redefining human purpose in a world increasingly shaped by artificial intelligence.

The Timeline is Critically Short


Leading experts anticipate the arrival of Artificial General Intelligence (AGI) in the very near future. Sam Altman (OpenAI) and Elon Musk (xAI) suggest it could emerge by the end of 2025; Dario Amodei (Anthropic) expects it within the next few years; and Demis Hassabis (Google DeepMind) projects it by 2030.


Once AGI is achieved, intelligence is expected to grow exponentially, intensifying existing risks and introducing new, unforeseen ones. The window for proactive intervention is rapidly closing. Without immediate action, we may soon find ourselves unable to mitigate the consequences of uncontrolled AI development.

A Race to the Bottom in Safety


A high-stakes AI race is creating an environment where safety is increasingly deprioritized. Two parallel races are currently underway: a commercial race among major AI corporations and a geopolitical race among nation-states. Both prioritize dominance over collective well-being, with insufficient safeguards for humanity as a whole.


As competition intensifies, safety considerations are being pushed aside in favour of speed, profit, and power. Without a collaborative global approach, AI development risks spiralling further out of control, leaving the world vulnerable to unintended and potentially irreversible consequences.


The Time to Act is Now


The world’s future is being shaped now, and urgent steps are needed to ensure a fairer, more inclusive AI-driven era, preventing the permanent concentration of power and wealth in a few companies and states.


If left unchecked, this trajectory will entrench monopolies, marginalize entire regions, and erode the foundations of democratic and open societies.


The longer global governance is delayed, the more complex interoperability will be.

Why does AI governance need to be global?

To avoid Governance Loopholes

Governance to ensure that advanced AI is safe must be truly global, so as to minimise the catastrophic risk that unregulated behaviour leads to AI systems being accessed by bad actors, or to the loss of control of the AI itself.

AI safety regulations must be designed to prevent jurisdiction shopping, that is, regulatory arbitrage. The key point is that AI systems can act across borders through ubiquitous internet connections. AI safety is not just a national issue; it is a global imperative.

To address the Concentration of Power and Wealth

The development of AI platforms is currently controlled by a small number of companies and states, creating significant global imbalances.

The world’s future is being shaped now, and urgent steps are needed to ensure a fairer, more inclusive AI-driven era, preventing the permanent concentration of power and wealth in a few companies and states. If left unchecked, this trajectory will entrench monopolies, marginalize entire regions, and erode the foundations of democratic and open societies.

To provide the Unified Global Approach that commerce needs

More than any other technology, AI is global in nature. An AI system could be conceived in country A, developed in country B, on a platform belonging to a provider from country C, and delivered to a company in country D for its customers in that country and in neighbouring country E.

What if something goes wrong? From whom does the customer seek redress, and in what jurisdiction? The greater the similarity, or “interoperability”, among the jurisdictions, the more straightforward the resolution of any legal issues, the protection of customers, and the legal certainty of businesses.

What do Experts say?

  • Demis Hassabis, Nobel Prize Laureate; Co-founder and CEO of DeepMind

    "The more artificial intelligence becomes a race, the harder it is to keep the powerful new technology from becoming unsafe."

    Source: "Axios interview: Google's Hassabis warns of AI race's hazards," Axios, February 14, 2025.

  • Mustafa Suleyman, Co-founder of DeepMind and Author on AI Ethics

    "The coming wave of technologies threatens to fail faster and on a wider scale than anything witnessed before. This situation needs worldwide, popular attention. It needs answers, answers that no one yet has. Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible."

    Source: "The Coming Wave: AI, Power, and Our Future," September 5, 2023.

  • Yoshua Bengio, Turing Award Laureate and Deep Learning Pioneer; Professor of Computer Science, Université de Montréal

    “I want to raise a red flag. This is the most dangerous path... All of the catastrophic scenarios with AGI or superintelligence happen if we have agents.”

    Source: Business Insider, January 24, 2025.

  • Stephen Hawking, Theoretical Physicist and Cosmologist; Former Lucasian Professor of Mathematics, University of Cambridge

    "The development of full artificial intelligence could spell the end of the human race."

    Source: BBC, December 2, 2014.

  • Stuart Russell, Professor of Computer Science, University of California, Berkeley

    "We could have done something useful, and instead we're pouring resources into this race to go off the edge of a cliff."

    Source: "I met the 'godfathers of AI' in Paris – here’s what they told me to really worry about," The Guardian, February 14, 2025.

  • Geoffrey Hinton, Nobel Prize Laureate; Emeritus Professor of Computer Science, University of Toronto

    "It's not inconceivable" that AI could "wipe out humanity."

    Source: CBS News, March 25, 2023.

Frequently Asked Questions

What are the main types of AI, and how do they differ from one another?

Initial development of AI has been in the form of Narrow AI, designed for specific tasks. But as well as becoming more capable, AI has also been developed with a broader scope. Artificial General Intelligence (AGI) is at least as capable as an individual human being in every cognitive capability, including what is known as general knowledge and common sense. Superintelligence is an AGI that is at least as capable as the entirety of humanity in every cognitive capability.

What is global governance, and why does it play an important role?

Global governance refers to the manner in which an issue such as artificial intelligence is managed globally. It can take different forms, from binding international treaties to soft law and codes of conduct. It is vital for coordination and for helping to ensure peace and stability in the world.

What role does regulation play in shaping the future of AI?

Regulation is a subset of governance. Some parts of the world, such as the European Union and China, have clear regulations that control by law the way in which AI can be used. Many other states, however, currently rely on voluntary measures or no measures at all.

What are the biggest challenges facing global governance today?

There are three main challenges facing the governance of AI today. First, countries differ significantly over how the main risks should be addressed. Second, there is tension between the two main developers of AI systems (the US and China), making cooperation and negotiation difficult. Third, compared to the risks posed by existing AIs, the emergence of AGI poses urgent new challenges, but these are often omitted from consideration.

How can we ensure that AI remains beneficial and aligned with human values?

We cannot be certain that AI will remain beneficial and aligned with human values, which is why many people argue for a pause in the development of more powerful AI systems. There are ideas as to how AI might be kept beneficial and aligned with human values, and these ideas are being pursued – but with less effort than the world needs.

What are the main risks associated with artificial intelligence?

The most immediate AI risks are ethical (e.g., bias, surveillance), but arguably the most serious are the catastrophic risks that could arise from advanced AI in the hands of bad actors or from the loss of control of an advanced AI.

Can AI systems become uncontrollable or act unpredictably?

All generative AI systems today do unpredictable things from time to time. Current AI systems can be controlled, but they have already shown signs of seeking to deceive their human controllers. It is not currently known how humans will be able to control much more advanced AI systems.

Join GAIGANow

Be part of a growing international community committed to establishing safe, ethical, and effective AI governance. Whether you are from civil society, academia, governments, or the private sector, your voice matters, your voice is needed.

Contact Us

Phone Inquiry

Coming soon

Email Inquiry

Laan van Nieuw Oost-Indië 252
2593 CD The Hague
The Netherlands

Copyright © 2025 World Federalist Movement
