Google is clearly ambitious and bold on AI. In its Responsible AI 2024 report, Google makes a striking admission: that "Artificial General Intelligence (AGI) is coming into sharper focus". This isn’t just a vague nod to progress in AI research; it signals a fundamental shift in how we should think about the future of artificial intelligence. AGI is an important acronym to learn. The most famous fictional AGI is Skynet, the self-aware AI that is the main antagonist of the Terminator film and TV franchise. So if Google is getting closer to AGI, we had better be sure that there are guardrails in place. We know how those particular movies end!
TL;DR – While dystopian narratives tend to exaggerate the risks, I do feel we are looking at a worrying future. The harsh reality, it seems to me, is that guardrails are not in place, and if we have learned anything at all from the last two decades of exponential growth in mobile devices and social media, and the way they have transformed our society, it is that disruptive technologies almost always outpace our ability to govern them effectively.
Are we ready for AGI?
AGI represents a level of machine intelligence that can perform any intellectual task that a human can, and potentially perform it better. Unlike the narrow AI technologies now becoming mainstream, which have been fine-tuned (and arguably brute-forced) for specific domains (think ChatGPT or self-driving cars), AGI would be a flexible, capable and autonomous problem solver across a broad range of disciplines. It’s the technological equivalent of moving from the Wright Flyer to Concorde. This might in turn bring about what has been called the technological singularity: a hypothetical future point at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences.
Dystopian or not, if Google is getting closer to AGI, we had better be sure that guardrails exist and are robust.
AI for weapons
For years, Google's motto was 'Don't be evil'. In 2015 this changed to 'Do the right thing', and in 2018 Google publicly distanced itself from military AI applications. Its 2018 AI principles explicitly stated that it would not develop AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Google’s participation in Project Maven, a controversial Pentagon AI initiative for analysing military drone footage, sparked internal employee protests, which were likely a factor in this. (Sources: BBC, https://www.bbc.co.uk/news/business-44341490, and The Guardian, https://www.theguardian.com/technology/2025/feb/05/google-owner-drops-promise-not-to-use-ai-for-weapons)
In 2025, that commitment is gone. In their blog, Google executives said the company's original AI principles, published in 2018, needed to be updated because the technology had evolved. This shift highlights that AI may have advanced to the point where it is too valuable for governments and corporations like Google to ignore. Whether it is for weapons systems, autonomous drones, or algorithmic warfare, AI will play a role in military strategy whether Google or any other company cares to admit it.
The ethical dilemma is obvious. If AI is integrated into weapons and defence systems, how do we ensure that humans remain in control? What happens when AGI starts making battlefield decisions faster than human commanders can react? In a world where AI accelerates conflict decision-making, deterrence strategies and long-established norms around rules of engagement could become dangerously unpredictable.
What might the guardrails need to be?
If AGI is inevitable, then governance and safeguards need to evolve at the same pace as the technology itself. That’s not happening. Right now, AI regulation is either toothless (self-imposed corporate ethics statements) or outdated (laws written for a pre-AI, pre-Web world).
So what would meaningful guardrails actually look like?
- Mandatory AI explainability and audit – If AI models are making high-stakes decisions in areas like healthcare, finance, or military operations, they must be explainable and auditable. Black-box decision-making in critical domains is unacceptable. Explainability could deliver improved trust and acceptance of AI, better decision-making and, probably most importantly, reduced liability.
- Hard constraints on autonomy – AI should never have the independent authority to launch attacks, trigger military responses, or escalate conflicts. The “human in the loop” principle must be legally enforced, with no room for ambiguity (see the sketch after this list for what that could look like in practice).
- AI alignment with human values – If AGI surpasses human intelligence, how do we ensure it remains aligned with human values? This problem is unsolved, and yet we are rushing toward AGI anyway. Alignment research needs to be prioritised at a national and global level, not left to the corporate discretion of the tech bro corporations. In any case, given that human values differ between jurisdictions, this would be hard to manage globally.
- Global AI Treaties – We have nuclear non-proliferation agreements and international agreements governing airspace and use of the oceans, but no serious equivalent for AI. A global framework for AI, akin to the Geneva Conventions, is needed before AGI reaches unpredictable levels of capability.
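To make the “human in the loop” idea concrete, here is a minimal, purely illustrative Python sketch of what an approval gate with an audit trail could look like. Every name in it (AuditRecord, request_human_approval, execute_action) is hypothetical, and it is not based on any real military or Google system; the point is simply that the AI can only propose, a named human must explicitly approve, and every decision is logged.

```python
# Illustrative sketch only: a "human in the loop" approval gate with an audit trail.
# All names here are hypothetical; this is not drawn from any real system.
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AuditRecord:
    """One auditable entry: what the AI proposed, and what the human decided."""
    timestamp: float
    proposed_action: str
    model_rationale: str
    approved: bool
    reviewer: str


def request_human_approval(proposed_action: str, model_rationale: str,
                           reviewer: str) -> AuditRecord:
    """Block until a named human explicitly approves or rejects the proposal."""
    print(f"AI proposes: {proposed_action}")
    print(f"Stated rationale: {model_rationale}")
    answer = input(f"{reviewer}, approve this action? [yes/no] ").strip().lower()
    record = AuditRecord(
        timestamp=time.time(),
        proposed_action=proposed_action,
        model_rationale=model_rationale,
        approved=(answer == "yes"),
        reviewer=reviewer,
    )
    # Append-only log so every decision can be reviewed after the fact.
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record


def execute_action(record: AuditRecord) -> None:
    """Only ever runs with an approved, logged record; refuses otherwise."""
    if not record.approved:
        raise PermissionError("Action was not approved by a human reviewer.")
    print(f"Executing: {record.proposed_action}")


if __name__ == "__main__":
    decision = request_human_approval(
        proposed_action="Reroute surveillance drone to sector 7",
        model_rationale="Anomalous activity detected with 0.92 confidence",
        reviewer="duty_officer",
    )
    if decision.approved:
        execute_action(decision)
    else:
        print("Action rejected; nothing executed.")
```

A real system would obviously need far more (authentication of the reviewer, tamper-proof logging, timeouts, escalation paths), but even this toy example shows that “human in the loop” can be an enforceable design property rather than a slogan.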
Without these, or something similar and equally robust, AGI development will be dictated by the economic and military interests of those leading the race. That should be cause for concern.
Why Are Our Elected Representatives So Ill-Equipped to Deal With It?
The simple answer? Most politicians lack even a basic understanding of AI, let alone AGI. This is epitomised by their rush to embrace AI innovation without much thought.
In 2018, during a US congressional hearing, Senator Orrin Hatch asked Mark Zuckerberg how Facebook makes money if it doesn’t charge users. Zuckerberg’s deadpan reply—"Senator, we run ads"—became a viral moment, exposing just how out of touch legislators are with even basic digital business models.
Fast-forward to today, and the knowledge gap is even more dangerous.
- Lawmakers struggle to grasp how algorithms shape public discourse, influence elections, and automate economies.
- Most regulation efforts focus on reactive measures (e.g., limiting deepfakes, AI-generated fraud, or bias in hiring algorithms) rather than anticipating the next wave of AGI-related risks.
- The speed of AI progress means policies debated today will be obsolete before they are enacted.
Even in the UK, where a House of Lords AI committee exists, there is little concrete action beyond issuing reports. Meanwhile, companies like Google and OpenAI are making decisions about AI’s future, completely beyond any possible democratic oversight.
There is a ray of hope in the EU, however. "The EU AI Act approved in May 2024 marks seminal progress in regulating AI and related technologies. It included a ban on social scoring, limitations on remote biometric surveillance technologies, and mandated human rights risk assessments for ‘high risk’ uses.
"However, it also included significant loopholes for national security, law enforcement and border policing, and prioritized company liability risks over human rights risks. Many European Parliament members reported being targeted with spyware in 2024. The EU has yet to take steps to rein in the development, sale, and use of this technology." (Source: Human Rights Watch, https://www.hrw.org/world-report/2025/country-chapters/european-union)
What Can Be Done About It?
While expecting politicians to suddenly become AI experts is unrealistic, there are concrete steps that can be taken to mitigate the risks of AGI development.
- Independent AI Oversight Bodies – Governments should establish technically competent, independent AI commissions that monitor and regulate AGI progress outside of corporate influence. These should be staffed by AI researchers, ethicists, and security experts and not career politicians with no technical background.
- Mandated Corporate Transparency – AI labs like Google DeepMind, OpenAI, and Anthropic must be legally required to disclose progress on AGI capabilities, alignment strategies, and risk mitigation efforts. Right now, too much AI development happens behind closed doors, with vague assurances from executives.
- Public AI Education Initiatives – The general public is alarmingly unaware of the implications of AGI. Schools and universities need core AI literacy programs so that the next generation of voters and policymakers can engage meaningfully with AI governance.
- Stronger International AI Agreements – The UK, EU, and US have taken piecemeal approaches to AI policy. A coordinated global framework for AGI development and safety is essential, ideally backed by the UN or a newly created AI-specific oversight body.
- Slow Down AGI Development Until Guardrails Exist – This is the most controversial, but also the most logical. If AGI is as powerful as its proponents claim, then why the rush? Governments should consider moratoriums or phased deployment strategies to ensure safeguards are in place before AGI is widely adopted.
Final thoughts
We are entering an era of exponential technological progress, far beyond the ability of most governments to manage. AI companies and tech behemoths will continue to push boundaries, citing innovation and competition as justification, and participating in a predictable gold rush of one-upmanship which reinforces these behaviours. By the time regulators catch up, the AI horse will have bolted.
Google’s Responsible AI 2024 report suggests AGI is no longer a distant theoretical concept; it is on the horizon. The question is not whether AGI will arrive, but whether we’ll be ready when it does.
At present, the portents lead me to think we won't.
Image attribution: "The Park Service Is Cutting Down on the Building of Fences and Other Protective Structures in the Canyonlands in an Attempt to Keep the Natural Landscape as Undisturbed as Possible, 05/1972" flickr photo by U.S. National Archives https://flickr.com/photos/usnationalarchives/3814968058 shared with no copyright restriction (Flickr Commons)