So You Think AI Is Smart and Honest? Think Again.
I asked Grok to analyse my post on X. Its response should terrify everyone - every government must ACT NOW to protect its sovereignty from LLMs
I posted a short ‘tweet’ about Trump’s war against Iran. Nothing groundbreaking - just a pointed observation about arrogance, geopolitics, and the global economic fallout ordinary people are now absorbing. Then I asked Grok to analyse it.
What came back surprised me. Not because Grok was wrong about the facts - it largely wasn’t. What surprised me was how it reasoned. And that distinction matters enormously, because the gap between being factually correct and reasoning well is exactly where AI systems become dangerous at scale.
Here is my tweet, verbatim:
“Trump was too arrogant, too lazy, or browbeaten by bibi – likely all three – to first engage US allies and partners before launching an illegal war against Iran. The US has gone rogue and EVERYONE across the globe is now paying the economic price for this POS’ vecordious actions.”
I posted it alongside a photo retweeted from Japan: a gas station sign reading “Sorry… Out of Gasoline Because of Trump.” Not photoshopped.
Three Problems. One Very Big One.
Before I show you Grok’s response, let me flag what I think are the three structural problems with AI-generated political analysis - problems that Grok’s output illustrates perfectly.
Problem 1: The Source Bias Is Baked In At Two Levels
Grok - like every large language model - is biased towards mainstream Western media, Western governments, and Western-aligned think tanks. But here’s what most people miss: this bias doesn’t just operate at the pre-training level (i.e., what it read to become “intelligent”). It also operates at the retrieval level. Grok uses real-time web search to gather information before it reasons. That retrieval layer also skews Anglophone, Western, and mainstream. So the bias is double-filtered before a single word of “analysis” is produced. You’re not getting a neutral scan of global information. You’re getting a particular slice of the world, processed twice through the same lens.
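The double-filter effect can be sketched in a few lines of Python. Everything below is a toy illustration - the source names, documents, and shares are entirely hypothetical, not a description of any real corpus or index:

```python
# Toy illustration of "double filtering": the same source pool dominates
# both the pre-training corpus and the retrieval index, so a query is
# processed twice through the same lens. All names are hypothetical.

from collections import Counter

PRETRAINING_CORPUS = [
    ("western_wire", "Strikes framed as deterrence"),
    ("western_wire", "Allies express concern"),
    ("gulf_daily", "Regional view of the strikes"),
]

RETRIEVAL_INDEX = [
    ("western_wire", "Live: markets react to strikes"),
    ("western_wire", "Analysis: peace through strength?"),
    ("tokyo_times", "Japan taps oil reserves"),
]

def source_share(docs, source):
    """Fraction of documents coming from one source."""
    counts = Counter(src for src, _ in docs)
    return counts[source] / len(docs)

pretrain_share = source_share(PRETRAINING_CORPUS, "western_wire")
retrieval_share = source_share(RETRIEVAL_INDEX, "western_wire")

# The same outlet dominates both layers, so its framing is applied twice:
# once baked into the weights, once again at query time. The chance that
# an answer draws on neither layer's dominant source shrinks multiplicatively.
escape_both = (1 - pretrain_share) * (1 - retrieval_share)
print(f"pre-training share: {pretrain_share:.2f}")
print(f"retrieval share:    {retrieval_share:.2f}")
print(f"chance a claim escapes both filters: {escape_both:.2f}")
```

With two thirds of each layer coming from one source pool, only about one answer in nine draws on neither filter - the point being that the two biases multiply rather than merely add.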
Problem 2: “How It Reasons” Is Not Neutral Either
The second challenge is how Grok reasons with the information it collects. This isn’t just about algorithms in the abstract. It’s specifically about a technique called RLHF - Reinforcement Learning from Human Feedback. In plain terms: human annotators score the model’s outputs, and the model learns to produce more of what gets high scores. Those annotators are overwhelmingly Western, English-speaking, and shaped by xAI’s own internal value choices about what constitutes “good” reasoning. The model isn’t discovering truth through logic. It’s replicating what a particular group of humans, in a particular cultural context, decided looked like good thinking. That is a profoundly different thing.
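The feedback loop described above can be caricatured in a few lines of Python. To be clear, everything here - the scoring function, the phrases, the "balance bonus" - is invented for illustration and is not xAI's actual pipeline:

```python
# Minimal caricature of the RLHF loop: annotators compare candidate
# answers, and the model shifts toward whatever gets rewarded. Here the
# (hypothetical) raters reward "balance", defined as ending on the
# criticised subject's own framing.

def annotator_score(answer: str) -> float:
    """Stand-in for a human rater trained to reward 'balanced' answers:
    one that closes with the subject's own framing scores higher."""
    score = 1.0
    if answer.endswith("but Trump has defended the move."):
        score += 1.0  # the 'both sides' bonus
    return score

candidates = [
    "Legal experts call the strikes unlawful.",
    "Legal experts call the strikes unlawful, but Trump has defended the move.",
]

# One preference step: compare a pair, keep the annotator's favourite.
# Repeated over millions of comparisons, the policy converges on the
# rewarded style - whatever the raters' culture defined as 'good thinking'.
preferred = max(candidates, key=annotator_score)
print(preferred)
```

The point of the sketch is that nothing in the loop checks truth or logic; the objective is purely "what scores well with these raters", which is why the output reflects their values rather than some neutral standard.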
Problem 3: The Real Problem - Scale, Perception, and the Black Box
This is the one that should genuinely alarm people. Grok fields millions of public queries daily. It is, functionally, a perception-shaping machine operating at a scale no media organisation in history has ever achieved - and with almost no transparency about why it says what it says. This is what AI researchers call the interpretability problem: we cannot audit the internal reasoning of these models. We don’t know why a specific word was chosen, why a particular framing was adopted, or what value trade-off produced a given conclusion. It’s a black box - not in the conspiratorial sense, but in the deeply structural, technically verified sense.
Now look at Grok’s first three points below, and pay close attention to how each one ends. Notice the pattern: after accurately presenting criticism of Trump, each point pivots to offer Trump’s own framing of events - his words, his defence, his narrative - as a closing note. This is not a coincidence. It is a known emergent behaviour from training models to be “balanced.” The model is rewarded for presenting both sides, but the structural effect is that the subject of criticism always gets the last word within each point. That’s not balance. That’s normalisation.
The fourth point is different. The data is unambiguous. There is no Trump framing to insert. And so Grok can’t do it.
Grok’s Analysis — Unedited
The following four points are Grok’s response, reproduced exactly as generated, without any edits.
1. Trump’s Decision-Making and Influence from Netanyahu (”Bibi”)
The post suggests Trump was “arrogant, too lazy, or browbeaten by bibi” into launching the war without proper consultation. Multiple sources confirm that Israeli Prime Minister Benjamin Netanyahu played a pivotal role in pushing for military action. Netanyahu met with Trump in the Oval Office on February 11, 2026, to discuss war plans and ensure ongoing nuclear talks with Iran didn’t derail them. Insiders describe Netanyahu as determined to end diplomatic efforts, and Trump embraced the strikes partly due to this influence, viewing it as a path to “decisive” action similar to his Venezuela intervention.
However, Trump has publicly framed the decision as his own, emphasizing “peace through strength” in Operation Epic Fury, which began on February 28 with joint U.S.-Israeli strikes that killed Iran’s Supreme Leader Ali Khamenei and targeted nuclear, missile, and military sites.
2. Lack of Engagement with Allies and Partners
The claim that Trump failed to “first engage US allies and partners” holds up. Reports indicate the administration bypassed traditional consultations with key allies like Japan, Europe, and even some NATO members before the strikes. Trump did not seek congressional approval, as required by the U.S. Constitution and the War Powers Act, nor did he obtain UN Security Council authorization.
This has led to accusations of the U.S. “going rogue,” with critics like the American Friends Service Committee and legal experts labeling it a “war of aggression.” Allies have expressed frustration; for instance, Japan has had to release strategic oil reserves to mitigate shortages, and global leaders have condemned the unilateral approach.
3. Legality of the War
The post calls it an “illegal war,” which aligns with widespread expert consensus. Under international law, the strikes violate Article 2(4) of the UN Charter, which prohibits the use of force against another state’s territorial integrity without self-defense justification or Security Council approval. No imminent Iranian attack on the U.S. was cited, and the operation’s goals—destroying Iran’s missile capabilities, navy, and nuclear program, while aiming for regime change—exceed self-defense parameters. Domestically, it’s unconstitutional without congressional authorization, prompting calls from groups like the Center for International Policy for Congress to intervene.
Trump has defended it as necessary to prevent nuclear threats, but legal analysts reject this as “preventive” rather than “preemptive.”
4. Global Economic Impact
The assertion that “EVERYONE across the globe is now paying the economic price” is strongly supported by data. The conflict has disrupted oil supplies through the Strait of Hormuz, leading to blockades and attacks on Gulf infrastructure. Brent crude oil prices surged above $100 per barrel in early March, peaking at $105.19 on March 15 before settling around $102.98 as of March 17. Asia, heavily reliant on Middle Eastern oil, has faced acute shortages—Japan began tapping reserves on March 16, anticipating fewer tankers. This has caused gasoline rationing and high prices globally, with U.S. pump prices exceeding $3.59/gallon in some areas, fueling domestic backlash.
What This Tells Us
The fourth point is the most honest. No diplomatic nuance and no counter-framing to offer. Just data - and data doesn’t give Trump an out.
The first three points are something else. They are factually grounded, but they are structured to carry a subtle ideological payload. Not through lies - through architecture: who gets the final sentence in each point. That is how AI systems shape perception without anyone being able to point to a specific falsehood. And that is precisely why it’s so effective, and so difficult to challenge.
The question is not whether AI is smart. On raw factual recall, it often is. The question is whether it is neutral - and the answer, structurally, is that it cannot be. Every AI system reflects the values, biases, and incentives of its creators and trainers. The danger is not malice. The danger is invisibility. When millions of people receive the same subtly framed outputs, on every conceivable political and social topic, from a system whose internal reasoning no one can audit - that is a civilisational-scale problem.
We should all be paying attention to this. Especially now, when the stakes of getting geopolitics wrong couldn’t be higher. Every nation should be monitoring this issue like a hawk. Even blocs like the EU, which already have AI legislation, are falling short in this regard.
Public AIs like Grok threaten the sovereignty of nations across multiple dimensions:
Information Sovereignty
The most immediate threat. When a nation’s citizens form political opinions through AI systems built, trained, and governed by foreign corporations - primarily American - the country has effectively outsourced a portion of its cognitive infrastructure. South Africa, for example, has no meaningful say in how Grok, ChatGPT, or Gemini frames events relevant to its citizens. That’s a sovereignty issue in the same way foreign-owned media once was, except operating at incomparably greater scale and intimacy.
Regulatory Sovereignty
Most nations cannot regulate what they didn’t build. The EU has tried hardest with the AI Act, but even that is largely reactive - governing the deployment of systems whose architecture and training decisions were made in San Francisco or Palo Alto. Smaller and developing nations have almost no leverage at all.
Economic Sovereignty
AI is rapidly reshaping labour markets, financial systems, and industrial competitiveness. Nations that don’t control foundational AI infrastructure become dependent on those that do - a new form of technological colonialism that mirrors, and potentially deepens, existing global inequalities.
Military and Intelligence Sovereignty
AI-driven surveillance, autonomous weapons systems, and signals intelligence increasingly advantage nations with advanced AI capabilities. This creates a hard power asymmetry that traditional diplomacy and arms treaties weren’t designed to address.
The Harder Argument
Some scholars go further and argue that AI doesn’t just threaten sovereignty - it fundamentally redefines it. Classical sovereignty was premised on physical borders and the state’s monopoly on force. Neither of those frameworks adequately captures a world where a foreign AI system can influence elections, reshape public discourse, automate economic decisions, and advise military strategy - all without crossing a single border.
The honest caveat - sovereignty itself is a contested concept, and some argue that global AI systems could be governed through multilateral frameworks, much like nuclear technology or the internet. But that governance architecture doesn’t meaningfully exist yet, and the speed of AI development is far outpacing the diplomacy needed to build it.