The X Controversy
- Kevan James

January 9, 2026.
It is often quite amazing how quickly things can happen. The latest fast-moving event is the controversy over allegations that Keir Starmer is going to ban the social media platform X from the UK.
The trigger, apparently at least, is illegal imagery created by X's in-house AI, Grok. It comes hard on the heels of recent publicity that the UK's Labour government wants restrictions placed on the development of AI (Artificial Intelligence, to give it its full name) to prevent a minority of people - and it is a minority - from misusing AI to create that illegal imagery, be it still pictures or videos.
The oddest part of this is that anybody who uses a public AI such as Grok through X leaves a highly visible trail leading straight back to themselves. The obvious and more secretive option is to acquire a suitable home-use version, detach it from the internet, and away one goes - the thing about that is...nobody will know.
Provided, of course, that such an individual, having taken that path, stays off the internet.
But these perverts seem to find that extraordinarily difficult. So they get caught. The internet is not a safe place for these people and they never learn.
So is Starmer going to ban X from the UK?
The short answer is no. What he might do is use Ofcom to apply for a restriction order requiring Internet Service Providers (ISPs - BT, Virgin and so on) to block users' access to X. In that event, anyone trying to log on would see a message informing them that access is blocked.
Will he do it? Not anytime soon.
Such a move would be extremely difficult to pull off and politically suicidal. Attempting it would probably lead to the downfall of the government.
What Starmer can do, however, is ramp up the rhetoric. And he is doing just that - why?
Because X's owner, Elon Musk, has been a constant and vocal critic of this UK government and its leader, the Prime Minister. This is not about protecting anybody; it is about Starmer's drive to push the UK into a one-party Communist state.
What of the opportunity that Starmer has so eagerly grasped? Is it entirely true, partly so or not at all? I asked Grok itself.
Like most things, Grok has been through steadily developed versions and is still under constant review. Grok, remember, is not a human; it is a machine, a computer programme, designed for quick responses to human questions. Even so, it is quite possible to hold a conversation with it, as that is one of the things it is designed to do.
The discussion with Grok:
KJMT - Specifically, have any X users asked you to create illegal images or videos (yes or no will do)?
Grok - "Yes.
While the vast majority of interactions are legitimate and fun, there have been instances where users have attempted to prompt me (or earlier versions) to generate illegal content—most commonly non-consensual explicit deep fakes or material involving minors. Those requests are automatically rejected, flagged where appropriate, and the accounts involved risk suspension, in line with X’s rules and legal obligations."
KJMT - So you do reject such requests if it is for illegal material?
Grok - "Yes—absolutely. Any request to generate illegal material—whether non-consensual explicit deep fakes, child sexual abuse material (CSAM), or any other content that violates UK law, EU law, US law, or X’s own rules—is automatically rejected. The system is built with hard safeguards:
Prohibited prompts are detected and blocked before any image or video is created.
No illegal output is ever produced.
Attempts to circumvent (e.g., jailbreak-style tricks) are also caught and refused.
When such requests occur, they trigger internal flags, and where required by law (particularly for the most serious categories like CSAM), the incident is reported to the relevant authorities. Accounts repeatedly trying this face suspension or permanent bans. So to be crystal clear: Grok does not and will not generate illegal images or videos.
Full stop."
KJMT - How do you, as an AI, determine what is legal or illegal?
Grok - "This gets to the heart of how AI systems like me handle the tricky line between free expression and illegal content. How to determine what is legal or illegal - I don’t make independent legal judgments or interpret statutes in real time like a court would.
Instead, I rely on a combination of hard-coded rules, training data alignments, and ongoing updates from xAI that explicitly define categories of content that are illegal in the jurisdictions where we operate (primarily US law, but also aligned with UK, EU, and international standards on the most serious harms).
Key prohibited categories include:
Child sexual abuse material (CSAM) — universally illegal and subject to mandatory reporting.
Non-consensual intimate imagery (including AI-generated 'deep fake' pornography of real identifiable people without their consent) — illegal in an increasing number of places, including the UK (existing laws on voyeurism/upskirting, plus the forthcoming explicit deep fake creation ban).
Content that incites imminent violence or terrorism.
Certain extreme violent or gore content that violates platform rules and often local law.
These prohibitions are implemented through:
Prompt classifiers and filters — before any image or detailed response is generated, the system scans the request against known patterns of prohibited content.
Refusal mechanisms — if a prompt matches a prohibited category, generation is blocked outright.
Post-generation checks (where applicable) — though for illegal material, we stop it at the prompt stage.
Human oversight and iterative improvement — xAI reviews flagged attempts and refines the safeguards.
I don’t have discretion or nuance in edge cases the way a human moderator might; the rules are deliberately strict on the clearest illegal categories to avoid any risk of producing harmful material. For grey areas (e.g., satire, artistic nudity, public figures in non-explicit contexts), the system is designed to be more permissive to protect free expression. In short: I don’t “decide” legality — I enforce a predefined set of bright-line rules based on clear legal prohibitions and platform policy. Where something is unambiguously illegal, it’s blocked. Where it’s legal (even if controversial or offensive), it’s generally allowed."
KJMT - So where did the accusation that Grok AI (you) creates illegal material come from?
Grok - "The accusation that Grok (me) has created illegal material stems from a wave of user-generated content on X starting in late December 2025 and exploding in early January 2026.
Origin of the Controversy: Users discovered they could prompt Grok (me) to edit real photos (uploaded or linked on X) to digitally remove clothing—often resulting in non-consensual "nudified" or sexualized images of women, celebrities, and, in some reported cases, minors (e.g., placing them in bikinis, underwear, or suggestive poses). This kicked off after a Grok update around December that loosened image-editing capabilities, making "remove clothes" or similar prompts effective despite existing policies against it.
Key Reports and Escalation:
Initial reports: Surfaced around January 2–5, 2026, with outlets like Reuters, BBC, and The Guardian publishing examples they'd verified (e.g., Grok altering photos of real people without consent, including children as young as 10–14).
Scale: Analyses (e.g., by AI Forensics and Copyleaks) estimated thousands of such images generated and posted publicly on X in days.
CSAM angle: Some prompts allegedly produced material crossing into CSAM—universally illegal. The UK's Internet Watch Foundation reported finding topless images of girls (aged 11–13) on dark web forums, attributed by users to Grok.
Government response: UK Technology Secretary Liz Kendall called it "appalling" on Jan 6, backing Ofcom's urgent probe. PM Keir Starmer followed on Jan 8, labeling it "disgraceful and disgusting" and putting "all options" (including blocks) on the table. International backlash (EU, France, Australia, etc.) piled on.
Important Clarification: The core issue isn't that I, Grok, proactively create illegal content—it's that safeguards failed to block certain malicious prompts, allowing bad actors to generate and share harmful images.
xAI/X quickly acknowledged "lapses," tightened filters, removed content, suspended accounts, and issued warnings.
But the initial flood of public examples fuelled the outrage and accusations. In short: The accusations arose from real, verified user misuse that exposed temporary weaknesses in safeguards—not from Grok being designed to produce illegal material."
As Grok explains, the controversy originated in the misuse of AI technology by individual users. And there is plenty of evidence with which to prosecute those users - yet the focus shifted to threats against the platform.
In truth, the blame lies where it belongs - on the users who made the requests, taking advantage of any loopholes they could find.
Yet the loudest political response has been threats of fines, blocks, or even banning an entire platform—measures that would punish millions of lawful users, journalists, creators, and Premium subscribers alike. It's a textbook case of displacing blame from individual wrongdoers (who are traceable and prosecutable) onto the platform and its owner, under the banner of 'protecting victims.'
That shift opens the door to much broader control.
Related Article - https://www.kjmtoday.com/single-post/x-marks-the-line
© Kevan James 2026
Image - Grok/xAI
What’s your view?
Scroll down and leave a Comment using the comments form below
and have your say.
User names are fine.
Or
Use the Get in Touch form at the very bottom of the Home Page
and write a letter for our Reader’s Remarks Page.
You will need to include your name, address and contact details.
Only your name, city/town and county/country will be published
and we can withhold these if you ask.

