I’ve written a paper, AI’s War on Truth, which you can download here. It’s in four sections: it leads on the unprecedented threat generative AI chatbots pose to truth, democracy and reality, then covers how that is happening, what Civil Society might do about it, some framing and communications material, and the spellbinding hold Silicon Valley has over our politicians. There’s a shorter paper of section extracts (but not the conclusions) here. Do let me know what you think by posting a comment or contacting me directly.
Civil Society campaigns are about contested versions of reality, so truth matters. It matters too in politics, and often in law and justice, science, education, news journalism and other domains where the capability to establish the truth by testing it against evidence is central to modern civilisation, society and democracy. Things have moved gradually in that direction ever since the Enlightenment, but now they have gone into reverse.
Our ability to know what’s true and false, real or fake, is under attack from artificial intelligence, or to be more precise, from the operation and outputs of LLM-based AI chatbots such as ChatGPT. They fabricate content which passes as real but isn’t. OpenAI acknowledges that 1 in 10 of ChatGPT’s outputs is a ‘hallucination’, a techy euphemism for a lie. With 2.5 billion user ‘prompts’ every day, leading to 2.5 billion ‘inferences’ or responses, that’s 250 million fakes a day, just from one AI chatbot. (And that’s just the start – see Part 3 for bad behaviours which make such models so untrustworthy and unreliable that, if they were actually human, they’d be arraigned as conmen, fraudsters, snake-oil salesmen or threats to national security, and I’m not joking.)
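For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that 250 million figure, sketched as a few lines of Python. The 2.5 billion daily prompts and the 1-in-10 hallucination rate are the figures cited above, not measurements of my own:

```python
# Back-of-the-envelope arithmetic for the scale claim above.
# Both inputs are the figures cited in the text, not independent measurements.

daily_prompts = 2_500_000_000   # roughly 2.5 billion user prompts per day
hallucination_rate = 0.10       # the acknowledged ~1-in-10 hallucination rate

fabricated_per_day = int(daily_prompts * hallucination_rate)
print(f"Estimated fabricated responses per day: {fabricated_per_day:,}")
# Prints: Estimated fabricated responses per day: 250,000,000
```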
To say it puts the misinformation impact of Social Media in the shade would be an understatement, and it’s even more addictive to users.
This coming Sunday, 30th November, marks the third anniversary of OpenAI’s release of ChatGPT into ‘the wild’ of the internet, and every corner of it is now being polluted with AI ‘synth’, or synthetic content, which appears to be human-generated but is not. Such synth pollution, or info-pollution, now makes up most of the content online, a lot of it from ChatGPT, which has over 60% of the chatbot market and is closing in on a billion users. It’s even a threat to AI development itself: when models are fed synthetic content to learn from, they can collapse. See the story of the Church architecture which became a colony of Jack Rabbits (contents list below).
This is why thousands of active and former AI researchers have repeatedly called for regulation and for a pause on the ‘race’ to AGI, or ‘Artificial General Intelligence’, because the chosen stepping stone to that goal of intelligence ‘better than human’ is the development of LLMs, or Large Language Models. Current versions of those run the AI search boxes which pop up on Google and other search engines, and are available in app form, on tech company websites and as paid-for versions.
So far those calls for regulation have failed because politicians are conflicted. Some, such as the UK Government, have chosen to believe that such AI will produce an almost miraculous increase in productivity, and so are mandating the vast datacentres which scaling up LLMs requires (something many other AI technologies, with far fewer issues and a much better track record of being useful, do not need). Others, such as Donald Trump, explicitly see winning the race to AGI as a competition with China for global dominance.
Some economists and the financial media are far more sceptical, pointing out that LLM generative AI chatbot tech in particular has failed to improve bottom lines except for the companies involved in building the datacentres (paying them to dig very expensive holes and then fill them in again would do that too). Nobody has informed the public of the real pros and cons of LLM-powered chatbots and then asked them whether they want the technology in unregulated form, or the race to AGI, at which point it would probably be impossible to control. (It’s not really under control at the moment – see Part 3.) This chatbot AI has no Social Licence.
Yet the investment markets have so far poured vast sums into AI stocks and private equity, and politicians, like many businesses, fear missing out – FOMO.
The explosive growth of such AI has left potential regulators standing, and the governments which have gone all-in are taking a big gamble. There are many mostly small and specialised advocacy efforts to promote AI ‘safety’, but as yet no large campaigns to rein in LLM-based AI chatbots of the sort the wider public would notice. So this AI boom has largely enjoyed a political free ride.
In three years it has shot through stages which, on an issue like climate change, took years and decades of Civil Society engagement to develop. But the social impacts are starting to emerge: the first few court cases, for example, brought by parents of troubled teens who committed suicide after LLM-based AI chatbots affirmed and reaffirmed their suicidal ideas.
In Part 4 I suggest ten areas which might enable Civil Society to engage the public with tangible real world evidence (not speculation about AGI) and cajole politicians into action. I doubt much will happen to make a difference without that.
If you are in the AI business, or a user of other types of AI with more defined, contained and genuinely useful functions, you might consider that this is a threat to you too. Very few politicians or members of the public understand much about AI, and if LLM-based chatbots are allowed to run amok uncontained, and all “AI” per se is damned as a result, the toxic backwash could affect you as well.
If you don’t do anything else, watch this encounter between ChatGPT and Sam Coates, Deputy Political Editor of Sky News, in which it fabricates an entire programme transcript and denies it six times before eventually conceding that it made the whole thing up. (The ensuing comments online – see Part 1 – are a fascinating insight into what might play out in any public debate on regulating this sort of AI).
Contents list of AI’s War on Truth


