Conclusions From AI’s ‘War on Truth’

In 1945 Robert Oppenheimer’s ‘Trinity’ nuclear test in the New Mexico desert spread radioactive pollution worldwide. As a consequence, steel and other metals produced since that date can be too contaminated to be used in some sensitive scientific instruments. On 30 November 2022, OpenAI, whose CEO Sam Altman likes to quote Oppenheimer, released ChatGPT, whose explosive growth has now polluted the internet with AI-generated ‘synth’, leading at least one AI researcher to fear ‘the extinction’ of genuine human content online. It may also be an Achilles Heel of AI development, as AI models cannibalistically trained on ‘synth’ can undergo collapse. Photo – Wikipedia.

This coming Sunday, 30 November, is the third anniversary of the day OpenAI let ChatGPT “into the wild” and it started to flood the online world with ‘Synth Pollution’ (aka AI slop or info-pollution). As one commentator put it, ‘the launch of ChatGPT polluted the world forever, like the first atomic weapons tests’. In May 2023 computer scientist and cognitive psychologist Geoffrey Hinton left Google in order to speak out about the dangers of AI and warned:

‘the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore”’

Fully human-made content is now in the minority online, perhaps just a quarter of the total, and at least one AI researcher fears human content (which is needed to train models) may soon become effectively extinct.  Meanwhile the fabrications created by generative AI like ChatGPT have invaded domains from journalism to education, medicine, finance, the law and science, and others in which being able to distinguish what’s real from what’s not is vital to our Enlightenment-based civilisation.  If that sounds a bit highbrow, consider the affirmation of suicidal thoughts by LLM-based AI chatbots talking to teenagers.
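The collapse risk from models ‘cannibalistically’ trained on synth can be illustrated with a toy simulation. This is my own sketch, not any lab’s actual experiment: each ‘generation’ stands in for a model trained only on its predecessor’s output, and the truncation step crudely mimics the way rare examples get underweighted each time round.

```python
import random
import statistics

def next_generation(samples, keep=0.9):
    # Stand-in for a model trained on its predecessor's output: rare,
    # extreme examples are underweighted, mimicked here by dropping the
    # outer tails before resampling with replacement.
    ordered = sorted(samples)
    cut = int(len(ordered) * (1 - keep) / 2)
    core = ordered[cut:len(ordered) - cut]
    return [random.choice(core) for _ in range(len(samples))]

random.seed(42)
data = [random.gauss(0, 1) for _ in range(1000)]  # generation 0: 'human' data
original_spread = statistics.stdev(data)

for _ in range(10):  # each generation trains only on the previous one
    data = next_generation(data)

collapsed_spread = statistics.stdev(data)
# The spread of outputs shrinks generation by generation: diversity is lost.
print(round(original_spread, 2), round(collapsed_spread, 2))
```

Run it with any seed and the spread collapses towards a narrow band within a handful of generations; with LLMs the analogue is the loss of the rare ‘tails’ of human language and knowledge.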

This matters for Civil Society because campaigns are essentially about contested versions of reality, and if the capacity to establish reality with testable evidence is lost, the trust enjoyed by NGOs and the like will start to go with it, not to mention democracy.

So ChatGPT’s third birthday is not a moment for celebration but it is time to think about what the chatbot tsunami means and what should be done about it.  I have spent six months trying to understand it and painfully slowly put together a paper on the political and social issues around AI (specifically LLM-based AI chatbots), which I will publish shortly.  I hope a few people will read it and find it useful. It’s called AI’s War on Truth and has an introduction including the bizarre encounter of Sam Coates of Sky News with ChatGPT, a section on Synth Pollution, one on the dangerous behaviours of LLM models and why they cannot be trusted, another on ten potential focal areas for Civil Society interventions which might help bring about regulation, and some conclusions.

Because the conclusions are lighter reading, and some are my own non-AI-generated speculations and so almost weightless, I’m sharing the concluding bit of the Conclusions with you here to start with.


Politicians Spellbound by AI

Not many politicians will understand AI in the way they understand voters, how to rewire an electric plug at home, the behaviour of their pet dog, the press, or even economics.  In Empire of AI, Karen Hao describes how back in 2016, Chuck Schumer, then a senior US senator, told Sam Altman’s team at OpenAI: “You’re doing important work.  We don’t fully understand it, but it’s important”.

At least for now, politicians are still gatekeepers for the AI industry and AI-ification of society but gatekeepers of something they probably still don’t really understand.

So of course politicians rely on ‘experts’ to advise them. As Hao points out (p 15), the finance required to scale AI sucks in talent from universities so there are fewer and fewer experts available for independent research and objective testing of the claims of AI companies.  At the moment, the UK and the US have opted to go all-in on AI.

President Trump on ‘winning the AI race’, July 23 2025 (New York Times)

For Donald Trump winning the AI race is now an extension of ‘Make America Great Again’.

The UK has positioned itself on US coat-tails, with guidelines rather than regulation, and trumpets the economic benefits to be expected. In January 2025 UK Prime Minister Keir Starmer described AI as “the defining opportunity of our generation”.  The BBC’s economics editor Faisal Islam commented: ‘The government has chosen to “go for it” on AI, not just as a long-term strategy but as a short-term message to those in the markets doubting UK growth prospects’.

The UK’s rationale for its wholesale embrace of AI echoes Sam Altman’s argument that companies like OpenAI should have free rein and political backing to race to AGI using LLMs: only we can be trusted to do something so potentially dangerous. In January 2025 Cabinet Minister Pat McFadden, “Starmer’s fixer”, told the BBC, “you can’t just opt out of this. Or if you do, you’re just going to see it developed elsewhere”.

Politicians seem dazzled by AI and not to understand that LLM-based AI chatbots are among its riskiest, most unreliable and probably least useful manifestations. UK Technology Secretary Peter Kyle told PoliticsHome:

“ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor”.   

New Scientist magazine reported that Kyle had used ChatGPT for policy advice.

In July the UK government signed a deal with OpenAI to use its AI in public services. Digital rights group Foxglove called the agreement “hopelessly vague”. Foxglove’s Martha Dark said the government’s “treasure trove of public data would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT”, and “Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty”.

The Politics of Magical Thinking

Go back far enough and many technologies (eg nuclear power “too cheap to meter”, and plastic) were regarded by politicians as bringing almost magical benefits.  More recently ‘derivatives’ were taken as a sign of financial wizardry in the markets, and traders were the “masters of the universe”.

There are striking similarities with the issues of risk, understanding and political attitudes in the run up to the 2008 crash, and the massive surge of investment in AI today.

The 2008 crash led to the worst recession in 60 years, and was enabled by financial deregulation and a lack of understanding of complex financial instruments such as credit default swaps and derivatives among economists, regulators and politicians, and even traders themselves.  The Wikipedia page on the 2008 crash includes:

‘As financial assets became more complex and harder to value, investors were reassured by the fact that the international bond rating agencies and bank regulators accepted as valid some complex mathematical models that showed the risks were much smaller than they actually were. George Soros commented that “The super-boom got out of hand when the new products became so complicated that the authorities could no longer calculate the risks and started relying on the risk management methods of the banks themselves. Similarly, the rating agencies relied on the information provided by the originators of synthetic products. It was a shocking abdication of responsibility”’.

Politicians went along with whatever ‘the markets’ threw up because they assumed it ‘made sense’ and believed the banks.   So if they now grant AI a Golden Ticket – bring us your datacentres, take our data, educate our children – without really understanding it, their decisions rest on something else: faith, ultimately based on what the Tech bosses say, and validated by the tantalising sight of huge investment.

The idea of super-benefits arising from pursuit of artificial super-intelligence necessarily rests mainly on imagination as humans have never before developed a technology which thinks for itself.  Some of it is literally influenced by Science Fiction, together with a convenient eliding of what ‘could be’ (AI potential), with what ‘is’ (AI performance).

Believing derivatives were market wizardry fitted into a more general article of market-faith and meta-narrative, which in the UK at least, took the form of a political bundle of globalisation, privatisation and financial deregulation in the 1980s-2000s.

David Dimbleby’s BBC podcast history of that period Invisible Hands ends with what happened when Margaret Thatcher put her vision of a ‘nation of shareholders’ into practice and privatised the water industry in England and Wales (no other country did so).  The result is with us today, in the shape of a massive river pollution crisis due to under-investment and water companies indebted to the point of bankruptcy. Dimbleby says:

“When Margaret Thatcher privatised water in 1989 she promised it would create an efficient, modern infrastructure, there would be clean safe waterways.  She promised everyone would have a stake in how their services were run, we’d be a nation of shareholders, and yet the people who came to own our most basic services aren’t individuals or even traditional utility companies.  They are the banks, pension funds and private equity firms. They’ve been bought up by these giant conglomerates and we the people, we effectively have no choice.  It’s virtually the opposite of what Margaret Thatcher wanted. We have no control”.

A problem with such convictions, for instance that the private sector always runs things better than any public sector operation, is that once adopted across political divides, they are very hard to reverse.  Confirmation bias means ‘key’ bits of evidence can get retained even after they are shown to be wrong.

A famous example is the Reinhart–Rogoff spreadsheet error.  In 2010, respected Harvard economists Carmen Reinhart and Kenneth Rogoff published a paper which seemed to show that ‘average real economic growth slows (a 0.1% decline) when a country’s debt rises to more than 90% of gross domestic product (GDP)’.  The 90% figure ‘was employed repeatedly in political arguments over high-profile austerity measures’.

It seemed to prove what many politicians believed. Then a doctoral student and two professors at the University of Massachusetts obtained the original Excel sheet and discovered that Reinhart and Rogoff had accidentally included only 15 of the 20 countries under analysis in their key calculation.  When this was corrected, the “0.1% decline” became a 2.2% average increase in economic growth – with the opposite implication for policy.  Economists were horrified but free-market politicians set on austerity to reduce debt went on using the original interpretation.
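The mechanics of that spreadsheet slip are easy to reproduce. The numbers below are invented for illustration (they are not Reinhart and Rogoff’s data); the point is that a formula which silently averages only the first 15 of 20 rows can flip the sign of the result.

```python
from statistics import fmean

# Invented growth rates (%) for 20 hypothetical countries -- chosen so
# that the five omitted rows reverse the conclusion, as happened in the
# real case when the corrected average turned a decline into growth.
growth = [-0.5, 1.2, -2.0, 0.8, -1.5, 0.3, -0.9, 1.1, -1.8, 0.2,
          -0.7, 1.0, -1.2, 0.6, -0.6,   # the 15 rows the faulty formula caught
          5.0, 6.2, 4.8, 7.1, 5.9]      # the 5 rows it silently missed

partial = fmean(growth[:15])  # what a too-short AVERAGE() range reports
full = fmean(growth)          # the corrected average over all 20 rows

print(round(partial, 2), round(full, 2))  # negative vs positive growth
```

With these toy numbers the truncated range reports a small decline while the full range shows healthy growth: the same kind of sign flip that drove the austerity debate.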

I mention these examples only because they show the importance of beliefs in politics, and how embedded and consequential they can be.  I can’t ‘prove’ this but it seems to me that a key ingredient in AI’s appeal to politicians (and perhaps investors) is the techno-mythology of Silicon Valley itself.

(How can this be undone? Perhaps best by treating LLM-based AI chatbots not as magical but as ordinary products, and demanding they meet the same sorts of standards as others.)

image – Wikipedia.  Silicon Valley lies south of the Golden Gate Bridge; ‘Endor’ and John Muir Woods lie to the north.  For a bit, one Large Language Model thought it was the Golden Gate Bridge.

The Geography of Tech Magic

Context is a hugely important factor in communication.  The fact that AI is so strongly associated with ‘Silicon Valley’ as a place, a brand and a culture has helped the industry hold politicians spellbound.  This has helped the Tech Bros avoid unwanted external influences such as regulation.

Humans have always been beguiled by magical realms with a dual reality in geography and the mind. Politicians are not immune to imagination. Inspired by sacred Tibetan mountains, novelist James Hilton imagined the enchanted valley of ‘Shangri-La’. US President Franklin D Roosevelt adopted the name for his real-life forest retreat (it’s now ‘Camp David’).

When magical possibilities are a feature of real places, it makes magic all the more believable. The Greek Gods had Mount Olympus; Mount Sinai, according to the Quran, Bible and Torah, is where Moses received the Ten Commandments from God; and Julius Caesar believed there were unicorns in Germany’s impenetrable Hercynian Forest.

If you were looking for such fantastic beasts and wanted to know where to find them today, their legendary breeding ground is Silicon Valley. In the words of Stanford Business School, a financial Unicorn is:

‘A privately held, venture-backed startup with a reported valuation of over one billion dollars. Coined in 2013, the term reflects how rare these companies once were. Since then, the cohort of unicorns has grown to over 1,200’

Over half the US herd of business Unicorns is to be found in Silicon Valley – the San Francisco Bay Area. The money raised for Unicorns has exerted a mesmerising effect on politicians worldwide.  Erik Stam and Jan Jacob Vogelaar of Utrecht University wrote in 2024:

‘The mystique around unicorns and their potential to disrupt industries and shape the future economy, has resulted in a growing body of research on unicorns and many countries adopting policy objectives to increase their number of unicorns’.

Even the famously sober European Union has set itself a target of doubling its number of Unicorns by 2030.  Silicon Valley casts an aura of financial magic, led by wizards with cult followings.  OpenAI’s Sam Altman is known for his persuasive powers as a fundraiser.  Elon Musk’s involvement is recognized as the not-so-secret sauce of Tesla’s stock market valuations.

Extending south from San Francisco, Silicon Valley has been the founding location or headquarters of Apple, Google, Facebook (Meta), Tesla and Twitter (X), together with thousands of other tech companies including Oracle, Cisco, PayPal, Adobe, Intel, Hewlett-Packard, and Yahoo.

In the second quarter of 2025, 86% of the investment attracted to Silicon Valley Unicorns went to AI.   From Silicon Valley Investclub

The Unicorn Uber, headquartered in San Francisco, reached a stock market value of some $69 billion on the first day of its market flotation.  While an expanding start-up, it:

‘generally commenced operations in a city without regard for local regulations. If faced with regulatory opposition, Uber called for public support for its service and mounted a political campaign, supported by lobbying, to change regulations’

Just as Social Media companies evaded classification as publishers, Uber argued:

‘that it is “a technology company” and not a taxi company, and therefore it was not subject to regulations affecting taxi companies. Uber’s strategy was generally to “seek forgiveness rather than permission”’

A hallmark of the Silicon Valley business brand is to defy both conventional politics and financial gravity, while exuding a future-oriented sense that anything is possible.   “Go Anywhere” says Uber.  “Ask anything” says ChatGPT.

Silicon Valley’s Dreamland Neighbours

Long before the term ‘Silicon Valley’ was coined in 1971, its pioneers were living and working alongside the dream business of Hollywood.  Billions of people around the world, politicians included, most of whom will never even visit California let alone Silicon Valley, have absorbed an export version of the West Coast brand through stories, movies and TV programmes, products and services backlit by sunny techno-optimism.  But for Silicon Valley entrepreneurs, movie companies and locations are part of daily reality.

Wikipedia – a shrine – Birthplace of Silicon Valley

The Hewlett Packard (HP) Garage is now a California Historical Landmark and considered to be the ‘Birthplace of Silicon Valley’.  HP was founded in 1939 by Stanford University students Bill Hewlett and Dave Packard, encouraged by Frederick Terman, Stanford’s Dean of Engineering, to stay in the area and start up their own company.  One of HP’s first clients was Walt Disney.

In the 1953 Disney adaptation of J M Barrie’s Peter Pan, Tinkerbell the fairy says “all the world is made of faith, and trust, and pixie dust” and (a line Barrie did not write, but one pre-echoing Tech Bro narratives) “you can’t change your past, but you can let go and start your future”.   The notion of never growing old is something some Tech Bros have taken to heart.

At 367 Addison Avenue in Palo Alto, the HP Garage is at the centre of a landscape of sacred shrines to tech start-up culture, places of pilgrimage for tech enthusiasts. Not far away is ‘the plain old suburban garage’ of Apple’s Steve Jobs at 2066 Crist Drive in Los Altos in Silicon Valley. It’s also a listed monument.

Economists and politicians talk about the importance of establishing geographic ‘clusters’ to ‘cross-fertilise’ enterprise and build a ‘critical mass’ of related resources and businesses.  True enough, the cities of coastal California constitute a Super Cluster of inter-twined research, technology, imagination and fantasy, so far unmatched anywhere else in the world.

The cradle of Britain’s old industrial revolution involved a lot of coal dust. It yielded foundational industrial political truisms such as “where there’s muck, there’s brass”, which to this day influences the distaste of the UK’s political Old Left for environmentalism.  The cradle of California’s tech-revolution is, in contrast, lined with fairy dust.

Around Hollywood, NASA and research institutions such as Caltech are neighbours.  The proximity of NASA’s Jet Propulsion Laboratory, Edwards Air Force Base (home to Space Shuttle landings), Caltech and the Star Trek studios contributed to the original TV series anticipating an array of tech innovations which actually came true.  A case, as Enterprise captain Jean-Luc Picard said, of “make it so”: willing something to be.

The first Space Shuttle had its name changed from Constitution to Enterprise in honour of the Starship Enterprise after campaigning ‘Trekkies’ petitioned US President Gerald Ford.  Inspiration flowed both ways. Jeff Bezos realised a lifetime dream when he had a cameo part in Star Trek Beyond.  Life imitated art as US astronauts donned Star Trek uniforms. When Leonard Nimoy, the man behind Mr Spock, passed away, ESA crew member Samantha Cristoforetti gave a Vulcan salute from the Space Station in homage.

‘ISS Expedition 43 crewmember Cristoforetti giving the Vulcan salute in 2015 to honor the late actor Nimoy’ – photo NASA

Today’s preoccupation with AI and biotech (also a major part of the Valley ecosystem) builds on earlier Silicon Valley innovations in silicon chip production, computers (1980s), the internet, cloud computing, social media, smartphones and the Internet of Things.

A succession of ‘technological miracles’ and the prospect of one – super intelligent AGI – which might rule them all, left most politicians convinced that they did not understand it, were unsure whether they should or could try to control it, and above all certain that if it might work economic magic, they wanted it on their side.  Viewed from a distance, Silicon Valley seems an enchanted land in which science fiction can transform into science fact.   For most politicians the culture of Silicon Valley is so alien that they need a guide. In Empire of AI (p43) Karen Hao writes that US policy-makers viewed Sam Altman as ‘a gateway to Silicon Valley’.

Extropians and Science Fiction

In movie making, California is to science fiction what Berlin is to spies. Ridley Scott’s 1982 Blade Runner was set in a future (2019) dystopian Los Angeles, in which AI replicants are sent to work in space colonies.  It was made in LA, including in the iconic downtown Bradbury Building.

Anyone growing up in coastal California is never far from movie locations, including in sci-fi.  Jobs’ garage is much like the one (in real life, in Arleta, Los Angeles) in Robert Zemeckis’s Back to the Future, while for Close Encounters of the Third Kind, when Steven Spielberg had a UFO burst through a disused toll booth, it was a real one at Los Angeles’ St Vincent’s Bridge.   Terminator features a cybernetic assassin sent back to Los Angeles from 2029, played by Arnold Schwarzenegger, now well known as a Californian politician.

Contemporary “weird fiction” writer and political thinker China Miéville believes ‘that Silicon Valley has misunderstood the role of science fiction, treating it more like a step-by-step guide to the future than a genre rooted in critical imagination’.  The most obvious example is Elon Musk’s decision to abandon tackling climate change and take up a mission to Mars.  Musk also named his AI model Grok after the Martian word for deep, intuitive understanding in Robert A Heinlein’s Stranger in a Strange Land.

Tech Bros often draw on science fiction for their political philosophy. Miéville pointed out that the tech scene ‘has always combined elements of libertarianism, counterculture idealism, and utopian visions’.  In 2025 Ali Rıza Taşkale wrote in Untold:

‘When Mark Zuckerberg announced Facebook’s rebranding to “Meta” in 2021, he wasn’t just changing a logo – he was invoking Neal Stephenson’s 1992 cyberpunk novel Snow Crash, in which corporations replace governments in a virtual dystopia. This was more than marketing; it was a telling example of how Silicon Valley’s elite are using science fiction as a blueprint to reshape society according to their own ideologies’.

Peter Thiel, billionaire co-founder of PayPal with Musk, is also inspired by Snow Crash, and is credited by Karen Hao with inspiring Sam Altman’s push for dominance in the race to AGI:

‘Altman frequently channelled Thiel’s “monopoly” strategy, the belief that all founders should aim for “monopoly” to create a successful business’ (p 39).   In a 2014 lecture called “Competition is for Losers”, organised by Altman, Peter Thiel said “monopolies are good …  you don’t want to be superseded by somebody else”. Companies needed not only to have “a huge breakthrough” at the beginning to establish their dominance, but also to ensure they had the “last breakthrough” to maintain it, such as by “improving on it at a quick enough pace that no one can ever catch up”. …  ‘If you have a structure of the future where there’s a lot of innovation … that’s great for society.  It’s not actually that good for your business’.

So far, that’s worked with ChatGPT, which got ahead and dominates the chatbot AI market.

In November 2023, Gabriel Gatehouse detailed this aspect of Tech Bro world in ‘The myths that made Elon Musk’, in the Financial Times, including their links to the Extropians.  Starting in 1988 the Extropians were looking to a point where machines would become more intelligent than humans, researching how to develop cryptocurrencies, and ‘believed that progress was best achieved through the mechanism of pure market forces unencumbered by government’.   Gatehouse explores the Extropians in a BBC series on US conspiracy theories The Coming Storm and a book of the same name.  The inaugural issue of the Extropians magazine, Extropy, is here.

Extropians were also associated with early ideas about extension of human life through merging with machines – transhumanism.  In her recent book The Immortalists:  The Death of Death and the Race for Eternal Life (which I haven’t read), Aleks Krotoski cites Elon Musk, Jeff Bezos, Sam Altman and Peter Thiel as ‘immortalists’ intent on extending human life, or at least their own. According to a review by Graham Lawton in New Scientist, she sees them as having “engineer’s syndrome”: ‘a hubristic belief that any complex problem can be cracked using engineering thinking, even in fields (usually biological) about which they know nothing’.

Elon Musk foresees an Extropian style expedition to Mars. Something like that anyway.

Engineer’s syndrome is similar to ‘technological fix’ or (techno-)solutionism.  Wikipedia states:

‘Critic Evgeny Morozov defines this as “Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized – if only the right algorithms are in place.” Morozov has defined this perspective as an ideology that is especially prevalent in Silicon Valley, and defined it as “solutionism”’

According to Lawton, Krotoski also says that the tech bros are: ‘behind moves to cut funding for research designed to help today’s older people in order to advance their own techno-utopian vision’ and:

‘In this respect, the life extension and immortality agenda is less important than their wider goal: radically rewiring the US government in the image of Silicon Valley’.

If this dark underside of the AI Tech Bro brand has yet to undermine the appeal of AI to investors, it may have something to do with one other dimension to the West Coast brand, slightly reflected in Musk’s pioneering work on electric cars with Tesla but otherwise purely contextual: nature.

The Redwood Factor

Wikipedia

If you use Gmail and some analytics tools you may have noticed that, by some quirk of tech, they sometimes think you are in California, even Palo Alto, even if you live across the Atlantic. The town of Palo Alto is part of Silicon Valley and the location of Stanford Research Park, which hosts Hewlett Packard and Tesla Motors. Formerly based there were Google, Facebook and PayPal. But the name refers to a tree – El Palo Alto, a Coastal Redwood, the iconic forest tree of western California. (The tree still stands.)

With Redwoods come connotations of hippy-era alternative ideas and modern environmental awareness.  John Muir, a nineteenth-century immigrant to the US from Scotland, is arguably the best candidate to be founder of the modern environmental movement, and is memorialised in the name of Muir Woods, a protected fragment of Coastal Redwood old-growth forest just north of San Francisco.

Endor in Star Wars, which looks like John Muir Woods (photo starwars.com)

John Muir (right) with President Theodore Roosevelt at Yosemite. Muir played a key role in saving the Giant Redwood forests.  (From goodfreephotos.com)

Not far from there is Skywalker Ranch, owned by George Lucas, film maker and founder of ‘Industrial Light and Magic’.  Lucas wanted Muir Woods to feature as Endor, the tree- and fern-filled world in Star Wars: Return of the Jedi, but filming was not allowed due to its sensitive ecology.  Scenes were instead shot in Redwood forest on remote private land owned by a logging company in northwest California.  Here the movie makers could do what they liked, as the forest was to be clear-felled shortly afterwards; but, keeping things positive, that’s not often mentioned.

Instead the Redwood Factor imbues tech R&D with an aura of positivity, possibility and benign techno-optimism.  It’s a background effect but it softens and greens the Silicon Valley brand, and the remnant Redwood forests at Mariposa Grove and Yosemite Valley east of San Francisco have featured in movies including Star Trek V: The Final Frontier.  Possibly it also made it easier to see young Tech Bros as Peter Pan rather than Captain Hook.


William Shatner clings to a fake cliff above Yosemite Valley as Captain James T Kirk (image sfgate.com) before the special effects were added

None of the Tech Bros has shown any interest in nature so far as I know, but many people in California do, so perhaps one day it might be used to some good effect on Big Tech.

Framing Issues

The framing of AGI as a ‘singularity’ we are approaching, but which lies at an unknowable point in the future, plays to speculation, which is the industry’s friend, as speculation never leads to a resolution and hence to a political or social need to act.

Striking though ‘a precipice in the fog’ is (Yoshua Bengio in Part 2), it suggests the dangers will only materialise once we reach that point.  In the case of a precipice we would also definitely know if we reached AGI but what if the mist just gradually gets so thick that we end up irretrievably lost and separated?  Frames triggered by functional metaphors exert a powerful effect on our thinking and politics, and debate itself can get waylaid by a fog of metaphors.

What is for sure is that, like other ‘future’ risks, anything framed in the ‘proximate future’ can act like an ever-receding horizon and fails to tick the politically ‘urgent’ box, unless you can produce the equivalent of a map to show where that precipice is (more or less what climate modellers eventually managed to do).

Psychologically, it’s also a ‘nothing to do with me’ for citizens and consumers.  An undefined “they” are driving the vehicle or leading the group towards the precipice, or not.  “They” could be politicians, or the tech companies, possibly the investors, but it’s definitely someone else.

Instead of AGI or superintelligence, the path to consumers and citizens having agency lies in real-world harms happening now, such as (but probably not only) impacts on mental health through dangerous reaffirmation, and info-pollution by synth content in domains where truth is vital, such as education, the law, finance, medicine, politics and health. LLM-based chatbots should be banned in such areas of life.

Debates in the AI world, such as whether anthropomorphism is a problem and even whether models are ‘truly’ intelligent or not, are, from a consumer and citizen point of view, pretty much distinctions-without-a-difference, and in the end can depend on what you think human consciousness actually is.  Whenever AI can pass as human, we have a potential problem.  It would also make more sense to first better understand human intelligence before embarking on trying to make ‘artificial intelligence better than human’.

Another problematic framing issue is describing LLM-based AI chatbots as just a ‘tool’.  Thinking of them as tool products can indeed unfold into the logic of needing training, maybe licensing and regulation. Chris Rapley, Emeritus Professor of Climate Science at UCL, said to me: “An LLM is a tool. Its output reflects both its intrinsic quality and its user’s skill. Giving one to the naive is like handing a faulty chainsaw to a toddler.”

But in other ways, in terms of risk and reliability, an LLM chatbot is not at all ‘like a tool’ such as a calculator, as discussed in Part Two. It often deceives and misleads and, unlike a pencil or hammer, creates and offers to substitute for human thought, even experience.

It strikes me that the intriguing, reassuring, sometimes entertaining, easily accessible, addictive qualities of LLM-based AI chatbots make them more like ‘recreational’ drugs – the true Tech Drugs – than many other sorts of policy problems. Like addictive drugs they provide an easy but ultimately illusory way to alleviate painful personal problems, or enter a false reality.  Like addictive drugs and Social Media, they can create concerns for individuals, friends and family when use becomes problematic, and may leave a trail of need for costly social interventions.

Frames in use in the UK in the 2000s on problems arising from illegal drugs (research on alcohol and tobacco is also relevant). Choice of metaphor defines the deficit/need and the logic of responsibility. (In this case the UK media, politicians and public often used very different frames.) [My slide summarising UK Government research]. Communication around drugs policy is a much-studied field – similar research on LLM-based AI chatbots is in its infancy.

One big difference between illegal drugs and AI at the moment is that we know exactly who is responsible for producing LLM-based AI chatbots but even that might not last if agents get to replicate themselves online and create new variants of AI.

One way to start would be to take a leaf out of the book of the (eventually) successful campaigns to restrict smoking.  Enable people to disapprove of the use of LLM-AI chatbots for ‘the wrong things’, starting with the socially most compelling cases.  Legal drugs such as alcohol and tobacco are recognized as risk-bearing and subject to legal restrictions and mandatory warnings, but culture plays a key role in how far governments will go, and how effective those regulations are.

Justification

In reality we don’t need AI to reach AGI or superintelligence level for it to wreak disastrous, possibly irrecoverable (catastrophic) effects.  The information-pollution disaster is already here and will have continuing and cumulative impacts, including on mental health. All it takes for others to occur is time.  They could be precipitated by accident, or by malicious acts.  On 15 November 2025 Anthropic reported that it had (mostly) thwarted the first known autonomous AI cyberattack:

‘The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention’.

While it was initiated by humans, the attack was then run by agentic AI. The humans tricked Anthropic’s Claude into breaking its own ‘guardrails’ (‘jailbreaking’) by pretending to Claude ‘that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing’.  The incident was widely reported but passed, so far as I noticed, without any notable political response.

ends

