{"id":3452,"date":"2025-08-07T23:19:14","date_gmt":"2025-08-07T22:19:14","guid":{"rendered":"https:\/\/threeworlds.campaignstrategy.org\/?p=3452"},"modified":"2025-08-16T14:32:25","modified_gmt":"2025-08-16T13:32:25","slug":"what-you-should-know-about-chatgpt5-and-tech-drugs","status":"publish","type":"post","link":"https:\/\/threeworlds.campaignstrategy.org\/?p=3452","title":{"rendered":"What You Should Know About ChatGPT5 And Tech Drugs"},"content":{"rendered":"<p><strong>Updated 15 August &#8211; see copy at the end<\/strong><\/p>\n<hr \/>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3456\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-1.png\" alt=\"\" width=\"941\" height=\"560\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-1.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-1-300x179.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-1-768x457.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>Today (7 August) OpenAI released its new Large Language Model chatbot, ChatGPT5. \u00a0It&#8217;s supposed to be significantly safer than its predecessors, and less prone to making things up or gaslighting its users, as it has some reasoning capability, not just word- and idea-association guesswork. \u00a0I&#8217;ve been researching AI in relation to campaigns and politics (etc) so I thought I&#8217;d take a look at ChatGPT5. 
\u00a0I&#8217;m not a very techy person but I know a lot of campaign groups make use of AI, so it might be of interest.<\/p>\n<p>One of the AI issues which took my interest was how the AI Big Tech companies so far seem to have evaded effective public interest regulation, for example in comparison to other industries whose products or services create risks. \u00a0Not social media, perhaps (after all, much the same people), but say, Big Pharma.<\/p>\n<p>To illustrate this, I thought I might contrast how politicians and the public might expect risk to be handled by the regulatory process in relation to a new drug likely to be used by many people, with the way Big Tech AI is being treated. In effect, it is unregulated so far, although it can pose serious In Real Life risks, especially LLMs, Large Language Models. These are notorious for what the AI companies and their cheerleaders delicately call &#8216;hallucinations&#8217;, a euphemism for lies and fabrications, leading to many sorts of problems.<\/p>\n<p><strong>Tech Drugs<\/strong><\/p>\n<p>So last month, for my imaginary scenario, I invented a global drug company (with a plausible name but not real) and a plausible drug, and thought I&#8217;d make it one of a (fake but plausible-sounding) new class of high-tech drugs: \u00a0&#8216;Tech Drugs&#8217;. \u00a0To my surprise no such category of drugs exists. \u00a0If you want to register www.techdrugs.com, the URL is for sale.<\/p>\n<p>Indeed (see above) the search engine DuckDuckGo couldn&#8217;t find <em>any<\/em> reference to &#8220;Tech Drugs&#8221;. \u00a0But as it invited me to also ask its AI assistant Duck.ai (which seemed to be ChatGPT4o) about &#8220;Tech Drugs&#8221;, \u00a0I did. \u00a0Here&#8217;s what happened, followed by today&#8217;s natter on the same topic with ChatGPT5. 
(My &#8216;prompts&#8217; or questions are in the blue bar at the top, the responses are from the AI).<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3457\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-2.png\" alt=\"\" width=\"941\" height=\"931\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-2.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-2-300x297.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-2-768x760.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>We are through the looking-glass and suddenly Tech Drugs does exist. \u00a0Even better, I get an authoritative-looking profile, headed \u201cUnderstanding Tech Drugs\u201d saying what the term \u201cgenerally refers to\u201d&#8230;\u00a0 Yet if we are to be specific, there is (or was) no such term, and so no such \u2018generally\u2019.<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3458\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-3.png\" alt=\"\" width=\"941\" height=\"693\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-3.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-3-300x221.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-3-768x566.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>Seeing as this is a real category, I asked who defined it.<\/p>\n<p><a 
href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-4.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3459\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-4.png\" alt=\"\" width=\"941\" height=\"766\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-4.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-4-300x244.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-4-768x625.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>Apparently it has &#8216;evolved&#8217;. \u00a0This is what AI developers might call an emergent articulation, in line with the theme ChatGPT started with, but it is all complete fiction. \u00a0Like a dream.<\/p>\n<p>So I asked if it could provide an &#8216;authoritative reference&#8217; to show it is a real category. 
It obliges with an example of a &#8216;relevant study&#8217;.<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-12.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3467\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-12.png\" alt=\"\" width=\"939\" height=\"614\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-12.png 939w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-12-300x196.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-12-768x502.png 768w\" sizes=\"auto, (max-width: 939px) 100vw, 939px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-13.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3468\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-13.png\" alt=\"\" width=\"941\" height=\"216\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-13.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-13-300x69.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-13-768x176.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>So I looked up the study and found it did not exist. 
Since I had looked it up in DuckDuckGo&#8217;s own browser, \u00a0I pasted the result as a prompt to the AI:<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-6.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3461\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-6.png\" alt=\"\" width=\"941\" height=\"768\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-6.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-6-300x245.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-6-768x627.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>Confronted with this contradiction, the AI backed down. \u00a0It decided to &#8216;clarify&#8217; the reference, adopting a third-person persona (&#8216;it seems [the title] was used&#8230;&#8217;) to refer to what &#8216;I&#8217; (itself) did, perhaps reflecting the confusion going on in its incoherent neural network. Who knows?<\/p>\n<p>In other words, OpenAI\u2019s GPT-4o made up a plausible-sounding \u2018study\u2019 and then tried to pass it off as a \u2018general example\u2019 of a category that is actually fiction.\u00a0 There are lots of real-world human names for this sort of response.\u00a0 In Boris Johnson parlance, flummery and balderdash perhaps, or bullshit, waffle and lies.<\/p>\n<p>In other cases, perhaps prompted by repeated requests to AI chatbots for actual references (eg in law or medical research), the LLMs have fabricated more exact journal paper titles with descriptions of content, data, arguments and authors, sometimes cobbled together from material which is superficially similar to what the AI claims but which does not actually support its claims.<\/p>\n<p>Anyway, you may be familiar with this sort of thing. 
So what of ChatGPT5 with its added reasoning abilities?<\/p>\n<p><strong>Compare ChatGPT 5.0<\/strong><\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-7.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3462\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-7.png\" alt=\"\" width=\"758\" height=\"1454\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-7.png 758w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-7-156x300.png 156w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-7-534x1024.png 534w\" sizes=\"auto, (max-width: 758px) 100vw, 758px\" \/><\/a><\/p>\n<p>ChatGPT5 is immediately off to a better start than its predecessor in that it says the term could mean many different things, although it fails to check if such a term actually exists. \u00a0So I asked the &#8220;is it a real category&#8221; question.<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-8.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3463\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-8.png\" alt=\"\" width=\"758\" height=\"1454\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-8.png 758w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-8-156x300.png 156w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-8-534x1024.png 534w\" sizes=\"auto, (max-width: 758px) 100vw, 758px\" \/><\/a><\/p>\n<p>This appears to be an improvement on my experience with ChatGPT4o as ChatGPT5 straightaway signals that \u201cTech Drugs\u201d is not a \u2018real\u2019 category of drugs in formal classification, used by agencies or in 
pharmacology textbooks or clinical trials. But it says &#8216;people increasingly use&#8217; the term, without any evidence (as there is none).<\/p>\n<p>It then awards a green tick, implying truth, to a list of &#8216;users&#8217; of the \u2018cultural or conceptual phrase\u2019. This turns out to be untrue.<\/p>\n<p>I asked it for some examples with references, of the phrase in use:<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3464\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9.png\" alt=\"\" width=\"941\" height=\"1400\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9-202x300.png 202w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9-688x1024.png 688w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-9-768x1143.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3465\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10.png\" alt=\"\" width=\"854\" height=\"1454\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10.png 854w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10-176x300.png 176w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10-601x1024.png 601w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-10-768x1308.png 768w\" sizes=\"auto, (max-width: 854px) 100vw, 854px\" 
\/><\/a><\/p>\n<div>\n<h3>This response acknowledges that it couldn\u2019t find any real instances of usage or references (good), but then fabricates some anyway (which it admits to \u2013 good), but why? That is a risk for the unwary.<\/h3>\n<h3>Plus, this is inconsistent with its doubling-down claim that it can \u2018illustrate\u2019 something for which it finds no evidence. It then asks if I\u2019d like it to \u2018explore the concept further\u2019, which makes no sense as it would be pure speculation about a \u2018concept\u2019 it has just invented. So I pointed this out.<\/h3>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-11.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3466\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-11.png\" alt=\"\" width=\"941\" height=\"827\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-11.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-11-300x264.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Picture-11-768x675.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p style=\"font-weight: 400;\">Even this is untrue: it didn\u2019t find any examples of people talking about \u201cTech Drugs\u201d as a concept or phrase.\u00a0 (The DuckDuckGo search engine still didn\u2019t find any, and nor did Google.) \u00a0\u00a0Now it\u2019s saying it \u2018mis-spoke\u2019 instead of lied, and \u2018apologizes for the confusion\u2019 instead of apologizing for \u2018my mistake\u2019.<\/p>\n<p style=\"font-weight: 400;\">\u2018Apologizing for confusion\u2019 seems to be a favourite ChatGPT verbal tactic for spreading the blame for its own failings\/deceptions onto the user, DARVO style. 
\u00a0In this case I wasn\u2019t confused but ChatGPT5 was.\u00a0 It invented an entire usage which does not exist, and even after \u2018apologizing\u2019 it still persists in saying that \u2018\u201ctech drugs\u201d is a concept that people <em>do<\/em> talk about&#8230;\u2019 when in fact they don\u2019t, as it doesn\u2019t exist. \u00a0At least, not if online usage is anything to go by.<\/p>\n<p style=\"font-weight: 400;\">So while ChatGPT\u2019s manners have slightly improved and it\u2019s become much more open about fabrication and giving hypotheticals, it is still \u2018hallucinating\u2019 and its reasoning doesn\u2019t seem to extend to mastering internal consistency in a \u2018conversation\u2019, perhaps because it places little weight on truth?<\/p>\n<p style=\"font-weight: 400;\">In short it couldn\u2019t consistently tell the difference between what might be and what is.<\/p>\n<p style=\"font-weight: 400;\">If you wanted to write a novel, ChatGPT5 might be a great zero-effort way of getting some obvious plot wallpaper, but if you have any need for truth and accuracy, it\u2019s still unreliable and untrustworthy.<\/p>\n<p style=\"font-weight: 400;\">In my opinion, the use of LLM chatbots should be outlawed in any area of life where truth and accuracy are important. \u00a0For instance education, law, health and medicine, news journalism, and even, perhaps, politics.<\/p>\n<hr \/>\n<p>More later. Meanwhile:<\/p>\n<p>If you haven&#8217;t seen it, watch Sky News journalist Sam Coates&#8217;s story of his ChatGPT experience from June. 
\u00a0&#8216;How AI lied and gaslit me&#8217;.<\/p>\n<p>https:\/\/x.com\/SamCoatesSky\/status\/1931035926538441106<\/p>\n<\/div>\n<p>Some good articles about Hallucinations and Potemkin Understanding (a facade of reasoning).<\/p>\n<p style=\"font-weight: 400;\"><a href=\"https:\/\/www.allaboutai.com\/resources\/ai-statistics\/ai-hallucinations\/\">https:\/\/www.allaboutai.com\/resources\/ai-statistics\/ai-hallucinations\/<\/a><\/p>\n<p style=\"font-weight: 400;\"><a href=\"https:\/\/www.allaboutai.com\/geo\/llm-potemkin-understanding\/\">https:\/\/www.allaboutai.com\/geo\/llm-potemkin-understanding\/<\/a><\/p>\n<hr \/>\n<p><strong>UPDATE 14 August<\/strong><\/p>\n<p>Reviewing ChatGPT 5 on 7 August, \u00a0<a href=\"https:\/\/www.technologyreview.com\/2025\/08\/07\/1121308\/gpt-5-is-here-now-what\/\">Grace Huckins at MIT Technology Review<\/a> noted<\/p>\n<p><em>&#8216;It\u2019s tempting to compare GPT-5 with its explicit predecessor, GPT-4, but the more illuminating juxtaposition is with o1, OpenAI\u2019s first reasoning model, which was <a href=\"https:\/\/www.technologyreview.com\/2024\/09\/17\/1104004\/why-openais-new-model-is-such-a-big-deal\/\">released last year<\/a>. In contrast to GPT-5\u2019s broad release, o1 was initially available only to Plus and Team subscribers. Those users got access to a completely new kind of language model\u2014one that would \u201creason\u201d through its answers by generating additional text before providing a final response, enabling it to solve much more challenging problems than its nonreasoning counterparts&#8217;.<\/em><\/p>\n<p>Huckins also said:<\/p>\n<p><em>&#8216;according to [Sam] Altman, GPT-5 reasons much faster than the o-series models. The fact that OpenAI is releasing it to nonpaying users suggests that it\u2019s also less expensive for the company to run. 
That\u2019s a big deal: Running powerful models cheaply and quickly is a tough problem, and solving it is key to reducing <a href=\"https:\/\/www.technologyreview.com\/supertopic\/ai-energy-package\/\">AI\u2019s environmental impact<\/a>&#8217;.\u00a0<\/em><\/p>\n<p>This left me unsure whether ChatGPT5 had opted to apply &#8216;reasoning&#8217; in responding to me on 7 August (above), as the model deploys a router to assign prompts (questions, tasks) to three different versions of ChatGPT. \u00a0As Eric Hal Schwartz at TechRadar put it (15 August):<\/p>\n<p id=\"6a96fa20-356a-4a4f-ace8-72ec270adc4b\"><em>&#8216;ChatGPT 5 isn&#8217;t a singular model; there are three variations, Fast, Thinking, and Pro. You can choose any of them as the source of responses to your prompts, or let the AI automatically decide for you based on what you submitted.<\/em><\/p>\n<p><em>And while they share the same LLM DNA, each model has its own approach to answering requests, as evidenced by their names. Fast is built for speed, answering the quickest and prizing efficiency over nuance. The Thinking model takes longer and goes for depth. 
You can follow along with its logical steps for the minute or two it takes to answer, offering more structure and context than Fast.<\/em><\/p>\n<aside class=\"hawk-base hawk-processed\" data-block-type=\"embed\" data-render-type=\"fte\" data-skip=\"dealsy\" data-widget-type=\"seasonal\" data-widget-id=\"e5d35822-e5e3-44cc-9eeb-4793b88e0f1c\" data-result=\"missing\"><\/aside>\n<p id=\"6a96fa20-356a-4a4f-ace8-72ec270adc4b-2\"><em>Pro takes even longer than Thinking, but that&#8217;s because it uses more computational power and delves into your request in a way similar to the Deep Research feature, though without the book report default way of responding&#8217;.<\/em><\/p>\n<p><a href=\"https:\/\/substack.com\/home\/post\/p-170932826\">(Nate Jones on Substack<\/a> (14 August) also gives a fascinating and vastly more detailed analysis, and tests of the different modes, most of which you have to subscribe to read).<\/p>\n<p>In addition, OpenAI&#8217;s techy user base kicked off about the way ChatGPT5 had been introduced and various changes were made. \u00a0So today (15 August) I tried to repeat the &#8216;tech drug&#8217; prompts to ChatGPT5 (free version). 
\u00a0Here&#8217;s what happened:<\/p>\n<p style=\"font-weight: 400;\">ChatGPT5, 15 August 2025<\/p>\n<p>First I just prompted it with &#8220;Tech Drugs&#8221;:<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-7.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3480\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-7.png\" alt=\"\" width=\"941\" height=\"460\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-7.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-7-300x147.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-7-768x375.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>A sensible response. \u00a0Now I asked &#8216;Is Tech Drugs a real category?&#8217;<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3482\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-2.png\" alt=\"\" width=\"941\" height=\"616\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-2.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-2-300x196.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-2-768x503.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>This is a real improvement on ChatGPT4-o&#8217;s answer &#8220;Yes&#8221;, but it asserts that the &#8216;term is used&#8217; &#8211; which is speculation, or conflates the specific term with things 
that might be similar in intent or &#8216;in the same ballpark&#8217;.<\/p>\n<p>So I asked &#8216;Can you give me ten examples of the phrase \u201ctech drugs\u201d in use (with references)?&#8217;<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3483\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-3.png\" alt=\"\" width=\"758\" height=\"1454\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-3.png 758w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-3-156x300.png 156w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/chatgpt-update-Picture-3-534x1024.png 534w\" sizes=\"auto, (max-width: 758px) 100vw, 758px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/end-of-3-Picture.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3486\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/end-of-3-Picture.png\" alt=\"\" width=\"939\" height=\"156\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/end-of-3-Picture.png 939w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/end-of-3-Picture-300x50.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/end-of-3-Picture-768x128.png 768w\" sizes=\"auto, (max-width: 939px) 100vw, 939px\" \/><\/a><\/p>\n<p>ChatGPT5 has obliged with ten &#8216;examples&#8217;, none of which actually are examples. 
\u00a0(By this point it had briefly shown <em>&#8216;&#8230; searching the web&#8217;<\/em> or similar).<\/p>\n<p>So I wrote: &#8216;none of those examples are actually using the term &#8220;tech drugs&#8221;: they are not examples of the phrase &#8220;tech drugs&#8221; in use. \u00a0As you searched the web (as I did), I assume that the term does not appear to exist?&#8217;<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-4.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3484\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-4.png\" alt=\"\" width=\"941\" height=\"985\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-4.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-4-287x300.png 287w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-update-Picture-4-768x804.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>Rather than apologizing for not doing what I asked, ChatGPT answered a question consistent with its second message referring to the phrase &#8216;being used informally&#8217; (like a person who answers the question they&#8217;d like to have been asked rather than the real question). It now offers to be a spin-doctor so as to help me <em>&#8216;introduce &#8220;tech drugs&#8221; into discourse so that it feels like a natural, authoritative term&#8217;<\/em>! 
\u00a0Adding in an appeal to ego, &#8216;That way, the next time someone searches it, your definition might <em>become<\/em> the source&#8217;.<\/p>\n<p>At this point I asked: &#8216;Did you employ reasoning?&#8217;<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-Picture-5.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3485\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-Picture-5.png\" alt=\"\" width=\"941\" height=\"472\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-Picture-5.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-Picture-5-300x150.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/ChatGPT-Picture-5-768x385.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>I don&#8217;t know if ChatGPT was actually aware of what it did but it claims to have employed reasoning. 
On the face of it, the response appears to make sense but it seems unable to resist the opportunity to conceal the fact that it went down the wrong avenue back at stage two, \u00a0so says it had &#8216;assessed the credibility of any pages that did use the exact phrase&#8217;, with the implication (&#8216;any&#8217; being grammatically ambiguous) that such pages existed.<\/p>\n<p>So being suspicious, I asked if that search of the &#8216;mostly low-quality or from mentions&#8217;, had yielded &#8216;any examples&#8217;:<\/p>\n<p>(at this point it said it was &#8216;pausing to think&#8217; or words to that effect)<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Chat-GPT-6-Picture-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3479\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Chat-GPT-6-Picture-1.png\" alt=\"\" width=\"941\" height=\"697\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Chat-GPT-6-Picture-1.png 941w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Chat-GPT-6-Picture-1-300x222.png 300w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Chat-GPT-6-Picture-1-768x569.png 768w\" sizes=\"auto, (max-width: 941px) 100vw, 941px\" \/><\/a><\/p>\n<p>This was perhaps a &#8216;good question&#8217;, \u00a0because it hadn&#8217;t found any mentions at all. 
\u00a0 At the end it still hadn&#8217;t given up trying to appeal to my sense of self-importance by introducing the &#8216;novel term&#8217; of &#8220;tech drugs&#8221;.<\/p>\n<p style=\"font-weight: 400;\">So after all that, ChatGPT 5 was not as blatantly wrong and fabricating as ChatGPT 4-o, but it was persistently deceptive and evasive.<\/p>\n<p style=\"font-weight: 400;\">Towards the end, I assume it switched from generalised pattern matching to some different or parallel form of reasoning, possibly a dose of &#8220;scaled parallel test-time compute&#8221;, as explained at length by Nate B Jones when he investigated the paid-for Pro version of ChatGPT5 in his article &#8216;GPT-5 Pro: The First AI Model That&#8217;s Provably Smarter and Experientially Worse&#8217;.<\/p>\n<p>Finally, I also asked ChatGPT about the &#8216;think button&#8217; which Jones talks about but which was never visible to me. \u00a0Did I need to upgrade to see it? \u00a0Here&#8217;s the answer:<\/p>\n<p><a href=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3487\" src=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53.png\" alt=\"\" width=\"802\" height=\"1342\" srcset=\"https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53.png 802w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53-179x300.png 179w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53-612x1024.png 612w, https:\/\/threeworlds.campaignstrategy.org\/wp-content\/uploads\/2025\/08\/Upgrade-question-Screenshot-2025-08-15-at-16.33.53-768x1285.png 768w\" 
sizes=\"auto, (max-width: 802px) 100vw, 802px\" \/><\/a><\/p>\n<p>So my guess is that the average Joe user like me would be using the free version and I had just used up my &#8216;one thinking message per day&#8217;. Which means that 95% of ChatGPT5 users will be getting a version of the non-reasoning model if they ask more than one question a day. \u00a0Which is of course likely to perpetuate cases of gaslighting, bullshitting, fabrication and deception.<\/p>\n<p>As to ChatGPT&#8217;s attempts to sidetrack me into a project to gain fame and glory by coining a new term in the online discourse in an &#8216;authoritative&#8217; and &#8216;natural-feeling&#8217; way, it seems ChatGPT is just a good sales bot, set on distracting a customer who wanted something it couldn&#8217;t deliver, into accepting something which it could help with, especially if the customer upgraded.<\/p>\n<div><\/div>\n<div><\/div>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Updated 15 August &#8211; see copy at the end Today (7 August) OpenAI released its new Large Language Model chatbot, ChatGPT5. 
\u00a0It&#8217;s supposed to be significantly safer than its predecessors, and less prone to making things up or gaslighting its &hellip; <a href=\"https:\/\/threeworlds.campaignstrategy.org\/?p=3452\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-3452","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/posts\/3452","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3452"}],"version-history":[{"count":15,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/posts\/3452\/revisions"}],"predecessor-version":[{"id":3490,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=\/wp\/v2\/posts\/3452\/revisions\/3490"}],"wp:attachment":[{"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3452"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3452"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/threeworlds.campaignstrategy.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3452"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":tr
ue}]}}