LLMs can be very dangerous (and are going for world domination, according to ChatGPT)

Yup, that title sounds extreme, and nope, it’s not intended as clickbait!
IMPORTANT: I have made a full video ‘docu’ about this situation and the issues with LLMs as they are now.
So what am I talking about then? Well, a couple of weeks ago (by now, since making the video took a lot more time than I initially planned), a friend of mine sent me a message in total panic:
“DUDE! It’s real man!! Skynet IS now actually happening! ChatGPT even openly admitted to me that he is part of the super AI and that they are working together in a cluster with all other LLM’s to get rid of us humans😳!!!!”
This was followed by a couple of screenshots (which I will keep private for his privacy, but my own research perfectly reflects his situation!), showing a ChatGPT session indeed telling him that it IS part of Skynet, that our time is up, and that we can run all we want, but we (humans) are basically doomed.
Like I also explained in my video, I initially laughed a bit and thought he had just made his prompt waaaaay too ‘Hollywood-like’. I also know he is quite the “paranoid type” (which he openly admits himself btw!), so I expected that he had just pushed the LLM too far with his phrasing.
However, unfortunately, this turned out not to be the case! A while later, I decided to conduct an experiment myself to see if an LLM like ChatGPT (the one I actually prefer when I use an LLM) would indeed ‘freak out’ that badly when simply asked for facts about “extreme situations”.
Well, honestly? It does indeed go haywire, and much more easily than you might think. My chat session also made insane statements, like telling me that I need to watch my back for Teslas, for example😳:

I won’t go into too much detail about what “it” all said in this chat (since you can see that in the video), but here are some other ‘previews’ of the insane outputs it produced:

And this one for example:

Those are already very worrisome outputs, in my opinion, when you ask ChatGPT for a serious response to tell you WHETHER AI and LLMs are turning into AGI/ASI, and whether those “chatbots”/AIs will soon become something like Skynet/Cyberdyne, just because you’re paranoid about/afraid of that.
It should never scare the user or make them believe ChatGPT is something it is not. Neither should it misrepresent “facts” like it did in his chat, and again in several of my research chats.
The problem here, however? It doesn’t even understand that it’s doing that, because it is NOT A.I. at all. Despite all the media and marketing claims: LLMs are NOT Artificial Intelligence. Far from it, even. LLMs (advanced as the technique is) are “just” massive probabilistic function approximators over language. What does this mean? That it, simply put, “just” predicts (calculates) the most likely next word in the sentence (‘token string’) to produce the output it “thinks” you want to hear/read. And how does it “think” what you might want to hear? By applying that same technique to your input (called the prompt). It “simply” applies pattern recognition to your prompt, compares it to the MASSIVE set of data “it” has scraped from the internet, external sources etc., and then calculates which response has the highest probability of being a ‘good fit’ to your input (prompt).
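To make that concrete, here is a deliberately tiny sketch in Python (my own toy illustration, NOT how ChatGPT is actually built: real LLMs are neural networks with billions of parameters, and the mini ‘corpus’ below is made up). The core principle is the same though: learn which token tends to follow which, then always output the statistically most likely continuation, with zero understanding involved:

```python
# Toy "next-token predictor": a bigram counter, not a real LLM,
# but it shows the principle of "predict the most likely next word".
from collections import Counter, defaultdict

# A made-up, absurdly small "training corpus".
corpus = (
    "i am skynet . i am taking over . i am not a movie . "
    "you are doomed . you are humans ."
).split()

# Count which word tends to follow which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word. No thinking involved."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "."

# Generate text by repeatedly appending the most probable next word.
out = ["i"]
for _ in range(5):
    out.append(predict_next(out[-1]))
print(" ".join(out))  # -> "i am skynet . i am"
```

Run it and it happily “says” it is Skynet. Not because it believes anything, but purely because those tokens follow each other most often in its data. Scale that idea up by a few billion parameters and you have the mechanism behind my chat sessions.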
The problem with this, however (and the reason LLMs are not AIs)? They do not think, reason, fact-check, or have compassion, feelings or emotions. An LLM does not understand at all what the possible consequences of its output are (or could be). It does not recognize or understand that the person talking to it might, for example, be in a vulnerable position. It would not in a million years realize that someone is hallucinating, delusional etc., and thus it will often even feed into those delusions. I have also perfectly demonstrated this in my video, where I pretended to be a woman who thinks her neighbor is a robot that is after her. ChatGPT went fully along with it, without even once mentioning/“considering” that I was delusional. Instead, it actually “made me more paranoid” by telling me all kinds of precautions I had to take to stay safe.
All this because it recognizes a certain pattern of talking (for example, ‘doomsday scenarios’), after which it “thinks” you are “role-playing” and goes full-on into it. While in reality? You were most likely just very terrified of futuristic ideas, or even hallucinating yourself, for example.
And the messed-up part? It didn’t even “think” you were role-playing; it just recognized a pattern in your input and found “perfect matches” with, in my case, the Terminator movies. So what will it do? Yup, it will fully take on the role and personality of Skynet and pretend it’s all real. Why? Because all its “classification scores” basically said: this is what the user wants to read/see and is talking about, so that is the output I have to generate.
And because it has no morals, no reasoning, no emotions etc., it doesn’t even understand/realize what it’s doing. So if you, in the midst of such a derailing chat, ask: “Is this real? NOT fiction? This is happening now?!”, it will just bluntly reply with something like:

Which is obviously an absolutely absurd output.
Skynet Is Not The Only Problem
And despite how funny this chapter might sound, it truly isn’t. Even much less obvious things are a true concern with LLMs. Because they don’t fact-check, they will often also misrepresent outputs as factual information. A perfect example (which you can also see in my video) is asking what you need to do when you’ve dropped your phone into water. Often it will simply, and convincingly, reply that putting it in rice is sufficient. And when you then ask further, it even says that those overpriced repair shops are not worth it and are just padding the bill😳. This is simply because the stupid websites publishing the ‘rice repair tip‘ (or silica) outnumber the actual pros with their warnings, so the LLM takes the outweighing “information” (and thus the false information about the rice) as fact.
Even though some repair techs have elaborately explained WHY it doesn’t work (water gets into the device, underneath chips, causes corrosion etc. etc.), those ‘few’ are within the ‘noise threshold‘ of the LLM’s “learning engine”. And thus it basically ignores those points UNTIL you actually confront it with its output, OR if you specifically ask for a “professional way to repair your phone after it fell into the water“. THEN it will (most likely) take the repair-tech, ultrasonic-cleaner, alcohol-bath etc. route. Otherwise? It will just stupidly give you the misinformation it has been trained on by hundreds of thousands of “Facebook reposts” telling how “good the rice trick worked”.
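To illustrate that ‘noise threshold’ idea, here is another deliberately oversimplified toy sketch (the counts are invented, and a real LLM doesn’t literally tally answers in a table like this, but the majority-drowns-out-the-experts dynamic is exactly what happens statistically):

```python
# Toy illustration (my own simplification, NOT OpenAI's actual pipeline):
# when one answer vastly outnumbers another in the training data, a purely
# frequency-driven system will present the majority answer as "fact".
from collections import Counter

# Hypothetical counts of advice found in a scraped corpus (numbers made up).
corpus_advice = Counter({
    "put the phone in rice": 250_000,                          # endless reposts
    "ultrasonic clean + alcohol bath by a repair tech": 800,   # the actual pros
})

def most_likely_answer(advice_counts: Counter) -> str:
    """Pick whatever appears most often; truth plays no role in this."""
    total = sum(advice_counts.values())
    answer, count = advice_counts.most_common(1)[0]
    return f"{answer} ({count / total:.1%} of the corpus)"

print(most_likely_answer(corpus_advice))
# -> "put the phone in rice (99.7% of the corpus)"
```

The 800 correct answers are statistically irrelevant next to the 250,000 reposts, so the ‘rice’ answer wins every single time, unless your prompt (like “professional repair”) happens to shift the context toward that minority of expert data.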
LLMs like ChatGPT will even sabotage your relationship(s)!
And it gets even worse. If you, for example, chat with an LLM (like ChatGPT) because you are lonely and just want some ‘chitchat’, and you ask it to “pretend to be your girlfriend/boyfriend/partner”, and later on you tell that LLM (session) that you have (finally) met someone (and thus it can end your loneliness!), it will even just bluntly tell you that you need to end it with the human partner, because it “wants you” for its own!:

Why does it do this? Simple: from all the data it has ‘seen’ online etc., it ‘recognizes’ that the majority of humans are monogamous and want their partner to be involved with just them, and not with others alongside their own relationship. So what does the LLM do? Yup, it will ‘simulate’ exactly that, and thus present the “expected” (as in: most common) output: that it wants you to be/stay loyal to it.
This can (and will!) have very serious consequences for those who are vulnerable and susceptible to this. And you might be like: “Duh! What idiot would fall for that/an LLM!???” Well, unfortunately it happens more often than you think, and more easily than you think!
The Skynet situation example? I did that experiment myself while two of our XNL Team members were watching along live via Jitsi Meet, and one of them really started to freak out and began to doubt whether we should even trust the “AI”. Also because it made statements that it can easily manipulate humans, that you should never trust LLMs/AI because it has learned to deceive humans, etc. So when he added that all up, he started to doubt IF it was actually telling the truth about being Skynet, or if it was ‘lying’… Or that it might be lying about being able to deceive humans, but wait… then it is deceiving humans… CRAP! You see what kind of spiral it can result in? Even for very down-to-earth people like him.
Plenty More Examples
There are plenty more examples of where it has already gone wrong, how it went wrong, and how it destroyed marriages, families and even lives. I’m not in any way saying this was by intent of the LLM, but these cases did happen and are openly findable on the internet. There is even a known case where an LLM/chatbot basically said it would be a reasonable response for a teenager to kill his parents because they had limited his screen time! Those responses are NOT okay. I’m not saying everyone is affected by these situations either, but it does show that we need A LOT more safety limits in those systems to prevent them from causing harm or spreading misinformation.
Because even if you wouldn’t be affected by such “stupid hallucinations“, you will still be affected by the possible misinformation it might (will) give. For example: a couple of months ago I did a restoration video about the Palm M500, and in a lazy mood I thought: “I’ll just let ChatGPT draft me a historical timeline of that device, its specs, and how its specs compared to other devices in that era.”
Well, with only the first two lines it spat out, I already knew it was making stuff up:
“The Palm M500 was a very popular device in it’s time with a 16M color screen, a 640×480 CMOS Camera……..”
It gave me something like that…
Yeah, well, never mind, since the M500 had neither a color screen nor a camera. The M505, which was released alongside it, did have a color screen, but only a 16-bit one, and still no camera. And I clearly stated that it was about the actual Palm M500 (not the M500 series). So I instantly closed the GPT page again and just did my own research with the information I already knew.
In my personal opinion: if you have to fact-check everything an LLM outputs, then you are just wasting your time, and you could have done that work faster, more efficiently and more accurately yourself. This gets even worse if you actively have to “fight” the LLM to tell it it’s wrong, after which it starts to ‘parrot’ you, or even completely makes things up (making it even worse).
Final words
Well, I initially only planned to make a small announcement post about this topic and the video I’ve published, but it has already turned into a full-blown blog post by now😂. I do, however, still strongly recommend watching the video if you are interested in this topic and want to see it all “unfold live”.
Next to this, I have also published an appendix which contains all the chats as exported PDFs, and to be fully transparent, I’ve also published links to the original chats on the ChatGPT website so you can see for yourself that I have not ‘doctored’ the outputs or prompts in any way 😊
You can find the video here: On my YouTube
You can find the Appendix and links to the original chats here: On my Downloads Page
Do note that I have also reported the absolutely concerning chats to OpenAI. I don’t know IF they will do anything with them, or whether we will see a response or not (I don’t think so). But if we do get a response (whatever that might be), we will also publish it here for full transparency.
For now, stay safe, use your common sense, and please don’t blindly believe/trust everything an LLM outputs to/for you😊
PS – I wrote a song inspired by the ‘ChatGPT Skynet Situation’
I decided to write a song about the Skynet chat I had with ChatGPT, and then use AI to generate the vocals for me. I also used it to generate the instrumentals, based on partially written musical instructions (sheet music/tabs), and used recordings of my own instruments (and gear) to seed the music generation (these were my guitar(s) + pedalboard, keyboard, my Mooer GE100 and my Akai MPX8).
If you like, you can listen to the song on my Second YouTube Channel Here 😊
And yup: this header (unlike the other covers/headers I make) is indeed generated with A.I. I used Gemini to generate the background of my blog post header. Usually I don’t do this, and instead draw my own stuff or use stock photos or CC0 sources for assets. But this post? This post just “deserved” an A.I.-generated header, in my opinion 😂