The New Threat Facing Us: AI Controlling our Thoughts

By Maximilian Livshin, Year 13

AI “took off” long enough ago that the major dangers of this new technology are becoming clearer by the day. With the launch of countless openly available AI systems and ChatGPT-like counterparts by major companies all around the world, one danger stands out: manipulating our thoughts is becoming much, much easier.

Before talking about AI itself, we need to review what led us to this point. Just think about it for a minute: what did the creation of the Internet bring us? Convenience. Everything we want, at the click of a mouse or the tap of a finger. Information you once had to dig through hundreds of books to find can now be located in literal milliseconds. For example, say you want to know how penicillin was discovered. Type it into Google and over 1.4 million results appear in less than half a second. Mind-blowing, right?

At least to somebody who grew up before the Internet, it certainly would be. All of this is great, but have you ever thought about how you are being manipulated through it? It is a known fact that most of us use Google as our daily search engine; as of 2023, over 81% of the world does. That means around eighty percent of the world relies on one company as its main source of information, which leaves us vulnerable to manipulation through whichever results it chooses to show us. As humans, we usually go for the information that is easiest to find: nobody is scrolling to the 1,400,000th result on the web, so we stick to the first 5 to 10 results on the Google results page.

This raises an obvious problem: who decides which results come up first? The reality is that we don’t truly know. We have ideas, hunches, and even some theories about how it happens, but we cannot prove any of them; Google will do what it wants, and there is nothing we can do about it. That gives the company the power to shape people’s opinions through the information it pushes to the top of the search results, and it creates an obvious danger of propaganda-style content, widespread misinformation, and even outright disinformation manipulating billions of people around the world.

You may ask how any of this relates to AI. Here is the answer: the new wave of publicly available AI software taking over the world is another generational moment, just as the Internet was. It amplifies the dangers described above because of how easy ChatGPT and similar systems are to use. Ask them anything and they will answer, whether that means generating an image or writing an entire essay. Even worse, they are being used by everyone, from teachers to students, which means an echo chamber is effectively forming. Imagine this scenario, which is based on real events: a student has to write a History essay on a global event, so they go to ChatGPT, tell it to write the essay, and hand it in. The teacher, with 20 to 30 essays to grade, then decides to have ChatGPT grade the papers. The result is that the information ChatGPT chooses to present will be marked correct, and the information it is programmed to “erase” from reality will be marked as false. In essence, it now controls the informational space we live in.

You may be thinking that this is simply a conspiracy theory, made up, fiction, but that would be naive. These AI programs have to be programmed by someone, and that leaves room for people to insert their own beliefs into them. That is exactly what is being done, and I have brought receipts. The biggest recent example is Google’s Gemini, which was programmed to systematically refuse requests involving white male characters and to alter historical events and figures to further that aim. Want it to make you an image of America’s Founding Fathers? Here is Gemini’s intentionally historically inaccurate portrayal:

You might think this is just a one-off, but it isn’t. Look at the stark contrast when you ask it to generate images of families of different ethnicities:

So, to be clear, it goes from being “unable to generate images that specify ethnicity or race” when you ask for a picture of a White family, to “sure, here are some images featuring Black families” when you ask for a Black family. This is clear disinformation and manipulation of people’s thoughts, but what is more worrying is that it could be, and is being, used to erase and rewrite historical events, and ultimately to reshape our entire informational environment according to what the small group of people running these companies want you to think. Most worrying of all? This is easy for them to do.

So what should be taken away from this? Do not rely on a single source of information, and do not rely on AI-generated sources for your knowledge. Somebody always has a motive for manipulating your thoughts, so you need to actively filter everything you see, especially if it is AI-generated. It is a sobering thought indeed that we live in an environment where people always want to manipulate us, but it is the unfortunate truth, and we are going to have to get used to it, or risk facing the consequences.