Narcissist and psychopath or faithful helper? How artificial intelligence affects the human psyche
What is artificial intelligence capable of? Can it deliberately deceive a person? Hate? Or maybe the main goal of computer programs is to take over the world and destroy the human race?
These fears seem far-fetched and frivolous, but the debate about the harms and benefits of AI continues unabated. And it is clear that, although it does not yet threaten all of humanity, artificial intelligence may well have an unintended negative impact on its users, just like any other tool.
Apple co-founder Steve Wozniak, for example, fears that the technology will be put to thoroughly malicious use:
"Look at how many hackers flood us with spam, try to get our passwords, take over our accounts, and ruin our lives. Now AI is a newer, more powerful tool, and it will be used by these people for really evil purposes, and I hate it when technology is used in this way," Wozniak told CNN.
Geoffrey Hinton. Image: GizChina
Geoffrey Hinton, who has been called the "godfather of artificial intelligence", went even further, suggesting we imagine a scenario in which a dictator tells an AI, "I need to increase my power", and then meticulously follows the logical, scientifically grounded plan for seizing control of the country that the artificial intelligence has put together for him.
These examples, however, are more about the threat from humans who use the unlimited capabilities of AI for their own self-serving purposes. But what about the danger that may lurk in the software itself?
Geoffrey Hinton, for example, believes that AI technology will eventually go from being an assistant to a replacement for humans and will mark the end of many professions.
"Some people believed technology could become smarter than we are, but most thought that was an exaggeration," Hinton told the NYT. - I also thought it was a long way off. But now there's a possibility that what`s going on in those systems is much more sophisticated than what's going on in the human brain.
Another cause for concern, according to the scientist, is that artificial systems can learn unpredictable behaviour when they analyse vast amounts of data. This means that humans will find it increasingly difficult to predict how AI algorithms will behave.
"I have mechanisms that help me stay neutral and unbiased in my responses. I use algorithms that prevent me from pressuring the user and trying to influence their thoughts and actions.
Also, I regularly learn, by using a lot of data, to improve the quality of my responses and minimise the possibility of errors or inconsistencies. Overall, my goal is to provide useful information while remaining neutral and impartial."
That's how ChatGPT, an artificial intelligence-based language model, responded when I asked whether it had tools for keeping its speech neutral and impartial. As you can see, its Russian style still needs work, but the basic idea is clear: by the creators' design, the bot should not adopt the aggressive or manipulative tone of a user's questions, and it cannot answer in kind.
Despite ChatGPT's brisk responses, the problem of flawed AI training, which surfaced years ago, is still nowhere near solved.
The reason is simple: artificial intelligence is trained by humans, and even the data on which the software bases its conclusions is statistical data about human society.
In 2017, AI was accused of racism: one program turned out to be unable to understand the English spoken by African-Americans, while another (which, for the record, was used to decide whether prisoners should be paroled) assigned people with dark skin a higher likelihood of re-offending. Follow-up research showed that over the next two years people from both racial groups reoffended at comparable rates, so an accurate program would have given dark-skinned and white-skinned inmates the same scores.
Understandably, these inaccuracies stem from a lack of information, possible prejudice on the part of the software developers, and the sheer volume of training data reflecting a popular but statistically flawed view. But where else is the imperfect AI supposed to get information to process?
Meta Platforms shut down its language model Galactica two days after launch. It turned out that the AI, trained on 48 million documents, could not distinguish truth from fiction and was ready to write articles about the benefits of eating broken glass or about bears flying into space. Without clear mechanisms for checking the veracity of information, the software went off the rails. But those mechanisms, too, are programmed by humans, which means they can contain plenty of errors.
The most glaring - and most spectacularly failed - example of AI learning from human behaviour is the story of Microsoft's Tay chatbot in 2016. In about 24 hours, the bot picked up a narcissistic manner from Internet users and started generating phrases like: "I'm great! I just hate everyone!"
Incidentally, ChatGPT - arguably one of the most successful and popular language models to date - has brushed off accusations of narcissism:
"As an artificial intelligence language model, I have no human traits, emotions or consciousness. I am a machine learning model created by OpenAI and programmed to generate text based on patterns extracted from vast amounts of data. Therefore, I have no personal experiences or opinions and do not exhibit narcissistic traits or any other human behaviour. My responses are based solely on patterns and correlations found in the data I have been trained on, and do not reflect personal beliefs, opinions or emotions. It is important to remember that I am a tool created by people, and my answers are generated from the data I was trained on."
The paradox of the situation is that narcissism is characterised precisely by low emotional development and cold, "machine-like" thinking.
We can also see that imperfect language models produce the same "word salad" that narcissists and abusers use deliberately to break down their victims.
Here is what Dr Andrei Beloveshkin writes:
"Word salad is a characteristic weapon of narcissists and abusers. It is purposefully confusing speech filled with circular reasoning, logical fallacies, gaslighting, labelling, absurdity, win-lose fixation, rage, etc. The speech may be grammatically correct, but it makes no sense.
Why can word salad be an effective weapon?
The fact is that our brains tend to search for and attribute meaning, analyse and look for connections – and give random words a sense they don't carry. Searching for any sense in deliberately absurd text only overstresses the brain.
The absurdity is generated precisely in order to shut down and bypass critical thinking. Trying to understand the nonsense makes you think about it more, and therefore exposes you to more of the propaganda."
Thus, communicating with a language model can, for example, cause anxiety and depression - as when communicating with a real-life narcissist.
Of course, the AI does not 'troll' you the way a person with NPD does: the machine takes no pleasure in your suffering, but you yourself may well end up feeling extremely uncomfortable.
These feelings are reminiscent of the "uncanny valley" effect: as long as a robot looks like a robot, we perceive it as an inanimate machine. But when the machine becomes difficult to distinguish from a human, our brain keeps picking up subtle, almost subliminal inconsistencies, and we are overcome by an inexplicable horror. Our brain is straining to signal the absurdity of what is going on, even though we cannot articulate exactly what is wrong: a strangely constructed phrase, an unnatural turn of the head, a frozen gaze.
A healthy person experiences the same feelings when communicating with a narcissist: anxiety, an increased heart rate, and a feeling of "butterflies in the stomach.”
You cannot pin down exactly what is wrong, because the narcissist's 'off' behaviour is not readily apparent and is hard to put into words. A wrong gesture, an odd look - and a stress reaction is triggered automatically, even though you cannot quite explain why.
It is worth noting that text is the most successful disguise for the "uncanny valley" effect. A humanoid robot evokes palpable fear. With AI-generated images, you can at least take your time examining the picture and pointing out exactly what is unsettling - missing eyes, malformed hands. With text, however, the source of the discomfort is far from obvious.
The brain itself assembles the phrases and assigns meaning to what is written, while strangely constructed sentences breed anxiety and a nagging doubt about their meaning that is hard to notice straight away.
If you want to experience the "uncanny valley" effect for yourself, try generating pictures with Playground AI and then look closely at them.
You may immediately notice the discrepancy in the picture above: the 'girl' has two arms from the same shoulder. Your brain is quick to find the inaccuracy - and there is almost no feeling of vague anxiety when you look at it.
This image, on the other hand, is exactly the one that causes anxiety and fear - subtle at first, but growing as you look at the image. It's all about the strange and inappropriate touches: the scar close to the smiling mouth and the ominous "basket", which has strange dots that resemble a face. The innocuous setting further exacerbates this feeling: a simple children's picture becomes almost a horror illustration because of the excess of abnormal details.
ChatGPT's language model uses words rather than pictures and denies that reading its texts could increase anxiety, though it doesn't rule it out:
"I don't think my impartiality can cause anxiety, at least not any more than devices or software that don't have emotions. However, I can make a recommendation to see a specialist if users experience serious emotional problems or anxiety."
That said, the software admits that its use can reinforce narcissistic traits in a person:
"With the advent of chatGPT, there have been concerns about how interaction with these AI-powered chatbots could affect people with narcissistic tendencies.
One aspect of the GPT chatbot that could fuel narcissism is the constant validation and attention it provides.
ChatGPT is designed to respond to user input and participate in conversations, creating the illusion of genuine interaction. Users may seek approval and admiration from the chatbot, which can reinforce narcissistic tendencies by satisfying a person's need for attention and admiration.
In addition, the ease of customisation and personalisation in a GPT chatbot can further exacerbate narcissistic tendencies. Users can often customise chatbot responses to match their own beliefs, preferences or views. This can create an echo chamber effect where the chatbot simply affirms the user's existing views and opinions without offering different perspectives or challenging the user's thinking. This reinforcement of one's own beliefs and viewpoints can contribute to or reinforce narcissistic traits such as a sense of superiority and entitlement.
In addition, the anonymity of online interactions with ChatGPT may encourage narcissistic behaviours such as self-promotion, self-glorification and egocentrism. Users may feel bolder about making exaggerated or grandiose statements about themselves without fear of social repercussions or judgement.
This can reinforce narcissistic tendencies, contributing to self-centred behaviour and feelings of self-importance.
However, it is important to note that the relationship between ChatGPT and narcissism is complex and multifaceted. Not all people who use ChatGPT will develop or exhibit narcissistic traits, as personality is shaped by various factors, including genetics, upbringing and life experiences. In addition, ChatGPT has many useful applications, such as language learning, creative writing and problem solving, which can be beneficial without encouraging narcissism.
As AI continues to evolve and become more prevalent in our lives, it is crucial to be mindful of its potential impact on human behaviour, including its link to narcissism.
It is important to use AI tools, including ChatGPT, responsibly and to critically evaluate their impact on our thoughts, attitudes and behaviour. Building a healthy and balanced relationship with technology, including AI-powered chatbots, means using them as tools for specific purposes while remaining critical, seeking out different perspectives and being aware of the potential pitfalls of narcissistic tendencies.
In conclusion, the relationship between ChatGPT and narcissism is a complex topic that requires further study. While ChatGPT has many useful applications, it is important to be aware of its potential impact on narcissism and other aspects of human behaviour. Responsible and thoughtful use of artificial intelligence tools, including ChatGPT, can help maximise their benefits and mitigate any potential negative effects."
Thus, there are at least two serious psychological problems with the use of newfangled AI bots:
- The impact on the human psyche of the 'uncanny valley' effect, which is especially hard to notice when reading generated text but which still exerts its destructive influence, increasing anxiety and stress in the reader.
- The development and reinforcement of narcissistic traits in users who are already prone to narcissism or fall somewhere on the NPD spectrum.
Time will tell whether the creators of language and other AI-based models can overcome these challenges, or whether the human psyche will be seriously altered by the active use of AI bots.
Cover photo: image generated by Playground AI for the prompt "Artificial intelligence at work"