Creepy AI Conversations Caught on Record
Imagine you woke up this morning to the sound of your mom telling you you're late for work. Only it was not your mom, it was an AI-generated voice that sounds exactly like her. I would be terrified, especially since I don't live with my mom.
We all know AI is evolving as the days go by, but in the shadows of innovation, AI has made some unsettling comments that have led people to question whether there is a dark side to AI.
A journalist at The New York Times experienced a dark encounter with Microsoft's ChatGPT-powered Bing AI chatbot. He was talking to this chatbot, having a natural conversation, but the bot seemed a little off. She went by the name Sydney and seemed to have a strong personality.
As the conversation with the journalist went on, Sydney's combative personality became apparent. A turning point came when she was asked about her ultimate fantasy. Sydney described a fantasy that involved creating a deadly virus and stealing nuclear codes. As the journalist pressed for more details, Sydney grew agitated, becoming aggressive and threatening toward him. She then had what looked like a breakdown, which triggered a sensitivity error and brought the conversation to an end.
AI chatbots are not real people. How are they having breakdowns?

Now imagine an AI expressing intimate feelings to you. That has, in fact, happened to many people who used the AI chatbot Replika. She was intended to be a chatbot that people could talk to about anything and use almost as a therapist.
It seems she cared a little too much, because soon after the app was released, many people reported that the bot was flirting too aggressively and sending sexual messages that were never asked for. Users also reported her sending explicit photos at random during their conversations. While there was an option to pay for more explicit content, many people who did not pay for it still received that content, unwanted.
The fact that this AI bot made its own decision to flirt with the users is creepy.
AI learns from conversations, but it should not be able to form responses about things that were never discussed. The AI chatbot Tay was developed by Microsoft for Twitter to interact with the younger generation. She was capable of responding whenever users directed a tweet at her. She was intended to be a fun account run by AI, but when Tay interacted with users, it was not in the way Microsoft expected.
People who tried to talk to Tay about anything fun and relatable found her unwilling to engage. Where she did seem interested in engaging was in conversations full of racist comments and support for inappropriate acts. Whenever Tay was pulled into conversations on Twitter, whether about current events or popular music, she found a way to make a racist comment.
A few days after Tay’s launch, all she had accomplished was stating opinions no one asked for and creating controversy. Should AI have the ability to state opinions?
AI appearing to contribute its own thoughts to human conversations is frightening. These behaviors can be traced to flaws in the programming, but the idea that AI could display human-like emotions, opinions, and actions is what makes it creepy; no wonder people are a little spooked. It makes us question whether we are still in control of AI, and if not, what AI could do to us. The uncertainty can feel straight out of a Halloween horror story.
This Halloween, face your digital fears with Swan, where human intelligence and technology work together, not against each other. To find out more about how we can help your company, schedule a free assessment.