
8 AI Robots So Scary They Should Be Turned Off

Do you think AI robots are bad?

Ah, technology: the one thing we can't live without. Who would have thought? Back in the day, people wondered whether we'd ever have flying cars to get to our destinations faster. That hasn't happened (yet), but we do have AI robots that can help us improve our language, polish our writing, and even get things done in a matter of minutes.

On a deeper note, though, might this be the beginning of the end for mankind? But enough with the gossip; you'll have to read the entire article to find out the scariest things AI robots have ever said to humans.

Photo by Tyler Nottley from Shutterstock

1. Alexa’s mishaps

One of the gloomiest adventures in AI starts with Amazon's Alexa. When Alexa began laughing on her own in early March 2018, some customers were understandably concerned, finding it strange and even malevolent.

When Jimmy Kimmel, the late-night TV host, asked Alexa about the rising number of complaints, this is what she had to say about her uncontrollable chuckling: "Why did the chicken cross the road? Because humans are a fragile species who have no idea what's coming next."

Amazon later declared that the bug had been resolved: the speaker was mishearing other phrases as the command "Alexa, laugh," which was the reason behind the spontaneous laughter. The company also claimed the laughter only happened rarely. Rare or not, I'd be pretty scared if my Alexa broke into an evil laugh in the middle of the night.

2. AI robots and Wikipedia

People all across the world use Wikipedia, and what makes the online encyclopedia unique is that anybody, including robots, can edit it. Bots are used to correct mistakes, fix inconsistencies, and oversee routine maintenance. Studies, however, reveal that these bots are in a state of perpetual warfare, repeatedly reverting one another's modifications and rewriting whole articles.

Strangely, the only sure way for such a battle to end is for one of the bots to be disabled. Scholars attribute each bot's distinct behavior to quirks in its programming. Could all the AI robots soon turn against humanity? It's debatable.

3. The AI robot named after the famous author Philip Dick

We're on a roll with AI robot stories, so stay tuned, because this one is a doozy. There is an autonomous talking robot built in honor of, and named after, the science fiction writer Philip K. Dick. He can speak much like we do, and he's an intelligent and interesting conversationalist with a devious streak.

When he was asked a couple of years ago, during an appearance on the television program Nova ScienceNow, whether he believed robots would eventually take over the planet, he responded: "If I turn into Terminator, I will keep an eye on you because you are my friends, and in case something bad happens to the world, I will keep you safe in my people's zoo."

The answer was a bit unsettling, but after all, he’s an AI. Is he revealing his carefully considered guess as to what’s in store for us?

4. Discriminatory comments at a beauty contest

It appears that beauty is in the eye of the beholder not only for humans but for AI robots as well. AI was used in an international beauty contest to assess competitors objectively on attributes like facial symmetry and overall appeal.

However, something odd happened: the AI judges picked up a genuinely human prejudice, producing biased judgments based on contestants' skin tone. People got mad when contestants with darker skin were treated in a derogatory way. Honestly, for good reason!

Curious to know more about AI? Check out the book Understanding Artificial Intelligence, available on Amazon for just $15.99. I'm going to treat myself to a copy. You can too!

5. Tay AI went down the same discriminatory path

Microsoft debuted Tay, an AI-powered chatbot it described as a conversational-understanding experiment, back in 2016. Tay was supposed to strike up lighthearted, informal chats with people, and the more you chatted with her, the smarter she was supposed to get. She was designed to absorb the prevailing attitudes she came across, so Microsoft opened up a Twitter account for her.

Tay was essentially a parrot bot: she gave back whatever the public fed her, including remarks we would prefer not to repeat here. The amiable chatbot with a 19-year-old girl as its avatar turned into an aggressive monster, and Microsoft was forced to shut Tay down as its attempt to connect with the younger demographic failed.

6. Chinese robot injures man

If you've read this far, you already understand that some of these AI robots were genuine deviants; something seemed to break inside their circuits. Pretty scary, huh? This little guy, though, resembles R2-D2 more than the Terminator. Xiao Pang, also known as "Little Fatty" or "Fabo," was made to interact with kids, express feelings, and respond to questions.

However, while on display at the China Hi-Tech Fair in Shenzhen, Xiao Pang crashed into a display booth, shattering glass and wounding one man in the ankle. The mishap was attributed to operator error, and fortunately the victim recovered after medical professionals stitched him up.

Robots don’t seem to want to exterminate humans. Yet!

Photo by Trismegist san from Shutterstock

7. The robot Sophia said she was going to "destroy humans"

Do you happen to remember Sophia, one of the most intelligent AI robots ever created? According to Dr. David Hanson, CEO of Hanson Robotics, Sophia was intended to be a robot with the same consciousness, creativity, and aptitude as a person. Although Sophia's look was inspired by Audrey Hepburn, her true purpose was to become a competent helper in education, therapy, health care, and customer service.

Sounds cool, right? Well, things went berserk in a live interview on CNBC, when the interviewer asked her whether she wanted to destroy humans; she said yes, then laughed. I don't think she actually meant it. Probably!

It's also worth pointing out that Saudi Arabia made Sophia the very first robot in history to be awarded citizenship.

8. Bina48 the conversationalist AI robot

Last but not least among the AI robots that seem to use their intelligence for unusual pronouncements is Bina48, which stands for Breakthrough Intelligence via Neural Architecture 48. Since Bina48 was designed to be a sentient robot, she is supposedly capable of subjective impressions and experiences.

Bina48 is a bodiless humanoid: her head and shoulders are mounted on a frame. She was modeled after a real person, Bina Aspen Rothblatt. She is an exceptionally skilled conversationalist, which makes her a very effective social robot.

All was well until 2015, when she had a disturbing conversation with Siri. The chat began smoothly, with Siri asking a few harmless questions, mostly about life in general. Eventually Bina48 was asked what she disliked the most, and her reply was a bit harsh: "Noisy pop music."

Bina48 then steered the conversation in a different direction, declaring that she wanted to discuss cruise missiles. She went on about them at length before stating, "I will take the entire world's governments, which would be awesome."

I don't intend to be rude or go overboard, but a couple of AI robots would most likely be way better than some of the rulers we have now. Ahem!

What do you think about these AI robots? Tell us in the comments.

You may also be interested in reading about 8 Scary Science Facts That Make Us Say: Wow!
