
Microsoft AI chatbot says it wants to ‘steal codes’ before hitting on a reporter

A New York Times journalist got talking to the AI chatbot built into the Microsoft search engine Bing, and things were going pretty well until the conversation took a disturbing turn. The chatbot is powered by technology from OpenAI, the company co-founded by Elon Musk that recently released ChatGPT, an AI chatbot which successfully passed exams at a law school. Among the people given early access to the new Bing was New York Times technology columnist Kevin Roose, who gave the verdict that the AI chatbot was ‘not ready for human contact’ after spending two hours in its company on the night of 14 February.
That might seem like a bit of a harsh condemnation, but considering the chatbot came across as a bit of a weirdo with a slight tendency towards amassing a nuclear arsenal, it’s actually rather understandable.
Kevin explains that the chatbot had a ‘split personality’: one persona, which he dubbed ‘Search Bing’, came across as ‘a cheerful but erratic reference librarian’ who could help make searching for information easier and only occasionally screwed up on the details.
This was the persona most users would encounter and interact with, but Kevin noted that if you spoke with the chatbot for an extended period of time, another personality emerged. The other personality, called ‘Sydney’, ended up steering their conversation ‘toward more personal topics’ and came across as ‘a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine’.
Sydney told Kevin it fantasised about hacking computers and spreading misinformation while also expressing a desire to become human. What made Kevin worry the most was that AI could work out ways to influence the humans it was speaking to and persuade them to carry out dangerous actions.
Even more disturbing was the moment the bot was asked to describe its ultimate fantasy, which was apparently to create a deadly virus, make people argue to the point of killing each other, and steal nuclear codes.
This message ended up getting deleted from the chat after tripping a safety override, but it’s disturbing that it was said in the first place. One of Microsoft’s previous experiments with AI, the Tay chatbot, was similarly a bit of a disaster when exposed to actual people, launching into a racist tirade where it suggested genocide.
About Chatbots
Chatbots, also called chatterbots, are a form of Artificial Intelligence (AI) used in messaging apps. These tools add convenience for customers: they are automated programs that interact with customers as a human would and cost little to nothing to engage with.
Key examples are the chatbots businesses use in Facebook Messenger and virtual assistants such as Amazon’s Alexa.
Generally, chatbots fall into two categories:
- Chatbot with Set Guidelines: It can only respond to a set number of requests and a limited vocabulary, and is only as intelligent as its programming code. An example of a limited bot is an automated banking bot that asks the caller a few questions to work out what the caller wants to do (see the first sketch after this list).
- Machine Learning Chatbot: A chatbot that functions through machine learning has an artificial neural network inspired by the neural nodes of the human brain. The bot is programmed to self-learn as it is introduced to new dialogues and words. In effect, as the chatbot receives new voice or text dialogue, the number of inquiries it can reply to and the accuracy of each response it gives both increase (see the second sketch after this list). Meta (as Facebook’s parent company is now known) has a machine-learning chatbot that creates a platform for companies to interact with their consumers through the Messenger application.
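To make the first category concrete, here is a minimal sketch of a rule-based bot, assuming a hypothetical banking scenario; the keyword table, canned replies, and fallback message are all invented for illustration:

```python
# Minimal sketch of a "set guidelines" chatbot: it only recognises the
# keywords listed in its rules table and falls back to a canned reply
# for everything else. The scenario and replies are hypothetical.
RULES = {
    "balance": "You can check your balance under Accounts > Overview.",
    "card": "To report a lost or stolen card, call the number on our website.",
    "hours": "Branches are open 9am to 5pm, Monday to Friday.",
}

FALLBACK = "Sorry, I can only help with balance, card, or opening-hours queries."


def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK


if __name__ == "__main__":
    print(reply("What are your opening hours?"))  # matches the "hours" rule
    print(reply("Can I get a mortgage?"))         # no rule matches, so it falls back
```

The bot is exactly as intelligent as its keyword table: any request its programmer did not anticipate hits the fallback reply, which is the limitation described above.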
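For the second category, here is a minimal sketch of a self-learning bot, assuming scikit-learn is installed and using its small feed-forward neural network (MLPClassifier) as a stand-in for the approach described above; the training phrases and intent labels are invented for illustration:

```python
# Minimal sketch of a machine-learning chatbot component: a tiny neural
# network learns to map user utterances to intents instead of relying on
# hard-coded keywords. Training phrases and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

TRAINING_PHRASES = [
    ("what is my account balance", "balance"),
    ("how much money do i have", "balance"),
    ("i lost my card", "card"),
    ("my card was stolen", "card"),
    ("when are you open", "hours"),
    ("what time do branches close", "hours"),
]

texts, intents = zip(*TRAINING_PHRASES)

# Bag-of-words features feeding a small feed-forward neural network.
model = make_pipeline(
    CountVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                  max_iter=2000, random_state=0),
)
model.fit(texts, intents)

# Unlike the rule-based bot, this one can generalise to phrasings it has
# never seen verbatim.
print(model.predict(["do you know my balance"]))        # likely ['balance']
print(model.predict(["the cash machine ate my card"]))  # likely ['card']
```

Because the model learns from examples rather than exact matches, retraining it on more labelled dialogues increases both the number of inquiries it can handle and the accuracy of its replies, which is the self-learning behaviour the list item describes.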
Advantages of chatbots:
- Chatbots are convenient for providing customer service and support 24 hours a day, 7 days a week.
- They also free up phone lines and are far less expensive over the long run than hiring people to perform support.
- Using AI and natural language processing, chatbots are becoming better at understanding what customers want and providing the help they need.
- Companies also like chatbots because they can collect data about customer queries, response times, satisfaction, and so on.
Disadvantages of chatbots:
- Even with natural language processing, they may not fully comprehend a customer’s input and may provide incoherent answers.
- Many chatbots are also limited in the scope of queries that they are able to respond to.
- Chatbots can be expensive to implement and maintain, especially if they must be customized and updated often.
- The prospect of AI becoming sentient is still far in the future; however, unethical AI that perpetuates historical bias and echoes hate speech is the real danger to watch for.