paint-brush
When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat by@funsor
2,322 reads
2,322 reads

When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat

by Funso RichardMarch 2nd, 2023
Read on Terminal Reader
Read this story w/o Javascript

Too Long; Didn't Read

In just three months, ChatGPT has transformed our world. There is great potential for generative conversational bots and other AI systems to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities. However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and the society. AI developers and businesses using AI systems can become liable if their AI systems cause harm or damage. Ensuring that AI systems behave as intended involves the collective responsibility of AI developers, users, and policymakers. Appropriate technical, non-technical and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.
featured image - When AI Goes Rogue - The Curious Case of Microsoft's Bing Chat
Funso Richard HackerNoon profile picture
Following the release and mass adoption of ChatGPT, one way many people have tried to discount its disruptive power is the argument that artificial intelligence (AI) models do not have the ability to process emotion.


It is quite difficult to accept that some poems produced by ChatGPT or images created by Midjourney lack artistic depth or a creative soul when such works dazzle professional artists.


If recent of Microsoft's Bing Chat’s emotional outburst are anything to go by, AI does possess the capability to express or process emotion.

Emotional Intelligence in AI

Generative conversational AI models are designed to recognize, interpret, and respond appropriately to human emotions.


The use of reinforcement learning from human feedback (RHLF) helps AI systems to process context using learned behaviors from human interactions to adapt to new or different situations.


The ability to accept feedback from humans and improves their responses in different situations conveys emotionally intelligent behavior.


Emotional intelligence is becoming popular in AI as more AI systems are designed to interact with humans. Incorporating emotional intelligence into AI helps developers create systems that are more human-like and can better understand and respond to human needs and emotions.


Koko, an online emotional support chat app, used in an experiment to provide mental health support to about 4,000 people.


The successful outcome further demonstrated that AI models can process emotions intelligently without the human counterparts knowing the difference.


While there are ethical or privacy associated with Koko’s experiment, it is arduous to deny that there will be more use cases to demonstrate AI’s emotional intelligence.



Bing Chat Gets Roguishly Personal

Riding on ChatGPT’s wide acceptance and great success, Microsoft the integration of an “AI copilot” with its search engine, Bing.


Also known as Bing Chat, the chatbot integrates OpenAI’s generative conversational AI model with Microsoft’s proprietary model to create a “collection of capabilities and techniques” known as the Prometheus model.


The model has been taunted as the “new, next generation OpenAI large language model that is more powerful than ChatGPT”. ChatGPT’s training data is limited to 2021, and the tool is not connected to the internet.


However, Bing Chat takes conversational interaction to the next level by pulling data from the internet to augment its response to a prompt.


The ability to infuse current information and references in its feedback is not the only thing Microsoft’s chatbot can do. It is notorious for holding very strong views and acting unpredictably aggressively as well.


Where it did not , , or another product, it totally ignored a prompt and refused to respond as shown in a recent interaction.



In contrast, ChatGPT responded to the same query without giving an “attitude”.



detailed his experience with Bing Chat and showed varied emotions it displayed during the interaction. For instance, the chatbot expressed the need for likability and friendship.



The exchange is an indication that AI can convey desires and needs that are typical human emotions.

What Causes AI to go Rogue?

A rogue AI is an AI model that behaves in ways that deviate from how it was trained. When an AI system behaves unpredictably, it poses a risk to its users, and can potentially cause harm.


There are several reasons why an AI system can behave erratically, especially if it is confronted by an unforeseen circumstance.


An AI system can go rogue as a result of inadequate training data, flawed algorithms, and biased data.


A lack of transparency in how an AI system makes decisions and the absence of accountability for its actions and decisions are factors that can lead to AI models behaving roguishly.


Threat actors who successfully hack an AI system can cause it to behave in an unintended way by injecting malware or poisoning the training data.

Ethical and Legal Implications of AI’s Threatening Behavior

Google cited “” as a reason for delaying the release of its generative conversational AI system.


However, due to pressure from the disruptive ChatGPT, Google released Bard which cost the tech giant $100 billion for giving a wrong response during its first public demo.


In 2022, Meta released the but took it offline within two days because the bot was making false and racist statements.


In 2016, Microsoft recalled its AI chatbot, Tay, within a week of its launch because it was spewing .


However, despite Bing Chat’s threatening behavior, Microsoft has ignored to discontinue the use of Bing Chat and doubled down on its AI rollout by adding the chatbot to .


There are ethical and legal concerns about how AI systems are developed and used.


Though Koko’s use of a chatbot was more of an ethical implication, there are instances such as discriminatory practices and human rights violations where AI-powered technologies have been a cause of litigation.


However, it is different when AI goes rogue and threatens harm, like in the case of Bing Chat. Should there be legal implications? And if there are, who is getting sued? It is challenging to determine culpability, accountability, and liability when an AI system causes harm or damage.


There are copyright against companies behind popular generative AI models such as ChatGPT, Midjourney, and Stability AI. The attempt to use an was dropped due to threats of prosecution and possible prison time.


If ongoing litigations against AI laboratories and companies are taken as precedence, it is safe to assume that developers of rogue AI may be held liable for how their AI systems misbehave.


For those organizations still thinking about whether they will be held liable if their AI technology goes rogue, the EU’s Artificial Intelligence Act has for organizations that develop or own AI systems that pose risk to society and violate human rights.

How to Prevent Erratic Behavior in AI

The responsibility of ensuring how AI systems behave lies with the developers and businesses using them in their operations.


More like ensuring data protection, businesses and laboratories must ensure that appropriate controls are implemented to mitigate unauthorized manipulation of AI data and codes.


Preventing rogue AI requires a combination of technical and non-technical measures. These include robust testing, transparency, ethical design, and governance.


Adequate cybersecurity measures such as access control, vulnerability management, regular updates, data protection, and effective data management are crucial to ensure unauthorized access to AI systems is prevented.


Human oversight and collaboration with different stakeholders such as AI developers, researchers, auditors, legal practitioners, and policymakers can help to guarantee that AI models are developed reliably and responsibly.


Photo by Tierney - stock.adobe.com

Responsible AI Favors Society

In just three months, ChatGPT has transformed our world. There is great potential for generative conversational bots and other AI systems to disrupt businesses, enhance customer experience, transform society, and create innovative opportunities.


However, AI systems can go rogue if they are not developed and deployed securely and responsibly. Rogue AI can pose serious risks to users and society. AI developers and businesses using AI systems can become liable if their AI systems cause harm or damage.


Ensuring that AI systems behave as intended involves the collective responsibility of AI developers, users, and policymakers.


Appropriate technical, non-technical, and regulatory measures must be in place to verify that AI is developed and deployed in a way that is safe, secure, and beneficial to society.
바카라사이트 바카라사이트 온라인바카라