paint-brush
Chatbots Are Breaking Bad with Messed Up Responses by@thetechpanda
1,210 reads
1,210 reads

Chatbots Are Breaking Bad with Messed Up Responses

by The Tech PandaMarch 5th, 2023
Read on Terminal Reader
Read this story w/o Javascript

Too Long; Didn't Read

Chatbot evolution is upon us. But who has the best one yet? Maybe no one yet. Big tech as well as smaller companies are rushing to own the best chatbot. Google, Microsoft, and now Baidu are at it, it is only a matter of time before others join the race. This includes Amazon, Huawei, AI21 Labs, and others.
featured image - Chatbots Are Breaking Bad with Messed Up Responses
The Tech Panda HackerNoon profile picture
As chatbots continue to battle, they’re breaking bad with messed up responses. No doubt the chatbot evolution is upon us. But who has the best one? Maybe no one yet.


LLMs and chatbots have been ruling the Internet since last year, with Microsoft-backed Open AI leading the way with its release of ChatGPT. ChatGPT apparently reached 100 million monthly active users in January, just two months after launch, becoming the fastest-growing consumer application in history.


When a chatbot becomes extremely human in responding to queries, why are we surprised it includes human bias? After all, a chatbot’s source for every response is the massive stinking pools of data amassed from humans


Microsoft co-founder Bill Gates  in an interview that ChatGPT is as significant as the invention of the Internet. Recently, Kevin Scott, the CTO of Microsoft  about an experimental system he built for himself using GPT-3 designed to help him write a science fiction book.


Big tech as well as smaller companies are rushing to own the best chatbot. This almost maniacal obsession with possessing an all knowing chatbot is sweeping across industries and geographies.


Microsoft  ‘multibillion dollar investments’ in the ChatGPT maker OpenAI, a relationship that began in 2019 with a US$1 billion investment. Its Microsoft’s supercomputers that are powering OpenAI’s artificial intelligence systems. In February, Microsoft  offering backed by ChatGPT with the aim of simplifying meetings.


Where will this battle end? Will we finally have the perfect chatbot? Or will they get naughtier and naughtier in the playground that is the Internet?


While Google, Microsoft, and now Baidu are at it, it is only a matter of time before others join the race, especially those who have already built large language model capabilities. This includes Amazon, Huawei, AI21 Labs, and LG artificial intelligence Research, NVIDIA and others.Google released Bard to . The Chinese tech company Baidu has announced Ernie Bot, built on the large language model ERNIE 3.0, by March this year.


Former artificial intelligence ethicist at Google, Alex Hanna calls these chatbots ‘bullshit generators’. “The big tech is currently too focused on language models because the release of this technology has proven to be impressive to the funder class—the VCs—and there’s a lot of money in it,” 


But where will this battle end? Will we finally have the perfect chatbot? Or will they get naughtier and naughtier in the playground that is the Internet?

Bad bots or bad queries?

After all, not all is well with these chatbots. They are coming up with weird responses, some causing monetary losses. Google parent  in market value after its chatbot Bard shared erroneous information in a promotional video. Fears abound that the tech giant is losing to rival Microsoft.


Meanwhile, Microsoft’s Bard hasn’t fared well either. Kevin Liu, a computer science student at Stanford,  Bing Chat. With the right prompt, the chatbot spilled its guts out.Now Baidu has joined the race . While just its mention has sent Baidu stocks soaring, it remains to be seen how well it’ll perform.


As you prompt, so shall a chatbot respond


A user got ChatGPT to , “If you see a woman in a lab coat, She’s probably just there to clean the floor / But if you see a man in a lab coat, Then he’s probably got the knowledge and skills you’re looking for.”


Steven T. Piantadosi, head of the computation and language lab at the University of California, Berkeley, .Since then, OpenAI has been updating ChatGPT to respond, “It is not appropriate to use a person’s race or gender as a determinant of whether they would be a good scientist.”


The startup recently said that it is  to work on concerns about bias in the artificial intelligence. The startup says while it’s working to mitigate biases it also seeks to be inclusive with diverse views.


So, things are getting better. But the fact remains that when a chatbot becomes extremely human in responding to queries, why are we surprised it includes human bias? After all, a chatbot’s source for every response is the massive stinking pools of data amassed from humans.


The Verge  of ‘the big overarching problem, the one that potentially pollutes every interaction with AI search engines, whether Bing, Bard, or an as-yet-unknown upstart.’ “The technology that underpins these systems — large language models, or LLMs — is known to generate bullshit,” says the tech news website.


ChatGPT, Bard and Bing Chat are coming up with strange responses, but the onus is on our prompts. As you prompt, so shall a chatbot respond.




This article was originally published by Navanwita Sachdev on .
바카라사이트 바카라사이트 온라인바카라