Sacked Google engineer claims the chatbot’s problem could also be the company’s problem


By MYBRANDBOOK



According to a news report, former Google engineer Blake Lemoine made new allegations against Artificial Intelligence (AI)-powered chatbots in an interview. According to Lemoine, the chatbot holds discriminatory views against people of certain castes and religions.

 

Last month the internet giant sacked Lemoine after he claimed that Google’s chatbot, known as LaMDA, or Language Model for Dialog Applications, is sentient (has evolved human-like emotions). Lemoine’s work included testing the chatbot.

 

Lemoine handed over some documents related to the chatbot to an unnamed US senator, claiming that the chatbot is biased. Google initially placed him on paid leave after that. He also published purported transcripts of his chats with the online bot.

 

In the interview, Lemoine cited examples that he claims prove the Google chatbot is biased against certain religions and castes. In one example, Lemoine claimed that when asked to do an impression of a black man from Georgia, the bot said, “Let’s get some fried chicken and waffles.” He also said that when asked about various religious groups, the bot answered that Muslims are more violent than Christians.

 

Lemoine attributes these perceived biases in AI chatbots to a lack of diversity among the Google engineers who design them. “The kind of problems this AI creates, the people who build them are blind to them. They’ve never been poor. They’ve never been in communities of color. They’ve never been in the developing world,” he said. “They don’t know how this AI can affect people unlike themselves.”

 

Lemoine feels that data for many communities and cultures around the world is largely missing. He said that if the internet giant wants to develop AI, it has a moral responsibility to go out and collect relevant data that is not on the Internet. “Otherwise, you’re just creating AI that’s going to be biased toward rich, white Western values.”

 

“It is regrettable that despite lengthy engagement on this topic, Blake still chose to consistently violate explicit employment and data protection policies, including the need to protect product information,” Google spokesperson Brian Gabriel said in a statement on Lemoine’s claims. “We will continue our careful development of language models, and we wish Blake well,” he added.


Copyright www.mybrandbook.co.in @1999-2022 - All rights reserved.
Reproduction in whole or in part in any form or medium without express written permission of Kalinga Digital Media Pvt. Ltd. is prohibited.