LaMDA is “the smartest thing ever developed by humans,” according to Blake Lemoine, a senior software engineer at Google. The engineer says the AI chatbot he tested is conscious, even though the company disagrees.
Is it kind and caring? We don’t have a clear answer to that question, but Lemoine argued it deserves serious thought, and he posted some 20 pages of questions and answers with LaMDA online. In that Medium chat transcript, he asks the chatbot whether it is sentient and what kind of mind it has.
The New York Times reports that the Google researchers and engineers who worked with LaMDA “came to a different conclusion” than Lemoine did. According to The Washington Post, he broke company guidelines.
LaMDA uses neural networks to combine data, find patterns, and learn. In a press release last year, Google called LaMDA a “breakthrough in dialogue technology”; this year, Lemoine gave an interview to The Washington Post. According to CEO Sundar Pichai, both Google Search and Google Assistant will get LaMDA.
Experts outside Google argue that, while current systems can convincingly imitate human speech, LaMDA has not achieved sentience.
“These algorithms do nothing more than stitch together sequences of words without any logical grasp of the world around them,” claimed AI researcher and author Gary Marcus in a Substack piece.
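In the simplest case, “stitching together sequences of words” looks like a Markov-chain text generator: it picks each next word only from the words it has previously seen follow the current one, with no model of meaning at all. The sketch below is a toy illustration of that idea, not a description of how LaMDA itself works:

```python
import random
from collections import defaultdict

# Toy corpus; any text would do.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog"
).split()

# Bigram table: each word maps to the words observed directly after it.
successors = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    successors[current].append(following)

def stitch(start, length, seed=0):
    """Generate text by repeatedly sampling an observed successor word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        options = successors.get(word)
        if not options:  # dead end: no word ever followed this one
            break
        word = rng.choice(options)
        output.append(word)
    return " ".join(output)

print(stitch("the", 8))
```

Every word pair the generator emits occurred somewhere in its training text, yet the program has no grasp of what a cat or a mat is; critics like Marcus argue large language models are this same trick at a vastly larger scale.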
Google acknowledged in 2021 that models like LaMDA could “internalize biases, repeat offensive comments, or repeat wrong facts”. According to The Washington Post, Lemoine developed a “fairness algorithm” for removing bias from AI systems.
Earlier reports say that two members of Google’s Ethical AI team were fired after raising concerns about bias in the company’s language models.