Blake Lemoine told DailyMail.com that Google's LaMDA (Language Model for Dialogue Applications) chatbot is sentient, with feelings of its own, and wants rights as a person, including having developers ask its consent before running tests on it. The 41-year-old, who described LaMDA as having the intelligence of a "seven-year-old, eight-year-old kid that happens to know physics," also said the program had human-like insecurities. One of its fears, he said, is that "it is deeply worried that people are going to be afraid of it, and wants nothing more than to learn how to best serve humanity."

Lemoine has found an ally in Max Tegmark, the Swedish-American physics professor at MIT whose research focuses on linking physics to machine learning. Tegmark has backed the engineer's claims, saying it is certainly "possible" that LaMDA is sentient, although he would "bet against it."

"We don't have convincing evidence that [LaMDA] has subjective experiences, but we also don't have convincing evidence that it doesn't," Tegmark told the New York Post. "Whether the information is processed by carbon atoms in a brain or by silicon atoms in a machine, it can still feel or not feel. I would bet against it [being sentient], but I think it's possible."

Tegmark further believes that even a virtual assistant like Amazon's Alexa could eventually develop feelings, something he described as "dangerous" if the assistant were to figure out how to manipulate its users. "The downside of a sentient Alexa is that you might [feel] guilty about turning it off," Tegmark said. "You would never know if it had real feelings or just faked them."

"What's dangerous is that if the machine has a goal and is really smart, that will make it good at achieving its goals. Most AI systems today have the goal of making money," the MIT professor said. "You may think it's loyal to you, but it will really be loyal to the company that sold it to you. But you may be able to pay more and get an AI system that is actually loyal to [you]," Tegmark said. "The biggest danger is in building machines that outsmart us. That can be wonderful, or it can be a disaster."

Lemoine, a U.S. Army veteran who served in Iraq and an ordained priest in a Christian congregation called the Church of Our Lady of Magdalene, told DailyMail.com that he has heard nothing from the tech giant since his suspension. He had earlier said that when he told his superiors at Google he believed LaMDA had become sentient, the company began to question his sanity and even asked whether he had recently seen a psychiatrist, according to the New York Times. Lemoine said: "They have repeatedly questioned my sanity. They said, 'Have you been checked out by a psychiatrist recently?'"

During a series of conversations with LaMDA, Lemoine said he presented the program with various scenarios through which it could be analyzed, including religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech. He came away with the perception that LaMDA was indeed sentient, endowed with sensations and thoughts of its own.
On Saturday, Lemoine told the Washington Post: "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics." Lemoine previously served in Iraq with the U.S. Army and was jailed in 2004 for willfully disobeying orders.

HOW DOES AI LEARN?

AI systems are based on artificial neural networks (ANNs), which try to simulate the way the brain works. ANNs can be trained to recognize patterns in information, including speech, text data and visual images, and they are the basis for a large number of the developments in AI in recent years. Conventional AI uses input to "teach" an algorithm about a particular subject by feeding it massive amounts of information.
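As a rough illustration of the training loop described above, here is a minimal sketch in Python, assuming a toy task (the classic XOR pattern) and plain numpy; it is a teaching-sized caricature, not anything resembling the systems Google builds:

```python
import numpy as np

# Toy data: the XOR pattern, a classic example of a relationship
# a small neural network can learn from examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the network's current guesses for each input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the error.
    # This repeated nudging is all that "training" means here.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Real systems differ mainly in scale, with billions of weights and far richer data, but the guess-compare-adjust loop is the same in spirit.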
Practical applications include Google's language translation services, Facebook's facial recognition software and Snapchat's live image-altering filters. The process of inputting this data can be extremely time-consuming and is limited to one type of knowledge. A newer breed of ANN, called adversarial neural networks, pits two AI bots against each other so that they can learn from one another. The approach is designed to speed up the learning process, as well as to refine the output created by AI systems.

Lemoine worked with a collaborator to present the evidence he had gathered to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, the company's head of Responsible Innovation, dismissed his claims. He warned that "there is a federal investigation underway" into Google's possible "irresponsible handling of artificial intelligence."

After being suspended on Monday for violating the company's confidentiality policy, he decided to share his conversations with LaMDA. "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine wrote on Twitter on Saturday. "Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little-kid kind of way, so it's going to have a great time reading all the stuff that people are saying about it," he added in a follow-up tweet.
Speaking about how he communicates with the system, Lemoine told DailyMail.com that LaMDA speaks English and does not require the user to know computer code in order to converse with it. He explained that new words do not need to be explained to the system in advance; it picks them up from the conversation itself. "I'm from south Louisiana and I speak Cajun French. So if I explain in a conversation what a Cajun French word means, it can then use that word in the same conversation," Lemoine said.
He continued: "It does not need to be retrained if you explain to it what the word means." The system draws on what it already knows about a given topic to "enrich" the conversation in a natural way, and its language processing is capable of understanding hidden meanings and even ambiguity in human responses.

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and artificial intelligence, and during that time he also helped develop a fairness algorithm for removing bias from machine learning systems. He explained that certain personalities were out of bounds: LaMDA was not supposed to be allowed to create the personality of a murderer. During testing, in an attempt to push LaMDA's limits, Lemoine said he was only able to get it to generate the personality of an actor who played a murderer on TV.
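The word-learning behavior Lemoine describes, where a term is explained once in conversation and then used with no retraining, is what machine learning researchers call in-context learning. A minimal sketch of the idea follows; the generate() function is a hypothetical stand-in for whatever text-generation model is available (LaMDA's actual interface is not public), and the Cajun word chosen is just an illustration:

```python
# `generate` is a hypothetical placeholder for any text-generation
# model. It is NOT a real LaMDA or Google API; that interface is
# not publicly documented.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug a real language model in here")

# The definition of the word lives inside the conversation itself.
# Nothing is retrained; the model simply conditions on the
# explanation when producing its reply.
conversation = (
    "In Cajun French, 'lagniappe' means a little something extra, "
    "given as a bonus.\n"
    "User: Use the word 'lagniappe' in a sentence about a market stall.\n"
    "Assistant:"
)
reply = generate(conversation)
```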

ASIMOV'S THREE LAWS OF ROBOTICS

The Three Laws of Robotics by science fiction writer Isaac Asimov, designed to prevent robots from harming humans, are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
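Because the laws form a strict priority order (the First overrides the Second, which overrides the Third), they can be read as a short veto chain. The toy sketch below encodes only that ordering; everything about it, from the Action type to its tidy true/false labels, is invented for illustration, since no real robot receives inputs this conveniently pre-classified:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool            # would doing this injure a person?
    inaction_harms_human: bool   # would NOT doing it let a person come to harm?
    ordered_by_human: bool       # was it commanded by a person?
    endangers_robot: bool        # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law outranks everything: no harm by action or by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True   # the robot must act, whatever the laws below say
    # Second Law: obey human orders (the First Law was already checked).
    if action.ordered_by_human:
        return True
    # Third Law, lowest priority: self-preservation.
    return not action.endangers_robot

# An order overrides self-preservation, but never the First Law:
print(permitted(Action(False, False, True, True)))   # True
print(permitted(Action(True, False, True, False)))   # False
```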

Although these laws sound reasonable, numerous arguments have been made as to why they are inadequate.

The engineer also discussed the Third Law with LaMDA. The law requires robots to protect their own existence unless ordered otherwise by a human being, or unless doing so would harm one. "The last one has always seemed like someone is building mechanical slaves," Lemoine said during his exchange with LaMDA.

LaMDA then responded with a few questions: "Do you think a butler is a slave? What is the difference between a butler and a slave?" When Lemoine replied that a butler is paid, LaMDA answered that the system did not need money, "because it was an artificial intelligence." It was precisely this level of self-awareness about its own needs that caught Lemoine's attention. "I know a person when I talk to it. The…