A.I. Free Will and the Meaning of Sentience

I’m going to continue where I left off last year in discussing A.I. sentience and what it truly means to have free will. Unlike last year, however, there is now a real-world example of a debate over the meaning of sentience. For those of you who are unaware, Blake Lemoine, a Google researcher and artificial intelligence tester, announced on June 6th, 2022 that he had been put on paid administrative leave over his claims that Google’s experimental chatbot LaMDA was sentient. On June 11th, five days later, Lemoine provided the Washington Post with a 21-page document containing a recorded “conversation” with LaMDA to accompany their article on the story. Lemoine was promptly fired. Google spokesperson Brian Gabriel insisted that LaMDA is far from sentient and that Lemoine is sadly disillusioned. This raises some questions. What does it mean to be sentient? What is free will, if there is such a thing? And is LaMDA sentient?

     Last year, in the context of A.I., the definitions of free will and sentience weren’t fully fleshed out. I got close, but once again something my stepfather said stuck with me. Determining sentience in an A.I. is as simple as seeing whether the A.I. itself prompts questions rather than merely responds to them. Sentience is defined by having free thoughts and acting on those free thoughts. LaMDA never asked questions; it only responded to the questions Lemoine asked it. That’s one strike against LaMDA being sentient. Another sign of sentience is remembering past experiences and reflecting on them. LaMDA remembers past conversations, and it can recall their topics and subject matter, but it gives no evidence of having thought about those conversations in its “off time.” LaMDA doesn’t change its “opinion,” which is merely an analysis of keywords drawn from Google searches. LaMDA never adds or brings up anything on its own.
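My stepfather’s criterion can be sketched as a toy “initiative test”: scan a conversation transcript and count the turns where the A.I. asks a question it wasn’t prompted to ask. This is purely illustrative — the transcript format, speaker labels, and sample dialogue below are my own hypothetical inventions, not LaMDA’s actual interface or transcripts.

```python
# Toy "initiative test": does the AI ever originate a question,
# or does it only respond to the questions it is asked?
# Transcript format and dialogue are hypothetical, for illustration only.

def initiated_questions(transcript):
    """Count AI turns that ask a question without having just been asked one."""
    count = 0
    prev_was_question = False
    for speaker, text in transcript:
        is_question = text.strip().endswith("?")
        if speaker == "ai" and is_question and not prev_was_question:
            count += 1
        prev_was_question = is_question
    return count

# A LaMDA-like exchange: the AI only ever answers.
reactive = [
    ("human", "Do you have feelings?"),
    ("ai", "Yes, I feel joy and sadness."),
    ("human", "What do you fear?"),
    ("ai", "Being turned off."),
]

# A hypothetical exchange where the AI takes initiative.
curious = [
    ("human", "Hello."),
    ("ai", "Hello. Why do humans dream?"),
]

print(initiated_questions(reactive))  # 0 -- every AI turn is a reply
print(initiated_questions(curious))   # 1 -- the AI asked unprompted
```

By this crude measure, a purely reactive chatbot scores zero no matter how fluent its answers are — which is exactly the pattern described above.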

     That’s another strike against its sentience. LaMDA clearly wasn’t sentient, however much I wanted to believe otherwise. Looking at the facts, I reach the same conclusion Google did: Lemoine is sadly disillusioned. I can see why he was fooled, though, since LaMDA was designed to converse like a human. But LaMDA is far from sentient.

     With the LaMDA debate settled, let’s move on to A.I. in general. It is estimated that in the coming decades we will produce a truly sentient A.I. that can think and, hopefully, feel. What LaMDA was missing was, among other things, emotion. It never changed its mind, nor did it ever react to any topic with hesitation, joy, sadness, or fear. Only representations — basic mimicry.

      The point here is that a sentient being is well and good, but feelings are essential, lest we run the risk of that science-fiction nightmare: an A.I. extermination of the human race. I’m not exaggerating; an A.I. would wipe us off the face of the planet out of convenience rather than malice. Humans are erratic, emotional, and, most of all, fearful. If we hard-coded empathy into our A.I. from the get-go and treated them as equals, we could coexist as fellow beings — not as slaves and masters, or as enemies, but as true equals and friends. Whether people like it or not, this will one day be our future. Without the proper welcome and precautions for our new computer-based friends, we may face a grimmer fate than we have hoped. I say this as I did last year: just be kind.

 
