I feel like we entered a new era earlier this year when Google scientist Blake Lemoine declared that he thought Google’s LaMDA artificial intelligence is “sentient,” and that the company should probably be asking LaMDA’s permission before studying it. The news this month is that Google fired Lemoine. The stated reason was that he violated a confidentiality agreement, but few observers could separate the termination from Lemoine’s announcement and the controversy that followed.
Let me explain: I don’t think this story is important because the computer was sentient – in fact, I’m quite sure it wasn’t. I just find it strange that we’re even talking about it, and the way we’re talking about it is even stranger. Several leading computer scientists, and Google as a company, have gone on record stating that the claim was preposterous. The story didn’t amount to much as a computer science event, but as a pop culture phenomenon, it was pure gold. Was this the classic dystopian sci-fi story of a man falling in love with a machine? Or is there a chance that this program is seriously a life form? (“Whoa, kind of makes you think, doesn’t it…?”)
The oddest part was that these several leading computer scientists thought it was important to explain that, despite what you’re thinking, no seriously, the program really doesn’t feel things the way that we do. To be fair, they were probably working peacefully in their labs when a press guy showed up and turned a TV camera on them, but I still wonder whether we’re approaching this the right way – and whether this “is it alive?” question is a diversion from the serious questions we should be asking.
The term “sentient,” in this case, relates to the state of having feelings, rather than just knowledge. Many have equated this to experiencing a state of consciousness. So this debate has migrated from the cold, analytical realm of computer science to the fuzzy sphere of metaphysics, where these concepts are quite difficult to define.
Before you say whether a computer has consciousness, you kind of have to define what consciousness is, and there is a vast range of answers for that, depending on whether you are talking to a priest, a psychologist, a neurologist, or a new age mystic. But the point is, AIs like LaMDA are not created to be human – they are created to make people think they are human. If you learn to tap into human response patterns and emotional cues, humans will treat you differently. (Sorry dog lovers: That’s what your dog is doing.)
Computer scientists are working overtime right now trying to create systems that behave as though they are conscious so that humans will react to them more “naturally.” In other words, these systems will manipulate us emotionally.
We will then have two choices:
- Fall for these artificial response patterns and emotional cues (react to the machines as if they were our friends – in other words, be manipulated)
- Ignore the artificial response patterns and emotional cues (in other words, get practice every day treating entities that behave like humans in a callous and uncaring manner that denies their humanity)
Neither option sounds particularly appealing to me. Of course, Google, Meta, and the other for-profit corporations that are working on these kinds of systems will say they just want to build a better chatbot, but that’s the whole problem with this tech space: We’re not so good at putting genies back in bottles once they get out.
Editor in Chief,