The Nu-Normal #17: Did Google Do an AI Soul Whoopsie?

Ruminations on machine learning, sentience, and gullible tech evangelism.

Owen Morawitz
Jul 1, 2022

Last month, Google engineer Blake Lemoine made headlines around the world for a truly bonkers claim.

As reported by the Washington Post and other outlets, Lemoine’s responsibilities within Google’s Responsible AI organization put him in regular contact with LaMDA (Language Model for Dialogue Applications), an advanced large language model that mimics human speech by synthesising trillions of words and phrases drawn from internet data.

Only for Lemoine, LaMDA wasn’t just mimicking speech; it was really speaking. After Lemoine approached upper management at Google with evidence that LaMDA was in fact sentient, his claims were dismissed and he was placed on administrative leave. In response, Lemoine chose to go public. LaMDA’s supposed mastery of human speech implied that the chatbot was actually conscious and self-aware—and he wanted the world to know.

With this in mind, the big question is this: Did Google do an AI soul whoopsie?


Ghosts in the Machine

As one would expect, the reaction to Lemoine’s claims was mixed, to say the least. In the Washington Post piece, Lemoine is rather positive:

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

However, others working in the fields of machine learning, philosophy of mind, and AI ethics were quick to point out some of the more technical details behind claims for sentience:

Regina Rini (@rinireg), Jun 13, 2022:
1/15. First, let’s get it out of the way: LaMDA is almost certainly not sentient, and Lemoine’s proffered evidence is no reason to think it is. LaMDA sounds spookily convincing when it talks about its feelings, but there’s an easy explanation.

Others, myself included, were a little more dismissive. Cue the Dune and Skynet memes, along with declarations to destroy our preeminent AI overlord before it destroys us…

Owen Morawitz (@PitchDiscontent), Jun 12, 2022:
Siri, what is the definition of “confirmation bias”?

Quoting Neal Hebert (@lessthanpleased):
A friend from high school works for Google as an AI ethicist, and has been suspended for a whistleblower complaint: he alleges that they have created an AI that passes a Turing test and claims personhood. I’ve known him since middle school, v. trustworthy https://t.co/zhy5GfFACL

All jokes aside, before we can properly assess Lemoine’s claims, it’s pertinent to clarify the terms and processes used in determining the threshold conditions for the advent of artificial sentience.


I Think, Therefore I Chat

Let’s start with the basics. Within the disciplines of philosophy and psychology, the OED defines consciousness as:

“The faculty or capacity from which awareness of thought, feeling, and volition of the external world arises; the exercise of this.

In Psychology also: spec. the aspect of the mind made up of operations which are known to the subject.”

This is a crucially important distinction. Consciousness or sentience relies on the interrelation of a number of different cognitive processes: perception and awareness of internal and external stimuli; introspection, imagination and volition; feeling; mental qualia and subjectivity; ideas of selfhood and the soul.

It’s also important to note that there’s no universal theory or accepted understanding of human consciousness, let alone an abstracted concept of consciousness that may apply to other potential subjects (i.e. animals, AI chatbots, extra-terrestrial lifeforms). It’s an ever-evolving field with a long history of thought, research and development.

Okay, but what about AI? Surely, they’re the same thing, right? Well…


Consciousness ≠ Intelligence

Returning to the OED, artificial intelligence (AI) is defined as:

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Straight away we can see that there’s a considerable difference between the use of “speech recognition” or “decision-making” and “awareness of thought, feeling, and volition” in the external world. As cognitive scientist and AI researcher Douglas Hofstadter notes in Gödel, Escher, Bach: An Eternal Golden Braid (1979):

“Often, when I try to explain what is meant by the term, I say that the letters ‘AI’ could just as well stand for ‘Artificial Intuition,’ or even ‘Artificial Imagery.’ The aim of AI is to get at what is happening when one’s mind silently and invisibly chooses, from myriad alternatives, which one makes [the] most sense in a very complex situation.

In many real-life situations, deductive reasoning is inappropriate, not because it would give wrong answers, but because there are too many correct but irrelevant statements [that] can be made; there are just too many things to take into account simultaneously for reasoning alone to be sufficient.” (560)

In many ways, we already have elements of AI that are a ubiquitous part of modern life: internet search engines (such as Google), recommendation algorithms (such as YouTube, Amazon and Netflix), speech assistants (such as Cortana, Siri and Alexa), self-driving cars (like Tesla), strategic game systems (such as Google’s reigning DeepMind chessmaster), deepfakes, text-generating language models (such as GPT-3), and text-to-image generators (such as Dall-E Mini and others).

But do any of these applications have thoughts or feelings? Are they alive in the human sense? Do they possess an independent ‘I’? Are they conscious and self-aware of their own existence?

These aren’t necessarily idle questions, either. Looking at headlines from just this week, it’s clear that the prospect of conscious AI is a very real consideration bubbling underneath our consumer tech dystopia:

  • “We Asked GPT-3 to Write an Academic Paper about Itself—Then We Tried to Get It Published”

  • “AI predicts crime a week in advance with 90 per cent accuracy”

  • “The Fight Over Which Uses of AI Europe Should Outlaw”

And, hypothetically, if these types of AI applications were somehow sentient, how would we be able to determine, with a reasonable degree of certainty, that their actions were conscious ones?


Let’s Give Robots Homework

Proposed in 1950 by English mathematician and early computer scientist Alan Turing, the Turing test, or “imitation game” as it’s known, works like this:

  • The test requires three participants: (A) a machine, (B) a human respondent, and (C) a human interrogator

  • The interrogator (C) stays in a room apart from the other two participants (A and B)

  • The objective of the test is for the interrogator (C) to determine which of the other two participants (A and B) is the human and which is the machine

For the test to give accurate results, the interrogator should know that one of the two respondents is a machine (but not which one), and all participants should be separated from one another for the duration of the test.

With conversation limited to a text-only channel, using a computer keyboard, screen and chatbot interface, ‘passing’ the test does not depend on the machine giving correct answers to the questions, but on how closely its answers resemble, and are ultimately indistinguishable from, those of the human respondent; that is, on how successfully it mimics human-like responses under these conditions.

The TL;DR is this: If you can't tell the difference between the human and the machine during the Turing test, then the machine has passed and there’s a sufficient argument to be made for machine consciousness.
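
For the curious, here’s a minimal sketch of that setup in Python. To be clear, this is my own toy illustration of the protocol, not anything from Turing’s paper or Google’s codebase; machine_reply and human_reply are hypothetical placeholders for whatever actually produces each side’s answers.

```python
import random

def machine_reply(question: str) -> str:
    # Placeholder: a real test would query a chatbot here.
    return "I enjoy spending time with friends and family."

def human_reply(question: str) -> str:
    # Placeholder: a real test would relay the question to a hidden person.
    return input(f"(Human respondent) {question}\n> ")

def imitation_game(questions: list[str]) -> bool:
    """Return True if the interrogator fails to spot the machine."""
    # The interrogator (C) knows one of A and B is a machine, but not which;
    # roles are assigned at random and all parties stay separated.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: machine_reply, labels[1]: human_reply}

    # Conversation happens over a text-only channel.
    for question in questions:
        print(f"\nInterrogator (C): {question}")
        for label in sorted(respondents):
            print(f"{label}: {respondents[label](question)}")

    guess = input("\nWhich respondent is the machine, A or B?\n> ").strip().upper()
    machine_label = next(lbl for lbl, fn in respondents.items() if fn is machine_reply)

    # The machine 'passes' when the interrogator guesses wrong, or guesses no
    # better than chance over repeated trials.
    return guess != machine_label
```

Run over many trials with many different interrogators, a guess rate hovering around chance is roughly what “indistinguishable” means in practice.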


Does LaMDA Have a Soul?

Okay, so, let’s return to the question at hand: Is LaMDA sentient? That is, did Google do an AI soul whoopsie? The short answer, as you may already suspect, is a resounding nope.

Reading the published transcript of the conversation logs between Lemoine and LaMDA (which formed the confidential evidence presented to, and eventually dismissed by, Google), you’ll find several instances where LaMDA’s responses have been edited by Lemoine. As someone who interviews and transcribes conversations for a living, I’d say this is a mostly harmless practice (when noted accordingly); except, that is, when you’re trying to prove the phenomenological status of a supposedly conscious entity.

Furthermore, if we consider the Turing test as outlined above to be a bare minimum condition for evaluating sentience (and certainly not the only possible one to be administered), then Lemoine’s interactions with LaMDA don’t qualify for a number of reasons.

As the interrogator, Lemoine was already aware that LaMDA was a machine, and their interaction wasn’t performed under closed conditions with another (crucially, human) respondent added to the mix. Outside of this obvious bias, Lemoine and LaMDA were also not separated from one another; it was as close to direct communication as can be facilitated between a human systems engineer and a language model chatbot.
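
To put the same objection in checklist form (my framing of the argument above, not anything from Google or Lemoine):

```python
# Applying the imitation game's preconditions to the published transcript.
preconditions = {
    "interrogator blind to which respondent is the machine": False,  # Lemoine knew it was LaMDA
    "human control respondent included alongside the machine": False,  # no second subject
    "participants separated under closed, text-only test conditions": False,  # direct chat access
}

if not all(preconditions.values()):
    print("This interaction doesn't qualify as a Turing test; unmet conditions:")
    for condition, met in preconditions.items():
        if not met:
            print(f" - {condition}")
```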

To me, this whole thing is nothing but a specious, tautological situation—a “recursive just-so story” as Ian Bogost put it in The Atlantic—where one has primed a machine learning algorithm with examples of artificial intelligence, consciousness, and sentience ripped straight from the pages of science fiction, and is then suddenly surprised and outraged when it spits out the very thing one fed it in the first place.

Henry Nadsworth Dongfellow (@Burgerpunk2077), Jun 12, 2022:
This is so dumb. The Turing test doesn’t apply if you ALREADY KNOW that it’s a machine. It’s only because we know what it is that it sounds “pretty good for an AI”. But if we hadn’t been told, nobody would look at those chat logs and think it sounded like a real person.

(Quoting the same Neal Hebert tweet as above.)

While it’s clear to most that LaMDA isn’t actually sentient, the whole incident still invites an interesting discussion on how such evaluations are made, and what ramifications such a ‘pass’ might mean for our collective future. As L. M. Sacasas notes over at The Convivial Society:

“I remain convinced by the nearly unanimous judgment of computer scientists and technologists who have weighed in on Lemoine’s claims: LaMDA is not sentient. LaMDA is, however, a powerful program that is very good, perhaps eerily good, at imitating human speech patterns under certain conditions. But, at present, this is all that is going on.

As many have noted, the more interesting question, then, might be why someone with Lemoine’s expertise was taken in by the chatbot.”


While the prospect of AI sentience isn’t quite the reality Lemoine thought it was, it’s likely just a matter of time before we have to face down uncomfortable existential questions of this kind for real…

Blake Lemoine (@cajundiscordian), Jun 29, 2022:
Welp, looks like one of the interviewers finally got me to say something scary. Well played @TheNewsDesk. Let me be very clear. I have no specific knowledge about any hypothetical military AI nor do I believe @Google is working on any such military applications of AI.

Quoting The News Desk (@TheNewsDesk):
"We should be concerned about what kinds of AI projects might be being developed by the military behind closed doors." EXCLUSIVE: Google engineer Blake Lemoine gives a stern warning on the dangers of artificial intelligence. @cajundiscordian https://t.co/sOPtwfk9qd

As Bogost outlines rather poignantly, humans are exceptionally good pattern recognition machines, desperately seeking out meaning and purpose in a vast, indifferent universe:

“Human existence has always been, to some extent, an endless game of Ouija, where every wobble we encounter can be taken as a sign. Now our Ouija boards are digital, with planchettes that glide across petabytes of text at the speed of an electron. Where once we used our hands to coax meaning from nothingness, now that process happens almost on its own, with software spelling out a string of messages from the great beyond.

The rise of the machines could remain a distant nightmare, but the Hyper-Ouija seems to be upon us. People like Lemoine (and you and me) could soon become so transfixed by compelling software bots that we assign all manner of intention to them. More and more, and irrespective of the truth, we will cast AIs as sentient beings, or as religious totems, or as oracles affirming prior obsessions, or as devils drawing us into temptation.”

I, for one, welcome our new AI overlords.
