New Cracks in Reality

Deep fakes, voice cloning, and other technologies are making fraud more convincing and widespread than ever, but there’s another threat to our ability to answer “what is real?” 

Image created using DALL-E.

An ongoing topic here is how answers to the question “what is real?” keep changing as new technologies (Generative AI in particular) make it easier to create images, including photorealistic images of people and events that never happened. Such images (or videos, or voices) aren’t inherently wrong: ethical issues pop up when people make truth claims about them maliciously (disinformation), innocently pass lies along (misinformation), or stupidly don’t think about the consequences of their actions.

Here are two recent examples.

Example #1, from the stupid side, comes from Elon Musk, about whom I’ve written many times before.

This time, Musk retweeted (because “re-Xed” sounds like you’ve broken up with somebody, gotten back together, and then broken up again) a parody video by @MrReaganUSA in which a clone of Kamala Harris’ voice parrots extreme right-wing claims about her. While my POV is that the video is more mean-spirited than funny, I acknowledge @MrReaganUSA’s right to make it. I even think that @MrReaganUSA has the right to clone a public figure’s voice in a parody video. Plus, @MrReaganUSA says “Kamala Harris Campaign Ad PARODY” above the video post on X. The video has received over 24 million views so far.

The problem came when Musk retweeted the video under the caption “This is amazing” with a laughing emoji but stupidly did not mention that it was a parody. Musk’s retweet has received over 134 million views so far. (It may be my mistake to give Musk the benefit of the doubt here, treating him as thoughtless or merely stirring things up rather than deliberately engaging in disinformation.)

No sensible person watching the video will think that Harris really said horrible things about herself, but that just means the people spreading it around aren’t sensible, and eventually somebody will take it seriously. (See Why People Believe Conspiracy Theories.)

I sure wish @MrReaganUSA had put “Kamala Harris Campaign Ad PARODY” into the video itself rather than as a separate caption.

Example #2 is a story of cybercrime, rank disinformation, and heroic common sense.

According to Fortune ($), WhatsApp messages and then a call from an unknown number came to the phone of a Ferrari executive. When he picked up, his CEO, Benedetto Vigna, was on the line asking him to sign an NDA and be ready for a discreet hedge fund transaction with a Chinese company.

The executive smelled four-day-old fish and asked Vigna to verify his identity by telling him what book Vigna had recently recommended, whereupon the caller hung up because, of course, it wasn’t really Vigna but a scammer using voice-clone technology to sound like Vigna in real time. It was fortunate that the executive and Vigna had recently chatted about books rather than just business; otherwise, such an effective test might not have been available.

Autobiographical digression: there’s nothing new about using technology to create fraud. (I discuss the history here.) Reading these two recent stories, I remembered a teleplay I wrote with similar themes. This was in the late 1990s, during a spate of screenwriting while I was working in Hollywood after grad school and before the start of my digital career.

It was a spec script for Law & Order called “He Said, She Said” about a man accused of sexual assault who claimed the victim, a woman, had elaborately arranged the scenario to live out her fantasy, and he had the online messages to prove it. As the detectives and attorneys investigated (spoiler alert!), it turned out that another man, whom the victim had romantically rejected, had created a new online identity pretending to be the victim and engineered her assault. You can read the script here. End of digression.

Answering “what is real?” is about to get harder…

One of my consistent points about deep fakes and other fraudulent images (and videos, and voices) has been that fakery is an arms race, a tale as old as time or at least as old as photography, so it isn’t the fake-making tech that is going to make discerning reality harder.

It’s us.

If you’re following the roller coaster news cycle about AI at all, then you might have heard about AI agents (a.k.a. “agentic AI”), which are like Digital Assistants (e.g., Siri, Alexa, ChatGPT, Google Assistant) but more independent. You tell Siri what to do on a transactional, case-by-case basis: “please set a timer for 15 minutes” or “please remind me tomorrow to call Mom” or whatever.

AI Agents are more sophisticated: instead of tasks, you give them goals. Here’s a snippet from an MIT Technology Review ($) explainer:

The grand vision for AI agents is a system that can execute a vast range of tasks, much like a human assistant. In the future, it could help you book your vacation, but it will also remember if you prefer swanky hotels, so it will only suggest hotels that have four stars or more and then go ahead and book the one you pick from the range of options it offers you. It will then also suggest flights that work best with your calendar, and plan the itinerary for your trip according to your preferences. It could make a list of things to pack based on that plan and the weather forecast. It might even send your itinerary to any friends it knows live in your destination and invite them along. In the workplace, it could analyze your to-do list and execute tasks from it, such as sending calendar invites, memos, or emails.
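If it helps to see the task-versus-goal distinction in code, here is a minimal sketch in Python. The function names are entirely hypothetical; no real assistant or agent API works this way, and a real agent would of course plan with a model rather than a canned list.

```python
# A toy contrast between a task-based assistant and a goal-based agent.
# All names are hypothetical; this assumes no real product's API.

# Task-based assistant: each request is one explicit, self-contained command.
def set_timer(minutes: int) -> str:
    return f"Timer set for {minutes} minutes."

# Goal-based agent: it receives an outcome plus standing preferences,
# and is responsible for inventing the intermediate steps itself.
def pursue_goal(goal: str, preferences: dict[str, str]) -> list[str]:
    return [
        f"search for options that satisfy: {goal}",
        f"filter results using preferences: {preferences}",
        "present a shortlist and book whichever option is chosen",
    ]

print(set_timer(15))
for step in pursue_goal("book my vacation", {"hotel_stars": "4 or more"}):
    print(step)
```

The task function does exactly and only what it is told; the goal function is trusted to fill in everything in between, and that trust is where the trouble starts.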

That grand vision sounds convenient, but the challenging part is that it puts daylight between the things I say and the things people understand me to be saying.

In relevance theory (part of pragmatics), you not only mean what you say but also all the things that are implied by what you say (“implicatures”). An implicature of “please take the Jeep to the gas station” is that I expect you to fill the tank, not just drive to the local Shell and back.

Agentic AI has enormous potential for misunderstanding. If I say to my agent, “please set up a meeting with Tom as soon as possible,” and my agent then reaches out to Tom or to Tom’s agent, an implicature of my request is that the meeting is urgent. However, I might have been thinking that the meeting should happen as soon as possible and as soon as is convenient for Tom; if I didn’t say that last part, then a misunderstanding is likely.
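To make that gap concrete, here is a minimal sketch (again Python, again with hypothetical names and no real agent framework assumed) of one defensive design choice: when the literal words underdetermine intent, the agent asks its principal instead of silently acting on an implicature.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeetingRequest:
    attendee: str
    stated_urgency: Optional[str]  # None = the speaker never said

def interpret(utterance: str) -> MeetingRequest:
    # A real agent would use a language model here; this toy parser
    # records only what was literally said, and flags "as soon as
    # possible" because it carries an implicature (urgent?) that the
    # speaker may not have intended.
    flagged = "asap" if "as soon as possible" in utterance.lower() else None
    return MeetingRequest(attendee="Tom", stated_urgency=flagged)

def act(request: MeetingRequest) -> str:
    if request.stated_urgency == "asap":
        # Ambiguous implicature: ask the principal rather than guess.
        return (f"Before I contact {request.attendee}: by 'as soon as "
                "possible', do you mean drop-everything urgent, or the "
                f"next slot that is convenient for {request.attendee}?")
    return f"Scheduling with {request.attendee} at mutual convenience."

print(act(interpret("please set up a meeting with Tom as soon as possible")))
```

Every place an agent guesses instead of asking is daylight between what I said and what gets done on my behalf.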

When dealing with our own AI agents and other people’s agents, sometimes we’ll find ourselves asking the question, “is this really Janet or her agent talking?”

What worries me is the other times: the times we don’t wonder about the intention behind the statement or request.

In a world of AI agents, that Ferrari executive might not have questioned the incoming call from his CEO. If he was accustomed to interacting with his CEO’s AI agent, then such a request might seem to be a clear implicature of the CEO’s intention, rather than cybercrime.

People have been fretting about the unreliability of communication that isn’t face to face (or in real time, like on a phone call) for over 2,400 years. In Plato’s Phaedrus, Socrates’ critique of writing is that the written word isn’t dynamic: it can’t adapt to a reader’s understanding in real time the way a person can.

Today, with AI agents, communication that isn’t face to face and/or in real time can be dynamic. The bedeviling question is whether the interactive, dynamic, adaptive communication that happens with an AI agent accurately represents the intentions of the person the AI agent represents.

Answering the question “what is real?” is getting more complicated.


Note: if you’d like to get articles like this one, plus a whole lot more, directly in your inbox, then please subscribe to my free newsletter: the Weekly Dispatch.


Image Prompt: I pasted the text of the main article into ChatGPT and asked it, “please create an abstract image representing this article.” ChatGPT then translated my prompt into “an abstract representation of the changing concept of reality due to generative AI, featuring elements like fractured, overlapping faces, digital code.” The result was this issue’s image.

