When technology enables us to change our personalities to help us achieve our goals, what duty does the first personality have to the second and vice versa?
Last time, I shared a microfiction (1,000 words or less) called “Mr. Hyde’s Letter,” a short science fiction story about Tim and Timothy—two aspects of the same man. Timothy took powerful medications to become Tim, who was more hard-charging and successful, but Tim wasn’t happy with the life the meds enabled.
This time, I’ll explore how realistic the story is or isn’t. You don’t have to read “Mr. Hyde’s Letter” (although it ain’t bad) to understand this week’s piece, but fair warning: Thar Be Spoilers Ahead!
Let’s dig in.
Who Are We?
At the heart of the microfiction are two questions, one philosophical and one technological. The first concerns identity: if I, Brad, take drugs to change myself, then what happens if the new Brad who emerges isn’t happy? Do I go back? But wait: which is the real Brad? Who’s the boss? The second question is whether the drug delivery system I imagine in the microfiction—in which Timothy inserts a cartridge into a shunt in his back each week—is pie-in-the-sky sci-fi or near-future science.
Question #1: Identity
Some neuroscientists, psychologists, and philosophers believe that our sense of a single self behind the wheel of our moment-to-moment consciousness is just an illusion. Instead, we have a complex gang of quarreling selves all vying for control. Sam Harris’ book Waking Up: A Guide to Spirituality Without Religion is a cogent introduction to this idea. Or you can just watch the Pixar movie Inside Out. (I haven’t seen the sequel yet.)
At the heart of “Mr. Hyde’s Letter” is a distinction between the experiencing self and the remembering self, which Daniel Kahneman eloquently explored throughout Thinking, Fast and Slow. Here’s one handy snippet from Kahneman:
The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the quality of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. (page 381)
It’s not a perfect fit, but in the microfiction Timothy is the tyrant: he has professional and personal goals that he has trouble reaching, so he takes meds to suppress the traits that get in his way. But Tim, the self that emerges, is even less happy than Timothy. Timothy and Tim both agree that Timothy’s failings are the source of their different forms of unhappiness, but Timothy doesn’t care about Tim’s unhappiness, only his own.
This is not a new idea. The microfiction’s title refers to Robert Louis Stevenson’s 1886 novella The Strange Case of Dr. Jekyll and Mr. Hyde, in which Henry Jekyll is torn between his desire to be a cerebral scientist and his baser desires. Jekyll develops a potion that both physically and mentally turns him into another person: the brutish murderer Edward Hyde, who feels no guilt. When Jekyll realizes that he is losing control of the transformations (spoiler alert), he commits suicide to prevent Hyde from killing again. The same dynamic drives the Hulk in Marvel comics and MCU movies: scientist Bruce Banner and the super-strong Hulk loathe each other but share a body.
The difference between Timothy/Tim and both Jekyll/Hyde and Banner/Hulk is that neither Timothy nor Tim is a monster. The ethical question is this: what duty does Timothy have to Tim? If I make another person miserable, that’s on me. But if I am already miserable and turn myself into a different person, who is still miserable but more successful in every way, then is that morally wrong? The moment Timothy re-emerges in the story, he becomes the remembering self who is indifferent to the suffering of the experiencing self, Tim.
Question #2: Technology
This concerns how realistic it would be to have something (I imagine it looking like a printer ink cartridge) fitting into a shunt in Timothy’s back over his hip and delivering a slow trickle of medications over the course of a week. In the microfiction, a medical AI wirelessly manages Timothy’s meds, delivering stimulants in the morning and sedation at night, all under the supervision of a human psychopharmacologist.
This is only a handful of years in the future; policy issues will slow progress more than technological ones.
The different pieces exist: we have antidepressants, mood stabilizers, and tranquilizers aplenty. GLP-1 drugs like Ozempic and Mounjaro silence food noise for patients, helping them to eat less. Doctors already monitor pacemakers and CPAP machines remotely and continuously. Implantable drug delivery devices already exist to help patients comply with their med schedules by taking everyday management out of their hands.
A medical AI that controls the flow of medications and collaborates with a human doctor is a relatively simple (because narrowly focused) kind of AI Agent (I’ve explored AI Agents before).
We’ll get to technological viability long before the FDA approves such a system, regulators settle its HIPAA implications, and health insurance companies figure out that it’s cheaper to pay for it than to have patients spend time in expensive hospitals.
In many ways, the microfiction doesn’t go nearly far enough. Timothy’s cartridge combines and delivers off-the-shelf drugs on a preset schedule. Skip just a few years farther into the future, and we’ll see advances in AI, genomics, and pharmaceuticals that create customized drugs for patients based on their genetics. (We already have genetic profiling of what types of psychiatric medications will work best for different gene types.)
In this farther future, instead of a drone delivering Tim’s weekly cartridge, a 3D printer will cook up his personalized meds and fabricate the cartridge in his home.
In addition to shunts that deliver meds, sensors implanted in patients’ bodies will monitor their vital signs moment to moment, so that the medical AI can, say, increase a dose of anti-anxiety medication if the patient’s heart rate and breathing accelerate while the patient is at rest (i.e., not working out). We already have a version of this for diabetics in the combo platter of continuous glucose monitors and insulin pumps.
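The core of that feedback loop is simple enough to sketch in a few lines of code. To be clear, everything below—the function name, the thresholds, the logic—is a hypothetical illustration of the idea, not real clinical decision-making:

```python
# Toy sketch of the kind of rule a monitoring AI might apply:
# flag a possible anxiety response only when vitals are elevated
# AND the patient is at rest. All names and threshold values here
# are made-up illustrations, not real medical parameters.

def should_flag_anxiety(heart_rate, breaths_per_min, is_at_rest,
                        hr_threshold=100, br_threshold=20):
    """Return True if elevated vitals at rest suggest an anxiety episode."""
    if not is_at_rest:
        # Elevated heart rate during a workout is expected, not a red flag.
        return False
    return heart_rate > hr_threshold and breaths_per_min > br_threshold

# Example readings from hypothetical implanted sensors:
print(should_flag_anxiety(110, 24, is_at_rest=True))   # flags the episode
print(should_flag_anxiety(110, 24, is_at_rest=False))  # ignores a workout
```

A real system would of course layer in trends over time, a human doctor’s sign-off, and hard safety limits on dosing; the point is only that the basic sense-and-respond rule is not exotic.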
Ethically, when it comes to managing things like diabetes or heart disease, this is cut and dried. However, things get ethically murky when it comes to psychological changes.
In Aldous Huxley’s Brave New World, characters chronically tranquilize themselves with soma, a drug that prevents them from experiencing any emotional turbulence. Will it really be OK to use drugs to, in essence, murder one personality in order to create another? Who gets to decide?
That’s what “Mr. Hyde’s Letter” is all about.
Note: to get articles like this one—plus a whole lot more—delivered straight to your inbox, please subscribe to my free weekly newsletter!