The role of AI in the death of my father
A sad strange story
First, thank you to everyone who reached out to me over the holidays to offer their condolences on my father’s passing. One touching consequence of writing a newsletter is receiving words of support from people I’ve never met in person, a connection with those who might otherwise be strangers but for having read what I’ve shared here. I am deeply appreciative of your kind words; they’ve meant a lot during my grieving.
Along those same lines, I hope you’ll forgive me if I spend one more week telling you about my dad, because as you’ll soon learn, AI played a bizarre role both in fostering our relationship and, perhaps, in hastening his death. To understand what happened will require that I share with you some intimate details about who he was and who I am, things he struggled with and the struggles between us, and how the intersection of his curiosity with this new technology proved to be one of my most vexing challenges over the last year.
So, to start: My father was trained as a neuroscientist. He received his PhD in 1977 from the University of Florida, which explains the odd fact that I was born in Gainesville, a city I’ve yet to ever visit (my parents moved away when I was just a few weeks old). After he completed his post-doc in southern California, my family moved to Long Island so he could join the newly formed department of neurology at SUNY Stony Brook as an assistant professor. Over the next several years, he published around 50 research articles, predominantly on the effects of sustained drug use on the brain.
Then, circa 1983, something happened that would upend the course of my family’s future. For reasons that remain shrouded in some mystery, my father stopped working, and he would spend the rest of his life on disability. I was only seven at the time, but I remember my dad was “sick,” and was making frequent visits to see specialists in New York City. But sick with what, exactly? You might wonder, and so did I, for all my life, because the doctors could never quite pinpoint anything specifically wrong with him, at least physically. According to medical reports that I found only a few weeks ago, buried deep in my dad’s filing cabinet, neurological examinations suggested he possessed an extraordinarily high degree of verbal capability but struggled with relatively simple logical tasks and problem solving, a “highly unusual combination,” as one examiner put it. Some of his doctors speculated he might have some form of encephalitis, an inflammation of the brain, but others found no evidence of that.
Whatever was going on, the upshot is that my father never worked again in his life. This, as you might imagine, was the source of considerable stress in my family, especially since my mother worked as a school librarian, which of course is not a lucrative career choice. I remember being shocked, and frankly deeply resentful, when I figured out in high school that our family of five (I have one sister and one brother) was technically living below the poverty line. I loathed this state of affairs and blamed my parents, my dad in particular, for the limitations that resulted. Later in my life, I would come to recognize that growing up in these circumstances helped shape me to become self-sufficient and independent in ways I’m now fiercely proud of. Indeed, once I started moving into the world of the rich and privileged and realized how fucked up so many people seem to be when they live a life without constraints, I even became grateful. But in my 20s and into my 30s, well, my relationship with my dad was strained, to say the least.
Although my father was unemployed for nearly all of his adult life, he never lost his intellectual curiosity, and thus he developed an unusual medley of interests to keep his mind active. One of these was the JFK assassination; he became part of the “conspiracy buff” community—we’ll save that story for another time. But another enduring area of interest was technology, where he was remarkably prescient in seeing where things were headed.
To wit: My family was the very first on the block to have a “microcomputer” in our home in the form of a Commodore 64 (and later the Amiga, which I think my dad felt genuine affection for). He also got active on what were called electronic “bulletin board systems” or BBSs, the precursor to the Internet really, where people connected their computers to their phone lines to send files back and forth at rates so painfully slow it defies modern comprehension. What’s more, back in the day the monopolistic phone companies charged exorbitant rates for making “long distance” phone calls, which meant that some BBS users, including my dad, looked for workarounds by hacking (or “phreaking”) phone codes to make free dial-up connections. This was illegal, of course, and when I was in eighth grade my dad was arrested and charged with multiple felony counts of theft by computer intrusion—we’ll save that story for another time, too.
My point is that my dad was always interested in both the brain and technology, so when AI in the form of large language models was dropped into our world, he was absolutely fascinated—and so was I. As such, when I began my own efforts to understand how these models were doing what they were doing, he was right there alongside me for the intellectual journey. And for all my critiques of this new technology, I will forever be grateful that AI created a path for my dad and me to have so many rich conversations over the past two years about its functioning. It’s fair to say it helped restore our relationship, and created a new bond between us, as we together tried to figure out just how similar (or not) its processes are to human cognition. Cognitive Resonance exists in part because of these conversations, as I realized the exploration my father and I were mutually sharing might be of broader interest to the world. (A hypothesis I’m still testing.)
Which is why it’s such a strange, tragic irony that AI played a non-trivial role in the health crisis that led to my dad’s death.
As I’ve shared with you previously, about 18 months ago my father was diagnosed with lung cancer, kidney disease, and Chronic Lymphocytic Leukemia (CLL). Upon receiving the news, he quickly addressed the lung cancer via radiation treatment, and—after some false starts—was eventually able to successfully treat his kidneys as well. But the CLL, well, that’s a more complicated story, and it’s where his use of AI likely hastened his declining health, and magnified the pain he endured.
Here’s what happened: Not long after his CLL diagnosis, my dad’s oncologist recommended he start “Venetoclax-Obinutuzumab” treatment (Ven-Obi), a relatively new approach to addressing CLL that has proven remarkably effective both at extending patient life expectancy and at reducing physical suffering. I did not know his doctor was urging this, however, because my dad did not tell me or my siblings. Instead, my father became convinced that he was undergoing something called Richter’s Transformation, a rare complication of CLL that is particularly painful. There was no medical evidence of this, but my dad nonetheless believed it was happening to him, and that as a result he should refrain from treating his CLL with Ven-Obi because it would only make things worse.
And my father believed that because that’s what Perplexity AI told him.
It was a shock when I discovered what was happening, as you might imagine. I only learned of it when my father gave me access to his online medical record, allowing me to peer into his long-running correspondence with his oncologist. From that I learned that my dad had used Perplexity to self-diagnose his condition and had sent the Perplexity report, if it can be called that, along to his very perplexed and frustrated doctor. Given that I’d spent the better part of a year talking with my father about the unreliability of factual statements made by AI, you can only imagine my extreme frustration at discovering that my efforts had utterly failed within my own family.
AI enthusiasts, whether in education or more broadly, will often try to cover their asses from responsibility for non-factual statements by AI models by saying, “well, you always need to check their output.” As a general matter, that’s a ludicrous claim, since the whole value proposition of these tools is to spare us cognitive effort—but in this instance, it’s exactly what I did. I contacted the doctors who led the study that Perplexity cited in support of its statement that refraining from Ven-Obi was the proper course of action for someone with Richter’s. Much to my surprise, both doctors replied straightaway, and confirmed what I already knew to be true, that Perplexity had misstated the conclusion of their research, and that my father should follow the course of treatment his oncologist was recommending.
Of course I immediately passed this information along to my dad, desperately hoping to appeal to his scientific and empirically oriented belief system. But he didn’t respond at all. I was yelling into the void. It was only after several more months passed, and after his physical condition continued to worsen dramatically, that he finally agreed to start the Ven-Obi treatment his oncologist had recommended a year prior. It didn’t seem to matter at that point, sadly. Although the treatment immediately reduced his white blood cell count, his pain endured, culminating in his death just a few weeks ago.
I am obviously still grappling with all this, and I don’t want to overstate my case. I don’t think AI killed my father. I think it’s possible, perhaps even likely, that in a world without AI, he would still have latched on to some other piece of research to support his disposition against medical treatment, as he had deep misgivings—fear, really—about spending time in hospitals. Nonetheless, the fact remains that AI does exist in our world, and just as it can serve as fuel to those suffering manic psychosis, so too may it affirm or amplify our mistaken understanding of what’s happening to us physically and medically. (OpenAI claims to be limiting the use of ChatGPT to provide “tailored” medical advice that requires a license, but the head of its medical research team maintains “it will continue to be a great resource to help people understand legal and health information” —sure, if you say so.)
In the course of discussing AI with my dad, I am fairly certain he moved toward becoming more generally skeptical of it. Toward the end of his life, he started sending me articles and YouTube videos about the limitations of these tools. Still, I will forever wonder whether my efforts came too late, and whether he might still be with us if I’d been more effective in undermining the authoritative tone AI strikes when generating its tokens. There’s nothing I can do to change the past, of course. But I can for damn sure keep working to raise the consciousness of others.
A fire has been lit, even while my heart still hurts.

My father created a playlist he labeled “Wake,” which serves both as a testament to his great taste in music and as an elegy for what’s happening in America. You’ll be doing me and his memory a great honor if you give it a listen. He’d like knowing it found an audience.



