When I think about the field of computational psychiatry, I imagine a robot sitting cross-legged opposite a highly emotional teen, pen and notebook in hand. The robot tries to console the kid in its mechanical voice while scanning the patient’s facial expressions and body language to diagnose their mental disorder. This picture feels impersonal and distasteful: a trivialization of the fundamentally human doctor-patient relationship that lies at the core of psychiatry. However, I have come to understand that this picture is wrong.
I recently interviewed Beth Semel, a postdoctoral associate at MIT who studies computational psychiatry from a historical and anthropological angle. Semel addresses a major misconception about computational psychiatry: that computation means automating treatment decisions. Rather, the role of AI is not one of replacement but of augmentation in the decision-making process. For example, Ginger.io, an MIT Media Lab spinoff, provides “on-demand access to behavioral health coaching, video therapy, video psychiatry and self-guided content that’s clinically proven to reduce symptoms of stress, anxiety, and depression.” And at Harvard Medical School, Professor Justin P. Baker is exploring whether we can gain a richer understanding of a patient’s psychiatric state by deploying video cameras in psychiatric offices and analyzing patients’ facial expressions over the course of a consultation. In both industry and academia, AI takes an assistive, augmentative role, not one in which a robot replaces the psychiatrist.
Computational tools that play an augmentative role for psychiatrists are much needed in today’s mental health landscape. Many of us reading this likely experience mental health care as an extremely intimate, one-on-one interaction with a professional therapist for an hour a week, with insurance covering a substantial portion of each visit and leaving us very little co-pay. For large swaths of the American population, however, mental health care is simply not accessible. Many people experiencing homelessness, for example, cannot access any psychiatric care or medication at all. Many supporters of computational psychiatry argue that it is this very underfunded and inaccessible American mental healthcare system that makes computational psychiatry not just helpful but necessary. These computational tools have huge potential for democratizing access to mental healthcare if they find their way into urgent care facilities, drop-in clinics, and smartphones.
With that said, there are certainly ethical ramifications to consider when building out these tools. For one, a computational approach to psychiatry requires reducing complex mental health disorders to a standardized and objective set of diagnostic categories. To date, the most comprehensive effort to categorize psychiatric illness has been the Diagnostic and Statistical Manual of Mental Disorders (DSM), a big bible of codes that psychiatrists use to diagnose their patients. In theory, computational tools would aim to sort patients into these DSM codes as well. However, mechanically reducing people to diagnostic buckets comes with a slew of problems. First, every person experiences illness in a different way: people present symptoms and respond to medications uniquely. And while psychiatrists train for years to attune themselves to the variability among patients, computational tools may struggle to capture this level of nuance. Furthermore, diagnostic codes have social and economic ramifications for patients. As Semel notes, “Clinicians see the DSM as medical documents that determine how a patient moves through the healthcare systems and other systems in the US.” While some codes give patients preferential access to beneficial resources and treatments, others may stain a patient’s record in future dealings with the law. Thus, psychiatrists carefully weigh the downstream biological and social impacts of different codes before making a diagnosis, accounting for complexities that computational tools may find difficult to capture.
Data is another major constraint on the implementation of computational psychiatry. Computational tools are only as good as the data they are trained on. Unfortunately, the population that currently receives psychiatric care skews affluent, so the data we have at our disposal for training computational tools is not representative of the entire population. The solution isn’t clear, as data collection is an especially sensitive task for at-risk individuals needing psychiatric care. Big technology companies like Google, Apple, Amazon, and Facebook are best positioned to collect data, as their platforms collectively reach almost every American on a daily basis. However, should big tech companies be able to read private text messages to flag which ones might indicate mental illness? Should they be able to learn from data collected via in-home speakers? What about search queries? Until we can address the ethical ramifications of collecting psychiatrically relevant data at scale, computational tools will face a huge hurdle to becoming truly inclusive.
In an ideal world, Semel says, we would see participatory design: a bottom-up approach in which psychiatrists and engineers come together and work with patients to design computational tools well suited to their needs. This would be the first step toward repairing two major disconnects. The first exists between the engineers who collect data and the “psychiatrists [who] know what it’s like when the data is human, and who can bring humility and more advocacy for the patients.” The second exists between psychiatrists and overlooked patient groups, like minorities and less affluent individuals, who are not the typical demographic to receive mental health care. Computational psychiatry holds great promise, but until we can more intelligently address issues around its ethical and inclusive deployment, we should be very wary of launching it at scale.