Editorial Open Access
Volume 6 | Issue 7

Artificial intelligence and the depersonalization of medicine

  • 1Oregon Health & Science University, 3455 SW US Veterans Hospital Rd, Rm 517, Portland, OR 97239, USA

*Corresponding Author

Barry Swerdlow, swerdlow@ohsu.edu

Received Date: February 25, 2026

Accepted Date: February 27, 2026

Editorial

I have written only one other editorial that was not purely scientific. It followed shortly after my fortieth medical school reunion and lamented a past time when the “primary, secondary, tertiary, and quaternary” [1] goals of medicine were patient care, and physicians expected (and therefore made time) to talk with patients and examine them in a manner commensurate with those goals. The editorial ended on a relatively positive note. It recalled King Arthur finding significance in long-lost Camelot where, as myth has it, there was relative equality among men in the form of a round table with no king seated at its head.

In a sense, the current editorial picks up where the last one left off and includes reflections that have become more powerful in recent years. In that time, artificial intelligence (AI) has taken a front seat in the transformation of healthcare delivery. AI is currently involved in nearly all aspects of that activity, including the diagnosis, treatment, research, and design of processes that intimately affect patient well-being. AI systems analyze medical data and assist in interpreting X-rays, CT scans, and MRIs. AI-powered software offers rapid diagnoses with reduced error and options for relatively early intervention in such conditions as diabetic retinopathy and heart disease. Furthermore, AI analyzes data associated with genetics, medical history, and lifestyle to design treatments (e.g., in oncology) with reduced adverse effects. AI may enhance surgical precision via robotic systems that daily enable minimally invasive surgery. AI accelerates pharmaceutical research via the analysis of large data sets, and companies like DeepMind have developed protein-structure prediction tools that aid in the study of diseases at the molecular level.

AI, in fact, serves an increasingly irreplaceable role in nearly all aspects of healthcare delivery today. It also serves a predictive function and assists hospital management in optimizing staffing and resource allocation. AI-powered chatbots and virtual assistants are ubiquitous and are touted to increase the accessibility of healthcare and to reduce the burden on its providers. AI automates many routine tasks, including the coding, billing, and documentation involved in the business of healthcare administration.

Unfortunately, “there is no free lunch,” and this transformative role of AI has come with a price. In my opinion, this price is very high and involves the depersonalization of medicine. No longer does the initial step in the diagnosis of illness commonly consist of taking a detailed history and performing a physical exam. Instead, these relatively simple processes are curtailed, often severely, in favor of data collection via a more sophisticated scan or another AI-assisted process. Depersonalization involves the substitution of AI for a person at many levels of care, and while such a role for AI often offers advantages, it also serves to remove humans from an important role in the ministration of healing to their fellow men, women, and children.

A simple example comes to mind. My cardiology colleagues have repeatedly asked why I continue to use the stethoscope that they often see hung from my neck. In their opinion, this instrument represents an antiquated relic from the past whose function has been replaced by more scientifically advanced, detailed, and accurate scans. This reasoning is not fallacious, but it ignores an important role for this instrument. By using a stethoscope during my preoperative visits, I am required to touch the patient and therefore make human contact that is otherwise often skipped in following an AI-generated diagnostic algorithm.

The depersonalization of medicine represents a foundational shift in a paradigm which has persisted for centuries. While I do not propose, as some Luddites may, a ban on all AI technology in medicine, I do believe that it behooves us to discuss the pros and cons of its introduction and thereby approach this issue with both eyes open. The alternative is simply to accept an inevitable digital future where decisions regarding our patients’ well-being are reduced to the implications of Bayesian theory as expressed by zeros and ones. And there are at least two major potential categories of problems with such a transformative process. One concerns our appreciation, and perhaps more importantly our patients’ appreciation, of what constitutes healthcare.

This is a fundamental question that has real-life consequences that extend well beyond an emotional level. For example, what does a patient who seeks medical attention because of an illness, whether acute or chronic, envision in terms of care? Does he envision talking to a healthcare provider, being examined by that provider, and then planning a series of diagnostic and therapeutic interventions in conjunction with that provider which are then put into action? Or does he envision a series of tests being performed according to an algorithm that optimizes patient care according to a prescribed set of metrics and provides an efficient, cost-effective outcome? There is more than an aesthetic difference between these choices, which represent the extremes of non-AI healthcare and AI-driven medicine. They represent fundamentally different approaches to healthcare. The former is consistent with medicine as it has been practiced for centuries, and the latter is a description of current trends involving AI.

AI-driven medicine not only potentially depersonalizes healthcare delivery and removes healthcare providers from their monopoly on planning and implementing strategies to improve patient well-being. In so doing, it also changes our collective perception of what constitutes competent and expected medical care. And since many illnesses have no clearly effective treatment, how do AI-driven systems console and comfort patients with such “no solution” problems? When the human element is modified or removed from the therapeutic equation, it may be critical to be certain that uniquely human options are not also eliminated.

In addition, medical education may suffer. One of the fundamental aspects of such education involves teaching future healthcare workers how to think critically. However, when AI software like ChatGPT can offer a complete differential diagnosis in seconds for any problem, how do we teach students of medicine to formulate a broad differential (ideally based on physiology and/or anatomy that is relatively easy to recall during moments of stress) and then to narrow that list to a smaller subset of possibilities given the specifics of a given patient-event? In other words, how do we teach students to think critically? Employing AI-driven software for the diagnosis and treatment of medical conditions may prevent error and promote efficiency. In the process, however, it also may undermine key educational activities that are not only foundational to our current medical system but also necessary to ensure safety and competency in that system.

This is likely not an all-or-none phenomenon. In other words, choices may be individualized. The role of humans in conjunction with AI in medicine often needs to be clearly defined to maintain the benefits of both systems. Ideally, humans should not be excluded from the many processes involved in healthcare delivery, and AI systems should be used to minimize error, promote efficiency, and, in general, to tweak the system in our favor. While these are not mutually exclusive goals, they need to be consciously considered when designing the role of AI in medicine today. Otherwise, in my opinion, future generations unfortunately will pay a significant price for the depersonalization that accompanies the introduction of this technology.

References

1. Swerdlow B. Time Stamp. Harvard Medicine. Autumn 2019:6.
