Says Who??

Verstehen, through shared perspectives



I AM NOT A ROBOT (not a Luddite, either)

[Image: robot patient]

Last summer my beloved Mercury Milan decided to give me mechanical problems, for the first time in the five years I had owned it. Occasionally it simply refused to start, for no cause that my mechanic or I could discover. After several nerve-wracking months of this (along with the inevitable and infuriating responses from mechanics: “It starts just fine for me”), I was ready to drive it into the Ohio River. It probably would not have started so I could get it there, though.

I finally convinced a mechanic at the dealership to put the car on the computer for diagnosis. As both a former nurse and present patient, I liked that word diagnosis, and had no qualms about using it for the vehicle I had anthropomorphized by naming it Mahitabel, projecting both positive and negative emotions and reactions onto its “behavior,” and more recently developing a love/hate relationship with it. The diagnosis, according to the computer, was that on several occasions in the past few months someone had tried to start my car with a key that did not belong to it. Therefore, it did not start.

It took about ten more minutes of the mechanic proposing that someone might be trying to steal my car, and of me insisting that this made no sense at all, before he looked carefully at my car key. It was bent, and one tiny spot may have been chipped. He made me a new key, and my buddy Mahitabel and I have traveled together predictably and smoothly ever since.

My point? The computer (a machine) understood more about my car (a machine) than either the mechanic or the owner did. Yet both the mechanic and the owner had to engage in some research and analysis on the human level before the “diagnosis” could be corrected and treatment applied. The computer supplied data based on its programmed knowledge of the vehicle; the humans provided the ability to apply both inductive and deductive reasoning to a real-life, present-world situation in order to ascertain the actual problem.

This brings me at last to the reason I am writing this post. Two years ago, I posted “The Healers” (https://www.maryleejames.com/2014/08/08/the-healers), in which I compared the observations and insights of an African traditional healer with the best of today’s physicians, noting that in each case the healer was most effective when working as a caring and observant human healer to a human patient. I concluded that computers could not take the place of any physician true to his or her calling.

Two years later, I have more reason than ever to challenge the efficacy of computers in the exam room of a physician’s office. In fact, I would go so far as to say that, along with insurance company rules and overreaching legislation intended to make physicians toe the (sometimes contradictory) lines drawn by groups of people who lack the training and calling of the physician, the present demands of doctoring by computer program may be the last straw that finally destroys medicine as we know it.

As the title of this post insists, I am not a Luddite. I love technology, especially when it works. I love the capabilities of the internet, and the ability to keep up with friends and relatives both far and near. I enjoy being able to get online on a busy day, save myself hours of shopping, and have the desired object delivered to my door within 24 hours. I love needing an answer quickly and finding it; needing an outline of resources for research and locating them with ease. But it is also these answers and resources that become the problem. I have to exercise my ability to discern the junk from the credible, the scams from the honest reports, because all this wonderful piece of machinery can provide me with is the data that has been entered, just like my experience with the computer at the car dealership. It can’t make human judgments for me. Without my education and my experience, the overwhelming amount of unquestioned data could get me into a lot of trouble.

Therefore, I am concerned about the time my physicians must spend entering data about me into a limited machine. I am a sociologist, after all, and acutely aware that whenever humans are the subject of analysis, the results are immediately complicated by a lack of predictability and of psychological understanding; accuracy is further complicated by the uniqueness of every human being and his or her response to a given situation, physical or otherwise. And no patient’s situation lies completely within the realm of any one discipline. We are affected by more than our pain: we are emotionally affected by its consequences, or by outside considerations that have nothing to do with the pain but that affect our lives. We are affected intellectually by our understanding of what the pain means, and what it may mean for ourselves and those closest to us; this translates again into emotional effects, which may or may not complicate the pain itself and therefore any understanding of the real diagnosis and best treatment.

That is only the tip of the iceberg. It is dehumanizing to both doctor and patient to reduce medical practice to the inadequate data that can be acquired from, or placed into, a machine. It is dehumanizing to try to explain your most frightening and intimate problems to someone who may never look you in the eye, or ask a question not required by the computer program, especially when that computer operator is frustrated because he or she is not familiar with the program, or because it is not working properly. How do you know whether the diagnosis or treatment is going to be safe and effective under those circumstances?

Worse, how do you trust that the information entered into that computer is correct? I can’t tell you how often I have read reports of my office visits only to wonder whose record has been confused with mine. I have read “patient states” followed by something I not only did not state, but that wasn’t true. I have read findings of physical examinations that never took place, findings that also failed to accurately reflect my physical condition at the time. Yet in years past, even after computers were commonplace, when doctors simply dictated their reports of office visits, the results were informative, correct, and usable. I know, because for years I typed up those dictated reports and saw letters of thanks from recipients such as other physicians, insurance companies, and physical therapists who were able to understand and make use of them. I even learned a lot of medicine from their logical presentation of cause, effect, and treatment.

Even more important, however, is the effect of human touch: the caring hand on a shoulder while explaining a difficult prognosis; the gentle holding of the hand of a terrified patient. The healing effects of caring human touch cannot be measured, and certainly cannot be replaced by a machine of any kind.

I do not propose that we take computers away from medical practice, only that the computers not take the physician away from medical practice. We were intelligent enough to invent computers, and I would hope that we are intelligent enough to discern when their data-gathering and sorting capabilities can be used to best advantage, while the very human, intuitive, and caring abilities of our physicians remain in the human realm where they are most effective. Perhaps then physician suicides might drop from more than 400 each year, and more brilliant young people might consider the medical field desirable.

We need human physicians, because we are not robots.

[Image: robot doctor]