Thursday, February 25, 2016

Cleveland Clinic and Patients First!

You have a week or so to post and reply to comments about Healthcare's Service Fanatics and the newer HBS case, Cleveland Clinic: Growth Strategy 2014 (pdf of the cases). Due by Thursday, Mar. 3. We will discuss this in class.

Synopsis: As the HBR article and the case study describe, the central message from Cleveland Clinic CEO Dr. Delos "Toby" Cosgrove to employees had been Patients First!, which demanded relentless focus on measurable quality.
“This included constant attention to patient safety, respect for the patient’s dignity, excellence in housekeeping services and facilities, and genuine concern for the patient’s emotional wellbeing and care experience.”
Dr. James Merlino became Chief Experience Officer in 2009. Merlino defined the patient experience as "everyone and everything people encountered from the time they decided to go to the Clinic until they were discharged." He worked to make patient experience insights more tangible by asking: "How can processes and metrics drive improvements in the patient experience?" As you read the case, consider the following. (You can write your response as a paragraph and include any issues that stood out for you as you read the case. These questions mostly serve as prompts for your thinking.)
  1. What is the Cleveland Clinic’s overall strategy for improving value for patients?
  2. Are there examples of what the Cleveland Clinic is doing well, or areas where it may still need improvement?
  3. Considering other efforts linked in the class schedule, or examples of your own, do you feel a "patients first" transformation is under way, or are you skeptical? Explain.

Thursday, February 18, 2016

Health IT Usage Behavior and Patient Safety

I recently read a report that offered a theoretical model of health IT (HIT) usage behavior and its implications for patient safety. The authors chose to focus on theories that could explain clinician HIT usage behavior because of the widely observed underuse and misuse of HIT and the associated patient safety consequences. They argue that poor system design, through its effect on behavior, is the root of the problem. That is, poorly designed systems facilitate or even encourage behaviors that may be contrary to expectations, policies, or goals, and the models presented make clear that the exogenous variables are system design factors.

Perhaps the most obvious case of a HIT whose efficacy suffers from underuse is that of medical error/incident reporting systems: up to 96% of medical errors are estimated to go unreported. Briefly, reporting systems are paper-based or electronic systems used by health care providers to report in some detail the occurrence of safety-related events. These events, depending on the system, may be instances of patient harm, near misses, preventable errors that lead to harm, detected hazards that may lead to future harm, or combinations of these. Although reporting can serve various purposes, the two main ones are learning and system improvement. As an example, consider a health care organization that fosters a culture of blame and shame: not only are clinicians' needs unmet, but reporting may actually threaten vital needs. This is illustrated in Figure 1 below, where primary needs on the hierarchy are jeopardized when one reports in a blame culture. Figure 1 thus provides a motivational explanation for why many studies find that fear of punitive consequences deters many clinicians from reporting, whereas ethical obligations, small rewards, and a positive reporting culture tend to be motivators.
Figure 1. Needs met and jeopardized by the reporting of medical errors in a blame culture 

Here is a video that explains what happens when cultures move from blame to identifying root causes of the problem.

When the Hospital Fires the Bullet

Here is an amazing account by Elisabeth Rosenthal, covered last week on the radio show This American Life. Below is a synopsis, with links to the article and radio show:
A student I know at the Icahn School of Medicine got in touch to say that a young man with bipolar disorder had been shot in a Houston hospital room by off-duty police officers moonlighting on the security staff. I was skeptical.
I’m a full-time journalist now, but I am also an M.D. Twenty years ago, as a physician, I worked in a very busy New York City emergency room. I’d treated patients with mental illness, patients who were high on drugs or delusional from illness, but I’d never seen weapons of any kind. Tasers and guns in a hospital room?!
...But the tip about the patient shot in Houston was not just hearsay or rumor: The student who tipped me off was a classmate of Christian Pean, the older brother of the young man who'd been shot in Texas with both a gun and a Taser. I got some contacts for the Pean family and began a six-month reporting odyssey. That resulted in a New York Times article about Alan's shooting and a companion radio piece and podcast produced by "This American Life."

Thursday, February 4, 2016

Can a Computer Replace Your Doctor?

Link: http://www.nytimes.com/2014/09/21/sunday-review/high-tech-health-care-useful-to-a-point.html

As the title suggests, this article asks whether a computer can, or ever will, replace a doctor. Have artificial intelligence research, and computer science algorithms in general, developed to the point that a doctor with a minimum of 8-10 years of medical education could be replaced? The article examines this question from two perspectives: Silicon Valley's and a doctor's.

Silicon Valley has transformed many areas of life and is now turning to medicine, to the point where an AI algorithm could potentially replace your doctor. Leading figures in Silicon Valley say they would trust, and would rather use, a computer algorithm or robot over a doctor. Healthcare products are being introduced at such a fast pace that people might even believe this is true. From the technologists' perspective the field seems headed in the right direction, and since they also keep an eye on revenue, healthcare devices appear to offer a promising revenue model.

However, when this question is addressed from the doctor's point of view, and Elisabeth Rosenthal's, you see a bigger and deeper picture. A computer algorithm or an AI robot is merely a tool that can guide you toward a diagnosis, a bridge between a patient and his symptoms. The data produced by health-tracking devices is so vast that it is hard for a machine to make sense of it; only a doctor's presence can make "sense" of it for a corresponding treatment. On the other hand, if the doctor's role were handed to a machine, it might interpret readings as "normal" when the patient is actually suffering from a condition. The article particularly emphasizes examples such as a patient suffering from arrhythmias who could show a normal heartbeat at the time of measurement, or a normal testosterone level being treated as a condition like "Low T."

In conclusion, although these technologies have helped many people better track health ailments, especially blood glucose levels, and gain broader knowledge about them, they still do not replace a doctor; rather, they are tools that assist both you and your doctor. An analogy from the article that I would like to reiterate: Apple Maps had severe hidden bugs and led people "nowhere," clearly hinting it was a tool for them to use and not to trust blindly. If only they had asked someone!