Dr. Eric Topol on How Artificial Intelligence Can Improve the Doctor-Patient Relationship

Eric Topol
I’m Alan Alda, and this is Clear+Vivid, conversations about connecting and communicating.
Eric: The more deeply we understand each person as an individual, the better we can deliver the right medicine. So let’s say biology, like the genome; their anatomy through scans; their physiology through sensors; their environment through sensors. So, collecting all that data about a person is going to get us to a point no human could get to. That’s why we need help from machines.
That’s Eric Topol, one of our country’s leading visionaries in the field of medicine. And he offers a fascinating shift in perspective.
I’ve talked with a few people on this show about how digital technology might be threatening our privacy, our ability to relate, in some cases even our health. Doctor Topol comes at it from a completely different direction.
Remember when a visit to the doctor included the time to go over all the factors that might impinge on your health? Well, it’s now recognized that there are many more factors to consider, and our doctor has much less time to consider them.
Surprisingly, Eric Topol comes to the rescue with artificial intelligence.

Alan: 00:00 I am so delighted to be talking with you today because I find you brave, courageous, because here you are, so dedicated to empathic medicine, and yet brave enough to mix in artificial intelligence. It sounds brave to me. It probably doesn’t sound brave to you.
Eric: 00:22 Well, that’s really kind of you, Alan. I think that we’re looking for some help here because we’ve lost so much over the years with respect to the patient-doctor relationship, and so I know it’s ironic. It’s counterintuitive to think that AI could help us, but I really think that’s the ultimate goal. That’s what we should be thinking about.
Alan: 00:45 So, how can it help? They seem so disparate.
Eric: 00:49 Yeah, yeah.
Alan: 00:50 One is data numbers, and the other is looking the person in the eye, listening to their story, applying the medical touch, all the things that we associate with the medical experience in the past. Now, we’re adding number crunching. How does that help?
Eric: 01:09 Well, I guess I’d summarize it in just four words: the gift of time. Basically, what has happened over the years is that doctors became data clerks. Two-thirds of their activity is not patient-related, but typing at keyboards and working on administrative tasks.
Alan: 01:29 That high a figure, two thirds of the time. During that time, they’re not looking at the patient.
Eric: 01:35 No.
Alan: 01:35 They’re looking at the keyboard.
Eric: 01:37 That’s right.
Alan: 01:38 And often, with their back to the patient.
Eric: 01:40 Yep, and that’s a really serious problem because patients are feeling short-changed, and they wait so long to get an appointment. Then the time with the doctor is so limited, and there’s not even eye-to-eye contact. One part of the gift of time is keyboard liberation. Also, each person now has so much data. It’s growing really exponentially. It’s not just what’s in the electronic record or prior paper records, but it could be the sensors they’re wearing. It could be their genome. It could be a lot of other fields of data, and no human being could assimilate all that data. So, that’s where AI could tee it up and really make it much easier and quicker for a doctor to have a handle on what’s going on with a patient, which is often lost because of the lack of time. So-
Alan: 02:29 So, speaking of time… Pardon me for interrupting, but let me just get a handle on this. The time you’re talking about, my impression is that it’s way less than the time doctors traditionally, in the old days, spent with the patient, right?
Eric: 02:44 That’s right.
Alan: 02:45 What’s the difference? How much time do they spend in, say, a routine examination compared to now?
Eric: 02:52 Well, the average is about 10 minutes.
Alan: 02:54 Now?
Eric: 02:56 Yeah. It’s about 7 minutes for a return visit, about 12 minutes in the US for a new patient, which is extraordinary. It used to be, when I was in medical school in the late ’70s, it was an hour with a new patient and at least 30 minutes for a return visit. The amount of squeeze for doctors has been extraordinary, and that accounts in large part for why there’s such severe burnout, depression, and a rate of suicide like we’ve never seen in the medical profession.
Alan: 03:29 I’m always shocked to see statistics on burnout. More than half of physicians experience some kind of burnout, which leads to a sense of inadequacy and sometimes to making mistakes, right?
Eric: 03:44 Well, that’s right, Alan. In fact, it’s been shown that doctors suffering burnout have a doubling of medical errors, and that [crosstalk 00:03:53].
Alan: 03:53 Doubling, oh my God.
Eric: 03:54 Yeah, doubling. That’s such a vicious cycle because then they have an error, and then they’re even more demoralized. So, this is what something… We have to break this vicious cycle, and we have to get time back with patients so that clinicians feel like they’re executing their mission of why they went into medicine in the first place.
Alan: 04:14 In this 10-minute session that they have, is most of that taken up with typing? Do they have even less time, or are they relating during those 10 minutes?
Eric: 04:26 Well, like you said, predominantly, they’re not looking at their patient. They often have their back to the patient, so the lack of a human bond is just so flagrant. Largely, the keyboard is the common enemy of both patients and doctors.
Alan: 04:44 So, how does the introduction of AI help? Because doesn’t the doctor have to enter something? Where’s all the…
Eric: 04:54 Well, yes.
Alan: 04:55 … data going to come from?
Eric: 04:55 Right, but not by pecking on a keyboard. Just the voice. Generating a synthetic note from the conversation of that encounter has already been done in places in the UK. There are clinics now in the US that are piloting this. It’s also being done in China. This is a far better way to get a note that’s meaningful, that captures the true exchange of what happened in that encounter. What’s also sad is that of the notes today that are in electronic records, 80% are cut and pasted from prior notes, so they propagate all kinds of errors. Also, because doctors don’t like to type, they don’t have a lot of the useful information incorporated in the note. So, voice is fast. It’s passive. It doesn’t require any active effort from the doctor. It’s all based on natural language processing and machine learning.
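To make the pipeline concrete, here is a minimal sketch in Python of the voice-to-note idea: transcribe a recorded encounter, then pull a few structured fields from the text. The speech_recognition library calls are real, but the file name, keyword lists, and note format are invented placeholders, far cruder than the medical voice analytics described here.

```python
# A toy sketch of voice-to-note: transcribe a recorded visit, then pull a
# few structured fields into a draft note. The recording name and keyword
# rules are invented placeholders; real clinical systems use dedicated
# medical speech models and far richer NLP.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("visit.wav") as source:  # hypothetical recording of the encounter
    audio = recognizer.record(source)

# Generic consumer speech-to-text, standing in for a medical-grade model.
transcript = recognizer.recognize_google(audio)

# Naive keyword spotting, standing in for medical natural language processing.
note = {"symptoms": [], "medications": []}
for word in transcript.lower().split():
    if word in {"cough", "fever", "pain"}:
        note["symptoms"].append(word)
    elif word in {"ibuprofen", "amoxicillin"}:
        note["medications"].append(word)

print(note)
```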
Alan: 05:51 You’ll have to excuse me for challenging this a little bit because I’m trying to understand it. I’m not really challenging, but I’m just trying to understand it.
Eric: 05:59 Sure.
Alan: 06:00 When I talk to my iPhone or my laptop and I dictate an email, I got to check it really carefully because I’ve often asked people I’m in business with to send me their porridge.
Eric: 06:17 Right. Well, you’re right. What we have today, like Siri and Alexa, is weak compared to what’s been developed for medical voice analytics. For processing voice data, there are now over 20 different companies, some of which are tech titans, Microsoft and Google, but also lots of startups that have been working on this for the past few years. And so now, the level of transcription is as good as that of professional medical transcriptionists, so it’s come a long way. It’s not what you’re seeing when you talk to your iPhone.
Alan: 07:01 That’s a terrific relief to hear that.
MUSIC BRIDGE
Alan: The title of your recent book is Deep Medicine, which seems to contain three main elements. One of them is some of the data we’ve been talking about, but apparently even more extensive data. You seem to expect AI to deliver, or perhaps AI is already delivering, a kind of data collection that includes almost everything you can know about a person.
Eric: 07:37 Yes. The point, I guess, that we have been missing is how the more deeply we understand each person as an individual, as a unique person, the better we can deliver the right medicine, whether it’s prevention, whether it’s a medication. Whatever it is, it’s going to be far more accurate, precise, better for the person, with this so-called deep phenotyping… which is what [crosstalk 00:08:05] gathering all this data.
Alan: 08:05 What’s phenotype mean, again?
Eric: 08:07 Yeah. Phenotyping is characterizing a person at multiple layers. Let’s say biology, like the genome; their anatomy through scans; their physiology through sensors; their environment through sensors. So, collecting all that data about a person is going to get us to a point no human could get to. That’s why we need help from machines, to get to a far better way to deliver medicine, as compared to today’s, which is very shallow, laden with errors, and just not acceptable relative to where data, and processing that information, can take us.
Alan: 08:46 Is there pushback from people concerned with privacy? I mean, we’re already worried about cities covered with cameras, and while that can reduce crime or apprehend criminals, it also knows everything relatively innocent people are doing.
Eric: 09:04 Right.
Alan: 09:04 If somebody hacks into that aspect of the deep knowledge of my whole life, are there safeguards against that? How do you deal with that problem?
Eric: 09:17 The best safeguard would be that you owned all your data. The problem we have today, where we see all the cyber thievery and hacking, is that your data sits in places you don’t control, and it’s a target for cyber thieves and hackers because your medical data is very valuable. So, what we want to do is shift that, whereby it’s all under your control, your ownership, which then doesn’t become a target for thievery. And so, privacy and security are fundamental here. We’re not going to be able to really advance this until we shore that up. It can be done. It has been done in places like Estonia. We just need to do it here.
Alan: 10:07 What’s the next element in deep medicine? Is that diagnosis?
Eric: 10:13 Well, diagnosis is a really big part of this, but besides the gift of time that we’ve talked about on the doctor side, it’s also giving patients much more charge. We’re already seeing, now, the use of AI to make accurate diagnoses, as you’ve touched on, for things like urinary tract infections, ear infections in children, skin rashes, skin cancers, and so that’s all doctorless. That’s all done accurately and validated without the need for a doctor. So, for-
Alan: 10:49 So, how does the information get into the system so the diagnosis can be made? I don’t quite get that.
Eric: 10:54 Yeah. Let’s say you’re in the UK and you think you may have a urinary tract infection. You go to a local drugstore. You get a kit, which is an AI-based kit, test your urine, and it tells you whether you have an infection, and it’s accurate. The point there is, you don’t ever see the doctor. In that country, you get a prescription without a doctor. Here, you do have to contact a doctor, with your data, to get the prescription, but the point being, a lot of routine things that are not serious are going to be offloaded to patients, because we have this ability to process that data with algorithms and decompress the need for doctors.
Alan: 11:38 Yeah, it seems to fit in with a paradigm that we’re already familiar with, pregnancy tests that you can do at home and other tests you can take that don’t require the presence of a doctor. I guess that gives you more time, but what…
Eric: 11:55 One other thing on the patient side, too: besides, as you point out, the home testing, there’s also this virtual medical coach.
Alan: 12:05 Oh, tell me about that. What’s…
Eric: 12:07 Yeah. Already, we’re seeing it in certain conditions, like diabetes, but here you have this avatar on your smartphone or your smart speaker, and it’s taking in all your data, all this data that no human could possibly process, and it’s-
Alan: 12:26 You mean, like, the way you walk, and your heartbeat, and things like that?
Eric: 12:29 Yes, yes, and then all your prior history, your whole genome, your microbiome, and the works. Constantly, along with the medical literature that’s getting updated all the time, it’s going to give you feedback in real time about you. So-
Alan: 12:50 It collects all this information on me, and then it matches it against data that’s been collected from millions of other people?
Eric: 13:00 Right, right, exactly.
Alan: 13:03 So in the process, I suppose it’s making matches based on probabilities of correct matches.
Eric: 13:10 Yes, you’re right on. In fact, at some point, we could learn from each other through another type of AI. It’s called nearest neighbor analysis, matching you up with your closest digital twin. Let’s say you had a diagnosis of cancer; now we find these closest twins around the world that have your genome and your everything, and see which treatment and outcome worked best for someone just like you. These are some of the things that are in the works right now, the virtual coach, the digital twinning. These are things that are really exciting, to improve care and fulfill the dream of prevention, which we’ve never really gotten to before.
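As a rough illustration of the nearest neighbor analysis Dr. Topol describes, here is a minimal sketch in Python using scikit-learn. The patient features and numbers are hypothetical placeholders; a real digital-twin system would match de-identified profiles built from genomes, scans, sensors, and outcomes at a vastly larger scale.

```python
# A minimal sketch of "digital twin" matching via nearest neighbor analysis.
# Every feature and value below is a hypothetical placeholder.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row is one de-identified patient profile, already standardized:
# e.g. age, a genomic risk score, BMI, and one lab value.
patients = np.array([
    [0.2, 1.1, -0.3, 0.8],
    [0.3, 1.0, -0.2, 0.7],
    [-1.5, 0.1, 2.0, -0.9],
    [0.1, 0.9, -0.4, 0.9],
])

model = NearestNeighbors(n_neighbors=2).fit(patients)

# A new patient: find their closest "twins" in the database, whose
# treatments and outcomes could then inform a recommendation.
new_patient = np.array([[0.25, 1.05, -0.25, 0.75]])
distances, indices = model.kneighbors(new_patient)
print(indices)    # rows of the closest matching profiles
print(distances)  # how close each match is
```

The design point is simply that a “closest twin” is a distance computation over standardized features; the hard parts in medicine are assembling, standardizing, and de-identifying those features in the first place.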
Alan: 13:54 So, you or somebody might get in touch with your digital twin, the person who matches all your data, and say, “Hate to tell you this, but you probably have this terrible disease.”
Eric: 14:08 Well, you might not have direct contact with your digital twin, but the data, which has been encrypted, anonymized, de-identified, would help serve you through experience, because today, Alan, we largely make our recommendations based on clinical trials of large numbers of people, and we don’t have it at the granular level of a person, of an individual. But where this is all headed, with this ability to analyze this massive multimodal data, is to be able to get to the person level. That, I think, really represents a jump forward. I-
Alan: 14:42 I think… I’m sorry. Go ahead.
Eric: 14:44 Well, we’ll talk about diet, because that’s something where you’ll definitely get a handle on why this is important.
Alan: 14:52 I’m not quite sure I get the digital twin idea yet because it sounds like if you have a digital twin, there’s a temptation to know who that person is, to reach out and help that person, but that means you have to de-anonymize them, and then suddenly, everything can be known about me that might affect my employment, my insurance, and things like that. How do we avoid that problem?
Eric: 15:19 Well, I mean, I didn’t envision that people would be de-anonymized because, basically, it’s their information, their data, their outcomes, their treatments. [crosstalk 00:15:29].
Alan: 15:29 As you said before, yeah.
Eric: 15:31 But it’s possible. You’re bringing up a whole nother wrinkle to this, which is if you want to be de-anonymized to help people, that would be potentially an option, but I think most people wouldn’t want to go there.
Alan: 15:43 Yeah. Have we covered the first two elements?
Eric: 15:49 Well, I think the idea people will identify with, about how AI can make a difference, is cracking this diet story, which has been out there for, seemingly, forever: that we should all eat this particular diet, this one or that one. That was all wrong, because we’re so unique. It was actually a group in Israel at the Weizmann Institute, a few years ago, that cracked the case, by gathering extensive data from, now, thousands of people: not just everything they ate and drank, but their sleep, their physical activity, the microbiome of their gut, all their labs, and on and on.
Eric: 16:33 Basically, with a wearable glucose sensor, they found that each person has a very unique response to food. If you and I, Alan, eat the exact same food, the exact same amount, our glucose responses would be completely different and unique to us. Now, this week, we’re learning that’s the same with triglycerides and insulin response. The point being, what is the right diet for us? It’s only because of AI, which is like a virtual coach, that we’ll be able to get that information about the best foods that you should eat or I should eat. That’s just another way to use food as medicine in the future. We’re not there yet, but we’re making some significant strides because of this type of analytics.
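To make the analytics side concrete, here is a hedged sketch in Python of the kind of model such a study might use: a gradient-boosted regressor predicting a personal post-meal glucose response from per-person features. The features, data, and target are synthetic placeholders, not the Weizmann group’s actual inputs or published model.

```python
# A hedged sketch of predicting a personal post-meal glucose response.
# Features, data, and target are synthetic placeholders, loosely in the
# spirit of the Weizmann approach (gradient boosting over personal data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Columns (invented): carbs in the meal, hours of sleep, activity level,
# and one summarized gut-microbiome feature, per person, standardized.
X = rng.normal(size=(200, 4))
# Synthetic glucose rise: the same meal interacts with personal features,
# so two people eating identically respond differently.
y = 30 + 8 * X[:, 0] + 5 * X[:, 0] * X[:, 3] + rng.normal(scale=3.0, size=200)

model = GradientBoostingRegressor().fit(X, y)

# Predict one person's response to one meal.
print(model.predict(X[:1]))
```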
Alan: 17:23 That really sounds exciting because when you consider food as a medicine, it’s a medicine all of us take at least three times a day.
Eric: 17:33 Right, right, exactly.
Alan: 17:34 It would be good to know we’re taking the right medicine.
Eric: 17:37 Yeah. I couldn’t agree with you more. We got to get away from prescriptions and get a lot more use of the things that we normally do each day, like eating.
MUSIC BRIDGE

Alan: 17:47 Like eating, right, amazing. So, that leaves precious time for empathic medicine and a relationship with the patient, which is not provided now because of the time taken typing and recording data in various ways. I think you’ve even quoted a couple of famous doctors who feel that empathy, making contact with the patient, caring for the patient, showing you care for the patient, is an important part of the medical experience. Tell me how you perceive empathic medicine. What’s going to happen when doctors have more time to be empathic during a visit?
Eric: 18:55 Well, first, we got to get that more time. We’ll talk about that because that also could go the wrong way, as it has happened in the past, but let’s say we have considerably more time because now a lot of things are offloaded to machines or back to patients having more autonomy. This time is the first step, which is you actually are now connecting, the laying of hands, the presence. People know when you’re really cued into them, as compared to today when people start to talk and you’re interrupted within seconds because a doctor feels rushed. We’ll never be able to digitize a patient’s life story, and that means listening and not even just to the words, but to the look, the presence of the patient and the doctor, to get back the trust, the sense of true care in healthcare, which has largely been lost, and so we’re kind of in a desperate shape. You and I are old enough to remember what medicine was like decades ago when it was a precious relationship. There was so much trust that you really relied on this doctor that was looking after you.
Alan: 20:13 Yes.
Eric: 20:13 That’s the exception today, rather than the rule.
Alan: 20:17 If you don’t-
Eric: 20:17 We can get back there.
Alan: 20:18 If you don’t have that… Excuse me. If you don’t have that trust, you’re less likely, I think, to follow the doctor’s orders, to take the medication, to take it when you’re supposed to, even to remember what the doctor told you.
Eric: 20:36 Yeah. In so many ways, you get that. I started the book with my knee replacement, which was a fiasco, not the operation itself, but when I went back to the orthopedist a month later with a purple knee, so swollen, and I was in really desperate shape, he said I needed anti-depression medicines. It was just-
Alan: 21:00 The knee doctor told you that?
Eric: 21:01 Yeah, yeah. My crying spells were not from that. They were from unmitigated pain, and the whole scene was… It was a disaster for me, but that was a robotic response from a very capable surgeon. When I finally wound up getting a physical therapist, she individualized my therapy to what I needed for my knee, because I had a congenital condition that put me in kind of a special place for what should be done. But the point is the difference. She never said I needed depression medications. She actually cared about me. She would send me a text every day or two saying, “How is our knee?” That’s care. That’s the kind of relationship we crave. The fact that I got better after all that, in part, wasn’t just her physical therapy, but knowing that somebody cared for me, and that’s what we need from our doctor, and we can get there.
Alan: 22:02 You had a partner. You weren’t being delivered. It wasn’t a delivery of medicine. You were together with somebody you trusted. I can’t imagine that that doesn’t have an important effect on the patient.
Eric: 22:15 You said that so well: a partner, dependence on someone that you know really cares about you. That’s what the essence of medicine is. I know it’s a paradox, whether AI could help us get there, but I don’t know any other way that we can get to where we used to be.
Alan: 22:38 Do you think you can just ask doctors to be more empathic? Can they achieve more empathy, especially with the stress they’re under? Because a rise in stress, and a rise in, say, cortisol or stress hormones like that, can have an adverse effect on empathy, I’ve been told. You can…
Eric: 23:02 Right.
Alan: 23:02 … you can lose your level of empathy…
Eric: 23:05 Sure.
Alan: 23:05 … and just try to brusk your way through it.
Eric: 23:08 Right.
Alan: 23:09 Or, if you’re too empathically deep in somebody else’s suffering, you can bring about your own burnout more easily. So, managing your empathy, increasing it, managing it when you are in the presence of somebody who is suffering, those things, it seems to me, need to be learned.
Eric: 23:28 Yes.
Alan: 23:28 You can’t just give them a lecture and say, “Now be more empathic.”
Eric: 23:31 No, no. I mean, the first step is, you’ve got to give them time so they can connect. Most people-
Alan: 23:37 Make it possible.
Eric: 23:39 Yeah. Most people went into medicine because they really do care, and this is what they wanted to do, and so that’s the first step, but you’re right. You can’t just tell them, “You need to do this.” It has to be cultivated. It has to be nurtured, in medical schools, of course, including selecting the people who are most likely to be empathetic and caring and compassionate people. But the other part of this is, what got us into this trouble is that doctors don’t run the show in their clinics, and hospitals, and health systems. It’s run almost entirely by administrators, managers. That’s what-
Alan: 24:16 Are they, themselves, physicians?
Eric: 24:18 No, no. Very rarely are they physicians. They are bean counters. They are squeezing the medical profession to see more patients, read more scans, read more slides, whatever, do more. That’s what got us into this mess with electronic health records that were for billing purposes only, that were not there to help the doctor or patient, and look what’s happened with them. Then relative value units and health maintenance organizations, all these things were done to doctors, and of course, indirectly to patients, because of this structure where doctors have these overlords. The problem we have now… If they could use AI to actually squeeze doctors more…
Alan: 25:00 That’s exactly what I wanted to ask you.
Eric: 25:01 … that’s my biggest worry. Yeah.
Alan: 25:03 The gift of precious time is a gift only if you get to use it as time, but if your overlord is operating on the profit motive only and sees you have extra time, how is the doctor going to use that extra time for more empathic relating if the boss is saying, “You’re wasting time. I want you to see twice as many patients.”
Eric: 25:32 Yep, that’s the problem, and that’s where this could go. That’s the realization I made when I was working on this: we can’t let this happen. We can’t just roll over yet again. What we’ve seen in recent times, as you know, Alan, is that doctors are starting to show their ability to organize. For example, when the NRA came out after doctors, saying, “Stay in your lane,” about taking on the gun policies in this country, all of a sudden, the doctors came together. “This is our lane,” and that type of [crosstalk 00:26:06].
Alan: 26:07 [crosstalk 00:26:07] health issue.
Eric: 26:07 Yeah, yeah. And so, we started to see the beginning of doctors organizing. The hope is that we can organize around this, because otherwise, as you’ve pointed out, this is going to go the wrong way. If it’s possible to make things worse, then we’ll see it.
But in this new world of AI, making sure doctors have enough time with their patients is only part of the problem. When we come back from our break, Dr. Topol tells me about some of the serious dangers of artificial intelligence itself. Be right back.
MIDROLL
This is Clear + Vivid, and now back to my conversation with Eric Topol.
Alan: 26:28 What can we do? I’m anxious. This is such important stuff you’re talking about. This is such an important change in our lives that will come about if what you’re proposing or predicting happens: the use of AI to benefit our health, to improve our health outcomes. I always worry about AI. I’m in awe of what AI can do.
Eric: 27:02 Well, you’re right, because [crosstalk 00:27:04] I think a lot of people don’t realize that machines can be trained to see things that humans can’t see and will never see. I’ll give you an example of that: retina pictures. If you give retina pictures to international experts on the retina and you say, “Is this a retina picture from a man or a woman?” The chance of them getting it right? 50%.
Alan: 27:32 50/50. Yeah, right.
Eric: 27:32 But, a machine algorithm is over 97% accurate.
Alan: 27:37 Wow.
Eric: 27:37 Now, you and I realize there are other ways to tell whether it’s a male or female, but this illustrates the point, is that you can give hundreds of thousands, millions of images or voice, or soon, text, and you can train machines to do things. For example, a radiologist, when they look at scans, they miss over 30% of things, but that number could be brought down to low single digits, maybe never to zero, but maybe close because of training machine algorithms with deep learning. That just gives you a sense of the power of AI.
Alan: 28:16 But you have, in the book, an example, and I think it really shows your impartial, objective, scientific approach to this, an example of AI not working so well.
Eric: 28:32 Oh, there’s so many examples like that, but the one that I immediately think about is my father-in-law, when he was hospitalized. Basically, they were letting him go. They told my wife and me, and the family, that he was a dead man, essentially, and that we needed to discontinue all the life support. We were going to take him home, to hospice in our home, to die. The morning he was getting ready to be transferred, literally being put on a gurney to be taken out of the hospital, he woke up and started talking and calling for my wife. It was amazing. I mean, she was in the hall, and here was the person who was never going to come out of a coma. The algorithm there predicted with 100% certainty that he was going to die, and obviously, it was like a Lazarus thing. This is the point: AI is not so great at predicting things. It’s really good at reading images. It can be trained, as we talked about, for speech and voice, but for predicting outcomes, it’s okay. It has a long way to go.
Alan: 29:50 That’s a startling story because he could have… In essence, he was almost being left to die, if there was anything more that could’ve been done to get him back on his feet. You had been officially told, “Hope is lost.”
Eric: 30:09 Yes.
Alan: 30:10 The algorithm told you that.
Eric: 30:12 Yes.
Alan: 30:14 That’s sort of at the heart of what worries me about these converging lines that seem to be leading us toward algorithms running our lives more than may be prudent. For instance, here’s my basic worry. Maybe you can help me out with this. The machine is working like an artificial neural network, putting data together and seeing when something has a chance, or what probability, of being a good match and leading to some useful further matching. I’m saying this in a sloppy way, but the idea is that the machine takes so many twists and turns as it puts together data, and it never tells us how it arrived at the final conclusion.
Eric: 31:05 Right.
Alan: 31:08 From what I hear from people, it’s not easy, or perhaps not even possible, to get it to give us an update every time it makes something like a decision to go on to the next branch of the branching tree. So, we don’t know how it got there, and yet it could be making life and death decisions for us. If it’s running a car or something like that, it could be a life and death decision. If it’s saying we need a serious operation, that could also turn out to be a life and death decision, like what your father-in-law went through.
Eric: 31:46 Yeah, no. I’m so glad you brought this up. This is not just the idea of the black box, or lack of explainability, but what you centered on is why we will always need doctors to provide oversight of any important matter. So, if there’s anything like a diagnosis of cancer, or a need for an operation, or you name it, we want a doctor to review the output from any algorithm or neural network, because it could be wrong. It could be subject to malware. It’s often, right now at least, lacking explainability. We don’t even know, in that prior example, how a machine can determine the gender of a person from their retina picture. We have no idea why.
Alan: 32:34 Oh, that’s a perfect example, yeah.
Eric: 32:36 [crosstalk 00:32:36].
Alan: 32:37 You made me imagine a scene in a movie, and young Dr. Kildare says, “I don’t care what the machine says. This person is not going to die.” “Oh, but Doctor, you’re just operating on your gut instinct.” “I’m telling you, this patient is not going to die.” “Well, the machine is always right. How often are you right?”
Eric: 32:59 It’s so right. It’s so [inaudible 00:33:00].
Alan: 33:00 [crosstalk 00:33:00].
Eric: 33:00 Yeah, yeah.
Alan: 33:00 There might have been a lot of scenes like that, but…
Eric: 33:07 I would trust the human oversight with the added support of the machine. It’s going to be objective data. It’s going to see things we can’t see. Yes, there’s work being done in AI to deconstruct these algorithms, these neural nets, but the point is that it’s giving a whole other level of support for doctors, to process immense data that they never could. They can’t see these things, because no doctor is going to see a million retina images and be able to pick out the granular features that the machine can see. So the answer here is the symbiosis, or the synergy, of having these neural networks plus a doctor. That’s why it’s so silly for people to be talking about, “Oh, we’re going to get rid of this type of doctor,” or of all doctors. I mean, that’s just nonsense. So, we want to have both.
Alan: 33:58 You mentioned that some aspects of what we’re talking about have been done successfully in other places. I heard you had been working in the UK with the health system there. How far have you gone with that?
Eric: 34:16 Well, the interesting thing is, I got commissioned by the government to review the NHS, to lead a big group there, and I spent about a year and a half on that, where we were trying to figure out, where are AI and digital genomics going? You may know, the UK is the genomics capital of the world. They’ve done the most work in that space.
Alan: 34:37 No, I didn’t.
Eric: 34:38 Yeah, it’s amazing. They’re so far ahead of where we’ve been in the US in assembling biobanks, and data and information that’s having an impact every day, but the NHS is a lot more progressive than I realized because they… For example, in emergency rooms in the UK, they’ve already gotten rid of keyboards. They’re using voice to make the notes, which in an emergency room, I mean, that’s kind of like MASH, right? It’s like, wow. How can they do that?
Alan: 35:13 Now when they do that… Let me interrupt for a second because I want to get a picture of it that I can relate to. Does the doctor tend to think he’s on the air because he knows the microphone is recording it, or is it an actual real-time personal conversation that’s being picked up? To what extent does it become artificial?
Eric: 35:35 Yeah, great point. In the beginning, it is like you know you’re on a mic. You know that it’s being recorded, but then as you get used to that, after a number of encounters, it just becomes natural. You don’t even think about it. You don’t become guarded about what you’re going to say.
Alan: 35:51 How about the patient? Is the patient aware that it’s being recorded? That seems like a privacy question.
Eric: 35:58 Yes, yes. The patient is aware. What is really great is the patient gets a copy of their note with all the information in there, rather than just very small pieces of it.
Alan: 36:10 You mentioned something in the book that really caught my eye, that some studies have shown that people seem to be more willing to reveal personal information to an avatar than to a person, to a living person. Is that [crosstalk 00:36:26]?
Eric: 36:25 That’s so striking. I never would’ve expected that, but now there are multiple studies that confirm or replicate it… It’s fascinating. For mental health support, the idea is that people will trust their innermost secrets far more with an avatar than with another human being.
Alan: 36:46 By far.
Eric: 36:48 Yeah.
Alan: 36:48 Wow.
Eric: 36:49 Who would ever have guessed that?
Alan: 36:52 Okay, so I think I interrupted you. I’m so interested in talking to you, I can’t help but interrupt you. I’m sorry.
Eric: 37:00 No, it’s good. It’s good. Well, going back to the patient-doctor conversation, the idea is that there’s a lot going on there, and most patients forget stuff by the time they get home, or if they’ve been in the emergency room, they went there because they were acutely ill. This gives a much more comprehensive, complete documentation of what happened there, so they can go back to it. The note is a far better one, and it’s automated. It doesn’t require any pecking, data-clerk function, and the emergency room, that’s kind of an important encounter, because you went there because you’re sick. Showing that it works there is really, I think, a nice foray into where this is headed. In the next few years, we’ll see much, much less keyboard use when we go for a doctor visit.
Alan: 38:00 So, there, is it working to improve the doctor-patient relationship? And anywhere in the world, including in our country, have you seen any improvement in the doctor-patient relationship with the introduction of AI?
Eric: 38:15 Well, yeah. I think that, for sure, the places that are adopting the keyboard liberation, as I mentioned, emergency rooms in the UK, several places in China, have claimed that this is really popular. The experience here is somewhat limited. There’s work being done at Stanford, at a bunch of centers, and by a bunch of startup companies in select clinics, but there hasn’t been much published yet. We’re still waiting for the US to kind of show up in this space.
Alan: 38:48 I’m wondering if there’s any reason to introduce the data scientists who are working on these AI projects, if there’s any reason to introduce them, to real physicians. I realize they’re not directing the way the machine learns. They’re just having the machine compare data and learn on its own, but would they benefit from having some translational experiences with the doctors?
Eric: 39:19 Oh, you’re bringing up an ideal combo. The more we can get data scientists, computer scientists, to work with doctors, the better, because each of these disciplines tends to work in isolation. A classic thing would be: oh, we’ve got this great algorithm, it’s got an accuracy of 99%. The doctor says, “Well, that’s nice, but does it help patients?” We’ve got to get these groups working together so they don’t [inaudible 00:39:52] around and they work on algorithms that are going to help people.
Alan: 39:54 Right. It doesn’t help to know that 90% of people don’t like the robes they’re wearing in the hospital.
Eric: 40:03 Right, right. The relevance and the-
Alan: 40:05 Well, maybe it does help, actually. I’d like to see data on how many people feel humiliated by having to put on those little, skimpy gowns.
Eric: 40:12 Yeah, that would be most people, for sure.
Alan: 40:14 It certainly was for me.
Eric: 40:17 Right.
Alan: 40:18 They open in the back where you can’t do anything about it.
Eric: 40:20 I know. Isn’t it amazing [crosstalk 00:40:22].
Alan: 40:22 How did that happen? What kind of empathic medicine is that?
Eric: 40:26 Well, that kind of tells a story in itself, doesn’t it?
Alan: 40:29 It seems that way to me. It’s one of the first signs that I’m a piece of meat when I’m [crosstalk 00:40:36].
Eric: 40:35 Yeah. Well, what you’re bringing up there is that patients have been suppressed since the beginning of the profession, for millennia, because of paternalistic doctors, and patients didn’t object to these gowns and a lot of other things. Doctors kept the data to themselves. They didn’t give patients their notes. The majority still don’t in the US, and so we’ve got to get over this paternalism, and we’ve got to start thinking much more from the patient perspective, and gowns would be a really good start.
Alan: 41:08 The idea of referring to the patient as a person and not as a room number, or an organ, or a disease seems to be a really good first step.
Eric: 41:21 Right. That goes along the continuum of empathy. If you don’t see the patient as an equal, if you don’t listen, if you don’t really get a bond established, it’s not going to go very far.
Alan: 41:39 It reminds me that I feel reassured that some of the lines of study that I’ve pursued in trying to… It reassures me to know that I’m sort of on the right track, following your example, in how I try to get doctors to be more empathic. One of the ways I think I know that is that I gave a talk to doctors once, because I am an expert in one field of medicine, which is being a patient. So, I speak from the point of view of my expertise as a patient. One talk I entitled The Patient Will See You Now. I realized later, that’s the title of one of your books.
Eric: 42:29 Right. Oh, wow.
Alan: 42:30 I don’t know which one of us should sue the other.
Eric: 42:34 Well, but that also speaks to another point, why you are a master of communication. You teach about communication and science, and that’s something that we could all learn from. Doctors tend to speak in their kind of inside-baseball jargon, and they don’t communicate enough with the public. Researchers and scientists don’t do a good enough job of that. And so, we really do need to learn about this whole aspect of communicating on an equal footing. And so, the patient will see you now, that’s a flip, of course, of some of the most famous words in all of medicine, “The doctor will see you now,” and-
Alan: 43:15 Which is really after that three-hour wait.
Eric: 43:18 Yeah, yeah. You wait three weeks to get the appointment. Then you’re an hour in the waiting room. Then the receptionist says, “The doctor will see you now.” It’s like, oh my gosh, help me, help me.
Alan: 43:30 My take on that when I gave the talk was to flip it. It must be your take on it, too, to flip it, that the patient is the one with the information you need to get.
Eric: 43:41 Yeah. That flip, it’s still in the early days; we’re so entrenched in the doctor knows best and in control, but we’ve got to cede control. We have to communicate better. The thing that’s really tipped us, Alan, is the fact that when all the data went digital, it became eminently portable, and so it’s very hard to suppress and withhold information from people now, because they have a right to it. It’s their data. It’s their body. They’re the ones that have the most critical interest in the data, or should.
Alan: 44:20 It’s so easy to download.
Eric: 44:22 Right, right.
Alan: 44:25 [crosstalk 00:44:25].
Eric: 44:25 Yeah. Why doesn’t everybody have all their data? I mean, nobody does. I mean, it’s all distributed among multiple different doctors and health systems, and people don’t have their actual scans. Did you know that 10% of medical scans in the US are repeated because a patient can’t get ahold of their scan?
Alan: 44:43 That’s interesting.
Eric: 44:44 It’s a waste of billions of dollars. It’s because people don’t own their data.
Alan: 44:48 I’ve had a scan where I’ve gone to another doctor, or three doctors, who needed the scan, and I had to have the actual original scan transported from one doctor to another. At some point along the way, it’s bound to get lost.
Eric: 45:04 Right, right. Well, and a lot of people, they just can’t even get it to happen. There’s so much difficulty in getting your data that you paid for. It’s like, what’s wrong with this picture?
Alan: 45:17 What’s the rationale for not releasing the data to the patient? I don’t get that.
Eric: 45:21 Well, legally now, there are laws that are supposed to make this happen, but the real story is called information blocking. Basically, what that is, is that the health systems don’t really want to give out the data, because they don’t want to lose their patient. They don’t want any second opinions to be done.
Alan: 45:41 We’ll keep you by keeping your data.
Eric: 45:43 Yeah, yeah. And so, the default mode is: whatever we can do to prevent you from acquiring your data and going elsewhere, we’re going to do that. It’s rampant.
Alan: 45:56 They’ll keep my shoes next, right?
MUSIC BRIDGE

Alan: Eric, it’s been so much fun talking to you. I really have enjoyed it, and it’s been illuminating.
Eric: 46:07 Oh, I thoroughly enjoyed it, too, Alan, what a real treat.
Alan: 46:10 Before we close, I hope you’re up for this, we always end with seven quick questions that invite seven quick answers. It’s generally about relating and communicating, which is really what we’ve been talking about today. Are you up for it?
Eric: 46:26 Sure.
Alan: 46:27 Okay. First question, what do you wish you really understood?
Eric: 46:32 Oh, gosh, that’s a tough one. Well, I mean, I think the meaning of life is always out there. You think about it all the time, and I think that’s something that I ponder quite a bit.
Alan: 46:51 Okay, good. What do you wish other people understood about you? Let me say that again with more voice.
Eric: 46:58 Yeah, no, I…
Alan: 46:59 What do you wish people understood about you?
Eric: 47:03 Well, a lot of people think that I’m into the technology, or they don’t really understand that that’s just a path to enhancing the humaneness, the humanity. So, I think they really don’t understand. If they were a patient of mine or they were a colleague, they would know better, but that, I think, is a misperception.
Alan: 47:23 Okay, next. What’s the strangest question anyone has ever asked you?
Eric: 47:28 I think the first question that you asked me.
Alan: 47:34 That’s a question you ask yourself all the time. It’s not so strange.
Eric: 47:37 Right, but you brought it out. That’s the difference, yeah.
Alan: 47:41 Okay, here’s the next one. How do you stop a compulsive talker?
Eric: 47:47 That’s a good one. I think the way to do that is non-verbal communication.
Alan: 47:56 Go to sleep.
Eric: 47:58 Well, you kind of get antsy, move around. You’re looking somewhere else, because if you try to cut them off, that doesn’t go too well. Yeah. I think you have to try to convey that the person just needs to kind of let up, in different ways.
Alan: 48:17 Okay. I want to frame this next question. If you take empathy as meaning you’re taking the other person’s perspective into account, not that you’re compassionate toward them or sympathetic toward them, is there anyone for whom you just can’t feel empathy?
Eric: 48:39 Well, I think that’s a really important point. A lot of times, you might get that sense of, I can’t feel empathy for this person, but that’s where we want to strive. I mean, I think that reflex about… that there’s something about this person which is just very negative, we don’t want to go there. We really want to see the positives about everybody.
Alan: 49:09 That’s such a good point, and I hadn’t heard it put that way. To a great extent, you’re saying that when you feel, I just can’t experience empathy for this person, in a way, that’s when you need to call up your empathy the most.
Eric: 49:26 Yes. That’s where you got to go into system two thinking. You got to not be reflexive, but reflective.
Alan: 49:33 Good.
Eric: 49:33 You got to say, “Hey, you know what? This is a human being right here, and there’s something I don’t like about this person, but you know what? That’s what I’m all about. That’s why I’m doing this.” You got to reach for it.
Alan: 49:46 Okay. How do you like to deliver bad news? In-person, on the phone, or by carrier pigeon?
Eric: 49:55 Yeah. Well, no, that has to be done in-person. There’s no other way.
Alan: 49:59 How do you…
Eric: 50:01 You don’t like it, but there’s an art to that. It takes time. It takes time to do that. It’s essential, but doing that through the phone, or email, or some other means is just unacceptable. Of course, sometimes that becomes necessary, just because of unusual circumstances, but that’s where the human-to-human bond is really essential.
Alan: 50:28 Last question. What, if anything, would make you end a friendship?
Eric: 50:36 Well, I think a sense of real betrayal. I think that when you feel like someone is going after you at [inaudible 00:50:46]. That would be really a challenge to a friendship. Fortunately, it doesn’t happen very often, but I think I’ve certainly seen some examples of that through my life. It’s really heart-breaking when it happens, but unfortunately, it can happen.
Alan: 51:07 Well, I have had a lovely conversation with you. I hope that’s the beginning of a beautiful friendship, Louis.
Eric: 51:14 Well, I really enjoyed it, and I look forward to having more chats with you over the years ahead.
Alan: 51:19 Me, too. Thanks so much, Eric.
Eric: 51:21 Thank you. All right, you take care.

Dr. Eric Topol is an executive vice president at Scripps Research and the founder and director of the Scripps Research Translational Institute.
Dr. Topol has published thousands of peer-reviewed articles, and he’s now widely viewed as one of the most influential physician leaders in the country. His book, Deep Medicine, which just hit the bookstores, focuses on artificial intelligence. You can find Dr. Topol on Twitter discussing his latest research at @EricTopol.