"Chapter 10: A Great Step for MD Anderson: Building Multidisciplinary Teams" by Gabriel N. Hortobagyi MD and Tacey A. Rosolowski PhD
 
Chapter 10: A Great Step for MD Anderson: Building Multidisciplinary Teams


Description

Dr. Hortobagyi begins this chapter by stating that, in 1975, he was disillusioned by the lack of collegiality at MD Anderson, and so he invited individuals from many disciplines (including Developmental Therapeutics) to discuss cases, explain their different perspectives on treatment, and collectively determine the best combination and order of measures. Slowly, he notes, they were able to build mutually respectful teams. He describes some of the clinical trials that emerged from the collaborations. Dr. Hortobagyi affirms that this interdisciplinary work represents one of the greatest steps forward at the institution, one that created teamwork twenty years before the creation of the official multidisciplinary breast center.

Dr. Hortobagyi next explains that, in the seventies, some leaders at MD Anderson considered randomized clinical trials immoral because they would withhold from some patients therapies believed to be more effective than the existing standard. Dr. Hortobagyi himself believes that clinical trials are an important tool for medical science. He sketches the development of thought regarding ethics and randomized trials and explains other reasons why physicians do not believe that randomized trials are necessary. He observes that oncology is “light years ahead” of the rest of medicine in accepting their value. He tells a story that demonstrates how radiology does not see the benefit; he also notes that there are no controlled trials comparing, for example, proton therapy to conventional photon or electron beam therapy. He sees a similar situation with the treatment options for prostate cancer, and states that it is “tragic” that there is a lack of evidence-based information for major decisions. Dr. Hortobagyi then compares the controlled laboratory research scenario to the complex challenges characterizing clinical investigation of living human beings. He states that much of what physicians do has no basis in fact. He goes on to talk about the economic impact that such decisions can have. He compares the $1,000 one might spend on a basic course of adjuvant chemotherapy to the $200,000 one can spend on the newest adjuvant chemotherapy and biological therapy regimen, noting that his group has done cost-benefit studies to determine whether the money spent on these treatments is well spent.

Dr. Hortobagyi points out that very expensive treatments cannot always be exported to other institutions. Randomized trials provide a way of determining how effective treatments are at each cost level and therefore provide a logical way of seeing incremental benefit. This provides a sound basis for making decisions on which treatment methods to adopt for the greatest public benefit.

Dr. Hortobagyi explains that he is not a “purist” who insists that every point be demonstrated through randomized trials. He advocates the identification of basic questions and treatment options in each specialty and a strategy of comparing them via Comparative Effectiveness Research methods.

Identifier

HortobagyiGN_02_20130107_C10

Publication Date

1-7-2013

Publisher

The Making Cancer History® Voices Oral History Collection, The University of Texas MD Anderson Cancer Center

City

Houston, Texas

Topics Covered

The University of Texas MD Anderson Cancer Center - Building the Institution; Critical Perspectives on MD Anderson; Building/Transforming the Institution; Multi-disciplinary Approaches; Institutional Mission and Values; MD Anderson Past; Controversy; On Research and Researchers; Understanding Cancer, the History of Science, Cancer Research; The History of Health Care, Patient Care; Professional Practice; The Professional at Work; Collaborations; The Administrator; The Healthcare Industry; Fiscal Realities in Healthcare; Beyond the Institution; Global Issues – Cancer, Health, Medicine

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.

Disciplines

History of Science, Technology, and Medicine | Oncology | Oral History

Transcript

Gabriel Hortobagyi, MD:

In the meantime, I became quite uncomfortable about the less than collegial interaction between the various specialties in the breast cancer field here. And especially there was a not very collegial interaction between medical oncology and surgery with a little bit more neutral interaction with Radiation Oncology. I started to organize a biweekly conference in which we invited everybody interested in breast cancer to come and discuss specific cases.

Tacey Ann Rosolowski, PhD:

I'm sorry to interrupt you. I missed the second division or department that you were interacting with. Medical oncology and surgery and then medical oncology and—?

Gabriel Hortobagyi, MD:

Radiation.

Tacey Ann Rosolowski, PhD:

So you began to invite people to discuss cases.

Gabriel Hortobagyi, MD:

Right.

Tacey Ann Rosolowski, PhD:

And about what year was this?

Gabriel Hortobagyi, MD:

This is 1975 probably. And then at the beginning a few would come, but then increasingly more and more would show up, and then we would discuss specific cases. We would look at their x-rays. We would look at their pathology. We would talk about them, and we would try to reach a consensus about what was the combination of treatments and the order of treatments that would best suit those patients. And then eventually when you start to talk to people and to listen to people, more importantly, all of the mistrust goes away. And then slowly we built a mutually respectful team that persists to this day. I think probably arguably that is one of the greatest steps forward we took in this institution, because we did that probably twenty years before the concept of breast centers started to come around, and we were doing this well before the ’90s. Today there is a lot written about comprehensive breast centers and multidisciplinary care and whatnot, but that was really developed in this institution, and it was developed thanks to the willingness of all of my colleagues to come together and put their anger and mistrust and ill feelings aside.

Tacey Ann Rosolowski, PhD:

With that process, was it a bumpy ride at first?

Gabriel Hortobagyi, MD:

Oh, it was a bumpy ride for about ten years. It didn’t happen overnight.

Tacey Ann Rosolowski, PhD:

How did it work? Did you and others come in and sort of smooth feathers? Or was there an agenda? How did you manage it? That’s a big cultural change.

Gabriel Hortobagyi, MD:

There was always an agenda, and we had a number of patients that were listed. We specifically invited those who were responsible for the care of those patients in the various departments, and then we started to talk not only about those patients but in discussions it was obvious that there was a difference in opinion about something. We would start talking about, okay, that’s wonderful. Here there are two experts with great experience in this who disagree completely about a matter of great importance. Why don’t we put that to a test? And then we would develop protocols, clinical trials that would then test whether it is better to do this first or the other things first.

Tacey Ann Rosolowski, PhD:

What were some of the clinical trials that came from this process?

Gabriel Hortobagyi, MD:

Well, the first clinical trial was, for instance, the sequence of chemotherapy first followed by either surgery or radiation therapy and then followed by more chemotherapy. Then based on that we were able to determine that neither surgery alone nor radiation therapy alone was sufficient for this and that we actually needed both of them. Then the next protocol included both of them, and then we tested the sequence, and then gradually we developed that. Viewed with today’s eyes with what we consider to be level one evidence and the gold standard of clinical trials, which is the randomized trial, those were modest steps because randomized trials were not acceptable in this institution in the early ’70s. In fact, there were several intellectual leaders who were vociferous against randomized trials and considered them immoral, because if you thought you could do better with something, how could you even test the other aspect? Those were philosophical discussions about whether real equipoise exists when you really have an idea that you think you can do better.

Tacey Ann Rosolowski, PhD:

What was your view of that at the time?

Gabriel Hortobagyi, MD:

Well, I thought all along that science moves because one comes up with a hypothesis and that sometimes that hypothesis is right. Sometimes it is wrong, but the only way to figure out what is right and what is wrong is by testing it. And by testing it, you have to put aside your own prejudices and your biases; and therefore, the randomized trial is an important tool. Not the only tool, but it is a very important tool. And in that, I found myself in disagreement with some of my colleagues, especially some of my mentors. But gradually one is able to implement things, and eventually, several years after my arrival, we were able to initiate some randomized trials, and we then initiated and completed a number of randomized trials over the years that have had a substantial impact on the management of breast cancer.

Tacey Ann Rosolowski, PhD:

Can I ask another kind of historical question about this? Because it sounds like—I mean, this being in the early ’70s, this is one of those basic ethical dilemmas. And was this something that all research institutions were wrestling with cancer questions? Or was that more—it happened more at MD Anderson because of the leadership here? How would you evaluate that?

Gabriel Hortobagyi, MD:

Well, randomized clinical trials were first developed in the 1940s, probably with antibiotics. But they were few and far between until the 1960s. There were a couple done in the 1950s, but they were not considered the standard. Clinical research essentially was I will do this, and then I check back later to see how I did. That was, of course, valuable in some ways because you could test whether your idea was feasible or not. You can test whether you caused some benefit or whether you caused some harm. But in the absence of comparing it in an unbiased manner to something that a large proportion of your colleagues considered the standard, it was not a very useful technique because it was always intended to fulfill your biases and prove that you were right, and that’s not the point of science. That’s not the point of investigation. In this institution, everybody struggled with that controversy, and it required a cultural change on the part of everyone. And even today some specialties do better than others in that respect.

And I'm not just talking about the oncology world but in general. I can tell you, for instance, oncology—and in that I include all oncology, surgical, radiation, medical, et cetera—we are light years ahead of the rest of medicine in terms of randomized trials. Just as an aside, for instance, when I was president of ASCO, I asked my outcomes research committee to put together a review paper or position paper or consensus paper about PET scanning and breast cancer—Positron Emission Tomography and breast cancer—because in the radiology field it was being seen—and I'm talking six or seven years ago—as the greatest thing since sliced bread, and there was very little evidence for that. There were a number of small papers of fifteen, twenty, thirty patients—each with no controls—and that was their conclusion. I asked the members of this committee to identify a number of leaders in the radiology community, bring them together, and try to either design clinical trials that would give us the answers or try to come to a consensus. And after a year of deliberations, there was absolute impossibility to even come close to a consensus because the oncologists wanted this particular imaging modality to be put to a test in a randomized trial, and the radiologists absolutely saw no reason to do it.

You see other examples in our day. For instance, our institution has invested a huge amount of money and places a lot of emphasis on proton therapy. We have a huge thing down the road about proton therapy, and those of us who are not radiation oncologists—we are still waiting for some evidence from a controlled trial that using the very expensive technology of proton therapy is any better than using standard external beam photon therapy or electron beam therapy. And it is not because the radiation oncology field is so polarized that some radiation oncologists think they already know that it is better, and therefore, they see no reason to test it. And others don’t think that it is ever going to be better, and they don’t see any reason to test it for that reason. The same in the prostate cancer field. As you know, there has never been a consensus reached about when you have early prostate cancer, are you better treated by a prostatectomy—therefore surgical excision—or by radiation therapy to the prostate? And that has never been put to a test because those two specialties cannot come together to design a trial. And I think it’s tragic for medicine to find ourselves in a situation where you reach an impasse where no evidence-based answer comes forth because we believe that we are scientists, but we don’t act like scientists.

Tacey Ann Rosolowski, PhD:

I was just going to say that I've had a number of conversations with people who have mentioned that clinical research seems to be a poor handmaiden when it comes to a comparison with laboratory research. But this cultural issue might be a very strong factor in that.

Gabriel Hortobagyi, MD:

It’s a huge factor in that. Clinical research is hard enough because when you're in the lab and you deal with test tubes, you have absolute control over the circumstances. You can define the temperature. You can define the number of cells. You can define the nutrients. You can define the time. You can define the amount of drug you put in there or whatever your experiment is. Or if you're dealing with animals, you can pick the exact type of mice or rats or monkeys or dogs. You can define when you transplant the cancer, when you give them the carcinogen. You can define how long you watch them. You can define how large you let the tumor grow before you do A, B, C, or D. Everything is under your control. When you deal with human beings, you can’t. We are a highly inbred species. We have subtle differences, but we are all somewhat different. We all have a life, so I can tell my patient, “I want you to receive drug X every three weeks.” Well, she may be able to do that the first two times, and then the third time she runs into a car accident, or her child is sick, or some other thing happens. She develops the flu, and she misses that appointment. Well, what can I do about it? It’s a fact of life. Or I give the same dose to Mrs. Jones as to Mrs. Smith, and Mrs. Jones tolerates it perfectly well, and Mrs. Smith gets deathly ill with that same dose. Well, the two of them have different metabolisms, so on and so forth. You have to deal with that, and you have to do the best you can, and you have to realize that you cannot control everything. We know from large clinical trials that we get compliance. Depending on what we are trying to accomplish, compliance is anywhere from fifty to seventy percent with what we are trying to establish. So then you have to make statistical adjustments to that to make sure that you don't lose the power of the observation. It’s very complicated, even when everybody is committed to it. 
Now if nobody is committed to that, then it’s impossible to do, but you're correct. Clinical research is much harder because of that than laboratory research and in some ways much more frustrating because of that. Because sometimes you do a clinical trial. You run a clinical trial for five, ten years, and at the end of that, you have nothing to show for it because of all of these intricacies and these complexities. Well, but I digress.

Tacey Ann Rosolowski, PhD:

Actually, if you don’t mind, I'd like to ask one more question, because as your own persona as a researcher evolved, how did you come to your own intellectual peace with that process and learn how to be flexible in the face of all those frustrations?

Gabriel Hortobagyi, MD:

Well, first of all, it’s because I'm a natural skeptic. So while I have a lot of respect for my elders and those who preceded me, the more I learned about science and the more reading I did, the more I realized that the way to make progress was to continuously question pretty much everything and to really look for the evidence behind every sweeping statement. And there are many sweeping statements in medicine. And yet, when you look at what we do as physicians, much of what we do—the majority of what we do—and now I'm talking about medicine in general—has never been proven by what we call level one evidence. Much of it is anecdotal, and there are millions of examples in medicine where we do things for years or decades or centuries that have absolutely no basis in fact. And even in modern times, we continue to do things because we lose the mental discipline or the scientific discipline that should be an absolute necessity in the practice of medicine. Now I don’t want you to go away with the idea that I'm a purist and that I want the randomized trial to support absolutely everything we do because that day will never come. There are far too many questions, too many details, too many complexities for us to be able to test them in a randomized trial. And randomized trials are very expensive, very labor and time consuming, so that is not possible. But certainly, some of the basic issues, some of the basic principles, need to be established.

Tacey Ann Rosolowski, PhD:

What are some of those principles that you feel have not been demonstrated but right now need to be very desperately?

Gabriel Hortobagyi, MD:

Well, I mentioned to you, for instance, the situation in prostate cancer where depending on what door you walk into in this institution or to your physician when you are first diagnosed with prostate cancer, you are told to do this or that. Entirely different treatments with different complications, and we don’t know whether they have similar or very different efficacy because we have never compared them. We have done that in the breast cancer area, and we have done that in colorectal cancer and in lung cancer and whatnot. Why not in the area of prostate cancer? And unfortunately, by doing certain comparisons in one cancer it does not guarantee you that the answer will be the same in others. Those types of things need to be established. I think technical progress is one thing.

Clinical utility is something else. For me to accept that proton therapy is better than electron beam therapy or even something as basic as cobalt-60—which still is the equipment used in many third-world countries and does a perfectly good job—I think there is a need to demonstrate that, to demonstrate that both in terms of what is good and what is bad about that because we always imagine that everything that is new is better, but most things have two sides. Most technical improvements are a double-edged sword, and we need to understand that. And the only way we can understand that is by comparing the new one with the old one. I think those are critical issues, and some of the things we do that are new in addition to the importance of understanding in what way are they better or in what way are they worse are incredibly more expensive. I can give, for instance, someone adjuvant chemotherapy for breast cancer for about $1000 for the entire course; or I can spend $200,000 with our newest and greatest adjuvant chemotherapy and biological therapy program with all the bells and whistles. Now it just happens that we have non-randomized trials all throughout the process between the most basic and the most expensive one, and we know that there are some incremental benefits along the way.

But by having done that, then I can go back and do what I did, for instance, with others in a group called Breast Health Global Initiative where we developed guidelines for countries of limited resources. You can think that we live in a very rich and very prosperous country, and therefore, we can afford everything for everybody. I don’t think that’s true, but we are certainly privileged to live in a country with lots of resources. But when you live in Ghana or Mali or Bolivia or hundreds of countries like that, you can’t because people’s average daily income is under a dollar, and the entire healthcare budget for the country is less than one-tenth of the annual budget of MD Anderson. You can’t say we’re going to give the $200,000 adjuvant regimen to everyone with a certain type of breast cancer in Mali. It just doesn’t work. We were able to backtrack and say the incremental benefit from going from $1000 to $200,000 is this much. Is that worth it for the public health of that country, recognizing that it was not optimal? But you can’t do that if you have never done any of the comparisons. You don’t know what you're missing. You don’t know what is worth that incremental cost. So I think identifying some basic questions in each of our specialties and identifying some options and comparing them is very important. Now today one of the emerging specialties in medicine—specialty in oncology—is comparative effectiveness research, and I think that’s critically important.

Tacey Ann Rosolowski, PhD:

I'm sorry. I missed the first word.

Gabriel Hortobagyi, MD:

Comparative.

Tacey Ann Rosolowski, PhD:

Oh, comparative—comparative effectiveness.

Gabriel Hortobagyi, MD:

Comparative effectiveness research or CER, and in fact, part of the ACA, the Affordable Care Act, is to dedicate a certain amount of resources to doing that—to compare treatments that are seemingly focused on the same condition and trying to figure out which is better. Which is better in terms of effectiveness, and which is better in terms of cost, and which should be the standard of care for our community? And while some people think of that as, oh, that will lead to rationing, when we scream, “We don’t want to pay more in taxes,” then we need to figure out who is going to pay for our healthcare.

Tacey Ann Rosolowski, PhD:

And what exactly we’re paying for.

Gabriel Hortobagyi, MD:

And what exactly are we paying for, and is it worth paying for that?

Conditions Governing Access

Open
