JC: Those who can, do; those who teach, do better. St.Emlyn’s

I’ve always been somewhat irritated by the old adage that ‘Those that can, do; those that can’t, teach’. Apart from the obvious insult to teachers (which is just rude), it’s also patently untrue in medicine. Looking back across my medical career, it’s clear to me that the best clinicians are almost always fantastic teachers. That is, if you use the standard of ‘whether I think they are any good’, and of course that assessment is itself full of all sorts of bias.

A better measure might be whether those who teach improve patient outcomes. Indeed, that is the question behind a study published this month in JAMA, in which authors analysing data from the US Medicare program suggest an association between better patient outcomes and the teaching status of an organisation.1

The abstract is below but, as always, read the full paper.

What kind of study is this?

It’s an observational database study, which is hardly the highest level of evidence. Such studies are commonly churned out of large databases of routinely collected information to look for trends in outcome. That sounds great, but such studies are prone to bias.2,3 There are many reasons why we might find a chance association, and association does not equate to causality. These types of studies should therefore usually be considered hypothesis-generating rather than hypothesis-proving.

On the other hand, it’s rather tricky to look at this question in any other way. We could hardly randomise patients to be treated in a different type of facility. A trial of intervention is not pragmatic here, so let’s go with the observational data instead.

Talk to me about size.

It’s huge! The study compares outcomes for patients across 15 medical and 6 surgical conditions, looking at 4,483 hospitals and 21 million patients. Wow, you say. That must equate to incredible statistical power, and that’s true. The sheer numbers allow us to look for trends with a degree of statistical precision that is simply not possible with small numbers. However, be careful here. If a trial has methodological flaws, those won’t be fixed by patient numbers, and in fact the reverse can take place. A very large trial with a methodological error may appear more robust than a small trial with the same error because the precision estimates (confidence intervals, fragility index, p-values) look better. Precision will certainly improve with size; methodological problems certainly won’t. We must not be dazzled by big numbers, but focus on the methods and the potential biases that result from the research design.
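To make the ‘big numbers don’t fix bias’ point concrete, here’s a small simulation (pure Python, entirely hypothetical numbers, nothing to do with the paper’s actual data): a systematically biased measurement process is sampled at increasing sizes. The confidence interval shrinks beautifully with n, but it shrinks around the wrong value.

```python
import math
import random

random.seed(42)

TRUE_MORTALITY = 0.10  # hypothetical true mortality rate
BIAS = 0.02            # hypothetical systematic bias in how deaths are captured

def mean_ci(sample):
    """Sample mean with an approximate 95% confidence interval half-width."""
    n = len(sample)
    m = sum(sample) / n
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    return m, 1.96 * math.sqrt(var / n)

results = {}
for n in (100, 10_000, 1_000_000):
    # every observation comes from the *biased* process, however many we collect
    sample = [1 if random.random() < TRUE_MORTALITY + BIAS else 0 for _ in range(n)]
    results[n] = mean_ci(sample)
    m, half = results[n]
    print(f"n={n:>9}: estimate = {m:.4f} +/- {half:.4f}  (truth = {TRUE_MORTALITY})")
```

At a million patients the interval is a fraction of a percentage point wide, yet it confidently excludes the true rate: the precision improved, the bias did not.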

Tell me about the patients.

This is really important and something that’s not apparent in the abstract. These are Medicare patients (so not everyone), aged over 65 (so not everyone), with one of 21 conditions (so not everything), in US hospitals (so not like us). These limitations significantly restrict the generalisability of the findings, particularly when considering the effect in a country like the UK, where in effect we are all teaching hospitals. We do have solely private institutions, and we do have major and minor teaching hospitals, but there are clearly significant structural and academic differences between here and the US.

And the outcomes?

7- and 30-day mortality.

What did they find?

In brief (read the abstract above), they found that major teaching hospitals had lower mortality rates than minor teaching hospitals and non-teaching hospitals. This difference persisted even when the data were adjusted for known confounders. They also looked at whether the size of the hospital mattered; again, when similar-sized hospitals were compared, it still looked as though there is a mortality benefit to being cared for in a teaching facility.

How should we handle these findings?

Well, it does not answer my original question about individuals’ abilities to teach. This is an enormous study in which nuance and style are swamped by statistical power. We must be careful to remember that association is not the same as causation, and there are many reasons why outcomes might differ. The patients, the facilities, the education, the severity of illness and the location of the patients will differ in ways that the statistical adjustments here cannot fully account for, and we must always be cautious about this.

The reference standards chosen here do not account for the broad range of conditions we see in clinical practice, and it’s possible that hospitals that know which sentinel conditions are going to be studied will put additional resources into those areas, a solution that might be easier in a larger hospital or teaching unit.

At the risk of repetition, the difference between what is a teaching facility and what is not needs careful consideration. There are many differences likely to relate to location, funding, socio-economic status, support, staffing, access to resources, sub-specialists, specialist interventional procedures, and a whole host of other patient and non-patient factors that could influence results like the ones found here.

It is tempting to think that teaching hospitals really are doing better, and coming from a teaching hospital myself I would like to believe the results (Ed – a personal bias which is unfounded in reality), but is there sufficient data to be sure of that? I really don’t think so. Is there sufficient data here to state that teaching is the reason for the difference? Absolutely not. Again, association is not the same as causation, and this study is a good reminder of that fact. I shudder at how the press might misinterpret these findings in the coming months.

I am told by my friends that this whole topic of teaching vs. non-teaching hospitals is a controversial area in the US, with cultural and political undertones. It’s therefore very important that we subject papers like this to careful review. It’s also very clear to the St.Emlyn’s team that we know some fantastic educators in the #FOAMed world who are not in mainstream teaching facilities in the US, and thus we can’t even equate excellence in education with the designation of the facility.

The differences found, if true, would be clinically important. After statistical adjustment there is a 1.2% absolute difference in mortality, which (as you all know) is an NNH of about 83 – roughly one additional death for every 83 patients treated outside a major teaching hospital. That’s a pretty small NNH and one that raises an eyebrow. Remarkable? Yes, of course, but perhaps too remarkable to be entirely true, and we need to think hard about why that might be.
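For transparency, the arithmetic behind that figure is just the reciprocal of the absolute risk difference. A quick sketch in Python (the only input is the 1.2% adjusted difference quoted above):

```python
def number_needed(absolute_risk_difference):
    """NNT/NNH: the reciprocal of the absolute risk difference between two groups."""
    return 1 / absolute_risk_difference

# 1.2% adjusted absolute mortality difference, as reported in the paper
print(round(number_needed(0.012)))  # -> 83
```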

What does this mean?

Tricky. It probably means that outcomes are different; the question is why. This paper sensibly concludes that we don’t really know, but that we may well need to look quite carefully.

vb

@EMManchester



References

1. Burke LG, Frakt AB, Khullar D, Orav EJ, Jha AK. Association Between Teaching Status and Mortality in US Hospitals. JAMA. 2017;317(20):2105. doi:10.1001/jama.2017.5702
2. Brodt E, Skelly A, Dettori J, Hashimoto R. Administrative Database Studies: Goldmine or Goose Chase? Evidence-Based Spine-Care Journal. 2014;5(2):74-76. doi:10.1055/s-0034-1390027
3. Rosenthal GE. Finding Pure and Simple Truths With Administrative Data. JAMA. 2012;307(13):1433. doi:10.1001/jama.2012.404

1 Comment

  1. Mike Davis

    It is not very often that I feel the urge to respond to St Emlyn’s blogs, not out of lack of interest, but more from a realisation of my limits. I was drawn to this one, however, by the title and felt that it might give me something to think about. So thanks, Simon, for getting me this far. A number of things strike me:

    that the study probably says as much about the size, financial worth, social context etc etc of the hospitals in the study as it does about the teaching role that some of the clinicians occupy – both the paper and Simon’s commentary reflect this, and rightly so.

    the research method says a lot about what some academic clinicians with an interest in educational methodology like to assume: that you can conduct the same kinds of studies as you might in more closed settings, preferably ones with the capacity for randomised control, and that educational interventions can therefore be studied in the same way that clinical ones can – I don’t think that this is either possible or desirable. Educational interventions are notably free of “proofs” that might justify one course of action over another. This can lead to some uncertainty, and to the capacity of systems to be dictated by idiots (think 1988 Education Reform Act), but they are easier to undo when a new good idea comes along, or a new seriously bad idea appears for no reason at all, other than a nostalgia for the 1950s (cf Theresa May and grammar schools).

    What I think is interesting about the general observations in the blog is that teaching makes for better clinical practice, summed up by the neat reversal, in the title, of the stereotypical view of teachers. As an ex-teacher (secondary school, university, now CME), I have a vested interest in dismissing the notion that teachers can’t “do”, and I have met very many who are clearly real experts in their field who delight in sharing their expertise with others. There are a few out there for whom teaching is not an interest and they are, accordingly, not very good at it. I am sure some of them are excellent clinicians, but many may not be.

    Anyway, this has been a diversion from what I should be “doing”, so perhaps I should stop trying to teach and get on with it.

    Reply

Thanks so much for following. Viva la #FOAMed
