Making good decisions in the ED. #RCEM15 #EuSEM15


This blog is based on a talk I gave at the Royal College of Emergency Medicine conference in Manchester, and a similar talk on Metacognition in the ED delivered today in Torino at the EuSEM conference. The Manchester meeting is the first conference under our new Royal Charter and a really significant event in the history of the college and, of course, in the history of emergency medicine across the UK.

The title of the talk is ‘Making Good Decisions’ and in reality that could include many things. Life as an emergency physician is crammed full of decisions and therefore full of judgement. The world of the emergency physician is an uncertain one, where we are required to make difficult decisions on a daily basis. If you want the quick version of the talk, watch this short summary from the Royal College YouTube channel; if you’re interested in knowing more, read on.

The slides for this talk can be seen here, and there’s a podcast version at the bottom of this post.

What do we mean by ‘judgement’?

There is a risk that we can get bogged down with semantics and definitions, so we should define what we are talking about. For the purposes of the talk and this blog post we consider judgement as a term that describes the range of techniques, psychological processes and ideas that we use in the emergency department to make decisions. These include concepts such as risk, probability, gestalt, reasoning and uncertainty. These are topics that we have looked at over the years in EM, and there are a few reasons why they are particularly relevant to us as emergency physicians; it’s worth reflecting on why this is such an important topic for us.

1.     Emergency medicine is often considered to be a risky speciality in that we deal with a population of patients who may go on to have adverse consequences of their disease. We also operate at the admission/discharge interface and so every time we see a patient (or at least in my experience every shift) we take a risk. Is discharge the right decision or are we sending someone home who might have an adverse event?

2.     We arguably treat populations as much as we do individuals. This creates difficult realities in our practice because probability underpins many of our decision-making processes. If we take something like pleuritic chest pain and the risk of PE, we have good evidence that the tools we use have a sensitivity of around 98%, which sounds great. That does of course mean that we miss 1 in 50 patients with PE, and yet this is still considered good practice: in essence our diagnostic tools have an acceptable failure rate (a rough worked example follows this list). We therefore need to be wise about when to use clinical decision rules, and when not to.

3.     We frequently see patients at an early stage of the disease process, when clinical information may be unavailable or at best only minimally manifest. At our initial assessment in the ED, with only the information we can glean at the bedside, uncertainty reigns. It is only as time passes, results come back and trends in the clinical course become apparent that diagnoses are refined and uncertainty falls. We, as emergency physicians, operate in this zone: the zone of uncertainty, the zone of judgement, the zone where the mind of the emergency physician has primacy.

4.     I’ve been a proponent of evidence based medicine for many years, but in the last 5-10 years it has become increasingly apparent to me that evidence is not enough. Nearly all evidence is filtered through something before it reaches a patient, and that something is you and me. It is a holy grail for researchers and health services that when high quality evidence appears it will reach the bedside rapidly and effectively, but we know that this is not the case: the average time for new knowledge to reach everyday practice for our patients is 14 years. Everything we do is filtered through awareness, judgement, opinion and belief, and this is a theme that we have been exploring at St.Emlyn’s for many years. Knowledge translation and transfer remains a problem in all aspects of health care and is a reason why we blog and podcast at St.Emlyn’s.
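To make point 2 concrete, here is a minimal back-of-the-envelope sketch. Only the 98% sensitivity comes from the text above; the caseload and prevalence figures are illustrative assumptions, not data from the talk.

```python
# Illustrative sketch: how a 98% sensitive diagnostic strategy still misses
# patients at volume. Caseload and prevalence are made-up assumptions.

sensitivity = 0.98          # from the text: the strategy detects ~98% of true PEs
prevalence = 0.10           # assumption: 1 in 10 investigated patients has a PE
patients_per_year = 1000    # assumption: patients investigated per department per year

true_pe = patients_per_year * prevalence
expected_misses = true_pe * (1 - sensitivity)

print(f"True PEs investigated per year: {true_pe:.0f}")
print(f"Expected misses per year:       {expected_misses:.1f} "
      f"(1 in {1 / (1 - sensitivity):.0f} of patients with PE)")
```

With these assumed numbers a department would expect to miss around two PEs a year while following accepted practice, which is exactly the ‘acceptable failure rate’ described above.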

So let’s think about judgement in a little more detail, and perhaps consider how we could know whether we are good at it.

·      You can have a great outcome with bad judgement

A patient comes into the ED with a severe sudden onset headache associated with a collapse. He is 32 years old and is well when seen by the ED clinician. He has no neurology and only a mild headache, and he is discharged from the ED as his symptoms have resolved. Three months later he is fine; nothing happened and he came to no harm. Sixty years later he dies, having never developed significant cataracts or a brain tumour. Great outcome – bad judgement.

·      You can have a bad outcome with good judgement.

A patient attends the ED on a Friday night with a swollen leg. He scores 2 on a Wells score and so a d-dimer is taken. This comes back raised and so the patient is placed on low molecular weight heparin over the weekend pending an ultrasound scan on Monday. Sadly he returns on Sunday evening with a raging compartment syndrome requiring a fasciotomy and months of rehabilitation. Great judgement – bad outcome.

When thinking about judgement then process is not the same as outcome.

 

So how do we know if we are making good judgements?

Let’s have a think about some of the mechanisms that we, or others, use when considering this question of clinical decision making. Let’s consider six fallacies of feedback: reasons why we might not make good judgements about our judgement.

1.     You might think that you have great judgement because you don’t hear about complaints, coroner’s inquests or lawsuits. The fallacy here is obvious if you think about it. There is no doubt that adverse events are great learning experiences. They tell us why things go wrong and can help us identify individual and systematic failings in our health care systems. They are really important, but they do not tell us what we need to know about the entirety of our practice. An analogy for me comes from the airline industry, of which we sometimes appear to be in awe when it comes to systems and safety. There is no doubt that work transferred from the airline industry around crash investigation, human factors and patient safety has made profound and important differences to the way we practice medicine. I am a great advocate of this, but it’s a bit like judging whether someone is a good pilot by analysing their crashes. That’s a bad idea for many reasons, most notably that harm has to take place before wisdom arises. It would clearly be insane to spend a long time training and learning about how to crash, and then to hope that by not crashing a plane into a mountain we would somehow be capable of flying a 737 from London to Manchester. Learning from error is great, but we must also recognise that most of our practice is normal care. From an airline perspective we need to consider our performance in normal flight, not just when it goes wrong.

2.     You might think that you make great decisions because you can recall a case where you made a fantastic decision. Perhaps you even saved a life. I recall a case where I made the decision to perform an ultrasound in a shocked 30 year old we thought had sepsis. It turned out they had a ruptured AAA. Was that great judgement or luck? Does it mean I’m a great doctor? No. Just as with the assessment of error in isolation, we cannot judge our practice on extremes.

3.     Perhaps you have a department that functions well and has great outcomes. Does that mean you have great decision making? Probably not. Most of our patients pass through many hands and processes so drilling down into the effect of individual decisions can be difficult. System wide outcomes lack the fidelity to tell you about your decision making. Similarly if you work in a department with poor outcomes that may not be a result of your decision making.

4.     Perhaps nobody has taken you to one side to discuss your judgement in many years. This is an issue for all of us, but perhaps mostly for those who are getting on in our careers. At junior level there are mechanisms to help understand progress and to ensure that we at least consider the breadth of the curriculum. The more senior you become, the less likely it is that you have a formalised mechanism to assess your performance. That is unless you have already been referred to the GMC for poor practice, or you find out through an HLI or SUI in your practice. Sadly, unless you keep on top of your decision making and thinking, the first time you find out about a hidden error may be when disaster happens.

5.     Perhaps time will help you. It’s been said (by Gary Player amongst others) that ‘the more I practice, the luckier I get’. It’s a great quote from a great player, but we are not playing golf and in medicine it’s different: in sport luck gets you to the top, in medicine it protects you at the bottom. Let’s assume that you’re not completely hopeless as a physician, but you’re not very good either. Let’s say you have fallen into the trap of not examining children with potential sepsis properly. Maybe you don’t look for rashes routinely. Sure, you can spot the moribund patient with the widespread rash of meningococcal septicaemia, but you’re rubbish at the more subtle cases. Well, luck is on your side, because very few children with fever have meningococcal septicaemia. Luck and probability are with you, and so you can go on sending them home without looking for a rash for days, weeks, months, in fact years before you miss something important (and you will miss it; a rough illustration follows this list). Luck is very much on your side as a diagnostician, and as such it is a fallacy of feedback. You can make poor decisions for a very long time before your luck runs out.

6.     Perhaps you make great decisions, but on the basis of poor knowledge. This is also known as the unknown unknown problem. It is characterised by you doing what you believe to be correct while the world has moved on, so that what you are doing is historic medicine; it may even be harmful. In those circumstances everyone blissfully carries on regardless without highlighting any poor practice, because you simply don’t recognise what you are doing as poor practice. As an example, consider peripheral vasopressors in the ED. Some of you may think that you need to get a central line in to deliver these, and your patients may hang around, potentially with an adverse outcome, whilst you wait to achieve it. You may well be working in an environment where you would be criticised, even incident reported, for starting a peripheral noradrenaline infusion. The evidence is there that this is safe, and yet you and your colleagues would remain unconsciously incompetent about it.
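To illustrate fallacy 5, here is a minimal sketch of why an absence of disasters is such weak feedback. Every number in it is an invented assumption for the purposes of the illustration, not an epidemiological claim.

```python
# Illustrative sketch: why "nothing bad has happened yet" tells you little.
# Prevalence, miss rate and caseloads are invented assumptions.

prevalence = 0.002          # assumption: 1 in 500 febrile children has meningococcal disease
p_miss_if_present = 0.8     # assumption: a careless assessment misses 80% of subtle cases
p_harm_per_child = prevalence * p_miss_if_present

for n_children in (50, 200, 1000):
    p_clean_record = (1 - p_harm_per_child) ** n_children
    print(f"After {n_children:4d} febrile children: "
          f"{p_clean_record:.0%} chance of still having missed nothing")
```

With these assumed figures a clinician who never looks for a rash still has roughly a nine-in-ten chance of a clean record after 50 children, and around a one-in-five chance even after a thousand. The feedback signal from rare harms simply arrives too slowly to train judgement.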

The theme in all of these issues is around feedback, and you might think that I’m suggesting that feedback does not work well. In fact I believe the opposite. Feedback is fantastic and it’s the key to making good decisions but we don’t do it well.


Does feedback work?

Well, there is evidence that it can. It comes from an area that you are very familiar with, that is very important for us here in Manchester, and that impacts on you every day.

Any thoughts on what it might be?

It is the weather. When weather forecasting started out it was a little hit and miss. Meteorologists would look at information coming in from weather stations across the world, maps would be created and predictions would be made for the weather the next day. In the early days predictions were better than flipping a coin, but there was clearly room for improvement, and that’s exactly what has happened. Today weather forecasting is excellent. If it says it’s going to rain tomorrow in Manchester then it probably will – although that’s a fairly easy prediction. What we now have is the ability to predict the probability of weather with a high degree of accuracy and reliability.

This change has come about through feedback. However, it’s not the feedback of tornadoes or snow; it’s regular, repeatable, routine feedback. In airline terms it’s an analysis of normal flight rather than aerobatics or crashes.
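As a minimal sketch of what that kind of routine feedback makes possible, forecasters can score every probabilistic prediction against what actually happened, for example with a Brier score (the mean squared difference between forecast probability and outcome; lower is better). The forecasts and outcomes below are invented purely for illustration.

```python
# Illustrative sketch: scoring routine probabilistic forecasts with a Brier score.
# Forecasts and outcomes are invented; the point is that every ordinary day
# contributes to the feedback loop, not just the storms.

forecasts = [0.9, 0.7, 0.2, 0.8, 0.1, 0.6]   # predicted probability of rain each day
outcomes  = [1,   1,   0,   1,   0,   0]     # 1 = it rained, 0 = it stayed dry

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score over {len(forecasts)} routine forecasts: {brier:.3f}")
```

There is no routinely collected equivalent for most of the diagnostic probabilities we commit to in the ED, which is rather the point of what follows.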

In medicine do we do this? Well, in many specialities it does happen. Many surgical specialities have outcome and process data (e.g. cardiac surgery, ophthalmology). Similarly, pretty much every other speciality gets feedback on its decisions in some formalised way. Clinic patients come back. Referring a patient from one inpatient service to another triggers a process that produces a letter of opinion which (mostly) returns to the referring doctor. This creates an effective feedback loop that helps the clinician learn about their own decision making and judgement.

In many emergency medicine systems it’s different. With few exceptions we do not get regular, routine feedback on our patients. Most of the time we see patients in the ED, make a diagnosis and then fire and forget. The patient leaves with a label or a treatment, but we remain ignorant of whether it was right or wrong unless it is an exceptional case. We may learn about the extremes of our practice through the usual mechanisms, but what do we have to learn about the generality of our practice?

This is wrong and runs counter to good learning. There is no doubt that examining exceptional events can produce positive outcomes, and I’m not suggesting that we should not do this. However, if it’s the only learning we do then it’s the equivalent of studying shark behaviour by only looking at shark attacks. Our view would be skewed; we might never go back in the water, or we might cause major damage by changing our behaviour to guard against one small, rare aspect of the problem.

 

So how do you know if YOU make good decisions?

What does your feedback loop look like?

How do you follow up the routine patients in your practice?

 

Perhaps we do need to follow patients up and find out what happens to them, and not just the exceptions. It’s not difficult for some of our patients. Those we admit to hospital will inevitably get a discharge letter, and in most hospitals these are available on your desktop computer. Nothing apart from inertia prevents you from doing this, and it’s interesting when you do. I’ve found some interesting cases that have made me think about my practice. In the majority of cases this process validates what I do, but occasionally cases make me stop and think: the patient with my diagnosis of ACS turning out to have pericarditis, for example. That’s not the sort of case that would come back as an error, but it is the sort of case that I can learn from.

 

So what can we do to train our judgement and decision making?

Case note reviews

1.     Full notes review. You could ask your admin team to pull every set of notes for patients you have seen and then follow them up, but in all honesty this is unnecessary and you could end up paralysed by reflection.

2.     Discharge letters. Let’s remember that every admitted patient will get a discharge letter. These are easy to find and can be looked at 1-2 weeks after an on-call shift. Keep a record of the patients you see in a notebook, a digital entry, a photocopy of the ED record or whatever (a minimal logging sketch follows this list). Follow up initially with the discharge letter and then delve further where you think you need to.

3.     For those patients who are discharged from the department it’s trickier. We could phone patients, though this is not something I’ve tried. We would need to get clear agreement from patients to do this, so don’t try it until you have considered the confidentiality and logistic issues in your health system.

4.     It may be possible to get feedback from GPs but again this is difficult as systems between hospital and primary care are not geared up for this.
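As a minimal sketch of option 2, the record you keep can be as simple as a few fields per case that you revisit against the discharge letter a week or two later. The field names and file format below are hypothetical, and anything like this must obviously comply with local information governance rules about patient-identifiable data.

```python
# Illustrative sketch of a personal follow-up log for routine case review.
# Field names and file name are hypothetical; check local information
# governance before recording anything patient-identifiable.

import csv
import os
from datetime import date

LOG_FILE = "followup_log.csv"
FIELDS = ["seen_on", "case_ref", "ed_impression", "final_diagnosis", "lesson"]

def log_case(case_ref, ed_impression, final_diagnosis="pending", lesson=""):
    """Append one case so it can be revisited against the discharge letter later."""
    write_header = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "seen_on": date.today().isoformat(),
            "case_ref": case_ref,
            "ed_impression": ed_impression,
            "final_diagnosis": final_diagnosis,
            "lesson": lesson,
        })

# Example: jot down a handful of cases at the end of a shift,
# then update final_diagnosis and lesson after reading the discharge letters.
log_case("case-001", "?ACS")
log_case("case-002", "?PE, low risk, discharged")
```

The exact tool matters far less than the habit: a handful of cases per on-call, reviewed a week or two later, is enough to close the loop.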

 

How much should I be doing?

That’s a good question and it rather depends on what else you are up to at the moment. I would suggest that you can get by with 5-10 cases per week. If you keep something like an NHS number and have electronic records or discharge summaries, this will take you less than 20 minutes. That’s quite a small investment of time for a potentially important return.

 

Peer review

Although I’ve talked about some of the disadvantages of working in the ED in terms of getting feedback from inpatient teams or family practitioners, in some ways we have potential advantages. We work in teams. We often work alongside our consultant colleagues in the resus room, and thus we have amazing opportunities to use them as a spotlight on our judgement. Think about it: a fellow physician, with the same training and requirements as yourself, in the same space and time. It’s what I call a process of internal externality (we have an internal resource that can give a degree of external feedback). This does not have to be complex, and can be as simple as peer review and observation. We’ve looked at this before at St.Emlyn’s, describing a peer review process for trauma team leaders. Interestingly, this usually results in more learning for the observer than the observed, perhaps tackling questions of unconscious incompetence.

 

Asking the right questions.

One of our most important roles as clinical leaders and trainers is to give advice on patient care in the department. When times are busy and the department is overworked it’s all too easy to just tell people what to do, but that is a missed opportunity for exploring clinical decision making.

Remember that outcome is not the same as process, so when a colleague asks whether they should do something (e.g. admit a patient as an ACS), don’t just agree or disagree. Learn to explore how your trainee came to a decision, not just what the decision is. This allows both of you to understand and even diagnose their thinking, and from there you can even deliver therapy. By therapy I mean that if you understand why someone came to a decision, you can address the reasoning that led them there rather than just the answer itself.

I like the paper by Bowen from the NEJM in 2006, which outlines a number of strategies to diagnose and treat problems with clinical judgement. It’s a good paper and a fairly straightforward read.

 

Summary

In summary we need to recognise and value clinical decision making as a core skill in the ED. We must learn how to understand our own decisions and those of our colleagues. Where abnormal thinking arises we should be able to understand why and to assist colleagues and ourselves in improving it.


 

Major learning points.

1.     It’s difficult to judge your own decisions without information, and we often lack this.

2.     Looking at outcomes is good but not enough. Judgement is about process not outcome. Nobody wants the lucky doctor, they want the good doctor.

3.     Normal flight is the best place to develop and train judgement. Make sure that you spend time in the middle and not at the extremes of practice.

Further reading

 


Cite this article as: Simon Carley, "Making good decisions in the ED. #RCEM15 #EuSEM15," in St.Emlyn's, October 13, 2015, https://www.stemlynsblog.org/making-good-decisions-in-the-ed-rcem15/.

Comments

  1. Simon,

    You address an important, yet quite subjective issue. The problem in clinical medicine is that often we cannot precisely quantify risk. Sometimes there are no statistics available; on other occasions they are averages derived from patient populations, which we then attempt to apply to an individual whose own particular set of circumstances may alter their responsiveness to treatment or their complication rates.

    Another issue that comes into play is the subjective question of how much risk is tolerable or acceptable. Part of this is driven by social and cultural expectations.

    Take, for example, two highly developed Western countries that border one another. On one side is a highly litigious society with healthcare predominantly delivered by a private sector that promotes defensive medicine and aggressive investigation. On the other is a socialised system, which attracts less funding and therefore less expenditure. On the face of it, and on world health indices, they function similarly. It is difficult to distinguish them statistically even though their clinical practice may differ.

    Even within my department, there is a range of risk-taking/risk behaviours by different consultants. If I asked for feedback I would get a mixture of responses. In my own mind I stay on the middle ground, neither too conservative nor too cavalier – not too progressive but not faddish. Yet our gross outcomes are the same in terms of adverse incidents. In other words, we are all pretty safe.

    Then I compare myself with some doctors from less resourced countries. Sometimes I think that they are a little more ‘risky’. And it isn’t due to a lack of clinical knowledge or a difference in the appraisal of the information at hand. We both accept that in a certain patient the risk is already small. However, in the systems in which they previously worked, reducing it further wasn’t obligatory or expected, and to reduce it further would have entailed a significant resource burden.

    A 1-2% miss rate for PE is ‘acceptable’ in a modern Western health system. Who decided that this was the threshold level for a ‘safe’ protocol? Could it be too low? Could it be too high? Should it be 0.5% or even less? And how much more would we need to spend to achieve this?

    This all reflects the law of diminishing returns in healthcare. As your systems become safer and safer, it becomes very difficult to determine which interventions or ‘judgments’ actually make a big difference to outcomes at the population level.

    The risk-benefit difference between peripheral and central noradrenaline is vanishingly small compared with the problems of a third-world hospital that has run out of antibiotics.


  2. The 1-2% risk threshold was devised in PE studies at the point where further investigation became more harmful than leaving nature to take its course. It was thought through quite carefully, but I would agree that there is still a degree of subjectivity.

    Thanks again for your comments and apologies for taking so long to reply.

    S


Thanks so much for following. Viva la #FOAMed
