Well! I go away for a week to Devon to spend time with the family and to visit the holiday haunts of Agatha Christie and what do you know? JC descends into another discussion about the investigation of Pulmonary Embolus with CTPA. This is an area that has dominated Journal Club (and EM globally) over the last few years and I do my best to talk about other stuff as well, but secretly they were just waiting until I was away. I feel the influence of @thegreathornero here and suspect that he has had something to do with it. As the recognised St.Emlyn’s expert in all things VTE he no doubt influenced this, even though he has been temporarily banished to St.Elsewhere’s.
Having said that, the issue remains an important one for us, so it is perhaps deserving of a review, and it’s also a plug for the St.Emlyn’s twitter journal club @JC_StE which ran alongside this paper last week.
So, what was the paper you ask? Have a read of the abstract below and follow the links to get the full paper. Read and decide whether it is of use to you and whether there are any flaws/biases in the paper that we need to address.
So, what have they looked at here? It’s a bit confusing at first, but I think they are trying to see if there is an association between the level of D-dimer and the probability of PE on CTPA. There is good reason to do this: if d-dimer levels rise with probability, then we could use this data together with likelihood ratios to generate more meaningful results for patients (in terms of probability). However, d-dimers are usually used as a yes/no cut-off, which is probably flawed and historically set at a very low level (excellent podcast here all about this so I won’t go over old ground). So there may be something meaningful that we can use, so let’s read on.
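To make the likelihood-ratio idea concrete, here is a minimal sketch of how a d-dimer level with a known likelihood ratio would update a pre-test probability into a post-test probability. The numbers used below (a 15% pre-test probability and an LR of 2.0) are purely illustrative assumptions, not figures from this paper.

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability
    using Bayes' theorem in odds form:
      post-test odds = pre-test odds x likelihood ratio."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative only: a patient with a 15% pre-test probability of PE
# and a d-dimer band carrying a (hypothetical) LR of 2.0
p = post_test_probability(0.15, 2.0)
print(f"Post-test probability: {p:.1%}")  # ~26%
```

This is why a graded d-dimer result could, in principle, be more useful than a binary cut-off: different bands would carry different likelihood ratios and so shift the post-test probability by different amounts.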
In this retrospective (see below) review they have looked at 1136 patients who had CTPA for possible PE. They have then looked at the d-dimer levels in those patients to see if the diagnostic yield (likelihood of PE) changes with different levels of D-dimer. All well and good, you say. Fine, what are the headline figures here?
- 31.1% of patients who ended up with a CTPA had a low rise in their d-dimer (0.5–0.99 ng/L), so it’s quite a big group and that fits with our clinical practice too.
- In that group the proportion of CTPA-positive patients is SMALL, very small really at 2.6%, which is damn close to the point at which you might argue that it’s not worth investigating at all (Kline suggested 1.8%).
- That doubling the threshold in low risk patients would reduce the need for additional scans and save radiation and cost.
This sounds great, but there are a number of problems in this study. Read for yourself and make your own mind up but I would highlight the following.
- 1. The PERC rule seems to be applied retrospectively. This might explain why some people got scans despite being PERC negative. Odd though: clearly risk at the time and risk in retrospect are not the same here. I would be very cautious indeed about any use of data around the PERC risk calculation, as I think it needed to be done prospectively. It’s OK to look at the d-dimer levels, but beyond that the conclusions get a bit ambitious. I know they do talk about this in the discussion, but I’m not sure how useful it is to have in the paper. You cannot really apply a pre-test probability tool as a post-test discriminator; that just doesn’t make much sense to the team here.
- 2. The numbers after filtering down through risk, levels, follow-up etc. are pretty small. Arguably, in the group they talk about not scanning we are down to 99 patients in a disease which has a low incidence in this group. In the presence of such a low incidence the confidence intervals are wide and you should be very cautious indeed about taking that data forward into clinical practice. They say that there were no events in 113 patients within this group, but what’s the confidence interval? You can use the rule of three here to approximate the upper bound of the 95% confidence interval, and it works out at roughly 0–2.7%. Not so confident now then!
- 3. This is a single-centre study, which is fair enough in many ways as they can control the data, but it does limit the generalisability of the study. The journal club got the impression (difficult in a retrospective study, I know) that this was a litigation-averse system, as evidenced by the number of patients who appeared to be d-dimer negative and low risk (even in retrospect) yet were scanned anyway. This suggests either a methodological problem or some degree of risk aversion.
- 4. Lastly, the cohort here is of patients who had a CTPA. What we don’t know is what happened to patients who presented with typical symptoms or signs but who never got a CTPA. As EPs we want to know about patients in the ED, not about patients who have already had a scan that we now think some of them didn’t need (Eh? -Ed). As EPs we need studies that start from the point at which we would be making decisions, not this retrospective version.
So what can we take away? Anything?
Sure we can, studies such as this can really help highlight areas for future research and are helpful in questioning what we do. We have no doubt that it was a lot of work and it’s generated some great discussion. It’s not yet conclusive for us and we are not changing practice on the basis of this, but we are thinking…..and that’s no bad thing. Is there other work out there on this? Sure there is and I think it is something we will hear more about in the next year as many of us are getting worried about the increased use of radiation in the ED. I would recommend checking out this paper from Canada and co-authored by one of the greatest St.Emlyn’s alumni Dr Kerstin Hogg. They find that doubling the threshold does not really change the probability of finding important PEs on the CTPA.
Heading towards FCEM? If yes, then try these questions.
[learn_more caption=”Comment on the change in scanner types used in this study.”] This was a retrospective study, so things happened beyond the control of the authors. In 2005 the scanners changed from a 4-slice to a 16-slice model. Since CTPA is being used as the gold standard for diagnosis, this effectively means that the gold standard changed mid-study. Whilst we have some sympathy for this, it is a flaw in the study and one of the consequences of doing this retrospectively. [/learn_more]
[learn_more caption=”What are the advantages to a retrospective study design in this paper”] Quite frankly, it is convenience. Retrospective studies are a great way of getting data quickly, because the data are already there. They can therefore be very useful if you need an answer quickly, BUT they have flaws such as problems with record keeping, memory, data quality and unexpected changes over time (such as the change in scanner here). So don’t dismiss all retrospective studies — they can help — but be cautious when interpreting the findings. [/learn_more]
Last word to the Journal Club Twitterati…if you don’t agree, click on the box below and take it to Twitter.