
Showing papers by "Steve Goodacre published in 2019"


Journal ArticleDOI
TL;DR: The 4 ‘A’s Test is a short, pragmatic tool which can help improve detection rates of delirium in routine clinical care; compared with the Confusion Assessment Method, it had higher sensitivity but slightly lower specificity.
Abstract: Delirium affects > 15% of hospitalised patients but is grossly underdetected, contributing to poor care. The 4 ‘A’s Test (4AT, www.the4AT.com) is a short delirium assessment tool designed for routine use without special training. The primary objective was to assess the accuracy of the 4AT for delirium detection. The secondary objective was to compare the 4AT with another commonly used delirium assessment tool, the Confusion Assessment Method (CAM). This was a prospective diagnostic test accuracy study set in emergency departments or acute medical wards involving acute medical patients aged ≥ 70. All those without acutely life-threatening illness or coma were eligible. Patients (1) underwent reference standard delirium assessment based on DSM-IV criteria and (2) were randomised to either the index test (4AT, scores 0–12; prespecified score of > 3 considered positive) or the comparator (CAM; scored positive or negative), in a random order, using computer-generated pseudo-random numbers, stratified by study site, with block allocation. Reference standard and 4AT or CAM assessments were performed by pairs of independent raters blinded to the results of the other assessment. Eight hundred forty-three individuals were randomised: 21 withdrew, 3 lost contact, 32 had an indeterminate diagnosis, 2 had a missing outcome, and 785 were included in the analysis. Mean age was 81.4 (SD 6.4) years. 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT had an area under the receiver operating characteristic curve of 0.90 (95% CI 0.84–0.96). The 4AT had a sensitivity of 76% (95% CI 61–87%) and a specificity of 94% (95% CI 92–97%). The CAM had a sensitivity of 40% (95% CI 26–57%) and a specificity of 100% (95% CI 98–100%). The 4AT is a short, pragmatic tool which can help improve detection rates of delirium in routine clinical care. International standard randomised controlled trial number (ISRCTN) 53388093.
Date applied 30/05/2014; date assigned 02/06/2014.
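The headline accuracy figures follow from a standard 2 × 2 table calculation. A minimal sketch below uses hypothetical counts reconstructed to be consistent with the reported percentages for the 4AT arm (n = 392, 14.3% positive, sensitivity ~76%, specificity ~94%); the paper reports only the derived statistics, not the raw table, so these counts are an assumption for illustration:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Hypothetical 2x2 counts (tp + fn = 47 delirium cases, tp + fp = 56
# positive 4AT scores, total 392) chosen to match the reported figures:
acc = diagnostic_accuracy(tp=36, fn=11, tn=325, fp=20)
```

Running this gives a sensitivity near 0.77 and specificity near 0.94; the paper's exact CI method may differ slightly from the Wilson interval used here.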

111 citations


Journal ArticleDOI
TL;DR: Findings support the use of the 4AT as a rapid delirium assessment instrument; this project evaluated its usability, diagnostic accuracy and cost.
Abstract: Background Delirium is a common and serious neuropsychiatric syndrome, usually triggered by illness or drugs. It remains underdetected. One reason for this is a lack of brief, pragmatic assessment tools. The 4 ‘A’s test (Arousal, Attention, Abbreviated Mental Test – 4, Acute change) (4AT) is a screening tool designed for routine use. This project evaluated its usability, diagnostic accuracy and cost. Methods Phase 1 – the usability of the 4AT in routine practice was measured with two surveys and two qualitative studies of health-care professionals, and a review of current clinical use of the 4AT as well as its presence in guidelines and reports. Phase 2 – the 4AT’s diagnostic accuracy was assessed in newly admitted acute medical patients aged ≥ 70 years. Its performance was compared with that of the Confusion Assessment Method (CAM; a longer screening tool). The performance of individual 4AT test items was related to cognitive status, length of stay, new institutionalisation, mortality at 12 weeks and outcomes. The method used was a prospective, double-blind diagnostic test accuracy study in emergency departments or in acute general medical wards in three UK sites. Each patient underwent a reference standard delirium assessment and was also randomised to receive an assessment with either the 4AT (n = 421) or the CAM (n = 420). A health economics analysis was also conducted. Results Phase 1 found evidence that delirium awareness is increasing, but also that there is a need for education on delirium in general and on the 4AT in particular. Most users reported that the 4AT was useful, and it was in widespread use both in the UK and beyond. No changes to the 4AT were considered necessary. Phase 2 involved 785 individuals who had data for analysis; their mean age was 81.4 (standard deviation 6.4) years, 45% were male, 99% were white and 9% had a known dementia diagnosis. The 4AT (n = 392) had an area under the receiver operating characteristic curve of 0.90. 
A positive 4AT score (> 3) had a specificity of 95% [95% confidence interval (CI) 92% to 97%] and a sensitivity of 76% (95% CI 61% to 87%) for reference standard delirium. The CAM (n = 382) had a specificity of 100% (95% CI 98% to 100%) and a sensitivity of 40% (95% CI 26% to 57%) in the subset of participants whom it was possible to assess with it. Patients with positive 4AT scores had longer lengths of stay (median 5 days, interquartile range 2.0–14.0 days) than did those with negative 4AT scores (median 2 days, interquartile range 1.0–6.0 days), and they had a higher 12-week mortality rate (16.1% and 9.2%, respectively). The estimated 12-week costs of an initial inpatient stay for patients with delirium were more than double the costs of an inpatient stay for patients without delirium (e.g. in Scotland, £7559, 95% CI £7362 to £7755, vs. £4215, 95% CI £4175 to £4254). The estimated cost of false-positive cases was £4653, of false-negative cases was £8956, and of a missed diagnosis was £2067. Limitations Patients were aged ≥ 70 years and were assessed soon after they were admitted, limiting generalisability. The treatment of patients in accordance with reference standard diagnosis limited the ability to assess comparative cost-effectiveness. Conclusions These findings support the use of the 4AT as a rapid delirium assessment instrument. The 4AT has acceptable diagnostic accuracy for acute older patients aged ≥ 70 years. Future work Further research should address the real-world implementation of delirium assessment. The 4AT should be tested in other populations. Trial registration Current Controlled Trials ISRCTN53388093. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 40. See the NIHR Journals Library website for further project information.
The funder specified that any new delirium assessment tool should be compared against the CAM, but had no other role in the study design or conduct of the study.

54 citations


Journal ArticleDOI
TL;DR: This study aimed to identify clinical features associated with pulmonary embolism (PE) diagnosis and to determine the accuracy of decision rules and D-dimer for diagnosing suspected PE in pregnant/postpartum women.

42 citations


Journal ArticleDOI
TL;DR: This Personal View describes the projects that were set up, the challenges of putting them into a maintenance-only state, and the ongoing activities that maintain readiness for activation, and discusses how to plan research for a range of major incidents.
Abstract: The 2009 influenza A H1N1 pandemic was responsible for considerable global morbidity and mortality. In 2009, several research studies in the UK were rapidly funded and activated for clinical and public health actions. However, some studies were too late for their results to have an early and substantial effect on clinical care, because of the time required to call for research proposals, assess, fund, and set up the projects. In recognition of these inherent delays, a portfolio of projects was funded by the National Institute for Health Research in 2012. These studies have now been set up (ie, with relevant permissions and arrangements made for data collection) and pilot tested where relevant. All studies are now on standby awaiting activation in the event of a pandemic being declared. In this Personal View, we describe the projects that were set up, the challenges of putting these projects into a maintenance-only state, and ongoing activities to maintain readiness for activation, and discuss how to plan research for a range of major incidents.

34 citations


Journal ArticleDOI
09 Aug 2019-Trials
TL;DR: Progression criteria for an internal pilot are usually well specified but targets vary widely; red/amber/green systems for expressing the criteria have increased in popularity in recent years.
Abstract: With millions of pounds spent annually on medical research in the UK, it is important that studies spend funds wisely. Internal pilots offer the chance to stop a trial early if it becomes apparent that the study will not be able to recruit enough patients to show whether an intervention is clinically effective. This study aims to assess the use of internal pilots in individually randomised controlled trials funded by the Health Technology Assessment (HTA) programme and to summarise the progression criteria chosen in these trials. Studies were identified from reports of the HTA committees’ funding decisions from 2012 to 2016. In total, 242 trials were identified, of which 134 were eligible to be included in the audit. Protocols for the eligible studies were located on the NIHR Journals website; if protocols were not available online, study managers were contacted to provide information. Over two-thirds (72.4%) of studies said in their protocol that they would include an internal pilot phase, and 37.8% of studies without an internal pilot had done an external pilot study to assess the feasibility of the full study. A typical study with an internal pilot has a target sample size of 510 over 24 months and aims to recruit one-fifth of the total target sample size within the first one-third of the recruitment time. There has been an increase in studies adopting a three-tiered structure for their progression rules in recent years, with 61.5% (16/26) of studies using the system in 2016 compared with just 11.8% (2/17) in 2015. There was also a rise in the number of studies giving a target recruitment rate in their progression criteria: 42.3% (11/26) in 2016 compared with 35.3% (6/17) in 2015. Progression criteria for an internal pilot are usually well specified but targets vary widely. For the actual criteria, red/amber/green systems have increased in popularity in recent years.
Trials should justify the targets they have set, especially where targets are low.
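A red/amber/green progression rule of the kind audited here reduces to a threshold check on pilot recruitment. A minimal sketch, with illustrative thresholds not drawn from any specific trial in the audit:

```python
def progression_status(recruited, pilot_target, green=1.0, amber=0.5):
    """Classify internal-pilot recruitment under a red/amber/green rule.

    Illustrative thresholds: green = pilot target met in full (continue),
    amber = at least half met (continue with remedial action), red = stop.
    """
    ratio = recruited / pilot_target
    if ratio >= green:
        return "green"
    if ratio >= amber:
        return "amber"
    return "red"

# The 'typical' audited study: total target 510 over 24 months, aiming to
# recruit one-fifth (102 patients) in the first third of recruitment time.
pilot_target = 510 // 5
status = progression_status(87, pilot_target)  # e.g. 87 recruited so far
```

Here 87/102 falls between the two thresholds, so the rule would return an amber result and trigger remedial action rather than an outright stop.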

26 citations


Journal ArticleDOI
29 Apr 2019
TL;DR: The aim was to develop new ways of measuring the impact of ambulance service care by reviewing and synthesising literature on prehospital ambulance outcome measures and using consensus methods to identify measures for further development, and creating a data set linking routinely collected ambulance service, hospital and mortality data.
Abstract: Background Ambulance service quality measures have focused on response times and a small number of emergency conditions, such as cardiac arrest. These quality measures do not reflect the care for the wide range of problems that ambulance services respond to and the Prehospital Outcomes for Evidence Based Evaluation (PhOEBE) programme sought to address this. Objectives The aim was to develop new ways of measuring the impact of ambulance service care by reviewing and synthesising literature on prehospital ambulance outcome measures and using consensus methods to identify measures for further development; creating a data set linking routinely collected ambulance service, hospital and mortality data; and using the linked data to explore the development of case-mix adjustment models to assess differences or changes in processes and outcomes resulting from ambulance service care. Design A mixed-methods study using a systematic review and synthesis of performance and outcome measures reported in policy and research literature; qualitative interviews with ambulance service users; a three-stage consensus process to identify candidate indicators; the creation of a data set linking ambulance, hospital and mortality data; and statistical modelling of the linked data set to produce novel case-mix adjustment measures of ambulance service quality. Setting East Midlands and Yorkshire, England. Participants Ambulance services, patients, public, emergency care clinical academics, commissioners and policy-makers between 2011 and 2015. Interventions None. Main outcome measures Ambulance performance and quality measures. Data sources Ambulance call-and-dispatch and electronic patient report forms, Hospital Episode Statistics, accident and emergency and inpatient data, and Office for National Statistics mortality data. 
Results Seventy-two candidate measures were generated from systematic reviews in four categories: (1) ambulance service operations (n = 14), (2) clinical management of patients (n = 20), (3) impact of care on patients (n = 9) and (4) time measures (n = 29). The most common operations measure was call triage accuracy; for clinical management it was adherence to care protocols; and for patient outcome it was survival. Excluding time measures, nine measures were highly prioritised by participants taking part in the consensus event, including measures relating to pain, patient experience, accuracy of dispatch decisions and patient safety. Twenty experts participated in two Delphi rounds to refine and prioritise measures and 20 measures scored ≥ 8/9 points, which indicated good consensus. Eighteen patient and public representatives attending a consensus workshop identified six measures as important: time to definitive care, response time, reduction in pain score, calls correctly prioritised to appropriate levels of response, proportion of patients with a specific condition who are treated in accordance with established guidelines, and survival to hospital discharge for treatable emergency conditions. From this we developed six new potential indicators using the linked data set, of which four were constructed using case-mix-adjusted predictive models: (1) mean change in pain score; (2) proportion of serious emergency conditions correctly identified at the time of the 999 call; (3) response time (unadjusted); (4) proportion of decisions to leave a patient at scene that were potentially inappropriate; (5) proportion of patients transported to the emergency department by 999 emergency ambulance who did not require treatment or investigation(s); and (6) proportion of ambulance patients with a serious emergency condition who survive to admission, and to 7 days post admission. Two indicators (pain score and response times) did not need case-mix adjustment.
Among the four adjusted indicators, we found that accuracy of call triage was 61%, rate of potentially inappropriate decisions to leave at home was 5–10%, unnecessary transport to hospital was 1.7–19.2% and survival to hospital admission was 89.5–96.4% depending on Clinical Commissioning Group area. We were unable to complete a fourth objective to test the indicators in use because of delays in obtaining data. An economic analysis using indicators (4) and (5) showed that incorrect decisions resulted in higher costs. Limitations Creation of a linked data set was complex and time-consuming and data quality was variable. Construction of the indicators was also complex and revealed the effects of other services on outcome, which limits comparisons between services. Conclusions We identified and prioritised, through consensus processes, a set of potential ambulance service quality measures that reflected preferences of services and users. Together, these encompass a broad range of domains relevant to the population using the emergency ambulance service. The quality measures can be used to compare ambulance services or regions or measure performance over time if there are improvements in mechanisms for linking data across services. Future work The new measures can be used to assess different dimensions of ambulance service delivery but current data challenges prohibit routine use. There are opportunities to improve data linkage processes and to further develop, validate and simplify these measures. Funding The National Institute for Health Research Programme Grants for Applied Research programme.
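Case-mix adjustment of the kind used for the adjusted indicators can be illustrated with a logistic risk model and an observed-to-expected ratio. This is a generic sketch: the intercept, coefficients and patient features below are hypothetical, not the PhOEBE models:

```python
from math import exp

def predicted_risk(intercept, coefs, features):
    """Logistic-model probability of the outcome for one patient."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1 / (1 + exp(-z))

def observed_to_expected(outcomes, risks):
    """Observed events divided by case-mix-expected events;
    a ratio > 1 suggests worse performance than the case mix predicts."""
    return sum(outcomes) / sum(risks)

# Hypothetical model: coefficients for age (in decades) and a binary
# 'serious emergency condition' flag.
patients = [(8.1, 1), (7.4, 0), (9.0, 1)]
risks = [predicted_risk(-3.0, [0.2, 1.0], p) for p in patients]
observed = [1, 0, 1]
ratio = observed_to_expected(observed, risks)
```

Comparing this ratio across services or regions, rather than raw event rates, is what allows differences in patient mix to be discounted.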

22 citations


Journal ArticleDOI
TL;DR: In practice, pain scoring may not accurately reflect patient experience, and using pain scoring to determine the appropriateness of triage and treatment decisions reduces its validity as a measure of patient experience.

22 citations


Journal ArticleDOI
TL;DR: A systematic review identified variables predicting venous thromboembolism (VTE) in this group but found limited evidence to support the use of other risk factors within prediction models.

19 citations


Journal ArticleDOI
TL;DR: Systematic reviews were undertaken to determine the clinical effectiveness and cost-effectiveness of different strategies for providing thromboprophylaxis to people with lower-limb immobilisation caused by injury, and to identify priorities for future research.
Abstract: Background Thromboprophylaxis can reduce the risk of venous thromboembolism (VTE) during lower-limb immobilisation, but it is unclear whether or not this translates into meaningful health benefit, justifies the risk of bleeding or is cost-effective. Risk assessment models (RAMs) could select higher-risk individuals for thromboprophylaxis. Objectives To determine the clinical effectiveness and cost-effectiveness of different strategies for providing thromboprophylaxis to people with lower-limb immobilisation caused by injury and to identify priorities for future research. Data sources Ten electronic databases and research registers (MEDLINE, EMBASE, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, the Cochrane Central Register of Controlled Trials, Health Technology Assessment database, NHS Economic Evaluation Database, Science Citation Index Expanded, ClinicalTrials.gov and the International Clinical Trials Registry Platform) were searched from inception to May 2017, and this was supplemented by hand-searching reference lists and contacting experts in the field. Review methods Systematic reviews were undertaken to determine the effectiveness of pharmacological thromboprophylaxis in lower-limb immobilisation and to identify any study of risk factors or RAMs for VTE in lower-limb immobilisation. Study quality was assessed using appropriate tools. A network meta-analysis was undertaken for each outcome in the effectiveness review and the results of risk-prediction studies were presented descriptively. A modified Delphi survey was undertaken to identify risk predictors supported by expert consensus. Decision-analytic modelling was used to estimate the incremental cost per quality-adjusted life-year (QALY) gained of different thromboprophylaxis strategies from the perspectives of the NHS and Personal Social Services. Results Data from 6857 participants across 13 trials were included in the meta-analysis.
Thromboprophylaxis with low-molecular-weight heparin reduced the risk of any VTE [odds ratio (OR) 0.52, 95% credible interval (CrI) 0.37 to 0.71], clinically detected deep-vein thrombosis (DVT) (OR 0.40, 95% CrI 0.12 to 0.99) and pulmonary embolism (PE) (OR 0.17, 95% CrI 0.01 to 0.88). Thromboprophylaxis with fondaparinux (Arixtra®, Aspen Pharma Trading Ltd, Dublin, Ireland) reduced the risk of any VTE (OR 0.13, 95% CrI 0.05 to 0.30) and clinically detected DVT (OR 0.10, 95% CrI 0.01 to 0.94), but the effect on PE was inconclusive (OR 0.47, 95% CrI 0.01 to 9.54). Estimates of the risk of major bleeding with thromboprophylaxis were inconclusive owing to the small numbers of events. Fifteen studies of risk factors were identified, but only age (ORs 1.05 to 3.48) and injury type were consistently associated with VTE. Six studies of RAMs were identified, but only two reported prognostic accuracy data for VTE, based on small numbers of patients. Expert consensus was achieved for 13 risk predictors in lower-limb immobilisation due to injury. Modelling showed that thromboprophylaxis for all is effective (0.015 QALY gain, 95% CrI 0.004 to 0.029 QALYs) with a cost-effectiveness of £13,524 per QALY, compared with thromboprophylaxis for none. If risk-based strategies are included, it is potentially more cost-effective to limit thromboprophylaxis to patients with a Leiden thrombosis risk in plaster (cast) [L-TRiP(cast)] score of ≥ 9 (£20,000 per QALY threshold) or ≥ 8 (£30,000 per QALY threshold). An optimal threshold on the L-TRiP(cast) receiver operating characteristic curve would have sensitivity of 84–89% and specificity of 46–55%. Limitations Estimates of RAM prognostic accuracy are based on weak evidence. People at risk of bleeding were excluded from trials and, by implication, from modelling. Conclusions Thromboprophylaxis for lower-limb immobilisation due to injury is clinically effective and cost-effective compared with no thromboprophylaxis.
Risk-based thromboprophylaxis is potentially optimal but the prognostic accuracy of existing RAMs is uncertain. Future work Research is required to determine whether or not an appropriate RAM can accurately select higher-risk patients for thromboprophylaxis. Study registration This study is registered as PROSPERO CRD42017058688. Funding The National Institute for Health Research Health Technology Assessment programme.
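The cost-effectiveness comparison above reduces to an incremental cost-effectiveness ratio (ICER) checked against a willingness-to-pay threshold. In the sketch below the incremental cost is back-calculated from the reported QALY gain (0.015) and ICER (£13,524 per QALY), so treat it as an approximation rather than a figure from the paper:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio (£ per QALY gained)."""
    return delta_cost / delta_qaly

def adopt(delta_cost, delta_qaly, threshold=20_000):
    """Threshold rule: adopt the strategy if its ICER is below the
    willingness-to-pay threshold (£20,000-£30,000/QALY is conventional)."""
    return icer(delta_cost, delta_qaly) < threshold

# Thromboprophylaxis-for-all vs none: 0.015 QALYs gained at ~£13,524/QALY
# implies an incremental cost of roughly 0.015 * 13,524 ≈ £203 per patient.
delta_cost = 0.015 * 13_524
```

At £13,524 per QALY the strategy sits below both conventional thresholds, which is why the all-patients strategy is reported as cost-effective.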

18 citations


Journal ArticleDOI
02 Nov 2019-BMJ Open
TL;DR: This project seeks to identify the optimal ACP configuration for epilepsy; a pro-active dissemination strategy will make those considering developing or supporting an epilepsy ACP aware of the project and of opportunities to take part in it.
Abstract: Introduction Emergency department (ED) visits for epilepsy are common, costly, often clinically unnecessary and typically lead to little benefit for epilepsy management. An ‘Alternative Care Pathway’ (ACP) for epilepsy, which diverts people with epilepsy (PWE) away from ED when ‘999’ is called and leads to care elsewhere, might generate savings and facilitate improved ambulatory care. It is unknown though what features it should incorporate to make it acceptable to persons from this particularly vulnerable target population. It also needs to be National Health Service (NHS) feasible. This project seeks to identify the optimal ACP configuration. Methods and analysis Mixed-methods project comprising three linked stages. In Stage 1, NHS bodies will be surveyed on ACPs they are considering and semi-structured interviews with PWE and their carers will explore attributes of care important to them and their concerns and expectations regarding ACPs. In Stage 2, Discrete Choice Experiments (DCE) will be completed with PWE and carers to identify the relative importance placed on different care attributes under common seizure scenarios and the trade-offs people are willing to make. The uptake of different ACP configurations will be estimated. In Stage 3, two Knowledge Exchange workshops using a nominal group technique will be run. NHS managers, health professionals, commissioners and patient and carer representatives will discuss DCE results and form a consensus on which ACP configuration best meets users’ needs and is NHS feasible. Ethics and dissemination Ethical approval: NRES Committee (19/WM/0012) and King’s College London Ethics Committee (LRS-18/19-10353). Primary output will be identification of the optimal ACP configuration which should be prioritised for implementation and evaluation. A pro-active dissemination strategy will make those considering developing or supporting an epilepsy ACP aware of the project and opportunities to take part in it.
It will also ensure they are informed of its findings. Project registration number Researchregistry4723.
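DCE responses of this kind are conventionally analysed with a logit choice model, in which the probability of choosing a care option rises with the summed part-worth utilities of its attributes. A minimal sketch; the utilities below are invented for illustration, not estimates from this project:

```python
from math import exp

def choice_probabilities(utilities):
    """Multinomial logit choice probabilities over the offered options,
    given each option's deterministic (summed part-worth) utility."""
    exps = [exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for two options in a seizure scenario:
# conveyance to ED vs an Alternative Care Pathway configuration.
u_ed, u_acp = 0.2, 0.9
p_ed, p_acp = choice_probabilities([u_ed, u_acp])
```

Estimated uptake of an ACP configuration is then read off as its predicted choice probability against the status-quo option.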

7 citations


Journal ArticleDOI
TL;DR: The fidelity measures used were reliable and showed that the intervention was delivered as intended; therefore, any estimates of intervention effect will not be influenced by poor implementation fidelity.
Abstract: Purpose. To measure the fidelity with which a group seizure first aid training intervention was delivered within a pilot randomized controlled trial underway in the UK for adults with epilepsy who visit emergency departments (ED) and informal carers. Estimates of its effects, including on ED use, will be produced by the trial. Whilst hardly ever reported for trials of epilepsy interventions (only one publication on this topic exists), this study provides the information on treatment fidelity necessary to allow the trial’s estimates to be accurately interpreted. This rare worked example of how fidelity can be assessed could also provide guidance sought by neurology trialists on how to assess fidelity. Methods. 53 patients who had visited ED on ≥2 occasions in the prior year were recruited for the trial; 26 were randomized to the intervention. 7 intervention courses were delivered for them by one facilitator. Using audio recordings, treatment “adherence” and “competence” were assessed. Adherence was assessed by a checklist of the items comprising the intervention. Using computer software, competence was measured by calculating the proportion of facilitator speech during the intervention (didacticism). Interrater reliability was evaluated by two independent raters assessing each course using the measures and comparing their ratings. Results. The fidelity measures were found to be reliable. For the adherence instrument, raters agreed 96% of the time, PABAK-OS kappa 0.91. For didacticism, raters’ scores had an intraclass correlation coefficient of 0.96. In terms of treatment fidelity, not only were courses found to have been delivered with excellent adherence (88% of the intervention’s items were fully delivered) but also, as intended, they were highly interactive, with the facilitator speaking for, on average, 55% of course time. Conclusions. The fidelity measures used were reliable and showed that the intervention was delivered as intended.
Therefore, any estimates of intervention effect will not be influenced by poor implementation fidelity.
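For two raters and binary checklist items, PABAK is a one-line transform of observed agreement (PABAK = 2·p_o − 1, with chance agreement fixed at 0.5). The sketch below uses invented ratings; note that agreement of exactly 96% would give 0.92, so the reported 0.91 presumably reflects agreement just under 96% before rounding:

```python
def observed_agreement(rater_a, rater_b):
    """Proportion of checklist items on which two raters agree."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def pabak(p_o):
    """Prevalence- and bias-adjusted kappa for two raters, binary items:
    chance agreement is fixed at 0.5, so PABAK = 2 * p_o - 1."""
    return 2 * p_o - 1

# Illustrative binary adherence ratings for ten checklist items:
a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
b = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
kappa = pabak(observed_agreement(a, b))
```

Unlike Cohen's kappa, PABAK is unaffected by how rare or common delivered items are, which is why it suits adherence checklists where most items are delivered.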



Proceedings ArticleDOI
01 Apr 2019-BMJ Open
TL;DR: Involving public contributors enabled the research team to identify patient-prioritised outcomes and adjust the proposed study design to reflect these in the proposal.
Abstract: Background Involving patients and public members in research helps ensure evidence is relevant, accountable and high quality. Public and patient involvement (PPI) is required in many funding applications. We aimed to involve public contributors in designing a research bid about prehospital management for hip fracture. Method We recruited two public contributors with experience of hip fracture and prehospital care to our research team of academic, clinical and managerial partners developing the RAPID 2 proposal evaluating paramedic administration of Fascia Iliaca Compartment Block, a local anaesthetic injection into the hip. We supported them to consult with a public/patient group and identify patient priorities to inform our decisions. We held research development meetings and shared project drafts to gain views, share decisions and amend documents. Results Consultation responses suggested patient priorities after hip fracture were to return home, recover mobility and gain independence. These views guided our decisions on setting primary outcomes, which were length-of-hospital-stay and health-related quality-of-life. Their concern about the study design causing delayed access to treatment meant we decided to identify common exclusion criteria before randomisation to expedite access to pain management and reduce attrition. Public contributors also agreed patients should be offered an incentive for completing and returning questionnaires to enhance data completeness. Conclusion Involving public contributors enabled the research team to identify patient-prioritised outcomes and adjust the proposed study design to reflect these in the proposal. Public contributors will remain involved if funding is awarded to ensure patient perspectives inform all stages of research management and dissemination. Conflict of interest None. Funding PRIME Centre Wales.


Journal ArticleDOI
TL;DR: A decision-analytic model was developed to compare the management of a cohort of patients with lower limb immobilisation following injury who received pharmacological thromboprophylaxis to management without this treatment, in terms of 6-month and 5-year outcomes, and lifetime QALYs.
Abstract: Pharmacological thromboprophylaxis reduces the risk of symptomatic venous thromboembolism (VTE) in people with lower limb immobilisation due to injury (Zee et al, 2017) but can increase the risk of bleeding. Clinicians therefore need to weigh the risks and benefits of thromboprophylaxis to determine the overall benefit of treatment. Decision-analytic modelling can inform this process by simulating patient management according to alternative strategies to determine the probability of different outcomes with each strategy. Outcomes can then be valued as quality-adjusted life years (QALYs) to determine which strategy is associated with the greatest quality-adjusted life expectancy. We developed a decision-analytic model to compare the management of a cohort of patients with lower limb immobilisation following injury who received pharmacological thromboprophylaxis to management without this treatment, in terms of 6-month and 5-year outcomes, and lifetime QALYs. Full details of the methods and data sources are provided in the online appendix. Briefly, a 6-month decision tree model was used to estimate, for each strategy: the number of patients receiving thromboprophylaxis, the impact of thromboprophylaxis on VTE outcomes [pulmonary emboli (PE) and deep vein thrombosis (DVT)], and the incidence of major bleeds during either thromboprophylaxis or VTE treatment with anticoagulants. Major bleeds were divided into fatal bleeds, non-fatal intracranial haemorrhage (ICH) and other major bleeds. PEs were divided into fatal and non-fatal events. DVTs were divided first into symptomatic and asymptomatic DVTs and then into proximal and distal DVTs. Symptomatic DVTs and non-fatal PEs are assumed to result in 3 months of anticoagulant treatment. A Markov model was then used to extrapolate life-time outcomes, including overall survival and ongoing morbidity related to either bleeds or VTE.
The health states included within the Markov model capture the risk of post-thrombotic syndrome (PTS) following VTE and the risk of chronic thromboembolic pulmonary hypertension (CTEPH) following PE. The risk of PTS is dependent on whether the DVT is symptomatic and treated or asymptomatic and untreated and also whether the DVT is proximal or distal. The CTEPH state is divided according to whether patients receive medical or surgical management to allow for differential costs and survival between these groups. There is also a post-ICH state to capture ongoing morbidity following ICHs. The effectiveness of thromboprophylaxis and the risk of VTE in patients not receiving thromboprophylaxis were estimated from a systematic review of thromboprophylaxis in lower limb immobilisation (Pandor et al, in press). The relative risk of bleeding was estimated from a systematic review of thromboprophylaxis across multiple conditions (National Clinical Guideline Centre – Acute and Chronic Conditions (UK), 2010) and applied to a baseline risk of bleeding from a large primary care database with 16.4 million person-years of follow-up (Hippisley-Cox & Coupland, 2014). The data sources used to determine the probabilities of subsequent events in the decision tree and Markov models are described in the online appendix. QALYs were estimated by applying estimates of health utility (a measure of health-related quality of life on a scale of zero to one) to life expectancy after each of the events in the model. During the decision tree phase, absolute utility values were applied based on the events occurring, with age-dependent general population values applied to those not having any events. A disutility (i.e. a reduction in quality of life) was applied to patients receiving prophylaxis with low molecular weight heparin to account for the impact of regular injections and a disutility was applied during VTE treatment to reflect patients’ preferences to avoid long-term treatment.
During the Markov model phase, patients without long-term sequelae or ongoing symptoms have general population levels of utility which vary with age and those with sequelae or ongoing symptoms have utility multipliers applied which reduce their utility by a fixed proportion relative to the general population level for their age. Details of utilities and life expectancy after each of the model states are provided in the online appendix. Short and long-term clinical outcomes per 100 000 patients are presented in Table I. The model predicts that the combined rate of serious acute adverse outcomes (ICH or death from VTE or bleeding) would be very low regardless of whether thromboprophylaxis is used (around 1 in 4000). The short-term benefits of thromboprophylaxis lie in reducing the rates of non-fatal PE (225 vs. 415 per 100 000), symptomatic DVT (492 vs. 907 per 100 000) and asymptomatic DVT (3820 vs. 7052 per 100 000). These lead to longer term benefits in terms of reduced risks of PTS (1007 vs. 1859 per 100 000) and CTEPH (6 vs. 11 per 100 000), with an additional 4 patients in 100 000 surviving to 5 years compared with no thromboprophylaxis.
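The decision-tree stage of such a model reduces to an expected-value calculation over mutually exclusive outcome branches. A minimal sketch: the probabilities for non-fatal PE and symptomatic DVT are taken from the reported per-100 000 rates for the prophylaxis strategy, while the remaining branch probabilities and all QALY payoffs are invented (the model's actual inputs are in the paper's online appendix):

```python
def expected_value(branches):
    """Expected payoff of a strategy whose decision-tree branches are
    (probability, payoff) pairs covering all mutually exclusive outcomes."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * payoff for p, payoff in branches)

# Illustrative 6-month branches for the prophylaxis strategy
# (225 and 492 per 100 000 from the text; other figures invented):
prophylaxis = [
    (0.00225, 0.40),   # non-fatal PE
    (0.00492, 0.42),   # symptomatic DVT
    (0.00050, 0.10),   # major bleed (invented probability)
    (0.99233, 0.46),   # no event (residual probability)
]
e_qalys = expected_value(prophylaxis)
```

Comparing this expected value against the same calculation for the no-prophylaxis branch set is what yields the incremental QALY gain reported in the model.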