
Showing papers in "Health Technology Assessment in 2018"


Journal ArticleDOI
TL;DR: PET/CT provided a significant incremental diagnostic benefit in the diagnosis of pancreatic cancer and significantly influenced the staging and management of patients and is likely to be cost-effective at current reimbursement rates for PET/CT to the UK NHS.
Abstract: BACKGROUND: Pancreatic cancer diagnosis and staging can be difficult in 10-20% of patients. Positron emission tomography (PET)/computed tomography (CT) adds precise anatomical localisation to functional data. The use of PET/CT may add further value to the diagnosis and staging of pancreatic cancer. OBJECTIVE: To determine the incremental diagnostic accuracy and impact of PET/CT in addition to standard diagnostic work-up in patients with suspected pancreatic cancer. DESIGN: A multicentre prospective diagnostic accuracy and clinical value study of PET/CT in suspected pancreatic malignancy. PARTICIPANTS: Patients with suspected pancreatic malignancy. INTERVENTIONS: All patients to undergo PET/CT following standard diagnostic work-up. MAIN OUTCOME MEASURES: The primary outcome was the incremental diagnostic value of PET/CT in addition to standard diagnostic work-up with multidetector computed tomography (MDCT). Secondary outcomes were (1) changes in patients' diagnosis, staging and management as a result of PET/CT; (2) changes in the costs and effectiveness of patient management as a result of PET/CT; (3) the incremental diagnostic value of PET/CT in chronic pancreatitis; (4) the identification of groups of patients who would benefit most from PET/CT; and (5) the incremental diagnostic value of PET/CT in other pancreatic tumours. RESULTS: Between 2011 and 2013, 589 patients with suspected pancreatic cancer underwent MDCT and PET/CT, with 550 patients having complete data and in-range PET/CT. Sensitivity and specificity for the diagnosis of pancreatic cancer were 88.5% and 70.6%, respectively, for MDCT and 92.7% and 75.8%, respectively, for PET/CT. The maximum standardised uptake value (SUVmax.) for a pancreatic cancer diagnosis was 7.5. PET/CT demonstrated a significant improvement in relative sensitivity (p = 0.01) and specificity (p = 0.023) compared with MDCT. Incremental likelihood ratios demonstrated that PET/CT significantly improved diagnostic accuracy in all scenarios (p < 0.0002). PET/CT correctly changed the staging of pancreatic cancer in 56 patients (p = 0.001). PET/CT influenced management in 250 (45%) patients. PET/CT stopped resection in 58 (20%) patients who were due to have surgery. The benefit of PET/CT was limited in patients with chronic pancreatitis or other pancreatic tumours. PET/CT was associated with a gain in quality-adjusted life-years of 0.0157 (95% confidence interval -0.0101 to 0.0430). In the base-case model PET/CT was seen to dominate MDCT alone and is thus highly likely to be cost-effective for the UK NHS. PET/CT was seen to be most cost-effective for the subgroup of patients with suspected pancreatic cancer who were thought to be resectable. CONCLUSION: PET/CT provided a significant incremental diagnostic benefit in the diagnosis of pancreatic cancer and significantly influenced the staging and management of patients. PET/CT had limited utility in chronic pancreatitis and other pancreatic tumours. PET/CT is likely to be cost-effective at current reimbursement rates for PET/CT to the UK NHS. This was not a randomised controlled trial and therefore we do not have any information from patients who would have undergone MDCT only for comparison. In addition, there were issues in estimating costs for PET/CT. Future work should evaluate the role of PET/CT in intraductal papillary mucinous neoplasm and prognosis and response to therapy in patients with pancreatic cancer. STUDY REGISTRATION: Current Controlled Trials ISRCTN73852054 and UKCRN 8166. 
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
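As an illustrative reading of these figures (our arithmetic, not reported in the abstract), the sensitivity and specificity quoted above can be converted into positive likelihood ratios, one common way to compare the diagnostic strength of the two tests:

\[
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
\mathrm{LR}^{+}_{\mathrm{MDCT}} = \frac{0.885}{1 - 0.706} \approx 3.0, \qquad
\mathrm{LR}^{+}_{\mathrm{PET/CT}} = \frac{0.927}{1 - 0.758} \approx 3.8 .
\]

On this rough calculation a positive PET/CT raises the odds of pancreatic cancer more than a positive MDCT does, which is consistent with the reported improvement in diagnostic accuracy; the study's own incremental likelihood ratios were estimated formally rather than from these summary figures.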

108 citations


Journal ArticleDOI
TL;DR: To achieve similar levels of sensitivity to the Assessment of Different NEoplasias in the adneXa (ADNEX) model and the International Ovarian Tumour Analysis (IOTA) group's simple ultrasound rules, a very low RMI 1 decision threshold would be needed.
Abstract: Background Ovarian cancer is the sixth most common cancer in UK women and can be difficult to diagnose, particularly in the early stages. Risk-scoring can help to guide referral to specialist centres. Objectives To assess the clinical and cost-effectiveness of risk scores to guide referral decisions for women with suspected ovarian cancer in secondary care. Methods Twenty-one databases, including MEDLINE and EMBASE, were searched from inception to November 2016. Review methods followed published guidelines. Meta-analysis using weighted averages and random-effects modelling was used to estimate summary sensitivity and specificity with 95% confidence intervals (CIs). The cost-effectiveness analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different risk-scoring methods and subsequent care pathways. Modelling comprised a decision tree and a Markov model: the decision tree was used to model short-term outcomes and the Markov model was used to estimate the long-term costs and QALYs associated with treatment and progression. Results Fifty-one diagnostic cohort studies were included in the systematic review. The Risk of Ovarian Malignancy Algorithm (ROMA) score did not offer any advantage over the Risk of Malignancy Index 1 (RMI 1). Patients with borderline tumours or non-ovarian primaries appeared to account for disproportionately high numbers of false-negative, low-risk ROMA scores. (Confidential information has been removed.) To achieve similar levels of sensitivity to the Assessment of Different NEoplasias in the adneXa (ADNEX) model and the International Ovarian Tumour Analysis (IOTA) group's simple ultrasound rules, a very low RMI 1 decision threshold (25) would be needed; the summary sensitivity and specificity estimates for the RMI 1 at this threshold were 94.9% (95% CI 91.5% to 97.2%) and 51.1% (95% CI 47.0% to 55.2%), respectively. In the base-case analysis, RMI 1 (threshold of 250) was the least effective [16.926 life-years (LYs), 13.820 QALYs] and the second cheapest (£5669). The IOTA group's simple ultrasound rules (inconclusive, assumed to be malignant) were the cheapest (£5667) and the second most effective [16.954 LYs, 13.841 QALYs], dominating RMI 1. The ADNEX model (threshold of 10%), costing £5699, was the most effective (16.957 LYs, 13.843 QALYs) and, compared with the IOTA group's simple ultrasound rules, resulted in an incremental cost-effectiveness ratio of £15,304 per QALY gained. At thresholds of up to £15,304 per QALY gained, the IOTA group's simple ultrasound rules are cost-effective; the ADNEX model (threshold of 10%) is cost-effective for higher thresholds. Limitations Information on the downstream clinical consequences of risk-scoring was limited. Conclusions Both the ADNEX model and the IOTA group's simple ultrasound rules may offer increased sensitivity relative to current practice (RMI 1); that is, more women with malignant tumours would be referred to a specialist multidisciplinary team, although more women with benign tumours would also be referred. The cost-effectiveness model supports prioritisation of sensitivity over specificity. Further research is needed on the clinical consequences of risk-scoring. Study registration This study is registered as PROSPERO CRD42016053326. Funding details The National Institute for Health Research Health Technology Assessment programme.
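For readers unfamiliar with the metric, the incremental cost-effectiveness ratio (ICER) quoted above is the extra cost of a strategy divided by the extra benefit it delivers over the comparator. As a back-calculation from the rounded base-case figures in the abstract (our arithmetic, not the authors'):

\[
\mathrm{ICER} = \frac{\Delta\text{cost}}{\Delta\text{QALYs}} = \frac{\pounds 5699 - \pounds 5667}{13.843 - 13.841} = \frac{\pounds 32}{0.002} \approx \pounds 16{,}000 \text{ per QALY gained},
\]

which agrees with the reported £15,304 per QALY gained to within the rounding of the published QALY estimates.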

87 citations


Journal ArticleDOI
TL;DR: The long-term efficacy of EVAR against OR in patients deemed fit and suitable for both procedures (EVAR trial 1; EVAR-1), and against no intervention in patients unfit for OR (EVAR-2), is assessed, and the long-term significance of type II endoleak is appraised to define criteria for intervention.
Abstract: Background Short-term survival benefits of endovascular aneurysm repair (EVAR) compared with open repair (OR) of intact abdominal aortic aneurysms have been shown in randomised trials, but this early survival benefit is soon lost. Survival benefit of EVAR was unclear at follow-up to 10 years. Objective To assess the long-term efficacy of EVAR against OR in patients deemed fit and suitable for both procedures (EVAR trial 1; EVAR-1), and against no intervention in patients unfit for OR (EVAR trial 2; EVAR-2). To appraise the long-term significance of type II endoleak and define criteria for intervention. Design Two national, multicentre randomised controlled trials: EVAR-1 and EVAR-2. Setting Patients were recruited from 37 hospitals in the UK between 1 September 1999 and 31 August 2004. Participants Men and women aged ≥ 60 years with an aneurysm of ≥ 5.5 cm (as identified by computed tomography scanning), anatomically suitable and fit for OR were randomly assigned 1 : 1 to either EVAR (n = 626) or OR (n = 626) in EVAR-1 using computer-generated sequences at the trial hub. Patients considered unfit were randomly assigned to EVAR (n = 197) or no intervention (n = 207) in EVAR-2. There was no blinding. Interventions EVAR, OR or no intervention. Main outcome measures The primary end points were total and aneurysm-related mortality until mid-2015 for both trials. Secondary outcomes for EVAR-1 were reinterventions, costs and cost-effectiveness. Results In EVAR-1, over a mean of 12.7 years (standard deviation 1.5 years; maximum 15.8 years), we recorded 9.3 deaths per 100 person-years in the EVAR group and 8.9 deaths per 100 person-years in the OR group [adjusted hazard ratio (HR) 1.11, 95% confidence interval (CI) 0.97 to 1.27; p = 0.14]. At 0–6 months after randomisation, patients in the EVAR group had a lower mortality (adjusted HR 0.61, 95% CI 0.37 to 1.02 for total mortality; HR 0.47, 95% CI 0.23 to 0.93 for aneurysm-related mortality; p = 0.031), but beyond 8 years of follow-up patients in the OR group had a significantly lower mortality (adjusted HR 1.25, 95% CI 1.00 to 1.56, p = 0.048 for total mortality; HR 5.82, 95% CI 1.64 to 20.65, p = 0.0064 for aneurysm-related mortality). The increased aneurysm-related mortality in the EVAR group after 8 years was mainly attributable to secondary aneurysm sac rupture, with increased cancer mortality also observed in the EVAR group. Overall, aneurysm reintervention rates were higher in the EVAR group than in the OR group, 4.1 and 1.7 per 100 person-years, respectively (p < 0.001), with reinterventions occurring throughout follow-up. The mean difference in costs over 14 years was £3798 (95% CI £2338 to £5258). Economic modelling based on the outcomes of the EVAR-1 trial showed that the cost per quality-adjusted life-year gained over the patient's lifetime exceeds conventional thresholds used in the UK. In EVAR-2, patients died at the same rate in both groups, but there was a suggestion of lower aneurysm mortality in those who actually underwent EVAR. Type II endoleak itself is not associated with a higher rate of mortality. Limitations Devices used were implanted between 1999 and 2004; newer devices might have better results. Later follow-up imaging declined, particularly for OR patients. Methodology to capture reinterventions changed, mainly to record linkage through the Hospital Episode Statistics administrative data set, from 2009. Conclusions EVAR has an early survival benefit but an inferior late survival benefit compared with OR, which needs to be addressed by lifelong surveillance of EVAR and reintervention if necessary. EVAR does not prolong life in patients unfit for OR. Type II endoleak alone is relatively benign. Future work To find easier ways to monitor sac expansion to trigger timely reintervention. Trial registration Current Controlled Trials ISRCTN55703451. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and the results will be published in full in Health Technology Assessment; Vol. 22, No. 5. See the NIHR Journals Library website for further project information.

84 citations


Journal ArticleDOI
TL;DR: Multiparametric magnetic resonance imaging used as a triage test might allow men to avoid unnecessary TRUS-guided biopsy and improve diagnostic accuracy, and the cost-effectiveness of a mpMRI-based diagnostic pathway is estimated.
Abstract: BACKGROUND: Men with suspected prostate cancer usually undergo transrectal ultrasound (TRUS)-guided prostate biopsy. TRUS-guided biopsy can cause side effects and has relatively poor diagnostic accuracy. Multiparametric magnetic resonance imaging (mpMRI) used as a triage test might allow men to avoid unnecessary TRUS-guided biopsy and improve diagnostic accuracy. OBJECTIVES: To (1) assess the ability of mpMRI to identify men who can safely avoid unnecessary biopsy, (2) assess the ability of the mpMRI-based pathway to improve the rate of detection of clinically significant (CS) cancer compared with TRUS-guided biopsy and (3) estimate the cost-effectiveness of a mpMRI-based diagnostic pathway. DESIGN: A validating paired-cohort study and an economic evaluation using a decision-analytic model. SETTING: Eleven NHS hospitals in England. PARTICIPANTS: Men at risk of prostate cancer undergoing a first prostate biopsy. INTERVENTIONS: Participants underwent three tests: (1) mpMRI (the index test), (2) TRUS-guided biopsy (the current standard) and (3) template prostate mapping (TPM) biopsy (the reference test). MAIN OUTCOME MEASURES: Diagnostic accuracy of mpMRI, TRUS-guided biopsy and TPM-biopsy measured by sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) using primary and secondary definitions of CS cancer. The percentage of negative magnetic resonance imaging (MRI) scans was used to identify men who might be able to avoid biopsy. RESULTS: Diagnostic study - a total of 740 men were registered and 576 underwent all three tests. According to TPM-biopsy, the prevalence of any cancer was 71% [95% confidence interval (CI) 67% to 75%]. The prevalence of CS cancer according to the primary definition (a Gleason score of ≥ 4 + 3 and/or cancer core length of ≥ 6 mm) was 40% (95% CI 36% to 44%). For CS cancer, TRUS-guided biopsy showed a sensitivity of 48% (95% CI 42% to 55%), specificity of 96% (95% CI 94% to 98%), PPV of 90% (95% CI 83% to 94%) and NPV of 74% (95% CI 69% to 78%). The sensitivity of mpMRI was 93% (95% CI 88% to 96%), specificity was 41% (95% CI 36% to 46%), PPV was 51% (95% CI 46% to 56%) and NPV was 89% (95% CI 83% to 94%). A negative mpMRI scan was recorded for 158 men (27%). Of these, 17 were found to have CS cancer on TPM-biopsy. Economic evaluation - the most cost-effective strategy involved testing all men with mpMRI, followed by MRI-guided TRUS-guided biopsy in those patients with suspected CS cancer, followed by rebiopsy if CS cancer was not detected. This strategy is cost-effective at the TRUS-guided biopsy definition 2 (any Gleason pattern of ≥ 4 and/or cancer core length of ≥ 4 mm), mpMRI definition 2 (lesion volume of ≥ 0.2 ml and/or Gleason score of ≥ 3 + 4) and cut-off point 2 (likely to be benign) and detects 95% (95% CI 92% to 98%) of CS cancers. The main drivers of cost-effectiveness were the unit costs of tests, the improvement in sensitivity of MRI-guided TRUS-guided biopsy compared with blind TRUS-guided biopsy and the longer-term costs and outcomes of men with cancer. LIMITATIONS: The PROstate Magnetic resonance Imaging Study (PROMIS) was carried out in a selected group and excluded men with a prostate volume of > 100 ml, who are less likely to have cancer. The limitations in the economic modelling arise from the limited evidence on the long-term outcomes of men with prostate cancer and on the sensitivity of MRI-targeted repeat biopsy. 
CONCLUSIONS: Incorporating mpMRI into the diagnostic pathway as an initial test prior to prostate biopsy may (1) reduce the proportion of men having unnecessary biopsies, (2) improve the detection of CS prostate cancer and (3) increase the cost-effectiveness of the prostate cancer diagnostic and therapeutic pathway. The PROMIS data set will be used for future research; this is likely to include modelling prognostic factors for CS cancer, optimising MRI scan sequencing and biomarker or translational research analyses using the blood and urine samples collected. Better-quality evidence on long-term outcomes in prostate cancer under the various management strategies is required to better assess cost-effectiveness. The value-of-information analysis should be developed further to assess new research to commission. TRIAL REGISTRATION: Current Controlled Trials ISRCTN16082556 and NCT01292291. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 39. See the NIHR Journals Library website for further project information. This project was also supported and partially funded by the NIHR Biomedical Research Centre at University College London (UCL) Hospitals NHS Foundation Trust and UCL and by The Royal Marsden NHS Foundation Trust and The Institute of Cancer Research Biomedical Research Centre and was co-ordinated by the Medical Research Council's Clinical Trials Unit at UCL (grant code MC_UU_12023/28). It was sponsored by UCL. Funding for the additional collection of blood and urine samples for translational research was provided by Prostate Cancer UK.
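The reported negative predictive value of mpMRI can be reproduced from the raw counts given in the abstract (an illustrative check, not taken from the paper): of the 158 men with a negative mpMRI scan, 17 had clinically significant cancer on the reference TPM-biopsy, so

\[
\mathrm{NPV} = \frac{\text{true negatives}}{\text{all negative scans}} = \frac{158 - 17}{158} = \frac{141}{158} \approx 0.89,
\]

matching the reported NPV of 89% (95% CI 83% to 94%) and making the trade-off concrete: roughly 1 in 9 men with a negative mpMRI would still harbour clinically significant cancer if biopsy were avoided on that basis alone.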

80 citations


Journal ArticleDOI
TL;DR: The clinical effectiveness and cost-effectiveness of a bespoke exercise programme designed specifically for people with mild to moderate dementia (MMD) are estimated, together with effects on carer burden and HRQoL; the economic evaluation is expressed in terms of incremental cost per quality-adjusted life-year (QALY) gained from an NHS and Personal Social Services perspective.
Abstract: Background Approximately 670,000 people in the UK have dementia. Previous literature suggests that physical exercise could slow dementia symptom progression. Objectives To estimate the clinical effectiveness and cost-effectiveness of a bespoke exercise programme, in addition to usual care, on the cognitive impairment (primary outcome), function and health-related quality of life (HRQoL) of people with mild to moderate dementia (MMD) and carer burden and HRQoL. Design Intervention development, systematic review, multicentred, randomised controlled trial (RCT) with a parallel economic evaluation and qualitative study. Setting 15 English regions. Participants People with MMD living in the community. Intervention A 4-month moderate- to high-intensity, structured exercise programme designed specifically for people with MMD, with support to continue unsupervised physical activity thereafter. Exercises were individually prescribed and progressed, and participants were supervised in groups. The comparator was usual practice. Main outcome measures The primary outcome was the Alzheimer’s Disease Assessment Scale – Cognitive Subscale (ADAS-Cog). The secondary outcomes were function [as measured using the Bristol Activities of Daily Living Scale (BADLS)], generic HRQoL [as measured using the EuroQol-5 Dimensions, three-level version (EQ-5D-3L)], dementia-related QoL [as measured using the Quality of Life in Alzheimer’s Disease (QoL-AD) scale], behavioural symptoms [as measured using the Neuropsychiatric Inventory (NPI)], falls and fractures, physical fitness (as measured using the 6-minute walk test) and muscle strength. Carer outcomes were HRQoL (Quality of Life in Alzheimer’s Disease) (as measured using the EQ-5D-3L) and carer burden (as measured using the Zarit Burden Interview). The economic evaluation was expressed in terms of incremental cost per quality-adjusted life-year (QALY) gained from a NHS and Personal Social Services perspective. We measured health and social care use with the Client Services Receipt Inventory. Participants were followed up for 12 months. Results Between February 2013 and June 2015, 494 participants were randomised with an intentional unequal allocation ratio: 165 to usual care and 329 to the intervention. The mean age of participants was 77 years [standard deviation (SD) 7.9 years], 39% (193/494) were female and the mean baseline ADAS-Cog score was 21.5 (SD 9.0). Participants in the intervention arm achieved high compliance rates, with 65% (214/329) attending between 75% and 100% of sessions. Outcome data were obtained for 85% (418/494) of participants at 12 months, at which point a small, statistically significant negative treatment effect was found in the primary outcome, ADAS-Cog (patient reported), with a mean difference of –1.4 [95% confidence interval (CI) –2.62 to –0.17]. There were no treatment effects for any of the other secondary outcome measures for participants or carers: for the BADLS there was a mean difference of –0.6 (95% CI –2.05 to 0.78), for the EQ-5D-3L a mean difference of –0.002 (95% CI –0.04 to 0.04), for the QoL-AD scale a mean difference of 0.7 (95% CI –0.21 to 1.65) and for the NPI a mean difference of –2.1 (95% CI –4.83 to 0.65). Four serious adverse events were reported. The exercise intervention was dominated in health economic terms. Limitations In the absence of definitive guidance and rationale, we used a mixed exercise programme. Neither intervention providers nor participants could be masked to treatment allocation. 
Conclusions This is a large, well-conducted RCT, with good compliance with exercise and research procedures. A structured exercise programme did not produce any clinically meaningful benefit in function or HRQoL in people with dementia, or in carer burden. Future work Future work should concentrate on approaches other than exercise to influence cognitive impairment in dementia. Trial registration Current Controlled Trials ISRCTN32612072. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 28. See the NIHR Journals Library website for further project information. Additional funding was provided by the Oxford NIHR Biomedical Research Centre and the Oxford NIHR Collaboration for Leadership in Applied Health Research and Care.

73 citations


Journal ArticleDOI
TL;DR: LDCT screening may be clinically effective in reducing lung cancer mortality, but there is considerable uncertainty about the effect on costs and the magnitude of benefits; screening programmes are predicted to be more effective than no screening, to reduce lung cancer mortality and to result in more lung cancer diagnoses.
Abstract: Background Diagnosis of lung cancer frequently occurs in its later stages. Low-dose computed tomography (LDCT) could detect lung cancer early. Objectives To estimate the clinical effectiveness and cost-effectiveness of LDCT lung cancer screening in high-risk populations. Data sources Bibliographic sources included MEDLINE, EMBASE, Web of Science and The Cochrane Library. Methods Clinical effectiveness – a systematic review of randomised controlled trials (RCTs) comparing LDCT screening programmes with usual care (no screening) or other imaging screening programmes [such as chest X-ray (CXR)] was conducted. Bibliographic sources included MEDLINE, EMBASE, Web of Science and The Cochrane Library. Meta-analyses, including network meta-analyses, were performed. Cost-effectiveness – an independent economic model employing discrete event simulation and using a natural history model calibrated to results from a large RCT was developed. There were 12 different population eligibility criteria and four intervention frequencies [(1) single screen, (2) triple screen, (3) annual screening and (4) biennial screening] and a no-screening control arm. Results Clinical effectiveness – 12 RCTs were included, four of which currently contribute evidence on mortality. Meta-analysis of these demonstrated that LDCT, with ≤ 9.80 years of follow-up, was associated with a non-statistically significant decrease in lung cancer mortality (pooled relative risk 0.94, 95% confidence interval 0.74 to 1.19). The findings also showed that LDCT screening demonstrated a non-statistically significant increase in all-cause mortality. Given the considerable heterogeneity detected between studies for both outcomes, the results should be treated with caution. Network meta-analysis, including six RCTs, was performed to assess the relative clinical effectiveness of LDCT, CXR and usual care. The results showed that LDCT was ranked as the best screening strategy in terms of lung cancer mortality reduction. CXR had a 99.7% probability of being the worst intervention and usual care was ranked second. Cost-effectiveness – screening programmes are predicted to be more effective than no screening, reduce lung cancer mortality and result in more lung cancer diagnoses. Screening programmes also increase costs. Screening for lung cancer is unlikely to be cost-effective at a threshold of £20,000/quality-adjusted life-year (QALY), but may be cost-effective at a threshold of £30,000/QALY. The incremental cost-effectiveness ratio for a single screen in smokers aged 60–75 years with at least a 3% risk of lung cancer is £28,169 per QALY. Sensitivity and scenario analyses were conducted. Screening was cost-effective at a threshold of £20,000/QALY in only a minority of analyses. Limitations Clinical effectiveness – the largest of the included RCTs compared LDCT with CXR screening rather than no screening. Cost-effectiveness – a representative cost to the NHS of lung cancer has not been recently estimated according to key variables such as stage at diagnosis. Certain costs associated with running a screening programme have not been included. Conclusions LDCT screening may be clinically effective in reducing lung cancer mortality, but there is considerable uncertainty. There is evidence that a single round of screening could be considered cost-effective at conventional thresholds, but there is significant uncertainty about the effect on costs and the magnitude of benefits.
Future work Clinical effectiveness and cost-effectiveness estimates should be updated with the anticipated results from several ongoing RCTs [particularly the NEderlands Leuvens Longkanker Screenings ONderzoek (NELSON) screening trial]. Study registration This study is registered as PROSPERO CRD42016048530. Funding The National Institute for Health Research Health Technology Assessment programme.
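The cost-effectiveness statements above follow the standard decision rule used throughout these assessments: a strategy with a positive incremental QALY gain is considered cost-effective when its ICER falls below the willingness-to-pay threshold λ, or equivalently when its incremental net monetary benefit (NMB) is positive (a general formulation, not specific to this study's model):

\[
\mathrm{NMB} = \lambda \cdot \Delta\mathrm{QALYs} - \Delta\text{cost} > 0 \iff \mathrm{ICER} = \frac{\Delta\text{cost}}{\Delta\mathrm{QALYs}} < \lambda .
\]

On this rule the quoted ICER of £28,169 per QALY for a single screen in 60–75-year-old smokers with at least a 3% lung cancer risk sits above the £20,000 threshold but below £30,000, which is why the abstract describes screening as unlikely to be cost-effective at £20,000/QALY yet possibly cost-effective at £30,000/QALY.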

69 citations


Journal ArticleDOI
TL;DR: Roux-en-Y gastric bypass was costly to deliver, but it was the most cost-effective intervention, and most WMPs were cost-effective compared with current population obesity trends.
Abstract: Background Adults with severe obesity [body mass index (BMI) of ≥ 35 kg/m2] have an increased risk of comorbidities and psychological, social and economic consequences. Objectives Systematically review bariatric surgery, weight-management programmes (WMPs) and orlistat pharmacotherapy for adults with severe obesity, and evaluate the feasibility, acceptability, clinical effectiveness and cost-effectiveness of treatment. Data sources Electronic databases including MEDLINE, EMBASE, PsycINFO, the Cochrane Central Register of Controlled Trials and the NHS Economic Evaluation Database were searched (last searched in May 2017). Review methods Four systematic reviews evaluated clinical effectiveness, cost-effectiveness and qualitative evidence for adults with a BMI of ≥ 35 kg/m2. Data from meta-analyses populated a microsimulation model predicting costs, outcomes and cost-effectiveness of Roux-en-Y gastric bypass (RYGB) surgery and the most effective lifestyle WMPs over a 30-year time horizon from a NHS perspective, compared with current UK population obesity trends. Interventions were cost-effective if the additional cost of achieving a quality-adjusted life-year is < £20,000–30,000. Results A total of 131 randomised controlled trials (RCTs), 26 UK studies, 33 qualitative studies and 46 cost-effectiveness studies were included. From RCTs, RYGB produced the greatest long-term weight change [–20.23 kg, 95% confidence interval (CI) –23.75 to –16.71 kg, at 60 months]. WMPs with very low-calorie diets (VLCDs) produced the greatest weight loss at 12 months compared with no WMPs. Adding a VLCD to a WMP gave an additional mean weight change of –4.41 kg (95% CI –5.93 to –2.88 kg) at 12 months. The intensive Look AHEAD WMP produced mean long-term weight loss of 6% in people with type 2 diabetes mellitus (at a median of 9.6 years). The microsimulation model found that WMPs were generally cost-effective compared with population obesity trends. Long-term WMP weight regain was very uncertain, apart from Look AHEAD. The addition of a VLCD to a WMP was not cost-effective compared with a WMP alone. RYGB was cost-effective compared with no surgery and WMPs, but the model did not replicate long-term cost savings found in previous studies. Qualitative data suggested that participants could be attracted to take part in WMPs through endorsement by their health-care provider or through perceiving innovative activities, with WMPs being delivered to groups. Features improving long-term weight loss included having group support, additional behavioural support, a physical activity programme to attend, a prescribed calorie diet or a calorie deficit. Limitations Reviewed studies often lacked generalisability to UK settings in terms of participants and resources for implementation, and usually lacked long-term follow-up (particularly for complications for surgery), leading to unrealistic weight regain assumptions. The views of potential and actual users of services were rarely reported to contribute to service design. This study may have failed to identify unpublished UK evaluations. Dual, blinded numerical data extraction was not undertaken. Conclusions Roux-en-Y gastric bypass was costly to deliver, but it was the most cost-effective intervention. Adding a VLCD to a WMP was not cost-effective compared with a WMP alone. Most WMPs were cost-effective compared with current population obesity trends. Future work Improved reporting of WMPs is needed to allow replication, translation and further research. 
Qualitative research is needed with adults who are potential users of, or who fail to engage with or drop out from, WMPs. RCTs and economic evaluations in UK settings (e.g. Tier 3, commercial programmes or primary care) should evaluate VLCDs with long-term follow-up (≥ 5 years). Decision models should incorporate relevant costs, disease states and evidence-based weight regain assumptions. Study registration This study is registered as PROSPERO CRD42016040190. Funding The National Institute for Health Research Health Technology Assessment programme. The Health Services Research Unit and Health Economics Research Unit are core funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorate.

63 citations


Journal ArticleDOI
TL;DR: This study met most feasibility objectives and found that it is feasible to assess the cost-effectiveness of VR, and RTW was most strongly related to social participation and work self-efficacy.
Abstract: Background Up to 160,000 people incur traumatic brain injury (TBI) each year in the UK. TBI can have profound effects on many areas of human functioning, including participation in work. There is limited evidence of the clinical effectiveness and cost-effectiveness of vocational rehabilitation (VR) after injury to promote early return to work (RTW) following TBI. Objective To assess the feasibility of a definitive, multicentre, randomised controlled trial (RCT) of the clinical effectiveness and cost-effectiveness of early, specialist VR plus usual care (UC) compared with UC alone on work retention 12 months post TBI. Design A multicentre, feasibility, parallel-group RCT with a feasibility economic evaluation and an embedded mixed-methods process evaluation. Randomisation was by remote computer-generated allocation. Setting Three NHS major trauma centres (MTCs) in England. Participants Adults with TBI admitted for > 48 hours and working or studying prior to injury. Interventions Early specialist TBI VR delivered by occupational therapists (OTs) in the community using a case co-ordination model. Main outcome measures Self-reported RTW 12 months post randomisation, mood, functional ability, participation, work self-efficacy, quality of life and work ability. Feasibility outcomes included recruitment and retention rates. Follow-up was by postal questionnaires in two centres and face to face in one centre. Those collecting data were blind to treatment allocation. Results Out of 102 target participants, 78 were recruited (39 randomised to each arm), representing 39% of those eligible and 5% of those screened. Approximately 2.2 patients were recruited per site per month. Of those, 56% had mild injuries, 18% had moderate injuries and 26% had severe injuries. A total of 32 out of 45 nominated carers were recruited. A total of 52 out of 78 (67%) TBI participants responded at 12 months (UC, n = 23; intervention, n = 29), completing 90% of the work questions; 21 out of 23 (91%) UC respondents and 20 out of 29 (69%) intervention participants returned to work at 12 months. Two participants disengaged from the intervention. Face-to-face follow-up was no more effective than postal follow-up. RTW was most strongly related to social participation and work self-efficacy. It is feasible to assess the cost-effectiveness of VR. Intervention was delivered as intended and valued by participants. Factors likely to affect a definitive trial include deploying experienced OTs, no clear TBI definition or TBI registers, and repatriation of more severe TBI from MTCs, affecting recruitment of those most likely to benefit/least likely to drop out. Limitations Target recruitment was not reached, but mechanisms to achieve this in future studies were identified. Retention was lower than expected, particularly in UC, potentially biasing estimates of the 12-month RTW rate. Conclusions This study met most feasibility objectives. The intervention was delivered with high fidelity. When objectives were not met, strategies to ensure feasibility of a full trial were identified. Future work should test two-stage recruitment and include resources to recruit from ‘spokes’. A broader measure covering work ability, self-efficacy and participation may be a more sensitive outcome.

61 citations


Journal ArticleDOI
TL;DR: Disability, rate of deep infection, quality of life and resource use in patients with severe open fracture of the lower limb treated with negative-pressure wound therapy (NPWT) versus standard wound management after the first surgical debridement of the wound are assessed.
Abstract: Background Open fractures of the lower limb occur when a broken bone penetrates the skin and is exposed to the outside environment. These are life-changing injuries. The risk of deep infection may be as high as 27%. The type of dressing applied after surgical debridement could potentially reduce the risk of infection in the open-fracture wound. Objectives To assess the disability, rate of deep infection, quality of life and resource use in patients with severe open fracture of the lower limb treated with negative-pressure wound therapy (NPWT) versus standard wound management after the first surgical debridement of the wound. Design A pragmatic, multicentre randomised controlled trial. Setting Twenty-four specialist trauma hospitals in the UK Major Trauma Network. Participants A total of 460 patients aged ≥ 16 years with a severe open fracture of the lower limb were recruited from July 2012 through to December 2015. Patients were excluded if they presented more than 72 hours after their injury or were unable to complete questionnaires. Interventions Negative-pressure wound therapy (n = 226) where an ‘open-cell’ solid foam or gauze was placed over the surface of the wound and connected to a suction pump which created a partial vacuum over the dressing versus standard dressings not involving negative pressure (n = 234). Main outcome measures Disability Rating Index (DRI) – a score of 0 (no disability) to 100 (completely disabled) at 12 months was the primary outcome measure, with a minimal clinically important difference of 8 points. The secondary outcomes were deep infection, quality of life and resource use collected at 3, 6, 9 and 12 months post randomisation. Results There was no evidence of a difference in the patients’ DRI at 12 months. The mean DRI in the NPWT group was 45.5 points [standard deviation (SD) 28.0 points] versus 42.4 points (SD 24.2 points) in the standard dressing group, giving a difference of –3.9 points (95% confidence interval –8.9 to 1.2 points) in favour of standard dressings (p = 0.132). There was no difference in health-related quality of life (HRQoL) and no difference in the number of surgical site infections or other complications at any point in the 12 months after surgery. NPWT did not reduce the cost of treatment and it was associated with a low probability of cost-effectiveness. Limitations Owing to the emergency nature of the interventions, we anticipated that some patients who were randomised into the trial would subsequently be unable or unwilling to take part. Such post-randomisation withdrawal of patients could have posed a risk to the external validity of the trial. However, the great majority of these patients (85%) were found to be ineligible after randomisation. Therefore, we can be confident that the patients who took part were representative of the population with severe open fractures of the lower limb. Conclusions Contrary to the existing literature and current clinical guidelines, NPWT dressings do not provide a clinical or an economic benefit for patients with an open fracture of the lower limb. Future work Future work should investigate alternative strategies to reduce the incidence of infection and improve outcomes for patients with an open fracture of the lower limb. Two specific areas of potentially great benefit are (1) the use of topical antibiotic preparations in the open-fracture wound and (2) the role of orthopaedic implants with antimicrobial coatings when fixing the associated fracture.
Trial registration Current Controlled Trials ISRCTN33756652 and UKCRN Portfolio ID 11783. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 73. See the NIHR Journals Library website for further project information.

49 citations


Journal ArticleDOI
TL;DR: A systematic review of treatments for dry AMD and STGD was carried out, and a number of promising research topics were identified, including drug treatments, stem cells, new forms of laser treatment, and implantable intraocular lens telescopes.
Abstract: Background Age-related macular degeneration (AMD) is the leading cause of visual loss in older people. Advanced AMD takes two forms, neovascular (wet) and atrophic (dry). Stargardt disease (STGD) is the commonest form of inherited macular dystrophy. Objective To carry out a systematic review of treatments for dry AMD and STGD, and to identify emerging treatments where future NIHR research might be commissioned. Design Systematic review. Methods We searched MEDLINE, EMBASE, Web of Science and The Cochrane Library from 2005 to 13 July 2017 for reviews, journal articles and meeting abstracts. We looked for studies of interventions that aim to preserve or restore vision in people with dry AMD or STGD. The most important outcomes are those that matter to patients: visual acuity (VA), contrast sensitivity, reading speed, ability to drive, adverse effects of treatment, quality of life, progression of disease and patient preference. However, visual loss is a late event and intermediate predictors of future decline were accepted if there was good evidence that they are strong predictors of subsequent visual outcomes. These include changes detectable by investigation, but not necessarily noticed by people with AMD or STGD. ClinicalTrials.gov, the World Health Organization search portal and the UK Clinical Trials gateway were searched for ongoing and recently completed clinical trials. Results The titles and abstracts of 7948 articles were screened for inclusion. The full texts of 398 articles were obtained for further screening and checking of references and 112 articles were included in the final report. Overall, there were disappointingly few good-quality studies (including of sufficient size and duration) reporting useful outcomes, particularly in STGD. However, we did identify a number of promising research topics, including drug treatments, stem cells, new forms of laser treatment, and implantable intraocular lens telescopes. In many cases, research is already under way, funded by industry or governments. Limitations In AMD, the main limitation came from the poor quality of much of the evidence. Many studies used VA as their main outcome despite not having sufficient duration to observe changes. The evidence on treatments for STGD is sparse. Most studies tested interventions with no comparison group, were far too short term, and the quality of some studies was poor. Future work We think that the topics on which the Health Technology Assessment (HTA) and Efficacy Mechanism and Evaluation (EME) programmes might consider commissioning primary research are in STGD, a HTA trial of fenretinide (ReVision Therapeutics, San Diego, CA, USA), a visual cycle inhibitor, and EME research into the value of lutein and zeaxanthin supplements, using short-term measures of retinal function. In AMD, we suggest trials of fenretinide and of a potent statin. There is epidemiological evidence from the USA that the drug levodopa, used for treating Parkinson’s disease, may reduce the incidence of AMD. We suggest that similar research should be carried out using the large general practice databases in the UK. Ideally, future research should be at earlier stages in both diseases, before vision is impaired, using sensitive measures of macular function. This may require early detection of AMD by screening. Study registration This study is registered as PROSPERO CRD42016038708. Funding The National Institute for Health Research HTA programme.

44 citations


Journal ArticleDOI
TL;DR: A prospective, multicentre, open-label feasibility study to inform the design and conduct of a future RCT of PA using high-intensity focused ultrasound (HIFU) versus radical prostatectomy (RP) for intermediate-risk PCa and to test and optimise methods of data capture.
Abstract: Background Prostate cancer (PCa) is the most common cancer in men in the UK. Patients with intermediate-risk, clinically localised disease are offered radical treatments such as surgery or radiotherapy, which can result in severe side effects. A number of alternative partial ablation (PA) technologies that may reduce treatment burden are available; however the comparative effectiveness of these techniques has never been evaluated in a randomised controlled trial (RCT). Objectives To assess the feasibility of a RCT of PA using high-intensity focused ultrasound (HIFU) versus radical prostatectomy (RP) for intermediate-risk PCa and to test and optimise methods of data capture. Design We carried out a prospective, multicentre, open-label feasibility study to inform the design and conduct of a future RCT, involving a QuinteT Recruitment Intervention (QRI) to understand barriers to participation. Setting Five NHS hospitals in England. Participants Men with unilateral, intermediate-risk, clinically localised PCa. Interventions Radical prostatectomy compared with HIFU. Primary outcome measure The randomisation of 80 men. Secondary outcome measures Findings of the QRI and assessment of data capture methods. Results Eighty-seven patients consented to participate by 31 March 2017 and 82 men were randomised by 4 May 2017 (41 men to the RP arm and 41 to the HIFU arm). The QRI was conducted in two iterative phases: phase I identified a number of barriers to recruitment, including organisational challenges, lack of recruiter equipoise and difficulties communicating with patients about the study, and phase II comprised the development and delivery of tailored strategies to optimise recruitment, including group training, individual feedback and 'tips' documents. At the time of data extraction, on 10 October 2017, treatment data were available for 71 patients. Patient characteristics were similar at baseline and the rate of return of all clinical case report forms (CRFs) was 95%; the return rate of the patient-reported outcome measures (PROMs) questionnaire pack was 90.5%. Centres with specific long-standing expertise in offering HIFU as a routine NHS treatment option had lower recruitment rates (Basingstoke and Southampton) - with University College Hospital failing to enrol any participants - than centres offering HIFU in the trial context only. Conclusions Randomisation of men to a RCT comparing PA with radical treatments of the prostate is feasible. The QRI provided insights into the complexities of recruiting to this surgical trial and has highlighted a number of key lessons that are likely to be important if the study progresses to a main trial. A full RCT comparing clinical effectiveness, cost-effectiveness and quality-of-life outcomes between radical treatments and PA is now warranted. Future work Men recruited to the feasibility study will be followed up for 36 months in accordance with the protocol. We will design a full RCT, taking into account the lessons learnt from this study. CRFs will be streamlined, and the length and frequency of PROMs and resource use diaries will be reviewed to reduce the burden on patients and research nurses and to optimise data completeness. Trial registration Current Controlled Trials ISRCTN99760303. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 52. 
See the NIHR Journals Library website for further project information.

Journal ArticleDOI
TL;DR: The evidence for an effect of routine use of cell salvage during caesarean section on rates of donor blood transfusion was modest, and salvage was associated with increased FMH, which emphasises the need for adherence to guidance on anti-D prophylaxis.
Abstract: Background Caesarean section is associated with blood loss and maternal morbidity. Excessive blood loss requires transfusion of donor (allogeneic) blood, which is a finite resource. Cell salvage returns blood lost during surgery to the mother. It may avoid the need for donor blood transfusion, but reliable evidence of its effects is lacking. Objectives To determine if routine use of cell salvage during caesarean section in mothers at risk of haemorrhage reduces the rates of blood transfusion and postpartum maternal morbidity, and is cost-effective, in comparison with standard practice without routine salvage use. Design Individually randomised controlled, multicentre trial with cost-effectiveness analysis. Treatment was not blinded. Setting A total of 26 UK obstetric units. Participants Out of 3054 women recruited between June 2013 and April 2016, we randomly assigned 3028 women at risk of haemorrhage to cell salvage or routine care. Randomisation was stratified using random permuted blocks of variable sizes. Of these, 1672 had emergency and 1356 had elective caesareans. We excluded women for whom cell salvage or donor blood transfusion was contraindicated. Interventions Cell salvage (intervention) versus routine care without salvage (control). In the intervention group, salvage was set up in 95.6% of the women and, of these, 50.8% had salvaged blood returned. In the control group, 3.9% had salvage deployed. Main outcome measures Primary – donor blood transfusion. Secondary – units of donor blood transfused, time to mobilisation, length of hospitalisation, mean fall in haemoglobin, fetomaternal haemorrhage (FMH) measured by Kleihauer–Betke test, and maternal fatigue. Analyses were adjusted for stratification factors and other factors that were believed to be prognostic a priori. Cost-effectiveness outcomes – costs of resources and service provision taking the UK NHS perspective. Results We analysed 1498 and 1492 participants in the intervention and control groups, respectively. Overall, the transfusion rate was 2.5% in the intervention group and 3.5% in the control group [adjusted odds ratio (OR) 0.65, 95% confidence interval (CI) 0.42 to 1.01; p = 0.056]. In a planned subgroup analysis, the transfusion rate was 3.0% in the intervention group and 4.6% in the control group among emergency caesareans (adjusted OR 0.58, 95% CI 0.34 to 0.99), whereas it was 1.8% in the intervention group and 2.2% in the control group among elective caesareans (adjusted OR 0.83, 95% CI 0.38 to 1.83) (interaction p = 0.46, suggesting that the difference in effect between subgroups was not statistically significant). Secondary outcomes did not differ between groups, except for FMH, which was higher under salvage in rhesus D (RhD)-negative women with RhD-positive babies (25.6% vs. 10.5%, adjusted OR 5.63, 95% CI 1.43 to 22.14; p = 0.013). No case of amniotic fluid embolism was observed. The additional cost of routine cell salvage during caesarean was estimated, on average, at £8110 per donor blood transfusion avoided. Conclusions The modest evidence for an effect of routine use of cell salvage during caesarean section on rates of donor blood transfusion was associated with increased FMH, which emphasises the need for adherence to guidance on anti-D prophylaxis. We are unable to comment on long-term antibody sensitisation effects. Based on the findings of this trial, cell salvage is unlikely to be considered cost-effective. 
Future work Research into risk of alloimmunisation among women exposed to cell salvage is needed. Trial registration Current Controlled Trials ISRCTN66118656. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 2. See the NIHR Journals Library website for further project information.
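As a rough unadjusted check on the headline result (our own arithmetic; the trial's analysis adjusted for stratification and prognostic factors), the raw transfusion rates of 2.5% and 3.5% correspond to an odds ratio of

\[
\mathrm{OR} = \frac{0.025 / 0.975}{0.035 / 0.965} \approx \frac{0.0256}{0.0363} \approx 0.71,
\]

broadly in line with the adjusted OR of 0.65 (95% CI 0.42 to 1.01) reported above; the covariate adjustment, rather than the raw rates, accounts for the small difference.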

Journal ArticleDOI
TL;DR: In participants following TBI and with an ICP of > 20 mmHg, titrated therapeutic hypothermia successfully reduced ICP but led to a higher mortality rate and worse functional outcome, which favoured standard care alone.
Abstract: Background Traumatic brain injury (TBI) is a major cause of disability and death in young adults worldwide. It results in around 1 million hospital admissions annually in the European Union (EU), causes a majority of the 50,000 deaths from road traffic accidents and leaves a further ≈10,000 people severely disabled. Objective The Eurotherm3235 Trial was a pragmatic trial examining the effectiveness of hypothermia (32–35 °C) to reduce raised intracranial pressure (ICP) following severe TBI and reduce morbidity and mortality 6 months after TBI. Design An international, multicentre, randomised controlled trial. Setting Specialist neurological critical care units. Participants We included adult participants following TBI. Eligible patients had ICP monitoring in place with an ICP of > 20 mmHg despite first-line treatments. Participants were randomised to receive standard care with the addition of hypothermia (32–35 °C) or standard care alone. Online randomisation and the use of an electronic case report form (CRF) ensured concealment of random treatment allocation. It was not possible to blind local investigators to allocation as it was obvious which participants were receiving hypothermia. We collected information on how well the participant had recovered 6 months after injury. This information was provided by the participant themselves (if they were able) and/or by a person close to them, by completing the Glasgow Outcome Scale – Extended (GOSE) questionnaire. Telephone follow-up was carried out by a blinded independent clinician. Interventions The primary intervention to reduce ICP in the hypothermia group after randomisation was induction of hypothermia. Core temperature was initially reduced to 35 °C and decreased incrementally to a lower limit of 32 °C if necessary to maintain ICP at 20 mmHg. Titrated therapeutic hypothermia successfully reduced ICP but led to a higher mortality rate and worse functional outcome. Limitations Inability to blind treatment allocation as it was obvious which participants were randomised to the hypothermia group; there was biased recording of serious adverse events (SAEs) in the hypothermia group. We now believe that more adequately powered clinical trials of common therapies used to reduce ICP, such as hypertonic therapy, barbiturates and hyperventilation, are required to assess their potential benefits and risks to patients. Trial registration Current Controlled Trials ISRCTN34555414. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 45. See the NIHR Journals Library website for further project information. The European Society of Intensive Care Medicine supported the pilot phase of this trial.

Journal ArticleDOI
TL;DR: Diagnostic tests for AKI in the ICU offer the potential to improve patient care and add value to the NHS, but cost-effectiveness remains highly uncertain.
Abstract: Background: Acute kidney injury (AKI) is highly prevalent in hospital inpatient populations, leading to significant mortality and morbidity, reduced quality of life and high short- and long-term health-care costs for the NHS. New diagnostic tests may offer an earlier diagnosis or improved care, but evidence of benefit to patients and of value to the NHS is required before national adoption. Objectives: To evaluate the potential for AKI in vitro diagnostic tests to enhance the NHS care of patients admitted to the intensive care unit (ICU) and identify an efficient supporting research strategy. Data sources: We searched ClinicalTrials.gov, The Cochrane Library databases, Embase, Health Management Information Consortium, International Clinical Trials Registry Platform, MEDLINE, metaRegister of Current Controlled Trials, PubMed and Web of Science databases from their inception dates until September 2014 (review 1), November 2015 (review 2) and July 2015 (economic model). Details of databases used for each review and coverage dates are listed in the main report. Review methods: The AKI-Diagnostics project included horizon scanning, systematic reviewing, meta-analysis of sensitivity and specificity, appraisal of analytical validity, care pathway analysis, model-based lifetime economic evaluation from a UK NHS perspective and value of information (VOI) analysis. Results: The horizon-scanning search identified 152 potential tests and biomarkers. Three tests, Nephrocheck® (Astute Medical, Inc., San Diego, CA, USA), NGAL and cystatin C, were subjected to detailed review. The meta-analysis was limited by variable reporting standards, study quality and heterogeneity, but sensitivity was between 0.54 and 0.92 and specificity was between 0.49 and 0.95 depending on the test. A bespoke critical appraisal framework demonstrated that analytical validity was also poorly reported in many instances. In the economic model the incremental cost-effectiveness ratios ranged from £11,476 to £19,324 per quality-adjusted life-year (QALY), with a probability of cost-effectiveness between 48% and 54% when tests were compared with current standard care. Limitations: The major limitation in the evidence on tests was the heterogeneity between studies in the definitions of AKI and the timing of testing. Conclusions: Diagnostic tests for AKI in the ICU offer the potential to improve patient care and add value to the NHS, but cost-effectiveness remains highly uncertain. Further research should focus on the mechanisms by which a new test might change current care processes in the ICU and the subsequent cost and QALY implications. The VOI analysis suggested that further observational research to better define the prevalence of AKI developing in the ICU would be worthwhile. A formal randomised controlled trial of biomarker use linked to a standardised AKI care pathway is necessary to provide definitive evidence on whether or not adoption of tests by the NHS would be of value. Study registration: The systematic review within this study is registered as PROSPERO CRD42014013919. Funding: The National Institute for Health Research Health Technology Assessment programme.

Journal ArticleDOI
TL;DR: The STEPWISE intervention was neither clinically effective nor cost-effective; the trial results suggest that lifestyle programmes for people with schizophrenia may need greater resourcing than for other populations, and that interventions shown to be effective in other populations are not necessarily effective in people with schizophrenia.
Abstract: BACKGROUND: Obesity is twice as common in people with schizophrenia as in the general population. The National Institute for Health and Care Excellence guidance recommends that people with psychosis or schizophrenia, especially those taking antipsychotics, be offered a healthy eating and physical activity programme by their mental health care provider. There is insufficient evidence to inform how these lifestyle services should be commissioned. OBJECTIVES: To develop a lifestyle intervention for people with first episode psychosis or schizophrenia and to evaluate its clinical effectiveness, cost-effectiveness, delivery and acceptability. DESIGN: A two-arm, analyst-blind, parallel-group, randomised controlled trial, with a 1 : 1 allocation ratio, using web-based randomisation; a mixed-methods process evaluation, including qualitative case study methods and logic modelling; and a cost-utility analysis. SETTING: Ten community mental health trusts in England. PARTICIPANTS: People with first episode psychosis, schizophrenia or schizoaffective disorder. INTERVENTIONS: Intervention group: (1) four 2.5-hour group-based structured lifestyle self-management education sessions, 1 week apart; (2) multimodal fortnightly support contacts; (3) three 2.5-hour group booster sessions at 3-monthly intervals, post core sessions. Control group: usual care assessed through a longitudinal survey. All participants received standard written lifestyle information. MAIN OUTCOME MEASURES: The primary outcome was change in weight (kg) at 12 months post randomisation. The key secondary outcomes measured at 3 and 12 months included self-reported nutrition (measured with the Dietary Instrument for Nutrition Education questionnaire), objectively measured physical activity measured by accelerometry [GENEActiv (Activinsights, Kimbolton, UK)], biomedical measures, adverse events, patient-reported outcome measures and a health economic assessment. RESULTS: The trial recruited 414 participants (intervention arm: 208 participants; usual care: 206 participants) between 10 March 2015 and 31 March 2016. A total of 341 participants (81.6%) completed the trial. A total of 412 participants were analysed. After 12 months, weight change did not differ between the groups (mean difference 0.0 kg, 95% confidence interval -1.59 to 1.67 kg; p = 0.964); physical activity, dietary intake and biochemical measures were unchanged. Glycated haemoglobin, fasting glucose and lipid profile were unchanged by the intervention. Quality of life, psychiatric symptoms and illness perception did not change during the trial. There were three deaths, but none was related to the intervention. Most adverse events were expected and related to the psychiatric illness. The process evaluation showed that the intervention was acceptable, with participants valuing the opportunity to interact with others facing similar challenges. Session feedback indicated that 87.2% of participants agreed that the sessions had met their needs. Some indicated the desire for more ongoing support. Professionals felt that the intervention was under-resourced and questioned the long-term sustainability within current NHS settings. Professionals would have preferred greater access to participants' behaviour data to tailor the intervention better. The incremental cost-effectiveness ratio from the health-care perspective is £246,921 per quality-adjusted life-year (QALY) gained and the incremental cost-effectiveness ratio from the societal perspective is £367,543 per QALY gained. 
CONCLUSIONS: Despite the challenges of undertaking clinical research in this population, the trial successfully recruited and retained participants, indicating a high level of interest in weight management interventions; however, the STEPWISE intervention was neither clinically effective nor cost-effective. Further research will be required to define how overweight and obesity in people with schizophrenia should be managed. The trial results suggest that lifestyle programmes for people with schizophrenia may need greater resourcing than for other populations, and interventions that have been shown to be effective in other populations, such as people with diabetes mellitus, are not necessarily effective in people with schizophrenia. TRIAL REGISTRATION: Current Controlled Trials ISRCTN19447796. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 65. See the NIHR Journals Library website for further project information.

Journal ArticleDOI
TL;DR: In pregnant women with epilepsy on AEDs, there is no evidence to suggest that regular monitoring of serum AED levels, with dosage adjustment guided by the results, improves seizure control or affects maternal or fetal outcomes.
Abstract: Background Pregnant women with epilepsy on antiepileptic drugs (AEDs) may experience a reduction in serum AED levels. This has the potential to worsen seizure control. Objective To determine if, in pregnant women with epilepsy on AEDs, additional therapeutic drug monitoring reduces seizure deterioration compared with clinical features monitoring after a reduction in serum AED levels. Design A double-blind, randomised trial nested within a cohort study was conducted and a qualitative study of acceptability of the two strategies was undertaken. Stratified block randomisation with a 1 : 1 allocation method was carried out. Setting Fifty obstetric and epilepsy clinics in secondary and tertiary care units in the UK. Participants Pregnant women with epilepsy on one or more of the following AEDs: lamotrigine, carbamazepine, phenytoin or levetiracetam. Women with a ≥ 25% decrease in serum AED level from baseline were randomised to therapeutic drug monitoring or clinical features monitoring strategies. Interventions In the therapeutic drug monitoring group, clinicians had access to clinical findings and monthly serum AED levels to guide AED dosage adjustment for seizure control. In the clinical features monitoring group, AED dosage adjustment was based only on clinical features. Main outcome measures Primary outcome – seizure deterioration, defined as time to first seizure and to all seizures after randomisation per woman until 6 weeks post partum. Secondary outcomes – pregnancy complications in mother and offspring, maternal quality of life, seizure rates in cohorts with stable serum AED level, AED dose exposure and adverse events related to AEDs. Analysis Analysis of time to first and to all seizures after randomisation was performed using a Cox proportional hazards model, and multivariate failure time analysis by the Andersen–Gill model. The effects were reported as hazard ratios (HRs) with 95% confidence intervals (CIs). Secondary outcomes were reported as mean differences (MDs) or odds ratios. Results A total of 130 women were randomised to the therapeutic drug monitoring group and 133 to the clinical features monitoring group; 294 women did not have a reduction in serum AED level. A total of 127 women in the therapeutic drug monitoring group and 130 women in the clinical features monitoring group (98% of complete data) were included in the primary analysis. There were no significant differences in the time to first seizure (HR 0.82, 95% CI 0.55 to 1.2) or timing of all seizures after randomisation (HR 1.3, 95% CI 0.7 to 2.5) between the two trial groups. In comparison with the group with stable serum AED levels, there were no significant increases in seizures in the clinical features monitoring group (odds ratio 0.93, 95% CI 0.56 to 1.5) or the therapeutic drug monitoring group (odds ratio 0.93, 95% CI 0.56 to 1.5) associated with a reduction in serum AED levels. Maternal and neonatal outcomes were similar in both groups, except for higher cord blood levels of lamotrigine (MD 0.55 mg/l, 95% CI 0.11 to 1 mg/l) or levetiracetam (MD 7.8 mg/l, 95% CI 0.86 to 14.8 mg/l) in the therapeutic drug monitoring group than in the clinical features monitoring group. There were no differences between the groups in daily AED exposure or quality of life. An increase in exposure to lamotrigine, levetiracetam and carbamazepine significantly increased the cord blood levels of the AEDs, but not maternal or fetal complications.
Women with epilepsy perceived the need for weighing up their increased vulnerability to seizures during pregnancy against the side effects of AEDs. Limitations Fewer women than the original target were recruited. Conclusion There is no evidence to suggest that regular monitoring of serum AED levels in pregnancy improves seizure control or affects maternal or fetal outcomes. Future work recommendations Further evaluation of the risks of seizure deterioration for various threshold levels of reduction in AEDs and the long-term neurodevelopment of infants born to mothers in both randomised groups is needed. An individualised prediction model will help to identify those women who need close monitoring in pregnancy.
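For readers unfamiliar with the time-to-event analysis named above, the sketch below fits a Cox proportional hazards model for time to first seizure after randomisation. It is a minimal illustration, not the trial's analysis code: the data frame, column names and covariates are hypothetical, and the Andersen–Gill analysis of repeated seizures would additionally require counting-process (start–stop) data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis data set: one row per randomised woman.
df = pd.DataFrame({
    "weeks_to_first_seizure": [6.0, 14.5, 20.0, 3.2, 18.0, 11.0, 25.0, 9.5],
    "seizure_observed":       [1,   0,    1,    1,   1,    0,    0,    1],   # 0 = censored
    "tdm_group":              [1,   1,    0,    0,   1,    0,    1,    0],   # 1 = therapeutic drug monitoring
    "baseline_seizure_freq":  [2,   0,    4,    3,   1,    2,    0,    5],
})

# Fit a Cox proportional hazards model; remaining columns act as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_first_seizure", event_col="seizure_observed")

# Prints coefficients, hazard ratios and 95% confidence intervals.
cph.print_summary()
```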

Journal ArticleDOI
TL;DR: Evidence on neurostimulation for chronic PLP from seven randomised controlled trials and other studies was not robust, while results from longitudinal epidemiology studies suggested that pre-amputation pain and early PLP intensity are good predictors of chronic PLP.
Abstract: BACKGROUND: Although many treatments exist for phantom limb pain (PLP), the evidence supporting them is limited and there are no guidelines for PLP management. Brain and spinal cord neurostimulation therapies are targeted at patients with chronic PLP but have yet to be systematically reviewed. OBJECTIVE: To determine which types of brain and spinal stimulation therapy appear to be the best for treating chronic PLP. DESIGN: Systematic reviews of effectiveness and epidemiology studies, and a survey of NHS practice. POPULATION: All patients with PLP. INTERVENTIONS: Invasive interventions - deep brain stimulation (DBS), motor cortex stimulation (MCS), spinal cord stimulation (SCS) and dorsal root ganglion (DRG) stimulation. Non-invasive interventions - repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS). MAIN OUTCOME MEASURES: Phantom limb pain and quality of life. DATA SOURCES: Twelve databases (including MEDLINE and EMBASE) and clinical trial registries were searched in May 2017, with no date limits applied. REVIEW METHODS: Two reviewers screened titles and abstracts and full texts. Data extraction and quality assessments were undertaken by one reviewer and checked by another. A questionnaire was distributed to clinicians via established e-mail lists of two relevant clinical societies. All results were presented narratively with accompanying tables. RESULTS: Seven randomised controlled trials (RCTs), 30 non-comparative group studies, 18 case reports and 21 epidemiology studies were included. Results from a good-quality RCT suggested short-term benefits of rTMS in reducing PLP, but not in reducing anxiety or depression. Small randomised trials of tDCS suggested the possibility of modest, short-term reductions in PLP. No RCTs of invasive therapies were identified. Results from small, non-comparative group studies suggested that, although many patients benefited from short-term pain reduction, far fewer maintained their benefits. Most studies had important methodological or reporting limitations and few studies reported quality-of-life data. The evidence on prognostic factors for the development of chronic PLP from the longitudinal studies also had important limitations. The results from these studies suggested that pre-amputation pain and early PLP intensity are good predictors of chronic PLP. Results from the cross-sectional studies suggested that the proportion of patients with severe chronic PLP is between around 30% and 40% of the chronic PLP population, and that around one-quarter of chronic PLP patients find their PLP to be either moderately or severely limiting or bothersome. There were 37 responses to the questionnaire distributed to clinicians. SCS and DRG stimulation are frequently used in the NHS but the prevalence of use of DBS and MCS was low. Most responders considered SCS and DRG stimulation to be at least sometimes effective. Neurosurgeons had mixed views on DBS, but most considered MCS to rarely be effective. Most clinicians thought that a randomised trial design could be successfully used to study neurostimulation therapies. LIMITATION: There was a lack of robust research studies. CONCLUSIONS: Currently available studies of the efficacy, effectiveness and safety of neurostimulation treatments do not provide robust, reliable results. Therefore, it is uncertain which treatments are best for chronic PLP. 
FUTURE WORK: Randomised crossover trials, randomised N-of-1 trials and prospective registry trials are viable study designs for future research. STUDY REGISTRATION: The study is registered as PROSPERO CRD42017065387. FUNDING: The National Institute for Health Research Health Technology Assessment programme.

Journal ArticleDOI
TL;DR: A pragmatic trial to test the clinical effectiveness and assess the economic value of the following strategies: personalised OHA versus routine OHA, 12-monthly PI compared with 6-monthly PI, and no PI compared with 6-monthly PI.
Abstract: Background Periodontal disease is preventable but remains the most common oral disease worldwide, with major health and economic implications. Stakeholders lack reliable evidence of the relative clinical effectiveness and cost-effectiveness of different types of oral hygiene advice (OHA) and the optimal frequency of periodontal instrumentation (PI). Objectives To test clinical effectiveness and assess the economic value of the following strategies: personalised OHA versus routine OHA, 12-monthly PI (scale and polish) compared with 6-monthly PI, and no PI compared with 6-monthly PI. Design Multicentre, pragmatic split-plot, randomised open trial with a cluster factorial design and blinded outcome evaluation with 3 years' follow-up and a within-trial cost–benefit analysis. NHS and participant costs were combined with benefits [willingness to pay (WTP)] estimated from a discrete choice experiment (DCE). Setting UK dental practices. Participants Adult dentate NHS patients, regular attenders, with Basic Periodontal Examination (BPE) scores of 0, 1, 2 or 3. Intervention Practices were randomised to provide routine or personalised OHA. Within each practice, participants were randomised to the following groups: no PI, 12-monthly PI or 6-monthly PI (current practice). Main outcome measures Clinical – gingival inflammation/bleeding on probing at the gingival margin (3 years). Patient – oral hygiene self-efficacy (3 years). Economic – net benefits (mean WTP minus mean costs). Results A total of 63 dental practices and 1877 participants were recruited. The mean number of teeth and percentage of bleeding sites were 24 and 33%, respectively. Two-thirds of participants had BPE scores of ≤ 2. Under intention-to-treat analysis, there was no evidence of a difference in gingival inflammation/bleeding between the 6-monthly PI group and the no-PI group [difference 0.87%, 95% confidence interval (CI) –1.6% to 3.3%; p = 0.481] or between the 6-monthly PI group and the 12-monthly PI group (difference 0.11%, 95% CI –2.3% to 2.5%; p = 0.929). There was also no evidence of a difference between personalised and routine OHA (difference –2.5%, 95% CI –8.3% to 3.3%; p = 0.393). There was no evidence of a difference in self-efficacy between the 6-monthly PI group and the no-PI group (difference –0.028, 95% CI –0.119 to 0.063; p = 0.543) and no evidence of a clinically important difference between the 6-monthly PI group and the 12-monthly PI group (difference –0.097, 95% CI –0.188 to –0.006; p = 0.037). Compared with standard care, no PI with personalised OHA had the greatest cost savings: NHS perspective –£15 (95% CI –£34 to £4) and participant perspective –£64 (95% CI –£112 to –£16). The DCE shows that the general population values these services greatly. Personalised OHA with 6-monthly PI had the greatest incremental net benefit [£48 (95% CI £22 to £74)]. Sensitivity analyses did not change conclusions. Limitations Being a pragmatic trial, we did not deny PIs to the no-PI group; there was clear separation in the mean number of PIs between groups. Conclusions There was no additional benefit from scheduling 6-monthly or 12-monthly PIs over not providing this treatment unless desired or recommended, and no difference between the types of OHA delivered in terms of gingival inflammation/bleeding or patient-centred outcomes. However, participants valued, and were willing to pay for, both interventions, with greater financial value placed on PI than on OHA. Future work Assess the clinical effectiveness and cost-effectiveness of providing multifaceted periodontal care packages in primary dental care for those with periodontitis. Trial registration Current Controlled Trials ISRCTN56465715. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 38. See the NIHR Journals Library website for further project information.

Journal ArticleDOI
TL;DR: The economic evaluation showed that, when dialysis costs were included in the model, the probability of bioimpedance monitoring being cost-effective ranged from 13% to 26% at a willingness-to-pay threshold of £20,000 per quality-adjusted life-year gained; with dialysis costs excluded, the corresponding probabilities ranged from 61% to 67%.
Abstract: Background Chronic kidney disease (CKD) is a long-term condition requiring treatment such as conservative management, kidney transplantation or dialysis. To optimise the volume of fluid removed during dialysis (to avoid underhydration or overhydration), people are assigned a 'target weight', which is commonly assessed using clinical methods, such as weight gain between dialysis sessions, pre- and post-dialysis blood pressure and patient-reported symptoms. However, these methods are not precise, and measurement devices based on bioimpedance technology are increasingly used in dialysis centres. Current evidence on the role of bioimpedance devices for fluid management in people with CKD receiving dialysis is limited. Objectives To evaluate the clinical effectiveness and cost-effectiveness of multiple-frequency bioimpedance devices versus standard clinical assessment for fluid management in people with CKD receiving dialysis. Data sources We searched major electronic databases [e.g. MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, Science Citation Index and Cochrane Central Register of Controlled Trials (CENTRAL)] conference abstracts and ongoing studies. There were no date restrictions. Searches were undertaken between June and October 2016. Review methods Evidence was considered from randomised controlled trials (RCTs) comparing fluid management by multiple-frequency bioimpedance devices and standard clinical assessment in people receiving dialysis, and non-randomised studies evaluating the use of the devices for fluid management in people receiving dialysis. One reviewer extracted data and assessed the risk of bias of included studies. A second reviewer cross-checked the extracted data. Standard meta-analyses techniques were used to combine results from included studies. A Markov model was developed to assess the cost-effectiveness of the interventions. Results Five RCTs (with 904 adult participants) and eight non-randomised studies (with 4915 adult participants) assessing the use of the Body Composition Monitor [(BCM) Fresenius Medical Care, Bad Homburg vor der Hohe, Germany] were included. Both absolute overhydration and relative overhydration were significantly lower in patients evaluated using BCM measurements than for those evaluated using standard clinical methods [weighted mean difference -0.44, 95% confidence interval (CI) -0.72 to -0.15, p = 0.003, I2 = 49%; and weighted mean difference -1.84, 95% CI -3.65 to -0.03; p = 0.05, I2 = 52%, respectively]. Pooled effects of bioimpedance monitoring on systolic blood pressure (SBP) (mean difference -2.46 mmHg, 95% CI -5.07 to 0.15 mmHg; p = 0.06, I2 = 0%), arterial stiffness (mean difference -1.18, 95% CI -3.14 to 0.78; p = 0.24, I2 = 92%) and mortality (hazard ratio = 0.689, 95% CI 0.23 to 2.08; p = 0.51) were not statistically significant. The economic evaluation showed that, when dialysis costs were included in the model, the probability of bioimpedance monitoring being cost-effective ranged from 13% to 26% at a willingness-to-pay threshold of £20,000 per quality-adjusted life-year gained. With dialysis costs excluded, the corresponding probabilities of cost-effectiveness ranged from 61% to 67%. Limitations Lack of evidence on clinically relevant outcomes, children receiving dialysis, and any multifrequency bioimpedance devices, other than the BCM. 
Conclusions BCM used in addition to clinical assessment may lower overhydration and potentially improve intermediate outcomes, such as SBP, but effects on mortality have not been demonstrated. If dialysis costs are not considered, the incremental cost-effectiveness ratio falls below £20,000, with modest effects on mortality and/or hospitalisation rates. The current findings are not generalisable to paediatric populations nor across other multifrequency bioimpedance devices. Future work Services that routinely use the BCM should report clinically relevant intermediate and long-term outcomes before and after introduction of the device to extend the current evidence base. Study registration This study is registered as PROSPERO CRD42016041785. Funding The National Institute for Health Research Health Technology Assessment programme.
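The pooled weighted mean differences quoted above come from standard meta-analysis techniques applied to the included trials. The sketch below illustrates one widely used approach, DerSimonian–Laird random-effects pooling of study-level mean differences; the five study estimates and standard errors are placeholders, not data from the review.

```python
import numpy as np

# Hypothetical study-level mean differences in overhydration (litres) and their standard errors.
md = np.array([-0.30, -0.55, -0.20, -0.70, -0.45])
se = np.array([ 0.20,  0.25,  0.15,  0.30,  0.22])

# Fixed-effect (inverse-variance) weights and Cochran's Q heterogeneity statistic.
w = 1.0 / se**2
md_fixed = np.sum(w * md) / np.sum(w)
q = np.sum(w * (md - md_fixed) ** 2)
k = len(md)

# DerSimonian-Laird estimate of between-study variance tau^2 and the I^2 statistic.
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (se**2 + tau2)
md_re = np.sum(w_re * md) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (md_re - 1.96 * se_re, md_re + 1.96 * se_re)

print(f"Pooled MD = {md_re:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), I^2 = {i2:.0f}%")
```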

Journal ArticleDOI
TL;DR: The effectiveness, cost-effectiveness and acceptability of a pedometer-based walking intervention in inactive adults, delivered postally or through dedicated practice nurse physical activity (PA) consultations, are assessed.
Abstract: BACKGROUND: Guidelines recommend walking to increase moderate to vigorous physical activity (MVPA) for health benefits. OBJECTIVES: To assess the effectiveness, cost-effectiveness and acceptability of a pedometer-based walking intervention in inactive adults, delivered postally or through dedicated practice nurse physical activity (PA) consultations. DESIGN: Parallel three-arm trial, cluster randomised by household. SETTING: Seven London-based general practices. PARTICIPANTS: A total of 11,015 people without PA contraindications, aged 45-75 years, randomly selected from practices, were invited. A total of 6399 people were non-responders, and 548 people self-reporting achieving PA guidelines were excluded. A total of 1023 people from 922 households were randomised to usual care (n = 338), postal intervention (n = 339) or nurse support (n = 346). The recruitment rate was 10% (1023/10,467). A total of 956 participants (93%) provided outcome data. INTERVENTIONS: Intervention groups received pedometers, 12-week walking programmes advising participants to gradually add '3000 steps in 30 minutes' most days weekly and PA diaries. The nurse group was offered three dedicated PA consultations. MAIN OUTCOME MEASURES: The primary and main secondary outcomes were changes from baseline to 12 months in average daily step counts and time in MVPA (in ≥ 10-minute bouts), respectively, from 7-day accelerometry. Individual resource-use data informed the within-trial economic evaluation and the Markov model for simulating long-term cost-effectiveness. Qualitative evaluations assessed nurse and participant views. A 3-year follow-up was conducted. RESULTS: Baseline average daily step count was 7479 [standard deviation (SD) 2671], average minutes per week in MVPA bouts was 94 minutes (SD 102 minutes) for those randomised. PA increased significantly at 12 months in both intervention groups compared with the control group, with no difference between interventions; additional steps per day were 642 steps [95% confidence interval (CI) 329 to 955 steps] for the postal group and 677 steps (95% CI 365 to 989 steps) for nurse support, and additional MVPA in bouts (minutes per week) was 33 minutes per week (95% CI 17 to 49 minutes per week) for the postal group and 35 minutes per week (95% CI 19 to 51 minutes per week) for nurse support. Intervention groups showed no increase in adverse events. Incremental cost per step was 19p and £3.61 per minute in a ≥ 10-minute MVPA bout for nurse support, whereas the postal group took more steps and cost less than the control group. The postal group had a 50% chance of being cost-effective at a £20,000 per quality-adjusted life-year (QALY) threshold within 1 year and had both lower costs [-£11M (95% CI -£12M to -£10M) per 100,000 population] and more QALYs [759 QALYs gained (95% CI 400 to 1247 QALYs)] than the nurse support and control groups in the long term. Participants and nurses found the interventions acceptable and enjoyable. Three-year follow-up data showed persistent intervention effects (nurse support plus postal vs. control) on steps per day [648 steps (95% CI 272 to 1024 steps)] and MVPA bouts [26 minutes per week (95% CI 8 to 44 minutes per week)]. LIMITATIONS: The 10% recruitment level, with lower levels in Asian and socioeconomically deprived participants, limits the generalisability of the findings. Assessors were unmasked to the group. 
CONCLUSIONS: A primary care pedometer-based walking intervention in 45- to 75-year-olds increased 12-month step counts by around one-tenth, and time in MVPA bouts by around one-third, with similar effects for the nurse support and postal groups, and persistent 3-year effects. The postal intervention provides cost-effective, long-term quality-of-life benefits. A primary care pedometer intervention delivered by post could help address the public health physical inactivity challenge. FUTURE WORK: Exploring different recruitment strategies to increase uptake. Integrating the Pedometer And Consultation Evaluation-UP (PACE-UP) trial with evolving PA monitoring technologies. TRIAL REGISTRATION: Current Controlled Trials ISRCTN98538934. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 37. See the NIHR Journals Library website for further project information.
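The long-term cost-effectiveness results above rely on a Markov model. The sketch below is a deliberately simplified illustration of how such a cohort model accumulates discounted costs and QALYs over annual cycles; the three health states, transition probabilities, costs and utilities are entirely hypothetical and bear no relation to the PACE-UP model's inputs.

```python
import numpy as np

# Hypothetical three-state cohort model: 0 = active, 1 = inactive, 2 = dead.
# Rows are the current state, columns the state at the next annual cycle.
P = np.array([
    [0.85, 0.13, 0.02],
    [0.10, 0.86, 0.04],
    [0.00, 0.00, 1.00],
])
annual_cost = np.array([100.0, 250.0, 0.0])   # per-person cost in each state (pounds)
utility     = np.array([0.85, 0.78, 0.0])     # quality-of-life weight in each state
discount_rate = 0.035                         # annual discount rate for costs and QALYs

state = np.array([1.0, 0.0, 0.0])             # cohort starts fully in the active state
total_cost = total_qaly = 0.0
for year in range(30):                        # 30-year time horizon
    disc = 1.0 / (1.0 + discount_rate) ** year
    total_cost += disc * state @ annual_cost
    total_qaly += disc * state @ utility
    state = state @ P                         # advance the cohort one annual cycle

print(f"Discounted cost per person: £{total_cost:,.0f}; QALYs: {total_qaly:.2f}")
```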

Journal ArticleDOI
TL;DR: A randomised controlled trial to assess whether or not a strategy of endovascular repair, compared with open repair, reduces 30-day and mid-term mortality, and to assess costs and cost-effectiveness, among patients with a suspected ruptured AAA.
Abstract: BACKGROUND: Ruptured abdominal aortic aneurysm (AAA) is a common vascular emergency. The mortality from emergency endovascular repair may be much lower than the 40-50% reported for open surgery. OBJECTIVE: To assess whether or not a strategy of endovascular repair compared with open repair reduces 30-day and mid-term mortality (including costs and cost-effectiveness) among patients with a suspected ruptured AAA. DESIGN: Randomised controlled trial, with computer-generated telephone randomisation of participants in a 1 : 1 ratio, using variable block size, stratified by centre and without blinding. SETTING: Vascular centres in the UK (n = 29) and Canada (n = 1) between 2009 and 2013. PARTICIPANTS: A total of 613 eligible participants (480 men) with a ruptured aneurysm, clinically diagnosed at the trial centre. INTERVENTIONS: A total of 316 participants were randomised to the endovascular strategy group (immediate computerised tomography followed by endovascular repair if anatomically suitable or, if not suitable, open repair) and 297 were randomised to the open repair group (computerised tomography optional). MAIN OUTCOME MEASURES: The primary outcome measure was 30-day mortality, with 30-day reinterventions, costs and disposal as early secondary outcome measures. Later outcome measures included 1- and 3-year mortality, reinterventions, quality of life (QoL) and cost-effectiveness. RESULTS: The 30-day mortality was 35.4% in the endovascular strategy group and 37.4% in the open repair group [odds ratio (OR) 0.92, 95% confidence interval (CI) 0.66 to 1.28; p = 0.62, and, after adjustment for age, sex and Hardman index, OR 0.94, 95% CI 0.67 to 1.33]. The endovascular strategy appeared to be more effective in women than in men (interaction test p = 0.02). More discharges in the endovascular strategy group (94%) than in the open repair group (77%) were directly to home (p < 0.001). Average 30-day costs were similar between groups, with the mean difference in costs being -£1186 (95% CI -£2997 to £625), favouring the endovascular strategy group. After 1 year, survival and reintervention rates were similar in the two groups, QoL (at both 3 and 12 months) was higher in the endovascular strategy group and the mean cost difference was -£2329 (95% CI -£5489 to £922). At 3 years, mortality was 48% and 56% in the endovascular strategy group and open repair group, respectively (OR 0.73, 95% CI 0.53 to 1.00; p = 0.053), with a stronger benefit for the endovascular strategy in the subgroup of 502 participants in whom repair was started for a proven rupture (OR 0.62, 95% CI 0.43 to 0.89; p = 0.009), whereas aneurysm-related reintervention rates were non-significantly higher in this group. At 3 years, considering all participants, there was a mean difference of 0.174 quality-adjusted life-years (QALYs) (95% CI 0.002 to 0.353 QALYs) and, among the endovascular strategy group, a cost difference of -£2605 (95% CI -£5966 to £702), leading to 88% of estimates in the cost-effectiveness plane being in the quadrant showing the endovascular strategy to be 'dominant'. LIMITATIONS: Because of the pragmatic design of this trial, 33 participants in the endovascular strategy group and 26 in the open repair group breached randomisation allocation. CONCLUSIONS: The endovascular strategy was not associated with a significant reduction in either 30-day mortality or cost but was associated with faster participant recovery. 
By 3 years, the endovascular strategy showed a survival and QALY gain and was highly likely to be cost-effective. Future research could include improving resuscitation for older persons with circulatory collapse, the impact of local anaesthesia and emergency consent procedures. TRIAL REGISTRATION: Current Controlled Trials ISRCTN48334791 and NCT00746122. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 31. See the NIHR Journals Library website for further project information.

Journal ArticleDOI
TL;DR: There is some evidence of benefit for melatonin compared with placebo, but the degree of benefit is uncertain; future work should include the development of a core outcome set and further evaluation of the clinical effectiveness and cost-effectiveness of pharmacological and non-pharmacological interventions.
Abstract: BACKGROUND: There is uncertainty about the most appropriate ways to manage non-respiratory sleep disturbances in children with neurodisabilities (NDs). OBJECTIVE: To assess the clinical effectiveness and safety of NHS-relevant pharmacological and non-pharmacological interventions to manage sleep disturbance in children and young people with NDs, who have non-respiratory sleep disturbance. DATA SOURCES: Sixteen databases, including The Cochrane Central Register of Controlled Trials, EMBASE and MEDLINE, were searched up to February 2017, and grey literature searches and hand-searches were conducted. REVIEW METHODS: For pharmacological interventions, only randomised controlled trials (RCTs) were included. For non-pharmacological interventions, RCTs, non-randomised controlled studies and before-and-after studies were included. Data were extracted and quality assessed by two researchers. Meta-analysis and narrative synthesis were undertaken. Data on parents' and children's experiences of receiving a sleep disturbance intervention were collated into themes and reported narratively. RESULTS: Thirty-nine studies were included. Sample sizes ranged from 5 to 244 participants. Thirteen RCTs evaluated oral melatonin. Twenty-six studies (12 RCTs and 14 before-and-after studies) evaluated non-pharmacological interventions, including comprehensive parent-directed tailored (n = 9) and non-tailored (n = 8) interventions, non-comprehensive parent-directed interventions (n = 2) and other non-pharmacological interventions (n = 7). All but one study were reported as having a high or unclear risk of bias, and studies were generally poorly reported. There was a statistically significant increase in diary-reported total sleep time (TST), which was the most commonly reported outcome for melatonin compared with placebo [pooled mean difference 29.6 minutes, 95% confidence interval (CI) 6.9 to 52.4 minutes; p = 0.01]; however, statistical heterogeneity was extremely high (97%). For the single melatonin study that was rated as having a low risk of bias, the mean increase in TST was 13.2 minutes and the lower CI included the possibility of reduced sleep time (95% CI -13.3 to 39.7 minutes). There was mixed evidence about the clinical effectiveness of the non-pharmacological interventions. Sixteen studies included interventions that investigated the feasibility, acceptability and/or parent or clinician views of sleep disturbance interventions. The majority of these studies reported the 'family experience' of non-pharmacological interventions. LIMITATIONS: Planned subgroup analysis was possible in only a small number of melatonin trials. CONCLUSIONS: There is some evidence of benefit for melatonin compared with placebo, but the degree of benefit is uncertain. There are various types of non-pharmacological interventions for managing sleep disturbance; however, clinical and methodological heterogeneity, few RCTs, a lack of standardised outcome measures and risk of bias means that it is not possible to draw conclusions with regard to their effectiveness. Future work should include the development of a core outcome, further evaluation of the clinical effectiveness and cost-effectiveness of pharmacological and non-pharmacological interventions and research exploring the prevention of, and methods for identifying, sleep disturbance. Research mapping current practices and exploring families' understanding of sleep disturbance and their experiences of obtaining help may facilitate service provision development. 
STUDY REGISTRATION: This study is registered as PROSPERO CRD42016034067. FUNDING: The National Institute for Health Research Health Technology Assessment programme.

Journal ArticleDOI
TL;DR: Randomised controlled trial evidence indicates that QI interventions incorporating specific BCT components are associated with meaningful improvements in DRS attendance compared with usual care.
Abstract: BACKGROUND: Diabetic retinopathy screening (DRS) is effective but uptake is suboptimal. OBJECTIVES: To determine the effectiveness of quality improvement (QI) interventions for DRS attendance; describe the interventions in terms of QI components and behaviour change techniques (BCTs); identify theoretical determinants of attendance; investigate coherence between BCTs identified in interventions and determinants of attendance; and determine the cost-effectiveness of QI components and BCTs for improving DRS. DATA SOURCES AND REVIEW METHODS: Phase 1 - systematic review of randomised controlled trials (RCTs) evaluating interventions to increase DRS attendance (The Cochrane Library, MEDLINE, EMBASE and trials registers to February 2017) and coding intervention content to classify QI components and BCTs. Phase 2 - review of studies reporting factors influencing attendance, coded to theoretical domains (MEDLINE, EMBASE, PsycINFO and sources of grey literature to March 2016). Phase 3 - mapping BCTs (phase 1) to theoretical domains (phase 2) and an economic evaluation to determine the cost-effectiveness of BCTs or QI components. RESULTS: Phase 1 - 7277 studies were screened, of which 66 RCTs were included in the review. Interventions were multifaceted and targeted patients, health-care professionals (HCPs) or health-care systems. Overall, interventions increased DRS attendance by 12% [risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14] compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted and general QI interventions were effective, particularly when baseline attendance levels were low. All commonly used QI components and BCTs were associated with significant improvements, particularly in those with poor attendance. Higher effect estimates were observed in subgroup analyses for the BCTs of 'goal setting (outcome, i.e. consequences)' (RD 0.26, 95% CI 0.16 to 0.36) and 'feedback on outcomes (consequences) of behaviour' (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients and of 'restructuring the social environment' (RD 0.19, 95% CI 0.12 to 0.26) and 'credible source' (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting HCPs. Phase 2 - 3457 studies were screened, of which 65 non-randomised studies were included in the review. The following theoretical domains were likely to influence attendance: 'environmental context and resources', 'social influences', 'knowledge', 'memory, attention and decision processes', 'beliefs about consequences' and 'emotions'. Phase 3 - mapping identified that interventions included BCTs targeting important barriers to/enablers of DRS attendance. However, BCTs targeting emotional factors around DRS were under-represented. QI components were unlikely to be cost-effective whereas BCTs with a high probability (≥ 0.975) of being cost-effective at a societal willingness-to-pay threshold of £20,000 per QALY included 'goal-setting (outcome)', 'feedback on outcomes of behaviour', 'social support' and 'information about health consequences'. Cost-effectiveness increased when DRS attendance was lower and with longer screening intervals. LIMITATIONS: Quality improvement/BCT coding was dependent on descriptions of intervention content in primary sources; methods for the identification of coherence of BCTs require improvement. 
CONCLUSIONS: Randomised controlled trial evidence indicates that QI interventions incorporating specific BCT components are associated with meaningful improvements in DRS attendance compared with usual care. Interventions generally used appropriate BCTs that target important barriers to screening attendance, with a high probability of being cost-effective. Research is needed to optimise BCTs or BCT combinations that seek to improve DRS attendance at an acceptable cost. BCTs targeting emotional factors represent a missed opportunity to improve attendance and should be tested in future studies. STUDY REGISTRATION: This study is registered as PROSPERO CRD42016044157 and PROSPERO CRD42016032990. FUNDING: The National Institute for Health Research Health Technology Assessment programme.

Journal ArticleDOI
TL;DR: For people with epilepsy and persistent seizures, a 2-day self-management education course [SMILE (UK)] was cost-saving but did not improve quality of life at 12 months or reduce anxiety or depression symptoms.
Abstract: Background Epilepsy is a common neurological condition resulting in recurrent seizures. Research evidence in long-term conditions suggests that patients benefit from self-management education and that this may improve quality of life (QoL). Epilepsy self-management education has yet to be tested in a UK setting. Objectives To determine the effectiveness and cost-effectiveness of Self-Management education for people with poorly controlled epILEpsy [SMILE (UK)]. Design A parallel pragmatic randomised controlled trial. Setting Participants were recruited from eight hospitals in London and south-east England. Participants Adults aged ≥ 16 years with epilepsy and two or more epileptic seizures in the past year, who were currently being prescribed antiepileptic drugs. Intervention A 2-day group self-management course alongside treatment as usual (TAU). The control group received TAU. Main outcome measures The primary outcome is QoL in people with epilepsy at 12-month follow-up using the Quality Of Life In Epilepsy 31-P (QOLIE-31-P) scale. Other outcomes were seizure control, impact of epilepsy, medication adverse effects, psychological distress, perceived stigma, self-mastery and medication adherence. Cost-effectiveness analyses and a process evaluation were undertaken. Randomisation A 1 : 1 ratio between trial arms using fixed block sizes of two. Blinding Participants were not blinded to their group allocation because of the nature of the study. Researchers involved in data collection and analysis remained blinded throughout. Results The trial completed successfully. A total of 404 participants were enrolled in the study [SMILE (UK), n = 205; TAU, n = 199] with 331 completing the final follow-up at 12 months [SMILE (UK), n = 163; TAU, n = 168]. In the intervention group, 61.5% completed all sessions of the course. No adverse events were found to be related to the intervention. At baseline, participants had a mean age of 41.7 years [standard deviation (SD) 14.1 years], and had epilepsy for a median of 18 years. The mean QOLIE-31-P score for the whole group at baseline was 66.0 out of 100.0 (SD 14.2). Clinically relevant levels of anxiety symptoms were reported in 53.6% of the group and depression symptoms in 28.0%. The results following an intention-to-treat analysis showed no change in any measures at the 12-month follow-up [QOLIE-31-P: SMILE (UK) mean: 67.4, SD 13.5; TAU mean: 69.5, SD 14.8]. The cost-effectiveness study showed that SMILE (UK) was possibly cost-effective but was also associated with lower QoL. The process evaluation with 20 participants revealed that a group course increased confidence by sharing with others and improved self-management behaviours. Conclusions For people with epilepsy and persistent seizures, a 2-day self-management education course is cost-saving, but does not improve QoL after 12 months or reduce anxiety or depression symptoms. A psychological intervention may help with anxiety and depression. Interviewed participants reported attending a group course increased their confidence and helped them improve their self-management. Future work More research is needed on self-management courses, with psychological components and integration with routine monitoring. Trial registration Current Controlled Trials ISRCTN57937389. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 21.
See the NIHR Journals Library website for further project information.

Journal ArticleDOI
TL;DR: Statistical methods that combine VF and OCT data were compared with VF-only methods to establish whether or not these allow (1) more rapid identification of glaucoma progression and (2) shorter or smaller clinical trials.
Abstract: Background: Progressive optic nerve damage in glaucoma results in vision loss, quantifiable with visual field (VF) testing. VF measurements are, however, highly variable, making identification of worsening vision (‘progression’) challenging. Glaucomatous optic nerve damage can also be measured with imaging techniques such as optical coherence tomography (OCT). Objective: To compare statistical methods that combine VF and OCT data with VF-only methods to establish whether or not these allow (1) more rapid identification of glaucoma progression and (2) shorter or smaller clinical trials. Design: Method ‘hit rate’ (related to sensitivity) was evaluated in subsets of the United Kingdom Glaucoma Treatment Study (UKGTS) and specificity was evaluated in 72 stable glaucoma patients who had 11 VF and OCT tests within 3 months (the RAPID data set). The reference progression detection method was based on Guided Progression Analysis™ (GPA) Software (Carl Zeiss Meditec Inc., Dublin, CA, USA). Index methods were based on previously described approaches [Analysis with Non-Stationary Weibull Error Regression and Spatial enhancement (ANSWERS), Permutation analyses Of Pointwise Linear Regression (PoPLR) and structure-guided ANSWERS (sANSWERS)] or newly developed methods based on Permutation Test (PERM), multivariate hierarchical models with multiple imputation for censored values (MaHMIC) and multivariate generalised estimating equations with multiple imputation for censored values (MaGIC). Setting: Ten university and general ophthalmology units (UKGTS) and a single university ophthalmology unit (RAPID). Participants: UKGTS participants were newly diagnosed glaucoma patients randomised to intraocular pressure-lowering drops or placebo. RAPID participants had glaucomatous VF loss, were on treatment and were clinically stable. Interventions: 24-2 VF tests with the Humphrey Field Analyzer and optic nerve imaging with time-domain (TD) Stratus OCT™ (Carl Zeiss Meditec Inc., Dublin, CA, USA). Main outcome measures: Criterion hit rate and specificity, time to progression, future VF prediction error, proportion progressing in UKGTS treatment groups, hazard ratios (HRs) and study sample size. Results: Criterion specificity was 95% for all tests; the hit rate was 22.2% for GPA, 41.6% for PoPLR, 53.8% for ANSWERS and 61.3% for sANSWERS (all comparisons p ≤ 0.042). Mean survival time (weeks) was 93.6 for GPA, 82.5 for PoPLR, 72.0 for ANSWERS and 69.1 for sANSWERS. The median prediction errors (decibels) when the initial trend was used to predict the final VF were 3.8 (5th to 95th percentile 1.7 to 7.6) for PoPLR, 3.0 (5th to 95th percentile 1.5 to 5.7) for ANSWERS and 2.3 (5th to 95th percentile 1.3 to 4.5) for sANSWERS. HRs were 0.57 [95% confidence interval (CI) 0.34 to 0.90; p = 0.016] for GPA, 0.59 (95% CI 0.42 to 0.83; p = 0.002) for PoPLR, 0.76 (95% CI 0.56 to 1.02; p = 0.065) for ANSWERS and 0.70 (95% CI 0.53 to 0.93; p = 0.012) for sANSWERS. Sample size estimates were not reduced using methods including OCT data. PERM hit rates were between 8.3% and 17.4%. Treatment effects were non-significant in MaHMIC and MaGIC analyses; statistical significance was altered little by incorporating imaging. Limitations: TD OCT is less precise than current imaging technology; current OCT technology would likely perform better. The size of the RAPID data set limited the precision of criterion specificity estimates. 
Conclusions: The sANSWERS method combining VF and OCT data had a higher hit rate and identified progression more quickly than the reference and other VF-only methods, and produced more accurate estimates of the progression rate, but did not increase treatment effect statistical significance. Similar studies with current OCT technology need to be undertaken and the statistical methods need refinement.
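PoPLR, one of the index methods named above, combines pointwise linear regression across visual field locations and judges significance against permutations of the visit order. The sketch below is a simplified reconstruction of that idea from the published description, not the study's code; the simulated series, number of locations and all parameters are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical VF series: sensitivities (dB) at 52 locations over 8 visits,
# with a slow simulated decline plus test-retest noise.
n_loc, n_vis = 52, 8
visits = np.arange(n_vis, dtype=float)
true_slope = rng.uniform(-0.8, 0.1, size=n_loc)          # dB per visit
series = 30 + true_slope[:, None] * visits + rng.normal(0, 1.5, (n_loc, n_vis))

def combined_stat(y, x):
    """Fisher combination of one-sided p-values for a negative slope at each location."""
    s = 0.0
    for loc in range(y.shape[0]):
        res = stats.linregress(x, y[loc])
        p_two = res.pvalue
        p_one = p_two / 2 if res.slope < 0 else 1 - p_two / 2
        s += -2.0 * np.log(max(p_one, 1e-12))
    return s

observed = combined_stat(series, visits)

# Null distribution: permute the visit order (the same permutation at every location).
n_perm = 500
perm_stats = np.empty(n_perm)
for i in range(n_perm):
    order = rng.permutation(n_vis)
    perm_stats[i] = combined_stat(series[:, order], visits)

p_progression = (perm_stats >= observed).mean()
print(f"Permutation p-value for progression: {p_progression:.3f}")
```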

Journal ArticleDOI
TL;DR: The accepted criteria for a population-based AAA screening programme in women are not currently met, and a large-scale study is needed of the exact aortic size distribution for women screened at relevant ages.
Abstract: Background Abdominal aortic aneurysm (AAA) screening programmes have been established for men in the UK to reduce deaths from AAA rupture. Whether or not screening should be extended to women is uncertain. Objective To evaluate the cost-effectiveness of population screening for AAAs in women and compare a range of screening options. Design A discrete event simulation (DES) model was developed to provide a clinically realistic model of screening, surveillance, and elective and emergency AAA repair operations. Input parameters specifically for women were employed. The model was run for 10 million women, with parameter uncertainty addressed by probabilistic and deterministic sensitivity analyses. Setting Population screening in the UK. Participants Women aged ≥ 65 years, followed up to the age of 95 years. Interventions Invitation to ultrasound screening, followed by surveillance for small AAAs and elective surgical repair for large AAAs. Main outcome measures Number of operations undertaken, AAA-related mortality, quality-adjusted life-years (QALYs), NHS costs and cost-effectiveness with annual discounting. Data sources AAA surveillance data, National Vascular Registry, Hospital Episode Statistics, trials of elective and emergency AAA surgery, and the NHS Abdominal Aortic Aneurysm Screening Programme (NAAASP). Review methods Systematic reviews of AAA prevalence and, for elective operations, suitability for endovascular aneurysm repair, non-intervention rates, operative mortality and literature reviews for other parameters. Results The prevalence of AAAs (aortic diameter of ≥ 3.0 cm) was estimated as 0.43% in women aged 65 years and 1.15% at age 75 years. The corresponding attendance rates following invitation to screening were estimated as 73% and 62%, respectively. The base-case model adopted the same age at screening (65 years), definition of an AAA (diameter of ≥ 3.0 cm), surveillance intervals (1 year for AAAs with diameter of 3.0–4.4 cm, 3 months for AAAs with diameter of 4.5–5.4 cm) and AAA diameter for consideration of surgery (5.5 cm) as in NAAASP for men. Per woman invited to screening, the estimated gain in QALYs was 0.00110, and the incremental cost was £33.99. This gave an incremental cost-effectiveness ratio (ICER) of £31,000 per QALY gained. The corresponding incremental net monetary benefit at a threshold of £20,000 per QALY gained was –£12.03 (95% uncertainty interval –£27.88 to £22.12). Almost no sensitivity analyses brought the ICER below £20,000 per QALY gained; an exception was doubling the AAA prevalence to 0.86%, which resulted in an ICER of £13,000. Alternative screening options (increasing the screening age to 70 years, lowering the threshold for considering surgery to diameters of 5.0 cm or 4.5 cm, lowering the diameter defining an AAA in women to 2.5 cm and lengthening the surveillance intervals for the smallest AAAs) did not bring the ICER below £20,000 per QALY gained when considered either singly or in combination. Limitations The model for women was not directly validated against empirical data. Some parameters were poorly estimated, potentially lacking relevance or unavailable for women. Conclusion The accepted criteria for a population-based AAA screening programme in women are not currently met. Future work A large-scale study is needed of the exact aortic size distribution for women screened at relevant ages. The DES model can be adapted to evaluate screening options in men.
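The base-case cost-effectiveness figures quoted above follow directly from the reported per-woman increments. As a quick check (a point estimate only, not the probabilistic analysis), the short calculation below reproduces the ICER and the incremental net monetary benefit from those two numbers; the small discrepancy from the reported –£12.03 arises because the published inputs are rounded.

```python
# Base-case increments per woman invited to screening, as reported in the abstract.
incremental_qalys = 0.00110
incremental_cost = 33.99      # pounds
wtp_threshold = 20_000.0      # pounds per QALY gained

icer = incremental_cost / incremental_qalys                                  # ~ £31,000 per QALY, as reported
net_monetary_benefit = wtp_threshold * incremental_qalys - incremental_cost  # ~ -£12

print(f"ICER: £{icer:,.0f} per QALY gained")
print(f"Incremental net monetary benefit at £20,000/QALY: £{net_monetary_benefit:.2f}")
```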

Journal ArticleDOI
TL;DR: There is uncertainty about which treatments are most promising, particularly with respect to treating earlier-stage injuries, and saline flush-out techniques and conservative management approaches are commonly used and may be suitable for evaluation in trials.
Abstract: BACKGROUND: Extravasation injuries are caused by unintended leakages of fluids or medicines from intravenous lines, but there is no consensus on the best treatment approaches. OBJECTIVES: To identify which treatments may be best for treating extravasation injuries in infants and young children. DESIGN: Scoping review and survey of practice. POPULATION: Children aged < 18 years with extravasation injuries and NHS staff who treat children with extravasation injuries. INTERVENTIONS: Any treatment for extravasation injury. MAIN OUTCOME MEASURES: Wound healing time, infection, pain, scarring, functional impairment, requirement for surgery. DATA SOURCES: Twelve database searches were carried out in February 2017 without date restrictions, including MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature) Plus and EMBASE (Excerpta Medica dataBASE). METHODS: Scoping review - studies were screened in duplicate. Data were extracted by one researcher and checked by another. Studies were grouped by design, and then by intervention, with details summarised narratively and in tables. The survey questionnaire was distributed to NHS staff at neonatal units, paediatric intensive care units and principal oncology/haematology units. Summary results were presented narratively and in tables and figures. RESULTS: The evidence identified in the scoping review mostly comprised small, retrospective, uncontrolled group studies or case reports. The studies covered a wide range of interventions including conservative management approaches, saline flush-out techniques (with or without prior hyaluronidase), hyaluronidase (without flush-out), artificial skin treatments, debridement and plastic surgery. Few studies graded injury severity and the results sections and outcomes reported in most studies were limited. There was heterogeneity across study populations in age, types of infusate, injury severity, location of injury and the time gaps between injury identification and subsequent treatment. Some of the better evidence related to studies of flush-out techniques. The NHS survey yielded 63 responses from hospital units across the UK. Results indicated that, although most units had a written protocol or guideline for treating extravasation injuries, only one-third of documents included a staging system for grading injury severity. In neonatal units, parenteral nutrition caused most extravasation injuries. In principal oncology/haematology units, most injuries were due to vesicant chemotherapies. The most frequently used interventions were elevation of the affected area and analgesics. Warm or cold compresses were rarely used. Saline flush-out treatments, either with or without hyaluronidase, were regularly used in about half of all neonatal units. Most responders thought a randomised controlled trial might be a viable future research design, though opinions varied greatly by setting. LIMITATIONS: Paucity of good-quality studies. CONCLUSIONS: There is uncertainty about which treatments are most promising, particularly with respect to treating earlier-stage injuries. Saline flush-out techniques and conservative management approaches are commonly used and may be suitable for evaluation in trials. FUTURE WORK: Conventional randomised trials may be difficult to perform, although a randomised registry trial may be an appropriate alternative. FUNDING: The National Institute for Health Research Health Technology Assessment programme.

Journal ArticleDOI
TL;DR: Findings from the main study and the naturalistic follow-up suggest that staff training in PBS as delivered in this study is insufficient to achieve significant clinical gains beyond TAU in community ID services.
Abstract: BACKGROUND: Preliminary studies have indicated that training staff in Positive Behaviour Support (PBS) may help to reduce challenging behaviour among people with intellectual disability (ID). OBJECTIVE: To evaluate whether or not such training is clinically effective in reducing challenging behaviour in routine care. The study also included longer-term follow-up (approximately 36 months). DESIGN: A multicentre, single-blind, two-arm, parallel-cluster randomised controlled trial. The unit of randomisation was the community ID service using an independent web-based randomisation system and random permuted blocks on a 1 : 1 allocation stratified by a staff-to-patient ratio for each cluster. SETTING: Community ID services in England. PARTICIPANTS: Adults (aged > 18 years) across the range of ID with challenging behaviour [Aberrant Behaviour Checklist – Community total score (ABC-CT) ≥ 15]. INTERVENTIONS: Manual-assisted face-to-face PBS training to therapists and treatment as usual (TAU) compared with TAU only in the control arm. MAIN OUTCOME MEASURES: Carer-reported changes in challenging behaviour as measured by the ABC-CT over 12 months. Secondary outcomes included psychopathology, community participation, family and paid carer burden, family carer psychopathology, costs of care and quality-adjusted life-years (QALYs). Data on main outcome, service use and health-related quality of life were collected for the 36-month follow-up. RESULTS: A total of 246 participants were recruited from 23 teams, of whom 109 were in the intervention arm (11 teams) and 137 were in the control arm (12 teams). The difference in ABC-CT between the intervention and control arms [mean difference -2.14, 95% confidence interval (CI) -8.79 to 4.51; p = 0.528] was not statistically significant. No treatment effects were found for any of the secondary outcomes. The mean cost per participant in the intervention arm was £1201. Over 12 months, there was a difference in QALYs of 0.076 in favour of the intervention (95% CI 0.011 to 0.140 QALYs) and a 60% chance that the intervention is cost-effective compared with TAU from a health and social care cost perspective at the threshold of £20,000 per QALY gained. Twenty-nine participants experienced 45 serious adverse events (intervention arm, n = 19; control arm, n = 26). PBS plans were available for 33 participants. An independent assessment of the quality of these plans found that all were less than optimal. Forty-six qualitative interviews were conducted with service users, family carers, paid carers and service managers as part of the process evaluation. Service users reported that they had learned to manage difficult situations and had gained new skills, and carers reported a positive relationship with therapists. At 36 months' follow-up (n = 184), the mean ABC-CT difference between arms was not significant (-3.70, 95% CI -9.25 to 1.85; p = 0.191). The initial cost-effectiveness of the intervention dissipated over time. LIMITATIONS: The main limitations were low treatment fidelity and reach of the intervention. CONCLUSIONS: Findings from the main study and the naturalistic follow-up suggest that staff training in PBS as delivered in this study is insufficient to achieve significant clinical gains beyond TAU in community ID services. Although there is an indication that training in PBS is potentially cost-effective, this is not maintained in the longer term.
There is scope both to develop new approaches to challenging behaviour and to optimise the delivery of PBS in routine clinical practice. TRIAL REGISTRATION: This study is registered as NCT01680276. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 15. See the NIHR Journals Library website for further project information.
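The reported 60% probability that PBS training is cost-effective at £20,000 per QALY gained is the kind of figure typically derived from the incremental net monetary benefit across bootstrap replicates of incremental costs and QALYs. The sketch below illustrates that calculation only; the replicate distributions, seed and spread are hypothetical and are not taken from the trial data.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical bootstrap replicates of incremental QALYs and incremental costs
# (intervention minus control); real values would come from the trial analysis.
inc_qalys = rng.normal(loc=0.076, scale=0.033, size=5000)
inc_costs = rng.normal(loc=1201.0, scale=600.0, size=5000)

threshold = 20_000  # willingness to pay per QALY gained (GBP)

# Incremental net monetary benefit for each bootstrap replicate
inmb = threshold * inc_qalys - inc_costs

# Probability that the intervention is cost-effective at this threshold
prob_cost_effective = np.mean(inmb > 0)
print(f"P(cost-effective at £{threshold:,}/QALY) = {prob_cost_effective:.2f}")
```

Repeating the final step over a range of thresholds gives the cost-effectiveness acceptability curve usually reported alongside such probabilities.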

Journal ArticleDOI
TL;DR: This study did not find convincing evidence of a clinically important benefit for mirtazapine in addition to an SSRI or an SNRI antidepressant over placebo in primary care patients with TRD, and there was no evidence that the addition of mirtazapine was a cost-effective use of NHS resources.
Abstract: BACKGROUND: Depression is usually managed in primary care and antidepressants are often the first-line treatment, but only half of those treated respond to a single antidepressant. OBJECTIVES: To investigate whether or not combining mirtazapine with a serotonin-noradrenaline reuptake inhibitor (SNRI) or selective serotonin reuptake inhibitor (SSRI) antidepressant results in better patient outcomes and more efficient NHS care than SNRI or SSRI therapy alone in treatment-resistant depression (TRD). DESIGN: The MIR trial was a two-parallel-group, multicentre, pragmatic, placebo-controlled randomised trial with allocation at the level of the individual. SETTING: Participants were recruited from primary care in Bristol, Exeter, Hull/York and Manchester/Keele. PARTICIPANTS: Eligible participants were aged ≥ 18 years; were taking an SSRI or an SNRI antidepressant for at least 6 weeks at an adequate dose; scored ≥ 14 points on the Beck Depression Inventory-II (BDI-II); were adherent to medication; and met the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, criteria for depression. INTERVENTIONS: Participants were randomised using a computer-generated code to either oral mirtazapine or a matched placebo, starting at a dose of 15 mg daily for 2 weeks and increasing to 30 mg daily for up to 12 months, in addition to their usual antidepressant. Participants, their general practitioners (GPs) and the research team were blind to the allocation. MAIN OUTCOME MEASURES: The primary outcome was depression symptoms at 12 weeks post randomisation compared with baseline, measured as a continuous variable using the BDI-II. Secondary outcomes (at 12, 24 and 52 weeks) included response, remission of depression, change in anxiety symptoms, adverse events (AEs), quality of life, adherence to medication, health and social care use and cost-effectiveness. Outcomes were analysed on an intention-to-treat basis. A qualitative study explored patients' views and experiences of managing depression and GPs' views on prescribing a second antidepressant. RESULTS: There were 480 patients randomised to the trial (mirtazapine and usual care, n = 241; placebo and usual care, n = 239), of whom 431 patients (89.8%) were followed up at 12 weeks. BDI-II scores at 12 weeks were lower in the mirtazapine group than the placebo group after adjustment for baseline BDI-II score and minimisation and stratification variables [difference -1.83 points, 95% confidence interval (CI) -3.92 to 0.27 points; p = 0.087]. This was smaller than the minimum clinically important difference and the CI included the null. The difference became smaller at subsequent time points (24 weeks: -0.85 points, 95% CI -3.12 to 1.43 points; 12 months: 0.17 points, 95% CI -2.13 to 2.46 points). More participants in the mirtazapine group withdrew from the trial medication, citing mild AEs (46 vs. 9 participants). CONCLUSIONS: This study did not find convincing evidence of a clinically important benefit for mirtazapine in addition to an SSRI or an SNRI antidepressant over placebo in primary care patients with TRD. There was no evidence that the addition of mirtazapine was a cost-effective use of NHS resources. GPs and patients were concerned about adding an additional antidepressant. LIMITATIONS: Voluntary unblinding for participants after the primary outcome at 12 weeks made interpretation of longer-term outcomes more difficult.
FUTURE WORK: Treatment-resistant depression remains an area of important, unmet need, with limited evidence of effective treatments. Promising interventions include augmentation with atypical antipsychotics and treatment using transcranial magnetic stimulation. TRIAL REGISTRATION: Current Controlled Trials ISRCTN06653773; EudraCT number 2012-000090-23. FUNDING: This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 63. See the NIHR Journals Library website for further project information.
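The adjusted between-arm difference in BDI-II at 12 weeks (adjusted for baseline score and the minimisation and stratification variables) corresponds to a standard ANCOVA-style regression of the follow-up score on treatment arm and baseline covariates. Below is a minimal illustration using statsmodels; the data frame, column names and simulated values are hypothetical and are not the MIR dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset mimicking the structure of the analysis; values simulated.
rng = np.random.default_rng(seed=2)
n = 431
df = pd.DataFrame({
    "bdi_baseline": rng.normal(30, 8, n),
    "arm": rng.integers(0, 2, n),      # 1 = mirtazapine, 0 = placebo
    "centre": rng.integers(0, 4, n),   # recruitment centre (design variable)
})
df["bdi_12wk"] = df["bdi_baseline"] * 0.6 - 2.0 * df["arm"] + rng.normal(0, 9, n)

# ANCOVA-style model: follow-up score adjusted for baseline and design variables
model = smf.ols("bdi_12wk ~ arm + bdi_baseline + C(centre)", data=df).fit()
print(model.params["arm"])           # adjusted between-arm difference in points
print(model.conf_int().loc["arm"])   # 95% confidence interval for that difference
```

The coefficient on the treatment indicator plays the role of the reported -1.83-point difference, with its confidence interval read directly from the fitted model.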

Journal ArticleDOI
TL;DR: Enteral supplementation with bovine lactoferrin does not reduce the incidence of infection, mortality or other morbidity in very preterm infants; these findings should be combined in a meta-analysis with data from other trials.
Abstract: BACKGROUND: Infections acquired in hospital are an important cause of morbidity and mortality in very preterm infants. Several small trials have suggested that supplementing the enteral diet of very preterm infants with lactoferrin, an antimicrobial protein processed from cow's milk, prevents infections and associated complications. OBJECTIVE: To determine whether or not enteral supplementation with bovine lactoferrin (The Tatua Cooperative Dairy Company Ltd, Morrinsville, New Zealand) reduces the risk of late-onset infection (acquired > 72 hours after birth) and other morbidity and mortality in very preterm infants. DESIGN: Randomised, placebo-controlled, parallel-group trial. Randomisation was via a web-based portal and used an algorithm that minimised for recruitment site, weeks of gestation, sex and single versus multiple births. SETTING: UK neonatal units between May 2014 and September 2017. PARTICIPANTS: Infants born at < 32 weeks' gestation and aged < 72 hours at trial enrolment. INTERVENTIONS: Eligible infants were allocated individually (1 : 1 ratio) to receive enteral bovine lactoferrin (150 mg/kg/day; maximum 300 mg/day) or sucrose (British Sugar, Peterborough, UK) placebo (same dose) once daily from trial entry until a postmenstrual age of 34 weeks. Parents, caregivers and outcome assessors were unaware of group assignment. OUTCOMES: Primary outcome - microbiologically confirmed or clinically suspected late-onset infection. Secondary outcomes - microbiologically confirmed infection; all-cause mortality; severe necrotising enterocolitis (NEC); retinopathy of prematurity (ROP); bronchopulmonary dysplasia (BPD); a composite of infection, NEC, ROP, BPD and mortality; days of receipt of antimicrobials until 34 weeks' postmenstrual age; length of stay in hospital; and length of stay in intensive care, high-dependency and special-care settings. RESULTS: Of 2203 enrolled infants, primary outcome data were available for 2182 infants (99%). In the intervention group, 316 out of 1093 (28.9%) infants acquired a late-onset infection versus 334 out of 1089 (30.7%) infants in the control group [adjusted risk ratio (RR) 0.95, 95% confidence interval (CI) 0.86 to 1.04]. There were no significant differences in any secondary outcomes: microbiologically confirmed infection (RR 1.05, 99% CI 0.87 to 1.26), mortality (RR 1.05, 99% CI 0.66 to 1.68), NEC (RR 1.13, 99% CI 0.68 to 1.89), ROP (RR 0.89, 99% CI 0.62 to 1.28), BPD (RR 1.01, 99% CI 0.90 to 1.13), or a composite of infection, NEC, ROP, BPD and mortality (RR 1.01, 99% CI 0.94 to 1.08). There were no differences in the number of days of receipt of antimicrobials, length of stay in hospital, or length of stay in intensive care, high-dependency or special-care settings. There were 16 reports of serious adverse events for infants in the lactoferrin group and 10 for infants in the sucrose group. CONCLUSIONS: Enteral supplementation with bovine lactoferrin does not reduce the incidence of infection, mortality or other morbidity in very preterm infants. FUTURE WORK: Increase the precision of the estimates of effect on rarer secondary outcomes by combining the data in a meta-analysis with data from other trials. A mechanistic study is being conducted in a subgroup of trial participants to explore whether or not lactoferrin supplementation affects the intestinal microbiome and metabolite profile of very preterm infants. TRIAL REGISTRATION: Current Controlled Trials ISRCTN88261002. 
FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 74. See the NIHR Journals Library website for further project information. This trial was also sponsored by the University of Oxford, Oxford, UK. The funder provided advice and support and monitored study progress but did not have a role in study design or data collection, analysis and interpretation.
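The primary outcome is summarised as an adjusted risk ratio with a 95% CI. As a rough cross-check, an unadjusted risk ratio with a log-scale Wald interval can be computed directly from the reported counts (316/1093 vs. 334/1089); the published estimate additionally adjusts for the minimisation factors, so the sketch below is an approximation only, not the trial's analysis.

```python
import math

# Counts for the primary outcome (late-onset infection): events / total per arm
events_lf, n_lf = 316, 1093     # lactoferrin arm
events_ctl, n_ctl = 334, 1089   # sucrose placebo arm

risk_lf = events_lf / n_lf
risk_ctl = events_ctl / n_ctl
rr = risk_lf / risk_ctl

# Wald confidence interval on the log scale (unadjusted approximation)
se_log_rr = math.sqrt(1 / events_lf - 1 / n_lf + 1 / events_ctl - 1 / n_ctl)
z95 = 1.96
lower = math.exp(math.log(rr) - z95 * se_log_rr)
upper = math.exp(math.log(rr) + z95 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Swapping z95 for 2.576 gives the 99% intervals used for the secondary outcomes, where wider intervals guard against multiple comparisons.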