
Showing papers by "Abraham D. Flaxman published in 2016"


Journal ArticleDOI
Haidong Wang, Mohsen Naghavi, Christine Allen, Ryan M Barber, and 841 more authors (293 institutions)
TL;DR: The Global Burden of Disease 2015 Study provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015, finding several countries in sub-Saharan Africa had very large gains in life expectancy, rebounding from an era of exceedingly high loss of life due to HIV/AIDS.

4,804 citations


Journal ArticleDOI
TL;DR: The enormous health loss attributable to viral hepatitis, and the availability of effective vaccines and treatments, suggests an important opportunity to improve public health.

1,081 citations


Journal ArticleDOI
27 Dec 2016-JAMA
TL;DR: Modeled estimates of US spending on personal health care and public health showed substantial increases from 1996 through 2013, with spending on diabetes, ischemic heart disease, and low back and neck pain accounting for the highest amounts of spending by disease category.
Abstract: Importance US health care spending has continued to increase, and now accounts for more than 17% of the US economy. Despite the size and growth of this spending, little is known about how spending on each condition varies by age and across time. Objective To systematically and comprehensively estimate US spending on personal health care and public health, according to condition, age and sex group, and type of care. Design and Setting Government budgets, insurance claims, facility surveys, household surveys, and official US records from 1996 through 2013 were collected and combined. In total, 183 sources of data were used to estimate spending for 155 conditions (including cancer, which was disaggregated into 29 conditions). For each record, spending was extracted, along with the age and sex of the patient, and the type of care. Spending was adjusted to reflect the health condition treated, rather than the primary diagnosis. Exposures Encounter with US health care system. Main Outcomes and Measures National spending estimates stratified by condition, age and sex group, and type of care. Results From 1996 through 2013, $30.1 trillion of personal health care spending was disaggregated by 155 conditions, age and sex group, and type of care. Among these 155 conditions, diabetes had the highest health care spending in 2013, with an estimated $101.4 billion (uncertainty interval [UI], $96.7 billion-$106.5 billion) in spending, including 57.6% (UI, 53.8%-62.1%) spent on pharmaceuticals and 23.5% (UI, 21.7%-25.7%) spent on ambulatory care. Ischemic heart disease accounted for the second-highest amount of health care spending in 2013, with estimated spending of $88.1 billion (UI, $82.7 billion-$92.9 billion), and low back and neck pain accounted for the third-highest amount, with estimated health care spending of $87.6 billion (UI, $67.5 billion-$94.1 billion). The conditions with the highest spending levels varied by age, sex, type of care, and year. 
Personal health care spending increased for 143 of the 155 conditions from 1996 through 2013. Spending on low back and neck pain and on diabetes increased the most over the 18 years, by an estimated $57.2 billion (UI, $47.4 billion-$64.4 billion) and $64.4 billion (UI, $57.8 billion-$70.7 billion), respectively. From 1996 through 2013, spending on emergency care and retail pharmaceuticals increased at the fastest rates (6.4% [UI, 6.4%-6.4%] and 5.6% [UI, 5.6%-5.6%] annual growth rate, respectively), which were higher than annual rates for spending on inpatient care (2.8% [UI, 2.8%-2.8%]) and nursing facility care (2.5% [UI, 2.5%-2.5%]). Conclusions and Relevance Modeled estimates of US spending on personal health care and public health showed substantial increases from 1996 through 2013, with spending on diabetes, ischemic heart disease, and low back and neck pain accounting for the highest amounts of spending by disease category. The rate of change in annual spending varied considerably among different conditions and types of care. This information may have implications for efforts to control US health care spending.
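The annual growth rates above are compound rates over the 1996-2013 window. A back-of-envelope sketch of that arithmetic (function name and example figures are illustrative, not taken from the paper's methods):

```python
def annual_growth_rate(start_value, end_value, years):
    # compound annual growth rate: the constant yearly rate that carries
    # start_value to end_value over the given number of years
    return (end_value / start_value) ** (1.0 / years) - 1.0

# a 6.4% annual rate sustained over the 17 year-to-year steps from
# 1996 to 2013 roughly triples spending
growth_factor = (1 + 0.064) ** 17
```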

752 citations


Journal ArticleDOI
13 Dec 2016-JAMA
TL;DR: The approach to county-level analyses with small area models used in this study has the potential to provide novel insights into US disease-specific mortality time trends and their differences across geographic regions.
Abstract: Importance County-level patterns in mortality rates by cause have not been systematically described but are potentially useful for public health officials, clinicians, and researchers seeking to improve health and reduce geographic disparities. Objectives To demonstrate the use of a novel method for county-level estimation and to estimate annual mortality rates by US county for 21 mutually exclusive causes of death from 1980 through 2014. Design, Setting, and Participants Redistribution methods for garbage codes (implausible or insufficiently specific cause of death codes) and small area estimation methods (statistical methods for estimating rates in small subpopulations) were applied to death registration data from the National Vital Statistics System to estimate annual county-level mortality rates for 21 causes of death. These estimates were raked (scaled along multiple dimensions) to ensure consistency between causes and with existing national-level estimates. Geographic patterns in the age-standardized mortality rates in 2014 and in the change in the age-standardized mortality rates between 1980 and 2014 for the 10 highest-burden causes were determined. Exposure County of residence. Main Outcomes and Measures Cause-specific age-standardized mortality rates. Results A total of 80 412 524 deaths were recorded from January 1, 1980, through December 31, 2014, in the United States. Of these, 19.4 million deaths were assigned garbage codes. Mortality rates were analyzed for 3110 counties or groups of counties. Large between-county disparities were evident for every cause, with the gap in age-standardized mortality rates between counties in the 90th and 10th percentiles varying from 14.0 deaths per 100 000 population (cirrhosis and chronic liver diseases) to 147.0 deaths per 100 000 population (cardiovascular diseases). 
Geographic regions with elevated mortality rates differed among causes: for example, cardiovascular disease mortality tended to be highest along the southern half of the Mississippi River, while mortality rates from self-harm and interpersonal violence were elevated in southwestern counties, and mortality rates from chronic respiratory disease were highest in counties in eastern Kentucky and western West Virginia. Counties also varied widely in terms of the change in cause-specific mortality rates between 1980 and 2014. For most causes (eg, neoplasms, neurological disorders, and self-harm and interpersonal violence), both increases and decreases in county-level mortality rates were observed. Conclusions and Relevance In this analysis of US cause-specific county-level mortality rates from 1980 through 2014, there were large between-county differences for every cause of death, although geographic patterns varied substantially by cause of death. The approach to county-level analyses with small area models used in this study has the potential to provide novel insights into US disease-specific mortality time trends and their differences across geographic regions.
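Raking, as parenthetically defined above, rescales estimates along multiple dimensions until they match fixed marginal totals. A toy sketch of the standard iterative proportional fitting loop that implements this idea (not the study's actual implementation, which rakes cause-specific county rates against national estimates):

```python
def rake(table, row_targets, col_targets, iters=50):
    # iterative proportional fitting: alternately rescale each row and each
    # column so its sum matches the target margin; repeat until stable
    for _ in range(iters):
        for i, target in enumerate(row_targets):
            s = sum(table[i])
            table[i] = [cell * target / s for cell in table[i]]
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in table)
            for row in table:
                row[j] *= target / s
    return table
```

With consistent margins the loop converges quickly; in the study's setting the two dimensions would correspond to causes of death and higher-level geographic totals.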

200 citations


Journal ArticleDOI
TL;DR: The authors' findings demonstrate substantial disparities in diabetes prevalence, rates of diagnosis, and rates of effective treatment within the U.S. These findings should be used to target high-burden areas and select the right mix of public health strategies.
Abstract: OBJECTIVE Previous analyses of diabetes prevalence in the U.S. have considered either only large geographic regions or only individuals in whom diabetes had been diagnosed. We estimated county-level trends in the prevalence of diagnosed, undiagnosed, and total diabetes as well as rates of diagnosis and effective treatment from 1999 to 2012. RESEARCH DESIGN AND METHODS We used a two-stage modeling procedure. In the first stage, self-reported and biomarker data from the National Health and Nutrition Examination Survey (NHANES) were used to build models for predicting true diabetes status, which were applied to impute true diabetes status for respondents in the Behavioral Risk Factor Surveillance System (BRFSS). In the second stage, small area models were fit to imputed BRFSS data to derive county-level estimates of diagnosed, undiagnosed, and total diabetes prevalence, as well as rates of diabetes diagnosis and effective treatment. RESULTS In 2012, total diabetes prevalence ranged from 8.8% to 26.4% among counties, whereas the proportion of the total number of cases that had been diagnosed ranged from 59.1% to 79.8%, and the proportion of successfully treated individuals ranged from 19.4% to 31.0%. Total diabetes prevalence increased in all counties between 1999 and 2012; however, the rate of increase varied widely. Over the same period, rates of diagnosis increased in all counties, while rates of effective treatment stagnated. CONCLUSIONS Our findings demonstrate substantial disparities in diabetes prevalence, rates of diagnosis, and rates of effective treatment within the U.S. These findings should be used to target high-burden areas and select the right mix of public health strategies.
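A schematic of the first-stage imputation logic described above, with an invented risk_score standing in for the NHANES-trained prediction model (the function name, threshold, and labels are hypothetical, for illustration only):

```python
def impute_status(self_reported_diagnosis, risk_score, threshold=0.5):
    # stage-1 sketch: respondents who self-report a diagnosis count as
    # diagnosed; for everyone else, a model-based risk score (hypothetical
    # here) imputes whether the respondent is a probable undiagnosed case
    if self_reported_diagnosis:
        return "diagnosed"
    return "undiagnosed" if risk_score >= threshold else "no diabetes"
```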

84 citations


Journal ArticleDOI
10 Feb 2016
TL;DR: This study provides, for the first time, age-specific estimates of PTSD and depression prevalence adjusted for an extensive range of covariates and is a significant advancement on the current understanding of the epidemiology in conflict-affected populations.
Abstract: Background. Despite significant research examining mental health in conflict-affected populations we do not yet have a comprehensive epidemiological model of how mental disorders are distributed, or which factors influence the epidemiology in these populations. We aim to derive prevalence estimates specific for region, age and sex of major depression, and PTSD in the general populations of areas exposed to conflict, whilst controlling for an extensive range of covariates.

51 citations


Journal ArticleDOI
26 Jan 2016-PLOS ONE
TL;DR: In this article, the authors developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency.
Abstract: Low-resource countries can greatly benefit from even small increases in efficiency of health service provision, supporting a strong case to measure and pursue efficiency improvement in low- and middle-income countries (LMICs). However, the knowledge base concerning efficiency measurement remains scarce for these contexts. This study shows that current estimation approaches may not be well suited to measure technical efficiency in LMICs and offers an alternative approach for efficiency measurement in these settings. We developed a simulation environment which reproduces the characteristics of health service production in LMICs, and evaluated the performance of Data Envelopment Analysis (DEA) and Stochastic Distance Function (SDF) for assessing efficiency. We found that an ensemble approach (ENS) combining efficiency estimates from a restricted version of DEA (rDEA) and restricted SDF (rSDF) is the preferable method across a range of scenarios. This is the first study to analyze efficiency measurement in a simulation setting for LMICs. Our findings aim to heighten the validity and reliability of efficiency analyses in LMICs, and thus inform policy dialogues about improving the efficiency of health service production in these settings.
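In the simplest special case of one input and one output under constant returns to scale, DEA efficiency reduces to each unit's output/input ratio relative to the best observed ratio. A toy sketch of that special case only (the paper's rDEA/rSDF ensemble handles multiple inputs and outputs and is far more general):

```python
def dea_single_ratio(inputs, outputs):
    # single-input, single-output CCR DEA under constant returns to scale:
    # the frontier is the best observed output/input ratio, and each unit's
    # efficiency is its own ratio divided by the frontier ratio
    ratios = [y / x for x, y in zip(inputs, outputs)]
    frontier = max(ratios)
    return [r / frontier for r in ratios]
```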

24 citations


Journal ArticleDOI
TL;DR: The findings substantiate the WHO recommendation that it is reasonable to collect VAs up to 1 year after death, provided it is accepted that the probability of a correct diagnosis is likely to decline month by month during this period.
Abstract: One key contextual feature in Verbal Autopsy (VA) is the time between death and survey administration, or recall period. This study quantified the effect of recall period on VA performance by using a paired dataset in which two VAs were administered for a single decedent. This study used information from the Population Health Metrics Research Consortium (PHMRC) Study, which collected VAs for “gold standard” cases where cause of death (COD) was supported by clinical criteria. This study repeated VA interviews within 3–52 months of death in PHMRC study sites in Andhra Pradesh, India, and Bohol and Manila, Philippines. The final dataset included 2113 deaths interviewed twice, with recall periods ranging from 0 to 52 months. COD was assigned by the Tariff method and its accuracy determined by comparison with the gold standard COD. The probability of a correct diagnosis of COD decreased by 0.55% per month in the period after death. Site of data collection and survey module also affected the probability of the Tariff Method correctly assigning a COD. The probability of a correct diagnosis in VAs collected 3–11 months after death will, on average, be 95.9% of that in VAs collected within 3 months of death. These findings suggest that collecting VAs within 3 months of death may improve the quality of the information collected, taking the need for a period of mourning into account. This study substantiates the WHO recommendation that it is reasonable to collect VAs up to 1 year after death, provided it is accepted that the probability of a correct diagnosis is likely to decline month by month during this period.
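The reported 0.55%-per-month decline implies a simple linear discount on diagnostic accuracy as the recall period grows. A back-of-envelope sketch of that relationship (the linear form is an assumption for illustration; it does not exactly reproduce the paper's 95.9% figure, which averages over a range of recall periods):

```python
def relative_accuracy(months_since_death, monthly_decline=0.0055):
    # fraction of baseline diagnostic accuracy retained after a given
    # recall period, under a linear 0.55%-per-month decline
    return 1.0 - monthly_decline * months_since_death
```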

21 citations


Journal ArticleDOI
TL;DR: In this article, the authors quantified facility-level technical efficiency across countries, assessed potential determinants of efficiency, and predicted the potential for additional ART expansion, and estimated how many additional ART visits could be accommodated if facilities with low efficiency thresholds reached those levels of efficiency.
Abstract: Since 2000, international funding for HIV has supported scaling up antiretroviral therapy (ART) in sub-Saharan Africa. However, such funding has stagnated for years, threatening the sustainability and reach of ART programs amid efforts to achieve universal treatment. Improving health system efficiencies, particularly at the facility level, is an increasingly critical avenue for extending limited resources for ART; nevertheless, the potential impact of increased facility efficiency on ART capacity remains largely unknown. Through the present study, we sought to quantify facility-level technical efficiency across countries, assess potential determinants of efficiency, and predict the potential for additional ART expansion. Using nationally-representative facility datasets from Kenya, Uganda and Zambia, and measures adjusting for structural quality, we estimated facility-level technical efficiency using an ensemble approach that combined restricted versions of Data Envelopment Analysis and Stochastic Distance Function. We then conducted a series of bivariate and multivariate regression analyses to evaluate possible determinants of higher or lower technical efficiency. Finally, we predicted the potential for ART expansion across efficiency improvement scenarios, estimating how many additional ART visits could be accommodated if facilities with low efficiency thresholds reached those levels of efficiency. In each country, national averages of efficiency fell below 50 % and facility-level efficiency markedly varied. Among facilities providing ART, average efficiency scores spanned from 50 % (95 % uncertainty interval (UI), 48–62 %) in Uganda to 59 % (95 % UI, 53–67 %) in Zambia. Of the facility determinants analyzed, few were consistently associated with higher or lower technical efficiency scores, suggesting that other factors may be more strongly related to facility-level efficiency. 
Based on observed facility resources and an efficiency improvement scenario where all facilities providing ART reached 80 % efficiency, we predicted a 33 % potential increase in ART visits in Kenya, 62 % in Uganda, and 33 % in Zambia. Given observed resources in facilities offering ART, we estimated that 459,000 new ART patients could be seen if facilities in these countries reached 80 % efficiency, equating to a 40 % increase in new patients. Health facilities in Kenya, Uganda, and Zambia could notably expand ART services if the efficiency with which they operate increased. Improving how facility resources are used, and not simply increasing their quantity, has the potential to substantially elevate the impact of global health investments and reduce treatment gaps for people living with HIV.
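The expansion scenarios above follow from treating output as proportional to technical efficiency at fixed resources. A hedged sketch of that proportionality assumption (a simplification for illustration, not the study's actual projection model):

```python
def visits_at_target_efficiency(current_visits, current_eff, target_eff=0.80):
    # if resources are held fixed and output scales with technical efficiency
    # (a simplifying assumption), raising efficiency from current_eff to
    # target_eff scales visit capacity by target_eff / current_eff
    return current_visits * target_eff / current_eff
```

Under this assumption, a facility at 50% efficiency delivering 100 visits could deliver 160 at 80% efficiency.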

20 citations



Journal ArticleDOI
TL;DR: All participants made more economically rational decisions when provided explicit probability information in a non-clinical domain; however, choices in the non-clinical domain were not related to prospect-theory-concordant decision making and risk-aversion tendencies in the clinical domain.
Abstract: Prospect theory suggests that when faced with an uncertain outcome, people display loss aversion by preferring to risk a greater loss rather than incurring a certain, lesser cost. Providing probability information improves decision making towards the economically optimal choice in these situations. Clinicians frequently make decisions when the outcome is uncertain, and loss aversion may influence choices. This study explores the extent to which prospect theory, loss aversion, and probability information in a non-clinical domain explain clinical decision making under uncertainty. Four hundred sixty-two participants (n = 117 non-medical undergraduates, n = 113 medical students, n = 117 resident trainees, and n = 115 medical/surgical faculty) completed a three-part online task. First, participants completed an iced-road salting task using temperature forecasts with or without explicit probability information. Second, participants chose between less or more risk-averse (“defensive medicine”) decisions in standardized scenarios. Last, participants chose between recommending therapy with certain outcomes or risking additional years gained or lost. In the road salting task, the mean expected value for decisions made by clinicians was better than for non-clinicians (−$1,022 vs −$1,061; p < 0.001). Probability information improved decision making for all participants, but non-clinicians improved more (mean improvement of $64 versus $33; p = 0.027). Mean defensive decisions decreased across training level (medical students 2.1 ± 0.9, residents 1.6 ± 0.8, faculty 1.6 ± 1.1; p-trend < 0.001) and prospect-theory-concordant decisions increased (25.4%, 33.9%, and 40.7%; p-trend = 0.016). There was no relationship identified between road salting choices and either defensive medicine or prospect-theory-concordant decisions. All participants made more economically rational decisions when provided explicit probability information in a non-clinical domain.
However, choices in the non-clinical domain were not related to prospect-theory-concordant decision making and risk-aversion tendencies in the clinical domain. Recognizing this discordance may be important when applying prospect theory to interventions aimed at improving clinical care.

Journal ArticleDOI
TL;DR: Falling was the most common cause of civilian injury in Baghdad and many injuries resulted in life-limiting disabilities, and households shouldered much of the burden after fall injury due to loss of income and/or medical expenditure, often resulting in food insecurity.
Abstract: INTRODUCTION: Falls incur nearly 35 million disability-adjusted life-years annually; 75% of these occur in low- and middle-income countries. The epidemiology of civilian injuries during conflict is relatively unknown, yet important for planning prevention initiatives, health policy and humanitarian assistance. This study aimed to determine the death, disability, and household consequences of fall injuries in post-invasion Baghdad. METHODS: A two-stage, cluster-randomised, community-based household survey was performed in May of 2014 to determine the civilian burden of injury from 2003 to 2014 in Baghdad. In addition to questions about household member death, households were interviewed regarding injury specifics, healthcare required, disability, relatedness to conflict and resultant financial hardship. RESULTS: Nine hundred households totaling 5148 individuals were interviewed. There were 138 fall injuries (25% of all injuries reported); fall was the most common mechanism of civilian injury in Baghdad. The rate of serious fall injuries increased from 78 per 100,000 persons in 2003 to 466 per 100,000 in 2013. Fall was the most common mechanism among the injured elderly (i.e. ≥65 years; 15/24 elderly unintentional injuries; 63%). However, 46 fall injuries were in children aged […]. CONCLUSION: Falls were the most common cause of civilian injury in Baghdad. In part due to the effect of prolonged insecurity on a fragile health system, many injuries resulted in life-limiting disabilities. In turn, households shouldered much of the burden after fall injury due to loss of income and/or medical expenditure, often resulting in food insecurity. Given ongoing conflict, civilian injury control initiatives, trauma care strengthening efforts and support for households of the injured are urgently needed.

Journal ArticleDOI
TL;DR: Young adults, pedestrians, motorcyclists and bicyclists were the most frequently injured or killed by RTCs, and the families of road injury victims suffered considerably from lost wages, often resulting in household food insecurity.
Abstract: Introduction Around 50 million people are killed or left disabled on the world's roads each year; most are in middle-income cities. In addition to this background risk, Baghdad has been plagued by decades of insecurity that undermine injury prevention strategies. This study aimed to determine the death, disability, and household consequences of road traffic injuries (RTIs) in postinvasion Baghdad. Methods A two-stage, cluster-randomised, community-based household survey was performed in May 2014 to determine the civilian burden of injury from 2003 to 2014 in Baghdad. In addition to questions about household member death, households were interviewed regarding crash specifics, healthcare required, disability, relatedness to conflict and resultant financial hardship. Results Nine hundred households, totalling 5148 individuals, were interviewed. There were 86 RTIs (16% of all reported injuries) that resulted in 8 deaths (9% of RTIs). Serious RTIs increased in the decade postinvasion and were estimated at 26 341 in 2013 (350 per 100 000 persons). 53% of RTIs involved pedestrians, motorcyclists or bicyclists. 51% of families directly affected by an RTI reported a significant decline in household income or suffered food insecurity. Conclusions RTIs were extremely common and have increased in Baghdad. Young adults, pedestrians, motorcyclists and bicyclists were the most frequently injured or killed in road traffic crashes. There is a large burden of road injury, and the families of road injury victims suffered considerably from lost wages, often resulting in household food insecurity. Ongoing conflict may worsen RTI risk and undermine efforts to reduce road traffic death and disability.

Journal ArticleDOI
TL;DR: Families give coherent accounts of events leading to death but the details vary from interview to interview for the same case, which has considerable implications for the progressive roll out of VAs into civil registration and vital statistics systems.
Abstract: We believe that it is important that governments understand the reliability of the mortality data which they have at their disposal to guide policy debates. In many instances, verbal autopsy (VA) will be the only source of mortality data for populations, yet little is known about how the accuracy of VA diagnoses is affected by the reliability of the symptom responses. We previously described the effect of the duration of time between death and VA administration on VA validity. In this paper, using the same dataset, we assess the relationship between the reliability and completeness of symptom responses and the reliability and accuracy of cause of death (COD) prediction. The study was based on VAs in the Population Health Metrics Research Consortium (PHMRC) VA Validation Dataset from study sites in Bohol and Manila, Philippines, and Andhra Pradesh, India. The initial interview was repeated within 3–52 months of death. Question responses were assessed for reliability and completeness between the two survey rounds. COD was predicted by the Tariff Method. A sample of 4226 VAs was collected for 2113 decedents, including 1394 adults, 349 children, and 370 neonates. Mean question reliability was unexpectedly low (kappa = 0.447): 42.5% of responses positive at the first interview were negative at the second, and 47.9% of responses positive at the second had been negative at the first. Question reliability was greater for the short form of the PHMRC instrument (kappa = 0.497) and when analyzed at the level of the individual decedent (kappa = 0.610). Reliability at the level of the individual decedent was associated with COD predictive reliability and predictive accuracy. Families give coherent accounts of events leading to death, but the details vary from interview to interview for the same case. Accounts are accurate but inconsistent; different subsets of symptoms are identified on each occasion.
However, there are sufficient accurate and consistent subsets of symptoms to enable the Tariff Method to assign a COD. Questions which contributed most to COD prediction were also the most reliable and consistent across repeat interviews; these have been included in the short form VA questionnaire. Accuracy and reliability of diagnosis for an individual death depend on the quality of interview. This has considerable implications for the progressive roll out of VAs into civil registration and vital statistics (CRVS) systems.
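The kappa statistics quoted above measure agreement between the two interview rounds, corrected for agreement expected by chance. A minimal sketch of Cohen's kappa for paired symptom responses (illustrative; the study's computation covers many questions and decedents):

```python
def cohens_kappa(first, second):
    # observed agreement between the two rounds, corrected for the agreement
    # expected by chance given each round's marginal response frequencies
    n = len(first)
    observed = sum(a == b for a, b in zip(first, second)) / n
    categories = set(first) | set(second)
    expected = sum((first.count(c) / n) * (second.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```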

Journal Article
TL;DR: Ongoing, countrywide mortality data collection is crucial for evidence-based priority setting in Nepal, and SmartVA-Analyze is found to provide useful general cause of death data, particularly in settings where death certification is unavailable.
Abstract: Background Nepal is in the midst of a disease transition, including a rapid increase of noncommunicable diseases. In order for health policy makers and planners to make informed programmatic and funding decisions, they need up-to-date and accurate data regarding cause of death throughout the country. Methods of improving cause of death reporting in Nepal are urgently required. Objective We sought to validate SmartVA-Analyze, an application that computer-certifies verbal autopsies, to evaluate it as a method for collecting mortality data in Nepal. Method We conducted a medical record review of mortality cases at Dhulikhel Hospital, Kathmandu University Hospital. Cases with a verifiable underlying cause of death were used as gold standard reference cases. Verbal autopsies were conducted with caregivers of 48 gold standard cases. Result Of the 66 adult gold standard mortality cases reviewed, 76% were caused by cancer, cirrhosis, cardiovascular disease, COPD or injury. When assessing concordance between cause of death from verbal autopsy vs. gold standards, we found an overall agreement (kappa) of 0.50. Kappa based on broader ICD-10 categories was 0.69. Cause-Specific Mortality Fraction Accuracy was 0.625, and disease-specific measures of concordance varied widely, with sensitivities ranging from 0% to 100%. Conclusion Ongoing, countrywide mortality data collection is crucial for evidence-based priority setting in Nepal. Though not valid for all causes, we found SmartVA-Analyze to provide useful general cause of death data, particularly in settings where death certification is unavailable.

Journal ArticleDOI
TL;DR: An examination of the use of preventative leak testing before and after colorectal operations complicated by anastomotic leak finds that surgeons who increased their leak testing more frequently performed operations for diverticulitis, more frequently began their cases laparoscopically, and had longer mean operative times.

Journal ArticleDOI
21 Oct 2016-PLOS ONE
TL;DR: A novel method is reported to verify the reliability of epidemiological (household survey) estimates of direct war-related injury mortality dating back several decades by comparing sibling mortality reports with the frequency of independent news reports about violent historic events.
Abstract: Objectives We estimated war-related Iraqi mortality for the period 1980 through 1993. Method To test our hypothesis that deaths reported by siblings (even dating back several decades) would correspond with war events, we compared sibling mortality reports with the frequency of independent news reports about violent historic events. We used data from a survey of 4,287 adults in 2,000 Iraqi households conducted in 2011. Interviewees reported on the status of their 24,759 siblings. Death rates were applied to population estimates, 1980 to 1993. News report data came from the ProQuest New York Times database. Results About half of sibling-reported deaths across the study period were attributed to direct war-related injuries. The Iran-Iraq war led to nearly 200,000 adult deaths, and the 1990–1991 First Gulf War generated approximately 40,000 more. Deaths during peace intervals before and after each war were significantly lower. We found a relationship between total sibling-reported deaths and the tally of war events across the period, p = 0.02. Conclusions We report a novel method to verify the reliability of epidemiological (household survey) estimates of direct war-related injury mortality dating back several decades.

Posted Content
TL;DR: This work demonstrates that the CCA embeddings capture meaningful relationships among the codes and establishes their usefulness in predicting future elective surgery for diverticulitis, an important marker in efforts for reducing costs in healthcare.
Abstract: We propose using canonical correlation analysis (CCA) to generate features from sequences of medical billing codes. Applying this novel use of CCA to a database of medical billing codes for patients with diverticulitis, we first demonstrate that the CCA embeddings capture meaningful relationships among the codes. We then generate features from these embeddings and establish their usefulness in predicting future elective surgery for diverticulitis, an important marker in efforts for reducing costs in healthcare.
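Canonical correlations can be computed by orthonormalizing each centered view and taking the singular values of their cross-product. A minimal numpy sketch of that textbook construction (not the authors' code; billing-code sequences would first need to be vectorized into the two views X and Y):

```python
import numpy as np

def canonical_correlations(X, Y):
    # center each view, take an orthonormal basis for each column space via
    # reduced QR, then the singular values of the cross-product are the
    # canonical correlations (cosines of the principal angles between spaces)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# toy check: Y is an invertible linear map of X, so the two views span the
# same subspace and every canonical correlation should be 1
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
Y = X @ np.array([[1., 2.], [3., 4.]])
corrs = canonical_correlations(X, Y)
```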
