Journal ArticleDOI

Early warning signs: targeting neonatal and infant mortality using machine learning

TL;DR: In this article, the authors use nation-wide household survey data from India and identify important predictors of neonatal and infant mortality using multiple machine learning (ML) techniques.
Abstract: This article uses nation-wide household survey data from India and identifies important predictors of neonatal and infant mortality using multiple machine learning (ML) techniques. The consensus ...
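As a rough illustration of the workflow described above, the sketch below trains several standard classifiers on household-survey features and compares their cross-validated AUC. The file path, column names, and model choices are hypothetical placeholders, not details taken from the paper.

```python
# Hypothetical sketch: compare several classifiers on survey-style data.
# The file path and the "infant_death" outcome column are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("survey.csv")              # hypothetical survey extract
X = df.drop(columns=["infant_death"])       # hypothetical predictors
y = df["infant_death"]                      # hypothetical binary outcome

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```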
Citations
Journal Article
TL;DR: This work establishes a rate of convergence for mixture proportion estimation under an appropriate distributional assumption, and argues that this rate of convergence is useful for analyzing weakly supervised learning algorithms that build on MPE.
Abstract: Mixture proportion estimation (MPE) is a fundamental tool for solving a number of weakly supervised learning problems – supervised learning problems where label information is noisy or missing. Previous work on MPE has established a universally consistent estimator. In this work we establish a rate of convergence for mixture proportion estimation under an appropriate distributional assumption, and argue that this rate of convergence is useful for analyzing weakly supervised learning algorithms that build on MPE. To illustrate this idea, we examine an algorithm for classification in the presence of noisy labels based on surrogate risk minimization, and show that the rate of convergence for MPE enables proof of the algorithm’s consistency. Finally, we provide a practical implementation of mixture proportion estimation and demonstrate its efficacy in classification with noisy labels.
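For orientation, MPE is usually stated as recovering the largest weight a known component can take in an observed mixture. The formulation below follows the standard Blanchard/Scott-style notation; it is included for illustration and is not copied from this paper.

```latex
% Standard statement of mixture proportion estimation (illustrative notation).
% Given an observed mixture $F$ and a known component $H$, MPE estimates
\[
  \kappa^{*} \;=\; \sup\bigl\{\, \kappa \in [0,1] \;:\;
    F = \kappa H + (1-\kappa)\,G \ \text{for some distribution } G \,\bigr\}.
\]
```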

79 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive literature review of how data-driven approaches have enabled or inhibited the successful achievement of the 17 Sustainable Development Goals to date, showing that data-driven analytics and tools contribute to achieving the 17 SDGs, e.g., by making information more reliable, supporting better-informed decision-making, implementing data-based policies, prioritizing actions, and optimizing the allocation of resources.
Abstract: The United Nations’ Sustainable Development Goals (SDGs) set out to improve the quality of life of people in developed, emerging, and developing countries by covering social and economic aspects, with a focus on environmental sustainability. At the same time, data-driven technologies influence our lives in all areas and have caused fundamental economical and societal changes. This study presents a comprehensive literature review on how data-driven approaches have enabled or inhibited the successful achievement of the 17 SDGs to date. Our findings show that data-driven analytics and tools contribute to achieving the 17 SDGs, e.g., by making information more reliable, supporting better-informed decision-making, implementing data-based policies, prioritizing actions, and optimizing the allocation of resources. Based on a qualitative content analysis, results were aggregated into a conceptual framework, including the following categories: (1) uses of data-driven methods (e.g., monitoring, measurement, mapping or modeling, forecasting, risk assessment, and planning purposes), (2) resulting positive effects, (3) arising challenges, and (4) recommendations for action to overcome these challenges. Despite positive effects and versatile applications, problems such as data gaps, data biases, high energy consumption of computational resources, ethical concerns, privacy, ownership, and security issues stand in the way of achieving the 17 SDGs.

9 citations

Journal ArticleDOI
01 Sep 2022 - PLOS ONE
TL;DR: In this paper, the authors explore region-level indicators to predict the persistence of child marriage in four countries in South Asia, namely Bangladesh, India, Nepal and Pakistan, and develop a prediction model that relies largely on regional and local inputs such as droughts, floods, population growth and nightlight data to model the incidence of child marriage.
Abstract: Globally, 21 percent of young women are married before their 18th birthday. Despite some progress in addressing child marriage, it remains a widespread practice, in particular in South Asia. While household predictors of child marriage have been studied extensively in the literature, the evidence base on macro-economic factors contributing to child marriage and models that predict where child marriage cases are most likely to occur remains limited. In this paper we aim to fill this gap and explore region-level indicators to predict the persistence of child marriage in four countries in South Asia, namely Bangladesh, India, Nepal and Pakistan. We apply machine learning techniques to child marriage data and develop a prediction model that relies largely on regional and local inputs such as droughts, floods, population growth and nightlight data to model the incidence of child marriage. We find that our gradient boosting model identifies a large proportion of the true child marriage cases, correctly classifying 77% of them, with higher accuracy in Bangladesh (92% of cases) and lower accuracy in Nepal (70% of cases). In addition, for all countries the top 10 classification variables include nighttime-light growth, a drought shock index over the previous year and the previous two years, and the regional level of education, suggesting that income shocks, regional economic activity and regional education levels play a significant role in predicting child marriage. Given its accuracy, the model is a valuable tool to support policy design in countries where household-level data remains limited.
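A minimal sketch of the region-level setup the abstract describes is given below, using scikit-learn's gradient boosting and recall as the headline metric (the share of true cases recovered). The feature names follow the abstract loosely; the exact variables, file, and tuning are assumptions, not the authors' specification.

```python
# Hypothetical sketch of a region-level child-marriage classifier.
# File path and column names are placeholders inspired by the abstract.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("regions.csv")             # hypothetical regional panel
features = ["nightlight_growth", "drought_shock_1y",
            "drought_shock_2y", "regional_education"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["child_marriage"],
    stratify=df["child_marriage"], random_state=0)

gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
# Recall: the proportion of true marriage cases the model identifies.
print("recall:", recall_score(y_test, gbm.predict(X_test)))
```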

1 citation

Journal ArticleDOI
TL;DR: In this article, the authors propose a systematic approach combining Closest Distance Ranking and Principal Component Analysis to deal with unbalanced datasets, which significantly increases the performance of machine learning-based systems.
Abstract: Healthcare is a sensitive sector, and addressing class imbalance in healthcare data is a time-consuming task for machine learning-based systems due to the vast amount of data. This study looks into the impact of socioeconomic disparities on the healthcare data of diabetic patients to make accurate disease predictions. It proposes a systematic approach of Closest Distance Ranking and Principal Component Analysis to deal with the unbalanced dataset, analyzed with a typical machine learning technique; the dataset of pregnant diabetic women is analyzed for accurate detection. The results are assessed using sensitivity, which demonstrates that the minority class's lack of information makes it impossible to forecast outcomes from the raw, unbalanced data. When the dataset was instead treated with the proposed technique and evaluated with the machine learning algorithm, the performance of the system increased significantly. For the first time, an unbalanced dataset was treated with a combination of Closest Distance Ranking and Principal Component Analysis.
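The abstract does not define "Closest Distance Ranking", so the sketch below assumes one plausible reading: rank majority-class samples by distance to the minority-class centroid, keep the closest ones to balance the classes, then apply PCA. Every name and step here is an illustrative assumption, not the authors' method.

```python
# ASSUMED interpretation of Closest Distance Ranking + PCA, for illustration only.
import numpy as np
from sklearn.decomposition import PCA

def closest_distance_undersample(X, y, minority=1):
    """Keep the majority samples closest to the minority centroid (hypothetical)."""
    X_min, X_maj = X[y == minority], X[y != minority]
    centroid = X_min.mean(axis=0)
    dist = np.linalg.norm(X_maj - centroid, axis=1)
    keep = np.argsort(dist)[: len(X_min)]            # balance the two classes
    X_bal = np.vstack([X_min, X_maj[keep]])
    y_bal = np.array([minority] * len(X_min) + [1 - minority] * len(keep))
    return X_bal, y_bal

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (rng.random(500) < 0.1).astype(int)              # 10% minority class
X_bal, y_bal = closest_distance_undersample(X, y)
X_reduced = PCA(n_components=3).fit_transform(X_bal) # dimensionality reduction
```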

1 citation

References
Journal ArticleDOI
01 Oct 2001
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in splitting, and the same ideas are also applicable to regression.
Abstract: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
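The abstract's main ingredients, random feature selection at each split, internal (out-of-bag) error estimates, and variable importance, can be reproduced in a few lines with scikit-learn; the synthetic data below is purely for illustration.

```python
# Minimal random-forest sketch: random feature subsets per split,
# out-of-bag ("internal") error estimate, and variable importances.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
forest = RandomForestClassifier(
    n_estimators=500,
    max_features="sqrt",   # random subset of features tried at each split
    oob_score=True,        # out-of-bag estimate of generalization error
    random_state=0,
).fit(X, y)

print("OOB accuracy:", forest.oob_score_)
print("top features:", forest.feature_importances_.argsort()[::-1][:5])
```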

79,257 citations

Journal ArticleDOI
TL;DR: A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification.
Abstract: Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient descent “boosting” paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such “TreeBoost” models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and of Friedman, Hastie and Tibshirani are discussed.
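The least-squares case of this paradigm reduces to repeatedly fitting a small regression tree to the current residuals (the negative gradient of squared-error loss) and adding it with a shrinkage factor. A minimal sketch, with synthetic data for illustration:

```python
# Least-squares gradient boosting from scratch: each shallow tree is fit
# to the residuals of the current ensemble prediction.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=500)

pred, trees, lr = np.full(500, y.mean()), [], 0.1
for _ in range(200):
    residual = y - pred                    # negative gradient of L2 loss
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += lr * tree.predict(X)           # shrunken additive update
    trees.append(tree)

print("training MSE:", np.mean((y - pred) ** 2))
```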

17,764 citations

Journal ArticleDOI
TL;DR: In this article, a method of over-sampling the minority class by creating synthetic minority class examples is proposed and evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
Abstract: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
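The core synthetic-oversampling step is simple to write down: for each minority example, pick one of its k nearest minority neighbours and interpolate a new point a random fraction of the way toward it. The sketch below is a bare-bones version of that idea, not the authors' reference implementation.

```python
# Minimal synthetic minority over-sampling: interpolate between a minority
# point and a randomly chosen one of its k nearest minority neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)          # column 0 is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))       # random minority example
        j = idx[i, rng.integers(1, k + 1)] # one of its k nearest neighbours
        gap = rng.random()                 # random interpolation fraction
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).normal(size=(30, 4))
X_new = smote(X_min, n_new=60)             # 200% over-sampling of the minority
```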

17,313 citations

Journal ArticleDOI
TL;DR: In comparative timings, the new algorithms are considerably faster than competing methods and can handle large problems and can also deal efficiently with sparse features.
Abstract: We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include l(1) (the lasso), l(2) (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
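The workhorse update is a cyclical coordinate-wise soft-thresholding step. The sketch below implements it for the elastic net on standardized inputs (the lasso when l1_ratio=1); a production implementation would add convergence checks and a path of decreasing penalties, as glmnet does.

```python
# Cyclical coordinate descent for the elastic net on standardized data.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, lam=0.1, l1_ratio=0.5, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j / n
            beta[j] = soft_threshold(z, lam * l1_ratio) / (
                X[:, j] @ X[:, j] / n + lam * (1 - l1_ratio))
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)); X -= X.mean(0); X /= X.std(0)
beta_true = np.zeros(10); beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(scale=0.1, size=200); y -= y.mean()
print(elastic_net_cd(X, y, lam=0.05).round(2))       # approx [2, -1, 0.5, 0, ...]
```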

13,656 citations

Trending Questions (1)
Data mining techniques for predicting neonatal mortality?

This article uses multiple machine learning techniques to identify predictors of neonatal and infant mortality using nation-wide household survey data from India.