scispace - formally typeset
Author

Samuel Iddi

Bio: Samuel Iddi is an academic researcher from the University of Ghana. The author has contributed to research in topics: Overdispersion & Medicine. The author has an h-index of 12 and has co-authored 44 publications receiving 358 citations. Previous affiliations of Samuel Iddi include Katholieke Universiteit Leuven & University of Southern California.


Papers
Journal ArticleDOI
TL;DR: In this article, a standardized training program provided to 1,276 facilitators in Germany, Hungary, Ireland, and Portugal was used to improve attitudes toward depression, knowledge about suicide, and confidence in detecting suicidal behavior in four European countries, and to identify specific training needs across regions and CF groups.

61 citations

Journal ArticleDOI
TL;DR: A latent time joint mixed effects model is proposed to characterize long-term disease dynamics using data from the Alzheimer's Disease Neuroimaging Initiative and Markov chain Monte Carlo methods are proposed for estimation, model selection, and inference.
Abstract: Characterization of long-term disease dynamics, from disease-free to end-stage, is integral to understanding the course of neurodegenerative diseases such as Parkinson's and Alzheimer's, and ultimately, how best to intervene. Natural history studies typically recruit multiple cohorts at different stages of disease and follow them longitudinally for a relatively short period of time. We propose a latent time joint mixed effects model to characterize long-term disease dynamics using this short-term data. Markov chain Monte Carlo methods are proposed for estimation, model selection, and inference. We apply the model to detailed simulation studies and data from the Alzheimer's Disease Neuroimaging Initiative.
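The latent-time idea can be sketched with a deliberately simplified toy example: each subject observes a short window of a common long-term sigmoid curve, offset by an unknown per-subject time shift, and the shifts are recovered by grid search against the (assumed known) curve. This is a hypothetical numpy illustration only, not the authors' model, which jointly estimates the curve, shifts, and random effects by MCMC.

```python
import numpy as np

rng = np.random.default_rng(2)
# "True" long-term progression curve (assumed known in this toy; the
# paper estimates it jointly with the subject-level time shifts).
curve = lambda t: 1.0 / (1.0 + np.exp(-0.5 * t))

n_subj, n_visits = 50, 3
shifts = rng.uniform(-4, 4, n_subj)                 # latent disease times
t_obs = np.tile(np.arange(n_visits), (n_subj, 1))   # short follow-up: 0, 1, 2
y = curve(t_obs + shifts[:, None]) + rng.normal(0, 0.02, t_obs.shape)

# Recover each subject's shift by grid search over candidate offsets.
grid = np.linspace(-6, 6, 481)
est = np.array([grid[np.argmin([np.sum((y[i] - curve(t_obs[i] + d)) ** 2)
                                for d in grid])]
                for i in range(n_subj)])

err = np.mean(np.abs(est - shifts))
print(f"mean absolute shift error: {err:.2f}")
```

Even with only three visits per subject, aligning the short windows on a shared latent time axis recovers the long-term trajectory, which is the core intuition behind the joint model.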

43 citations

Journal ArticleDOI
TL;DR: A two-stage approach is proposed for modeling and predicting measures of cognition, function, brain imaging, fluid biomarkers, and diagnosis using multiple domains simultaneously; using a single joint mixed-effects model for all continuous outcomes yields better diagnostic classification accuracy than separate univariate models.
Abstract: Alzheimer’s disease is the most common neurodegenerative disease and is characterized by the accumulation of amyloid-beta plaques and tau protein tangles in the brain. These neuropathological features precede cognitive impairment and Alzheimer’s dementia by many years. To better understand and predict the course of disease from early-stage asymptomatic to late-stage dementia, it is critical to study the patterns of progression of multiple markers. In particular, we aim to predict the likely future course of progression for individuals given only a single observation of their markers. Improved individual-level prediction may lead to improved clinical care and clinical trials. We propose a two-stage approach to modeling and predicting measures of cognition, function, brain imaging, fluid biomarkers, and diagnosis of individuals using multiple domains simultaneously. In the first stage, joint (or multivariate) mixed-effects models are used to simultaneously model multiple markers over time. In the second stage, random forests are used to predict categorical diagnoses (cognitively normal, mild cognitive impairment, or dementia) from predictions of continuous markers based on the first-stage model. The combination of the two models allows one to leverage their key strengths in order to obtain improved accuracy. We characterize the predictive accuracy of this two-stage approach using data from the Alzheimer’s Disease Neuroimaging Initiative. The two-stage approach using a single joint mixed-effects model for all continuous outcomes yields better diagnostic classification accuracy compared to using separate univariate mixed-effects models for each of the continuous outcomes. Overall prediction accuracy above 80% was achieved over a period of 2.5 years.
The results further indicate that overall accuracy is improved when markers from multiple assessment domains, such as cognition, function, and brain imaging, are used in the prediction algorithm as compared to the use of markers from a single domain only.
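As a rough illustration of the two-stage idea (hypothetical code, not the authors' implementation), the sketch below replaces the joint mixed-effects model with a simple per-subject linear fit for each marker, then trains a random forest on the fitted coefficients to classify diagnosis; scikit-learn's RandomForestClassifier is assumed to be available, and all data are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subj, n_visits = 200, 4
t = np.arange(n_visits)

# Simulate two continuous markers whose slopes differ by diagnosis (0/1/2).
diagnosis = rng.integers(0, 3, size=n_subj)
slopes = np.array([[0.0, 0.0], [0.5, -0.3], [1.0, -0.8]])[diagnosis]
markers = (slopes[:, None, :] * t[None, :, None]
           + rng.normal(0, 0.5, (n_subj, n_visits, 2)))

# Stage 1: per-subject linear fit -> fitted slope and intercept per marker
# (a crude stand-in for the joint mixed-effects model's subject predictions).
feats = []
for m in range(2):
    coef = np.polyfit(t, markers[:, :, m].T, deg=1)  # shape (2, n_subj)
    feats.append(coef.T)
X = np.hstack(feats)  # (n_subj, 4): slope & intercept for each marker

# Stage 2: random forest classifies diagnosis from the stage-1 features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], diagnosis[:150])
acc = clf.score(X[150:], diagnosis[150:])
print(f"held-out accuracy: {acc:.2f}")
```

The division of labor mirrors the paper: a longitudinal model smooths the noisy continuous markers, and a flexible classifier maps the smoothed values to a categorical diagnosis.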

41 citations

Journal ArticleDOI
01 Nov 2020
TL;DR: The Nairobi Urban Health and Demographic Surveillance System (NUHDSS) has been in existence since 2002 and has been a vital platform for providing a more nuanced understanding of changes in the health and socioeconomic status of urban slum dwellers, and allows the elucidation of intra-urban and intra-slum differences.
Abstract: The Nairobi Urban Health and Demographic Surveillance System, managed by the African Population and Health Research Center, has been in existence since 2002. After almost two decades of surveillance, it is important to reflect on what the platform has achieved, its value-for-money, and its utility in the population health and wellbeing research landscape. In this paper, we deploy a descriptive-analytical approach to articulate the historical dimensions, values, processes, challenges and lessons learned since the platform was established. We highlight its critical components, important developments over time and key findings that demonstrate its impact on livelihoods of vulnerable populations in urban slum settings of Nairobi. Additionally, this paper provides detailed background information for several papers that utilize the NUHDSS data and are, together with the present paper, submitted to the Journal of Global Epidemiology as part of the special issue titled “Epidemiologic Evidence from the Nairobi Urban Health and Demographic Surveillance Data”. From the evidence available, it is apparent that over the years, most health and socio-economic indicators have not significantly improved in the slum areas. The findings generated from the various thematic analytical papers underscore the need for improved programming, advocacy and policy-making that targets urban slum dwellers. We conclude that the longitudinal perspective of the Nairobi Urban Health and Demographic Surveillance System remains a vital platform for providing a more nuanced understanding of changes in the health and socio-economic status of urban slum dwellers, and allows the elucidation of intra-urban and intra-slum differences.

38 citations


Cited by
Journal ArticleDOI
TL;DR: Metaprop was applied to two published meta-analyses, of HPV infection in women with a Pap smear showing ASC-US and of the cure rate after treatment for cervical precancer using cold coagulation, which showed a pooled HPV prevalence of 43% and a pooled cure rate of 94%, respectively.
Abstract: Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan, which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins, where the normal approximation procedures often break down, by using the binomial distribution to model the within-study variability or by allowing the Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied to two published meta-analyses: 1) prevalence of HPV infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study-specific and pooled confidence intervals were always within admissible values, contrary to the original publication, where metan was used.
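The Freeman-Tukey double arcsine approach described above can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not the Stata metaprop code: the study counts are invented, and the simple sin² back-transformation is used, whereas metaprop offers a more refined inversion.

```python
import numpy as np

events = np.array([0, 5, 12, 30])   # hypothetical per-study event counts
n = np.array([10, 40, 60, 100])     # hypothetical per-study sample sizes

# Freeman-Tukey double arcsine transform; its variance is ~1/(4n + 2),
# which stays finite even when a study observes 0% or 100% events,
# so no study has to be excluded from the pooling step.
t = 0.5 * (np.arcsin(np.sqrt(events / (n + 1)))
           + np.arcsin(np.sqrt((events + 1) / (n + 1))))
var = 1.0 / (4 * n + 2)

w = 1.0 / var                          # inverse-variance weights
t_pooled = np.sum(w * t) / np.sum(w)   # fixed-effect pooled transform
p_pooled = np.sin(t_pooled) ** 2       # simple back-transformation
print(f"pooled proportion: {p_pooled:.3f}")
```

Note that the zero-event study contributes a finite, well-defined weight, which is exactly the margin case where normal-approximation pooling breaks down.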

1,444 citations

Journal Article
TL;DR: In this paper, immunization coverage levels and trends are used to monitor the performance of immunization services locally, nationally and internationally, and to guide strategies for the eradication, elimination and control of vaccine-preventable diseases.
Abstract: Introduction. WHO recommends that all children receive one dose of bacille Calmette-Guerin vaccine (BCG), three doses of diphtheria-tetanus-pertussis vaccine (DTP), three doses of either oral polio vaccine (OPV) or inactivated polio vaccine (IPV), three doses of hepatitis B vaccine, and one dose of a measles virus-containing vaccine (MVCV), either anti-measles alone or in combination with other antigens (1-9). It also recommends three doses of vaccine against infection with Haemophilus influenzae type b (Hib) (10). To boost immunity at older ages, additional immunizations are recommended for healthcare workers, travellers, high-risk groups and people in areas where the risk of specific vaccine-preventable diseases is high (11). Immunization coverage levels and trends are used (i) to monitor the performance of immunization services locally, nationally and internationally; (ii) to guide strategies for the eradication, elimination and control of vaccine-preventable diseases (12-14); (iii) to identify areas of immunization systems that may require additional resources and focused attention (15,16); and (iv) to assess the need to introduce new vaccines into national and local immunization systems (17). Models of vaccine-preventable disease burden frequently include immunization coverage levels among their components (18-20). Coverage levels for measles vaccine and DTP are indicators of health system performance frequently considered by funding agencies when reviewing applications for financial and technical support (21-24). Measles immunization coverage is one of the indicators for tracking progress towards Millennium Development Goal 4, to reduce child mortality (25). Furthermore, trends in immunization coverage are used to establish the link between immunization service delivery and disease occurrence and to provide a framework for setting future coverage goals (26).
Trends in immunization coverage. While some countries had routine immunization systems in place before 1980, major national and international development of routine, universal infant immunization systems did not begin until the late 1970s. In fact, it was not until the 1980s that dramatic improvements in immunization coverage were achieved, with an increase in coverage with the third dose of DTP vaccine (DTP3) from 20% in 1980 to 75% in 1990. While some countries reported significant declines in coverage after 1990, global coverage levels remained fairly constant and began rising slowly but steadily in 2000, until DTP3 coverage worldwide reached 81% in 2006. In 1980, fewer than 10% of the world's children lived in the 20 of 167 countries with DTP3 coverage levels greater than 80%; 84% of the world's children lived in countries where coverage was less than 50%. By 1990, 108 countries (43% of all children) had DTP3 coverage levels greater than 80%, and fewer than 10% of children lived in countries with under 50% coverage. Although national coverage levels can "mask" sub-national geographical or sociological pockets where coverage is much lower, in 2006, 57% of children lived in countries with greater than 80% DTP3 coverage. Still, that year approximately 26.3 million children who reached their first birthday did not receive DTP3, and 16.2 million (62%) of them lived in China, India, Indonesia or Nigeria. At the time this report was prepared, there remained seven countries where fewer than half of the children were vaccinated with three doses of DTP: Angola, Central African Republic, Chad, Equatorial Guinea, Gabon, Niger and Somalia. WHO and UNICEF publish annual estimates of national immunization coverage (27-30); such estimates have been available by country since 1980 at http://www.who.int/immunization_monitoring/en/globalsummary/wucoveragecountrylist.cfm and http://www.childinfo.org/immunization_countryreports.html. Additional analyses can be found at http://www.who.int/immunization_monitoring/data/en/ and http://www …

279 citations

Journal ArticleDOI
TL;DR: One of the main strengths of this book is that it introduces the public domain R software and nicely explains how it can be used in computations of methods presented in the book.
Abstract: The targeted audience for this book is graduate students in engineering and medical statistics courses, and it may be useful for a senior undergraduate statistics course. To get the maximum benefit from this book, one should have a good knowledge and understanding of calculus and sufficient background in elementary probability theory to understand the central limit theorem and the law of large numbers. Some more sophisticated probability terminologies and concepts are defined for a smooth reading of the monograph. This monograph has 10 chapters, including the introduction. Chapter 2 deals with the ageing concept, and some usual parametric families of probability distributions are presented in Chapter 3. Parametric and nonparametric statistical inference are nicely treated in Chapters 4 and 5. Chapter 5 also offers tests for exponentiality, which is one of the main features of the monograph. Chapters 7 and 8 cover two-sample and regression problems, respectively. All of the preceding chapters showcase results for both complete and censored data. One of the interesting contributions is with regard to the analysis of competing risks, which is presented in Chapter 9. Finally, Chapter 10 introduces repairable systems. One of the main strengths of this book is that it introduces the public-domain R software and nicely explains how it can be used in computations of methods presented in the book. This book has sufficient material and examples to cover a one-semester (13-week) course. However, I would be reluctant to adopt this book for one simple reason: there are no exercises. Having said that, the monograph would be useful to some applied researchers in related fields.

262 citations

Journal ArticleDOI
TL;DR: In R, functions for covariances in clustered or panel models have been somewhat scattered or available only for certain modeling functions, notably the (generalized) linear regression model, but these are now directly applicable to models from many packages, including MASS, pscl, countreg, and betareg, among others.
Abstract: Clustered covariances or clustered standard errors are very widely used to account for correlated or clustered data, especially in economics, political sciences, and other social sciences. They are employed to adjust the inference following estimation of a standard least-squares regression or generalized linear model estimated by maximum likelihood. Although many publications just refer to "the" clustered standard errors, there is a surprisingly wide variety of clustered covariances, particularly due to different flavors of bias corrections. Furthermore, while the linear regression model is certainly the most important application case, the same strategies can be employed in more general models (e.g., for zero-inflated, censored, or limited responses). In R, functions for covariances in clustered or panel models have been somewhat scattered or available only for certain modeling functions, notably the (generalized) linear regression model. In contrast, an object-oriented approach to "robust" covariance matrix estimation - applicable beyond lm() and glm() - is available in the sandwich package but has been limited to the case of cross-section or time series data. Starting with sandwich 2.4.0, this shortcoming has been corrected: Based on methods for two generic functions (estfun() and bread()), clustered and panel covariances are provided in vcovCL(), vcovPL(), and vcovPC(). Moreover, clustered bootstrap covariances are provided in vcovBS(), using model update() on bootstrap samples. These are directly applicable to models from packages including MASS, pscl, countreg, and betareg, among many others. Some empirical illustrations are provided as well as an assessment of the methods' performance in a simulation study.
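The cluster-robust "sandwich" idea can be sketched directly. The snippet below is a minimal numpy illustration of the plain flavor (often labeled CR0) on simulated data, not the sandwich package's object-oriented R implementation, and it omits the bias corrections that distinguish the many variants the abstract mentions: bread = (X'X)⁻¹, meat = the sum over clusters of outer products of cluster-summed score contributions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, m = 30, 10                      # 30 clusters of 10 observations
g = np.repeat(np.arange(n_clusters), m)     # cluster ids
x = rng.normal(size=n_clusters * m)
# Errors share a cluster-level component, inducing within-cluster correlation.
u = rng.normal(size=n_clusters)[g] + rng.normal(size=n_clusters * m)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)    # ordinary least squares
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)
# Meat: sum over clusters c of (X_c' u_c)(X_c' u_c)'.
meat = np.zeros((2, 2))
for c in range(n_clusters):
    s = X[g == c].T @ resid[g == c]
    meat += np.outer(s, s)

V_cluster = bread @ meat @ bread            # sandwich covariance
se = np.sqrt(np.diag(V_cluster))
print("coef:", beta.round(2), "clustered SE:", se.round(3))
```

The estimation step is the same plain OLS fit; only the covariance used for inference changes, which is exactly the separation the sandwich package exploits with its estfun()/bread() generics.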

262 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide guidance for the organisation and delivery of clinical services and the clinical management of patients who deliberately self-harm, based on scientific evidence supplemented by expert clinical consensus and expressed as recommendations.
Abstract: Objective:To provide guidance for the organisation and delivery of clinical services and the clinical management of patients who deliberately self-harm, based on scientific evidence supplemented by expert clinical consensus and expressed as recommendations.Method:Articles and information were sourced from search engines including PubMed, EMBASE, MEDLINE and PsycINFO for several systematic reviews, which were supplemented by literature known to the deliberate self-harm working group, and from published systematic reviews and guidelines for deliberate self-harm. Information was reviewed by members of the deliberate self-harm working group, and findings were then formulated into consensus-based recommendations and clinical guidance. The guidelines were subjected to successive consultation and external review involving expert and clinical advisors, the public, key stakeholders, professional bodies and specialist groups with interest and expertise in deliberate self-harm.Results:The Royal Australian and New Ze...

115 citations