scispace - formally typeset
Author

Hans Henrik Thodberg

Bio: Hans Henrik Thodberg is an academic researcher from CERN. The author has contributed to research in topics: Bone age & Nucleon. The author has an h-index of 28 and has co-authored 67 publications receiving 3,010 citations. Previous affiliations of Hans Henrik Thodberg include Technical University of Denmark & Novo Nordisk.


Papers
Journal ArticleDOI
TL;DR: The method reconstructs, from radiographs of the hand, the borders of 15 bones automatically and then computes "intrinsic" bone ages for each of 13 bones (radius, ulna, and 11 short bones) and transforms the intrinsic bone ages into Greulich Pyle or Tanner Whitehouse bone age.
Abstract: Bone age rating is associated with a considerable variability from the human interpretation, and this is the motivation for presenting a new method for automated determination of bone age (skeletal maturity). The method, called BoneXpert, reconstructs, from radiographs of the hand, the borders of 15 bones automatically and then computes "intrinsic" bone ages for each of 13 bones (radius, ulna, and 11 short bones). Finally, it transforms the intrinsic bone ages into Greulich Pyle (GP) or Tanner Whitehouse (TW) bone age. The bone reconstruction method automatically rejects images with abnormal bone morphology or very poor image quality. From the methodological point of view, BoneXpert contains the following innovations: 1) a generative model (active appearance model) for the bone reconstruction; 2) the prediction of bone age from shape, intensity, and texture scores derived from principal component analysis; 3) the consensus bone age concept that defines bone age of each bone as the best estimate of the bone age of the other bones in the hand; 4) a common bone age model for males and females; and 5) the unified modelling of TW and GP bone age. BoneXpert is developed on 1559 images. It is validated on the Greulich Pyle atlas in the age range 2-17 years yielding an SD of 0.42 years [0.37; 0.47] 95% conf, and on 84 clinical TW-rated images yielding an SD of 0.80 years [0.68; 0.93] 95% conf. The precision of the GP bone age determination (its ability to yield the same result on a repeated radiograph) is inferred under suitable assumptions from six longitudinal series of radiographs. The result is an SD on a single determination of 0.17 years [0.13; 0.21] 95% conf.
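
The consensus bone age concept can be illustrated with a toy calculation (a simplified sketch of the idea, not the actual BoneXpert model; the intrinsic ages and the deviation threshold below are made up): each bone's intrinsic age is compared with the estimate obtained from the remaining bones, here taken as their leave-one-out average.

    import numpy as np

    # Hypothetical intrinsic bone ages (years) for 13 bones of one hand.
    intrinsic = np.array([11.8, 12.1, 12.0, 11.9, 12.3, 12.2, 11.7,
                          12.0, 12.4, 11.9, 12.1, 12.2, 13.5])

    n = len(intrinsic)
    total = intrinsic.sum()

    # Consensus estimate for each bone: the mean of all *other* bones,
    # i.e. a leave-one-out average (a simplification of the paper's concept).
    consensus = (total - intrinsic) / (n - 1)

    # Flag bones whose intrinsic age deviates strongly from the consensus;
    # the 1.0-year threshold is an arbitrary illustration, not from the paper.
    deviation = intrinsic - consensus
    outliers = np.where(np.abs(deviation) > 1.0)[0]

    print("hand bone age (mean of bones): %.2f years" % intrinsic.mean())
    print("flagged bones:", outliers, "deviations:", deviation[outliers].round(2))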

333 citations

Journal ArticleDOI
TL;DR: The RSNA Pediatric Bone Age Machine Learning Challenge showed how a coordinated approach to solving a medical imaging problem can be successfully conducted and will catalyze collaboration and development of ML tools and methods that can potentially improve diagnostic accuracy and patient care.
Abstract: Purpose The Radiological Society of North America (RSNA) Pediatric Bone Age Machine Learning Challenge was created to show an application of machine learning (ML) and artificial intelligence (AI) in medical imaging, promote collaboration to catalyze AI model creation, and identify innovators in medical imaging. Materials and Methods The goal of this challenge was to solicit individuals and teams to create an algorithm or model using ML techniques that would accurately determine skeletal age in a curated data set of pediatric hand radiographs. The primary evaluation measure was the mean absolute distance (MAD) in months, which was calculated as the mean of the absolute values of the difference between the model estimates and those of the reference standard, bone age. Results A data set consisting of 14 236 hand radiographs (12 611 training set, 1425 validation set, 200 test set) was made available to registered challenge participants. A total of 260 individuals or teams registered on the Challenge website. A total of 105 submissions were uploaded from 48 unique users during the training, validation, and test phases. Almost all methods used deep neural network techniques based on one or more convolutional neural networks (CNNs). The best five results based on MAD were 4.2, 4.4, 4.4, 4.5, and 4.5 months, respectively. Conclusion The RSNA Pediatric Bone Age Machine Learning Challenge showed how a coordinated approach to solving a medical imaging problem can be successfully conducted. Future ML challenges will catalyze collaboration and development of ML tools and methods that can potentially improve diagnostic accuracy and patient care. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Siegel in this issue.
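
The challenge's evaluation metric is simple to state in code; a minimal sketch (the array values are made up):

    import numpy as np

    # Model estimates and reference-standard bone ages, in months (made-up values).
    predicted = np.array([120.0, 95.5, 143.0, 60.2])
    reference = np.array([118.0, 99.0, 140.5, 63.0])

    # Mean absolute distance (MAD): mean of |prediction - reference|.
    mad = np.mean(np.abs(predicted - reference))
    print("MAD = %.1f months" % mad)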

277 citations

Journal ArticleDOI
TL;DR: In this paper, an optimal minimal neural-network interpretation of spectra (OMNIS) based on principal component analysis and artificial neural networks is presented for the purpose of classification or quantitative determination.
Abstract: A new method, optimal minimal neural-network interpretation of spectra (OMNIS), based on principal component analysis and artificial neural networks, is presented. OMNIS is useful whenever spectra are measured for the purpose of classification or quantitative determination. The spectra can be visible light, near-infrared (NIR) light, sound, or any other large amount of correlated data.
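
The OMNIS idea of compressing correlated spectra with principal component analysis before feeding the scores to a small neural network can be sketched with standard tools (a rough analogue using scikit-learn, not the original implementation; the synthetic data, component count, and network size are arbitrary):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(200, 500))    # 200 spectra, 500 wavelengths (synthetic)
    target = spectra[:, :10].sum(axis=1)     # synthetic property to predict

    # PCA compresses the correlated spectral channels to a few scores,
    # and a small network maps the scores to the quantity of interest.
    model = make_pipeline(
        PCA(n_components=10),
        MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0))
    model.fit(spectra, target)
    print("R^2 on training data:", model.score(spectra, target))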

276 citations

Journal ArticleDOI
TL;DR: MacKay's Bayesian framework for backpropagation is a practical and powerful means to improve the generalization ability of neural networks and is applied in the prediction of fat content in minced meat from near infrared spectra.
Abstract: MacKay's (1992) Bayesian framework for backpropagation is a practical and powerful means to improve the generalization ability of neural networks. It is based on a Gaussian approximation to the posterior weight distribution. The framework is extended, reviewed, and demonstrated in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and a detailed and explicit procedure for adjusting several weight decay parameters is given. Bayesian backprop is applied in the prediction of fat content in minced meat from near infrared spectra. It outperforms "early stopping" as well as quadratic regression. The evidence of a committee of differently trained networks is computed, and the corresponding improved generalization is verified. The error bars on the predictions of the fat content are computed. There are three contributors: The random noise, the uncertainty in the weights, and the deviation among the committee members. The Bayesian framework is compared to Moody's GPE (1992). Finally, MacKay and Neal's automatic relevance determination, in which the weight decay parameters depend on the input number, is applied to the data with improved results.
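
The decomposition of the predictive error bar into three contributions (random noise, weight uncertainty, committee disagreement) amounts to adding variances; a schematic sketch with made-up numbers, not quantities fitted by the paper's procedure:

    import numpy as np

    # Per-member predictions of fat content from a committee of networks (made-up).
    committee_preds = np.array([12.1, 11.8, 12.4, 12.0])

    # The three variance contributions (illustrative values only):
    noise_var = 0.30 ** 2                    # estimated random noise on the target
    weight_var = 0.20 ** 2                   # uncertainty from the posterior over weights
    committee_var = committee_preds.var()    # disagreement among committee members

    prediction = committee_preds.mean()
    error_bar = np.sqrt(noise_var + weight_var + committee_var)
    print("prediction = %.2f +/- %.2f" % (prediction, error_bar))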

192 citations

Journal ArticleDOI
TL;DR: It is concluded that the DXR method offers a BMD estimate with a good correlation with distal forearm BMD, a low variation between geographical sites and a precision that potentially allows for relatively short observation intervals.
Abstract: A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bones are used to obtain a BMD estimate of the distal forearm, referred to as BMDDXR (from digital X-ray radiogrammetry, DXR). The measured dimensions for each bone are the cortical thickness and the outer width, in combination with an estimate of the cortical porosity. The short-term in vivo precision of BMDDXR was observed to be 0.60% in a clinical study of 24 women and the in vitro variation over 12 different radiological clinics was found to be 1% of the young normal BMDDXR level. In a cohort of 416 women BMDDXR was found to be closely correlated with BMD at the distal forearm measured by dual-energy X-ray absorptiometry (r= 0.86, p<0.0001) and also with BMD at the spine, total hip and femoral neck (r= 0.62, 0.69 and 0.73, respectively, p<0.0001 for all). The annual decline was estimated from the cohort to be 1.05% in the age group 55–65 years. Relative to this age-related loss, the reported short-term precision allows for monitoring intervals of 1.0 years and 1.6 years in order to detect expected age-related changes with a confidence of 80% and 95%, respectively. It is concluded that the DXR method offers a BMD estimate with a good correlation with distal forearm BMD, a low variation between geographical sites and a precision that potentially allows for relatively short observation intervals.
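
The quoted monitoring intervals follow directly from the precision and the annual decline; a sketch of the arithmetic, assuming the usual least-significant-change formula z · sqrt(2) · precision (an assumption on my part, but it reproduces the stated 1.0 and 1.6 years):

    import math

    precision = 0.60       # short-term precision of BMDDXR, % of young normal level
    annual_decline = 1.05  # age-related loss, % per year (age group 55-65)

    for conf, z in [(80, 1.28), (95, 1.96)]:
        # Smallest change distinguishable from measurement noise at this confidence,
        # comparing two measurements (hence the factor sqrt(2)).
        least_significant_change = z * math.sqrt(2) * precision
        interval = least_significant_change / annual_decline
        print("%d%% confidence: monitoring interval about %.1f years" % (conf, interval))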

178 citations


Cited by
Journal ArticleDOI
TL;DR: The Pythia program as mentioned in this paper can be used to generate high-energy-physics "events" (i.e. sets of outgoing particles produced in the interactions between two incoming particles).
Abstract: The Pythia program can be used to generate high-energy-physics "events", i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a role, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest.

6,300 citations

Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors proposed a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections using a three-step method.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.
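
The three-step recipe (train, prune small-magnitude connections, retrain the survivors) can be sketched in a few lines; a simplified magnitude-pruning illustration in PyTorch, not the authors' code (the layer size, pruning fraction, and training loop are arbitrary stand-ins):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    layer = nn.Linear(100, 10)                      # stand-in for a trained layer
    x, y = torch.randn(64, 100), torch.randn(64, 10)
    opt = torch.optim.SGD(layer.parameters(), lr=0.01)

    def train(steps):
        for _ in range(steps):
            opt.zero_grad()
            nn.functional.mse_loss(layer(x), y).backward()
            opt.step()

    train(100)                                      # step 1: learn which weights matter

    # Step 2: prune the 90% of connections with the smallest magnitude.
    with torch.no_grad():
        k = int(0.9 * layer.weight.numel())
        threshold = layer.weight.abs().flatten().kthvalue(k).values
        mask = (layer.weight.abs() > threshold).float()
        layer.weight.mul_(mask)

    # Step 3: retrain, keeping pruned weights at zero by masking their gradients.
    for _ in range(100):
        opt.zero_grad()
        nn.functional.mse_loss(layer(x), y).backward()
        layer.weight.grad.mul_(mask)
        opt.step()

    print("nonzero weights:", int((layer.weight != 0).sum()), "of", layer.weight.numel())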

3,967 citations

BookDOI
07 May 2015
TL;DR: Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
Abstract: Discover New Methods for Dealing with High-Dimensional Data A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data. Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of ℓ1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing. It concludes with a survey of theoretical results for the lasso. In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.
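
The coordinate descent algorithm mentioned above cycles through the coefficients, soft-thresholding each one in turn; a bare-bones didactic sketch for the lasso with standardized features (not code from the book, and the synthetic data are made up):

    import numpy as np

    def soft_threshold(z, gamma):
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def lasso_coordinate_descent(X, y, lam, n_iters=100):
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(n_iters):
            for j in range(p):
                # Partial residual: remove the contribution of all coefficients except beta_j.
                r_j = y - X @ beta + X[:, j] * beta[j]
                # Univariate least-squares update followed by soft-thresholding.
                beta[j] = soft_threshold(X[:, j] @ r_j / n, lam) / (X[:, j] @ X[:, j] / n)
        return beta

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    X = (X - X.mean(0)) / X.std(0)            # standardize columns
    y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=50)
    print(lasso_coordinate_descent(X, y, lam=0.1).round(2))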

2,275 citations

Proceedings Article
01 Jan 1999

2,010 citations

Proceedings Article
02 Dec 1991
TL;DR: It is proven that a weight decay has two effects in a linear network, and it is shown how to extend these results to networks with hidden layers and non-linear units.
Abstract: It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.
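
In a linear network the effect described above can be seen directly from the closed-form solution: weight decay lambda turns the least-squares weights into w = (X^T X + lambda I)^(-1) X^T y, which suppresses the weight components along directions where the data carry little information. A small numerical sketch on synthetic data (my own illustration, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    # One informative input direction and one nearly irrelevant (tiny-variance) direction.
    X = np.column_stack([rng.normal(size=n), 1e-3 * rng.normal(size=n)])
    y = X[:, 0] + 0.1 * rng.normal(size=n)     # targets with static noise

    def linear_weights(X, y, lam):
        # Weight-decay (ridge) solution for a linear network: (X^T X + lam I)^-1 X^T y
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    # Without decay, the irrelevant weight blows up to fit noise; with decay it is suppressed.
    print("no decay:   ", linear_weights(X, y, 0.0).round(3))
    print("with decay: ", linear_weights(X, y, 0.1).round(3))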

1,569 citations