Author

Mads Ruben Burgdorff Kristensen

Bio: Mads Ruben Burgdorff Kristensen is an academic researcher from the University of Copenhagen. The author has contributed to research on the topics Python (programming language) and NumPy, has an h-index of 9, and has co-authored 27 publications receiving 257 citations. Previous affiliations of Mads Ruben Burgdorff Kristensen include Nvidia and the Niels Bohr Institute.

Papers
Journal ArticleDOI
TL;DR: DeepQSM can invert the magnetic dipole kernel convolution and delivers robust solutions to this ill-posed problem, enabling identification of deep brain substructures and providing information on their respective magnetic tissue properties.
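The forward model that DeepQSM inverts is standard in the QSM literature; in its textbook formulation (not quoted from the paper itself), the measured field perturbation is the susceptibility distribution convolved with the unit dipole kernel, whose Fourier-domain form is:

    \delta B(\mathbf{r}) = (d * \chi)(\mathbf{r}),
    \qquad
    D(\mathbf{k}) = \frac{1}{3} - \frac{k_z^2}{|\mathbf{k}|^2}

Since D(k) vanishes wherever k_z^2 = |k|^2 / 3 (the roughly 54.7-degree magic-angle cone around the main field), frequencies on that cone cannot be recovered by direct division, which is why the inversion is ill-posed and a learned solver such as DeepQSM is attractive.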

100 citations

Proceedings ArticleDOI
12 Nov 2017
TL;DR: A comprehensive evaluation of a wide spectrum of scientific kernels with a large set of representative inputs on two Intel OPMs, guided by general optimization models, demonstrates OPM's effectiveness in easing programmers' tuning efforts to reach ideal throughput for both compute-bound and memory-bound applications.
Abstract: High-bandwidth On-Package Memory (OPM) innovates on the conventional memory hierarchy by adding a new on-package layer between the classic on-chip cache and off-chip DRAM. Due to its location and capacity, OPM is often used as a new type of last-level cache (LLC). Despite its adoption in modern processors, the performance and power impact of OPM on HPC applications, especially scientific kernels, is still unknown. In this paper, we fill this gap by conducting a comprehensive evaluation of a wide spectrum of scientific kernels with a large set of representative inputs, including dense, sparse, and medium-density problems, on two Intel OPMs: eDRAM on multicore Broadwell and MCDRAM on manycore Knights Landing. Guided by our general optimization models, we demonstrate OPM's effectiveness in easing programmers' tuning efforts to reach ideal throughput for both compute-bound and memory-bound applications.
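The compute-bound versus memory-bound distinction behind the paper's optimization models is commonly captured by the roofline model. The sketch below uses illustrative numbers (assumptions, not the paper's measurements) to show why a higher-bandwidth OPM lifts memory-bound kernels but barely moves compute-bound ones:

    # Roofline model: attainable GFLOP/s = min(peak compute, AI * bandwidth).
    # All constants are illustrative assumptions, not figures from the paper.
    def attainable_gflops(arith_intensity, peak_gflops, bandwidth_gbs):
        """Performance is capped either by compute or by memory traffic."""
        return min(peak_gflops, arith_intensity * bandwidth_gbs)

    PEAK = 3000.0   # assumed peak compute throughput (GFLOP/s)
    DDR_BW = 90.0   # assumed off-chip DRAM bandwidth (GB/s)
    OPM_BW = 400.0  # assumed on-package memory bandwidth (GB/s)

    # Streaming, memory-bound kernel (~0.25 FLOP/byte, e.g. sparse kernels):
    print(attainable_gflops(0.25, PEAK, DDR_BW))  # 22.5  -- capped by DRAM
    print(attainable_gflops(0.25, PEAK, OPM_BW))  # 100.0 -- OPM raises the roof

    # Compute-bound kernel with high data reuse (~30 FLOP/byte, dense GEMM):
    print(attainable_gflops(30.0, PEAK, DDR_BW))  # 2700.0
    print(attainable_gflops(30.0, PEAK, OPM_BW))  # 3000.0 -- compute-limited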

47 citations

Proceedings ArticleDOI
19 May 2014
TL;DR: Bohrium is a runtime system for mapping vector operations onto a number of different hardware platforms, from simple multi-core systems to clusters and GPU-enabled systems; in principle it can be used from any programming language, but for now the supported languages are limited to Python, C++, and the .NET framework.
Abstract: In this paper we introduce Bohrium, a runtime system for mapping vector operations onto a number of different hardware platforms, from simple multi-core systems to clusters and GPU-enabled systems. In order to make efficient choices, Bohrium is implemented as a virtual machine that makes runtime decisions, rather than as a statically compiled library, which is the more common approach. In principle, Bohrium can be used from any programming language, but for now the supported languages are limited to Python, C++, and the .NET framework, e.g. C# and F#. The primary success criteria are to maintain a complete abstraction from low-level details and to provide efficient code execution across different current and future processors. We evaluate the presented design through a setup that targets a multi-core CPU, an eight-node cluster, and a GPU, all preliminary prototypes. The evaluation includes three well-known benchmark applications, Black-Scholes, Shallow Water, and N-body, implemented in C++, Python, and C#, respectively.
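At the time, Bohrium's Python front end was presented as a drop-in NumPy replacement; a minimal sketch of that usage pattern follows (the module name matches early Bohrium releases, and the stencil is a toy stand-in for the Shallow Water benchmark, so treat the details as assumptions rather than the paper's code):

    # Unmodified NumPy-style array code; Bohrium's runtime maps the whole-array
    # vector operations onto the selected backend (multi-core CPU, cluster, GPU).
    import bohrium as np  # drop-in replacement for: import numpy as np

    def step(h, u, v, dt=0.01, g=9.8):
        """One toy shallow-water-like update built from whole-array operations,
        which is exactly the granularity Bohrium schedules at runtime."""
        h = h - dt * (np.roll(u, -1, 0) - u + np.roll(v, -1, 1) - v)
        u = u - dt * g * (np.roll(h, -1, 0) - h)
        v = v - dt * g * (np.roll(h, -1, 1) - h)
        return h, u, v

    h, u, v = np.ones((1000, 1000)), np.zeros((1000, 1000)), np.zeros((1000, 1000))
    for _ in range(100):
        h, u, v = step(h, u, v)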

18 citations

Proceedings ArticleDOI
12 Oct 2010
TL;DR: DistNumPy, a library for numerical computation in Python that targets scalable distributed-memory architectures, is introduced; significant speedup can be obtained from the new array backend without changing the original Python code.
Abstract: In this paper, we introduce DistNumPy, a library for doing numerical computation in Python that targets scalable distributed-memory architectures. DistNumPy extends the NumPy module [15], which is popular for scientific programming. Replacing NumPy with DistNumPy enables the user to write sequential Python programs that seamlessly utilize distributed-memory architectures. This feature is obtained by introducing a new backend for NumPy arrays, which distributes data amongst the nodes of a distributed-memory multi-processor. All operations on this new array will seek to utilize all available processors, and the array itself is distributed between multiple processors in order to support larger arrays than a single node can hold in memory. We perform three experiments with sequential Python programs running on an Ethernet-based cluster of SMP nodes with a total of 64 CPU cores. The results show 88% CPU utilization for a Monte Carlo simulation, 63% for an N-body simulation, and a more modest 50% for a Jacobi solver. The primary limitation on CPU utilization is identified as SMP limitations within each node rather than the distribution aspect. Based on the experiments, we find that it is possible to obtain significant speedup from using our new array backend without changing the original Python code.
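The programming model described here, sequential NumPy code whose arrays are transparently distributed, can be illustrated with the kind of Monte Carlo benchmark the paper evaluates. The script below is ordinary NumPy written as an illustration (not the paper's benchmark code); under DistNumPy the same program would run unchanged, with the backend block-distributing the arrays across nodes:

    # Plain sequential NumPy: nothing DistNumPy-specific appears in user code,
    # because the distribution lives entirely in the array backend.
    import numpy as np

    def monte_carlo_pi(n):
        """Estimate pi from n points in the unit square; every operation is a
        whole-array NumPy call, which is what the backend parallelizes."""
        x = np.random.random(n)
        y = np.random.random(n)
        inside = (x * x + y * y) <= 1.0
        return 4.0 * inside.mean()

    print(monte_carlo_pi(10_000_000))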

18 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is only slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

01 Jan 2016
Using MPI: Portable Parallel Programming with the Message-Passing Interface.

593 citations

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.

590 citations

Journal ArticleDOI
TL;DR: A deep neural network is developed to perform dipole deconvolution for quantitative susceptibility mapping (QSM), an MRI reconstruction technique, restoring the magnetic susceptibility source from an MRI field map.

147 citations

Journal ArticleDOI
TL;DR: In this article, the authors revisit the challenges and prospects for ocean circulation models following Griffies et al. (2010) and summarize new developments in ocean modeling, including how new and existing observations can be used, what modeling challenges remain, and how simulations can be used to support observations.
Abstract: We revisit the challenges and prospects for ocean circulation models following Griffies et al. (2010). Over the past decade, ocean circulation models evolved through improved understanding, numerics, spatial discretization, grid configurations, parameterizations, data assimilation, environmental monitoring, and process-level observations and modeling. Important large-scale applications over the last decade are simulations of the Southern Ocean, the meridional overturning circulation and its variability, and regional sea-level change. Submesoscale variability is now routinely resolved in process models and permitted in a few global models, and submesoscale effects are parameterized in most global models. The scales where nonhydrostatic effects become important are beginning to be resolved in regional and process models. Coupling to sea ice, ice shelves, and high-resolution atmospheric models has stimulated new ideas and driven improvements in numerics. Observations have provided insight into turbulence and mixing around the globe, and their consequences are assessed through perturbed-physics models. Relatedly, parameterizations of the mixing and overturning processes in boundary layers and the ocean interior have improved. New diagnostics used for evaluating models alongside present and novel observations are briefly referenced. The overall goal is to summarize new developments in ocean modeling, including how new and existing observations can be used, what modeling challenges remain, and how simulations can be used to support observations.

121 citations