scispace - formally typeset
Author

Til Aach

Other affiliations: Bosch, University of Lübeck, Philips
Bio: Til Aach is an academic researcher at RWTH Aachen University whose work centres on image processing and image segmentation. He has an h-index of 38 and has co-authored 311 publications receiving 5,601 citations. Previous affiliations of Til Aach include Bosch and the University of Lübeck.


Papers
Journal ArticleDOI
TL;DR: This method serves three purposes: it accurately locates boundaries between changed and unchanged areas, it regularizes these boundaries in order to smooth them, and it eliminates small regions where the original data permits this.

342 citations

Journal ArticleDOI
TL;DR: The value of combining different phenotyping technologies that analyse processes at different spatial and temporal scales is demonstrated and novel routes may be opened up for improved plant breeding as well as for mechanistic understanding of root structure and function.
Abstract: Root phenotyping is a challenging task, mainly because of the hidden nature of this organ. Only recently have imaging technologies become available that allow us to elucidate the dynamic establishment of root structure and function in the soil. In root tips, optical analysis of the relative elemental growth rates in root expansion zones of hydroponically-grown plants revealed that it is the maximum intensity of cellular growth processes rather than the length of the root growth zone that controls the acclimation to dynamic changes in temperature. Acclimation of entire root systems was studied at high throughput in agar-filled Petri dishes. In the present study, optical analysis of root system architecture showed that low temperature induced smaller branching angles between primary and lateral roots, which caused a reduction in the volume that roots access at lower temperature. Simulation of temperature gradients similar to natural soil conditions led to differential responses in basal and apical parts of the root system, and significantly affected the entire root system. These results were supported by initial data on the response of root structure and carbon transport to different root zone temperatures. These data were acquired by combined magnetic resonance imaging (MRI) and positron emission tomography (PET). They indicate acclimation of root structure and geometry to temperature and preferential accumulation of carbon near the root tip at low root zone temperatures. Overall, this study demonstrated the value of combining different phenotyping technologies that analyse processes at different spatial and temporal scales. Only such an integrated approach allows us to connect differences between genotypes obtained in artificial high-throughput conditions with specific characteristics relevant for field performance. Thus, novel routes may be opened up for improved plant breeding as well as for a mechanistic understanding of root structure and function.

207 citations

Journal ArticleDOI
TL;DR: A new, adaptive algorithm for change detection is derived where the decision thresholds vary depending on context, thus improving detection performance substantially.
Abstract: In many conventional methods for change detection, detection is carried out by comparing a test statistic, which is computed locally for each location on the image grid, with a global threshold. These ‘nonadaptive’ methods for change detection suffer from the dilemma of either causing many false alarms or missing considerable parts of non-stationary areas. This contribution presents a way out of this dilemma by viewing change detection as an inverse, ill-posed problem. As such, the problem can be solved using prior knowledge about typical properties of change masks. This reasoning leads to a Bayesian formulation of change detection, where the prior knowledge is brought to bear by appropriately specified a priori probabilities. Based on this approach, a new, adaptive algorithm for change detection is derived where the decision thresholds vary depending on context, thus improving detection performance substantially. The algorithm requires only a single raster scan per picture and increases the computational load only slightly in comparison to non-adaptive techniques.
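The core idea of the paper — decision thresholds that vary with the local label context rather than a single global threshold — can be illustrated with a minimal sketch. This is not the authors' actual Bayesian algorithm; the neighbourhood rule and the parameters `t_global` and `delta` are illustrative assumptions standing in for the MRF-style prior on change masks.

```python
import numpy as np

def adaptive_change_mask(frame_a, frame_b, t_global=20.0, delta=8.0):
    """Context-adaptive change detection sketch (illustrative parameters).

    The per-pixel test statistic (absolute frame difference) is compared
    against a threshold that is lowered where many 3x3 neighbours are
    already labelled 'changed' and raised where few are -- a crude
    stand-in for the prior on change masks described in the abstract.
    """
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    # Initial non-adaptive decision with the global threshold.
    mask = diff > t_global
    # Single raster scan: adapt the threshold to the local label context.
    h, w = diff.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            changed = mask[y0:y1, x0:x1].sum() - mask[y, x]
            unchanged = (y1 - y0) * (x1 - x0) - 1 - changed
            # More 'changed' neighbours -> lower threshold, and vice versa.
            t_local = t_global + delta * (unchanged - changed) / 8.0
            mask[y, x] = diff[y, x] > t_local
    return mask
```

Because the mask is updated in place during the raster scan, decisions made earlier in the scan immediately influence the thresholds of later pixels, which is what makes a single pass sufficient.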

192 citations

Journal ArticleDOI
Andre Salomon, A. Goedicke, B. Schweizer, Til Aach, Volkmar Schulz
TL;DR: A generic iterative reconstruction approach that simultaneously estimates the local tracer concentration and the attenuation distribution using the segmented MR image as an anatomical reference; results indicate a robust and reliable alternative to other MR-AC approaches targeting patient-specific quantitative analysis in time-of-flight PET/MR.
Abstract: Medical investigations targeting a quantitative analysis of positron emission tomography (PET) images require the incorporation of additional knowledge about the photon attenuation distribution in the patient. Today, energy-range-adapted attenuation maps derived from computed tomography (CT) scans are used to effectively compensate for image-quality-degrading effects, such as attenuation and scatter. Replacing CT by magnetic resonance (MR) is considered the next evolutionary step in the field of hybrid imaging systems. However, unlike CT, MR does not measure the photon attenuation and thus does not provide easy access to this valuable information. Hence, many research groups currently investigate different technologies for MR-based attenuation correction (MR-AC). Typically, these approaches are based on techniques such as special acquisition sequences (alone or in combination with subsequent image processing), anatomical atlas registration, or pattern recognition techniques using a database of MR and corresponding CT images. We propose a generic iterative reconstruction approach to simultaneously estimate the local tracer concentration and the attenuation distribution using the segmented MR image as anatomical reference. Instead of applying predefined attenuation values to specific anatomical regions or tissue types, the gamma attenuation at 511 keV is determined from the PET emission data. In particular, our approach uses a maximum-likelihood estimation for the activity and a gradient-ascent based algorithm for the attenuation distribution. The adverse effects of scattered and accidental gamma coincidences on the quantitative accuracy of PET, as well as artifacts caused by the inherent crosstalk between activity and attenuation estimation, are efficiently reduced using enhanced decay event localization provided by time-of-flight PET, accurate correction for accidental coincidences, and a reduced number of unknown attenuation coefficients.
First results achieved with measured whole-body PET data and reference segmentation from CT showed an absolute mean difference of 0.005 cm⁻¹ in the lungs, 0.0009 cm⁻¹ in case of fat, and 0.0015 cm⁻¹ for muscles and blood. The proposed method indicates a robust and reliable alternative to other MR-AC approaches targeting patient-specific quantitative analysis in time-of-flight PET/MR.
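The alternation the abstract describes — a maximum-likelihood (MLEM) update for the activity interleaved with a gradient-ascent update for the attenuation coefficients — can be sketched on a toy forward model. Everything here is an illustrative assumption: the system matrix `A`, the path-length matrix `L`, and the step size are synthetic stand-ins, not the authors' actual PET/MR system model, and no time-of-flight or scatter handling is included.

```python
import numpy as np

def mlaa_sketch(y, A, L, n_iter=50, step=1e-4):
    """Toy joint activity/attenuation estimation (MLAA-style sketch).

    Forward model: expected counts  ybar_i = exp(-(L @ mu)_i) * (A @ lam)_i.
    Alternates an MLEM update for the activity `lam` with a small
    gradient-ascent step on the Poisson log-likelihood for the
    attenuation coefficients `mu`.
    """
    m, n = A.shape
    lam = np.ones(n)            # initial activity estimate
    mu = np.full(n, 0.01)       # initial attenuation estimate
    for _ in range(n_iter):
        att = np.exp(-(L @ mu))         # per-line-of-response attenuation
        A_att = att[:, None] * A        # attenuated system matrix
        ybar = A_att @ lam              # expected coincidences
        # MLEM step for the activity (multiplicative ML update).
        lam *= (A_att.T @ (y / ybar)) / A_att.sum(axis=0)
        # Gradient ascent for mu: d(logL)/d(mu_j) = sum_i L_ij (ybar_i - y_i).
        ybar = A_att @ lam
        mu += step * (L.T @ (ybar - y))
        mu = np.clip(mu, 0.0, None)     # attenuation cannot be negative
    return lam, mu
```

The small gradient step for `mu` is one crude way to limit the activity/attenuation crosstalk the abstract mentions; the actual work relies on time-of-flight localization and a reduced number of unknown coefficients instead.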

165 citations

Journal ArticleDOI
TL;DR: This paper describes how the laboratory course is organized and how it induces students to think as actual engineers would in solving real-world tasks with limited resources.
Abstract: In today's teaching and learning approaches for first-semester students, practical courses more and more often complement traditional theoretical lectures. This practical element allows an early insight into the real world of engineering, augments student motivation, and enables students to acquire soft skills early. This paper describes a new freshman introduction course into practical engineering, which has been established within the Bachelor of Science curriculum of Electrical Engineering and Information Technology of RWTH Aachen University, Germany. The course is organized as an eight-day, full-time block laboratory for over 300 freshman students, who were supervised by more than 60 tutors from 23 institutes of the Electrical Engineering Department. Based on a threefold learning concept comprising mathematical methods, MATLAB programming, and practical engineering, the students were required to transfer mathematical basics to algorithms in MATLAB in order to control LEGO Mindstorms robots. Toward this end, a new toolbox, called the "RWTH-Mindstorms NXT Toolbox," was developed, which enables the robots to be controlled remotely via MATLAB from a host computer. This paper describes how the laboratory course is organized and how it induces students to think as actual engineers would in solving real-world tasks with limited resources. Evaluation results show that the project improves the students' MATLAB programming skills, enhances motivation, and enables a peer learning process.

137 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the incapability of existing descriptors to capture spatial relationships between the concepts represented, or by their incapability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
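The co-occurrence matrices proposed above for capturing spatial relationships can be sketched in their simplest grey-level form: counting how often one grey level appears at a fixed displacement from another. This is a generic textbook construction, not the project's actual descriptor; the displacement and quantization parameters are illustrative.

```python
import numpy as np

def cooccurrence_matrix(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one displacement (dx, dy).

    C[i, j] counts how often grey level j occurs at offset (dy, dx)
    from grey level i -- a simple way to encode spatial relationships
    that plain bag-of-features descriptors discard. `img` is assumed
    to be already quantized to integer values in [0, levels).
    """
    h, w = img.shape
    C = np.zeros((levels, levels), dtype=int)
    # Only visit pixels whose displaced partner stays inside the image.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            C[img[y, x], img[y + dy, x + dx]] += 1
    return C
```

Matrices for several displacements are typically stacked (or summarized by statistics such as contrast and homogeneity) to form a descriptor that is sensitive to where concepts occur relative to each other, not just how often.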

2,134 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Book ChapterDOI
15 Feb 2011

1,876 citations

01 Jan 2005
TL;DR: A systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling is presented.

1,750 citations