Bio: Marcos Martín-Fernández is an academic researcher at the University of Valladolid. He has contributed to research on topics including tensor fields and motion estimation, has an h-index of 22, and has co-authored 87 publications receiving 1,472 citations. His previous affiliations include Brigham and Women's Hospital and Harvard University.
TL;DR: An anisotropic diffusion filter with a probabilistic-driven memory mechanism is proposed to overcome the over-filtering problem by following a tissue-selective philosophy. Results on both synthetic and real US images support the inclusion of the probabilistic memory mechanism, which preserves clinically relevant structures that state-of-the-art filters remove.
Abstract: Ultrasound (US) imaging exhibits considerable difficulties for medical visual inspection and for the development of automatic analysis methods due to speckle, which negatively affects the perception of tissue boundaries and the performance of automatic segmentation methods. With the aim of alleviating the effect of speckle, many filtering techniques are usually considered as a preprocessing step prior to automatic analysis methods or visual inspection. Most state-of-the-art filters try to reduce the speckle effect without considering its relevance for the characterization of tissue nature. However, the speckle phenomenon is the inherent response of echo signals in tissues and can provide important features for clinical purposes. This loss of information is even magnified by the iterative nature of some speckle filters, e.g., diffusion filters, which tend to produce over-filtering because of the progressive loss of diagnostically relevant information during the diffusion process. In this paper, we propose an anisotropic diffusion filter with a probabilistic-driven memory mechanism to overcome the over-filtering problem by following a tissue-selective philosophy. In particular, we formulate the memory mechanism as a delay differential equation for the diffusion tensor whose behavior depends on the statistics of the tissues, accelerating the diffusion process in meaningless regions and including the memory effect in regions where relevant details should be preserved. Results on both synthetic and real US images support the inclusion of the probabilistic memory mechanism for maintaining clinically relevant structures, which are removed by state-of-the-art filters.
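The memory idea above can be illustrated with a much-simplified sketch: instead of the paper's delay differential equation for a full diffusion tensor driven by tissue statistics, the scalar Perona-Malik diffusivity below relaxes toward its target value with a time constant `tau`, so it "remembers" its past instead of jumping instantly. All parameter values and the periodic boundary handling are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def diffuse_with_memory(img, n_iter=30, kappa=0.1, dt=0.2, tau=5.0):
    """Simplified Perona-Malik diffusion where the diffusivity c is not
    recomputed instantly but relaxes toward its target value with time
    constant tau -- a scalar stand-in for the paper's memory mechanism."""
    u = img.astype(float).copy()
    c = np.ones_like(u)  # diffusivity field, starts fully diffusive
    for _ in range(n_iter):
        # neighbor differences (periodic boundary via np.roll, for brevity)
        gn = np.roll(u, -1, 0) - u
        gs = np.roll(u, 1, 0) - u
        ge = np.roll(u, -1, 1) - u
        gw = np.roll(u, 1, 1) - u
        grad_mag2 = gn**2 + ge**2
        c_target = 1.0 / (1.0 + grad_mag2 / kappa**2)  # small at edges
        c += (dt / tau) * (c_target - c)               # memory: relax, don't jump
        u += dt * c * (gn + gs + ge + gw)              # explicit diffusion step
    return u
```

With `dt * c <= 0.25` the explicit update stays stable; larger `tau` means the diffusivity reacts more slowly, which is the over-filtering brake the paper formalizes probabilistically.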
TL;DR: The findings show that comparing classifiers without a gold standard can provide substantial information; however, some information present in the expert segmentations is not captured by the automatic classifiers, suggesting that common agreement alone may not be sufficient for a precise performance evaluation of brain tissue classifiers.
Abstract: In this paper, we present a set of techniques for the evaluation of brain tissue classifiers on a large data set of MR images of the head. Due to the difficulty of establishing a gold standard for this type of data, we focus our attention on methods which do not require a ground truth, but instead rely on a common agreement principle. Three different techniques are presented: the Williams’ index, a measure of common agreement; STAPLE, an Expectation Maximization algorithm which simultaneously estimates performance parameters and constructs an estimated reference standard; and Multidimensional Scaling, a visualization technique to explore similarity data. We apply these different evaluation methodologies to a set of eleven different segmentation algorithms on forty MR images. We then validate our evaluation pipeline by building a ground truth based on human expert tracings. The evaluations with and without a ground truth are compared. Our findings show that comparing classifiers without a gold standard can provide a lot of interesting information. In particular, outliers can be easily detected, strongly consistent or highly variable techniques can be readily discriminated, and the overall similarity between different techniques can be assessed. On the other hand, we also find that some information present in the expert segmentations is not captured by the automatic classifiers, suggesting that common agreement alone may not be sufficient for a precise performance evaluation of brain tissue classifiers.
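Of the three techniques above, the Williams' index is the simplest to sketch: it compares how well one classifier agrees with a group against how well the group members agree with each other. The sketch below uses Dice overlap as the pairwise agreement measure, which is an assumption for illustration; any pairwise similarity could be substituted.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def williams_index(test_seg, other_segs):
    """Williams' index: agreement of one classifier with a group,
    normalized by the group's internal agreement. Values near (or above)
    1 mean the tested classifier agrees with the group as well as the
    group members agree with one another."""
    num = np.mean([dice(test_seg, s) for s in other_segs])
    den = np.mean([dice(s, t) for s, t in combinations(other_segs, 2)])
    return num / den
```

An index well below 1 flags the tested classifier as an outlier relative to the group, which is exactly the kind of screening the paper performs without a ground truth.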
TL;DR: A novel method for the boundary detection of human kidneys from three-dimensional (3D) ultrasound (US) is proposed; it is a probabilistic Bayesian method whose prior includes a volumetric extension that forces smoothness along the depth coordinate.
Abstract: In this paper, a novel method for the boundary detection of human kidneys from three dimensional (3D) ultrasound (US) is proposed. The inherent difficulty of interpretation of such images, even by a trained expert, makes the problem unsuitable for classical methods. The method here proposed finds the kidney contours in each slice. It is a probabilistic Bayesian method. The prior defines a Markov field of deformations and imposes the restriction of contour smoothness. The likelihood function imposes a probabilistic behavior to the data, conditioned to the contour position. This second function, which is also Markov, uses an empirical model of distribution of the echographical data and a function of the gradient of the data. The model finally includes, as a volumetric extension of the prior, a term that forces smoothness along the depth coordinate. The experiments that have been carried out on echographies from real patients validate the model here proposed. A sensitivity analysis of the model parameters has also been carried out.
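The MAP estimation described above can be sketched with a deliberately simplified stand-in: a star-shaped contour in polar coordinates, an edge-likelihood term, and a quadratic Markov smoothness prior between neighboring rays, optimized by greedy (ICM-style) coordinate updates. The `edge_score` table and all parameters are hypothetical; the paper's actual model uses an empirical echographic data distribution and a gradient term rather than this toy score.

```python
import numpy as np

def refine_contour(radii, edge_score, lam=1.0, candidates=2, n_iter=20):
    """Greedy MAP refinement of a star-shaped contour. edge_score[i, r]
    is a (hypothetical) boundary log-likelihood at radius r along ray i;
    the quadratic terms act as a Markov smoothness prior linking each
    ray to its two neighbors."""
    r = radii.copy()
    n, rmax = edge_score.shape
    for _ in range(n_iter):
        for i in range(n):
            best, best_e = r[i], -np.inf
            for cand in range(max(0, r[i] - candidates),
                              min(rmax, r[i] + candidates + 1)):
                e = (edge_score[i, cand]
                     - lam * (cand - r[(i - 1) % n]) ** 2
                     - lam * (cand - r[(i + 1) % n]) ** 2)
                if e > best_e:
                    best_e, best = e, cand
            r[i] = best
    return r
```

Increasing `lam` trades data fidelity for smoother contours, mirroring the role of the smoothness prior in the Bayesian formulation.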
TL;DR: Results indicate that the proposed novel method to detect anomalies in network traffic outperforms a closely related state-of-the-art contribution.
Abstract: This paper proposes a novel method to detect anomalies in network traffic, based on a nonrestricted α-stable first-order model and statistical hypothesis testing. To this end, we give statistical evidence that the marginal distribution of real traffic is adequately modeled with α-stable functions and classify traffic patterns by means of a Generalized Likelihood Ratio Test (GLRT). The method automatically chooses the reference traffic windows against which the window under test is compared, with no expert intervention needed. We focus on detecting two anomaly types, namely floods and flash-crowds, which have been frequently studied in the literature. Performance of our detection method has been measured through Receiver Operating Characteristic (ROC) curves, and results indicate that our method outperforms a closely related state-of-the-art approach. All experiments use traffic data collected from two routers at our university (a 25,000-student institution), which provide two different levels of traffic aggregation for our tests (traffic at a particular school and at the whole university). In addition, the traffic model is tested with publicly available traffic traces. Due to the complexity of α-stable distributions, care has been taken in designing appropriate numerical algorithms to deal with the model.
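The GLRT mechanics above can be sketched compactly. Because maximum-likelihood fitting of α-stable distributions is numerically involved (which is exactly why the paper designs dedicated algorithms), this sketch substitutes Gaussians, the α = 2 special case of the α-stable family, purely to show how the two-window test statistic is formed; it is not the paper's detector.

```python
import numpy as np
from scipy import stats

def glrt_two_windows(ref, test):
    """Generalized likelihood ratio test comparing a reference traffic
    window with a window under test: twice the gap between the log-
    likelihood under separate per-window fits and under a single
    pooled fit. Large values suggest the windows differ (an anomaly)."""
    pooled = np.concatenate([ref, test])
    ll_sep = (stats.norm.logpdf(ref, *stats.norm.fit(ref)).sum()
              + stats.norm.logpdf(test, *stats.norm.fit(test)).sum())
    ll_pool = stats.norm.logpdf(pooled, *stats.norm.fit(pooled)).sum()
    return 2.0 * (ll_sep - ll_pool)
```

Thresholding this statistic (e.g., at a quantile calibrated on anomaly-free windows) yields the detection decision whose quality the paper summarizes with ROC curves.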
TL;DR: This paper shows that if an image contains a sufficiently large number of low-variability areas, the variance of additive noise can be estimated as the mode of the distribution of local variances in the image, and the coefficient of variation of multiplicative noise as the mode of the distribution of local coefficient-of-variation estimates.
Abstract: In this paper, we focus on the problem of automatic noise parameter estimation for additive and multiplicative models and propose a simple and novel method to this end. Specifically, we show that if the image to work with contains a sufficiently large number of low-variability areas (which turns out to be a typical feature in most images), the variance of noise (if additive) can be estimated as the mode of the distribution of local variances in the image, and the coefficient of variation of noise (if multiplicative) can be estimated as the mode of the distribution of local estimates of the coefficient of variation. Additionally, a model for the sample variance distribution of an image plus noise is proposed and studied. Experiments demonstrate the effectiveness of the proposed method, especially within recursive or iterative filtering schemes.
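The additive case described above is straightforward to sketch: compute the sample variance in every local window, histogram those values, and take the histogram mode. In low-variability regions the local variance is dominated by the noise alone, so the mode lands near the true noise variance. Window size and bin count below are illustrative choices, not values from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def estimate_noise_variance(img, win=7, bins=200):
    """Estimate additive-noise variance as the mode of the distribution
    of local variances, exploiting the fact that low-variability areas
    reflect the noise level rather than image structure."""
    patches = sliding_window_view(img, (win, win))   # all win x win windows
    local_var = patches.var(axis=(-2, -1)).ravel()   # one variance per window
    hist, edges = np.histogram(local_var, bins=bins)
    k = hist.argmax()
    return 0.5 * (edges[k] + edges[k + 1])           # center of the modal bin
```

The multiplicative case is analogous: replace the local variance with the local coefficient of variation (local std divided by local mean) and again take the mode.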
Affiliations: Technische Universität München, ETH Zurich, University of Bern, Harvard University, National Institutes of Health, University of Debrecen, University Hospital Heidelberg, McGill University, University of Pennsylvania, French Institute for Research in Computer Science and Automation, University at Buffalo, Microsoft, University of Cambridge, Stanford University, University of Virginia, Imperial College London, Massachusetts Institute of Technology, Columbia University, Sabancı University, Old Dominion University, RMIT University, Purdue University, General Electric
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
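The two quantitative building blocks above, Dice overlap and majority-vote fusion, are easy to sketch. The flat per-voxel vote below is the elementary step of the study's hierarchical fusion (the hierarchy itself, which orders algorithms before voting, is omitted here).

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks, in [0, 1]."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def majority_vote(masks):
    """Fuse binary segmentations by strict per-voxel majority vote --
    the elementary building block of hierarchical fusion."""
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)
```

A fused mask scored with `dice` against each rater is how one would check the paper's finding that the fusion ranks above every individual algorithm.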
TL;DR: A comparison study between ten automatic and six interactive methods for liver segmentation from contrast-enhanced CT images provides insight into the performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
Abstract: This paper presents a comparison study between ten automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the "MICCAI 2007 Grand Challenge" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations using five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides insight into the performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
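The scoring idea above, relating each error measure to human expert variability, can be sketched as a simple linear mapping: zero error earns 100 points, an error equal to the typical expert error earns 75 points, and scores floor at 0. The reference-error values an evaluation would plug in are per-metric numbers not reproduced here.

```python
def challenge_score(errors, expert_errors):
    """Map a list of error measures to a 0-100 score by comparing each
    against the corresponding typical human-expert error: zero error ->
    100 points, expert-level error -> 75 points, floored at 0. Returns
    the average over all measures."""
    scores = [max(100.0 - 25.0 * e / ref, 0.0)
              for e, ref in zip(errors, expert_errors)]
    return sum(scores) / len(scores)
```

Under this mapping, a method scoring above 75 on average is outperforming typical inter-expert agreement on the combined measures.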
TL;DR: This paper provides a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of the field.
Abstract: Network anomaly detection is an important and dynamic research area. Many network intrusion detection methods and systems (NIDS) have been proposed in the literature. In this paper, we provide a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of network anomaly detection. We present attacks normally encountered by network intrusion detection systems. We categorize existing network anomaly detection methods and systems based on the underlying computational techniques used. Within this framework, we briefly describe and compare a large number of network anomaly detection methods and systems. In addition, we also discuss tools that can be used by network defenders and datasets that researchers in network anomaly detection can use. We also highlight research directions in network anomaly detection.