
Showing papers by "Tony F. Chan" published in 2007


Journal ArticleDOI
TL;DR: This paper proposes a natural and efficient way to reduce staircasing in the texture-extraction models of image processing, using a variant of the Chambolle–Lions inf-convolution energy together with approximations to Meyer's G and E norms for image decomposition and restoration problems.

130 citations


Journal ArticleDOI
TL;DR: A parameterization method based on Riemann surface structure is introduced, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram.
Abstract: In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks.
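The abstract's remark that PDEs simplify under conformal parameterization can be made concrete (a standard identity, stated here only for illustration): if the induced metric is \lambda(u,v)\,(du^2 + dv^2), the Laplace–Beltrami operator reduces to a scaled planar Laplacian,

    \Delta_S f = \frac{1}{\lambda(u,v)} \left( \frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2} \right),

so a harmonic function on the surface corresponds to an ordinary harmonic function on the 2-D parameter domain.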

111 citations


BookDOI
01 Jan 2007
TL;DR: An edited volume on image processing based on partial differential equations.

71 citations


Journal ArticleDOI
TL;DR: A general framework for multiple shifted and blurred low-resolution image frames is proposed that subsumes several well-known superresolution models and allows an arbitrary pattern of missing pixels, in particular missing frames.

65 citations


Proceedings ArticleDOI
22 Oct 2007
TL;DR: In this paper, the authors describe the process and outcome of the efforts to develop a new standard for Personal Health Data (PHD) based on the existing 11073 family of standards for medical devices.
Abstract: This paper describes the process and outcome of the efforts to develop a new standard for Personal Health Data (PHD) based on the existing 11073 family of standards for medical devices. It identifies the requirements for a standard that is to be applied to small devices with limited processor, memory, and power resources and that will use short-range wireless technology. It describes how existing components of 11073, such as the Domain Information Model and nomenclature, have been used and adapted to create the new standard.

57 citations


Journal ArticleDOI
01 May 2007
TL;DR: Two methods to accomplish the conformal parameterization of cortical surfaces by using landmarks are described, one based on pursuing an optimal Möbius transformation to minimize the landmark mismatch error and the other based on a new energy functional.
Abstract: In order to compare and integrate brain data more effectively, data from multiple subjects are typically mapped into a canonical space. One method to do this is to conformally map cortical surfaces to the sphere. It is well known that any genus zero Riemann surface can be mapped conformally to a sphere, and the cortical surface is a genus zero surface. Therefore, conformal mapping offers a convenient way to parameterize cortical surfaces without angular distortion, generating an orthogonal grid on the cortex that locally preserves the metric. Although conformal mapping preserves the local geometry well, important anatomical features, such as sulcal landmarks, are usually not aligned consistently. To compare cortical surfaces more effectively, it is advantageous to adjust the conformal parameterizations to match consistent anatomical features across subjects. This matching of cortical patterns improves the alignment of data across subjects, although it is more challenging to create a consistent conformal (orthogonal) parameterization of anatomy across subjects when landmarks are constrained to lie at specific locations in the spherical parameter space. Here we describe two methods to accomplish this task. The first approach pursues an optimal Möbius transformation to minimize the landmark mismatch error. The second approach is based on a new energy functional that optimizes the conformal parameterization of cortical surfaces by using landmarks. Experimental results on a dataset of 40 brain hemispheres show that the landmark mismatch energy can be significantly reduced while effectively preserving conformality. The key advantage of these conformal parameterization approaches is that local adjustments of the mapping to match landmarks do not significantly affect the conformality of the mapping. A detailed comparison between the two approaches is discussed: the first approach generates a map that is exactly conformal, although it does not reduce the landmark mismatch error as effectively as the second; the second approach significantly reduces the landmark mismatch error, but some conformality is lost.
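As a toy illustration of the first approach (a sketch only: it fits a Möbius automorphism of the unit disk rather than of the sphere, and all names and values below are hypothetical), one can least-squares fit f(z) = e^{i theta}(z - a)/(1 - conj(a) z) so that source landmarks move toward target positions:

```python
import numpy as np
from scipy.optimize import minimize

def mobius(z, theta, ax, ay):
    # disk automorphism f(z) = e^{i theta} (z - a) / (1 - conj(a) z)
    a = ax + 1j * ay
    return np.exp(1j * theta) * (z - a) / (1 - np.conj(a) * z)

def mismatch(params, z, w):
    theta, ax, ay = params
    if ax * ax + ay * ay >= 0.99:          # keep 'a' inside the unit disk
        return 1e9
    return np.sum(np.abs(mobius(z, theta, ax, ay) - w) ** 2)

rng = np.random.default_rng(0)
z = 0.8 * np.exp(2j * np.pi * rng.random(10))                   # source landmarks
w = mobius(z, 0.3, 0.1, -0.2) + 0.01 * rng.standard_normal(10)  # noisy targets

res = minimize(mismatch, x0=[0.0, 0.0, 0.0], args=(z, w), method="Nelder-Mead")
print("recovered (theta, ax, ay):", res.x)  # close to (0.3, 0.1, -0.2), theta up to 2*pi
```

Because disk automorphisms are exactly the conformal self-maps of the disk, the fitted map reduces landmark mismatch without sacrificing conformality, which is the point the abstract makes about the first method.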

55 citations



Journal Article
TL;DR: A new nonparametric region-based active contour model for clutter image segmentation is proposed; to quantify the similarity between two clutter regions, their respective histograms are compared using the Wasserstein distance.

48 citations


Journal ArticleDOI
TL;DR: This work generalizes the total variation restoration model to matrix-valued data, in particular to diffusion tensor images (DTIs), and treats the diffusion matrix D implicitly as the product D = LL^T; this ensures positive definiteness of the tensor during the regularization flow, which is essential when regularizing DTI.
Abstract: We generalize the total variation restoration model, introduced by Rudin, Osher, and Fatemi in 1992, to matrix-valued data, in particular, to diffusion tensor images (DTIs). Our model is a natural extension of the color total variation model proposed by Blomgren and Chan in 1998. We treat the diffusion matrix D implicitly as the product D = LL^T, and work with the elements of L as variables, instead of working directly on the elements of D. This ensures positive definiteness of the tensor during the regularization flow, which is essential when regularizing DTI. We perform numerical experiments on both synthetic data and 3D human brain DTI, and measure the quantitative behavior of the proposed model.
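A minimal numpy sketch of why this parameterization is convenient (the random update below is only a stand-in for a real regularization flow step): any update applied to the factor L keeps the reconstructed tensor symmetric positive semidefinite by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.tril(rng.standard_normal((3, 3)))  # lower-triangular factor (the variables)
for _ in range(100):
    L -= 0.05 * np.tril(rng.standard_normal((3, 3)))  # stand-in for a flow step on L
    D = L @ L.T                                       # reconstructed diffusion tensor
    assert np.all(np.linalg.eigvalsh(D) >= -1e-12)    # PSD holds automatically
print("all iterates positive semidefinite")
```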

44 citations


Journal ArticleDOI
TL;DR: This work employs a variational framework, in particular the minimization of total variation (TV), to select and modify the retained wavelet coefficients so that the reconstructed images have fewer oscillations near edges while noise is smoothed.
Abstract: We propose using Partial Differential Equation (PDE) techniques in wavelet-based image processing to remove noise and reduce edge artifacts generated by wavelet thresholding. We employ a variational framework, in particular the minimization of total variation (TV), to select and modify the retained wavelet coefficients so that the reconstructed images have fewer oscillations near edges while noise is smoothed. Numerical experiments show that this approach improves the reconstructed image quality in wavelet compression and in denoising.
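A rough sketch of this idea, not the paper's exact scheme (PyWavelets assumed; wavelet, threshold, and step size are arbitrary): alternate a smoothed-TV descent step in image space with re-projection onto the support of the retained wavelet coefficients.

```python
import numpy as np
import pywt

def tv_grad(u, eps=1e-3):
    # gradient of the smoothed total variation sum sqrt(|grad u|^2 + eps^2);
    # periodic boundaries via np.roll, for brevity
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return -(px - np.roll(px, 1, 1)) - (py - np.roll(py, 1, 0))

rng = np.random.default_rng(2)
img = np.kron(np.indices((8, 8)).sum(0) % 2, np.ones((8, 8))).astype(float)
img += 0.1 * rng.standard_normal(img.shape)     # noisy 64x64 toy image

coeffs = pywt.wavedec2(img, "haar", level=2)
arr, slices = pywt.coeffs_to_array(coeffs)
support = np.abs(arr) > 0.25                    # coefficients kept by hard thresholding

def reconstruct(a):
    kept = pywt.array_to_coeffs(a * support, slices, output_format="wavedec2")
    return pywt.waverec2(kept, "haar")

u = reconstruct(arr)
for _ in range(20):
    u = u - 0.1 * tv_grad(u)                    # TV descent damps edge oscillations
    a, _ = pywt.coeffs_to_array(pywt.wavedec2(u, "haar", level=2))
    u = reconstruct(a)                          # re-project onto the retained support
```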

42 citations


Journal ArticleDOI
TL;DR: A level set based variational model is proposed to capture a typical class of illusory contours, such as the Kanizsa triangle; it completes missing boundaries in a smooth way via Euler's elastica and also preserves corners by incorporating curvature information of object boundaries.
Abstract: Illusory contours, such as the classical Kanizsa triangle and square [9], are intrinsic phenomena in human vision. These contours are not completely defined by real object boundaries, but also include illusory boundaries which are not explicitly present in the images. Therefore, the major computational challenge of capturing illusory contours is to complete the illusory boundaries. In this paper, we propose a level set based variational model to capture a typical class of illusory contours such as the Kanizsa triangle. Our model completes missing boundaries in a smooth way via Euler's elastica, and also preserves corners by incorporating curvature information of object boundaries. Our model can capture illusory contours regardless of whether the missing boundaries are straight lines or curves. We compare the choice of the second-order Euler's elastica used in our model with that of the first-order Euler's elastica developed in Nitzberg-Mumford-Shiota's work on the problem of segmentation with depth [15, 16]. We also prove that, with the incorporation of curvature information of object boundaries, our model can preserve corners as completely as one wants. Finally, we present numerical results obtained by applying our model to some standard illusory contours.
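For reference, the (second-order) Euler's elastica energy used for boundary completion is commonly written, for a completion curve \Gamma with curvature \kappa, as

    E(\Gamma) = \int_{\Gamma} \left( a + b\,\kappa^2 \right) ds, \qquad a, b > 0,

so the model trades off the length of the completed boundary against its bending.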

Book ChapterDOI
01 Jan 2007
TL;DR: This survey paper discusses some recent developments in variational image segmentation and active contour models, focusing on region-based models implemented via level-set techniques, typified by the Chan–Vese (CV) model.
Abstract: This survey paper discusses some recent developments in variational image segmentation and active contour models. Our focus will be on region-based models implemented via level-set techniques, typified by the Chan–Vese (CV) model [11]. The CV algorithm can be interpreted as a level-set implementation of the piecewise constant Mumford–Shah segmentation model and has been quite widely used. We will first present the basic CV algorithm and an extension to piecewise smooth approximations. We also discuss a recent development in convexifying the CV model to guarantee convergence to a global minimizer. Next, we discuss extensions to handle multi-channel images, including a vector-valued CV model [9], texture segmentation [10], object tracking in video [41], image registration [40], and a logic segmentation framework [49]. Then we discuss multiphase extensions to handle segmentation into an arbitrary number of regions, including the method of Vese and Chan [61] and recent memory-efficient algorithms such as the piecewise constant level set method (PCLSM) of Tai et al. [36] and the multi-layer method of Chung and Vese [13]. Finally, we discuss numerically efficient methods that attempt to compute the optimal segmentation much faster than the original gradient-descent PDE-based method. These methods include the direct pointwise optimization method of Song and Chan [55], an operator-splitting method by Gibou and Fedkiw [26], and a threshold dynamics method by Esedoglu and Tsai [19].
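For orientation, the basic CV energy surveyed here is the standard two-phase functional, minimized over the contour C and the two region averages c_1, c_2:

    F(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0 - c_1|^2\,dx + \lambda_2 \int_{\mathrm{outside}(C)} |u_0 - c_2|^2\,dx.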

Book ChapterDOI
30 May 2007
TL;DR: In this article, a nonparametric region-based active contour model for clutter image segmentation is proposed to quantify the similarity between two clutter regions by comparing their respective histograms using the Wasserstein distance.
Abstract: In this paper, we propose a new nonparametric region-based active contour model for clutter image segmentation. To quantify the similarity between two clutter regions, we propose to compare their respective histograms using the Wasserstein distance. Our first segmentation model is based on minimizing the Wasserstein distance between the object (resp. background) histogram and the object (resp. background) reference histogram, together with a geometric regularization term that penalizes complicated region boundaries. The minimization is achieved by computing the gradient of the level set formulation for the energy. Our second model does not require reference histograms and assumes that the image can be partitioned into two regions in each of which the local histograms are similar everywhere.
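In one dimension the Wasserstein-1 distance between normalized histograms reduces to the L1 distance between their cumulative sums, which makes such a region term cheap to evaluate. A small self-contained check (function name and parameters are illustrative):

```python
import numpy as np

def wasserstein1(h1, h2, bin_width=1.0):
    # W1 between 1-D histograms = L1 distance between their CDFs
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.sum(np.abs(np.cumsum(h1) - np.cumsum(h2))) * bin_width

a = np.histogram(np.random.default_rng(0).normal(0.0, 1, 5000), bins=64, range=(-4, 4))[0].astype(float)
b = np.histogram(np.random.default_rng(1).normal(0.5, 1, 5000), bins=64, range=(-4, 4))[0].astype(float)
print(wasserstein1(a, b, bin_width=8 / 64))  # approx 0.5, the mean shift
```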

Journal Article
TL;DR: Following a convergence-rate analysis that explains why the nonlinear multigrid method encounters difficulties, a modified multigrid smoother and a linearized primal-dual iterative method are proposed for solving the dual formulation.
Abstract: Many variational models for image denoising and restoration are formulated in primal variables that are directly linked to the solution to be restored. If the total variation related semi-norm is used in the models, one consequence is that extra regularization is needed to remedy the highly non-smooth and oscillatory coefficients for effective numerical solution. The dual formulation is often used to study theoretical properties of a primal formulation; however, as a model, this formulation also offers some advantages over the primal formulation in dealing with the above-mentioned oscillation and non-smoothness. This paper presents some preliminary work on speeding up the Chambolle method [J. Math. Imaging Vision, 20 (2004), pp. 89–97] for solving the dual formulation. Following a convergence rate analysis of this method, we first show why the nonlinear multigrid method encounters some difficulties in achieving convergence. Then we propose a modified smoother for the multigrid method to enable it to achieve convergence in solving a regularized Chambolle formulation. Finally, we propose a linearized primal-dual iterative method as an alternative stand-alone approach to solve the dual formulation without regularization. Numerical results are presented to show that the proposed methods are much faster than the Chambolle method.
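For reference, the baseline being accelerated, Chambolle's 2004 fixed-point projection for the ROF model, fits in a few lines of numpy (periodic boundaries via np.roll for brevity; tau <= 1/8 for convergence; names and test data are ours):

```python
import numpy as np

def grad(u):
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def div(p):
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def chambolle_rof(g, lam=0.5, tau=0.125, n_iter=100):
    # solves min_u TV(u) + (1/(2*lam)) ||u - g||^2 via the dual variable p
    p = np.zeros((2,) + g.shape)
    for _ in range(n_iter):
        q = grad(div(p) - g / lam)
        p = (p + tau * q) / (1.0 + tau * np.sqrt((q ** 2).sum(0)))
    return g - lam * div(p)  # denoised image

noisy = 0.1 * np.random.default_rng(3).standard_normal((64, 64))
noisy[16:48, 16:48] += 1.0
u = chambolle_rof(noisy)
print("TV before/after:", np.abs(grad(noisy)).sum(), np.abs(grad(u)).sum())
```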

Book ChapterDOI
01 Jan 2007
TL;DR: The main improvements to mPL6 are speed and scalability, low wirelength results, adaptability to complex constraints, and robustness under low white space.
Abstract: mPL6 consists of three basic ingredients: global placement by multilevel nonlinear programming [CCS05b], discrete graph-based macro legalization followed by linear-time scan-based standard-cell legalization [CX06], and detailed placement [CX06]. It is designed for speed and scalability, low wirelength results, adaptability to complex constraints, and robustness under low white space. Compared to the 2005 implementation [CCR05], the main improvements to mPL6 are…

Journal ArticleDOI
TL;DR: This paper combines the EM algorithm with a level set approach to capture the coarse scale information and the discontinuities of the concentration coefficients in positron emission tomography and utilizes a multiple level set formulation to represent the geometry of the objects in the scene.
Abstract: In positron emission tomography (PET), a radioactive compound is injected into the body to promote a tissue-dependent emission rate. Expectation maximization (EM) reconstruction algorithms are iterative techniques which estimate the concentration coefficients that provide the best fitted solution, for example, a maximum likelihood estimate. In this paper, we combine the EM algorithm with a level set approach. The level set method is used to capture the coarse scale information and the discontinuities of the concentration coefficients. An intrinsic advantage of the level set formulation is that anatomical information can be efficiently incorporated and used in an easy and natural way. We utilize a multiple level set formulation to represent the geometry of the objects in the scene. The proposed algorithm can be applied to any PET configuration, without major modifications.
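For context, the classical MLEM update that the paper combines with the level set machinery can be sketched in a few lines of numpy (toy random system matrix; the level set part is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((40, 20))                 # toy system matrix (detectors x voxels)
x_true = 5.0 * rng.random(20)
y = rng.poisson(A @ x_true).astype(float)

x = np.ones(20)                          # strictly positive initialization
sens = A.T @ np.ones(40)                 # sensitivity image A^T 1
for _ in range(200):
    # MLEM: x <- x / (A^T 1) * A^T( y / (A x) )
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```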

Proceedings ArticleDOI
12 Nov 2007
TL;DR: An algorithm is proposed to solve most existing active contour problems, based on the mean curvature motion approach of Chambolle (2004) and the image denoising model of Rudin, Osher and Fatemi (ROF) (1992).
Abstract: This paper proposes an algorithm to solve most existing active contour problems, based on the approach to mean curvature motion proposed by Chambolle (2004) and the image denoising model of Rudin, Osher and Fatemi (ROF) (1992). More precisely, the motion of active contours is discretized by the ROF model applied to the signed distance function of the evolving contour. The advantage of this new discretization scheme is that it allows a time step much larger than in standard explicit schemes, which means that fewer iterations are needed to converge to the steady-state solution. We present results on 2-D natural images.
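One hedged reading of the discretization just described: each time step solves a single ROF problem with the signed distance function of the current contour as data,

    \phi^{n+1} = \arg\min_{\phi}\; \mathrm{TV}(\phi) + \frac{1}{2\lambda}\,\big\| \phi - \mathrm{sdist}(\phi^{n}) \big\|_2^2,

with the new contour taken as the zero level set of \phi^{n+1}; the implicit ROF step is what permits the large time steps mentioned above.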

Book ChapterDOI
01 Jan 2007
Abstract: Only the author affiliations are available for this entry. 1 Department of Mathematics, University of California, Los Angeles, CA 90095-1555, USA. E-mail: chan@ipam.ucla.edu, url: http://www.math.ucla.edu/~chan. 2 Department of Mathematical Sciences, University of Liverpool, Peach Street, Liverpool L69 7ZL, UK. E-mail: k.chen@liverpool.ac.uk, url: http://www.liv.ac.uk/~cmchenke. 3 Department of Mathematics, University of Bergen, Bergen, Norway. E-mail: xue-cheng.tai@uib.no, url: http://www.mi.uib.no/~tai.

Book ChapterDOI
30 May 2007
TL;DR: A segmentation algorithm is introduced that incorporates high-level prior knowledge, namely the shape of the objects of interest, in a selective manner: the shape prior is applied only to occluded boundaries.
Abstract: In this work, we address the problem of segmenting multiple objects, with possible occlusions, in a variational setting. Most segmentation algorithms based on low-level features often fail under uncertainties such as occlusions and subtle boundaries. We introduce a segmentation algorithm incorporating high-level prior knowledge which is the shape of objects of interest. A novelty in our approach is that prior shape is introduced in a selective manner, only to occluded boundaries. Further, a direct application of our framework is that it solves the segmentation with depth problem that aims to recover the spatial order of overlapping objects for certain classes of images. We also present segmentation results on synthetic and real images.

Journal ArticleDOI
TL;DR: Algorithms are presented to automatically detect and match landmark curves on cortical surfaces to obtain an optimized brain conformal parametrization, including an automatic landmark curve tracing method based on the principal directions of the local Weingarten matrix.
Abstract: One important problem in human brain mapping research is to locate important anatomical features. Anatomical features on the cortical surface are usually represented by landmark curves, called sulci/gyri curves. These landmark curves provide important information for neuroscientists to study brain disease and to match different cortical surfaces. Manual labeling of these landmark curves is time-consuming, especially when large sets of data have to be analyzed. In this paper, we present algorithms to automatically detect and match landmark curves on cortical surfaces to obtain an optimized brain conformal parametrization. First, we propose an algorithm to obtain hypothesized landmark regions/curves using the Chan-Vese segmentation method, which solves a Partial Differential Equation (PDE) on a manifold with global conformal parameterization; this is done by segmenting the regions of high mean curvature. Second, we propose an automatic landmark curve tracing method based on the principal directions of the local Weingarten matrix. Based on the global conformal parametrization of a cortical surface, our method adjusts the landmark curves iteratively on the spherical or rectangular parameter domain of the cortical surface along its principal direction field, using umbilic points of the surface as anchors. The landmark curves can then be mapped back onto the cortical surface. Experimental results show that the landmark curves detected by our algorithm closely resemble the manually labeled curves. Next, we apply these automatically labeled landmark curves to generate an optimized conformal parametrization of the cortical surface, in the sense that homologous features across subjects are made to lie at the same parameter locations in a conformal grid. Experimental results show that our method can effectively help in automatically matching cortical surfaces across subjects.
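A minimal, self-contained illustration of the curve-tracing ingredient (the fundamental-form values below are made up): the principal directions used to trace landmark curves are the eigenvectors of the Weingarten (shape) operator S = I^{-1} II, built from the first (I) and second (II) fundamental forms at a point.

```python
import numpy as np

# fundamental forms at one surface point: I = [[E, F], [F, G]], II = [[L, M], [M, N]]
I  = np.array([[1.2, 0.1], [0.1, 1.0]])
II = np.array([[0.5, 0.0], [0.0, -0.2]])

S = np.linalg.solve(I, II)       # Weingarten matrix S = I^{-1} II
kappa, dirs = np.linalg.eig(S)   # principal curvatures and directions
order = np.argsort(kappa)        # sort: kappa_min, kappa_max
print("principal curvatures:", kappa[order])
print("principal directions (columns):", dirs[:, order])
```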

Proceedings ArticleDOI
12 Apr 2007
TL;DR: In this paper, the Yamabe equation was solved with Ricci flow to conformally parameterize a brain surface via a mapping to a multi-hole disk, and the resulting parameterizations do not have any singularity points and are intrinsic and stable.
Abstract: In medical imaging, parameterized 3D surface models are of great interest for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. By solving the Yamabe equation with the Ricci flow method, we can conformally parameterize a brain surface via a mapping to a multi-hole disk. The resulting parameterizations do not have any singularities and are intrinsic and stable. To illustrate the technique, we computed parameterizations of cortical surfaces in MRI scans of the brain. We also show that the parameterization results are consistent with constraints imposed on the mappings of selected landmark curves, and that the resulting surfaces can be matched to each other using constrained harmonic maps. Unlike previous planar conformal parameterization methods, our algorithm does not introduce any singularity points.
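For orientation, the discrete surface Ricci flow underlying this approach evolves a per-vertex log conformal factor u_i toward prescribed Gaussian curvatures \bar{K}_i (a standard formulation; notation ours):

    \frac{du_i}{dt} = \bar{K}_i - K_i,

which drives the current curvatures K_i to the targets (zero at interior vertices for a flat multi-hole disk) and hence yields the conformal parameterization.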

Journal ArticleDOI
TL;DR: It is demonstrated that dense regions are simple but useful and statistically significant patterns that can be used to identify genes and/or samples of interest and to eliminate genes and/or samples corresponding to outliers, noise, or abnormalities.
Abstract: We propose and study the notion of dense regions for the analysis of categorized gene expression data and present some searching algorithms for discovering them. The algorithms can be applied to any categorical data matrices derived from gene expression level matrices. We demonstrate that dense regions are simple but useful and statistically significant patterns that can be used to 1) identify genes and/or samples of interest and 2) eliminate genes and/or samples corresponding to outliers, noise, or abnormalities. Some theoretical studies on the properties of the dense regions are presented which allow us to characterize dense regions into several classes and to derive tailor-made algorithms for different classes of regions. Moreover, an empirical simulation study on the distribution of the size of dense regions is carried out which is then used to assess the significance of dense regions and to derive effective pruning methods to speed up the searching algorithms. Real microarray data sets are employed to test our methods. Comparisons with six other well-known clustering algorithms using synthetic and real data are also conducted which confirm the superiority of our methods in discovering dense regions. The DRIFT code and a tutorial are available as supplemental material, which can be found on the Computer Society Digital Library at http://computer.org/tcbb/archives.htm.
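A toy numpy check of the core notion (this is not the paper's DRIFT code; the threshold and data are made up): the density of a candidate region is the fraction of its entries equal to one category, and regions far denser than that category's matrix-wide frequency are the interesting patterns.

```python
import numpy as np

def region_density(M, rows, cols, val):
    # fraction of entries in the (rows x cols) submatrix equal to category 'val'
    return np.mean(M[np.ix_(rows, cols)] == val)

rng = np.random.default_rng(5)
M = rng.integers(0, 3, (12, 10))          # categorical data matrix
M[2:7, 1:5] = 1                           # planted dense region
background = np.mean(M == 1)              # global frequency of category 1
dens = region_density(M, range(2, 7), range(1, 5), 1)
print(f"region density {dens:.2f} vs background {background:.2f}")
```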

Proceedings ArticleDOI
05 Aug 2007
TL;DR: A new, fully automatic technique for wire and scratch removal (inpainting) that works well in both textured and non-textured areas of an image.
Abstract: We present a new, fully automatic technique for wire and scratch removal (inpainting) that works well in both textured and non-textured areas of an image. [Chan et al. 2002] introduced a technique for inpainting using an Euler's elastica energy-based variational model that works well for repairing smooth areas of the image while maintaining edge detail. The technique is very slow (due to a stiff, 4th order PDE) and difficult to control. [Efros and Leung 1999] used texture synthesis techniques for inpainting and hole filling. This works well for areas of an image that contain repeating patterns.

Proceedings Article
08 Aug 2007
TL;DR: Variational PDE based inpainting techniques are proposed for use within the matting problem; such techniques are largely successful in inpainting geometric features into unknown regions in the presence of sharp discontinuities.
Abstract: While current matting algorithms work very well for some natural images, their performance is questionable in the presence of sharp discontinuities in the foreground and background regions. To counter the above problem, we propose to use variational PDE based inpainting techniques within the matting problem, that are largely successful in inpainting geometric features into unknown regions.


Journal ArticleDOI
01 Dec 2007-Pamm
TL;DR: This paper describes some links between the minimization of the total variation and the minimization of some binary energies.
Abstract: We consider the minimization of the total variation (TV), as proposed in [2]. A classical approach to minimizing TV is to solve its Euler-Lagrange equation via a partial differential equation (PDE). The maximum flow approach is extremely fast for solving binary or TV optimization problems [3]; however, its main drawback is that it requires building a graph, and applying this approach might be impossible when one works with large images or volumes. On the contrary, note that a PDE approach to solving a TV problem [2] does not require more memory than the image itself. Thus one can solve the PDE to find a global minimizer of the binary problem.

Proceedings Article
08 Aug 2007
TL;DR: A variational PDE based model for tracking objects under occlusion is presented, where the shape prior is combined with the image term using logical operations pertaining to a unique occlusion scenario, thus avoiding locally optimal solutions.
Abstract: We present a variational PDE based model for tracking objects under occlusion. Here, prior shape information is used to segment object boundaries that are occluded. The novelty in this work is that the shape prior is combined with the image term using logical operations pertaining to a unique occlusion scenario, thus avoiding locally optimal solutions. The model was tested on real and synthetic image sequences with promising results.

Book ChapterDOI
01 Jan 2007
TL;DR: The interpolation error for the H^1 wavelet interpolation problem is shown to be bounded by the second order of the local sizes of the interpolation regions in the wavelet domain.
Abstract: We rigorously study the error bound for the H^1 wavelet interpolation problem, which aims to recover missing wavelet coefficients based on minimizing the H^1 norm in physical space. Our analysis shows that the interpolation error is bounded by the second order of the local sizes of the interpolation regions in the wavelet domain.
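One standard way to state the interpolation problem analyzed here (notation ours): given the kept wavelet coefficients c_{j,k} for indices (j,k) in a set \mathcal{K}, the image is recovered by

    \min_{u}\ \int_{\Omega} |\nabla u|^{2}\,dx \quad \text{subject to} \quad \langle u, \psi_{j,k} \rangle = c_{j,k},\ \ (j,k) \in \mathcal{K},

i.e., the missing coefficients are filled in by the H^1 (Dirichlet energy) minimizer consistent with the kept ones.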

Journal ArticleDOI
01 Jul 2007
TL;DR: In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time as discussed by the authors, and one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically.
Abstract: It is often said that numerical simulation is third in the group of three ways to explore modern science: theory, experiment and simulation. Carefully executed modern numerical simulations can, however, be considered at least as relevant as experiment and theory. In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time. In comparison to theory, with numerical simulation one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically. Cyber-enabled scientific discovery is not just about numerical simulation but about every possible issue related to scientific discovery by utilizing cyberinfrastructure such as the analysis and storage of large data sets, the creation of tools that can be used by broad classes of researchers and, above all, the education and training of a cyber-literate workforce.

Proceedings ArticleDOI
12 Apr 2007
TL;DR: The proposed solution can process large (2048×2048 pixel) histology mouse brain images in under a minute, creating a faithful and sparse triangulation model containing only 1.8% of the original pixel count.
Abstract: We consider here the problem of detecting and modeling the essential features present in a biological image and the construction of a compact representation for them which is suitable for numerical computation. The solution we propose employs a variational energy minimization formulation to extract noise and texture, producing a clean image containing the geometric features of interest. Such image decomposition is essential to reduce the image complexity for further processing. We are particularly motivated by the image registration problem, where the goal is to align matching features in a pair of images. A combination of algorithms from combinatorial optimization and computational geometry renders fast solutions at interactive or near-interactive rates. We demonstrate our technique on microscopy images. We are able, for example, to process large, 2048×2048 pixel, histology mouse brain images in under a minute, creating a faithful and sparse triangulation model containing only 1.8% of the original pixel count. Models for 512×512 images are typically generated in less than 5 seconds with similarly reduced vertex counts. These results suggest the relevance of our approach for modeling biomedical images.
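A loose sketch of the sparse feature-triangulation idea (not the authors' pipeline; thresholds and sizes are arbitrary): keep only pixels with strong gradients after the decomposition step and triangulate them.

```python
import numpy as np
from scipy.spatial import Delaunay

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                                  # toy "cartoon" image
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
ys, xs = np.nonzero(mag > 0.25)                          # feature pixels near edges
pts = np.column_stack([xs, ys])[:: max(1, len(xs) // 400)]   # subsample vertices
pts = np.vstack([pts, [[0, 0], [0, 127], [127, 0], [127, 127]]])  # keep image corners
tri = Delaunay(pts)
print(f"{len(pts)} vertices ({len(pts) / img.size:.2%} of pixels), "
      f"{len(tri.simplices)} triangles")
```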