
Showing papers by "Takeo Kanade" published in 2011


Proceedings ArticleDOI
06 Nov 2011
TL;DR: CoSand is proposed, a distributed cosegmentation approach for a highly variable large-scale image collection. It exploits a strong theoretical property: the temperature under linear anisotropic diffusion is a submodular function, so a greedy algorithm guarantees at least a constant-factor approximation to the optimal solution for temperature maximization.
Abstract: The saliency of regions or objects in an image can be significantly boosted if they recur in multiple images. Leveraging this idea, cosegmentation jointly segments common regions from multiple images. In this paper, we propose CoSand, a distributed cosegmentation approach for a highly variable large-scale image collection. The segmentation task is modeled by temperature maximization on anisotropic heat diffusion, in which temperature maximization with finite K heat sources corresponds to a K-way segmentation that maximizes the segmentation confidence of every pixel in an image. We show that our method exploits a strong theoretical property: the temperature under linear anisotropic diffusion is a submodular function; therefore, a greedy algorithm guarantees at least a constant-factor approximation to the optimal solution for temperature maximization. Our theoretical result applies to scalable cosegmentation as well as diversity ranking and single-image segmentation. We evaluate CoSand on the MSRC and ImageNet datasets, showing both competitive performance relative to previous work and much superior scalability.
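The submodularity claim is what licenses a simple greedy selection of heat sources. Below is a minimal Python sketch of that generic greedy scheme, with a toy coverage objective standing in for the paper's diffusion temperature; the (1 - 1/e) guarantee for monotone submodular objectives is due to Nemhauser et al. (1978), and everything concrete here (objective, data, K) is an illustrative assumption, not the paper's code.

```python
import numpy as np

def greedy_max(candidates, K, objective):
    """Greedy maximization of a monotone submodular set function.

    For such functions the greedy solution is within (1 - 1/e) of the
    optimum, which is the kind of constant-factor guarantee CoSand
    relies on for temperature maximization.
    """
    selected = []
    for _ in range(K):
        best, best_gain = None, -np.inf
        for c in candidates:
            if c in selected:
                continue
            gain = objective(selected + [c]) - objective(selected)
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
    return selected

# Toy stand-in objective: pixel coverage by heat sources, which is
# monotone and submodular (NOT the paper's diffusion temperature).
coverage = {0: {1, 2, 3}, 1: {3, 4}, 2: {5, 6, 7, 8}, 3: {1, 8}}
objective = lambda S: len(set().union(*(coverage[s] for s in S)) if S else set())
print(greedy_max(list(coverage), K=2, objective=objective))  # -> [2, 0]
```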

310 citations


Journal ArticleDOI
TL;DR: A dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories is proposed and the Discrete Cosine Transform (DCT) is used as the object independent basis and it is demonstrated that it approaches Principal Component Analysis (PCA) for natural motions.
Abstract: Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes. These bases are object dependent and therefore have to be estimated anew for each video sequence. In contrast, we propose a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. We describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. We further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera. The principal advantage of expressing deforming 3D structure in trajectory space is that we can define an object independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. We propose the use of the Discrete Cosine Transform (DCT) as the object independent basis and empirically demonstrate that it approaches Principal Component Analysis (PCA) for natural motions. We report the performance of the proposed method, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions, including piecewise rigid motion, partially nonrigid motion (such as facial expressions), and highly nonrigid motion (such as a person walking or dancing).
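The object-independent basis idea is easy to make concrete: a smooth per-point trajectory over F frames is well approximated by its first K DCT coefficients, cutting the unknowns from F to K. A minimal sketch, where the toy trajectory and the choice K = 10 are illustrative assumptions:

```python
import numpy as np

def dct_basis(F, K):
    """First K orthonormal DCT-II basis vectors over F frames (columns)."""
    n = np.arange(F)
    B = np.cos(np.pi / F * (n[:, None] + 0.5) * np.arange(K)[None, :])
    B[:, 0] *= 1.0 / np.sqrt(F)
    B[:, 1:] *= np.sqrt(2.0 / F)
    return B  # F x K

F, K = 100, 10
t = np.linspace(0, 1, F)
traj = 0.5 * t + 0.2 * np.sin(2 * np.pi * t)   # toy smooth 1D trajectory
B = dct_basis(F, K)
coeffs = B.T @ traj                            # K unknowns instead of F
print(np.max(np.abs(B @ coeffs - traj)))       # small reconstruction error
```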

262 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter describes the problem space for facial expression analysis, which includes multiple dimensions: level of description, individual differences in subjects, transitions among expressions, intensity of facial expression, deliberate versus spontaneous expression, head orientation and scene complexity, image acquisition and resolution, reliability of ground truth, databases, and the relation to other facial behaviors or nonfacial behaviors.
Abstract: This chapter introduces recent advances in facial expression analysis and recognition. The first part discusses the general structure of automatic facial expression analysis (AFEA) systems. The second part describes the problem space for facial expression analysis. This space includes multiple dimensions: level of description, individual differences in subjects, transitions among expressions, intensity of facial expression, deliberate versus spontaneous expression, head orientation and scene complexity, image acquisition and resolution, reliability of ground truth, databases, and the relation to other facial behaviors or nonfacial behaviors. We note that most work to date has been confined to a relatively restricted region of this space. The last part of this chapter is devoted to a description of more specific approaches and the techniques used in recent advances. They include techniques for face acquisition, facial data extraction and representation, facial expression recognition, and multimodal expression analysis. The chapter concludes with a discussion assessing the current status, future possibilities, and open questions about automatic facial expression analysis.

231 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter reviews previously proposed algorithms for pose and illumination invariant face recognition and describes in detail two successful appearance-based algorithms for face recognition across pose: eigen light-fields and Bayesian face subregions.
Abstract: The last decade has seen automatic face recognition evolve from small-scale research systems to a wide range of commercial products. Driven by the FERET face database and evaluation protocol, the currently best commercial systems achieve verification accuracies comparable to those of fingerprint recognizers. In these experiments, only frontal face images taken under controlled lighting conditions were used. As the use of face recognition systems expands toward less restricted environments, the development of algorithms for view and illumination invariant face recognition becomes important. However, the performance of current algorithms degrades significantly when tested across pose and illumination, as documented in a number of evaluations. In this chapter, we review previously proposed algorithms for pose and illumination invariant face recognition. We then describe in detail two successful appearance-based algorithms for face recognition across pose: eigen light-fields and Bayesian face subregions. We furthermore show how both of these algorithms can be extended toward face recognition across pose and illumination.

168 citations


Proceedings ArticleDOI
05 Jun 2011
TL;DR: In this paper, a combination of topological and metric mapping is used to encode the coarse topology of the route as well as detailed metric information required for accurate localization, which achieves an average localization error of 2.7 m over this route.
Abstract: One of the fundamental requirements of an autonomous vehicle is the ability to determine its location on a map. Frequently, solutions to this localization problem rely on GPS information or use expensive three dimensional (3D) sensors. In this paper, we describe a method for long-term vehicle localization based on visual features alone. Our approach utilizes a combination of topological and metric mapping, which we call topometric localization, to encode the coarse topology of the route as well as detailed metric information required for accurate localization. A topometric map is created by driving the route once and recording a database of visual features. The vehicle then localizes by matching features to this database at runtime. Since individual feature matches are unreliable, we employ a discrete Bayes filter to estimate the most likely vehicle position using evidence from a sequence of images along the route. We illustrate the approach using an 8.8 km route through an urban and suburban environment. The method achieves an average localization error of 2.7 m over this route, with isolated worst case errors on the order of 10 m.
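The discrete Bayes filter the abstract mentions can be sketched in a few lines: the belief over discretized route positions is advanced by a forward-motion model and reweighted by feature-match evidence. The motion kernel, node count, and likelihood values below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def bayes_filter_step(belief, likelihood, motion_kernel=(0.1, 0.8, 0.1)):
    """One predict/update cycle over discretized positions along the route.

    belief        : P(position) over route nodes
    likelihood    : P(feature matches | position) from database matching
    motion_kernel : crude forward-motion model (stay / advance 1 / advance 2)
    """
    predicted = np.zeros_like(belief)
    for offset, p in enumerate(motion_kernel):        # predict
        predicted += p * np.roll(belief, offset)
    posterior = predicted * likelihood                # update
    return posterior / posterior.sum()

belief = np.full(100, 1.0 / 100)                      # uniform prior, 100 nodes
likelihood = np.full(100, 0.01); likelihood[42] = 0.9 # strong match at node 42
for _ in range(3):
    belief = bayes_filter_step(belief, likelihood)
print(belief.argmax())                                # concentrates near node 42
```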

143 citations


Book
01 Nov 2011

136 citations


Journal ArticleDOI
TL;DR: This paper proposes an effective approach for automated mitosis detection using phase-contrast time-lapse microscopy, which is a nondestructive imaging modality, thereby allowing continuous monitoring of cells in culture.
Abstract: Due to the enormous potential and impact that stem cells may have on regenerative medicine, there has been a rapidly growing interest in tools to analyze and characterize the behaviors of these cells in vitro in an automated and high throughput fashion. Among these behaviors, mitosis, or cell division, is important since stem cells proliferate and renew themselves through mitosis. However, current automated systems for measuring cell proliferation often require destructive or sacrificial methods of cell manipulation such as cell lysis or in vitro staining. In this paper, we propose an effective approach for automated mitosis detection using phase-contrast time-lapse microscopy, which is a nondestructive imaging modality, thereby allowing continuous monitoring of cells in culture. In our approach, we present a probabilistic model for event detection, which can simultaneously 1) identify spatio-temporal patch sequences that contain a mitotic event and 2) localize a birth event, defined as the time and location at which cell division is completed and two daughter cells are born. Our approach significantly outperforms previous approaches in terms of both detection accuracy and computational efficiency, when applied to multipotent C3H10T1/2 mesenchymal and C2C12 myoblastic stem cell populations.

130 citations


Proceedings ArticleDOI
05 Jan 2011
TL;DR: Several algorithms for cell image analysis including microscopy image restoration, cell event detection and cell tracking in a large population are presented, integrated into an automated system capable of quantifying cell proliferation metrics in vitro in real-time.
Abstract: We present several algorithms for cell image analysis including microscopy image restoration, cell event detection and cell tracking in a large population. The algorithms are integrated into an automated system capable of quantifying cell proliferation metrics in vitro in real-time. This offers unique opportunities for biological applications such as efficient cell behavior discovery in response to different cell culturing conditions and adaptive experiment control. We quantitatively evaluated our system's performance on 16 microscopy image sequences, with accuracy satisfactory for biologists' needs. We have also developed a public website compatible with the system's local user interface, thereby allowing biologists to conveniently check their experiment progress online. The website will serve as a community resource that allows other research groups to upload their cell images for analysis and comparison.

116 citations


Proceedings ArticleDOI
05 Dec 2011
TL;DR: A robust-weighted extrinsic calibration algorithm is proposed that is easy to implement, has small calibration error, and achieves calibration accuracy over 50% better than an existing state-of-the-art approach.
Abstract: Lidar and visual imagery have been broadly utilized in computer vision and mobile robotics applications because these sensors provide complementary information. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust-weighted extrinsic calibration algorithm that is easy to implement and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration data sets. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm, such as comparison of the RMS distance between the ground truth and the projected points, the effect of the number of lidar scans and images, and the effect of the pose and range of the calibration target. In the experiments, we show that our extrinsic calibration algorithm achieves calibration accuracy over 50% better than an existing state-of-the-art approach. To evaluate the generality of our algorithm, we also colorize point clouds with different pairs of lidars and cameras calibrated by our algorithm.
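A hedged sketch of the kind of cost the abstract describes: weighted point-to-line reprojection distances with a robust penalty to damp outliers. The Huber penalty and the `project` placeholder below are assumptions standing in for the paper's exact penalizing function and projection model.

```python
import numpy as np

def point_line_residual(p, a, b):
    """Perpendicular distance from image point p to the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    v = p - a
    return np.linalg.norm(v - (v @ d) * d)

def robust_cost(params, lidar_pts, lines, weights, project, delta=2.0):
    """Weighted, robustly penalized reprojection cost over all feature pairs.

    project(params, X) maps a 3D lidar point into the image under the
    candidate extrinsics (intrinsics assumed known); `weights` encodes how
    reliable each correspondence is, and the Huber-style penalty bounds
    the influence of outliers. Minimize over params with any optimizer.
    """
    cost = 0.0
    for X, (a, b), w in zip(lidar_pts, lines, weights):
        r = point_line_residual(project(params, X), a, b)
        cost += w * (0.5 * r**2 if r <= delta else delta * (r - 0.5 * delta))
    return cost
```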

103 citations


Proceedings ArticleDOI
09 Jun 2011
TL;DR: A cell tracking method is proposed based on global spatio-temporal data association, which considers hypotheses of initialization, termination, translation, division, and false positives in an integrated formulation, and which solves the global association for tree structures as a maximum-a-posteriori (MAP) problem via linear programming.
Abstract: Automated cell tracking in populations is important for research and discovery in biology and medicine. In this paper, we propose a cell tracking method based on global spatio-temporal data association, which considers hypotheses of initialization, termination, translation, division, and false positives in an integrated formulation. First, reliable tracklets (i.e., short trajectories) are generated by linking detection responses based on frame-by-frame association. Next, these tracklets are globally associated over time to obtain final cell trajectories and lineage trees. During global association, tracklets form tree structures where a mother cell divides into two daughter cells. We formulate the global association for tree structures as a maximum-a-posteriori (MAP) problem and solve it by linear programming. This approach is quantitatively evaluated on sequences with thousands of cells captured over several days.
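A toy version of the association step, to show the structure: each hypothesis (link, division, initialization, termination, false positive) gets a log-likelihood score, and constraints force every tracklet head and tail to be explained exactly once; the MAP assignment is then the LP below. The scores and constraint matrix are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# 3 tracklets; hypotheses: link(0->1), link(0->2), division(0->1,2),
# terminate(0), initialize(1), initialize(2). Scores are made up.
scores = np.array([2.0, 1.5, 3.2, 0.1, 0.3, 0.3])
A_eq = np.array([
    [1, 1, 1, 1, 0, 0],   # tail of tracklet 0 explained exactly once
    [1, 0, 1, 0, 1, 0],   # head of tracklet 1 explained exactly once
    [0, 1, 1, 0, 0, 1],   # head of tracklet 2 explained exactly once
], dtype=float)
# Maximize total score = minimize its negation; LP relaxation on [0, 1].
res = linprog(-scores, A_eq=A_eq, b_eq=np.ones(3), bounds=[(0, 1)] * 6)
print(np.round(res.x))    # division hypothesis wins: [0. 0. 1. 0. 0. 0.]
```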

98 citations


Proceedings ArticleDOI
09 May 2011
TL;DR: This work reformulates the traditional least squares solution to allow the fast computation of surface normals, and proposes a new approach that obtains the normals by calculating the derivatives of the surface from a spherical range image.
Abstract: The fast and accurate computation of surface normals from a point cloud is a critical step for many 3D robotics and automotive problems, including terrain estimation, mapping, navigation, object segmentation, and object recognition. To obtain the tangent plane to the surface at a point, the traditional approach applies total least squares to its small neighborhood. However, least squares becomes computationally very expensive when applied to the millions of measurements per second that current range sensors can generate. We reformulate the traditional least squares solution to allow the fast computation of surface normals, and propose a new approach that obtains the normals by calculating the derivatives of the surface from a spherical range image. Furthermore, we show that the traditional least squares problem is very sensitive to range noise and must be normalized to obtain accurate results. Experimental results with synthetic and real data demonstrate that our proposed method is not only more efficient by up to two orders of magnitude, but provides better accuracy than the traditional least squares for practical neighborhood sizes.
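A simplified sketch of the spherical-range-image idea: back-project each pixel to 3D and cross finite-difference tangents along the image grid, giving a per-pixel normal with no least-squares solve. The paper goes further and derives a closed form directly from the range derivatives; the toy wall scene below is an assumption for demonstration.

```python
import numpy as np

def normals_from_range_image(rng, az, el):
    """Per-pixel surface normals from a spherical range image.

    rng : H x W ranges; az, el : 1D azimuth/elevation angle grids.
    Tangents along the grid come from image-space finite differences,
    and their cross product gives the (unnormalized) surface normal.
    """
    A, E = np.meshgrid(az, el)
    pts = np.stack([rng * np.cos(E) * np.cos(A),
                    rng * np.cos(E) * np.sin(A),
                    rng * np.sin(E)], axis=-1)       # H x W x 3 points
    du = np.gradient(pts, axis=1)                    # tangent along azimuth
    dv = np.gradient(pts, axis=0)                    # tangent along elevation
    n = np.cross(du, dv)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Toy scene: a planar wall at x = 5 m in front of the sensor.
az = np.linspace(-0.4, 0.4, 64); el = np.linspace(-0.2, 0.2, 32)
A, E = np.meshgrid(az, el)
rng = 5.0 / (np.cos(E) * np.cos(A))                  # range to the plane x = 5
n = normals_from_range_image(rng, az, el)
print(n[16, 32])                                     # ~ +/-[1, 0, 0], the wall normal
```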

Journal ArticleDOI
TL;DR: It is suggested that simple uniform distributions of growth factors immobilized to an extracellular matrix material may be as effective in directing cell migration into a wound site as more complex patterns with concentration gradients.

Journal Article
TL;DR: A short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment and can be used to augment the perception of the user in complex dynamic environments.
Abstract: This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13 cm, the device is head-mountable and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: An approach is proposed to identify and segment objects from scenes that a person (or robot) encounters in Activities of Daily Living (ADL), by linking pieces of visual information from multiple images and extracting consistent patterns.
Abstract: We propose an approach to identify and segment objects from scenes that a person (or robot) encounters in Activities of Daily Living (ADL). Images collected in those cluttered scenes contain multiple objects. Each image provides only a partial, possibly very different view of each object. An object instance discovery program must be able to link pieces of visual information from multiple images and extract consistent patterns.

Proceedings ArticleDOI
16 May 2011
TL;DR: This paper proposes to integrate LIDAR data directly into the stereo algorithm to reduce false positives while increasing the density of the resulting disparity image on textureless regions, and demonstrates with extensive experimental results that the disparity estimation is substantially improved while speeding up the stereo computation.
Abstract: The fusion of stereo and laser range finders (LIDARs) has been proposed as a method to compensate for each individual sensor's deficiencies - stereo output is dense, but noisy for large distances, while LIDAR is more accurate, but sparse. However, stereo usually performs poorly on textureless areas and on scenes containing repetitive structures, and the subsequent fusion with LIDAR leads to a degraded estimation of the 3D structure. In this paper, we propose to integrate LIDAR data directly into the stereo algorithm to reduce false positives while increasing the density of the resulting disparity image on textureless regions. We demonstrate with extensive experiments on real data that the disparity estimation is substantially improved while speeding up the stereo computation by as much as a factor of five.
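One simple way to realize this integration, sketched under assumptions: sparse LIDAR disparities (already projected into the reference image and optionally interpolated) clamp the per-pixel disparity search, which both suppresses false matches on textureless or repetitive regions and shrinks the search space. This is an illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def lidar_guided_disparity(costs, lidar_disp, radius=3):
    """Winner-take-all disparity with the search range constrained by LIDAR.

    costs      : H x W x D stereo matching-cost volume
    lidar_disp : H x W disparities from projected LIDAR points
                 (NaN where no LIDAR support exists)
    """
    H, W, D = costs.shape
    disp = np.argmin(costs, axis=2).astype(float)        # unconstrained fallback
    ys, xs = np.where(np.isfinite(lidar_disp))
    for y, x in zip(ys, xs):
        lo = max(0, int(lidar_disp[y, x]) - radius)      # narrow window around
        hi = min(D, int(lidar_disp[y, x]) + radius + 1)  # the LIDAR estimate
        disp[y, x] = lo + np.argmin(costs[y, x, lo:hi])
    return disp
```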

Proceedings ArticleDOI
01 Nov 2011
TL;DR: The results show that the method outperforms other state-of-the-art iris estimation methods and is competitive with a commercial product that uses infrared light, with respect to both accuracy and robustness.
Abstract: Gaze estimation is a key technology for understanding a person's interests and intents, and it is becoming more popular in daily situations such as driving scenarios. Wearable gaze estimation devices are used for long periods of time, so active light sources are not desirable from a safety point of view. Gaze estimation that does not rely on an active source is performed by locating the iris position. To estimate the iris position accurately, most studies use ellipse fitting in which the ellipse is defined by five parameters (position (x, y), rotation angle, semi-major axis, and semi-minor axis). We claim that, for iris position estimation, five parameters are redundant because they might be influenced by non-iris edges. Therefore, we propose to use two parameters (position) by introducing a 3D eye model (the transformation between the eye and camera coordinates and the eyeball/iris size). Given the 3D eye model, the projected ellipse that represents the iris shape can be specified by position alone under a weak-perspective approximation. We quantitatively evaluate our method on both iris position and gaze estimation. Our results show that our method outperforms other state-of-the-art iris estimation methods and is competitive with a commercial product that uses infrared light, with respect to both accuracy and robustness.
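A sketch of why two parameters suffice: with a calibrated 3D eye model under weak perspective, the full ellipse follows from the iris's 2D position alone. The eyeball/iris radii and the scale below are rough illustrative values (not the paper's), and the axis convention is an assumption.

```python
import numpy as np

def iris_ellipse_from_position(p, c0, s, r_eye=12.0, r_iris=6.0):
    """Projected-iris ellipse parameters from the 2D iris position alone.

    p  : 2D iris center in the image      c0 : projected eyeball center
    s  : weak-perspective scale           radii : model constants in mm
         (rough anatomical averages, purely for illustration)
    """
    r_d = np.sqrt(r_eye**2 - r_iris**2)               # eyeball center -> iris center
    d = p - c0
    sin_t = min(np.linalg.norm(d) / (s * r_d), 1.0)   # eye rotation angle
    cos_t = np.sqrt(1.0 - sin_t**2)
    angle = np.arctan2(d[1], d[0])                    # minor axis lies along d
    return dict(center=p, semi_major=s * r_iris,
                semi_minor=s * r_iris * cos_t, angle=angle)

print(iris_ellipse_from_position(p=np.array([105.0, 100.0]),
                                 c0=np.array([100.0, 100.0]), s=2.0))
```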

Proceedings ArticleDOI
20 Jun 2011
TL;DR: To alleviate the sensitivity issue in estimating and decomposing the radial fundamental matrix, this work proposes an optimization approach that guarantees a feasible solution by using a prior on the principal points.
Abstract: We propose a method for geometric calibration of an active vision system, composed of a projector and a camera, using structured light projection. Unlike existing methods of self-calibration for projector-camera systems, our method estimates the intrinsic parameters of both the projector and the camera as well as the extrinsic parameters, except for a global scale, without any calibration apparatus such as a checker-pattern board. Our method is based on the decomposition of a radial fundamental matrix into intrinsic and extrinsic parameters. Dense and accurate correspondences are obtained utilizing structured light patterns consisting of Gray code and phase-shifting sinusoidal code. To alleviate the sensitivity issue in estimating and decomposing the radial fundamental matrix, we propose an optimization approach that guarantees a feasible solution by using a prior on the principal points. We demonstrate the stability of our method using several examples and evaluate the system quantitatively and qualitatively.

Journal ArticleDOI
TL;DR: A gyro-aided feature tracking method that remains robust under fast camera–ego rotation conditions and an eight-degree-of-freedom affine photometric warping model enables the KLT to cope with camera rolling and illumination change in an outdoor setting is presented.
Abstract: When a camera rotates rapidly or shakes severely, a conventional KLT (Kanade-Lucas-Tomasi) feature tracker becomes vulnerable to large inter-image appearance changes. Tracking fails in the KLT optimization step, mainly due to an inadequate initial condition, namely the final image warping from the previous frame. In this paper, we present a gyro-aided feature tracking method that remains robust under fast camera-ego rotation conditions. The knowledge of the camera's inter-frame rotation, obtained from gyroscopes, provides an improved initial warping condition, which is more likely to lie within the convergence region of the original KLT. Moreover, the use of an eight-degree-of-freedom affine photometric warping model enables the KLT to cope with camera rolling and illumination change in an outdoor setting. For automatic incorporation of sensor measurements, we also propose a novel camera/gyro auto-calibration method which can be applied in an in-situ or on-the-fly fashion. Only a set of feature tracks of natural landmarks is needed in order to simultaneously recover intrinsic and extrinsic parameters for both sensors. We provide a simulation evaluation for our auto-calibration method and demonstrate enhanced tracking performance for real scenes with aid from low-cost microelectromechanical system gyroscopes. To alleviate the heavy computational burden required for high-order warping, our publicly available GPU implementation is discussed for tracker parallelization.
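The improved initial condition can be made concrete with the infinite homography: for a pure inter-frame rotation R (integrated from calibrated gyro rates), pixels move by K R K^-1. A minimal sketch with an assumed intrinsic matrix; the paper's full eight-degree-of-freedom affine photometric warp is not reproduced here.

```python
import numpy as np

def gyro_initial_warp(K, omega, dt):
    """Predict the inter-frame homography from gyro rates (Rodrigues formula).

    omega : calibrated angular velocity (rad/s) in the camera frame.
    For pure rotation R between frames, points move by H = K R K^-1;
    feeding H to KLT as the initial warp keeps the optimization inside
    its convergence region even under fast rotation.
    """
    theta = np.asarray(omega, dtype=float) * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        R = np.eye(3)
    else:
        k = theta / angle
        Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx
    return K @ R @ np.linalg.inv(K)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
H = gyro_initial_warp(K, omega=[0.0, 1.5, 0.0], dt=1 / 30)   # fast pan
print(H @ np.array([320.0, 240.0, 1.0]))   # principal point shifts ~40 px
```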

Journal ArticleDOI
TL;DR: A method of randomized subset-based matching can effectively handle outliers and recover the correct object shape on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions.
Abstract: Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesize-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.
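A minimal hypothesize-and-test sketch in the spirit of the paper: random subsets of detections propose a full-shape hypothesis, and all detections score it with a robust error. For brevity the hypothesis step below fits a similarity transform of a mean shape (plain Procrustes), not the paper's Bayesian shape-and-pose inference; all names and tolerances are assumptions.

```python
import numpy as np

def align_shape_ransac(detections, mean_shape, n_iters=500, subset=3, tol=5.0):
    """Randomized subset-based shape alignment (simplified).

    detections : N x 2 detected feature points, possibly with outliers
    mean_shape : N x 2 model points in a canonical frame
    """
    rng = np.random.default_rng(0)
    best_shape, best_err = None, np.inf
    for _ in range(n_iters):
        idx = rng.choice(len(detections), subset, replace=False)
        # Hypothesis: similarity transform fit on the random subset (Procrustes).
        X, Y = mean_shape[idx], detections[idx]
        Xc, Yc = X - X.mean(0), Y - Y.mean(0)
        U, S, Vt = np.linalg.svd(Xc.T @ Yc)
        R = U @ Vt                                   # rotation (reflections ignored)
        s = S.sum() / (Xc**2).sum()                  # scale
        t = Y.mean(0) - s * X.mean(0) @ R
        pred = s * mean_shape @ R + t                # predicted full shape
        # Test: truncated prediction error over ALL detections.
        err = np.minimum(np.linalg.norm(pred - detections, axis=1), tol).sum()
        if err < best_err:
            best_shape, best_err = pred, err
    return best_shape
```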

Proceedings ArticleDOI
09 May 2011
TL;DR: A drift-free attitude estimation method that uses image line segments for the correction of accumulated errors in integrated gyro rates when an unmanned aerial vehicle (UAV) operates in urban areas and introduces a new Kalman update step that directly uses line segments rather than vanishing points.
Abstract: We present a drift-free attitude estimation method that uses image line segments for the correction of accumulated errors in integrated gyro rates when an unmanned aerial vehicle (UAV) operates in urban areas. Since man-made environments generally exhibit strong regularity in structure, a set of line segments that are either parallel or orthogonal to the gravitational direction can provide visual measurements of the absolute attitude from a calibrated camera. Line segments are robustly classified under the assumption that a single vertical vanishing point or multiple horizontal vanishing points exist. In the fusion with gyro angles, we introduce a new Kalman update step that directly uses line segments rather than vanishing points. Simulations and experiments based on urban images at distant views demonstrate that our method can serve as a robust visual attitude sensor for aerial robot navigation.
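The core geometric fact can be sketched directly: for a calibrated camera, vertical scene lines vanish at v ~ K g, where g is the gravity direction in camera coordinates, so roll and pitch can be read off from the vertical vanishing point. The axis and angle conventions below (x right, y down, z forward) and the sign disambiguation are assumptions for illustration.

```python
import numpy as np

def roll_pitch_from_vertical_vp(v, K):
    """Absolute roll and pitch from the vertical vanishing point.

    Back-projecting the vanishing point through K^-1 recovers the gravity
    direction in the camera frame (up to sign, resolved here by assuming
    a roughly upright camera); roll and pitch then follow directly.
    """
    g = np.linalg.inv(K) @ np.array([v[0], v[1], 1.0])
    g /= np.linalg.norm(g)
    if g[1] < 0:                      # make g point "down" in camera coords
        g = -g
    roll = np.arctan2(-g[0], g[1])
    pitch = np.arcsin(np.clip(g[2], -1.0, 1.0))
    return roll, pitch

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(roll_pitch_from_vertical_vp(v=(320.0, 2000.0), K=K))  # zero roll, pitched down
```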

Proceedings ArticleDOI
01 Dec 2011
TL;DR: The cell tracking system can track individual cells during the healing process and provide detailed spatio-temporal measurements of cell behaviors, and demonstrates the effectiveness of automatic cell tracking for quantitative and detailed analysis of the cell behaviors in wound healing assay in vitro.
Abstract: The wound healing assay in vitro is widely used for research and discovery in biology and medicine. This assay allows for observing the healing process in vitro in which the cells on the edges of the artificial wound migrate toward the wound area. The influence of different culture conditions can be measured by observing the change in the size of the wound area. For further investigation, more detailed measurements of the cell behaviors are required. In this paper, we present an application of automatic cell tracking in phase-contrast microscopy images to wound healing assay. The cell behaviors under three different culture conditions have been analyzed. Our cell tracking system can track individual cells during the healing process and provide detailed spatio-temporal measurements of cell behaviors. The application demonstrates the effectiveness of automatic cell tracking for quantitative and detailed analysis of the cell behaviors in wound healing assay in vitro.

Journal ArticleDOI
01 Mar 2011
TL;DR: This paper proposes a method, based on multi-view stereo, for measuring the anatomical feature cross-sections of the foot while walking, and shows that the proposed method achieves accuracy similar to existing 3D static foot scanners.
Abstract: Measurement of the human body is useful for ergonomic design in manufacturing. We aim to accurately measure the shapes of human feet for the design of shoes, for which measuring the dynamic shape of the foot in motion is important because the foot deforms while walking or running. In this paper, we propose a method for measuring the anatomical feature cross-sections of the foot while walking. Although these feature cross-sections are basic and widely used, their dynamic shapes have not previously been measured. Our proposed method is based on the multi-view stereo method. The target cross-sections are painted in individual colors (red, green, yellow and blue). Several nonlinear conditions are introduced in the process to find consistent correspondences across all images. We show that the proposed method for dynamic measurement achieves accuracy similar to existing 3D static foot scanners.

Journal ArticleDOI
16 Nov 2011 - PLOS ONE
TL;DR: An automated vision-based system that monitors the degree of cell confluency has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization, and quality assurance/quality control, and it could be integrated with current and developing robotic cell culture systems to achieve complete automation.
Abstract: Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and developing robotic cell culture systems to achieve complete automation.

Proceedings ArticleDOI
09 Jun 2011
TL;DR: Extensive experiments on 48 C2C12 myoblastic stem cell populations under four different conditions demonstrate that a recent mitosis detection algorithm significantly improves state-of-the-art cell tracking systems.
Abstract: Automated visual-tracking systems of stem cell populations in vitro allow for high-throughput analysis of time-lapse phase-contrast microscopy. In these systems, detection of mitosis, or cell division, is critical to tracking performance, as mitosis causes branching of the trajectory of a mother cell into the two trajectories of its daughter cells. Recently, one mitosis detection algorithm demonstrated success in detecting the time and location at which two daughter cells first clearly appear as a result of mitosis. This detection result therefore helps trajectories bifurcate correctly and reveals the relations between mother and daughter cells. In this paper, we demonstrate through extensive experiments on 48 C2C12 myoblastic stem cell populations under four different conditions that this recent mitosis detection algorithm significantly improves state-of-the-art cell tracking systems.

Journal ArticleDOI
TL;DR: The proposed lens system has multiple aperture stops, which enable it to capture multidirectional parallel light rays, while a conventional telecentric lens has only one aperture stop and can capture only light rays that are perpendicular to the lens.
Abstract: We present a telecentric lens that is able to gain 3D information. The proposed lens system has multiple aperture stops, which enable it to capture multidirectional parallel light rays, while a conventional telecentric lens has only one aperture stop and can capture only light rays that are perpendicular to the lens. We explain the geometry of the multiaperture telecentric system and show that correspondences fall on a line, like those in conventional stereo. As it is a single-lens sensor, we also introduce the principles of 3D reconstruction. Unlike a conventional stereo camera, the disparity of a scene point measured by the proposed lens system is linearly proportional to its depth.
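The contrast with conventional stereo can be written in one line. With focal length $f$ and baseline $B$, conventional disparity decays with depth $Z$, whereas each telecentric aperture admits a parallel ray bundle at a fixed angle $\theta_i$ to the optical axis, so (up to a system-dependent constant and the choice of zero-disparity plane, both assumptions in this reconstruction) disparity grows linearly with depth:

$$
d_{\text{conventional}} = \frac{fB}{Z},
\qquad
d_{\text{telecentric}} = Z\,(\tan\theta_1 - \tan\theta_2) \;\propto\; Z.
$$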

Journal ArticleDOI
23 Nov 2011 - Langmuir
TL;DR: A microfluidic device with chemically patterned stripes of the cell adhesion molecule P-selectin was designed and the behavior of HL-60 cells rolling under flow was analyzed using a high-resolution visual tracking system.
Abstract: Cell separation technology is a key tool for biological studies and medical diagnostics that relies primarily on chemical labeling to identify particular phenotypes. An emergent method of sorting cells based on differential rolling on chemically patterned substrates holds potential benefits over existing technologies, but the underlying mechanisms being exploited are not well characterized. In order to better understand cell rolling on complex surfaces, a microfluidic device with chemically patterned stripes of the cell adhesion molecule P-selectin was designed. The behavior of HL-60 cells rolling under flow was analyzed using a high-resolution visual tracking system. This behavior was then correlated to a number of established predictive models. The combination of computational modeling and widely available fabrication techniques described herein represents a crucial step toward the successful development of continuous, label-free methods of cell separation based on rolling adhesion.

Book ChapterDOI
03 Jul 2011
TL;DR: The restoration from multiple shear directions decreases the ambiguity among different individual restorations and results in restored DIC microscopy images that are directly proportional to the specimen's physical measurements, which is very amenable to microscopy image analysis such as cell segmentation.
Abstract: Differential Interference Contrast (DIC) microscopy is a non-destructive imaging modality that has been widely used by biologists to capture microscopy images of live biological specimens. However, as a qualitative technique, DIC microscopy records the specimen's physical properties in an indirect way by mapping the gradient of the specimen's optical path length (OPL) into the image intensity. In this paper, we propose to restore DIC microscopy images by quantitatively estimating the specimen's OPL from a collection of DIC images captured from multiple shear directions. We acquire the DIC images by rotating the specimen dish on the microscope stage and design an Iterative Closest Point algorithm to register the images. The shear directions of the image dataset are automatically estimated by our coarse-to-fine grid search algorithm. We develop a direct solver on a regularized quadratic cost function to restore DIC microscopy images. The restoration from multiple shear directions decreases the ambiguity among different individual restorations. The restored DIC images are directly proportional to the specimen's physical measurements, which is very amenable to microscopy image analysis such as cell segmentation.
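The regularized quadratic cost admits a direct (non-iterative) solve via the normal equations, which is the structure the abstract describes. A sketch under assumptions: the imaging operator A is left as a placeholder (identity in the toy usage) rather than the stacked DIC imaging operators, and the Laplacian smoothness prior and weight lam are illustrative choices.

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

def restore_quadratic(b, A, lam=1.0):
    """Direct solve of min_f ||A f - b||^2 + lam ||L f||^2 for a flattened image f.

    L is a discrete 2D Laplacian smoothness prior; the minimizer solves the
    normal equations (A^T A + lam L^T L) f = A^T b exactly, in one step.
    """
    side = int(np.sqrt(A.shape[1]))
    D = diags([1, -2, 1], [-1, 0, 1], shape=(side, side))
    L = kron(identity(side), D) + kron(D, identity(side))   # 2D Laplacian
    H = (A.T @ A + lam * (L.T @ L)).tocsc()
    return spsolve(H, A.T @ b)

# Toy usage: identity "imaging" operator, noisy observation of a smooth ramp.
side = 32
A = identity(side * side, format="csr")
truth = np.tile(np.linspace(0, 1, side), side)
b = truth + 0.1 * np.random.default_rng(0).standard_normal(truth.size)
f = restore_quadratic(b, A)
print(np.abs(f - truth).mean(), np.abs(b - truth).mean())   # restored vs raw error
```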

Proceedings ArticleDOI
09 Jun 2011
TL;DR: A restoration algorithm based on the microscopy imaging model is proposed that considers temporal consistency when restoring time-lapse microscopy image sequences, yielding artifact-free microscopy images.
Abstract: Phase contrast and differential interference contrast, which are used to capture microscopy images of living cells, contain artifacts such as the halo and shadow-cast effects. Removing these artifacts from microscopy images facilitates automated microscopy image analysis, such as the cell segmentation that is a critical step in cell tracking systems. We propose a restoration algorithm based on the microscopy imaging model and consider temporal consistency when restoring time-lapse microscopy image sequences. The artifact-free microscopy images are restored by minimizing a regularized quadratic cost function that adapts to input image properties. Our method achieves high segmentation accuracy and low computational cost compared to previous methods.

Proceedings ArticleDOI
21 Mar 2011
TL;DR: An approach to robustly align facial features to a face image even when the face is partially occluded is presented, which relies on explicit multi-modal representation of the response from each of the face feature detectors and RANSAC hypothesize-and-test search for the correct alignment over subset samplings of those in the feature response modes.
Abstract: In this paper we present an approach to robustly align facial features to a face image even when the face is partially occluded. Previous methods are vulnerable to partial occlusion of the face, since it is assumed, explicitly or implicitly, that there is no significant occlusion. In order to cope with this difficulty, our approach relies on two schemes: one is explicit multi-modal representation of the response from each of the face feature detectors, and the other is RANSAC hypothesize-and-test search for the correct alignment over subset samplings of those in the feature response modes. We evaluated the proposed method on a large number of facial images, occluded and non-occluded. The results demonstrated that the alignment is accurate and stable over a wide range of degrees and variations of occlusion.

Proceedings ArticleDOI
05 Jan 2011
TL;DR: An image indexing and matching algorithm that relies on selecting distinctive high dimensional features that compares favorably with the state of the art in image matching tasks on the University of Kentucky Recognition Benchmark dataset and on an indoor localization dataset.
Abstract: In this paper we propose an image indexing and matching algorithm that relies on selecting distinctive high dimensional features. In contrast with conventional techniques that treat all features equally, we claim that one can benefit significantly from focusing on distinctive features. We propose a bag-of-words algorithm that incorporates feature distinctiveness into visual vocabulary generation. Our approach compares favorably with the state of the art in image matching tasks on the University of Kentucky Recognition Benchmark dataset and on an indoor localization dataset. We also show that our approach scales up more gracefully on a large scale Flickr dataset.
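A minimal sketch of distinctiveness weighting in a bag-of-words pipeline. For simplicity an idf-style per-word weight rescales the histogram after assignment, whereas the paper folds distinctiveness into vocabulary generation itself; all data and weights below are synthetic.

```python
import numpy as np

def bow_histogram(desc, vocab, distinctiveness):
    """Distinctiveness-weighted bag-of-words vector for one image.

    desc            : N x D local descriptors
    vocab           : V x D visual-word centers
    distinctiveness : V per-word weights (e.g. idf-style scores)
    """
    # Hard-assign each descriptor to its nearest visual word.
    d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(1), minlength=len(vocab)).astype(float)
    hist *= distinctiveness                      # emphasize distinctive words
    return hist / (np.linalg.norm(hist) + 1e-12)

rng = np.random.default_rng(0)
vocab = rng.standard_normal((100, 32))           # toy vocabulary
img_a, img_b = rng.standard_normal((2, 50, 32))  # toy descriptor sets
w = rng.uniform(0.5, 2.0, size=100)              # stand-in distinctiveness
score = bow_histogram(img_a, vocab, w) @ bow_histogram(img_b, vocab, w)
print(score)                                     # cosine similarity for ranking
```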