
Showing papers by "Ishwar K. Sethi published in 2006"


Journal ArticleDOI
TL;DR: This paper proposes a new active learning approach, confidence-based active learning, based on identifying and annotating uncertain samples, which takes advantage of current classifiers' probability preserving and ordering properties and is robust without additional computational effort.
Abstract: This paper proposes a new active learning approach, confidence-based active learning, for training a wide range of classifiers. This approach is based on identifying and annotating uncertain samples. The uncertainty value of each sample is measured by its conditional error. The approach takes advantage of current classifiers' probability preserving and ordering properties. It calibrates the output scores of classifiers to conditional error. Thus, it can estimate the uncertainty value for each input sample according to its output score from a classifier and select only samples with uncertainty value above a user-defined threshold. Even though we cannot guarantee the optimality of the proposed approach, we find it to provide good performance. Compared with existing methods, this approach is robust without additional computational effort. A new active learning method for support vector machines (SVMs) is implemented following this approach. A dynamic bin width allocation method is proposed to accurately estimate sample conditional error, and this method adapts to the underlying probabilities. The effectiveness of the proposed approach is demonstrated using synthetic and real data sets, and its performance is compared with the widely used least certain active learning method.
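The selection rule the abstract describes can be sketched concretely. The snippet below is a minimal illustration rather than the authors' implementation: it calibrates SVM decision scores to conditional error with equal-frequency bins (a stand-in for the paper's dynamic bin width allocation) and then flags pool samples whose estimated error exceeds a user-chosen threshold. The function names, bin count, threshold, and synthetic data are all assumptions made for this example.

```python
import numpy as np
from sklearn.svm import SVC

def calibrate_error_by_bins(scores, labels, preds, n_bins=10):
    """Estimate conditional error as a function of classifier score using
    equal-frequency (quantile) bins, a stand-in for the paper's dynamic
    bin width allocation."""
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover scores outside the labeled range
    bin_idx = np.digitize(scores, edges[1:-1])
    errors = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            errors[b] = np.mean(preds[mask] != labels[mask])
    return edges, errors

def select_uncertain(clf, edges, errors, X_pool, threshold=0.2):
    """Return indices of pool samples whose estimated conditional error
    exceeds the user-defined threshold; these are the candidates to annotate."""
    bin_idx = np.digitize(clf.decision_function(X_pool), edges[1:-1])
    return np.where(errors[bin_idx] > threshold)[0]

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 2))
y_lab = (X_lab[:, 0] + X_lab[:, 1] > 0).astype(int)
X_pool = rng.normal(size=(1000, 2))

clf = SVC(kernel="linear").fit(X_lab, y_lab)
edges, errors = calibrate_error_by_bins(clf.decision_function(X_lab), y_lab, clf.predict(X_lab))
to_label = select_uncertain(clf, edges, errors, X_pool)   # indices to send for annotation
```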

223 citations


Journal ArticleDOI
TL;DR: A new classifier design methodology is proposed for building classifiers with controlled confidence; it calibrates the output scores of current classifiers to conditional error or error rates and takes advantage of well-developed classifiers' probability preserving and ordering properties.
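As a rough illustration of what "controlled confidence" can mean in practice, the sketch below applies a calibrated score-to-error map and abstains whenever the estimated conditional error exceeds a target. The exponential error map, the threshold, and the function name are placeholders, not values from the paper.

```python
import numpy as np

def predict_with_controlled_error(scores, score_to_error, max_error=0.2):
    """Emit a class label only when the calibrated conditional-error estimate
    is within the target; otherwise return -1 (abstain / defer the sample)."""
    errors = np.asarray([score_to_error(s) for s in scores])
    decisions = (scores > 0).astype(int)     # sign-based label for a binary scorer
    decisions[errors > max_error] = -1       # -1 marks a rejected / deferred sample
    return decisions

# Placeholder calibration map: error shrinks as |score| grows.
score_to_error = lambda s: float(np.exp(-abs(s)))
print(predict_with_controlled_error(np.array([2.3, 0.1, -1.7]), score_to_error))  # [ 1 -1  0]
```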

64 citations


Journal Article
TL;DR: This paper presents an accurate technique for automatic detection of fiducial markers in 3D brain images so that fully automatic landmark-based coregistration can be implemented.
Abstract: In this paper we present an accurate technique for automatic detection of fiducial markers in 3D brain images so that fully automatic landmark-based coregistration can be implemented. In our tests, our approach successfully detected 429 out of 430 fiducial markers that can be recognized by the human eye. Thus, in landmark-based image registration, our template-based technique can eliminate the step in which users manually pick fiducial landmarks in the images.
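The abstract does not spell out the detection procedure, so the following is only a minimal sketch of one common template-based approach: score every position of a small 3D marker template against the volume with normalized cross-correlation and keep high-scoring local maxima. The brute-force scan, the correlation threshold, and the function names are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_fiducials(volume, template, corr_threshold=0.8):
    """Brute-force template matching: normalized cross-correlation of a small
    3D template at every valid position, keeping local maxima above a threshold."""
    dz, dy, dx = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    score = np.full(volume.shape, -1.0)
    for z in range(volume.shape[0] - dz + 1):
        for y in range(volume.shape[1] - dy + 1):
            for x in range(volume.shape[2] - dx + 1):
                patch = volume[z:z + dz, y:y + dy, x:x + dx]
                p = (patch - patch.mean()) / (patch.std() + 1e-8)
                score[z, y, x] = np.mean(p * t)      # correlation coefficient in [-1, 1]
    # A voxel is a detection if it is a local maximum of the score map
    # and its correlation exceeds the threshold.
    peaks = (score == maximum_filter(score, size=template.shape)) & (score > corr_threshold)
    return np.argwhere(peaks)                        # (z, y, x) marker candidates
```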

12 citations


Journal ArticleDOI
01 Jan 2006-Scopus
TL;DR: An image processing technique for fast, automatic, and accurate fiducial detection is presented; it has two major steps: edge map construction and curvature-based object detection.
Abstract: Point-based registration is one of the most popular registration methods in practice. During registration, we use external fiducial markers that are rigidly attached through the skin to the skull. Determining the coordinates of the fiducials without human selection is crucial for a fully automatic point-based image registration. In this paper, an image processing technique for automatic fiducial detection is presented. The technique, which is fast, automatic, and accurate, has two major steps: edge map construction and curvature-based object detection. Experimental results are promising compared to manual selection by experts, with no error for 56 CT images and a 3% error for 66 MR images.
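To make the two named steps concrete, here is a minimal 2D sketch (the paper works on 3D CT/MR data): build an edge map, then keep closed contours whose discrete curvature is high and nearly constant, i.e. small, roughly circular outlines such as fiducial cross-sections. The Canny thresholds, the Menger-curvature shortcut, and the constancy test are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def fiducial_candidates(slice_img, low=50, high=150, k=5,
                        min_curv=0.05, max_curv_std=0.02):
    """Step 1: edge map. Step 2: keep contours with high, nearly constant curvature."""
    edges = cv2.Canny(slice_img, low, high)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    candidates = []
    for c in contours:
        pts = c[:, 0, :].astype(float)
        if len(pts) < 2 * k + 1:
            continue
        # Menger curvature at each point from neighbours k samples away:
        # curvature = 4 * triangle_area / (product of the three side lengths).
        a, b, d = pts[:-2 * k], pts[k:-k], pts[2 * k:]
        cross = (b[:, 0] - a[:, 0]) * (d[:, 1] - a[:, 1]) - (b[:, 1] - a[:, 1]) * (d[:, 0] - a[:, 0])
        la = np.linalg.norm(b - a, axis=1)
        lb = np.linalg.norm(d - b, axis=1)
        lc = np.linalg.norm(d - a, axis=1)
        curv = 2 * np.abs(cross) / (la * lb * lc + 1e-8)
        if curv.mean() > min_curv and curv.std() < max_curv_std:
            candidates.append(c)
    return candidates
```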

12 citations


01 Jun 2006
TL;DR: CASMIL, as described in this paper, is a cost-effective and efficient approach to monitor and predict deformation during surgery, allowing accurate, real-time intra-operative information to be provided reliably to the surgeon.
Abstract: CASMIL aims to develop a cost-effective and efficient approach to monitor and predict deformation during surgery, allowing accurate, real-time intra-operative information to be provided reliably to the surgeon.

9 citations


Journal ArticleDOI
TL;DR: CASMIL aims to develop a cost-effective and efficient approach to monitor and predict deformation during surgery, allowing accurate, real-time intra-operative information to be provided reliably to the surgeon.
Abstract: Background: CASMIL aims to develop a cost-effective and efficient approach to monitor and predict deformation during surgery, allowing accurate, real-time intra-operative information to be provided reliably to the surgeon. Method: CASMIL is a comprehensive image-guided neurosurgery system with extensive novel features. It integrates various modules, including rigid and non-rigid body co-registration (image-image, image-atlas, and image-patient), automated 3D segmentation, a brain shift predictor, knowledge-based query tools, intelligent planning, and augmented reality. One of the vital and unique modules is the Intelligent Planning module, which displays the best surgical corridor on the computer screen based on tumor location, captured surgeon knowledge, and brain shift predicted with a patient-specific finite element model. It also uses multi-level parallel computing to provide near real-time interaction with iMRI (intra-operative MRI), and it has been securely web-enabled and optimized for remote web and PDA access. Results: A version of this system is being tested with real patient data and is expected to be in use in the operating room at the Detroit Medical Center in the first half of 2006.
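Purely to illustrate how such a module integration might be wired together, the sketch below registers the abstract's modules as steps in a hypothetical pipeline that passes a shared case state from one module to the next. The class, function names, and placeholder outputs are all invented for this example and are not part of CASMIL.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SurgicalPipeline:
    """Hypothetical registry of modules; each module maps case state -> case state."""
    modules: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[dict], dict]) -> None:
        self.modules[name] = fn

    def run(self, state: dict, order: List[str]) -> dict:
        for name in order:                 # run the named modules in sequence
            state = self.modules[name](state)
        return state

pipeline = SurgicalPipeline()
pipeline.register("coregistration", lambda s: {**s, "registered": True})
pipeline.register("segmentation",   lambda s: {**s, "tumor_mask": "3D mask"})
pipeline.register("brain_shift",    lambda s: {**s, "predicted_shift_mm": 2.5})
pipeline.register("planning",       lambda s: {**s, "corridor": "suggested surgical corridor"})

result = pipeline.run({"patient_id": "demo"},
                      ["coregistration", "segmentation", "brain_shift", "planning"])
```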

6 citations


Journal ArticleDOI
TL;DR: A new type of image feature is proposed, which consists of patterns of colors and intensities that capture the latent associations among images and primitive features in such a way that the noise and redundancy are eliminated.
Abstract: To reason about the meaning of an image, useful information should be provided with that image; however, images often contain little to no textual information about the objects they depict, which is precisely why there is a need for CBIR systems that exploit only the correlations present in the raw pixel data. In this paper, we propose a new type of image feature, which consists of patterns of colors and intensities that capture the latent associations among images and primitive features in such a way that noise and redundancy are eliminated. We introduce the synobin, a new term for the content-based image retrieval literature and the equivalent of a synonym in text retrieval, to name a bin that is synonymous with other bins of a color feature in the sense that they are used similarly across the image database. Formally, a group of synobins is given by the most important bins participating in a useful pattern, that is, the bins having the highest coefficients in the linear combination defining that pattern. Incorporating our feature model into a CBIR system moves image retrieval beyond simple matching of images based on their primitive features and creates a ground for learning image semantics from visual content. A system built on our feature model can learn associations not only between semantic concepts and images, but also between semantic concepts and patterns. We evaluated the performance of our system based on retrieval accuracy and on the perceptual similarity order among retrieved images. Compared to standard image retrieval methods, our preliminary results show that even when the feature space is reduced to only 3%-5% of the initial space, the accuracy and perceptual similarity of our system remain the same or improve, depending on the category of images.
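The abstract does not specify how the patterns and their bin coefficients are computed, so the sketch below only illustrates the idea with an LSA-style truncated SVD of the image-by-color-bin histogram matrix: each right singular vector plays the role of a pattern, and the bins with the largest coefficients in a pattern stand in for a synobin group. The factorization choice, pattern count, and variable names are assumptions.

```python
import numpy as np

def find_synobin_groups(histograms, n_patterns=10, top_bins=5):
    """LSA-style illustration: factor the (n_images x n_bins) histogram matrix,
    treat each right singular vector as a 'pattern', and take the bins with the
    largest absolute coefficients in each pattern as a synobin group."""
    H = histograms - histograms.mean(axis=0)       # center each color bin
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    patterns = Vt[:n_patterns]                     # (n_patterns, n_bins)
    synobin_groups = [np.argsort(-np.abs(p))[:top_bins] for p in patterns]
    reduced = H @ patterns.T                       # images projected onto the patterns
    return synobin_groups, reduced

# Illustrative usage: 200 images described by 256-bin color histograms.
rng = np.random.default_rng(1)
hists = rng.random((200, 256))
groups, reduced = find_synobin_groups(hists)
```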

4 citations


Proceedings ArticleDOI
07 May 2006
TL;DR: This method can be used to detect and locate objects in images in many different unconstrained environments, such as detecting vehicles or speed signs.
Abstract: This paper describes a method for detecting and locating objects in images. The presented approach relies on finding general landmark candidates (glc) in an image. A glc is a closed region that can be either an object landmark (ol) or a non-object landmark (nol). The ol's are then grouped into object clusters (oc's). Given a set of training images, the method builds a database of ol and nol regions and oc's. When a query image is presented, we detect all of the ol's and group them into oc candidates. These cluster candidates are then classified using the oc's from the training database. This method can be used to detect and locate objects in images in many different unconstrained environments, such as detecting vehicles or speed signs. The method is tested on two different cases and is shown to yield a high success rate.
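As a rough sketch of the pipeline just described (using the abstract's abbreviations), the code below classifies each candidate region as ol or nol, spatially groups the ol's into cluster candidates, and classifies each cluster against a trained model. The use of DBSCAN for grouping, the cluster features, and all names are assumptions; the paper does not specify these details.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(region_feats, region_centers, region_clf, cluster_clf, eps=40.0):
    """glc regions -> ol/nol labels -> spatial oc candidates -> classified clusters."""
    is_ol = region_clf.predict(region_feats) == 1          # 1 = object landmark (ol)
    ol_centers = region_centers[is_ol]
    if len(ol_centers) == 0:
        return []
    grouping = DBSCAN(eps=eps, min_samples=2).fit(ol_centers)
    detections = []
    for label in set(grouping.labels_) - {-1}:             # -1 is DBSCAN noise
        members = ol_centers[grouping.labels_ == label]
        # Assumed cluster descriptor: bounding-box size plus landmark count.
        feat = np.hstack([np.ptp(members, axis=0), [len(members)]])
        detections.append((members.mean(axis=0), cluster_clf.predict([feat])[0]))
    return detections

# region_clf and cluster_clf would be classifiers (e.g. k-NN) fit beforehand on the
# training database of ol/nol region features and object-cluster features.
```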