
Showing papers in "International journal of imaging and robotics in 2009"


Journal Article
TL;DR: The proposed computer vision-based approach for automatically detecting the presence of fire in video sequences is effective in detecting all types of uncontrolled fire in various situations, lighting conditions, and environments, and performs better than the peer system, with higher true positives and true negatives and lower false positives and false negatives.
Abstract: This paper presents a computer vision-based approach for automatically detecting the presence of fire in video sequences. The algorithm not only uses the color and movement attributes of fire, but also analyzes the temporal variation of fire intensity, the spatial color variation of fire, and the tendency of fire to be grouped around a central point. A cumulative time derivative matrix is used to detect areas with a high-frequency luminance flicker. The fire color of each frame is aggregated in a cumulative fire color matrix using a new color model that considers both the pigmentation values of the RGB color space and the saturation and intensity properties of the HSV color space. A region merging algorithm is then applied to merge nearby fire-colored moving regions and eliminate false positives. The spatial and temporal color variations are finally applied to detect fires. Our extensive experimental results demonstrate that the proposed system is effective in detecting all types of uncontrolled fire in various situations, lighting conditions, and environments. It also performs better than the peer system, with higher true positives and true negatives and lower false positives and false negatives.

65 citations
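The fire-color rule described in the abstract (RGB pigmentation combined with HSV saturation and intensity) can be sketched as a per-pixel test. The paper does not publish its exact thresholds, so the ordering rule and the HSV cut-offs below are illustrative assumptions, not the authors' model:

```python
import colorsys

def is_fire_pixel(r, g, b, sat_min=0.2, val_min=0.4):
    """Hypothetical per-pixel fire-color test: the classic RGB
    ordering rule (R >= G > B) combined with HSV saturation and
    value constraints. Thresholds are illustrative assumptions,
    not the paper's published values."""
    if not (r >= g > b):                 # fire pigmentation ordering
        return False
    _, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return s >= sat_min and v >= val_min
```

Under these assumed thresholds, a bright orange pixel such as (255, 160, 40) passes the test, while a grey pixel fails the ordering rule and a very dark red fails the value constraint.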


Journal Article
TL;DR: A threshold toxic concentration was identified for all NPs, beyond which no cytotoxic effects were detectable by standardized tests; nevertheless, cytoplasmatic and nuclear translocation was observed and also verified during the mitotic phase.
Abstract: Nanotechnologies may change many sectors of industry for the better, but considerable concern is arising about their side effects and possible risks to human life. The potential toxicity of nanoparticles (NPs) to cells has to be investigated much more thoroughly than has been done to date to define their future role in biological, medical and environmental applications. The present study performed in-vitro standardized cytotoxicity tests using Hematite, Magnetite and Valentinite nanoparticles with 3T3 cells. Biological (XTT and Brd-U assays), morphological (ESEM and TEM) and physical (EDS and x-ray diffraction) investigations were performed to evaluate cell-nanoparticle interaction and the physical state after interaction. The results identified a threshold toxic concentration for all NPs, beyond which no cytotoxic effects were detectable by standardized tests. Notwithstanding these results, cytoplasmatic and nuclear translocation was observed and also verified during the mitotic phase. The limits of the standardized tests are analyzed and discussed.

57 citations


Journal Article
TL;DR: A novel technique for image retrieval using color-texture features extracted from images via color indexing with vector quantization, which gives better discrimination capability for CBIR.
Abstract: Image retrieval has become an imperative area of research because of the wide range of applications that need image data search facilities. Most research approaches in the area are either database-based indexing or image-processing-based CBIR. The need of the hour is to combine these parallel lines of research to obtain better image retrieval techniques. The paper proposes a novel technique for image retrieval using color-texture features extracted from images based on color indexing with vector quantization, which gives better discrimination capability for CBIR. Here we divide each database image into 2x2 pixel windows to obtain 12 color descriptors (per-pixel Red, Green and Blue) per row of the window table. Then Kekre's Median Codebook Generation (KMCG) is applied to the window table to get 256 centre rows. The DCT is applied to this centre row vector to obtain a feature set of size 256x12, which is used for image retrieval. The method takes fewer computations compared to the conventional DCT applied to the complete image, and it gives the color-texture features of the image database at a reduced feature set size.

50 citations
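The window-table construction described above (2x2 pixel windows, each flattened to 12 color descriptors) can be sketched as below. The KMCG codebook generation and the DCT stages are omitted; this shows only the first step:

```python
import numpy as np

def window_rows(img):
    """Split an HxWx3 image into non-overlapping 2x2 windows and
    flatten each into a 12-value row (R, G, B of the 4 pixels),
    as in the window-table step of the abstract above."""
    h, w, _ = img.shape
    h, w = h - h % 2, w - w % 2                    # drop odd remainders
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2, 3)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(-1, 12)

# An 8x8 RGB image yields 16 windows, i.e. 16 rows of 12 descriptors.
img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
rows = window_rows(img)
```

In the paper, KMCG would then reduce this table to 256 centre rows before the DCT is taken.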


Journal Article
TL;DR: Panorama making using fusion of partial image pieces of a desired view refers to the transformation and fusion of multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas.
Abstract: Panorama making using fusion of partial image pieces of a desired view has been an active area of research in recent years. Image panoramas primarily aim to enhance the field of view. Image fusion plays an important role in creating full-view panoramic mosaics from sequences of smaller picture parts. Each smaller part is stitched together to get the panorama. Image stitching is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to the transformation and fusion of multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The important step in making panoramas is automatic estimation of the overlap. The overlap is the common region in consecutive picture parts, and its boundary indicates where one image ends and the other begins. These images should be combined in such a way that the final image does not have any spurious artificial edges. The partial images may differ in size and brightness. Panorama making fuses image parts together by transforming all of them to the same row size and then blends them together to minimize the brightness differences.

42 citations
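The blending step that minimizes brightness differences across the overlap can be sketched as simple linear feathering. The paper's exact blending function is not specified, so this is a generic illustration, not the authors' method:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two equal-height strips whose last/first `overlap`
    columns cover the same scene, ramping the weights linearly
    across the overlap so no hard seam appears (simple feathering;
    illustrative only)."""
    w = np.linspace(1.0, 0.0, overlap)             # weight of the left strip
    mixed = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], mixed, right[:, overlap:]])

a = np.full((2, 4), 100.0)                         # brighter strip
b = np.full((2, 4), 50.0)                          # darker strip
pano = feather_blend(a, b, 2)                      # 6-column mosaic
```

Within the overlap, the output ramps from the left strip's brightness toward the right strip's instead of jumping, which is exactly the seam-suppression the abstract describes.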


Journal Article
TL;DR: The presented technique, using simple features and SVM and ELM classifiers, is effective in the recognition of handwritten Arabic (Indian) numerals, and is shown to be superior to HMM and NM classifiers for all digits.
Abstract: This paper describes a technique using Support Vector (SVM) and Extreme Learning Machines (ELM) for automatic recognition of off-line handwritten Arabic (Indian) numerals. The features of angle, distance, horizontal, and vertical span are extracted from these numerals. The database has 44 writers with 48 samples of each digit, totaling 21120 samples. A two-stage exhaustive parameter estimation technique is used to estimate the best values for the SVM parameters for this application. For SVM parameter estimation, the database is split into 4 subsets: three were used in training and validation in turn, and the fourth for testing. The SVM and ELM classifiers were trained with 75% of the data (i.e. the first 33 writers) and tested with the remaining data (i.e. writers 34 to 44) using the estimated parameters. The recognition rates of the SVM and ELM classifiers at the digit and writer levels are compared. The training and testing times of SVM and ELM indicate that ELM is much faster to train and test than SVM. The classification errors are analyzed and categorized. The recognition rates of the SVM and ELM classifiers are compared with Hidden Markov Model (HMM) and Nearest Mean (NM) classifiers. Using the SVM, ELM, HMM and NM classifiers, the achieved average recognition rates are 99.39%, 99.45%, 97.99% and 94.35%, respectively. The ELM and SVM recognition rates are better than HMM and NM for all digits. The presented technique, using simple features and SVM and ELM classifiers, is effective in the recognition of handwritten Arabic (Indian) numerals, and it is shown to be superior to the HMM and NM classifiers for all digits.

23 citations
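The speed advantage of ELM reported above comes from its training procedure: random hidden-layer weights plus a single least-squares solve, with no iterative optimization. A minimal sketch on a toy problem (not the paper's features, sample counts, or parameters) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    """Minimal Extreme Learning Machine: random input weights and
    biases, sigmoid hidden layer, output weights solved by least
    squares -- the reason ELM trains much faster than SVM."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy two-class problem standing in for the digit features.
X = rng.standard_normal((200, 4))
y = (X[:, 0] > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = float(np.mean((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5)))
```

The single `lstsq` call replaces the per-parameter grid search and iterative training an SVM needs, which matches the timing comparison in the abstract.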


Journal Article
TL;DR: This work presents a new method to segment images of alive and dead spermatozoa in positive phase contrast by applying an intelligent thresholding segmentation that changes the threshold value when the binary image obtained does not fulfill the surface and eccentricity criteria.
Abstract: This work presents a new method to segment images of alive and dead spermatozoa in positive phase contrast. The method improves on previous segmentation methods by applying an intelligent threshold combined with watershed segmentation. First, it applies an intelligent thresholding segmentation that changes the threshold value when the binary image obtained does not fulfill the surface and eccentricity criteria. Then, using the same automatic criteria, the badly segmented images are processed by means of the watershed transform. Using this new method, 90.96% of the spermatozoa have been correctly segmented. This approach could be useful for commercial Computer Assisted Semen Analysis systems that need new and more accurate segmentation processes.

20 citations
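The "intelligent threshold" idea (adjust the threshold until the binary image satisfies shape criteria) can be sketched as a simple loop. The paper also checks eccentricity; only the surface (area) criterion is used here for brevity, and all limits are illustrative:

```python
import numpy as np

def adaptive_threshold(img, area_min, area_max, step=5, t0=128):
    """Sketch of an intelligent-threshold loop: lower the threshold
    until the foreground area falls inside an accepted range. The
    starting value, step, and area limits are illustrative, not the
    paper's; the real method also checks eccentricity."""
    t = t0
    while t > 0:
        mask = img > t
        if area_min <= int(mask.sum()) <= area_max:
            return t, mask                 # criteria fulfilled
        t -= step                          # otherwise retry lower
    return 0, img > 0                      # fallback: everything

img = np.zeros((20, 20))
img[5:10, 5:10] = 200                      # one bright 25-pixel object
t, mask = adaptive_threshold(img, area_min=20, area_max=30)
```

In the paper, images whose masks never satisfy the criteria would then be handed to the watershed transform instead.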


Journal Article
TL;DR: This paper presents a new fast codebook search algorithm that uses sorting and a centroid technique to find the closest codevector in the codebook, and uses the mean absolute error as the quality factor.
Abstract: Vector Quantization (VQ) is an efficient technique for data compression and has been successfully used in various applications. In this paper we present a new fast codebook search algorithm that uses sorting and a centroid technique to find the closest codevector in the codebook. The proposed search algorithm is faster since it reduces the number of Euclidean distance computations compared to the exhaustive search algorithm, while keeping the image quality imperceptibly close to that of exhaustive search. We have used the mean absolute error as the quality factor since it gives a better feel for the distortion. The proposed algorithm is also compared with other codebook search algorithms from the literature, and its performance parameters (average execution time and average number of Euclidean distance computations per image training vector) are considerably better than most of them.

20 citations
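One common way to combine sorting with a centroid signature, in the spirit of the abstract above, is to order codevectors by their component mean and binary-search the query's mean, then compute full distances only in a small window. This is a generic sorted-centroid search, not necessarily the paper's exact algorithm:

```python
import numpy as np

def build_index(codebook):
    """Sort codevectors by their component mean (a scalar 'centroid'
    signature); vectors with similar means are likely neighbours."""
    means = codebook.mean(axis=1)
    order = np.argsort(means)
    return codebook[order], means[order]

def fast_search(x, sorted_cb, sorted_means, window=32):
    """Binary-search the query's mean in the sorted list, then test
    only `window` codevectors on each side instead of the whole
    codebook, cutting the number of distance computations."""
    i = int(np.searchsorted(sorted_means, x.mean()))
    lo, hi = max(0, i - window), min(len(sorted_cb), i + window)
    d = ((sorted_cb[lo:hi] - x) ** 2).sum(axis=1)   # Euclidean^2
    return lo + int(np.argmin(d))

rng = np.random.default_rng(1)
scb, sm = build_index(rng.random((256, 12)))
x = scb[40] + 0.001                  # query near sorted codevector 40
best = fast_search(x, scb, sm)       # examines at most 64 of 256
```

Compared with exhaustive search over all 256 codevectors, the window bounds the per-query distance computations, which is the speedup the paper measures.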


Journal Article
TL;DR: A new highly accurate three-dimensional marker-based tracking (MBT) method to estimate in-vivo arthrokinematics of high-speed sequences of biplane radiographs from a new DRSA system is presented here.
Abstract: Conventional motion measurement techniques are unable to measure knee arthrokinematics during dynamic knee motion to clinically significant levels of accuracy. Assessment with Biplane Dynamic Roentgen Stereogrammetric Analysis (DRSA) overcomes the problem, but previous attempts report challenges with motion blur, dynamic accuracy, image quality, excessive radiation exposure, motion tracking artifacts and computational load. The quality of measurement is also affected by image occlusion from body segments appearing synchronously in the imaging volume due to the small field of view. These disadvantages translate into reduced accuracy and excessive labor to export the kinematics. A new highly accurate three-dimensional marker-based tracking (MBT) method to estimate in-vivo arthrokinematics from high-speed sequences of biplane radiographs acquired with a new DRSA system is presented here. Data acquired with imaging phantoms and patients (with embedded tantalum bone markers) moving at very high speeds were analyzed for static and dynamic errors. The combination of the new DRSA instrumentation and the MBT method increases accuracy (average dynamic error: ±0.1 mm) without loss of information and significantly reduces patient radiation exposure and the time to export joint kinematics. The method is effective with high-speed motion data acquired at much lower (reduced by 70%) radiation exposure, without information loss from motion-blurring effects. Dynamic errors were greatly reduced with increasing image resolution and acquisition rate. The interplay between accuracy, brightness, contrast, exposure and resolution for the bone markers is demonstrated for high-speed movement. The method is one order of magnitude more accurate than conventional motion analysis techniques in tracking high-speed arthrokinematics.

7 citations
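At the core of any biplane marker-based tracking method is recovering a marker's 3-D position from its two 2-D radiographic projections. A generic linear (DLT) triangulation step, not the paper's full pipeline, can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one marker from its 2-D
    projections u1, u2 in two calibrated views with 3x4 projection
    matrices P1, P2: the null vector of the 4x4 system gives the
    homogeneous 3-D point. Generic stereo step, illustrative only."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two orthogonal pinhole views observing the marker at (1, 2, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
P2 = np.hstack([R, np.array([[0.], [0.], [10.]])])
X_true = np.array([1.0, 2.0, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the reconstruction is exact; the sub-millimeter dynamic errors reported above come from calibration, blur, and detection noise on top of this geometric step.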


Journal Article
TL;DR: A new dynamic pattern based search technique is proposed whose size is variable for each macroblock (MB) depending on the motion information of its spatially as well as temporally adjacent blocks.
Abstract: It has been a challenge for block matching motion estimation algorithms to reduce search-time computation while keeping degradation as low as possible. Faster diamond search and hexagonal search patterns have been introduced to reduce search-time complexity. However, in these techniques the search pattern size is fixed, which may cause oversearch or undersearch depending on the magnitude of motion. In this paper, a new dynamic pattern based search technique is proposed whose size is variable for each macroblock (MB), depending on the motion information of its spatially as well as temporally adjacent blocks. Experimental analysis shows an average speedup of nearly 39% and 104% with respect to hexagonal search (HS) and diamond search (DS) respectively, while keeping almost the same quality in terms of average PSNR (dB).

2 citations
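The pattern-size decision described above (pick a search pattern per macroblock from neighbouring motion) can be sketched as a small heuristic. The cutoff and radii below are illustrative assumptions, not the paper's values:

```python
def pattern_size(neighbor_mvs, small=1, large=2, cutoff=2):
    """Choose the search-pattern radius for a macroblock from the
    motion vectors of its spatially/temporally adjacent blocks:
    little neighbour motion -> small pattern (avoids oversearch),
    large motion -> large pattern (avoids undersearch). Cutoff and
    radii are illustrative, not the paper's values."""
    if not neighbor_mvs:
        return small                      # no context: stay cheap
    mags = [abs(dx) + abs(dy) for dx, dy in neighbor_mvs]
    return large if max(mags) > cutoff else small
```

For example, neighbours with vectors (0, 1) and (1, 0) suggest a small pattern, while a neighbour moving by (4, 3) triggers the larger one, which is how the fixed-size limitation of HS and DS is avoided.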


Journal Article
TL;DR: A novel iris recognition algorithm that divides the normalized iris image into eight regions, which prevents noise localized in some iris subparts from corrupting the whole iris features and reduces the effect of noise due to noncooperative behavior in an authentication system.
Abstract: A noncooperative environment increases the probability of capturing heterogeneous images (regarding focus, contrast, or brightness) with several noise factors such as iris obstructions and reflections. This paper presents a novel iris recognition algorithm suitable for such an uncontrolled environment. The proposed method divides the normalized iris image into eight regions, which prevents noise localized in some of the iris subparts from corrupting the whole iris features and reduces the effect of noise due to noncooperative behavior in an authentication system. Rotation-invariant features are obtained from the decomposed directional subband coefficients of each normalized iris image block using optimal projection analysis. A fusion algorithm combines the matching scores from the individual subimages based on a quality measure, improving the performance. Experimental results show a substantial decrease in false rejection rates in the recognition of noisy iris images.

1 citation
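The quality-based fusion of the eight per-region matching scores can be sketched as a weighted average, so that a noisy or occluded region no longer corrupts the overall score. The weighting scheme is an illustrative assumption, not the paper's formula:

```python
def fuse_scores(scores, qualities):
    """Quality-weighted fusion of per-region matching scores:
    regions judged noisier (lower quality) get lower weight.
    Illustrative weighting, not the paper's exact rule."""
    total_q = sum(qualities)
    if total_q == 0:
        return 0.0
    return sum(s * q for s, q in zip(scores, qualities)) / total_q

# An occluded region (quality 0) no longer drags the score down.
clean = fuse_scores([0.9] * 8, [1.0] * 8)
one_bad = fuse_scores([0.9] * 7 + [0.1], [1.0] * 7 + [0.0])
```

Both calls return 0.9: zeroing the bad region's weight recovers the clean score, which is the robustness the eight-region division is after.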


Journal Article
TL;DR: The fusion of these two template matching methods can well deal with the problems in template matching such as template drifting, shape rotation, appearance changes, occluded object tracking or environmental lighting condition changes.
Abstract: In this paper a Bayesian probability hybrid based template matching method is proposed for target tracking. Two different template matching methods are weighted by their matching probabilities and then combined through the Bayesian theory to give a final robust template updating and matching. Here the matching probability for each method is assigned with a Gaussian Probability Distribution Function (PDF). Then the template's best matched region in the image is estimated with the maximum likelihood algorithm from the joint distribution of these two template matching PDFs. In this paper the first method is the commonly used Sum of the Squared Errors (SSE) template matching method. The second one is the Gaussian Mixture Models (GMMs) method, which is used to represent the template's appearance features. With the fusion of these two template matching methods, the algorithm in this paper can well deal with the problems in template matching such as template drifting, shape rotation, appearance changes, occluded object tracking or environmental lighting condition changes.
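The maximum-likelihood combination of the two Gaussian matching PDFs can be sketched as a product of likelihoods, one per method, maximized over candidate regions. The sigma values are illustrative stand-ins for the per-method matching-probability widths:

```python
import math

def fused_likelihood(err_sse, err_gmm, sigma_sse=1.0, sigma_gmm=1.0):
    """Product of two Gaussian likelihoods, one for the SSE matching
    error and one for the GMM appearance error (independence
    assumed); the best match maximises the product. Sigmas are
    illustrative, not the paper's values."""
    p_sse = math.exp(-err_sse ** 2 / (2 * sigma_sse ** 2))
    p_gmm = math.exp(-err_gmm ** 2 / (2 * sigma_gmm ** 2))
    return p_sse * p_gmm

# Candidate regions as (SSE error, GMM error): the region that is
# reasonably good for BOTH methods beats one-sided matches.
candidates = [(0.2, 1.5), (0.3, 0.3), (1.4, 0.1)]
best = max(range(len(candidates)),
           key=lambda i: fused_likelihood(*candidates[i]))
```

Here the middle candidate wins even though each of the other two is better under a single method, which is the robustness argument for the fusion.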

Journal Article
TL;DR: Two algorithms for image segmentation through the mean shift are presented; through extensive experimentation using standard images, more homogeneous images were obtained by using the sup norm.
Abstract: A comparison between two algorithms for image segmentation through the mean shift is presented. These algorithms recursively apply mean shift filtering using the l-2 and sup norms to define pixel neighborhoods. In this work, through extensive experimentation with standard images, more homogeneous images were obtained by using the sup norm.
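The only difference between the two variants compared above is how a pixel's neighborhood is defined. The two definitions can be sketched directly; for the same radius, the sup-norm (Chebyshev) neighborhood is a square that contains the l-2 (Euclidean) disk:

```python
import numpy as np

def neighborhood(shape, center, radius, norm):
    """Boolean mask of pixels within `radius` of `center` under the
    l-2 (Euclidean) or sup (Chebyshev) norm -- the two neighborhood
    definitions the comparison above is about."""
    yy, xx = np.indices(shape)
    dy, dx = yy - center[0], xx - center[1]
    if norm == "l2":
        return dy ** 2 + dx ** 2 <= radius ** 2          # disk
    return np.maximum(np.abs(dy), np.abs(dx)) <= radius  # square

disk = neighborhood((7, 7), (3, 3), 2, "l2")
square = neighborhood((7, 7), (3, 3), 2, "sup")
```

With radius 2 the disk covers 13 pixels and the square 25, so the sup-norm variant averages over more neighbours per mean shift step, consistent with its more homogeneous output.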

Journal Article
TL;DR: In this article, an iteratively regularized Gauss-Newton (IRGN) algorithm for non-linear ill-posed problem of conductance imaging has been proposed to recover a spatially varying conductivity from boundary measurement.
Abstract: The problem of conductance imaging (a.k.a. impedance tomography) is to recover a spatially varying conductivity from boundary measurements; this is an exponentially nonlinear ill-posed problem. In this work, we investigate a two-dimensional inverse problem in conductance imaging using the iteratively regularized Gauss-Newton (IRGN) algorithm for nonlinear ill-posed problems. We demonstrate the efficacy of the IRGN algorithm by reconstructing the conductivity parameter relevant to the inverse problem of conductivity imaging. The complete electrode model is used for the forward problem, which is the common model in biomedical/biophysics applications.
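The IRGN update for a nonlinear problem F(x) = y can be sketched generically: each step solves a Tikhonov-regularized linearization, x+ = x + (J'J + aI)^(-1)(J'(y - F(x)) - a(x - x0)), with the regularization parameter a shrinking between iterations. The conductance-imaging forward model itself is not implemented here; a toy componentwise-square problem stands in for it:

```python
import numpy as np

def irgn_step(F, J, x, x0, y, alpha):
    """One iteratively regularized Gauss-Newton update for F(x)=y:
    the Tikhonov term alpha*(x - x0) stabilizes the ill-posed
    linearized solve, and alpha is decreased between iterations."""
    Jx = J(x)
    A = Jx.T @ Jx + alpha * np.eye(len(x))
    rhs = Jx.T @ (y - F(x)) - alpha * (x - x0)
    return x + np.linalg.solve(A, rhs)

# Toy stand-in for the forward model: F(x) = x**2 componentwise,
# with true parameter (2, 3); NOT the complete electrode model.
F = lambda x: x ** 2
J = lambda x: np.diag(2 * x)
y = np.array([4.0, 9.0])
x0 = np.array([1.0, 1.0])
x, alpha = x0.copy(), 1.0
for _ in range(30):
    x = irgn_step(F, J, x, x0, y, alpha)
    alpha *= 0.7                      # geometric regularization decay
```

As alpha decays, the iteration transitions from a heavily regularized step toward plain Gauss-Newton, which is the mechanism that makes IRGN suitable for ill-posed problems like conductivity reconstruction.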