Author

Mansour Jamzad

Other affiliations: Waseda University
Bio: Mansour Jamzad is an academic researcher from Sharif University of Technology. The author has contributed to research in topics: Digital watermarking & Watermark. The author has an h-index of 21 and has co-authored 132 publications receiving 1,515 citations. Previous affiliations of Mansour Jamzad include Waseda University.


Papers
Journal Article
TL;DR: Gill color changes were more precise than eye color changes for evaluating fish freshness, but since imaging the gills is destructive, the color parameters of the eyes can be used as a green, low-cost, and easy method for fast, on-line assessment of fish freshness in the food industry.
Abstract: Fish freshness was evaluated with a machine vision technique through color changes of the eyes and gills of farmed and wild gilthead sea bream (Sparus aurata), using the lightness (L*), redness (a*), yellowness (b*), chroma (c*), and total color difference (ΔE) parameters during ice storage. A digital color imaging system, calibrated to provide accurate CIELAB color measurements, was employed to record the visual characteristics of the eyes and gills. The region of interest was selected automatically using a computer program developed in MATLAB. L*, b*, and ΔE of the eyes increased with storage time, while c* decreased; the a* parameter of the eyes did not show a clear trend with storage time. L*, b*, and ΔE of the gills increased with storage time, but a* and c* decreased. Regression analysis and artificial neural network approaches were used to correlate the eye and gill color parameters with storage time, and a strong correlation was found between the color parameters and storage day. Gill color changes were more precise than eye color changes for evaluating fish freshness. However, the gill cover must be removed to take the images, so the method is destructive and time-consuming. Therefore, the color parameters of fish eyes can be used as a green, low-cost, and easy method for fast, on-line assessment of fish freshness in the food industry.
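The pipeline above amounts to measuring CIELAB statistics inside a region of interest and tracking the CIE76 total color difference against a day-0 reference. Below is a minimal sketch of that computation (the paper used a calibrated imaging system and a program written in MATLAB; the scikit-image calls, file names, and ROI coordinates here are illustrative assumptions, not the authors' code).

```python
# Minimal sketch: mean CIELAB features of an ROI and CIE76 deltaE vs. a day-0 reference.
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2lab

def lab_features(image_path, roi):
    """Mean L*, a*, b* and chroma c* over a rectangular ROI (r0, r1, c0, c1)."""
    rgb = img_as_float(io.imread(image_path))   # H x W x 3, values scaled to [0, 1]
    lab = rgb2lab(rgb)                          # CIELAB under the D65 white point
    r0, r1, c0, c1 = roi
    patch = lab[r0:r1, c0:c1].reshape(-1, 3)
    L, a, b = patch.mean(axis=0)
    return L, a, b, np.hypot(a, b)              # chroma c* = sqrt(a*^2 + b*^2)

def delta_e(feat, ref):
    """CIE76 total color difference between two (L*, a*, b*, ...) tuples."""
    return float(np.linalg.norm(np.asarray(feat[:3]) - np.asarray(ref[:3])))

# Hypothetical usage: compare a day-7 eye image against the day-0 reference.
# day0 = lab_features("eye_day0.png", (100, 160, 120, 180))
# day7 = lab_features("eye_day7.png", (100, 160, 120, 180))
# print("deltaE =", delta_e(day7, day0))
```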

102 citations

Journal Article
TL;DR: A real-time method for extracting traffic parameters on highways, such as lane-change detection, vehicle classification, and vehicle counting, together with a real-time method for tracking multiple vehicles that is capable of detecting occlusions.
Abstract: Real-time road traffic monitoring is one of the challenging problems in machine vision, especially when commercially available PCs are used as the main processor. In this paper, we describe a real-time method for extracting several traffic parameters on highways, such as lane-change detection, vehicle classification, and vehicle counting. In addition, we explain a real-time method for tracking multiple vehicles that is capable of detecting occlusions. Our tracking algorithm uses Kalman filtering and background-differencing techniques, and we use morphological operations for vehicle contour extraction and recognition. Our algorithm has three phases: detection of pixels belonging to moving objects, detection of a "Shape of Interest" in frame sequences, and finally determination of the relations among objects across frame sequences. The system was implemented on a PC with a Pentium II 800 MHz CPU; its processing speed was measured at 11 frames per second, and the measurement accuracy was 96%.
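As a rough illustration of the ingredients named above (background differencing, morphological clean-up, and Kalman-filter tracking), the sketch below uses OpenCV; the MOG2 subtractor stands in for the paper's background model, and the thresholds, video file name, and track-association step are placeholders rather than the authors' implementation.

```python
# Sketch of a background-subtraction + morphology + Kalman pipeline (OpenCV 4 API).
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def make_kalman(cx, cy):
    """Constant-velocity Kalman filter for a blob centroid; state = (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

cap = cv2.VideoCapture("highway.avi")                        # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                   # moving-pixel detection
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                                       # candidate vehicle blobs
        if cv2.contourArea(c) < 400:                         # guessed minimum blob area
            continue
        x, y, w, h = cv2.boundingRect(c)
        # ...associate the centroid (x + w/2, y + h/2) with an existing Kalman
        # track (predict/correct), or call make_kalman() to start a new track.
```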

101 citations

Journal Article
TL;DR: A novel algorithm for estimating linear motion blur parameters, using the Radon transform to find the blur direction and bispectrum modeling to find the blur length.
Abstract: Motion blur is one of the most common blurs that degrade images. Restoration of such images is highly dependent on estimation of the motion blur parameters. Since 1976, many researchers have developed algorithms to estimate linear motion blur parameters; these algorithms differ in their performance, time complexity, precision, and robustness in noisy environments. In this paper, we present a novel algorithm to estimate linear motion blur parameters such as direction and length. We use the Radon transform to find the direction and bispectrum modeling to find the length of motion. Our algorithm is based on a combination of spatial and frequency domain analysis, and its great benefit is its robustness and precision on noisy images. We used statistical measures to verify the goodness of our model. The method was tested on 80 standard images degraded with different directions and motion lengths and with additive Gaussian noise. The average error of the estimated parameters was 0.9° in direction and 0.95 pixels in length, with standard deviations of 0.69 and 0.85, respectively.
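For intuition, linear motion blur leaves parallel ripples in the log power spectrum of the image, and the Radon transform can locate their orientation. The sketch below illustrates only that direction-estimation idea; it is not the paper's exact estimator, and the bispectrum-based length estimation is omitted.

```python
# Sketch: estimate a blur direction from the Radon transform of the log power spectrum.
import numpy as np
from skimage.transform import radon

def estimate_blur_direction(gray):
    """gray: 2-D float image. Returns an orientation estimate in degrees, in [0, 180)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    log_spec = np.log1p(spectrum)
    log_spec = (log_spec - log_spec.mean()) / (log_spec.std() + 1e-8)  # zero-mean, unit scale
    angles = np.arange(0.0, 180.0, 0.5)
    sinogram = radon(log_spec, theta=angles, circle=False)  # one projection per angle
    # Projecting along the spectral ripples concentrates their oscillation,
    # so the projection with the largest variance marks their orientation.
    return float(angles[np.argmax(sinogram.var(axis=0))])
```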

78 citations

Journal Article
TL;DR: A novel algorithm to estimate the direction and length of motion blur using the Radon transform and fuzzy set concepts; it performs highly satisfactorily down to low SNR values and tolerates lower SNR than other algorithms.
Abstract: Motion blur is one of the most common causes of image degradation. Restoration of such images is highly dependent on accurate estimation of the motion blur parameters. Many algorithms have been proposed to estimate these parameters; they differ in their performance, time complexity, precision, and robustness in noisy environments. In this paper, we present a novel algorithm to estimate the direction and length of motion blur using the Radon transform and fuzzy set concepts. The most important advantage of this algorithm is its robustness and precision on noisy images. The method was tested on a wide range of standard images degraded with different blur directions and motion lengths. The results showed that the method performs highly satisfactorily down to low SNR values and tolerates lower SNR than other algorithms.
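The fuzzy-set part can be pictured as combining several noisy candidate estimates through membership functions and then defuzzifying. The sketch below is purely illustrative (triangular memberships and a centroid defuzzifier are chosen here for simplicity) and is not the rule base used in the paper.

```python
# Illustrative fuzzy fusion of candidate blur-direction estimates (degrees).
import numpy as np

def triangular(x, centre, half_width):
    """Triangular membership function evaluated on a 1-D grid x."""
    return np.clip(1.0 - np.abs(x - centre) / half_width, 0.0, 1.0)

def fuse_directions(candidates, weights, half_width=5.0):
    """Aggregate weighted triangular memberships and return the centroid (defuzzified) angle."""
    grid = np.linspace(0.0, 180.0, 721)                  # 0.25-degree resolution
    membership = np.zeros_like(grid)
    for c, w in zip(candidates, weights):
        membership = np.maximum(membership, w * triangular(grid, c, half_width))
    return float(np.sum(grid * membership) / np.sum(membership))

# e.g. fuse_directions([42.0, 44.5, 43.0], [0.9, 0.6, 0.8]) -> roughly 43 degrees
```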

67 citations

Journal Article
TL;DR: A new SVM-based model-transferring method, the Heterogeneous Max-Margin Classifier Adaptation method (HMCA), in which a max-margin classifier trained on labeled target samples is adapted using the offset of the source classifier; the method can handle heterogeneous domains.
Abstract: In many real classification scenarios, the distribution of the test (target) domain differs from that of the training (source) domain. This distribution shift may cause the source classifier not to reach the expected accuracy on the target data. Domain adaptation has been introduced to solve the accuracy drop caused by the distribution shift between domains. In this paper, we study model-transferring methods as a practical branch of adaptation methods, which adapt the source classifier to new domains without using the source samples. We introduce a new SVM-based model-transferring method, in which a max-margin classifier is trained on labeled target samples and is adapted using the offset of the source classifier. We call it the Heterogeneous Max-Margin Classifier Adaptation method, abbreviated HMCA. The main strength of HMCA is its applicability to heterogeneous domains, where the source and target domains may have different feature types. This property is important because previously proposed model-transferring methods do not provide any solution for heterogeneous problems. We also introduce a new similarity metric that reliably measures the adaptability between two domains according to the HMCA structure. When several source classifiers are available, the metric can be used to select the most appropriate one for adaptation. We test HMCA on two different computer vision problems (pedestrian detection and image classification); the experimental results show an accuracy advantage for our approach over several baselines. Highlights: We propose a new SVM-based model-transferring method for adaptation. Our method applies adaptation in the one-dimensional discrimination space. The proposed method can handle heterogeneous domains. Based on the proposed model-transferring method, we design a new metric for measuring the adaptability between two domains.
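As a very loose illustration of offset-based model transfer (not the HMCA formulation itself), the sketch below trains a max-margin classifier on a handful of labeled target samples with scikit-learn and blends its bias term with the scalar offset of a source classifier; the blending weight and helper names are hypothetical.

```python
# Loose sketch: reuse only the scalar offset of a source SVM when adapting a target SVM.
import numpy as np
from sklearn.svm import LinearSVC

def offset_adapted_svm(X_target, y_target, source_offset, blend=0.5):
    """Train a linear max-margin classifier on a few labeled target samples,
    then shift its bias toward the source classifier's offset. Because only a
    scalar is transferred, the source and target feature spaces may differ."""
    clf = LinearSVC(C=1.0).fit(X_target, y_target)
    clf.intercept_ = (1.0 - blend) * clf.intercept_ + blend * np.atleast_1d(source_offset)
    return clf

# Hypothetical usage, with source_svm trained elsewhere on source-domain features:
# adapted = offset_adapted_svm(X_target_few, y_target_few, source_svm.intercept_[0])
```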

51 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either have practical significance or are of theoretical importance, and it describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Readers learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book includes essential topics that either have practical significance or are of theoretical importance, and topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries, and many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Journal Article
TL;DR: This work treats the problem of approximating a given function, covering the existence and uniqueness of best approximations and the approximation operators and algorithms used to compute them.
Abstract: Contents: Preface; 1. The approximation problem and existence of best approximations; 2. The uniqueness of best approximations; 3. Approximation operators and some approximating functions; 4. Polynomial interpolation; 5. Divided differences; 6. The uniform convergence of polynomial approximations; 7. The theory of minimax approximation; 8. The exchange algorithm; 9. The convergence of the exchange algorithm; 10. Rational approximation by the exchange algorithm; 11. Least squares approximation; 12. Properties of orthogonal polynomials; 13. Approximation of periodic functions; 14. The theory of best L1 approximation; 15. An example of L1 approximation and the discrete case; 16. The order of convergence of polynomial approximations; 17. The uniform boundedness theorem; 18. Interpolation by piecewise polynomials; 19. B-splines; 20. Convergence properties of spline approximations; 21. Knot positions and the calculation of spline approximations; 22. The Peano kernel theorem; 23. Natural and perfect splines; 24. Optimal interpolation; Appendices; Index.

841 citations

Journal Article
TL;DR: A comprehensive review of state-of-the-art computer vision techniques for traffic video, with a critical analysis and an outlook on future research directions.
Abstract: Automatic video analysis from urban surveillance cameras is a fast-emerging field based on computer vision techniques. We present here a comprehensive review of the state of the art in computer vision for traffic video, with a critical analysis and an outlook on future research directions. This field is of increasing relevance for intelligent transport systems (ITSs). The decreasing cost of hardware and, therefore, the increasing deployment of cameras have opened a wide application field for video analytics. Several monitoring objectives, such as congestion, traffic rule violation, and vehicle interaction, can be targeted using cameras that were originally installed for human operators. Systems for the detection and classification of vehicles on highways have for some time successfully used classical visual surveillance techniques such as background estimation and motion tracking. The urban domain is more challenging with respect to traffic density, lower camera angles that lead to a high degree of occlusion, and the variety of road users. Methods from object categorization and 3-D modeling have inspired more advanced techniques to tackle these challenges. There is no commonly used data set or benchmark challenge, which makes direct comparison of the proposed algorithms difficult. In addition, evaluation under challenging weather conditions (e.g., rain, fog, and darkness) would be desirable but is rarely performed. Future work should be directed toward robust combined detectors and classifiers for all road users, with a focus on realistic conditions during evaluation.

579 citations

Journal Article
Yanghao Li, Naiyan Wang, Jianping Shi, Xiaodi Hou, Jiaying Liu
TL;DR: This paper proposes a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN, and demonstrates that the method is complementary to other existing methods and may further improve model performance.
Abstract: Deep neural networks (DNNs) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance that, during the training phase, one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. A recent study (Tommasi et al., 2015) shows that a DNN has a strong dependency on the training dataset, and the learned features cannot easily be transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods; combining AdaBN with existing domain adaptation treatments may further improve model performance.
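Since AdaBN amounts to re-estimating the Batch Normalization statistics on target-domain data while keeping all learned weights fixed, a minimal PyTorch sketch looks as follows; `model` and `target_loader` are placeholders, and this is an illustration of the idea rather than the authors' released code.

```python
# Minimal AdaBN-style sketch: recompute BN running statistics on unlabeled target batches.
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_batchnorm_stats(model, target_loader, device="cpu"):
    model.to(device).train()                  # BN layers update running stats in train mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()           # forget the source-domain statistics
            m.momentum = None                 # switch to a cumulative moving average
    for x, *_ in target_loader:               # labels, if present, are ignored
        model(x.to(device))                   # forward pass only; no gradient step
    model.eval()
    return model
```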

453 citations