Author

Hadi Sadoghi Yazdi

Other affiliations: Tarbiat Modares University
Bio: Hadi Sadoghi Yazdi is an academic researcher at Ferdowsi University of Mashhad. The author has contributed to research in the topics of support vector machines and fuzzy logic. The author has an h-index of 18 and has co-authored 192 publications receiving 1477 citations. Previous affiliations of Hadi Sadoghi Yazdi include Tarbiat Modares University.


Papers
Proceedings ArticleDOI
08 Dec 2008
TL;DR: This paper proposes a novel approach to human fall detection based on a combination of integrated time motion images and the eigenspace technique, and considers a wide range of motions comprising normal daily activities, abnormal behaviors, and unusual events.
Abstract: Falls are a major health hazard for the elderly and a serious obstacle to independent living. Since falling has dramatic physical and psychological consequences, developing intelligent video surveillance systems that provide safe environments is important. To this end, this paper proposes a novel approach to human fall detection based on a combination of integrated time motion images and the eigenspace technique. An integrated time motion image (ITMI) is a type of spatio-temporal database that includes motion and the time of motion occurrence. Applying the eigenspace technique to ITMIs extracts eigen-motions, and an MLP neural network is then used for precise classification of motions and determination of fall events. Unlike existing fall detection systems, which deal only with limited movement patterns, we consider a wide range of motions comprising normal daily activities, abnormal behaviors, and unusual events. The reliable recognition rate in our experiments underlines the satisfactory performance of the system.
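The eigenspace step described above amounts to principal component analysis over flattened motion images. A minimal sketch, assuming each ITMI is flattened into a row vector (the names `itmi_stack` and `eigen_motions` are illustrative, not from the paper, and random data stands in for real motion images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for ITMIs: 20 motion images of 8x8 pixels, flattened.
itmi_stack = rng.random((20, 8 * 8))

# Center the data and take the top-k principal components ("eigen-motions").
mean_itmi = itmi_stack.mean(axis=0)
centered = itmi_stack - mean_itmi
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
eigen_motions = vt[:k]  # (k, 64) orthonormal basis of the motion space

# Project a motion image onto the eigen-motion space; the resulting
# k-dimensional feature vector is what would feed the MLP classifier.
features = (itmi_stack[0] - mean_itmi) @ eigen_motions.T
print(features.shape)  # (5,)
```

Projecting onto a handful of eigen-motions reduces each motion image to a short feature vector, which keeps the downstream MLP small.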

74 citations

Proceedings Article
14 Feb 2007
TL;DR: A fuzzy hybrid learning algorithm (FHLA) for the radial basis function neural network (RBFNN) which determines the number of hidden neurons in the RBFNN structure by using cluster validity indices with majority rule while the characteristics of the hidden neurons are initialized based on advanced fuzzy clustering.
Abstract: This paper presents a fuzzy hybrid learning algorithm (FHLA) for the radial basis function neural network (RBFNN). The method determines the number of hidden neurons in the RBFNN structure by using cluster validity indices with majority rule while the characteristics of the hidden neurons are initialized based on advanced fuzzy clustering. The FHLA combines the gradient method and the linear least-squared method for adjusting the RBF parameters and the neural network connection weights. The RBFNN with the proposed FHLA is used as a classifier in a face recognition system. The inputs to the RBFNN are the feature vectors obtained by combining shape information and principal component analysis. The designed RBFNN with the proposed FHLA, while providing a faster convergence in the training phase, requires a hidden layer with fewer neurons and less sensitivity to the training and testing patterns. The efficiency of the proposed method is demonstrated on the ORL and Yale face databases, and comparison with other algorithms indicates that the FHLA yields excellent recognition rate in human face recognition.
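The linear least-squares step of the FHLA can be illustrated on a toy RBF network. This is a hedged sketch: the centers here are chosen by random sampling purely for brevity, whereas the paper initializes them with advanced fuzzy clustering and selects their number via cluster validity indices; the data is synthetic, not face features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data (a stand-in for face feature vectors).
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

# Hidden layer: centers picked at random here; the FHLA would instead
# initialize them from fuzzy clustering with a validity-index-chosen count.
centers = X[rng.choice(len(X), size=6, replace=False)]
sigma = 2.0

def rbf_features(X, centers, sigma):
    """Gaussian activation of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Output weights by linear least squares, mirroring the FHLA's linear step.
Phi = rbf_features(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

preds = (rbf_features(X, centers, sigma) @ w > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Solving the output layer in closed form is what makes RBF training fast; the gradient method in the FHLA then fine-tunes centers and widths.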

67 citations

Journal ArticleDOI
TL;DR: An online neural network (NN) model composed of two parts for handling concept drift and class imbalance: drift is handled with a forgetting function, and class imbalance with an error function that assigns different importance to errors in different classes.
Abstract: “Concept drift” and class imbalance are two challenges for supervised classifiers. “Concept drift” (or non-stationarity) refers to changes in the underlying function being learnt, and class imbalance is a vast difference between the numbers of instances in different classes of data. Class imbalance is an obstacle to the efficiency of most classifiers. Previous methods for classifying non-stationary and imbalanced data streams mainly focus on batch solutions, in which the classification model is trained using a chunk of data. Here, we propose an online neural network (NN) model. The NN model is composed of two parts for handling concept drift and class imbalance. Concept drift is handled with a forgetting function, and class imbalance is handled with a specific error function that assigns different importance to errors in separate classes. The proposed method is evaluated on 3 synthetic and 8 real-world datasets. The results show statistically significant improvement over previous online NN methods.
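The two ideas in this abstract can be sketched together in a toy online learner. This is illustrative only, assuming a simple logistic unit: the exact forgetting function and error weighting in the paper are not reproduced here; forgetting is applied to running class counts and the error weight is the inverse class frequency.

```python
import numpy as np

rng = np.random.default_rng(2)

w = np.zeros(3)       # weights for 2 features + bias
lr = 0.1
forget = 0.99         # forgetting factor: old class frequencies decay
counts = np.ones(2)   # smoothed per-class counts seen so far

for _ in range(2000):
    c = 0 if rng.random() < 0.9 else 1     # 90/10 imbalanced stream
    x = rng.normal(2 * c, 1.0, size=2)     # class-conditional features
    xb = np.append(x, 1.0)

    counts = forget * counts               # forget stale class frequencies
    counts[c] += 1.0
    weight = counts.sum() / (2 * counts[c])  # rarer class -> larger weight

    p = 1.0 / (1.0 + np.exp(-w @ xb))      # logistic prediction
    w += lr * weight * (c - p) * xb        # cost-sensitive online SGD step

# After training, the minority class region should still score high.
minority = 1.0 / (1.0 + np.exp(-w @ np.array([2.0, 2.0, 1.0])))
majority = 1.0 / (1.0 + np.exp(-w @ np.array([0.0, 0.0, 1.0])))
```

Decaying the counts lets the error weights track a drifting class distribution, while the per-class weight keeps the minority class from being drowned out.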

67 citations

Journal ArticleDOI
TL;DR: An online ensemble of neural network (NN) classifiers whose main contribution is a two-layer approach for handling class imbalance and non-stationarity; cost-sensitive learning is embedded into the training phase of the NNs.

60 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, and sampling methods, concluding with a discussion of combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods that tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
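The co-occurrence matrices mentioned above can be illustrated with a minimal gray-level co-occurrence matrix (GLCM). This is a generic sketch of the descriptor family, not the deliverable's specific method; the offset and quantization are illustrative.

```python
import numpy as np

def cooccurrence(img, levels, dx=1, dy=0):
    """Count how often gray level i is followed by level j at offset (dy, dx)."""
    glcm = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    return glcm

# 3-level toy image; the GLCM records horizontal neighbor statistics,
# a simple form of the spatial relationships low-level descriptors miss.
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
glcm = cooccurrence(img, levels=3)
```

Normalizing `glcm` and computing statistics such as contrast or homogeneity over it yields texture features that encode where gray levels occur relative to each other, not just how often they occur.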

2,134 citations