
Showing papers presented at "Intelligent Systems Design and Applications in 2012"


Proceedings ArticleDOI
01 Nov 2012
TL;DR: Different machine learning techniques, namely Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbors (KNN) and Neural Network (NNet), are evaluated on different performance measures for predicting the priority of newly reported bugs.
Abstract: In bug repositories, we receive a large number of bug reports on a daily basis. Managing such a large repository is a challenging job. The priority of a bug indicates how important and urgent it is to fix. Bug priority can be classified into five levels, P1 to P5, where P1 is the highest and P5 is the lowest priority. Correct prioritization of bugs helps in bug fix scheduling/assignment and resource allocation; failure to do so results in delays in resolving important bugs. This calls for a bug prediction system which can predict the priority of a newly reported bug. Cross-project validation, in which we train a classifier on one project and test it for prediction on other projects, is also an important concern in empirical software engineering. In the available literature, we found very few papers on bug priority prediction and none of them dealt with cross-project validation. In this paper, we have evaluated the performance of different machine learning techniques, namely Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbors (KNN) and Neural Network (NNet), in predicting the priority of newly reported bugs on the basis of different performance measures. We performed cross-project validation for 76 cases across five data sets from the OpenOffice and Eclipse projects. The accuracy of the different machine learning techniques in predicting the priority of a reported bug, both within and across projects, is found to be above 70% except for the Naive Bayes technique.
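
As a loose illustration of the workflow the abstract describes (not the authors' implementation), the following Python sketch trains a priority classifier on one project's bug reports and tests it on another with scikit-learn; the report texts, labels and the choice of TF-IDF features are assumptions made for the example.

    # Hedged sketch: cross-project bug priority prediction with scikit-learn.
    # The report texts and priority labels below are illustrative, not real data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline

    # Training project (e.g. an Eclipse data set) and testing project (e.g. OpenOffice).
    train_reports = ["crash on startup when workspace is locked",
                     "typo in preferences dialog label",
                     "data loss after unexpected shutdown"]
    train_priority = ["P1", "P4", "P1"]

    test_reports = ["spreadsheet freezes while saving large files",
                    "minor misalignment of toolbar icons"]
    test_priority = ["P2", "P5"]

    # TF-IDF features of the report summary plus a linear SVM, one of the four
    # classifier families compared in the paper (SVM, NB, KNN, NNet).
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(train_reports, train_priority)

    predictions = clf.predict(test_reports)
    print("cross-project accuracy:", accuracy_score(test_priority, predictions))

In the paper's setting, the same train-on-one-project, test-on-another evaluation would be repeated across the 76 cross-project cases and compared across the four classifiers.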

72 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: Prediction of a risk score for heart disease in Andhra Pradesh is discussed; class association rules are generated using feature subset selection, and these rules will help physicians predict heart disease in a patient.
Abstract: Medical data mining is the search for relationships and patterns within medical data that could provide useful knowledge for effective medical diagnosis. Extracting useful information from these databases can lead to the discovery of rules for later diagnosis tools. Generally, medical databases are highly voluminous in nature. If a training data set contains irrelevant and redundant features, classification may produce less accurate results. Feature selection as a pre-processing step is used to reduce dimensionality, remove irrelevant data, increase accuracy and improve comprehensibility. Associative classification is a recent and rewarding technique that applies the methodology of association to classification and achieves high classification accuracy. Most associative classification algorithms adopt exhaustive search algorithms, as in Apriori, and generate a huge number of rules from which a set of high-quality rules is chosen to construct an efficient classifier. Hence, generating a small set of high-quality rules to build a classifier is a challenging task. Cardiovascular diseases are the leading cause of death globally, and in India more deaths are due to CHD. Cardiovascular disease is an increasingly important cause of death in Andhra Pradesh. Hence there is an urgent need to develop a system to predict the heart disease of people. This paper discusses prediction of a risk score for heart disease in Andhra Pradesh. We generated class association rules using feature subset selection. These generated rules will help physicians predict the heart disease of a patient.
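
The abstract does not spell out the rule-mining step; as a hedged sketch of what class association rules look like, the following computes support and confidence of candidate rules whose consequent is the class label, over a toy patient table (the feature names, records and thresholds are invented for illustration).

    # Hedged sketch: support/confidence of candidate class association rules over a
    # toy patient table; feature names, records and thresholds are invented.
    from itertools import combinations

    # Each record: the set of selected (boolean) risk features present, plus the class label.
    records = [
        ({"high_bp", "smoker", "high_chol"}, "heart_disease"),
        ({"high_bp", "high_chol"},           "heart_disease"),
        ({"smoker"},                         "no_disease"),
        ({"high_bp", "smoker"},              "heart_disease"),
        (set(),                              "no_disease"),
    ]

    features = {"high_bp", "smoker", "high_chol"}
    min_support, min_confidence = 0.4, 0.8

    for size in (1, 2):
        for antecedent in combinations(sorted(features), size):
            ant = set(antecedent)
            covered = [label for feats, label in records if ant <= feats]
            support = len(covered) / len(records)
            if not covered or support < min_support:
                continue                        # prune infrequent antecedents
            confidence = covered.count("heart_disease") / len(covered)
            if confidence >= min_confidence:
                print(f"{ant} => heart_disease "
                      f"(support={support:.2f}, confidence={confidence:.2f})")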

53 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: Simulations using the CABOB model with prices procured from several cloud vendors' datasets show its effectiveness in multiple resource procurement in the realm of cloud computing.
Abstract: Multiple resource procurement from several cloud vendors participating in bidding is addressed in this paper. This is done by assigning dynamic pricing for these resources. Since we consider multiple resources to be procured from several cloud vendors bidding in an auction, the problem turns out to be one of a combinatorial auction. We pre-process the user requests, analyze the auction and declare a set of vendors bidding for the auction as winners based on the Combinatorial Auction Branch on Bids (CABOB) model. Simulations using our approach with prices procured from several cloud vendors' datasets show its effectiveness in multiple resource procurement in the realm of cloud computing.

49 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: WarpingLCSS is a novel variant of LCSS that determines occurrences of gestures without segmenting data and runs one order of magnitude faster than Segmented LCSS; the LCSS approaches outperform existing template matching approaches on a dataset that suffers from boundary noise and execution variation.
Abstract: Template matching methods using Dynamic Time Warping (DTW) have been used recently for online gesture recognition from body-worn motion sensors. However, DTW has been shown to be sensitive under the strong presence of noise in time series. In sensor readings, labeling the temporal boundaries of daily gestures precisely is rarely achievable as they are often intertwined. Moreover, variation in daily gesture execution always exists. Therefore, here we propose two template matching methods utilizing the Longest Common Subsequence (LCSS) to improve robustness against such noise for online gesture recognition. Segmented LCSS utilizes a sliding window to define the unknown boundaries of gestures in the continuously arriving sensor readings and efficiently detects a possibly shorter gesture within it. WarpingLCSS is our novel variant of LCSS that determines occurrences of gestures without segmenting data and performs one order of magnitude faster than the Segmented LCSS. WarpingLCSS requires few resources to process newly arriving samples, thus it is suitable for real-time gesture recognition implemented directly on small wearable devices. We compare our methods with the existing template matching methods based on Dynamic Time Warping (DTW) on two real-world gesture datasets from arm-worn accelerometer data. The results demonstrate that the LCSS approaches outperform the existing template matching approaches (by about 12% in accuracy) on the dataset that suffers from boundary noise and execution variation.
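
For readers unfamiliar with LCSS, the dynamic-programming core that both proposed methods build on can be sketched as follows; the matching threshold eps and the toy sequences are illustrative, not taken from the paper.

    # Hedged sketch: LCSS similarity between two 1-D sensor sequences.
    # Two samples "match" when they differ by less than a threshold eps.
    def lcss_length(a, b, eps):
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if abs(a[i - 1] - b[j - 1]) < eps:
                    dp[i][j] = dp[i - 1][j - 1] + 1      # samples match: extend the subsequence
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    template = [0.1, 0.5, 0.9, 0.5, 0.1]          # stored gesture template
    window   = [0.0, 0.2, 0.6, 1.0, 0.4, 0.1]     # sliding-window excerpt from the stream
    score = lcss_length(template, window, eps=0.2) / len(template)   # normalized similarity
    print("LCSS similarity:", score)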

44 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: Electrooculography is a medical test used by the ophthalmologists for monitoring eyeball movement in Rapid Eye Movement (REM) and non-REM sleep, to detect the disorders of human eyes and to measure the resting potential of the eye.
Abstract: At present, most of the hospitals and diagnostic centers globally use wireless media to exchange biomedical information for mutual availability of therapeutic case studies. The required level of security and authenticity for transmitting biomedical information through the internet is quite high. The level of security can be increased, the authenticity of the information can be verified and control over the copy process can be ascertained by adding a watermark as "ownership" information in multimedia content. In the proposed method, different types of gray-scale biomedical images can be used as the added ownership (watermark) data. Electrooculography is a medical test used by ophthalmologists for monitoring eyeball movement in Rapid Eye Movement (REM) and non-REM sleep, to detect disorders of the human eyes and to measure the resting potential of the eye. In the present work, the 1-D EOG signal is transformed into a 2-D signal. DWT, DCT and SVD are applied to the transformed 2-D signal to embed a watermark in it. Extraction of the watermark image is done by applying inverse DWT, inverse DCT and SVD. The Peak Signal to Noise Ratio (PSNR) of the original EOG signal vs. the watermarked signal and the correlation value between the original and extracted watermark image are calculated to prove the efficacy of the proposed method.
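
As a rough sketch of a generic DWT-DCT-SVD embedding step of the kind described (not the paper's exact scheme), the following uses PyWavelets, SciPy and NumPy; the reshape size, wavelet, scaling factor alpha and the random stand-ins for the EOG signal and watermark are all assumptions.

    # Hedged sketch of a generic DWT + DCT + SVD watermark embedding step.
    # Signal sizes, wavelet choice and alpha are illustrative assumptions.
    import numpy as np
    import pywt
    from scipy.fftpack import dct, idct

    eog_1d = np.random.randn(4096)             # stands in for the recorded 1-D EOG signal
    host = eog_1d.reshape(64, 64)              # 1-D signal rearranged as a 2-D "image"
    watermark = np.random.rand(32, 32)         # stands in for the gray-scale watermark image

    # 1) DWT of the host, 2) 2-D DCT of the approximation band, 3) SVD of that band.
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")
    LL_dct = dct(dct(LL, norm="ortho", axis=0), norm="ortho", axis=1)
    U, S, Vt = np.linalg.svd(LL_dct)

    # Embed: perturb the singular values with the watermark's singular values.
    Sw = np.linalg.svd(watermark, compute_uv=False)
    alpha = 0.05
    S_marked = S + alpha * Sw

    # Rebuild the watermarked 2-D signal by inverting each transform in turn.
    LL_marked = idct(idct(U @ np.diag(S_marked) @ Vt, norm="ortho", axis=1),
                     norm="ortho", axis=0)
    watermarked = pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
    print("watermarked signal shape:", watermarked.shape)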

42 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: A novel method for monitoring rechargeable Li-ion batteries with wireless communication architecture is proposed which is a low cost, low power, highly reliable, redundant and scalable system and Experimental results show that this solution is practically applicable to EV platform.
Abstract: In this era of technological revolution, the automotive industry has witnessed tremendous progress with environment-friendly Electric Vehicles (EVs). To account for the consumer cost, electric vehicles should offer maximum mileage in one charge by monitoring and utilizing maximum energy from the battery pack without significantly affecting the battery life. A reliable monitoring system reduces the user's anticipation of the battery life when the vehicle undergoes hard real-time situations, i.e., rapid acceleration and braking. In this paper, a novel method for monitoring rechargeable Li-ion batteries with a wireless communication architecture is proposed, which is a low-cost, low-power, highly reliable, redundant and scalable system. Experimental results show that this solution is practically applicable to the EV platform.

39 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: A survey of Cognitive Radio techniques with its IEEE 802.22 standard, various defensive methods against PUE attack, primary signal detection methods and the features of SpiderRadio, a cognitive radio device are presented.
Abstract: Cognitive radio is a new technology which complements wireless devices by improving efficiency, speed and reliability. There is always a huge demand for spectrum usage, as the availability of the radio spectrum is limited. Cognitive radio technology is seen as a potential solution for the efficient utilization of the available spectrum by unlicensed legitimate users. One of the major threats to a cognitive radio network is the Primary User Emulation (PUE) attack. In this paper, a survey of cognitive radio techniques with the IEEE 802.22 standard, various defensive methods against the PUE attack, primary signal detection methods and the features of SpiderRadio, a cognitive radio device, is presented.

35 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: In this article, the authors proposed an unsupervised outlier detection scheme for streaming data, which is based on clustering and it does not require labeled data and also assigns weights to attributes according to their respective relevance in mining task.
Abstract: Outlier detection is a very important task in many fields like network intrusion detection, credit card fraud detection, stock market analysis, detecting outlying cases in medical data, etc. Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times and new concepts may keep evolving in the incoming data over time. Irrelevant attributes can be termed noisy attributes, and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. This scheme is based on clustering, as clustering is an unsupervised data mining task that does not require labeled data. In the proposed scheme, both density-based and partitioning clustering methods are combined to take advantage of both density-based and distance-based outlier detection. The proposed scheme also assigns weights to attributes depending upon their respective relevance in the mining task, and the weights are adaptive in nature. Weighted attributes are helpful to reduce or remove the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution. Experimental results on synthetic and real-world data sets show that our proposed approach outperforms an existing approach (CORM) in terms of outlier detection rate, false alarm rate, and increasing percentages of outliers.
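
The abstract does not give the algorithm itself; as a very loose illustration of clustering-based outlier detection on a stream, the sketch below maintains cluster centres incrementally with scikit-learn's MiniBatchKMeans and flags points far from every centre, while the density-based component and the adaptive attribute weighting of the proposal are omitted. The thresholds and the synthetic stream are invented.

    # Loose, hedged sketch of clustering-based outlier detection on a stream:
    # incremental partitioning clusters plus a distance-based outlier test.
    # Cluster count, threshold and the synthetic stream are illustrative only.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    rng = np.random.default_rng(0)
    model = MiniBatchKMeans(n_clusters=3, random_state=0)
    threshold = 2.5                      # distance beyond which a point is called an outlier

    for chunk_id in range(5):            # each chunk stands for one window of the stream
        chunk = rng.normal(0, 1, size=(200, 4))
        chunk[-1] = [8, 8, 8, 8]         # inject one obvious outlier per chunk
        model.partial_fit(chunk)         # incremental update keeps the model adaptive
        dists = np.min(
            np.linalg.norm(chunk[:, None, :] - model.cluster_centers_[None, :, :], axis=2),
            axis=1)
        outliers = np.where(dists > threshold)[0]
        print(f"chunk {chunk_id}: outlier indices {outliers}")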

32 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: The most popular objectives proposed over the past years are used, and it is shown how those objectives correlate with each other, how they perform when used in a single-objective Genetic Algorithm and in a Multi-Objective Genetic Algorithm, and which community structure properties they tend to produce.
Abstract: Community detection in complex networks has attracted a lot of attention in recent years. Community detection can be viewed as an optimization problem, in which an objective function that captures the intuition of a community as a group of nodes with better internal connectivity than external connectivity is chosen to be optimized. Many single-objective optimization techniques have been used to solve the problem; however, those approaches have their drawbacks, since they optimize only one objective function and this results in a solution with a particular community structure property. More recently, researchers have viewed the problem as a multi-objective optimization problem and many approaches have been proposed to solve it. However, which objective functions can be used with each other is still under debate, since many objective functions have been proposed over the past years and most of them are somewhat similar in definition. In this paper we use the Genetic Algorithm (GA) as an effective optimization technique to solve the community detection problem as both a single-objective and a multi-objective problem; we use the most popular objectives proposed over the past years, and we show how those objectives correlate with each other, their performance when they are used in the single-objective Genetic Algorithm and the Multi-Objective Genetic Algorithm, and the community structure properties they tend to produce.
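
As a small illustration of plugging one of the popular objectives into a GA, the sketch below evaluates modularity as the fitness of an individual that assigns each node to a community, using NetworkX; the graph and the candidate assignment are toy choices, not the paper's experimental setup.

    # Hedged sketch: evaluating one popular objective (modularity) as the
    # fitness of a GA individual that assigns each node to a community.
    import networkx as nx
    from networkx.algorithms.community import modularity

    G = nx.karate_club_graph()                       # toy benchmark graph

    # A GA "genome": genome[i] is the community id of node i (values here are arbitrary).
    genome = [0 if node < 17 else 1 for node in G.nodes()]

    def fitness(graph, genome):
        # Convert the assignment vector into the list-of-sets form NetworkX expects.
        communities = {}
        for node, cid in zip(graph.nodes(), genome):
            communities.setdefault(cid, set()).add(node)
        return modularity(graph, list(communities.values()))

    print("modularity fitness of this individual:", fitness(G, genome))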

30 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: A new college admission system using a hybrid recommender based on data mining techniques and knowledge discovery rules is presented for tackling college admissions prediction problems; the system is adaptive, since it can be tuned with other decision makers' attributes, performing the required tasks faster and fairly.
Abstract: This paper presents a new college admission system using a hybrid recommender based on data mining techniques and knowledge discovery rules, for tackling college admissions prediction problems. This is due to the huge number of students seeking admission to university colleges every year. The proposed HRSPCA system consists of two cascaded hybrid recommenders working together with the help of a college predictor, for achieving high performance. The first recommender assigns students' tracks for preparatory year students, while the second recommender assigns the specialized college for students who pass the preparatory year exams successfully. The college predictor algorithm uses historical college admission data with students' GPAs for predicting the most probable colleges. The system analyzes student academic merits, background, student records, and the college admission criteria. Then, it predicts the university college that a student is most likely to enter. A prototype system is implemented and tested with live data available in the On Demand University Services (ODUS) database resources at King Abdulaziz University (KAU). In addition to the high prediction accuracy rate, flexibility is an advantage, as the system can predict suitable colleges that match the students' profiles and the suitable track channels through which the students are advised to enter. The system is adaptive, since it can be tuned with other decision makers' attributes, performing the required tasks faster and fairly.

29 citations


Proceedings ArticleDOI
01 Nov 2012
TL;DR: An entropy-based bug prediction approach using support vector regression (SVR) is proposed and compared with the conventional simple linear regression (SLR) method; the proposed models are found to be good bug predictors, as they show a significant improvement in performance.
Abstract: Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches, namely code churn, past bugs, refactoring, number of authors, file size and age, etc., measuring performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These existing mathematical models, named software reliability growth models, are either calendar-time or testing-effort dependent. The occurrence of bugs in software is mainly due to continuous changes in the software code, and these continuous changes make the code complex. The complexity of code changes has already been quantified in terms of entropy by Hassan [9]. In the available literature, a few authors have proposed entropy-based bug prediction using the conventional simple linear regression (SLR) method. In this paper, we propose an entropy-based bug prediction approach using support vector regression (SVR). We have compared the results of the proposed models with the existing ones in the literature and found that the proposed models are good bug predictors, as they show a significant improvement in performance.
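
To make the entropy-of-changes idea concrete, here is a hedged sketch (not the paper's model or data) that computes the Shannon entropy of how changes spread across files in each period and fits scikit-learn's SVR to predict bug counts from it; the change counts and bug counts are invented.

    # Hedged sketch: Shannon entropy of the code-change distribution per period,
    # used as the predictor in a support vector regression. Data are invented.
    import numpy as np
    from scipy.stats import entropy
    from sklearn.svm import SVR

    # changes[t][f] = number of changes to file f in period t (toy numbers).
    changes = np.array([
        [10,  2,  1,  1],
        [ 5,  5,  4,  3],
        [ 1,  1,  1, 12],
        [ 6,  6,  5,  6],
    ], dtype=float)
    bugs = np.array([3.0, 6.0, 4.0, 7.0])     # bugs observed in the following period (invented)

    # Hassan-style entropy: normalise change counts per period, take Shannon entropy.
    probs = changes / changes.sum(axis=1, keepdims=True)
    H = np.array([entropy(p, base=2) for p in probs]).reshape(-1, 1)

    model = SVR(kernel="rbf", C=10.0)
    model.fit(H, bugs)
    print("predicted bugs for entropy 1.5:", model.predict([[1.5]]))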

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This study tries to link interval-valued fuzzy graphs with the fuzzy concept lattice to address the issue of reducing the number of fuzzy formal concepts and the size of their lattice structure.
Abstract: Formal Concept Analysis (FCA) in a fuzzy setting has been successfully applied by researchers for data analysis and representation. Reducing the number of fuzzy formal concepts and the size of their lattice structure is addressed as a major issue. In this study, we try to link interval-valued fuzzy graphs with the fuzzy concept lattice to overcome this issue. We show that the proposed method reduces the number of fuzzy formal concepts and the size of their lattice structure while preserving specialized and generalized concepts. The proposed link will be useful for researchers in data analysis and processing.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The aim of this paper is to introduce a new ranking procedure for trapezoidal intuitionistic fuzzy number (TRIFN), and the value and ambiguity index of TRIFNs have been defined.
Abstract: Techniques for ranking simple fuzzy numbers are abundant in nature. However, we lack effective methods for ranking intuitionistic fuzzy numbers (IFN). The aim of this paper is to introduce a new ranking procedure for the trapezoidal intuitionistic fuzzy number (TRIFN). To serve this purpose, the value and ambiguity index of TRIFNs have been defined. In order to rank TRIFNs, we have defined a ranking function by taking the sum of the value and ambiguity index. To illustrate the proposed ranking method, a numerical example has been given.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: In this article, a simple and efficient method to convolve an image with a Gaussian kernel is presented, which is performed in a constant number of operations per pixel using running sums along the image rows and columns.
Abstract: This paper presents a simple and efficient method to convolve an image with a Gaussian kernel. The computation is performed in a constant number of operations per pixel using running sums along the image rows and columns. We investigate the error function used for kernel approximation and its relation to the properties of the input signal. Based on natural image statistics we propose a quadratic form kernel error function so that the SSD error of the output image is minimized. We apply the proposed approach to approximate the Gaussian kernel by linear combination of constant functions. This results in a very efficient Gaussian filtering method. Our experiments show that the proposed technique is faster than state of the art methods while preserving similar accuracy.
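
The paper's own kernel construction is not reproduced here; the sketch below only illustrates the underlying mechanism of constant work per pixel via running sums, using a 1-D box filter whose repeated application approximates a Gaussian.

    # Hedged sketch: constant-work-per-pixel box filtering via running sums,
    # the basic mechanism behind constant-time Gaussian approximations.
    import numpy as np

    def box_filter_1d(signal, radius):
        # Prefix sums: any window sum becomes a single subtraction.
        padded = np.pad(signal, radius, mode="edge")
        csum = np.concatenate(([0.0], np.cumsum(padded)))
        width = 2 * radius + 1
        return (csum[width:] - csum[:-width]) / width

    def approx_gaussian_1d(signal, radius, passes=3):
        # Repeated box filtering converges toward a Gaussian (central limit theorem).
        out = signal.astype(float)
        for _ in range(passes):
            out = box_filter_1d(out, radius)
        return out

    row = np.random.rand(512)
    smoothed = approx_gaussian_1d(row, radius=4)   # apply along rows, then columns, for images
    print(smoothed[:5])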

Proceedings ArticleDOI
01 Jan 2012
TL;DR: A posture classification system using skeletal-tracking feature of Microsoft Kinect sensor with Angular representation of the skeleton data makes the system very robust and avoids problems related to human body occlusions and motion ambiguities.
Abstract: Human posture identification for motion controlling applications is becoming more of a challenge. We present a posture classification system using skeletal-tracking feature of Microsoft Kinect sensor. Posture recovery is carried out by detecting the human body joints, its position, and orientation at the same time. Angular representation of the skeleton data makes the system very robust and avoids problems related to human body occlusions and motion ambiguities. The implemented system is tested on a class of relatively common postures comprising hundreds of human pose instances by different people, where our classifier shows an average accuracy of 94.9%, 96.7% and 96.9% for linear, exponential and priority based matching systems respectively.
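
A tiny sketch of the angular representation idea: computing the angle at a joint from three 3-D joint positions, which removes dependence on where the body is in the frame; the joint coordinates are made up.

    # Hedged sketch: angle at a skeleton joint from three 3-D joint positions,
    # the kind of angular feature that makes a posture descriptor position-invariant.
    import numpy as np

    def joint_angle(parent, joint, child):
        v1 = np.asarray(parent, float) - np.asarray(joint, float)
        v2 = np.asarray(child, float) - np.asarray(joint, float)
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Illustrative Kinect-style coordinates (metres): shoulder, elbow, wrist.
    shoulder = (0.00, 1.40, 2.00)
    elbow    = (0.25, 1.15, 2.00)
    wrist    = (0.25, 0.90, 1.80)
    print("elbow angle (degrees):", joint_angle(shoulder, elbow, wrist))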

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A new feature encoding technique is proposed that is based on the amalgamation of Gabor filter-based features with SURF features (G-SURF) and applied to a Support Vector Machine (SVM) classifier to address the adverse scenario of part-based signature verification.
Abstract: In the field of biometric authentication, automatic signature identification and verification has been a strong research area because of the social and legal acceptance and extensive use of the written signature as an easy method for authentication. Signature verification is a process in which the questioned signature is examined in detail in order to determine whether it belongs to the claimed person or not. Signatures provide a secure means for confirmation and authorization in legal documents. So nowadays, signature identification and verification becomes an essential component in automating the rapid processing of documents containing embedded signatures. Sometimes, part-based signature verification can be useful when a questioned signature has lost its original shape due to inferior scanning quality. In order to address the above-mentioned adverse scenario, we propose a new feature encoding technique. This feature encoding is based on the amalgamation of Gabor filter-based features with SURF features (G-SURF). Features generated from a signature are applied to a Support Vector Machine (SVM) classifier. For experimentation, 1500 (50×30) forgeries and 1200 (50×24) genuine signatures from the GPDS signature database were used. A verification accuracy of 97.05% was obtained from the experiments.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The proposed work is to implement a novel block matching algorithm for motion vector estimation which performs better than other conventional block matching algorithms such as Three Step Search (TSS), New Three Step Search (NTSS) and Four Step Search (FSS).
Abstract: The most computationally expensive operation in entire video compression process is Motion Estimation. The challenge is to reduce the computational complexity and time of Exhaustive Search Algorithm without losing too much quality at the output. The proposed work is to implement a novel block matching algorithm for Motion Vector Estimation which performs better than other conventional Block Matching Algorithms such as Three Step Search (TSS), New Three Step Search (NTSS), and Four Step Search (FSS) etc.
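
For context, the conventional Three Step Search baseline mentioned above can be sketched as follows (this is the classic TSS with a sum-of-absolute-differences cost, not the proposed algorithm); the block size and the synthetic frames are illustrative.

    # Hedged sketch of the conventional Three Step Search (TSS) block matching
    # baseline, using sum of absolute differences (SAD) as the match cost.
    import numpy as np

    def sad(block, frame, y, x, size):
        h, w = frame.shape
        if y < 0 or x < 0 or y + size > h or x + size > w:
            return np.inf                              # candidate falls outside the frame
        return np.abs(block - frame[y:y + size, x:x + size]).sum()

    def three_step_search(block, ref_frame, y0, x0, size=16):
        best, step = (0, 0), 4                         # classic TSS step sizes: 4, 2, 1
        while step >= 1:
            candidates = [(best[0] + dy * step, best[1] + dx * step)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            best = min(candidates,
                       key=lambda mv: sad(block, ref_frame, y0 + mv[0], x0 + mv[1], size))
            step //= 2
        return best                                    # estimated motion vector (dy, dx)

    # Smooth synthetic frames; the reference is the current frame shifted by (2, 3).
    yy, xx = np.mgrid[0:64, 0:64]
    cur = 100.0 * np.sin(xx / 5.0) * np.cos(yy / 7.0)
    ref = np.roll(cur, shift=(2, 3), axis=(0, 1))
    print("estimated motion vector:", three_step_search(cur[16:32, 16:32], ref, 16, 16))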

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A congestion control mechanism in vehicular ad-hoc network that supports the communication of safe and unsafe messages among vehicles and infrastructure and reduces the congestion level of a node and also improves its quality of service.
Abstract: The wireless access in vehicular environment system is developed for enhancing the driving safety and comfort of automotive users. However, such system suffers from quality of service degradation for safety applications caused by the channel congestion in scenarios with high vehicle density. The present work is a congestion control mechanism in vehicular ad-hoc network. It supports the communication of safe and unsafe messages among vehicles and infrastructure. Each node maintains a control queue to store the safe messages and a service queue to store the unsafe messages. The control channel is used for the transmission of safe messages and service channel is used for the transmission of unsafe messages. Each node computes its own priority depending upon the number of waiting messages in control queue and service queue. Each node reserves a fraction of control channel and service channel dynamically depending upon the number of waiting messages in its queue. The unsafe messages at a node may also be transmitted using control channel provided the control channel is free and service channel is overloaded which helps to reduce the loss of unsafe message at a node which in turn reduces the congestion level of a node and also improves its quality of service. The available bandwidth is also distributed among the nodes dynamically depending upon their priority. The performance of the proposed scheme is evaluated on the basis of average loss of unsafe message, average delay in safe and unsafe message, storage overhead per node.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The main purpose of this paper is to show the use of formal concept analysis (FCA) as a data mining approach for mining the common hypermethylated genes between breast cancer subtypes by extracting formal concepts that represent sets of significant hypermethylated genes for each subtype, and to show how the resulting concept lattice can be used for knowledge discovery and knowledge representation, becoming more interesting for biologists.
Abstract: The main purpose of this paper is to show the use of formal concept analysis (FCA) as a data mining approach for mining the common hypermethylated genes between breast cancer subtypes, by extracting formal concepts that represent sets of significant hypermethylated genes for each breast cancer subtype. The formal context is then built, leading to the construction of a concept lattice composed of formal concepts. This lattice can be used for knowledge discovery and knowledge representation, therefore becoming more interesting for biologists.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The result from surgical and non-surgical face database shows that the proposed face recognition system can easily tackle illumination, pose, expression, occlusion and plastic surgery variations in face images.
Abstract: Facial plastic surgery changes facial features to a large extent, thus creating a major problem for face recognition systems. This paper proposes a new face recognition system using a novel shape local binary texture (SLBT) feature from face images, cascaded with periocular features, for plastic surgery invariant face recognition. In spite of their uniqueness and advantages, the existing feature extraction methods are capable of extracting either shape or texture features; a method which can extract both shape and texture features is more attractive. The proposed SLBT can extract global shape, local shape and texture information from a face image by extracting the local binary pattern (LBP) instead of direct intensity values from the shape-free patch of an active appearance model (AAM). The experiments conducted using the MUCT and plastic surgery face databases show that the SLBT feature performs better than AAM and LBP features. A further increase in recognition rate is achieved by cascading SLBT features from the face with LBP features from the periocular regions. The results from surgical and non-surgical face databases show that the proposed face recognition system can easily tackle illumination, pose, expression, occlusion and plastic surgery variations in face images.
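
As a small illustration of the texture half of the proposed descriptor, the sketch below computes a uniform LBP histogram for an image patch with scikit-image; the radius, neighbour count and the random patch are placeholder choices rather than the paper's settings.

    # Hedged sketch: uniform local binary pattern (LBP) histogram for a patch,
    # the texture component that SLBT combines with shape information.
    import numpy as np
    from skimage.feature import local_binary_pattern

    patch = np.random.randint(0, 256, (64, 64)).astype(np.uint8)   # stands in for a face patch

    radius = 1
    n_points = 8 * radius
    lbp = local_binary_pattern(patch, n_points, radius, method="uniform")

    # The histogram of LBP codes is the feature vector actually fed to the matcher.
    hist, _ = np.histogram(lbp, bins=np.arange(0, n_points + 3), density=True)
    print("LBP feature length:", hist.size)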

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A distinct e-marketing strategy is presented that takes advantage of the attraction mechanism of the firefly algorithm along with the quick-spread behavior of e-WOM for market campaigning, and exploits the social connectedness among online users to identify the best initial seeds.
Abstract: Opinions on various products are often shared by web users through online social media. These opinions play an important role as decision-making criteria for prospective customers. This sharing of experiences works as electronic word-of-mouth (e-WOM) publicity for a product in the internet world. The paper presents a distinct e-marketing strategy that takes advantage of the attraction mechanism of the firefly algorithm along with the quick-spread behavior of e-WOM for market campaigning. The firefly algorithm is inspired by biochemical and social aspects of real fireflies. The proposed approach analyses the current market trend in terms of relevant product features. User interest towards those features is extracted by mining their opinions. Subsequently, market segmentation is done by clustering similar users, and the best segment(s) are selected for product promotion. The strategy finally exploits the social connectedness among online users to identify the best initial seeds. Thus the proposed approach is capable of attracting the attention of a large span of web users while employing a small fraction of the advertising budget, and has potential in the current e-marketing scenario.
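
The abstract only names the firefly algorithm; as background, its standard attraction update (not the paper's full marketing pipeline) can be sketched as below, with the brightness function and parameters chosen arbitrarily for illustration.

    # Hedged sketch of the standard firefly algorithm attraction step:
    # dimmer fireflies move toward brighter ones, with attractiveness
    # decaying with distance. Objective and parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n, dim = 10, 2
    positions = rng.uniform(-5, 5, size=(n, dim))
    beta0, gamma, alpha = 1.0, 0.5, 0.2           # base attractiveness, absorption, randomness

    def brightness(x):
        return -np.sum(x ** 2)                    # toy objective: brighter nearer the origin

    for _ in range(50):
        for i in range(n):
            for j in range(n):
                if brightness(positions[j]) > brightness(positions[i]):
                    r2 = np.sum((positions[i] - positions[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    positions[i] += (beta * (positions[j] - positions[i])
                                     + alpha * (rng.random(dim) - 0.5))

    best = max(positions, key=brightness)
    print("best position found:", best)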

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A novel approach of eye detection for facial images using Gabor Filter and support vector machine (SVM) is proposed in this paper and the success rate is 96%.
Abstract: Eye detection has many applications in computer vision systems. A novel approach to eye detection for facial images using a Gabor filter and support vector machine (SVM) is proposed in this paper. Eye/non-eye patterns are rotated by different angles using the Gabor filter and then used to train the SVM. In the proposed approach, the face is first extracted using skin colour information, and then eye pair candidates are detected using the Lab transform and morphological operations; these are given to the SVM classifier to classify the detected eye pair candidates as eye or non-eye. The Lab and HSV colour spaces are used for face extraction and to find eye pair candidates. Separable Gabor filters are used to decrease computation time, and the rotation-invariant characteristics of the Gabor filter make this method robust against rotation. The proposed approach is tested on rotated images of the GTAV [13] database and is also experimented on videos captured at VITS, and the success rate achieved is 96%.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The proposed approach considers only the key concepts of a domain for classification instead of all the terms, which curbs the problem of dimensionality and results in a reduction of noise in the final output.
Abstract: The explosive growth of data on the web demands techniques that enable users to access the desired information. In information retrieval, document classification is a prerequisite. In practice, many classification techniques were and are in use. Term Frequency-Inverse Document Frequency (TF-IDF) is an approach which represents documents based on the frequency of terms in documents. A limitation of this approach is the high dimensionality of the data. Moreover, it does not consider the relations among the terms, resulting in a less precise and noisy end result. In our approach we use weighted Concept Frequency-Inverse Document Frequency (CF-IDF) with the background knowledge of a domain ontology for the classification of RSS feed news items. Metadata information of news items is used to assign weights to the identified concepts. No trained classifiers are required, as the ontology itself acts as a classifier. We have designed an ontology based on news industry standards. This classification approach considers relations among the concepts and properties, which results in a reduction of noise in the final output. It considers only the key concepts of a domain for classification instead of all the terms, which curbs the problem of dimensionality. Evaluation of the experimental results reveals that the proposed approach gives better classification results.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A saliency based approach, which simulates the recognition process of the human brain, and extracts the saliency map weighted HSV histograms by giving each pixel a weight according to its saliency.
Abstract: Person re-identification has long been a significant research direction in intelligent network surveillance. The challenging issues in person re-identification consist in pose, viewpoint and illumination changes and occlusions. In this paper, we propose a saliency based approach, which simulates the recognition process of the human brain, to tackle these issues. When people see a picture, they tend to focus on the salient areas and information in those areas is more determinant in the further matching and identification process. This so-called visual attention mechanism has long been studied and used in image segmentation, tracking, detection and recognition. To simulate this distinctive mechanism, we first calculate the saliency map which indicates the conspicuity of each pixel, and then we extract the saliency map weighted HSV histograms by giving each pixel a weight according to its saliency. We also design another feature, the salient colors, to address the occlusion problem. By opportunely combining these two features, our approach achieved state of the art performances.
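
To make the saliency-weighted histogram idea concrete, the hedged sketch below weights each pixel's vote in a hue histogram by a saliency value, using OpenCV for the colour conversion; the image and the saliency map are random placeholders rather than the output of a real saliency model.

    # Hedged sketch: HSV histogram in which each pixel's vote is weighted by its
    # saliency value. The image and saliency map are random placeholders.
    import numpy as np
    import cv2

    image = np.random.randint(0, 256, (128, 64, 3), dtype=np.uint8)   # stands in for a pedestrian crop
    saliency = np.random.rand(128, 64)                                # stands in for a saliency map

    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].ravel().astype(float)

    # np.histogram accepts per-sample weights, so salient pixels count for more.
    hist, _ = np.histogram(hue, bins=30, range=(0, 180), weights=saliency.ravel())
    hist /= hist.sum()                                                # normalized descriptor
    print("weighted hue histogram:", np.round(hist[:5], 3))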

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A new methodology for knowledge representation and reasoning based on generalised fuzzy Petri nets is presented, which is more flexible than the traditional one as in the former class the user has the chance to define the input/output operators.
Abstract: The aim of this paper is to present a new methodology for knowledge representation and reasoning based on generalised fuzzy Petri nets. Recently, this net model has been proposed as a new class of fuzzy Petri nets. The new class extends the existing fuzzy Petri nets by introducing two operators: t-norms and s-norms, which are supposed to function as substitute for the min and max operators. This model is more flexible than the traditional one as in the former class the user has the chance to define the input/output operators. The choice of suitable operators for a given reasoning process and the speed of reasoning process are very important, especially in real-time decision support systems. The advantages of the proposed methodology are shown in an application in train traffic control decision support.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This work presents an approach to identification of opinion leaders using the K-means clustering algorithm, which does not require knowledge of the user's opinions or membership in other forums.
Abstract: Online opinion leaders play an important role in the dissemination of information in discussion forums. They are a high-priority target group for viral marketing campaigns. On an average, an opinion leader will tell about his or her experience with a product or company to 14 other people. It is important to identify such opinion leaders from data derived from online activity of users. We present an approach to identification of opinion leaders using the K-means clustering algorithm. This approach does not require knowledge of the user's opinions or membership in other forums.
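
A minimal sketch of the clustering step described above: K-means over simple user-activity features, treating the highest-activity cluster as the opinion-leader group; the features and numbers are assumptions, not the paper's exact setup.

    # Hedged sketch: K-means over user activity features; the cluster with the
    # highest average activity is taken as the opinion-leader group.
    import numpy as np
    from sklearn.cluster import KMeans

    # Columns: posts authored, replies received, threads started (toy values per user).
    users = np.array([
        [120, 340, 25], [ 90, 280, 18], [  5,   2,  1],
        [  8,   6,  0], [150, 400, 30], [  3,   1,  0],
    ], dtype=float)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
    leader_cluster = np.argmax([users[km.labels_ == c].mean() for c in range(2)])
    leaders = np.where(km.labels_ == leader_cluster)[0]
    print("opinion-leader user indices:", leaders)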

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A very simple and convenient method to segment nail surface from the user hand and use it as a biometric identifier is proposed and results from 180 users validate the contributions from this paper.
Abstract: This paper presents a new biometric system based on the outer surface of the finger nail. There has not been any attempt in utilizing nail shape and texture for human authentication. The nail bed information, imitated on the nail surface proves to be a very unique and stable biometric identifier for personal authentication. However, research literature presents some complex set-up and the use of interferometer technique for extraction of nail bed details. In this work we propose a very simple and convenient method to segment nail surface from the user hand and use it as a biometric identifier. We further extract texture feature from nail-surface and our experimental results from 180 users validate the contributions from this paper.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: A method is presented to improve the computational efficiency of interval type-2 fuzzy c-means clustering (IT2-FCM) on the GPU platform, applied to land-cover classification from multi-spectral satellite images.
Abstract: When processing large data such as satellite images, computing speed is a problem that needs to be resolved. This paper introduces a method to improve the computational efficiency of interval type-2 fuzzy c-means clustering (IT2-FCM) based on the GPU platform, applied to land-cover classification from multi-spectral satellite images. GPU-based calculations are a high-performance solution and free up the CPU. The experimental results show that the performance of the GPU is many times faster than that of the CPU.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This work presents a novel technique which finds structurally similar words in a document for the removal of stop words, and compares the machine-generated summary with a human summary.
Abstract: The growth of the internet has given rise to the need for better Information Retrieval (IR) techniques which help in obtaining relevant information at a faster rate. Text summarization is one such technique, which aims at producing a quick and concise summary of a text. Of late, keyword-based summarization has drawn the wide attention of researchers in the Natural Language Processing community. The algorithm we have developed extracts keywords from Kannada text documents, for which we combine GSS (Galavotti, Sebastiani, Simi) [13] coefficients and IDF (Inverse Document Frequency) methods along with TF (Term Frequency) for extracting keywords, and later uses these for summarization. An important objective of our work is to assign a weight to each word in a sentence; the weight of a sentence is the sum of the weights of all its words, and based on this scoring of sentences we choose the top 'm' sentences. A document from a given category is selected from our database, custom built for this purpose. The files are obtained from Kannada Webdunia, a Kannada portal which offers political news, cinema news, sports news, shopping and jokes. Depending on the number of sentences given by the user, a summary is generated. Finally, we compare the machine-generated summary with a human summary. Yet another objective of this work is to perform feature extraction through the removal of stop words; for removing stop words we have presented a novel technique which finds structurally similar words in a document.
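
A simplified sketch of the word-weighting and sentence-scoring step (TF multiplied by IDF only; the GSS coefficient and the Kannada-specific stop-word handling are omitted), with a toy English news item standing in for a Kannada document.

    # Hedged sketch: weight each word by TF*IDF, score a sentence as the sum of
    # its word weights, and keep the top-m sentences as the summary.
    # (The paper additionally combines GSS coefficients; that part is omitted here.)
    import math
    from collections import Counter

    sentences = [
        "the minister announced a new sports stadium in the city",
        "the stadium will host cricket and football matches",
        "ticket prices for matches have not been decided",
    ]
    docs = [s.split() for s in sentences]          # treat each sentence as a "document" for IDF

    tf = Counter(w for d in docs for w in d)
    idf = {w: math.log(len(docs) / sum(1 for d in docs if w in d)) for w in tf}

    def sentence_score(words):
        return sum(tf[w] * idf[w] for w in words)

    m = 2                                          # number of sentences requested by the user
    summary = sorted(sentences, key=lambda s: sentence_score(s.split()), reverse=True)[:m]
    print(summary)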

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper proposes a new method for preventing SQL injection attacks in JSP web applications by using semantic comparison to check, before execution, the intended structure of the SQL query.
Abstract: Web applications are becoming an important part of our daily life, so attacks against them are also increasing rapidly. Among these attacks, a major role is played by SQL injection attacks (SQLIA). This paper proposes a new method for preventing SQL injection attacks in JSP web applications. The basic idea is to check, before execution, the intended structure of the SQL query; for this we use semantic comparison. This method prevents different kinds of injection attacks, including the stored procedure attack, which is more difficult and less considered in the literature.
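
As a rough illustration of the structural-comparison idea (not the paper's JSP implementation), the sketch below uses the Python sqlparse library to compare the token-type structure of a runtime query with the intended one; literal values are ignored, so an injected clause changes the structure and is rejected. The table and column names are invented.

    # Hedged sketch: compare the structural "shape" of a runtime SQL query with
    # the intended one; injected clauses alter the token-type sequence.
    import sqlparse
    from sqlparse.tokens import Whitespace

    def structure(query):
        tokens = sqlparse.parse(query)[0].flatten()
        # Keep token types only (the values of literals are irrelevant to the structure).
        return [tok.ttype for tok in tokens if tok.ttype is not Whitespace]

    intended = "SELECT * FROM users WHERE username = 'x' AND pwd = 'y'"

    def is_safe(runtime_query):
        return structure(runtime_query) == structure(intended)

    benign   = "SELECT * FROM users WHERE username = 'alice' AND pwd = 'secret'"
    injected = "SELECT * FROM users WHERE username = 'alice' AND pwd = '' OR '1'='1'"
    print(is_safe(benign))     # True: same structure, only literal values differ
    print(is_safe(injected))   # False: the extra OR clause changes the structure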