Showing papers in "International Journal of Computer Applications in 2010"
TL;DR: A number of DSLs spanning various phases of the software development life cycle are explored in terms of features that elucidate their advantages over general-purpose languages, and an in-depth study is performed by practically applying a few open-source DSLs: ‘Cascading’, the Naked Objects Framework and RSpec.
Abstract: To match the needs of the fast-paced generation, the speed of computing has also increased enormously. But there is a limit to which processor speed can be amplified. Hence, in order to increase productivity, there is a need to shift focus from processing time to programming time. Reduction in programming time can be achieved by identifying the domain to which a task belongs and using an appropriate Domain Specific Language (DSL). DSLs are constrained to use terms and concepts pertaining to an explicit domain, making them much easier for programmers to understand and learn, and cutting down development time drastically. In this paper, we explain what a DSL is, explore a number of DSLs spanning various phases of the software development life cycle in terms of features that elucidate their advantages over general-purpose languages, and perform an in-depth study by practically applying a few open-source DSLs: ‘Cascading’, the Naked Objects Framework and RSpec.
438 citations
TL;DR: The important role that mobile ad hoc networks play in the evolution of future wireless technologies is explained, and the latest research activities in the areas of MANET characteristics, capabilities and applications are reviewed.
Abstract: A mobile ad hoc network (MANET), sometimes called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links. Ad hoc networks are a new wireless networking paradigm for mobile hosts. Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure; instead, hosts rely on each other to keep the network connected. They represent complex distributed systems comprising wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary ad hoc network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure. The ad hoc networking concept is not a new one, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad hoc paradigm. Recently, the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping enable eventual commercial MANET deployments outside the military domain. These recent evolutions have been generating a renewed and growing interest in the research and development of MANETs. This paper attempts to provide a comprehensive overview of this dynamic field. It first explains the important role that mobile ad hoc networks play in the evolution of future wireless technologies. Then, it reviews the latest research activities in the areas of MANET characteristics, capabilities and applications.
424 citations
TL;DR: This paper gives an overview of the major technological perspective and appreciation of the fundamental progress of speech recognition, and also gives an overview of the techniques developed in each stage of speech recognition.
Abstract: Speech is the most prominent and primary mode of communication among human beings. Communication between a human and a computer takes place through the human-computer interface, and speech has the potential of being an important mode of interaction with computers. This paper gives an overview of the major technological perspective and appreciation of the fundamental progress of speech recognition, and also gives an overview of the techniques developed in each stage of speech recognition. The paper helps in choosing among the techniques based on their relative merits and demerits. A comparative study of the different techniques is done stage by stage. The paper concludes with a decision on the future direction for developing techniques for a human-computer interface system using the Marathi language.
225 citations
TL;DR: A new approach to phrase-level sentiment analysis is presented that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions, achieving results that are significantly better than baseline.
Abstract: There has been a recent swell of interest in the automatic identification and extraction of opinions, emotions, and sentiments in text. Motivation for this task comes from the desire to provide tools for information analysts in government, commercial, and political domains, who want to automatically track attitudes and feelings in the news and on-line forums. How do people feel about recent events in the Middle East? Is the rhetoric from a particular opposition group intensifying? What is the range of opinions being expressed in the world press about the best course of action in Iraq? A system that could automatically identify opinions and emotions from text would be an enormous help to someone trying to answer these kinds of questions. Researchers from many subareas of Artificial Intelligence and Natural Language Processing have been working on the automatic identification of opinions and related tasks. To date, most such work has focused on sentiment or subjectivity classification at the document or sentence level. Document classification tasks include, for example, distinguishing editorials from news articles and classifying reviews as positive or negative. A common sentence-level task is to classify sentences as subjective or objective. This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.
192 citations
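For readers who want a concrete picture of the two-step idea, the sketch below (not the paper's actual system, which relies on a subjectivity lexicon and rich contextual features) trains one classifier to separate neutral from polar phrases and a second to disambiguate the polarity of the polar ones, using scikit-learn and a few hypothetical phrases.

```python
# A minimal sketch of the two-step idea: classify a phrase as neutral vs. polar,
# then disambiguate the polarity of the polar phrases. Feature extraction here is
# just a bag of words; the paper's system uses far richer contextual features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: (phrase, neutral/polar, positive/negative-or-None)
train = [
    ("the report was released today", "neutral", None),
    ("a truly brilliant analysis",    "polar",   "positive"),
    ("a deeply flawed argument",      "polar",   "negative"),
    ("the meeting starts at noon",    "neutral", None),
]

vec = CountVectorizer()
X = vec.fit_transform(p for p, _, _ in train)

# Step 1: neutral vs. polar
step1 = LogisticRegression().fit(X, [lab for _, lab, _ in train])

# Step 2: polarity, trained only on the polar phrases
polar_rows = [i for i, (_, lab, _) in enumerate(train) if lab == "polar"]
step2 = LogisticRegression().fit(X[polar_rows], [train[i][2] for i in polar_rows])

def classify(phrase):
    x = vec.transform([phrase])
    if step1.predict(x)[0] == "neutral":
        return "neutral"
    return step2.predict(x)[0]

print(classify("a brilliant argument"))
```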
TL;DR: An elaborate comparative analysis is carried out to endow these algorithms with fitness sharing, aiming to investigate whether this improves performance when implemented in evolutionary algorithms.
Abstract: For over a decade, swarm intelligence, an artificial intelligence discipline, has been concerned with the design of intelligent multi-agent systems that take inspiration from the collective behaviors of social insects and other animal societies. Such systems are characterized by a decentralized way of working that mimics the behavior of the swarm, and swarm intelligence has become a successful paradigm for algorithms tackling complex problems. This paper focuses on a comparative analysis of the most successful optimization techniques inspired by Swarm Intelligence (SI): Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). An elaborate comparative analysis is carried out to endow these algorithms with fitness sharing, aiming to investigate whether this improves performance when implemented in evolutionary algorithms.
187 citations
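As a point of reference for the PSO side of the comparison, the sketch below implements the standard particle swarm update rules (inertia plus cognitive and social terms) on a stand-in objective; the parameter values and the sphere function are illustrative choices, not the paper's experimental setup.

```python
# A minimal particle swarm optimization sketch: standard velocity/position
# updates with inertia w and coefficients c1, c2, minimizing the sphere function.
import numpy as np

def sphere(x):
    return np.sum(x * x, axis=1)          # objective: sum of squares, minimum at 0

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive and social weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", pbest_val.min())
```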
TL;DR: Four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for the different noise types using the mean filter, median filter and Wiener filter.
Abstract: Image processing is basically the use of computer algorithms to perform processing on digital images. Digital image processing is a part of digital signal processing and has many significant advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Wavelet transforms have become a very powerful tool for de-noising an image, and one of the most popular methods is the Wiener filter. In this work, four types of noise (Gaussian noise, salt & pepper noise, speckle noise and Poisson noise) are used, and image de-noising is performed for each noise type with the mean filter, median filter and Wiener filter. The results are then compared across all noise types.
168 citations
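A rough sketch of the kind of experiment the abstract describes is given below: a synthetic image is corrupted with Gaussian, salt & pepper, speckle and Poisson noise, filtered with mean, median and Wiener filters via SciPy, and scored by mean squared error. The test image, noise levels and window sizes are assumptions, not the paper's settings.

```python
# Corrupt a synthetic image with four noise types and compare mean, median and
# Wiener filtering by mean squared error against the clean image.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 128), (128, 1))      # synthetic test image in [0, 1]

noisy = {
    "gaussian":    clean + rng.normal(0, 0.05, clean.shape),
    "salt_pepper": np.where(rng.random(clean.shape) < 0.05,
                            rng.integers(0, 2, clean.shape), clean),
    "speckle":     clean + clean * rng.normal(0, 0.1, clean.shape),
    "poisson":     rng.poisson(clean * 255) / 255.0,
}

def mse(a, b):
    return float(np.mean((a - b) ** 2))

for name, img in noisy.items():
    results = {
        "mean":   uniform_filter(img, size=3),
        "median": median_filter(img, size=3),
        "wiener": wiener(img, mysize=3),
    }
    print(name, {k: round(mse(v, clean), 5) for k, v in results.items()})
```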
TL;DR: An in-depth look at how digital images can be used as a carrier to hide messages is discussed and the performance of some of the steganography tools are analyzed.
Abstract: Steganography is the art of hiding information and an effort to conceal the existence of the embedded information. It serves as a better way of securing a message than cryptography, which conceals only the content of the message, not its existence. The original message is hidden within a carrier such that the changes introduced in the carrier are not observable. In this paper we discuss how digital images can be used as a carrier to hide messages, and we analyze the performance of some steganography tools. Steganography is a useful tool that allows covert transmission of information over an overt communications channel. Combining a secret image with the carrier image gives the hidden image, which is difficult to detect without retrieval. This paper takes an in-depth look at this technology by introducing the reader to various concepts of steganography, a brief history of steganography and a look at some steganographic techniques.
151 citations
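One widely used image-hiding technique the survey touches on is least-significant-bit (LSB) substitution. The sketch below, which assumes a grayscale NumPy carrier and is not tied to any of the specific tools analysed in the paper, embeds a byte string into the LSBs and recovers it.

```python
# A minimal illustration of image steganography by LSB substitution. Real tools
# add encryption, spreading and format handling on top of this basic idea.
import numpy as np

def embed(carrier, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = carrier.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this carrier")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(carrier.shape)

def extract(stego, n_bytes: int):
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

carrier = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed(carrier, b"hidden")
print(extract(stego, 6))            # b'hidden'
```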
TL;DR: The use of color in image processing is motivated by two principal factors: first, color is a powerful descriptor that often simplifies object identification and extraction from a scene; second, humans can discern thousands of color shades and intensities, compared to only about two dozen shades of gray.
Abstract: The use of color in image processing is motivated by two principal factors. First, color is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of color shades and intensities, compared to only about two dozen shades of gray. In the RGB model, each color appears in its primary spectral components of red, green and blue. This model is based on a Cartesian coordinate system. Images represented in the RGB color model consist of three component images, one for each primary; when fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite color image. The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green and blue images is an 8-bit image; under these conditions each RGB color pixel is said to have a depth of 24 bits. MATLAB 7.0 2007b was used for the implementation of all results.
149 citations
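The 24-bit pixel depth mentioned above can be seen directly by stacking three 8-bit component planes, as in the small NumPy illustration below.

```python
# Three 8-bit component images stacked into one RGB color image: 3 x 8 = 24 bits.
import numpy as np

rgb = np.zeros((2, 2, 3), dtype=np.uint8)    # tiny 2x2 image, three 8-bit planes
rgb[..., 0] = 255                            # fill the red plane
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

bits_per_pixel = rgb.dtype.itemsize * 8 * rgb.shape[-1]
print(bits_per_pixel)                        # 24 -> a pixel depth of 24 bits
```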
TL;DR: This paper proposes a density-varied DBSCAN algorithm which is capable of handling local density variation within the cluster, and shows that the proposed clustering algorithm gives optimized results.
Abstract: DBSCAN is a base algorithm for density-based clustering. It can detect clusters of different shapes and sizes from large amounts of data containing noise and outliers. However, it fails to handle the local density variation that exists within a cluster. In this paper, we propose a density-varied DBSCAN algorithm which is capable of handling local density variation within the cluster. It calculates the growing cluster density mean and then the cluster density variance for any core object that is to be expanded further, by considering the density of its ε-neighborhood with respect to the cluster density mean. If the cluster density variance for a core object is less than or equal to a threshold value, and the cluster similarity index is also satisfied, the core object is allowed to expand. The experimental results show that the proposed clustering algorithm gives optimized results.
143 citations
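The paper's full algorithm is not reproduced in the abstract, but the expansion test it describes can be sketched as below, with assumed definitions of ε-neighborhood density, density variance and threshold; the cluster similarity index check is omitted.

```python
# A sketch of the expansion test described above, with assumed definitions:
# a core object is expanded only if the deviation of its eps-neighbourhood
# density from the current cluster's density mean stays below a threshold.
import numpy as np

def neighbourhood_density(points, idx, eps):
    d = np.linalg.norm(points - points[idx], axis=1)
    return int(np.sum(d <= eps))             # count of points in the eps-ball

def may_expand(points, core_idx, cluster_idx, eps, tau):
    cluster_densities = [neighbourhood_density(points, i, eps) for i in cluster_idx]
    mean = np.mean(cluster_densities)         # growing cluster density mean
    core = neighbourhood_density(points, core_idx, eps)
    variance = (core - mean) ** 2             # deviation of the core object's density
    return variance <= tau

pts = np.random.default_rng(1).normal(size=(100, 2))
print(may_expand(pts, core_idx=0, cluster_idx=[1, 2, 3], eps=0.5, tau=4.0))
```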
TL;DR: The goal of this survey is to provide a comprehensive review of different clustering techniques in data mining.
Abstract: The goal of this survey is to provide a comprehensive review of different clustering techniques in data mining.
139 citations
TL;DR: In this paper, some security threats and challenges faced by WSNs are discussed.
Abstract: Wireless sensor networks have become a growing area of research and development due to the tremendous number of applications that can greatly benefit from such systems. This has led to the development of tiny, cheap, disposable and self-contained battery-powered computers, known as sensor nodes or “motes”, which can accept input from an attached sensor, process this input data and transmit the results wirelessly to the transit network. Despite making such sensor networks possible, the very wireless nature of the sensors presents a number of security threats when they are deployed for certain applications such as military and surveillance use. The security problem stems from the wireless nature of sensor networks and the constrained resources of the sensor nodes, which means that security architectures used for traditional wireless networks are not viable. Furthermore, wireless sensor networks have an additional vulnerability because nodes are often placed in a hostile or dangerous environment where they are not physically protected. In this paper we discuss some security threats and challenges faced by WSNs.
TL;DR: This paper proposes the decision-tree-based algorithm to construct a multiclass intrusion detection system, which can decrease the training and testing time, increasing the efficiency of the system.
Abstract: Support Vector Machines (SVMs) are classifiers that were originally designed for binary classification, but many classification applications require solving multi-class problems. A decision-tree-based support vector machine, which combines support vector machines with a decision tree, can be an effective way of solving multi-class problems. This method can decrease the training and testing time, increasing the efficiency of the system. The different ways of constructing the binary tree divide the data set into two subsets, from the root to the leaves, until every subset consists of only one class; the construction order of the binary tree has great influence on classification performance. In this paper we study an algorithm, the tree-structured multiclass SVM, which has been used for classifying data, and propose the decision-tree-based algorithm to construct a multiclass intrusion detection system.
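To make the tree-of-binary-SVMs idea concrete, the sketch below builds such a tree recursively with scikit-learn's SVC: each internal node separates two groups of classes, so a test sample needs at most one SVM evaluation per tree level. The grouping rule used here (splitting the class list in half) is a placeholder for the construction orders the paper studies, and the data is synthetic.

```python
# A rough sketch of a decision tree of binary SVMs for multiclass classification.
from sklearn.svm import SVC
import numpy as np

class SVMTreeNode:
    def __init__(self, classes):
        self.classes = list(classes)
        self.svm = self.left = self.right = None

    def fit(self, X, y):
        if len(self.classes) == 1:
            return self                                     # leaf: a single class
        mid = len(self.classes) // 2
        left_cls, right_cls = self.classes[:mid], self.classes[mid:]
        mask = np.isin(y, self.classes)
        side = np.isin(y[mask], right_cls).astype(int)      # 0 = left group, 1 = right
        self.svm = SVC(kernel="rbf").fit(X[mask], side)
        self.left = SVMTreeNode(left_cls).fit(X, y)
        self.right = SVMTreeNode(right_cls).fit(X, y)
        return self

    def predict_one(self, x):
        if len(self.classes) == 1:
            return self.classes[0]
        branch = self.right if self.svm.predict(x.reshape(1, -1))[0] else self.left
        return branch.predict_one(x)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 4, 200)                   # 4 hypothetical classes
tree = SVMTreeNode(range(4)).fit(X, y)
print(tree.predict_one(X[0]), y[0])
```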
TL;DR: The techniques of content-based image retrieval are discussed, analysed and compared, and features such as the neuro-fuzzy technique, color histogram, texture and edge density are introduced for an accurate and effective Content-Based Image Retrieval System.
Abstract: As networks and multimedia technologies become more popular, users are no longer satisfied with traditional information retrieval techniques, so content-based image retrieval is nowadays becoming a source of exact and fast retrieval. In this paper the techniques of content-based image retrieval are discussed, analysed and compared. It also introduces features such as the neuro-fuzzy technique, color histogram, texture and edge density for an accurate and effective Content-Based Image Retrieval System.
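Of the features listed above, the color histogram is the simplest to illustrate; the sketch below computes per-channel histograms and ranks database images by Euclidean distance to the query. The bin count and distance measure are assumptions, and texture, edge density and the neuro-fuzzy component are not shown.

```python
# Retrieve the database image whose color histogram is closest to the query's.
import numpy as np

def color_histogram(img, bins=8):
    # img: H x W x 3 uint8 array; per-channel histograms, concatenated and normalized
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def retrieve(query, database):
    q = color_histogram(query)
    dists = [np.linalg.norm(q - color_histogram(img)) for img in database]
    return int(np.argmin(dists))              # index of the closest database image

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
print(retrieve(db[2], db))                    # retrieves index 2
```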
TL;DR: This paper will throw light on the evolution and development of various generations of mobile wireless technology along with their significance and advantages of one over the other.
Abstract: Wireless communication is the transfer of information over a distance without the use of electrical conductors or "wires". The distances involved may be short (a few meters, as in television remote control) or long (thousands or millions of kilometers for radio communications). When the context is clear, the term is often shortened to "wireless". It encompasses various types of fixed, mobile and portable two-way radios, cellular telephones, Personal Digital Assistants (PDAs) and wireless networking. In this paper we throw light on the evolution and development of the various generations of mobile wireless technology, along with their significance and the advantages of one over the other. In the past few decades, mobile wireless technologies have experienced four or five generations of technology revolution and evolution, namely from 0G to 4G. Current research in mobile wireless technology concentrates on advanced implementation of 4G technology and on 5G technology; at present the term 5G is not officially used. For 5G, research is being carried out on the development of the World Wide Wireless Web (WWWW), Dynamic Adhoc Wireless Networks (DAWN) and the Real Wireless World.
TL;DR: This paper discusses the implementation of three categories of image fusion algorithms – the basic fusion algorithms, the pyramid based algorithms and the basic DWT algorithms, developed as an Image Fusion Toolkit - ImFus, using Visual C++ 6.0.
Abstract: Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. This paper discusses the implementation of three categories of image fusion algorithms – the basic fusion algorithms, the pyramid-based algorithms and the basic DWT algorithms – developed as an image fusion toolkit, ImFus, using Visual C++ 6.0. The objective of the paper is to assess this wide range of algorithms together, which is not found in the literature. The fused images were assessed using the Structural Similarity Image Metric (SSIM) [10] and Laplacian Mean Squared Error, along with seven other simple image quality metrics that helped us measure various image features; these were also implemented as part of the toolkit. The readings produced by the image quality metrics on the fused images were used to assess the algorithms, and Pareto optimization was used to identify the algorithm that consistently had the best metric readings. An assessment of the quality of the fused images was additionally performed with the help of ten respondents based on their visual perception, to verify the results produced by the metric-based assessment. Both assessment methods matched in their ranking of the algorithms. The Pareto optimization method picked the DWT with Haar fusion method as the one with the best image quality metric readings. This result was substantiated by the visual perception based method, where the fused images produced by the DWT with Haar fusion method were marked the best 63.33% of the time, far better than any other algorithm. Both methods also matched in assessing the morphological pyramid method as producing fused images of inferior quality.
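A sketch of the DWT-with-Haar fusion rule that the study ranked best is shown below, using the PyWavelets package (an assumption; the toolkit itself was written in Visual C++): approximation coefficients are averaged and the larger-magnitude detail coefficients are kept.

```python
# DWT (Haar) image fusion: average the approximation coefficients, keep the
# detail coefficients with the larger magnitude, then invert the transform.
import numpy as np
import pywt

def dwt_haar_fuse(img_a, img_b):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, "haar")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, "haar")
    cA = (cA1 + cA2) / 2.0                                       # average approximations
    fuse = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs details
    return pywt.idwt2((cA, (fuse(cH1, cH2), fuse(cV1, cV2), fuse(cD1, cD2))), "haar")

a = np.outer(np.linspace(0, 1, 64), np.ones(64))                 # two synthetic inputs
b = a.T
print(dwt_haar_fuse(a, b).shape)                                 # (64, 64)
```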
TL;DR: This paper describes cryptography and various symmetric key algorithms in detail and then proposes a new symmetric key algorithm; symmetric key encryption is the quickest and most commonly used type of encryption.
Abstract: Any communication in the language that you and I speak, that is, the human language, takes the form of plain text or clear text; a message in plain text can be understood by anybody knowing the language, as long as the message is not codified in any manner. So we have to use a coding scheme to ensure that information is hidden from anyone for whom it is not intended, even those who can see the coded data. Cryptography is the art of achieving security by encoding messages to make them non-readable; it is the practice and study of hiding information. In modern times, cryptography is considered a branch of both mathematics and computer science and is affiliated closely with information theory, computer security and engineering. Cryptography is used in applications present in technologically advanced societies; examples include the security of ATM cards, computer passwords and electronic commerce, all of which depend on cryptography. There are two basic types of cryptography: symmetric key and asymmetric key. Symmetric key algorithms are the quickest and most commonly used type of encryption; here, a single key is used for both encryption and decryption. There are a few well-known symmetric key algorithms, e.g. DES, RC2, RC4 and IDEA. This paper describes cryptography and various symmetric key algorithms in detail and then proposes a new symmetric key algorithm. Algorithms for both encryption and decryption are provided, and the advantages of this new algorithm over the others are explained. Categories & Subject Descriptors [Cryptography & Steganography]: A New Algorithm.
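The single-key property that defines symmetric encryption can be illustrated with the toy XOR keystream cipher below; it is not the algorithm proposed in the paper and is not secure, but it shows the same key encrypting and decrypting.

```python
# A toy illustration of symmetric (single-key) encryption: the same key both
# encrypts and decrypts. For illustration only; not the paper's algorithm.
import hashlib
from itertools import count

def keystream(key: bytes, length: int):
    out = b""
    for i in count():                              # expand the key into a keystream
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))  # XOR is its own inverse

key = b"shared secret"
ciphertext = xor_cipher(key, b"attack at dawn")
print(xor_cipher(key, ciphertext))                 # b'attack at dawn'
```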
TL;DR: The proposed algorithmic model is based on textural features such as the gray-level co-occurrence matrix and Gabor responses, and shows that relatively good performance can be achieved using the KNN classifier algorithm.
Abstract: In this paper, we propose an algorithmic model for automatic classification of flowers using a KNN classifier. The proposed algorithmic model is based on textural features such as the gray-level co-occurrence matrix and Gabor responses. A flower image is segmented using a threshold-based method. The data set has different flower species with similar appearance (small inter-class variations) across different classes and varying appearance (large intra-class variations) within a class. Also, the images of flowers are of different poses with cluttered backgrounds, under varying lighting and climatic conditions. The flower images were collected from the World Wide Web in addition to photographs taken in natural scenes. Experimental results are presented on a dataset of 1250 images consisting of 25 flower species. It is shown that relatively good performance can be achieved using the KNN classifier algorithm. A qualitative comparative analysis of the proposed method with other well-known existing flower classification methods is also presented. General Terms: Pattern Recognition, Image Processing, Algorithms
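In the spirit of the pipeline described above, the sketch below computes a simple gray-level co-occurrence matrix, derives contrast and energy, and feeds them to scikit-learn's KNN classifier on synthetic textures; the Gabor responses, segmentation step and real flower data are omitted.

```python
# GLCM texture features (contrast, energy) plus a KNN classifier on synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img, levels=8):
    q = (img.astype(float) / 256 * levels).astype(int)        # quantize gray levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):      # horizontal neighbours
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    return [contrast, energy]

rng = np.random.default_rng(0)
smooth = [rng.integers(100, 120, (32, 32)) for _ in range(10)]  # two synthetic
noisy = [rng.integers(0, 256, (32, 32)) for _ in range(10)]     # "texture" classes
X = [glcm_features(im) for im in smooth + noisy]
y = [0] * 10 + [1] * 10

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([glcm_features(rng.integers(0, 256, (32, 32)))]))  # expected: [1]
```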
TL;DR: An energy efficient cluster head scheme, for heterogeneous wireless sensor networks, by modifying the threshold value of a node based on which it decides to be a cluster head or not, called TDEEC (Threshold Distributed Energy Efficient Clustering) protocol is proposed.
Abstract: In recent work, many routing protocols have been proposed based on heterogeneity, with main research goals such as energy efficiency, lifetime, node deployment, fault tolerance and latency; in short, high reliability and robustness. In this paper, we propose an energy efficient cluster head selection scheme for heterogeneous wireless sensor networks, obtained by modifying the threshold value based on which a node decides whether or not to become a cluster head, called the TDEEC (Threshold Distributed Energy Efficient Clustering) protocol. Simulation results show that the proposed algorithm performs better compared to others.
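The abstract's idea of modifying the cluster-head threshold by node energy can be sketched as below; the base rotation threshold is the classic one used by distributed clustering protocols, and the residual-to-average-energy scaling stands in for the exact TDEEC expression given in the paper.

```python
# A sketch of an energy-aware cluster-head election threshold: the classic
# rotation threshold scaled by residual/average energy, so higher-energy nodes
# are more likely to become cluster heads. Not the exact TDEEC formula.
def cluster_head_threshold(p, r, residual_energy, average_energy, eligible=True):
    """p: desired cluster-head fraction, r: current round number."""
    if not eligible or average_energy <= 0:
        return 0.0
    base = p / (1 - p * (r % round(1 / p)))           # classic rotation threshold
    return base * (residual_energy / average_energy)  # energy-aware adjustment

# A node at half the average energy gets half the chance of an average node.
print(cluster_head_threshold(p=0.1, r=3, residual_energy=0.25, average_energy=0.5))
```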
TL;DR: The main aim is to study the theory of edge detection for image segmentation using various computing approaches based on different techniques which have yielded good results.
Abstract: Edges are a basic feature of an image. Image edges include rich information that is very significant for obtaining image characteristics for object recognition. Edge detection refers to the process of identifying and locating sharp discontinuities in an image, so it is a vital step in image analysis and the key to solving many complex problems. In this paper, the main aim is to study the theory of edge detection for image segmentation using various computing approaches based on different techniques which have yielded good results.
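One classical gradient-based approach such a survey covers is the Sobel operator; the short SciPy sketch below computes the gradient magnitude of a synthetic image and thresholds it into a binary edge map.

```python
# Sobel edge detection: gradient magnitude followed by a simple global threshold.
import numpy as np
from scipy.ndimage import sobel

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                       # synthetic image: a bright square

gx = sobel(img, axis=1)                       # horizontal gradient
gy = sobel(img, axis=0)                       # vertical gradient
magnitude = np.hypot(gx, gy)
edges = magnitude > 0.5 * magnitude.max()     # binary edge map

print(edges.sum(), "edge pixels detected")
```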
TL;DR: Experimental results illustrate that employing feature subset selection using the proposed wrapper approach enhances classification accuracy.
Abstract: Feature subset selection is of immense importance in the field of data mining. The increased dimensionality of data makes testing and training of general classification methods difficult, while mining on a reduced set of attributes reduces computation time and also helps to make the patterns easier to understand. In this paper a wrapper approach for feature selection is proposed. As part of the feature selection step, we use a wrapper approach with a genetic algorithm as the random search technique for subset generation, wrapped with different classifiers/induction algorithms, namely the decision tree C4.5, Naive Bayes, Bayes networks and radial basis function, as the subset evaluation mechanism on four standard datasets, namely the Pima Indians Diabetes dataset, Breast Cancer, Heart Statlog and Wisconsin Breast Cancer. Further, the relevant attributes identified by the proposed wrapper are validated using classifiers. Experimental results illustrate that employing feature subset selection using the proposed wrapper approach enhances classification accuracy.
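The wrapper principle, scoring candidate feature subsets by cross-validating the wrapped classifier itself, is sketched below; for brevity the genetic algorithm is replaced by a plain random-subset search, a decision tree stands in for the C4.5/Naive Bayes/RBF inducers, and the data is synthetic rather than the four benchmark datasets.

```python
# Wrapper feature selection: score each candidate subset by cross-validating the
# wrapped classifier on just those features, and keep the best-scoring subset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)       # only features 0 and 3 matter

best_score, best_subset = -1.0, None
for _ in range(50):                            # random search over subsets
    subset = np.flatnonzero(rng.random(10) < 0.5)
    if subset.size == 0:
        continue
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, subset], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, subset

print("selected features:", best_subset, "accuracy:", round(best_score, 3))
```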
TL;DR: Various text representation schemes are presented and the different classifiers used to classify text documents into predefined classes are compared; the existing methods are compared and contrasted based on qualitative parameters.
Abstract: Text classification is one of the important research issues in the field of text mining, where documents are classified with supervised knowledge. In the literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents into predefined categories. In this paper, we present various text representation schemes and compare the different classifiers used to classify text documents into predefined classes. The existing methods are compared and contrasted based on qualitative parameters, viz. the criteria used for classification, the algorithms adopted and classification time complexities. General Terms: Pattern Recognition, Text Mining, Algorithms
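As a minimal example of one representation scheme and one classifier of the kind the survey compares, the sketch below pairs TF-IDF vectors with a multinomial Naive Bayes model on a few hypothetical documents.

```python
# TF-IDF representation + multinomial Naive Bayes classifier on toy documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["stock prices rallied on strong earnings",
        "the team won the championship final",
        "markets fell amid inflation fears",
        "the striker scored twice in the match"]
labels = ["finance", "sports", "finance", "sports"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["markets rallied on earnings"]))   # -> ['finance']
```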
TL;DR: Light is thrown on the significance of touchscreen technology, its types, components, the working of different touchscreens, their applications, and a comparative study among the various types of touchscreen technologies.
Abstract: First computers became more visual, then they took a step further to understand vocal commands, and now they have gone a step further and become "TOUCHY", that is, skin to screen. In this paper we throw light on the significance of touchscreen technology, its types, components, the working of different touchscreens, their applications, and a comparative study among the various types of touchscreen technologies. Touchscreen technology is increasingly gaining popularity, as touchscreens can be seen in ATMs, cellphones, information kiosks, etc. A touchscreen-based system allows easy navigation around a GUI-based environment. As the technology advances, people may be able to operate computers without mice and keyboards. The touchscreen is an assistive technology: the interface can be beneficial to those who have difficulty using other input devices such as a mouse or keyboard, and when used in conjunction with software such as on-screen keyboards, or other assistive technology, it can help make computing resources more available to people who have difficulty using computers. Currently, various research efforts aim to develop touchscreen video projectors. The ability to transform any surface into a touchscreen means lower costs, making the technology more cost-effective.
TL;DR: Different routing attacks are described, such as active attacks (flooding, black hole, spoofing, wormhole) and passive attacks (eavesdropping, traffic monitoring, traffic analysis).
Abstract: Security is a major concern for protected communication between mobile nodes in a hostile environment. In hostile environments, adversaries can launch active and passive attacks against interceptable routing information embedded in routing messages and data packets. In this paper, we focus on fundamental security attacks in mobile ad hoc networks. A MANET has no clear line of defense, so it is accessible to both legitimate network users and malicious attackers. In the presence of malicious nodes, one of the main challenges in a MANET is to design a robust security solution that can protect the MANET from various routing attacks. However, many such solutions are not suitable for MANET resource constraints, i.e., limited bandwidth and battery power, because they introduce a heavy traffic load for exchanging and verifying keys. A MANET can operate in isolation or in coordination with a wired infrastructure, often through a gateway node participating in both networks for traffic relay. This flexibility, along with their self-organizing capabilities, is one of MANETs' biggest strengths, as well as one of their biggest security weaknesses. In this paper, different routing attacks are described, such as active attacks (flooding, black hole, spoofing, wormhole) and passive attacks (eavesdropping, traffic monitoring, traffic analysis).
TL;DR: The simulation result demonstrates that H-HEED achieves longer lifetime and more effective data packets in comparison with the HEED protocol.
Abstract: The main requirements of a wireless sensor network are to prolong the network lifetime and to achieve energy efficiency. In this paper, a Heterogeneous Hybrid Energy Efficient Distributed protocol (H-HEED) for wireless sensor networks is proposed to prolong the network lifetime, and the impact of heterogeneity in terms of node energy in wireless sensor networks is examined. Finally, the simulation results demonstrate that H-HEED achieves a longer lifetime and more effective data packets in comparison with the HEED protocol. Keywords: Sensor Network, Network Lifetime, Heterogeneity.
TL;DR: The implementation of ANSI X9.62 ECDSA over elliptic curve P-192 is described, and related security issues are discussed.
Abstract: The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard, and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard, and is under consideration for inclusion in some other ISO standards. Unlike for the ordinary discrete logarithm problem and the integer factorization problem, no sub-exponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength per key bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the implementation of ANSI X9.62 ECDSA over the elliptic curve P-192, and discusses related security issues.
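For readers who want to experiment with ECDSA over P-192 without implementing X9.62 themselves, the sketch below uses the third-party `cryptography` package (assumed installed and built with P-192 support); the paper implements the algorithm directly rather than calling a library.

```python
# Sign and verify a message with ECDSA over NIST P-192 using the `cryptography`
# package (an assumed dependency; not the paper's own implementation).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

private_key = ec.generate_private_key(ec.SECP192R1())     # NIST P-192 curve
message = b"signed with ECDSA over P-192"

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature on failure and returns None on success.
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```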
TL;DR: The concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is presented in order to reduce the transmission and distribution losses.
Abstract: In this paper, we present the concept of transmitting power without using wires, i.e., transmitting power as microwaves from one place to another, in order to reduce transmission and distribution losses. This concept is known as Microwave Power Transmission (MPT). We also discuss the technological developments in Wireless Power Transmission (WPT). The advantages, disadvantages, biological impacts and applications of WPT are also presented.
TL;DR: Performance Comparison of Median and Wiener Filters in Image de-noising for Gaussian noise, Salt & Pepper noise and Speckle noise is dealt with.
Abstract: Image filtering algorithms are applied to images to remove the different types of noise that are either present in the image during capture or injected into the image during transmission. This paper deals with a performance comparison of median and Wiener filters in image de-noising for Gaussian noise, salt & pepper noise and speckle noise.
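A brief sketch of such a comparison is given below: a noisy synthetic image is filtered with SciPy's median and Wiener filters and scored by PSNR; the noise level and window sizes are illustrative assumptions.

```python
# Compare median and Wiener filtering of a noisy synthetic image by PSNR.
import numpy as np
from scipy.signal import medfilt2d, wiener

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(1.0 / mse)           # images are in the [0, 1] range

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 128), (128, 1))
noisy = clean + rng.normal(0, 0.05, clean.shape)

print("median:", round(psnr(clean, medfilt2d(noisy, 3)), 2), "dB")
print("wiener:", round(psnr(clean, wiener(noisy, 3)), 2), "dB")
```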
TL;DR: This paper investigates the improvement of transient stability of a two-area power system, using UPFC (Unified Power Flow Controller) which is an effective FACTS device capable of controlling the active and reactive power flows in a transmission line by controlling appropriately its series and shunt parameters.
Abstract: The development of the modern power system has led to an increasing complexity in the study of power systems, and also presents new challenges to power system stability, in particular to the aspects of transient stability and small-signal stability. Transient stability control plays a significant role in ensuring the stable operation of power systems in the event of large disturbances and faults, and is thus a significant area of research. This paper investigates the improvement of transient stability of a two-area power system using a UPFC (Unified Power Flow Controller), an effective FACTS (Flexible AC Transmission System) device capable of controlling the active and reactive power flows in a transmission line by appropriately controlling its series and shunt parameters. Simulations are carried out in the Matlab/Simulink environment for the two-area power system model with UPFC to analyze the effects of the UPFC on the transient stability performance of the system. The performance of the UPFC is compared with other FACTS devices, namely the Static Synchronous Series Compensator (SSSC), the Thyristor Controlled Series Capacitor (TCSC) and the Static Var Compensator (SVC). The simulation results demonstrate the effectiveness and robustness of the proposed UPFC in improving the transient stability of the system.
TL;DR: An attempt to recognize selected emotion categories from keyboard stroke patterns using various classifiers such as Simple Logistic, SMO, Multilayer Perceptron, Random Tree, J48 and BF Tree, which are part of the WEKA tool, for analysis.
Abstract: In day-to-day life, emotions are becoming an important tool that helps not only in making decisions but also in enhancing learning, creative thinking and effective correspondence in social interaction. Several studies have been conducted on classical human-human interaction and on human-computer interaction, and they concluded that emotions play an important role in intelligent interaction. By embedding emotions in the interaction of a human with a machine, the machine would be in a position to sense the mood of the user and change its interaction accordingly. Hence the system would be friendlier to the user and its responses would be more similar to human behaviour. In general, human beings make use of emotions through speech, facial expressions and gestures for conveying crucial information. This paper presents an attempt to recognize selected emotion categories from keyboard stroke patterns. The emotion categories considered for our analysis are neutral, positive and negative. We have used various classifiers such as Simple Logistic, SMO, Multilayer Perceptron, Random Tree, J48 and BF Tree, which are part of the WEKA tool, to analyse the selected features from the keyboard stroke pattern.
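The abstract does not list the keystroke features used, but features of the kind such studies typically extract, namely dwell time, flight time and typing rate, can be computed from hypothetical (key, press time, release time) events as sketched below; the WEKA classifiers would then be trained on vectors like these.

```python
# Compute simple keystroke-dynamics features from hypothetical key events,
# each given as (key, press_time, release_time) in seconds.
def keystroke_features(events):
    dwell = [rel - press for _, press, rel in events]                    # key held down
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"mean_dwell": mean(dwell),
            "mean_flight": mean(flight),
            "typing_rate": len(events) / (events[-1][2] - events[0][1])}

events = [("h", 0.00, 0.09), ("e", 0.15, 0.22), ("y", 0.30, 0.41)]
print(keystroke_features(events))
```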
TL;DR: A simulation-based comparison and performance analysis of three main protocols, DSR, AODV and DSDV, on different parameters such as PDF, average end-to-end delay, routing overhead and packet loss is presented.
Abstract: Mobile ad hoc networks are collections of wireless nodes that can exchange information dynamically among themselves without a pre-existing fixed infrastructure. Because of their highly dynamic nature, the performance of routing protocols is an important issue. In addition, routing protocols face many challenges, such as limited battery backup, limited processing capability and limited memory resources. Beyond efficient routing, efficient utilization of battery capacity and security are also major concerns for routing protocols. This paper presents a simulation-based comparison and performance analysis on different parameters, namely PDF, average end-to-end delay, routing overhead and packet loss. The study covers three main protocols: DSR and AODV (reactive) and DSDV (proactive).
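Two of the metrics compared above, packet delivery fraction and average end-to-end delay, can be computed from a per-packet trace as in the sketch below; the (sent time, received time or None) record format is an assumption, not the simulator's actual trace format.

```python
# Compute packet delivery fraction (PDF) and average end-to-end delay from an
# assumed per-packet trace of (sent_time, received_time_or_None) records.
def packet_delivery_fraction(packets):
    delivered = [p for p in packets if p[1] is not None]
    return 100.0 * len(delivered) / len(packets)

def average_end_to_end_delay(packets):
    delays = [recv - sent for sent, recv in packets if recv is not None]
    return sum(delays) / len(delays)

trace = [(0.00, 0.12), (0.10, None), (0.20, 0.31), (0.30, 0.55)]   # hypothetical
print("PDF (%):", packet_delivery_fraction(trace))
print("avg e-e delay (s):", round(average_end_to_end_delay(trace), 3))
```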