
Showing papers in "International Journal of Advanced Computer Science and Applications in 2010"


Journal ArticleDOI
TL;DR: The aim is to identify a person in real time, with high efficiency and accuracy, by analysing the random patterns visible within the iris of an eye from some distance, by implementing a modified Canny edge detector algorithm.
Abstract: In a biometric system a person is identified automatically by processing the unique features possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. In iris recognition a person is identified by the iris, which is the part of the eye, using pattern matching or image processing based on concepts of neural networks. The aim is to identify a person in real time, with high efficiency and accuracy, by analysing the random patterns visible within the iris of an eye from some distance, by implementing a modified Canny edge detector algorithm. The major applications of this technology so far have been: substituting for passports (automated international border crossing); aviation security and controlling access to restricted areas at airports; database access and computer login.
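
The modified Canny detector itself is not described in this listing; purely as an illustration of the general pipeline (edge map of the eye image, then fitting the circular iris boundary), here is a short OpenCV sketch. The file name and all parameter values are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Illustrative only: a standard Canny edge map plus a circular Hough transform,
# not the paper's modified Canny detector. "eye.jpg" is a hypothetical input.
eye = cv2.imread("eye.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(eye, (7, 7), 0)          # suppress noise before edge detection

edges = cv2.Canny(blurred, 50, 150)                 # edge map; thresholds are assumptions

# HOUGH_GRADIENT runs its own Canny stage internally (param1 = upper threshold)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
                           param1=150, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(eye, (x, y), r, 255, 1)          # draw candidate pupil/iris boundaries
cv2.imwrite("eye_canny.png", edges)
cv2.imwrite("eye_boundaries.png", eye)
```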

208 citations


Journal ArticleDOI
TL;DR: This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project.
Abstract: With the advancement of technology, things are becoming simpler and easier for us. Automatic systems are being preferred over manual systems. This unit talks about the basic definitions needed to understand the project better and further defines the technical criteria to be implemented as part of this project. Keywords: Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor

106 citations


Journal ArticleDOI
TL;DR: A robust R-peak and QRS detection method using the Wavelet Transform has been developed; it is initial work towards establishing that the ECG signal is a signature, like a fingerprint or retinal signature, for individual identification.
Abstract: In this paper a robust R-peak and QRS detection method using the Wavelet Transform has been developed. The Wavelet Transform provides efficient localization in both time and frequency. The Discrete Wavelet Transform (DWT) has been used to extract relevant information from the ECG signal in order to perform classification. Electrocardiogram (ECG) signal feature parameters are the basis for signal analysis, diagnosis, authentication and identification performance. These parameters can be extracted from the intervals and amplitudes of the signal. The first step in extracting ECG features is the exact detection of the R peak in the QRS complex. The accuracy of the determined temporal locations of the R peak and QRS complex is essential for the performance of other ECG processing stages. Individuals can be identified once an ECG signature is formulated. This is initial work towards establishing that the ECG signal is a signature, like a fingerprint or retinal signature, for individual identification. Analysis is carried out using MATLAB software. The correct detection rate of the peaks is up to 99% based on the MIT-BIH ECG database.
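
The paper's analysis is done in MATLAB; as a rough illustration of the DWT-based idea (isolate the detail scales that carry QRS energy, then threshold and enforce a refractory period), here is a hedged Python sketch using PyWavelets and SciPy. The wavelet, retained scales, threshold and synthetic test signal are assumptions, not the authors' exact algorithm.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs):
    """Rough DWT-based R-peak detector (illustrative, not the paper's exact method)."""
    coeffs = pywt.wavedec(ecg, "db4", level=5)       # [cA5, cD5, cD4, cD3, cD2, cD1]
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1], kept[2] = coeffs[1], coeffs[2]          # cD5 and cD4: roughly 5-22 Hz at fs = 360 Hz
    qrs_band = pywt.waverec(kept, "db4")[: len(ecg)]

    envelope = qrs_band ** 2                         # emphasize QRS energy
    threshold = 0.3 * envelope.max()                 # simple fixed threshold (assumption)
    peaks, _ = find_peaks(envelope, height=threshold, distance=int(0.2 * fs))  # 200 ms refractory
    return peaks

# Crude synthetic stand-in for an ECG; real evaluation would use an MIT-BIH record.
fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63              # narrow spikes mimic QRS complexes
print(detect_r_peaks(ecg, fs)[:5])
```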

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss how to leverage cloud computing to take education to a wider mass of students across the country; they believe cloud computing will surely improve the current system of education and improve quality at an affordable cost.
Abstract: Cloud computing is a new technology that has various advantages and is an adoptable technology in the present scenario. Its main advantage is that it reduces the cost of implementing hardware, software and licensing for all. This is an opportune time to analyze the cloud and its implementation and to use it for the development of quality, low-cost education all over the world. In this paper, we discuss how to leverage cloud computing to take education to a wider mass of students across the country. We believe cloud computing will surely improve the current system of education and improve quality at an affordable cost.

86 citations


Journal ArticleDOI
TL;DR: Implementation of this Pareto optimization methodology for various applications and in a decision support system is discussed.
Abstract: Problems with multiple objectives can be solved by using Pareto optimization techniques in evolutionary multi-objective optimization algorithms. Many applications involve multiple objective functions and the Pareto front may contain a very large number of points. Selecting a solution from such a large set is potentially intractable for a decision maker. Previous approaches to this problem aimed to find a representative subset of the solution set. Clustering techniques can be used to organize and classify the solutions. Implementation of this methodology for various applications and in a decision support system is also discussed.
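
As an illustration of the idea of reducing a large Pareto front to a small representative subset by clustering, here is a minimal sketch using k-means on a hypothetical two-objective front; the front shape, cluster count and the rule of keeping the member nearest each centroid are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical two-objective Pareto front with many non-dominated points
f1 = np.linspace(0.0, 1.0, 500)
front = np.column_stack([f1, 1.0 - np.sqrt(f1)])     # convex trade-off curve

k = 8                                                 # size of the reduced set (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(front)

representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    centre = km.cluster_centers_[c]
    # keep the member closest to the cluster centre, so it is an actual Pareto point
    best = members[np.argmin(np.linalg.norm(front[members] - centre, axis=1))]
    representatives.append(best)

print(front[sorted(representatives)])                 # reduced set shown to the decision maker
```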

59 citations


Journal ArticleDOI
TL;DR: The proposed wireless solution monitors and controls humidity from a remote location, and whenever it crosses the set limit the LPC2148 processor sends an SMS to the concerned plant authority's mobile phone via the GSM network.
Abstract: The paper proposes a wireless solution, based on GSM (Global System for Mobile Communication) networks (1), for the monitoring and control of humidity in industries. This system provides an ideal solution for monitoring critical plant on unmanned sites. The system is wireless (2), therefore more adaptable and cost-effective. Utilizing the HSM-20G humidity sensor, the ARM LPC2148 controller and GSM technology, this system offers a cost-effective solution for a wide range of remote monitoring and control applications. Historical and real-time data can be accessed worldwide using the GSM network. The system can also be configured to transmit data on alarm or at preset intervals to a mobile phone using SMS text messaging. The proposed system monitors and controls the humidity from the remote location, and whenever it crosses the set limit the LPC2148 processor sends an SMS to the concerned plant authority's mobile phone via the GSM network. The concerned authority can control the system through his mobile phone by sending AT commands to the GSM modem and, in turn, to the processor. The system also provides password security against operator misuse/abuse. The system uses GSM technology (3), thus providing ubiquitous access to the system for security and automated monitoring and control of humidity.
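
The alarm path described above (the controller detects a limit violation, then drives the GSM modem with AT commands to deliver an SMS) can be sketched as follows. This is an illustrative host-side Python sketch using pyserial, not the authors' LPC2148 firmware; the serial port, phone number and threshold are placeholders, while the AT commands shown (AT, AT+CMGF=1, AT+CMGS) are the standard GSM text-mode SMS commands.

```python
import time
import serial  # pyserial

# Hypothetical serial link to the GSM modem; the port name, phone number and
# threshold below are placeholders, not values from the paper.
modem = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)
HUMIDITY_LIMIT = 70.0

def at(cmd, wait=1.0):
    """Send one AT command and return the modem's raw reply."""
    modem.write((cmd + "\r").encode())
    time.sleep(wait)
    return modem.read(modem.in_waiting or 1).decode(errors="ignore")

def send_alarm_sms(number, humidity):
    at("AT")            # check the modem responds
    at("AT+CMGF=1")     # switch to SMS text mode
    at('AT+CMGS="{}"'.format(number), wait=0.5)
    body = "ALERT: humidity {:.1f}% exceeded limit {:.1f}%".format(humidity, HUMIDITY_LIMIT)
    modem.write(body.encode() + bytes([26]))   # Ctrl+Z terminates and sends the message
    time.sleep(3)

humidity_reading = 82.5                        # would come from the HSM-20G sensor via the ADC
if humidity_reading > HUMIDITY_LIMIT:
    send_alarm_sms("+10000000000", humidity_reading)
```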

45 citations


Journal ArticleDOI
TL;DR: Receivers in RMTP send their acknowledgments to the DRs periodically, thereby simplifying error recovery, and lost packets are recovered by selective repeat retransmissions, leading to improved throughput at the cost of minimal additional buffering at the receivers.
Abstract: This paper presents the design, implementation, and performance of a reliable multicast transport protocol (RMTP). RMTP is based on a hierarchical structure in which receivers are grouped into local regions or domains, and in each domain there is a special receiver called a designated receiver (DR) which is responsible for sending acknowledgments periodically to the sender, for processing acknowledgments from receivers in its domain, and for retransmitting lost packets to the corresponding receivers. Since lost packets are recovered by local retransmissions as opposed to retransmissions from the original sender, end-to-end latency is significantly reduced, and the overall throughput is improved as well. Also, since only the DRs send their acknowledgments to the sender, instead of all receivers sending their acknowledgments to the sender, a single acknowledgment is generated per local region, and this prevents acknowledgment implosion. Receivers in RMTP send their acknowledgments to the DRs periodically, thereby simplifying error recovery. In addition, lost packets are recovered by selective repeat retransmissions, leading to improved throughput at the cost of minimal additional buffering at the receivers. This paper also describes the implementation of RMTP and its performance on the Internet.

42 citations


Journal ArticleDOI
TL;DR: The role of social network websites in viral marketing and the characteristics of the most influential users in spreading viral content are explored.
Abstract: The Internet and the World Wide Web have become two key components in today's technology-based organizations and businesses. As the Internet becomes more and more popular, it is starting to make a big impact on people's day-to-day lives. As a result of this revolutionary transformation towards modern technology, social networking on the World Wide Web has become an integral part of a large number of people's lives. Social networks are websites which allow users to communicate, share knowledge about similar interests, discuss favorite topics, review and rate products/services, etc. These websites have become a powerful source in shaping public opinion on virtually every aspect of commerce. Marketers are challenged with identifying influential individuals in social networks and connecting with them in ways that encourage the movement of viral marketing content, and there has been little empirical research on the use of these websites to diffuse viral marketing content. In this article, we explore the role of social network websites in viral marketing and the characteristics of the most influential users in spreading viral content. Structural equation modeling is used to examine the patterns of inter-correlations among the constructs and to empirically test the hypotheses.

41 citations


Journal ArticleDOI
TL;DR: An attempt has been made to develop a decision tree classification algorithm for remotely sensed satellite data using the separability matrix of the spectral distributions of probable classes in respective bands using Visual C++ 6.0 language.
Abstract: In this paper an attempt has been made to develop a decision tree classification algorithm for remotely sensed satellite data using the separability matrix of the spectral distributions of probable classes in respective bands. The spectral distance between any two classes is calculated from the difference between the minimum spectral value of a class and the maximum spectral value of its preceding class for a particular band. The decision tree is then constructed by recursively partitioning the spectral distribution in a top-down manner. Using the separability matrix, a threshold and a band are chosen in order to partition the training set in an optimal manner. The classified image is compared with the image classified by the classical Maximum Likelihood Classifier (MLC) method. The overall accuracy was found to be 98% using the decision tree method and 95% using the maximum likelihood method, with kappa values of 97% and 94% respectively. The computational efficiency is measured in terms of a computational complexity measure. The proposed algorithm is coded in Visual C++ 6.0 to develop user-friendly software for decision tree classification that requires a bitmap image of the area of interest as the basic input. For the classification of the image, training sets are chosen for different classes and accordingly a spectral separability matrix is obtained. To evaluate the accuracy of the proposed method, a confusion matrix analysis was employed and the kappa coefficient, along with errors of omission and commission, was determined. Lastly, the classified image is compared with the image classified by the classical MLC method.
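
The paper's implementation is in Visual C++; purely as an illustration of the separability idea (the gap between the minimum of one class and the maximum of the class below it, per band, used to pick the split band and threshold), here is a small Python sketch with hypothetical per-class spectral ranges.

```python
import numpy as np

def separability(stats, band):
    """Gap between consecutive classes in one band: min of a class minus max of the class below it."""
    order = sorted(stats, key=lambda c: stats[c]["min"][band])
    gaps = [(stats[upper]["min"][band] - stats[lower]["max"][band], lower, upper)
            for lower, upper in zip(order, order[1:])]
    return max(gaps)                                  # widest class gap in this band

def best_split(stats, n_bands):
    """Pick the band and threshold with the largest class gap, as read off a separability matrix."""
    candidates = [(separability(stats, b), b) for b in range(n_bands)]
    (gap, lower, upper), band = max(candidates)
    threshold = (stats[lower]["max"][band] + stats[upper]["min"][band]) / 2.0
    return band, threshold, gap

# Hypothetical per-class spectral ranges (min/max digital numbers) in 3 bands
stats = {
    "water":      {"min": np.array([10, 12,  5]), "max": np.array([40, 35, 20])},
    "vegetation": {"min": np.array([45, 60, 25]), "max": np.array([80, 95, 60])},
    "urban":      {"min": np.array([90, 70, 70]), "max": np.array([140, 120, 110])},
}
band, thr, gap = best_split(stats, n_bands=3)
print("split on band", band, "at threshold", thr, "(gap =", gap, ")")
```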

37 citations


Journal ArticleDOI
TL;DR: The similarity measures proposed here are based on five different association measures in information retrieval, namely simple matching, Dice, Jaccard, Overlap and Cosine coefficients, and the performance of these methods is evaluated using Miller and Charles' benchmark dataset.
Abstract: Semantic similarity measures play an important role in the extraction of semantic relations. Semantic similarity measures are widely used in Natural Language Processing (NLP) and Information Retrieval (IR). The work proposed here uses web-based metrics to compute the semantic similarity between words or terms and also compares with the state of the art. For a computer to decide semantic similarity, it should understand the semantics of the words. A computer being a syntactic machine, it cannot understand semantics, so an attempt is always made to represent the semantics as syntax. There are various methods proposed to find the semantic similarity between words. Some of these methods use precompiled databases like WordNet and the Brown Corpus, and some are based on web search engines. The approach presented here is altogether different from these methods. It makes use of snippets returned by Wikipedia or any encyclopedia such as the Encyclopedia Britannica. The snippets are preprocessed for stop-word removal and stemming. For suffix removal, Porter's algorithm is used. Luhn's idea is used for the extraction of significant words from the preprocessed snippets. The similarity measures proposed here are based on five different association measures in information retrieval, namely simple matching, Dice, Jaccard, Overlap and Cosine coefficients. The performance of these methods is evaluated using Miller and Charles' benchmark dataset. It gives a higher correlation value of 0.80 than some of the existing methods.
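
The five association measures named above have standard set-based definitions, shown here on two hypothetical significant-word sets extracted from snippets; the word sets are invented for illustration.

```python
import math

def association_scores(words_a, words_b):
    """The five set-based association measures applied to snippet word sets (illustrative)."""
    a, b = set(words_a), set(words_b)
    inter = len(a & b)                                     # size of the intersection
    return {
        "simple_matching": inter,
        "dice":    2 * inter / (len(a) + len(b)),
        "jaccard": inter / len(a | b),
        "overlap": inter / min(len(a), len(b)),
        "cosine":  inter / math.sqrt(len(a) * len(b)),
    }

# Hypothetical significant-word sets extracted from two preprocessed snippets
snippet_car  = {"vehicle", "engine", "wheel", "road", "transport"}
snippet_auto = {"vehicle", "engine", "motor", "road", "driver", "transport"}
print(association_scores(snippet_car, snippet_auto))
```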

29 citations


Journal ArticleDOI
TL;DR: Time-frequency analysis of the EEG spectrum and wavelet analysis for EEG de-noising are presented, using four different thresholds to remove interference and noise after decomposition of the EEG signals; the comparison concludes that wavelet de-noising with a soft threshold gives the best result.
Abstract: This paper proposes time-frequency analysis of the EEG spectrum and wavelet analysis for EEG de-noising. The basic idea is to use the characteristics of multi-scale, multi-resolution decomposition, applying four different thresholds to remove interference and noise after decomposition of the EEG signals. By analyzing the results and the effects of the four different methods, it is concluded that wavelet de-noising with a soft threshold gives the best result.
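
A minimal sketch of one of the compared configurations, wavelet decomposition followed by soft thresholding of the detail coefficients, is shown below using PyWavelets; the wavelet, decomposition level, universal-threshold rule and synthetic EEG-like signal are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising (one of the configurations compared above)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold (assumption)
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Hypothetical noisy EEG-like trace: slow oscillations plus white noise
t = np.linspace(0, 2, 1024)
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.4 * np.random.randn(t.size)
print("residual error std:", np.std(wavelet_denoise(noisy) - clean))
```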

Journal ArticleDOI
TL;DR: In this paper, normalized least mean square (NLMS) and recursive least squares (RLS) adaptive channel estimators are described for multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems.
Abstract: In this paper, normalized least mean square (NLMS) and recursive least squares (RLS) adaptive channel estimators are described for multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. These channel estimation (CE) methods use adaptive estimators which are able to update the estimator parameters continuously, so that knowledge of the channel and noise statistics is not necessary. The NLMS/RLS CE algorithms require knowledge of the received signal only. Simulation results demonstrate that the RLS CE method performs better than the NLMS CE method for MIMO OFDM systems. In addition, utilizing more antennas at the transmitter and/or receiver provides much higher performance compared with fewer antennas. Furthermore, the RLS CE algorithm provides a faster convergence rate than the NLMS CE method. Therefore, in order to combat stronger channel dynamics, the RLS CE algorithm is the better choice for MIMO OFDM systems.
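
As a rough illustration of the NLMS recursion (the simpler of the two estimators), here is a real-valued single-channel Python sketch; the paper applies such estimators per MIMO-OFDM subchannel, and the channel taps, step size and pilot sequence below are invented for the example.

```python
import numpy as np

def nlms_estimate(x, d, taps=4, mu=0.5, eps=1e-6):
    """NLMS adaptive channel estimator: updates the tap weights sample by sample."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # most recent input samples, newest first
        e = d[n] - w @ u                         # a-priori estimation error
        w = w + (mu / (eps + u @ u)) * e * u     # normalized LMS update
    return w

# Hypothetical real-valued single-channel example; the paper applies such estimators
# per transmit/receive antenna pair of the MIMO-OFDM system.
rng = np.random.default_rng(0)
h_true = np.array([0.8, 0.5, -0.3, 0.1])          # unknown channel impulse response
x = rng.standard_normal(5000)                     # known pilot/training sequence
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(nlms_estimate(x, d), 3))           # should approach h_true
```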

Journal ArticleDOI
TL;DR: The main intention of this paper is to educate students, computer users and novice researchers about spoofing attacks and to give a brief view of the detection and prevention of spoofing attacks.
Abstract: The main intention of writing this paper is to educate students, computer users and novice researchers about spoofing attacks. Spoofing means impersonating another person or computer, usually by providing false information (e-mail name, URL or IP address). Spoofing can take many forms in the computer world, all of which involve some type of false representation of information. There are a variety of methods and types of spoofing. We introduce and explain the following spoofing attacks in this paper: IP, ARP, e-mail, web and DNS spoofing. There are no legal or constructive uses for implementing spoofing of any type. Some of the motives might be sport, theft, vindication or some other malicious goal. The magnitude of these attacks can be very severe and can cost us millions of dollars. This paper describes various spoofing types and gives a brief view of the detection and prevention of spoofing attacks.

Journal ArticleDOI
TL;DR: A novel approach for the detection of emotions using the cascading of Mutation Bacteria Foraging optimization and Adaptive Median Filter in highly corrupted noisy environment and using Neural Networks by which emotions are classified is presented.
Abstract: This paper presents a novel approach for the detection of emotions using the cascading of Mutation Bacteria Foraging Optimization (MBFO) and an Adaptive Median Filter (AMF) in a highly corrupted noisy environment. The approach involves removal of noise from the image by the combination of MBFO and AMF and then detects local, global and statistical features from the image. The Bacterial Foraging Optimization Algorithm (BFOA), as it is now called, is currently gaining popularity in the research community for its effectiveness in solving certain difficult real-world optimization problems. Our results so far show the approach to have a promising success rate. An automatic system for the recognition of facial expressions is based on a representation of the expression learned from a training set of pre-selected meaningful features. However, in reality the noise that may be embedded in an image will affect the performance of face recognition algorithms. As a first step, we investigate emotionally intelligent computers which can perceive human emotions. In this research paper four emotions, namely anger, fear and happiness along with neutral, are tested from a database in a noisy salt-and-pepper environment. A very high recognition rate has been achieved for all emotions, along with neutral, on the training dataset as well as a user-defined dataset. The proposed method uses the cascading of MBFO and AMF for the removal of noise and neural networks by which the emotions are classified.

Journal ArticleDOI
TL;DR: The table-driven protocol STAR and the on-demand routing protocols AODV and DSR, based on IEEE 802.11, are surveyed and a characteristic summary of these routing protocols is presented.
Abstract: A wireless adhoc network is comprised of nodes (which can be static or mobile) with wireless radio interfaces. These nodes are connected among themselves without central infrastructure and are free to move. Communication is a multihop process because of the limited transmission range of the energy-constrained wireless nodes. Thus, in such a multihop network system each node (also known as a router) is independent, self-reliant and capable of routing packets over the dynamic network topology, and therefore routing becomes a very important and basic operation of an adhoc network. Many protocols have been reported in this field but it is difficult to decide which one is best. In this paper the table-driven protocol STAR and the on-demand routing protocols AODV and DSR, based on IEEE 802.11, are surveyed and a characteristic summary of these routing protocols is presented. Their performance is analyzed on throughput, jitter, packet delivery ratio and end-to-end delay metrics by varying the CBR data traffic load, and their performance is also compared using the QualNet 5.0.2 network simulator.

Journal ArticleDOI
TL;DR: This paper proposes a method to prioritize test cases for testing component dependency in a Component Based Software Development (CBSD) environment using a greedy approach, and it is found to be very effective in early fault detection compared to a non-prioritized approach.
Abstract: Software maintenance is an important and costly activity of the software development lifecycle. To ensure proper maintenance the software undergoes regression testing. It is very inefficient to re-execute every test case in regression testing for small changes. Hence test case prioritization is a technique to schedule the test cases in an order that maximizes some objective function. A variety of objective functions are applicable; one such function involves the rate of fault detection, a measure of how quickly faults are detected within the testing process. Early fault detection can provide faster feedback, generating scope for debuggers to carry out their task at an early stage. In this paper we propose a method to prioritize the test cases for testing component dependency in a Component Based Software Development (CBSD) environment using a greedy approach. An Object Interaction Graph (OIG) is generated from the UML sequence diagrams for interdependent components. The OIG is traversed to calculate the total number of inter-component and intra-component object interactions. Depending upon the number of interactions the objective function is calculated and the test cases are ordered accordingly. This technique is applied to components developed in Java for a software system and is found to be very effective in early fault detection compared to a non-prioritized approach.
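
The paper applies the technique to Java components; purely as an illustration of the greedy ordering step (score each test case from its inter- and intra-component interaction counts, then run the highest-scoring tests first), here is a small Python sketch with hypothetical counts and an assumed weighting.

```python
# Illustrative greedy prioritization: score each test case by the inter- and
# intra-component object interactions it covers (counts are hypothetical, as if
# extracted from an Object Interaction Graph built from the sequence diagrams).
test_cases = {
    "TC1": {"inter": 5, "intra": 2},
    "TC2": {"inter": 1, "intra": 7},
    "TC3": {"inter": 4, "intra": 4},
    "TC4": {"inter": 0, "intra": 3},
}

def objective(tc):
    # Assumed weighting: inter-component interactions count double, since cross-component
    # dependencies are where integration faults are most likely to surface.
    return 2 * tc["inter"] + tc["intra"]

prioritized = sorted(test_cases, key=lambda name: objective(test_cases[name]), reverse=True)
print(prioritized)   # ['TC1', 'TC3', 'TC2', 'TC4'] -- run the highest-value tests first
```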

Journal ArticleDOI
TL;DR: A dynamic method is presented for designing finite impulse response filters automatically, rapidly and with low computational complexity using an efficient genetic approach.
Abstract: The main focus of this paper is to describe a dynamic method of designing finite impulse response filters automatically, rapidly and with low computational complexity using an efficient genetic approach. To obtain such efficiency, a specific filter coefficient coding scheme has been studied and implemented. The algorithm generates a population of genomes that represent the filter coefficients, where new genomes are generated by crossover and mutation operations. Our proposed genetic technique is able to give better results compared to other methods.
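
The paper's specific coefficient coding scheme is not reproduced here; as a toy illustration of the overall genetic loop (a population of real-valued coefficient genomes evolved by selection, one-point crossover and mutation against a desired frequency response), here is a short sketch in which all GA settings and the target low-pass response are assumptions.

```python
import numpy as np

# Toy genetic search for the coefficients of a 16-tap low-pass FIR filter.
rng = np.random.default_rng(1)
n_taps, pop_size, generations = 16, 60, 300
w = np.linspace(0, np.pi, 128)                       # frequency grid for the fitness
desired = (w <= 0.4 * np.pi).astype(float)           # ideal low-pass magnitude response

def fitness(coeffs):
    # Magnitude response of the candidate filter; smaller error -> higher fitness
    response = np.abs(np.exp(-1j * np.outer(w, np.arange(n_taps))) @ coeffs)
    return -np.mean((response - desired) ** 2)

pop = rng.uniform(-0.5, 0.5, size=(pop_size, n_taps))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]           # keep the better half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_taps)                             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.02, n_taps) * (rng.random(n_taps) < 0.1)  # sparse mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("approximation error:", -fitness(best))
```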

Journal ArticleDOI
TL;DR: A modification of an ID-based cryptosystem based on the double discrete logarithm problem is proposed; the security against a conspiracy of some entities in the proposed system is considered, and the possibility of establishing a more secure system is shown.
Abstract: In 1984, Shamir [1] introduced the concept of an identity-based cryptosystem. In this system, each user needs to visit a key authentication center (KAC) and identify himself before joining a communication network. Once a user is accepted, the KAC will provide him with a secret key. In this way, if a user wants to communicate with others, he only needs to know the "identity" of his communication partner and the public key of the KAC. There is no public file required in this system. However, Shamir did not succeed in constructing an identity-based cryptosystem, but only an identity-based signature scheme. Meshram and Agrawal [4] have proposed an ID-based cryptosystem based on the double discrete logarithm problem which uses the public key cryptosystem based on the double discrete logarithm problem. In this paper, we propose a modification of an ID-based cryptosystem based on the double discrete logarithm problem, consider the security against a conspiracy of some entities in the proposed system, and show the possibility of establishing a more secure system.

Journal ArticleDOI
TL;DR: The observation that, ignoring error terms, the maximum peak of the approximated histogram of a DCT coefficient matches the quantization step for that coefficient can help in determining compression history, i.e. whether a bitmap was previously compressed and which quantization table was used, which is particularly useful in applications like image authentication, artifact removal, and recompression with less distortion.
Abstract: Most digital image forgery detection techniques require the doubtful image to be uncompressed and in high quality. However, most image acquisition and editing tools use the JPEG standard for image compression. The histogram of Discrete Cosine Transform (DCT) coefficients contains information on the compression parameters for JPEGs and previously compressed bitmaps. In this paper we present a straightforward method to estimate the quantization table from the peaks of the histogram of DCT coefficients. The estimated table is then used with two distortion measures to deem images as untouched or forged. Testing the procedure on a large set of images gave a reasonable average estimation accuracy of 80% that increases up to 88% with increasing quality factors. Forgery detection tests on four different types of tampering resulted in average false negative rates of 7.95% and 4.35% for the two measures respectively. Image acquisition devices (cameras, scanners, medical imaging devices) are configured differently in order to balance compression and quality. As described in (9,10), these differences can be used to identify the source camera model of an image. Moreover, Farid (11) describes JPEG ghosts as an approach to detect parts of an image that were compressed at lower qualities than the rest of the image and uses it to detect composites. In this paper we present a straightforward method for estimating the quantization table of single JPEG compressed images and bitmaps. We verify the observation that, ignoring error terms, the maximum peak of the approximated histogram of a DCT coefficient matches the quantization step for that coefficient. This can help in determining compression history, i.e. whether a bitmap was previously compressed and which quantization table was used, which is particularly useful in applications like image authentication, artifact removal, and recompression with less distortion.
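
A hedged sketch of the core observation follows: for each 8x8 block, take one DCT coefficient, histogram its rounded magnitudes over the image, and read the quantization step off the dominant non-zero peak. The demo synthesizes a "previously compressed" image by quantizing a single coefficient so the recovered step can be checked; it ignores the error terms the paper accounts for and is not the authors' estimation procedure.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q_STEP, U, V = 12, 0, 1          # ground-truth step for coefficient (0, 1), demo only

def blocks(img):
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            yield i, j

def estimate_q_step(pixels, u, v):
    """Peak of the histogram of |DCT coefficient (u, v)| over all 8x8 blocks ~ its quantization step."""
    vals = [dctn(pixels[i:i+8, j:j+8], norm="ortho")[u, v] for i, j in blocks(pixels)]
    vals = np.rint(np.abs(vals)).astype(int)
    hist = np.bincount(vals[vals > 0])               # ignore the dominant zero bin
    return int(np.argmax(hist)) if hist.size else 0

# Build a synthetic "previously compressed" image: quantize one DCT coefficient per block.
rng = np.random.default_rng(0)
img = rng.normal(128, 40, size=(256, 256))
recompressed = np.empty_like(img)
for i, j in blocks(img):
    c = dctn(img[i:i+8, j:j+8], norm="ortho")
    c[U, V] = Q_STEP * np.round(c[U, V] / Q_STEP)    # JPEG-style quantization of (U, V)
    recompressed[i:i+8, j:j+8] = idctn(c, norm="ortho")

print("estimated step:", estimate_q_step(recompressed, U, V))   # expect ~12
```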

Journal ArticleDOI
TL;DR: In this article, the authors present a framework developed by combining several text mining techniques to automate the legal research process, overcoming the difficulties in the existing methods, and identify possible enhancements that could improve the effectiveness of the framework.
Abstract: The term legal research generally refers to the process of identifying and retrieving appropriate information necessary to support legal decision-making from past case records. At present, the process is mostly manual, but some traditional technologies such as keyword searching are commonly used to speed the process up. However, a keyword search is not comprehensive enough to cater to the requirements of legal research, as the search result includes too many false hits in terms of irrelevant case records. Hence the present generic tools cannot be used to automate legal research. This paper presents a framework developed by combining several text mining techniques to automate the process, overcoming the difficulties in the existing methods. Further, the research also identifies possible enhancements that could be made to improve the effectiveness of the framework.

Journal ArticleDOI
TL;DR: A study on associative neural memories in artificial neural networks and how they can be applied in various fields to obtain effective outcomes is presented.
Abstract: Memory plays a major role in artificial neural networks. Without memory, a neural network cannot learn. One of the primary concepts of memory in neural networks is associative neural memories. A survey has been made of associative neural memories such as Simple Associative Memories (SAM), Dynamic Associative Memories (DAM), Bidirectional Associative Memories (BAM), Hopfield memories, Context Sensitive Auto-associative Memories (CSAM) and so on. These memories can be applied in various fields to obtain effective outcomes. We present a study of these associative memories in artificial neural networks.

Journal ArticleDOI
TL;DR: In this paper, a comparative study of the application of Gaussian Mixture Model (GMM) and Radial Basis Function (RBF) in biometric recognition of voice has been carried out and presented.
Abstract: A comparative study of the application of the Gaussian Mixture Model (GMM) and the Radial Basis Function (RBF) network in biometric recognition of voice has been carried out and is presented. The application of machine learning techniques to biometric authentication and recognition problems has gained widespread acceptance. In this research, a GMM model was trained, using the Expectation Maximization (EM) algorithm, on a dataset containing 10 classes of vowels, and the model was used to predict the appropriate classes using a validation dataset. For experimental validity, the model was compared to the performance of two different versions of the RBF model using the same learning and validation datasets. The results showed very close recognition accuracy between the GMM and the standard RBF model, with the GMM performing better than the standard RBF by less than 1%, and both models outperformed similar models reported in the literature. The DTREG version of RBF outperformed the other two models by producing 94.8% recognition accuracy. In terms of recognition time, the standard RBF was found to be the fastest among the three models.
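
A minimal sketch of the class-conditional GMM approach (one mixture per vowel trained with EM, classification by maximum log-likelihood) is shown below using scikit-learn; the synthetic formant-like features and the three demo classes stand in for the paper's 10-vowel dataset and are not its actual data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative class-conditional GMM classifier: one mixture per vowel, trained with EM.
rng = np.random.default_rng(0)
centres = {"a": [730, 1090], "i": [270, 2290], "u": [300, 870]}      # rough F1/F2 means in Hz
train = {v: rng.normal(c, [60, 120], size=(200, 2)) for v, c in centres.items()}

models = {v: GaussianMixture(n_components=2, random_state=0).fit(X)
          for v, X in train.items()}

def classify(sample):
    # Pick the class whose mixture assigns the highest log-likelihood to the sample
    scores = {v: m.score_samples(sample.reshape(1, -1))[0] for v, m in models.items()}
    return max(scores, key=scores.get)

test_sample = rng.normal(centres["i"], [60, 120])
print("predicted vowel:", classify(test_sample))                     # expected: 'i'
```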

Journal ArticleDOI
TL;DR: An automatic modulation-switching method to reconfigure transceivers of a Software Defined Radio (SDR) based wireless communication system is described; it is robust and efficient, with a processing time overhead that still allows the SDR to maintain its real-time operating objectives.
Abstract: This paper describes an automatic modulation-switching method to reconfigure transceivers of a Software Defined Radio (SDR) based wireless communication system. The programmable architecture of software radio promotes a flexible implementation of modulation methods. This flexibility also translates into adaptivity, which is used here to optimize the throughput of a wireless network operating under varying channel conditions. The method is robust and efficient, with a processing time overhead that still allows the SDR to maintain its real-time operating objectives. This technique is studied for digital wireless communication systems. Tests and simulations using an AWGN channel show that the SNR threshold is 5 dB for the case study.

Journal ArticleDOI
TL;DR: This paper is designed to capture the essence of traffic classification methods and consider them in packet-, flow-, and application-based contexts.
Abstract: Traffic classification is a very important mathematical and statistical tool in communications and computer networking, used to find average and statistical information about the traffic passing through a certain pipe or hub. The results achieved from a proper deployment of a traffic analysis method provide valuable insights, including how busy a link is, the average end-to-end delays, and the average packet size. These valuable pieces of information help engineers to design robust networks, avoid possible congestion, and foresee future growth. This paper is designed to capture the essence of traffic classification methods and consider them in packet-, flow-, and application-based contexts.

Journal ArticleDOI
TL;DR: This system would be able to issue and return books via RFID tags and also calculate the corresponding fine associated with the time period of the absence of the book from the library database.
Abstract: Radio frequency identification (RFID) is a term used to describe a system that transfers the identity of an object or person wirelessly, using radio waves. It falls under the category of automatic identification technologies. This paper proposes an RFID-based Library Management System that would allow fast transaction flow and make it easy to handle the issue and return of books from the library without much manual book-keeping. The proposed system is based on RFID readers and passive RFID tags that are able to electronically store information that can be read with the help of the RFID reader. This system would be able to issue and return books via RFID tags and also calculate the corresponding fine associated with the time period of the absence of the book from the library database.

Journal ArticleDOI
TL;DR: The performance of a wireless link is evaluated with turbo coding in the presence of Rayleigh fading with a single transmitting antenna and multiple receiving antennas, and results show a significant improvement in the required signal to noise ratio.
Abstract: The performance of a wireless link is evaluated with turbo coding in the presence of Rayleigh fading with a single transmitting antenna and multiple receiving antennas. A QAM modulator is considered with maximum likelihood decoding. Performance results show that there is a significant improvement in the signal to noise ratio (SNR) required to achieve a given BER. It is found that the system attains a coding gain of 14.5 dB and 13 dB for two receiving antennas and four receiving antennas respectively over the corresponding uncoded system. Further, there is an improvement in SNR of 6.5 dB for four receiving antennas over two receiving antennas for the turbo coded system.

Journal ArticleDOI
TL;DR: This paper proposes an algorithm to enhance the performance of the correlation of two WIDTs in detecting MAC spoofing Denial of Service (DoS) attacks by using the Received Signal Strength Detection Technique and Round Trip Time Detection Technique.
Abstract: Failure to address all IEEE 802.11i Robust Security Network (RSN) vulnerabilities forces many researchers to revisit robust and reliable Wireless Intrusion Detection Techniques (WIDTs). In this paper we propose an algorithm to enhance the performance of the correlation of two WIDTs in detecting MAC spoofing Denial of Service (DoS) attacks. The two techniques are the Received Signal Strength Detection Technique (RSSDT) and the Round Trip Time Detection Technique (RTTDT). Two sets of experiments were done to evaluate the proposed algorithm. The absence of any false negatives and the low number of false positives in all experiments demonstrated the effectiveness of these techniques.

Journal ArticleDOI
TL;DR: A new algorithm is proposed for texture-based segmentation using statistical properties: the probability of each intensity value in the image is calculated directly and a new image is formed by replacing each intensity with its probability.
Abstract: Segmentation is a very basic and important step in computer vision and image processing. For medical images specifically, accuracy is much more important than computational complexity and thus the time required by the process. But as the volume of patient data keeps increasing, it becomes necessary to think about processing time along with accuracy. In this paper, a new algorithm is proposed for texture-based segmentation using statistical properties. For that, the probability of each intensity value in the image is calculated directly and an image is formed by replacing each intensity with its probability. Variance is calculated in three different ways to extract the texture features of the mammographic images. The results of the proposed algorithm are compared with the well-known GLCM and Watershed algorithms. Segmenting mammographic images into homogeneous texture regions representing disparate tissue types is often a useful preprocessing step in the computer-assisted detection of breast cancer. With the increasing size and number of medical images, the use of computers to facilitate their processing and analysis has become necessary. Estimation of the volume of the whole organ, parts of the organ and/or objects within an organ, i.e. tumors, is clinically important in the analysis of medical images. The relative change in size, shape and the spatial relationships between anatomical structures obtained from intensity distributions provides important information in clinical diagnosis for monitoring disease progression. Therefore, radiologists are particularly interested in observing the size, shape and texture of the organs and/or parts of the organs. For this, organ and tissue morphometry is performed in every radiological imaging centre. Texture-based image segmentation has been an area of intense research activity in the past few years and many algorithms have been published as a result of this effort, starting from simple thresholding methods up to the most sophisticated random field type methods. Texture is the repeated occurrence of homogeneous regions in an image. Texture image segmentation identifies image regions that are homogeneous with respect to a selected texture measure. Recent approaches to texture-based segmentation are based on linear transforms and multiresolution feature extraction (1), Markov random field models (2,3), wavelets (4-6) and fractal dimension (7).
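
The two ingredients described above, mapping each pixel to the probability of its intensity and then measuring local variance as a texture feature, can be sketched as follows; the window size, the synthetic two-texture image and the crude mean-threshold split are assumptions for illustration, not the paper's three variance definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def probability_image(img):
    """Replace each pixel's intensity with the probability of that intensity in the image."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    return hist[img]                                   # look-up: intensity -> probability

def local_variance(img, size=9):
    """Local variance in a size x size window, used here as a simple texture measure."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean ** 2

# Hypothetical two-texture image: smooth left half, noisy right half
rng = np.random.default_rng(0)
img = np.full((128, 128), 120, dtype=np.uint8)
img[:, 64:] = rng.integers(60, 200, size=(128, 64))
prob = probability_image(img)
var = local_variance(prob.astype(float))
segmented = var > var.mean()                           # crude two-region split by texture
print("textured pixels detected:", int(segmented.sum()))
```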

Journal ArticleDOI
TL;DR: This project is mainly concerned with how well an OFDM system performs in reducing Inter Symbol Interference when the transmission is made over an Additive White Gaussian Noise (AWGN) channel.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) transmissions are emerging as an important modulation technique because of their capacity to ensure a high level of robustness against interference. This project is mainly concerned with how well an OFDM system performs in reducing Inter Symbol Interference (ISI) when the transmission is made over an Additive White Gaussian Noise (AWGN) channel. Since OFDM is a low symbol rate, long symbol duration modulation scheme, it is sensible to insert a guard interval between the OFDM symbols in order to eliminate the effect of ISI as the Signal to Noise Ratio (SNR) increases.
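
A minimal sketch of the mechanism discussed above, one OFDM symbol with a cyclic-prefix guard interval sent over an AWGN channel, is given below; the subcarrier count, prefix length, QPSK mapping and SNR are illustrative values rather than the project's actual parameters.

```python
import numpy as np

# One OFDM symbol with a cyclic-prefix guard interval over AWGN.
rng = np.random.default_rng(0)
n_sub, cp_len, snr_db = 64, 16, 20

bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)   # QPSK per subcarrier

tx_time = np.fft.ifft(symbols)                        # OFDM modulation via the IFFT
tx = np.concatenate([tx_time[-cp_len:], tx_time])     # prepend the cyclic prefix (guard interval)

noise_std = np.sqrt(np.mean(np.abs(tx) ** 2) / (2 * 10 ** (snr_db / 10)))
rx = tx + noise_std * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

rx_symbols = np.fft.fft(rx[cp_len:])                  # drop the guard interval, demodulate
detected = np.column_stack([rx_symbols.real > 0, rx_symbols.imag > 0]).astype(int)
print("bit errors:", int(np.sum(detected != bits)))
```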

Journal ArticleDOI
TL;DR: An evaluation method to assess the knowledge management results inside the organisation, by connecting the financial impacts with the strategy map is presented and can be of support to the enterprise human resource department.
Abstract: Knowledge development and utilization can be facilitated by human resource practices. At the organizational level, competitive advantage depends upon the firm's utilization of existing knowledge and its ability to generate new knowledge more efficiently. At the individual level, increased delegation of responsibility and freedom of creativity may better allow the discovery and utilization of local and dispersed knowledge in the organization. This paper aims at introducing an innovative organizational model to support enterprises, international companies and governments in developing their human resources, through the virtual human resource, as a tool for knowledge capturing and sharing inside the organization. The VHRD organizational model allows different actors (top management, employees, and external experts) to interact and participate in the learning process, by providing non-threatening self-evaluation and individualized feedback. In this way the model, which is based on possible patterns and rules from existing learning systems, Web 2.0 and a homogeneous set of integrated systems and technologies, can be of support to the enterprise human resource department. In addition, the paper presents an evaluation method to assess knowledge management results inside the organisation, by connecting the financial impacts with the strategy map.