Showing papers in "International Journal of Advanced Research in Computer Science and Electronics Engineering in 2012"


Journal Article
TL;DR: Genetic algorithms are a class of optimization procedures well suited to training and optimizing the weights of Artificial Neural Networks, as this paper demonstrates.
Abstract: Artificial Neural Networks have a number of properties which make them suitable for solving complex pattern classification problems. However, their application to some real-world problems has been hampered by the lack of a training algorithm that finds a nearly globally optimal set of weights in a relatively short time. Back-propagation is one training algorithm for Artificial Neural Networks, but training with back-propagation has two main drawbacks: trapping in local minima and slow convergence. In view of these limitations of back-propagation networks, global search techniques such as Genetic Algorithms have been proposed to overcome these shortcomings. Genetic Algorithms are a class of optimization procedures which are good at exploring a large and complex space in an intelligent way and at finding values close to the global optimum. Hence they are well suited to the problem of training and optimizing the weights of Artificial Neural Networks. This paper shows how Genetic Algorithms can be used to optimize the weights of Artificial Neural Networks.
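To make the weight-evolution idea concrete, below is a minimal Python sketch (not the paper's implementation) of a genetic algorithm evolving the weights of a tiny feed-forward network; the 2-4-1 topology, the XOR task, truncation selection and all GA settings are illustrative assumptions.

```python
# Minimal sketch: evolve the flat weight vector of a 2-4-1 network with a GA.
# Network size, XOR task and GA settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)            # XOR targets

N_HID = 4
N_W = 2 * N_HID + N_HID + N_HID + 1                # weights + biases of a 2-4-1 net

def forward(w, x):
    """Run the 2-4-1 network encoded in the flat vector w."""
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:3 * N_HID + N_HID]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    """Higher is better: negative mean squared error."""
    return -np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(0, 1, (50, N_W))                   # initial population of weight vectors
for gen in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection of the fittest
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, N_W)                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.1, N_W) * (rng.random(N_W) < 0.1)  # sparse Gaussian mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("best MSE:", -fitness(best))
```

Replacing gradient updates with selection, crossover and mutation over flat weight vectors is what allows the search to escape the local minima that can trap back-propagation.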

31 citations


Journal Article
TL;DR: An improved algorithm based on the Discrete Wavelet Transform (DWT) is used to detect cloning (copy-move) forgery in digital images.
Abstract: In an age of digital media, it is no longer true that seeing is believing: digital forgeries can be indistinguishable from authentic photographs. In a copy-move image forgery, a part of an image is copied and then pasted at a different location within the same image. In this paper an improved algorithm based on the Discrete Wavelet Transform (DWT) is used to detect such cloning forgery. In this technique the DWT is applied to the input image to yield a reduced-dimensional representation. The compressed image is then divided into overlapping blocks; these blocks are sorted and duplicated blocks are identified. Because the DWT is used, detection is first carried out on the lowest-level image representation, which increases the accuracy of the copy-move detection process.
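As a rough illustration of the pipeline described in the abstract, the sketch below (my own, assuming PyWavelets is available; the block size and quantisation step are arbitrary) runs detection on the DWT approximation sub-band, sorts overlapping blocks lexicographically and flags near-identical, non-overlapping pairs.

```python
# Sketch of DWT-based copy-move detection: work on the LL sub-band, sort
# overlapping block signatures, flag duplicated blocks. Block size, quantisation
# step and the use of PyWavelets are my assumptions, not the paper's exact code.
import numpy as np
import pywt

def detect_copy_move(gray_image, block=8, quant=8):
    """Return pairs of (row, col) block origins in the LL sub-band that match."""
    ll, _ = pywt.dwt2(gray_image.astype(float), "haar")   # reduced representation
    rows, cols = ll.shape
    features, origins = [], []
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            patch = ll[r:r + block, c:c + block]
            features.append((patch // quant).astype(int).ravel())  # coarse block signature
            origins.append((r, c))
    order = sorted(range(len(features)), key=lambda i: features[i].tolist())
    matches = []
    for a, b in zip(order, order[1:]):                      # neighbours after sorting
        if np.array_equal(features[a], features[b]):
            # ignore trivially overlapping blocks
            if abs(origins[a][0] - origins[b][0]) + abs(origins[a][1] - origins[b][1]) > block:
                matches.append((origins[a], origins[b]))
    return matches
```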

30 citations


Journal Article
TL;DR: A new algorithm for brain MRI segmentation based on the fuzzy c-means algorithm is presented to accurately delineate the cancerous region and to track glioma growth using an advanced diameter technique.
Abstract: Tumor segmentation from MRI data is an important but time-consuming manual task performed by medical experts. Research that addresses diseases of the brain through computer vision is one of the recent challenges in medicine, and engineers and researchers have taken up the challenge of bringing technological innovation to medical imaging. This paper focuses on a new algorithm for brain segmentation of MRI images using the fuzzy c-means algorithm to accurately delineate the cancerous region. In the first step noise filtering is performed, after which the FCM algorithm is applied to segment only the tumor area. In this research multiple brain MRI images can be processed to detect glioma (tumor) growth by an advanced diameter technique.
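For readers unfamiliar with the clustering step, here is a compact sketch of fuzzy c-means applied to pixel intensities; the cluster count, fuzzifier m, the random image stand-in and the "brightest cluster = tumor" rule are illustrative assumptions, not the paper's exact procedure.

```python
# Fuzzy c-means on 1-D pixel intensities; the brightest cluster is taken as the
# candidate tumour region. Cluster count, fuzzifier and tumour rule are assumptions.
import numpy as np

def fcm(intensities, c=3, m=2.0, iters=100, tol=1e-5):
    """Return (cluster centres, membership matrix) for a 1-D intensity array."""
    x = intensities.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]       # fuzzy-weighted means
        d = np.abs(x - centres.T) + 1e-12                     # distances to each centre
        new_u = 1.0 / (d ** (2 / (m - 1)))                    # standard FCM membership update
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centres.ravel(), u

# usage: segment a (filtered) slice and keep pixels assigned to the brightest centre
img = np.random.randint(0, 256, (64, 64))                     # stand-in for an MRI slice
centres, u = fcm(img.ravel())
labels = u.argmax(axis=1).reshape(img.shape)
tumour_mask = labels == centres.argmax()
```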

23 citations


Journal Article
TL;DR: This paper discusses how Twitter data is used as a corpus for analysis by the application of sentiment analysis and a study of different algorithms and methods that help to track influence and impact of a particular user/brand active on the social network.
Abstract: An overwhelming number of consumers are active on social media platforms. Within these platforms consumers share their true feelings about a particular brand or product, its features, its customer service and how it stands up to the competition. With the booming of microblogs on the Web, people have begun to express their opinions on a wide variety of topics on Twitter and similar services. In a world where information can bias public opinion, it is essential to analyse the propagation and influence of information in large-scale networks. Recent research studying social media data to rank users by topical relevance has largely focused on the "retweet", "following" and "mention" relations. We also perform linguistic analysis of the collected corpus and explain discovered phenomena. Using the corpus, we build a sentiment classifier that is able to determine positive, negative and neutral sentiment for a document. This paper discusses how Twitter data is used as a corpus for sentiment analysis and studies different algorithms and methods that help to track the influence and impact of a particular user or brand active on the social network.

23 citations


Journal Article
TL;DR: The main aim is to study an edge detection method for dental X-ray image segmentation based on a genetic algorithm approach; edge detection is usually applied in the initial stages of computer vision applications.
Abstract: A Genetic Algorithm is an optimization solver that draws an analogy to Darwinian evolution by combining mutation, crossover and selection steps. One of the biggest advantages of the Genetic Algorithm is its ability to find a global optimum. The X-ray data set, which consists of an image and its expected edge features, is used for training by the GA. Image edge detection refers to the extraction of the edges in a digital image. An edge is a boundary between an object and its background. Edge detection is the most common approach to detecting discontinuities in an image: it is a process that identifies points where discontinuities or sharp changes in intensity occur. This process is crucial to understanding the content of an image, has applications in image analysis and machine vision, and is usually applied in the initial stages of computer vision applications. In this paper, the main aim is to study an edge detection method for dental X-ray image segmentation based on a genetic algorithm approach.

14 citations


Journal Article
TL;DR: This paper compares the three types of mobile ad hoc network routing protocols, i.e. proactive, reactive and hybrid, with respect to QoS metrics, and evaluates two routing protocols (DSDV and AODV) through QoS parameter analysis.
Abstract: A MANET is a self-organized and self-configurable network in which the mobile nodes move arbitrarily. The mobile nodes can receive and forward packets as routers. Routing is a critical issue in MANETs and hence the focus of this paper, along with the performance analysis of routing protocols. We compare three types of routing protocols, i.e. proactive, reactive and hybrid. The MANET routing protocols are explained in depth together with QoS metrics, and their performance is analyzed using these QoS metrics with the aim of improving QoS in MANET routing. A comparative analysis of these protocols is carried out and conclusions are presented for mobile ad hoc network routing protocols. We compare two routing protocols (DSDV and AODV) for QoS parameter analysis using packet delivery fraction (PDF), average end-to-end delay of data packets and normalized routing load as parameters, and show the simulation results using the network simulator NS-2.
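The three QoS parameters named above have simple definitions; the sketch below computes them from hypothetical simulation counters (the variable names are illustrative and do not correspond to the NS-2 trace format).

```python
# Hedged sketch of the three QoS metrics used in the comparison, computed from
# hypothetical counters rather than a real NS-2 trace.
def qos_metrics(sent, received, delays, routing_packets):
    """sent/received: data packet counts; delays: per-packet end-to-end delays (s);
    routing_packets: control packets transmitted."""
    pdf = received / sent                      # packet delivery fraction
    avg_delay = sum(delays) / len(delays)      # average end-to-end delay
    nrl = routing_packets / received           # normalized routing load
    return pdf, avg_delay, nrl

print(qos_metrics(sent=1000, received=950, delays=[0.02] * 950, routing_packets=400))
```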

13 citations


Journal Article
TL;DR: An adaptive thresholding method for removing additive white Gaussian noise from digital images is introduced; it provides improved denoising performance and recovers the shape of edges and important detail components.
Abstract: In this paper an adaptive thresholding method for removing additive white Gaussian noise from digital images is introduced. Some denoising algorithms perform thresholding of the wavelet coefficients that have been affected by additive white Gaussian noise, retaining only large coefficients and setting the rest to zero. However, their performance is not sufficiently effective because they are not spatially adaptive. Curvelets, by contrast, are a non-adaptive technique for multi-scale object representation. The curvelet transform employed in the proposed scheme provides a sparser decomposition than wavelet transform methods, which, being non-geometrical, lack sparsity and fail to show an optimal rate of convergence. The proposed algorithm succeeds in providing improved denoising performance and recovers the shape of edges and important detail components. Simulation results show that the proposed method obtains a better image estimate than wavelet-based restoration methods.
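The curvelet transform itself is not reproduced here; as a point of reference, the following is a minimal sketch of the wavelet-thresholding baseline the abstract compares against, assuming PyWavelets, a db4 wavelet, three decomposition levels and the universal threshold.

```python
# Baseline wavelet soft-thresholding denoiser (the comparison method, not the
# proposed curvelet scheme). Wavelet family, level and threshold rule are assumptions.
import numpy as np
import pywt

def wavelet_denoise(noisy, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate from the HH band
    thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```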

12 citations


Journal Article
TL;DR: This paper has analyzed the performance of different VoIP codecs over the best effort service flow for WiMAX network using network simulator 2 (NS2) and results are presented in graphical form.
Abstract: Worldwide Interoperability for Microwave Access (WiMAX) is a wireless broadband technology which supports point-to-multipoint (PMP) broadband wireless access. It is a fixed and mobile wireless access technology based on the IEEE 802.16 standards. Voice over Internet Protocol (VoIP), also called Internet Protocol (IP) telephony, Internet telephony or digital phone, utilizes the IP network (Internet or intranets) for telephone conversations. In this paper we analyze the performance of different VoIP codecs over the best-effort service flow of a WiMAX network. The parameters considered for the evaluation of the network are throughput, average delay and jitter. The simulation is done using network simulator 2 (NS2) by varying the number of nodes, and results are presented in graphical form.

12 citations


Journal Article
TL;DR: This work proposes a unique method by combining two of the most widely used privacy preservation techniques: K-anonymity and l-diversity, and presents a new notion of privacy called “closeness”.
Abstract: Data sensitivity refers to public survey and census data that may increase the exposure of private information about particular individuals. To maintain privacy, similarity among data items is increased and redundancy is introduced in such a way that information about individual users cannot be disclosed; this technique also requires that the actual information in the data does not change. In this work we propose a unique method by combining two of the most widely used privacy preservation techniques: k-anonymity and l-diversity. The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. l-diversity requires that each equivalence class has at least l well-represented values (defined in Section 2) for each sensitive attribute. In this article, we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called "closeness". We first present the base model, t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). Based on an entropy-based closeness and distance measure between classes of data, we propose a comprehensive technique to change the dataset so as to preserve privacy while keeping the original meaning intact.
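To illustrate the t-closeness requirement stated above, here is a small sketch that checks one equivalence class against the whole table; for brevity it uses total variation distance rather than the entropy-based or EMD measure the paper develops, and the example values are invented.

```python
# t-closeness check: the sensitive-value distribution inside an equivalence class
# must stay within distance t of the distribution over the whole table.
# Distance here is total variation, a simplification of the paper's measure.
from collections import Counter

def distribution(values):
    counts = Counter(values)
    total = len(values)
    return {v: c / total for v, c in counts.items()}

def satisfies_t_closeness(table, eq_class, t):
    """table, eq_class: lists of sensitive values; True if the class is t-close."""
    p = distribution(table)
    q = distribution(eq_class)
    dist = 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in set(p) | set(q))
    return dist <= t

# usage with toy sensitive values
table = ["flu"] * 6 + ["cancer"] * 2 + ["cold"] * 2
print(satisfies_t_closeness(table, ["flu", "flu", "cancer"], t=0.2))
```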

12 citations


Journal Article
TL;DR: This hybrid model proposes a novel technique that combines several compression techniques; because the final stage is lossless, the PSNR and MSE are better than those of older algorithms, and DWT and DCT provide a good level of compression.
Abstract: In this hybrid model we propose a novel technique that is a combination of several compression techniques. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages. JPEG and JPEG 2000 are two important techniques used for image compression. First we apply DWT and DCT to the original image, which are the lossy stages, and finally we apply the Huffman coding technique, which is lossless. Because the last stage is lossless, the PSNR and MSE are better than those of the older algorithms, and due to DWT and DCT we obtain a good level of compression. Hence the overall result of the hybrid compression technique is good.
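The quality claims rest on MSE and PSNR between the original and the decompressed image; the following is a minimal sketch of those two measures for 8-bit images (the hybrid DWT/DCT/Huffman pipeline itself is not reproduced).

```python
# MSE and PSNR between an original and a reconstructed 8-bit image.
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    original = original.astype(float)
    reconstructed = reconstructed.astype(float)
    mse = np.mean((original - reconstructed) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return mse, psnr
```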

10 citations


Journal Article
TL;DR: The main aim of this paper is to reduce power dissipation and area by reducing the number of transistors; using the general logic of a PMOS transistor, a two-transistor XOR gate can be implemented.
Abstract: In the modern era, the number of transistors in a circuit is being reduced, and ultra-low-power design has emerged as an active research topic due to its various applications. A full adder is one of the essential components in digital circuit design, and many improvements have been made to reduce the architecture of a full adder. The main aim of this paper is to reduce power dissipation and area by reducing the number of transistors. Using the general logic of a PMOS transistor, a two-transistor XOR gate can be implemented. This paper proposes a novel design of a 2T XOR gate. The design has been compared with earlier proposed 3T, 4T and 6T XOR gates, and a significant improvement in silicon area and power-delay product has been obtained. An 8T full adder has been designed using the proposed 2T XOR gate and its performance has been evaluated. The design is simulated in the Mentor Graphics tool.

Journal Article
TL;DR: This paper presents the design and implementation of an on-chip router architecture that provides a routing function at each input port and distributed arbiters, giving a high level of parallelism.
Abstract: Technology scaling continuously increases the number of components and the complexity of System-on-Chip designs [1]. For effective global on-chip communication, on-chip routers provide essential routing functionality with low complexity and relatively high performance [1]. Low latency and high speed are achieved by providing a routing function at each input port and distributed arbiters, which give a high level of parallelism [4]. This paper presents the design and implementation of an on-chip router architecture.

Journal Article
TL;DR: This paper introduces an alternative way to implement CRC hardware on FPGA to speed up the CRC calculation while maintaining a very low area; the design is a suitable candidate for many communication protocols such as 100 Gbps Ethernet.
Abstract: This paper introduces an alternative way to implement CRC hardware on FPGA to speed up the CRC calculation while maintaining a very low area. Traditional implementations with high data throughput have a very large area. In our design we used the CRC Reduced Table Lookup Algorithm (RTLA) to achieve very low area, while using a pipelined architecture to obtain high data throughput. In our implementation we have reached a data throughput of more than 100 Gbps when the data input width is 200 bits or more, with a relatively fixed maximum frequency, which makes doubling the data width approximately double the data throughput. The proposed design is a suitable candidate for many communication protocols such as 100 Gbps Ethernet.
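The RTLA hardware itself is not shown, but the table-lookup principle it builds on can be illustrated in software: the sketch below is a standard per-byte, table-driven CRC-32 in Python, included only to show how one table lookup replaces eight bit-serial steps.

```python
# Standard table-driven CRC-32 (reflected polynomial), one table lookup per byte.
# This illustrates the lookup principle only, not the reduced-table FPGA design.
POLY = 0xEDB88320                      # reflected CRC-32 polynomial

TABLE = []
for byte in range(256):
    crc = byte
    for _ in range(8):
        crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
    TABLE.append(crc)

def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ b) & 0xFF]   # one table lookup per input byte
    return crc ^ 0xFFFFFFFF

assert crc32(b"123456789") == 0xCBF43926             # standard CRC-32 check value
```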

Journal Article
TL;DR: This master thesis optimizes/maximizes De Jong's function 1 with a GA using different selection schemes (roulette wheel, random selection, best-fit/elitist rank selection, tournament selection); the fitness function is drawn from the literature of benchmark functions commonly used to test optimization procedures for multidimensional, continuous optimization tasks.
Abstract: A genetic algorithm is a search algorithm based on the mechanics of natural selection and natural genetics. The purpose of this master thesis is to optimize/maximize De Jong's function 1 with a GA using different selection schemes (roulette wheel, random selection, best-fit/elitist rank selection, tournament selection). For our problem the chosen fitness function comes from the literature of benchmark functions commonly used to test optimization procedures for multidimensional, continuous optimization tasks. The terminating criterion is the number of iterations for which the algorithm runs.
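As a concrete illustration of one of the listed selection schemes, the sketch below maximizes De Jong's function 1, f(x) = sum of x_i^2 on [-5.12, 5.12], with roulette-wheel (fitness-proportional) selection; the population size, real-valued encoding, crossover and mutation settings are my assumptions.

```python
# Roulette-wheel GA maximizing De Jong's function 1 (sphere) on [-5.12, 5.12].
# Population size, encoding and operators are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, GENS = 3, 40, 200
LOW, HIGH = -5.12, 5.12

def f(x):                                              # De Jong's function 1
    return np.sum(x ** 2)

pop = rng.uniform(LOW, HIGH, (POP, DIM))
for _ in range(GENS):                                  # terminating criterion: iteration count
    fit = np.array([f(ind) for ind in pop])            # maximised, as in the abstract
    probs = fit / fit.sum()                            # roulette-wheel probabilities
    parents = pop[rng.choice(POP, size=POP, p=probs)]  # fitness-proportional selection
    cut = rng.integers(1, DIM, size=POP)               # one-point crossover with
    mates = parents[rng.permutation(POP)]              # a shuffled set of mates
    children = np.where(np.arange(DIM) < cut[:, None], parents, mates)
    children += rng.normal(0, 0.1, children.shape)     # Gaussian mutation
    pop = np.clip(children, LOW, HIGH)

best = max(pop, key=f)
print("best fitness:", f(best), "at", best)
```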

Journal Article
TL;DR: The simulated results show that PSO with SPV rule proves to be a better algorithm when applied to resource allocation and disk scheduling in grid computing.
Abstract: Grid computing can be defined as applying the resources of many computers in a network to a problem which requires a great number of computer processing cycles or access to large amounts of data. However, in the field of grid computing the scheduling of tasks is a big challenge. The task scheduling problem is the problem of assigning the tasks in the system in a manner that will optimize the overall performance of the application while assuring the correctness of the result. Each day new algorithms are proposed for assigning tasks to the resources, which is a boon for grid computing. In this paper we use the technique of Particle Swarm Optimization (PSO) with the SPV (shortest position value) rule to solve the task scheduling problem in grid computing. The aim of using this technique is to use the given resources optimally and to assign tasks to the resources efficiently. The simulated results show that PSO with the SPV rule proves to be a better algorithm when applied to resource allocation and disk scheduling in grid computing.
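The SPV rule maps a particle's continuous position to a discrete schedule by sorting; the sketch below shows that decoding step and a simple makespan evaluation, with hypothetical task lengths, resource speeds and a round-robin mapping; the full PSO velocity/position update is omitted for brevity.

```python
# SPV decoding for PSO-based grid scheduling: a continuous particle position is
# turned into a task permutation by sorting. Task lengths, resource speeds and
# the round-robin assignment are assumptions; the PSO update itself is omitted.
import numpy as np

task_length = np.array([4, 7, 2, 9, 5, 3])        # hypothetical task sizes
resource_speed = np.array([1.0, 2.0])              # hypothetical resource speeds

def spv_decode(position):
    """Return the task order given by ascending position values."""
    return np.argsort(position)

def makespan(position):
    """Assign the SPV-ordered tasks to resources round-robin; return the makespan."""
    order = spv_decode(position)
    finish = np.zeros(len(resource_speed))
    for i, task in enumerate(order):
        r = i % len(resource_speed)
        finish[r] += task_length[task] / resource_speed[r]
    return finish.max()

# one particle: a random continuous position over the six tasks
particle = np.random.default_rng(2).uniform(-1, 1, len(task_length))
print(spv_decode(particle), makespan(particle))
```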

Journal Article
TL;DR: This paper addresses the major issues associated with the conventional partitional clustering algorithms, namely difficulty in determining the cluster centers and handling noise or outlier points.
Abstract: Data clustering acts as an intelligent tool, a method that allows the user to handle large volumes of data effectively. The basic function of clustering is to transform data of any origin into a more compact form that represents the original data accurately. Clustering algorithms are used to analyze these large collections of data by subdividing them into groups of similar data. Fuzzy clustering extends the crisp clustering technique in such a way that, instead of an object belonging to just one cluster at a time, the object belongs to one or more clusters at the same time, with appropriate membership values assigned to the object in each cluster. This paper addresses the major issues associated with conventional partitional clustering algorithms, namely the difficulty in determining the cluster centers and handling noise or outlier points. The integration of fuzzy logic in data mining enables these traditional methods to handle natural data, which are often vague. The study provides an analysis of two fuzzy clustering algorithms, videlicet fuzzy c-means and the adaptive fuzzy clustering algorithm, and their illustration in different fields.

Journal Article
TL;DR: Three methods to combat carrier frequency offset are compared: the time-domain CP-based method and the frequency-domain Moose and Classen methods; the improved performance of the presented scheme is confirmed through extensive MATLAB simulation results.
Abstract: The demand for high-speed mobile wireless communications is growing rapidly. Orthogonal Frequency Division Multiplexing (OFDM) has become a key element for achieving the high data capacity and spectral efficiency required by wireless communication systems because of its multicarrier modulation technique. Its main drawback, however, is the carrier frequency offset (CFO) produced by the receiver local oscillator or by Doppler shift. This frequency offset breaks the orthogonality among the subcarriers and hence causes intercarrier interference (ICI) in the OFDM symbol, which greatly degrades overall system performance. In this paper we study the effects of CFO on the signal-to-noise ratio (SNR) of an OFDM system and also estimate the amount of carrier frequency offset. We compare three methods to combat carrier frequency offset: the time-domain CP-based method and the frequency-domain Moose and Classen methods. The improved performance of the presented scheme is confirmed through extensive MATLAB simulation results.
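A small numeric sketch of the time-domain, CP-based estimator mentioned above: the cyclic prefix is correlated with the symbol tail it copies, and the phase of that correlation gives the fractional CFO. The FFT size, CP length and noiseless channel are illustrative assumptions.

```python
# CP-based fractional CFO estimation for one OFDM symbol (noiseless, for illustration).
import numpy as np

N, CP = 64, 16
eps_true = 0.12                                        # CFO as a fraction of subcarrier spacing
rng = np.random.default_rng(3)

data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK subcarriers
symbol = np.fft.ifft(data)
tx = np.concatenate([symbol[-CP:], symbol])            # prepend cyclic prefix
n = np.arange(len(tx))
rx = tx * np.exp(2j * np.pi * eps_true * n / N)        # apply carrier frequency offset

corr = np.sum(np.conj(rx[:CP]) * rx[N:N + CP])         # CP correlated with samples N later
eps_hat = np.angle(corr) / (2 * np.pi)                 # phase reveals the fractional offset
print("true", eps_true, "estimated", round(eps_hat, 4))
```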

Journal Article
TL;DR: This paper presents a meta-modelling architecture that automates the very labor-intensive and therefore time-heavy and expensive process of partitioning data over a scalable network of nodes.
Abstract: Cloud computing creates a virtual paradigm for sharing data and computations over a scalable network of nodes [1] [2].

Journal Article
TL;DR: This paper compares the performance of conventional Array multiplier and Array multiplier using compressor techniques with the help of Cadence Tool.
Abstract: Multiplication is a fundamental building block in all DSP tasks. Due to the large latency inherent in multiplication, methods have been devised to minimize the delay. Two methods are common in current implementations: regular arrays and Wallace trees. For higher-order multiplications, a huge number of adders is needed to perform the partial product addition. The number of adders can be reduced by introducing a special kind of adder capable of adding five, six or seven bits per decade; these adders are called compressors. Compressors make multipliers faster than the conventional design. In this paper we compare the performance of a conventional array multiplier and an array multiplier using compressor techniques with the help of the Cadence tool.

Journal Article
TL;DR: Non-linear effects, namely self-phase modulation and cross-phase modulation, in an optical fiber system are analyzed, and it is seen which type is better for long transmission in single-mode optical fiber.
Abstract: Optical fiber communication is widely used due to its better bit rate and bandwidth and its high carrier frequency with low power consumption. In this paper we analyze the non-linear effects of self-phase modulation (SPM) and cross-phase modulation (CPM) in an optical fiber system and discuss how these cause dispersion of the input signal. These effects are simulated using the OPTISYSTEM tool at a bit rate of 10 Gbps and analyzed by the eye pattern method with respect to bit error rate and Q factor. The simulation results for self-phase modulation and cross-phase modulation obtained in the OPTISYSTEM tool, made by OPTIWAVE INC, are compared with each other. The formula for bit error rate (BER) is implemented in MATLAB, its value is obtained by taking the value of the Q factor from the design implemented in OPTISYSTEM, and the variation of BER is studied for both types of non-linear effects to see which type of modulation is better for long transmission in single-mode optical fiber.

Journal Article
TL;DR: The rise of the Text Mining Technique as an emerging part of Data Mining and Data Warehouse methodologies is introduced, with a view to improving its role, performance and productivity and its use in different research areas.
Abstract: Text Data Mining, or Knowledge Discovery in Text (KDT), refers generally to the process of extracting interesting and non-trivial information and knowledge from unstructured text. Text mining is a variation on a field called data mining, which tries to find interesting patterns in large databases; text mining is also known as Intelligent Text Analysis (ITA). Text mining is a young interdisciplinary field which draws on information retrieval, data mining, machine learning, statistics and computational linguistics. The Text Mining Technique (TMT) is the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. Text mining, sometimes referred to as text data mining and roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends by means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. In this paper, we introduce the rise of the Text Mining Technique as an emerging part of Data Mining and Data Warehouse methodologies, with a view to improving its role, performance and productivity and its use in different research areas.

Journal Article
TL;DR: SafeQ uses a novel technique to encode both data and queries such that a storage node can correctly process encoded queries over encoded data without knowing their values, to preserve privacy.
Abstract: In many wireless sensor network applications, the data collection sink (base station) needs to find aggregated statistics of the network. Readings from sensor nodes are aggregated at intermediate nodes to reduce the communication cost. However, previous optimally secure in-network aggregation protocols against multiple corrupted nodes require two round-trip communications between each node and the base station. The architecture of two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, has been widely adopted because of the benefits of power and storage saving for sensors as well as the efficiency of query processing. To preserve privacy, SafeQ uses a novel technique to encode both data and queries such that a storage node can correctly process encoded queries over encoded data without knowing their values. SafeQ also allows a sink to detect compromised storage nodes when they misbehave. Our protocol achieves one round-trip communication to satisfy optimal security without the result-checking phase, by conducting aggregation along with verification.

Journal Article
TL;DR: Flood is a general or temporary condition of partial or complete inundation of normally dry land areas from overflow of inland or tidal waters or from the unusual and rapid accumulation or runoff of surface waters from any source as discussed by the authors.
Abstract: Decision makers worldwide face a difficult challenge in developing an effective response to the threat of water-induced disasters. After prayers to the rain gods, answered in excess in parts of our country, the focus has now shifted to floods. Many states in our country are flood prone, due to heavy rain or otherwise. Floods cause loss of human life and widespread damage to property. Unimaginable damage to agriculture takes place, affecting the State's planning and upsetting its financial budgeting, thereby slowing down the whole economy of the country. The term "flood" denotes a general or temporary condition of partial or complete inundation of normally dry land areas from the overflow of inland or tidal waters or from the unusual and rapid accumulation or runoff of surface waters from any source. A heavy downpour of rain brings down more water than can be disposed of by the combined natural and man-made systems, causing flooding. Rivers overflow and embankments may be breached. Generally the rains following storms and hurricanes are heavy and bring unmanageable amounts of water, causing flash floods. The frequency or probability of a flood is usually described by assigning a recurrence interval to the flood at each gaging station. This is accomplished by statistically evaluating long-term annual peak streamflows at a station.

Journal Article
TL;DR: Network-on-Chip (NoC) is a general purpose on-chip communication concept that offers high throughput, which is the basic requirement to deal with complexity of modern systems.
Abstract: Network-on-Chip (NoC) is a general-purpose on-chip communication concept that offers high throughput, which is the basic requirement for dealing with the complexity of modern systems. An arbiter is used in a NoC router when a number of input ports request the same output port. In this paper we design a matrix arbiter for a NoC architecture. When all input ports request the same output port, the matrix arbiter first forms a 5*5 matrix, then assigns priority to all input requests and generates the grant signals. We analyze the area and power of the design.

Journal Article
TL;DR: In this article, the authors tried to make a systematic study on the issues of ICT use in education and also to find three significant reasons namely the social, economic and environmental demands of green ICT.
Abstract: The youth, the future of the nation, are in colleges now, and according to experts India is becoming the future IT hub of the world. This study covers the sustainability of ICT, both in preserving the energy this technology consumes and in protecting mother Earth from hazardous carbon emissions, a major cause of global warming. The paper outlines the policies of the Indian government towards green ICT. The study identifies the need for eco-sustainable, or green, ICT implementation at professional education institutes and also identifies the green parameters for information and communication technology. The objective of the study was simply to raise awareness of the need for Green ICT implementation at professional education institutes. In the present work, the authors tried to make a systematic study of the issues of ICT use in education and also to identify three significant reasons, namely the social, economic and environmental demands of green ICT.

Journal Article
TL;DR: A simple fault detection technique is presented that detects multiple faults using a single sensor, and an optimal neural network is designed for optimal performance of the engine on the basis of classification accuracy.
Abstract: Car technology is advancing at an amazing speed, so it is no surprise that more than a hundred car models appear each year with newer technology and innovations. The new technologies are necessary to meet increased transport demands in the future and to satisfy the need for safer, faster and more sustainable mobility of persons and goods. But day by day the maintenance of vehicles becomes more difficult because of the scarcity of skilled mechanics all over the world [1, 2]. An automobile engine is a complex system and its problems can sometimes be tricky to diagnose. To diagnose a problem correctly, a lot of knowledge and experience is required. Engine problems are caused primarily by improper maintenance, by fatigue from normal wear and tear, and by worn-out or clogged car parts. Worn-out parts may cause overheating of the engine, engine surging and other problems. When a problem arises and is not properly diagnosed and repaired in time, it may create other severe problems and ultimately the engine may come to a halt [3, 4]. In view of this, it is very necessary to diagnose faults at an initial stage, and for that an automatic fault detection system is needed. Many researchers have suggested fault detection techniques that use a separate sensor for each separate fault, but that makes the system very complex; at the same time, maintenance of the sensors becomes an additional job beyond the maintenance of the vehicle. Therefore, a simple fault detection technique is proposed to detect multiple faults using a single sensor [5, 6]. A microphone is used as the sensor to collect the dynamic information of an automobile engine in normal and faulty conditions. The features are extracted using MATLAB software and then a detailed analysis is carried out using Artificial Neural Networks (ANN). A comparison of all types of ANN has been done on the basis of average classification accuracy. Finally, an optimal neural network has been designed for optimal performance of the engine on the basis of classification accuracy.

Journal Article
TL;DR: In this paper all known IDPS methods are reviewed; the main features of agents are intelligence and mobility, which is the core motivation for designing a cost-effective Agent-based Intrusion Prevention System (AIPS).
Abstract: The Internet makes life easier, provides a good platform for doing business, increases employment and much more; the list is endless. But everything has two sides, and the dark side of the Internet is its openness. Data are the most precious things in the computer world, and any valuable thing is a viable target for thieves (hackers, crackers or intruders). Security threats are always present in such an open (Internet) environment. Intruders constantly search for vulnerabilities [16] or flaws in a target system and attack using different techniques. The main features of agents are intelligence and mobility, which is the core motivation for us to design a cost-effective Agent-based Intrusion Prevention System (AIPS). In this paper we review all known IDPS methods.

Journal Article
TL;DR: A user-friendly automated traffic control system is suggested which automatically detects a vehicle using the RFID active tag attached to it as soon as the vehicle passes a reader; this identification of each vehicle reduces traffic malfunction and also reduces security problems.
Abstract: This paper is based on a traffic control system using RFID. In metropolitan cities like Mumbai and Kolkata we have severe malfunction of traffic control and various security problems. Firstly, the number of vehicles on the road in such cities leads to mismanagement; secondly, breaking of traffic rules is quite common in such cities; thirdly, nowadays there are severe security problems in the traffic system due to anti-social elements. This paper suggests a user-friendly automated traffic control system which automatically detects a vehicle using the RFID active tag attached to it; as soon as the vehicle passes a reader, the vehicle is identified, reducing traffic malfunction and also reducing security problems. This is made possible by the use of RFID tickets, and a mesh network can be used to make traffic control smooth and travelling very precise. The paper basically deals with identifying and positioning a vehicle for an automated traffic control system.

Journal Article
TL;DR: This paper presents a feasibility study of "Motion Estimation by using VHDL for four-step search algorithm", a type of block matching algorithm used for finding motion vectors.
Abstract: Motion estimation is one of the key elements of many video compression schemes. Motion estimation is the process of determining motion vectors that describe the transformation from one image to another, usually between adjacent frames in a video sequence. In this paper we discuss the four-step search algorithm, a type of block matching algorithm used for finding motion vectors. The four-step search algorithm estimates the amount of motion on a block-by-block basis, i.e. for each block in the current frame, a block from the previous frame is found that is said to match this block based on a certain criterion. We consider 64*64 pixel images and find the motion vectors. This paper presents a feasibility study of "Motion Estimation by using VHDL for four-step search algorithm".
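The paper's design is in VHDL; purely to illustrate the search logic, here is a compact software sketch of a four-step-style search for one block, in which a coarse step-2 pattern is refined towards the best SAD match and finished with a step-1 pattern. The block size, the SAD cost, the tie-breaking rule and the boundary handling are simplified assumptions rather than the exact textbook procedure.

```python
# Simplified four-step-style block matching search for one block: search with a
# coarse 3x3 pattern of step 2, move the centre towards the best SAD, then finish
# with a step-1 refinement. Search-window limits are omitted for brevity.
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block and a shifted candidate."""
    c = cur[by:by + bs, bx:bx + bs]
    r = ref[by + dy:by + dy + bs, bx + dx:bx + dx + bs]
    if r.shape != c.shape:
        return np.inf                        # candidate falls outside the frame
    return np.abs(c.astype(int) - r.astype(int)).sum()

def four_step_search(cur, ref, bx, by, bs=16):
    cx = cy = 0                              # search centre (motion vector so far)
    step = 2
    while True:
        candidates = [(cx + i * step, cy + j * step) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates,
                   key=lambda v: (sad(cur, ref, bx, by, v[0], v[1], bs), v != (cx, cy)))
        if step == 1:                        # final step-1 refinement done
            return best
        if best == (cx, cy):                 # minimum at the centre: shrink the step
            step = 1
        else:
            cx, cy = best                    # move the centre and search again
```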

Journal Article
TL;DR: The purpose of this master thesis is to identify suitable routing protocols for use with WSN based on the limitations of the technology and propose an enhanced protocol for WSN.
Abstract: Wireless mobile ad-hoc networks are characterized as networks of nodes without any physical connections. In these networks there is no fixed topology, due to the mobility of nodes, interference, multipath propagation, environmental conditions and path loss. The purpose of this master thesis is to study, understand, analyze and discuss three mobile ad-hoc routing protocols, DSDV, AODV and DSR, of which the first is a proactive protocol, which depends on routing tables maintained at each node. The other two are reactive protocols, which find a route to a destination on demand, whenever communication is needed. Considering the same parameters, the DSR protocol transfers more data than both the AODV and DSDV protocols, but because changes in paths are avoided, the losses in AODV are lower than in DSR. This work analyzes the routing protocols for wireless networks based on their performance, both theoretically and through simulation. Basically, the aim is to identify suitable routing protocols for use with WSN based on the limitations of the technology and to propose an enhanced protocol for WSN.