
Showing papers by "Sri Ramakrishna Engineering College published in 2013"


Journal ArticleDOI
TL;DR: The main objective is to arrive at job assignments that could achieve minimum response time, maximum resource utilization, and a well-balanced load across all the resources involved in a grid.
Abstract: Computational grids provide a massive source of processing power and the means to support processor-intensive applications. The strong burstiness and unpredictability of the available resources raise the need to make applications robust against the dynamics of the grid environment. The two techniques best suited to cope with the dynamic nature of the grid are load balancing and job replication. In this work, we develop a load-balancing algorithm that combines the strong points of neighbor-based and cluster-based load-balancing methods. We then integrate the proposed load-balancing approach with a fault-tolerant scheduling scheme, MinRC, and develop a performance-driven fault-tolerant load-balancing algorithm, PD_MinRC, for independent jobs. In order to improve system flexibility and reliability and to save system resources, PD_MinRC employs a passive replication scheme. Our main objective is to arrive at job assignments that achieve minimum response time, maximum resource utilization, and a well-balanced load across all the resources involved in a grid. Experiments were conducted to show the applicability of PD_MinRC. One advantage of our approach is its relatively low overhead and robust performance against resource failures and inaccuracies in performance prediction information.

53 citations
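
As a rough illustration of the job-assignment goal described above (not the PD_MinRC algorithm itself), the following Python sketch greedily assigns independent jobs to the grid resource with the smallest estimated completion time; the job lengths and resource speeds are hypothetical.

```python
# Greedy minimum-completion-time assignment of independent jobs (toy sketch).
import heapq

def assign_jobs(job_lengths, resource_speeds):
    # heap of (estimated finish time, resource id)
    heap = [(0.0, r) for r in range(len(resource_speeds))]
    heapq.heapify(heap)
    schedule = {r: [] for r in range(len(resource_speeds))}
    # longest jobs first tends to give a better-balanced load
    for j, length in sorted(enumerate(job_lengths), key=lambda x: -x[1]):
        finish, r = heapq.heappop(heap)
        finish += length / resource_speeds[r]      # runtime of this job on resource r
        schedule[r].append(j)
        heapq.heappush(heap, (finish, r))
    makespan = max(f for f, _ in heap)             # response time of the slowest resource
    return schedule, makespan

sched, makespan = assign_jobs([8, 3, 5, 9, 2, 7], [1.0, 2.0, 1.5])
print(sched, makespan)
```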


Proceedings ArticleDOI
20 Mar 2013
TL;DR: A bus system using wireless sensor networks (WSNs) is proposed, aimed at helping blind people at the bus station and elderly people with independent navigation.
Abstract: Talking signs, guide canes and echolocation are all useful for navigating visually challenged people to their destination, but they fall short of the main objective: they fail to connect these users with public transport. In this project we propose a bus system using wireless sensor networks (WSNs). A blind person at the bus station carries a ZigBee unit which is recognized by the ZigBee unit in the bus, and an indication is given in the bus that a blind passenger is present at the station, so the bus stops there. The desired bus the blind person wants to take is identified with the help of the HM2007 speech recognition system: the person speaks the destination into a microphone and the voice recognition system recognizes it. The input is then analyzed by the microcontroller, which generates the bus numbers corresponding to the requested location. These bus numbers are converted into audio output using the APR 9600 voice synthesizer. The ZigBee transceiver in the bus sends the bus number to the transceiver carried by the blind person, and the bus number is announced through the headphones. The blind person boards the right bus parked in front of him, and when the destination is reached it is announced by means of the GPS-634R module, which is connected to the controller and the voice synthesizer that produces the audio output. This project is also aimed at helping elderly people with independent navigation.

32 citations


Journal ArticleDOI
TL;DR: This paper presents a MATLAB simulation of practically realizable control techniques such as a conventional proportional-integral-derivative (PID) controller, a fuzzy controller, an adaptive artificial neural network proportional- Integral-Derivative-PID controller and a pulse width modulation technique based PID controller for achieving better performance during the operating conditions.
Abstract: Brushless direct current (BLDC) motors are widely used in industrial, aerospace, medical, machine-tool and control applications. Nowadays they are becoming popular due to advantages such as reduced maintenance, better speed-torque characteristics, good dynamic performance, noiseless operation, wide speed range, compact size, high torque-to-volume ratio, low moment of inertia and high efficiency. However, very few control techniques are available for controlling BLDC drive systems. This paper presents a MATLAB simulation of practically realizable control techniques, namely a conventional proportional-integral-derivative (PID) controller, a fuzzy controller, an adaptive artificial neural network proportional-integral-derivative (ANN-PID) controller and a pulse-width-modulation-based PID controller, for achieving better performance during the operating conditions. The simulation results are presented to highlight the effectiveness of these controllers such as speed of response,...

24 citations
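
For readers unfamiliar with the baseline technique, here is a minimal discrete PID speed loop on a toy first-order motor model; all gains and motor parameters are illustrative and unrelated to the paper's simulations.

```python
# Discrete PID speed control of a toy first-order motor model:
#   J*dw/dt = Kt*u - B*w, with the controller output u standing in for motor current.
Kp, Ki, Kd, dt = 0.8, 5.0, 0.01, 1e-3     # illustrative gains and time step
J, B, Kt = 0.01, 0.1, 0.05                # illustrative inertia, friction, torque constant
w, integ, prev_err = 0.0, 0.0, 0.0
w_ref = 100.0                             # speed set point in rad/s

for _ in range(5000):                     # simulate 5 seconds
    err = w_ref - w
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = Kp * err + Ki * integ + Kd * deriv
    prev_err = err
    w += dt * (Kt * u - B * w) / J        # forward-Euler plant update

print("speed after 5 s:", round(w, 2), "rad/s")   # settles near the 100 rad/s reference
```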


Journal ArticleDOI
TL;DR: In this article, a proportional-integral-derivative (PID) controller and a model reference adaptive controller (MRAC) for the brushless direct current (BLDC) servomotor drive system are described.
Abstract: This paper describes the design and implementation of a proportional-integral-derivative (PID) controller and model reference adaptive controller (MRAC) for the brushless direct current (BLDC) servomotor drive system. The performance of the BLDC servomotor drive system is analyzed under different operating conditions such as change in inertia, resistance, load disturbance and change in reference speed. Simulation and experimental results are presented to prove that the parameter variations will considerably affect the performance of the PID controller based BLDC drive but not the MRAC based BLDC drive due to its inherent capability to adapt itself to the changing environment. Moreover, determining mechanical load parameters accurately for the small DC servomotor drives is a great challenge to the control system designers as they are very essential for the design of controllers. This paper also presents a simple and accurate method for determining the mechanical parameters of motor and load.

21 citations
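
A minimal sketch of the MRAC idea, assuming the textbook MIT-rule adaptation of a feedforward gain on a first-order plant; this is a classroom example, not the paper's BLDC servomotor controller.

```python
# MIT-rule MRAC sketch: adapt a feedforward gain theta so the plant output
# tracks a reference model despite an unknown plant gain k.
#   plant:  dy/dt  = -a*y  + k*u,   u = theta*r
#   model:  dym/dt = -a*ym + k0*r
#   MIT rule: dtheta/dt = -gamma * e * ym,  with e = y - ym
a, k, k0 = 1.0, 2.0, 1.0          # plant pole, unknown plant gain, model gain
gamma, dt, T = 0.5, 0.001, 20.0   # adaptation gain, step size, horizon (illustrative)
y = ym = theta = 0.0

for n in range(int(T / dt)):
    r = 1.0 if (n * dt) % 10 < 5 else -1.0     # square-wave reference for excitation
    u = theta * r
    y  += dt * (-a * y  + k  * u)
    ym += dt * (-a * ym + k0 * r)
    e = y - ym
    theta += dt * (-gamma * e * ym)            # gradient-style parameter update

print("adapted theta ~", round(theta, 3), " ideal value k0/k =", k0 / k)
```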


Proceedings ArticleDOI
21 Feb 2013
TL;DR: The simulation of a Permanent Magnet Synchronous Motor drive using an Iterative Learning Control (ILC) controller shows that the ILC controller reduces torque ripple much better than a PI controller.
Abstract: This paper presents the simulation of a Permanent Magnet Synchronous Motor (PMSM) drive using an Iterative Learning Control (ILC) controller. Iterative learning control is an approach to improving the tracking performance of a system that operates repetitively over a fixed time interval. In this paper, the traditional PI-controller-based torque control method is compared with the ILC-based torque control method and the results are obtained. The simulation results show that the ILC controller reduces torque ripple much better than the PI controller. The design and analysis of the ILC-based control method are simulated using MATLAB Simulink R2010b.

17 citations
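
The ILC principle the abstract refers to can be illustrated with a P-type update law, u_{k+1} = u_k + L*e_k, applied trial after trial over a fixed horizon; the plant and learning gain below are arbitrary toy values, not the PMSM drive.

```python
# P-type iterative learning control on a first-order discrete plant
#   y[t+1] = a*y[t] + b*u[t], tracking a fixed reference over N samples.
import numpy as np

a, b, N, L, trials = 0.9, 0.5, 50, 0.8, 30     # |1 - L*b| < 1 guarantees convergence
ref = np.sin(np.linspace(0, np.pi, N))         # repetitive reference trajectory
u = np.zeros(N)                                # control signal learned across trials

for k in range(trials):
    y = np.zeros(N + 1)
    for t in range(N):
        y[t + 1] = a * y[t] + b * u[t]
    e = ref - y[1:]                            # tracking error of this trial
    u = u + L * e                              # learning update: u_{k+1} = u_k + L*e_k

print("max |tracking error| after learning:", np.abs(e).max())
```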


Proceedings ArticleDOI
03 Jul 2013
TL;DR: In this research work, 16-bit and 64-bit adders are designed and a comparison is made between all types of adders in terms of delay.
Abstract: In modern VLSI design, the occurrence of delays is predictable; many digital systems that process data may have delays. Design requires a thorough understanding of algorithms, recurrence structures, energy and wire trade-offs, circuit design techniques, circuit sizing and system constraints. In this research work, 16-bit and 64-bit adders are designed and a comparison is made between all types of adders in terms of delay. Xilinx ISE is used for simulation and synthesis. A delay of 13.88 ns is obtained for the 16-bit Ling adder, while the 64-bit sparse-2 adder has a delay of 35.026 ns. Area is also measured and compared.

13 citations
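
The Ling and sparse adders compared in the paper belong to the parallel-prefix family; as a generic stand-in (not the paper's circuits), the following bit-level Python model of a Kogge-Stone prefix adder shows how generate/propagate pairs are combined in log2(width) stages.

```python
def kogge_stone_add(a, b, width=16):
    """Parallel-prefix (Kogge-Stone) addition sketch using generate/propagate bits."""
    g = [(a >> i) & (b >> i) & 1 for i in range(width)]         # bit generate
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]       # bit propagate
    G, P = g[:], p[:]
    d = 1
    while d < width:                                            # log2(width) prefix stages
        for i in range(width - 1, d - 1, -1):                   # descending: reads previous-stage values
            G[i] = G[i] | (P[i] & G[i - d])
            P[i] = P[i] & P[i - d]
        d <<= 1
    carries = [0] + G                    # carry into bit i is the group generate of bits 0..i-1
    return sum((p[i] ^ carries[i]) << i for i in range(width))  # sum modulo 2**width

a, b = 0xBEEF, 0x1234
assert kogge_stone_add(a, b) == (a + b) & 0xFFFF
```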


Journal ArticleDOI
TL;DR: In this article, the structural properties of zinc oxide nanopowders were studied using the X-ray diffraction technique (XRD), and it was observed that doping with aluminium significantly reduces the crystallite size and also affects the intensity ratio of the crystal planes.

12 citations


Proceedings ArticleDOI
24 Jul 2013
TL;DR: Aloe vera stems were collected and the inner green flesh was used for the synthesis of silver nanoparticles, showing considerable growth inhibition of two of the well-known pathogenic bacteria species.
Abstract: Aloe vera is a stemless or very short-stemmed succulent plant growing to 60-100 cm (24-39 in) tall, spreading by offsets. The leaves are thick and fleshy, green to grey-green, with some varieties showing white flecks on the upper and lower stem surfaces. In recent years, researchers in the field of nanotechnology are finding that metal nanoparticles have all kinds of previously unexpected benefits. Nanobiotechnology, a new branch of nanotechnology, represents an economic alternative to chemical and physical methods of nanoparticle formation. Compared to physical and chemical methods, biological methods have emerged as an alternative route for the synthesis of nanoparticles. Synthesis of inorganic nanoparticles by biological systems makes the nanoparticles more biocompatible and environmentally benign. Many bacterial as well as fungal species have been used for silver nanoparticle synthesis, but most of them have been reported to accumulate silver nanoparticles intracellularly. Intracellular synthesis always takes longer reaction times and also demands subsequent extraction and recovery steps. On the contrary, plant-extract-mediated synthesis takes place extracellularly, and the reaction times have been reported to be very short compared to those of microbial synthesis. In the present study, young fresh plant stems were collected and the inner green flesh was used for the synthesis of silver nanoparticles. 20 mL of aqueous extract was added to 20 mL of 1 mM silver nitrate solution. The solution was allowed to react at room temperature and incubated in the dark. After 24 hours the colour change was observed and the extract was subjected to UV-Visible studies. Although the plasmon band is broad, due to components of the extract that also absorb in the spectrophotometric range, the silver surface plasmon resonance (SPR) is observed at 460 nm. There is no change in peak position, suggesting that nucleation of the silver nanoparticles starts at the initiation of the reaction and that the size remains unchanged throughout the course of the reaction. The antibacterial activity was tested against E. coli and Bacillus species. The bioreduced silver nanoparticles showed considerable growth inhibition of these two well-known pathogenic bacteria, with inhibition zones of 11 mm and 10 mm observed for E. coli and Bacillus species, respectively.

11 citations



Journal ArticleDOI
TL;DR: Experimental results demonstrate that ENGA gives higher fault coverage, reduced transitions and more compact test vectors for most of the asynchronous benchmark circuits when compared with those generated by the Weighted Sum Genetic Algorithm (WSGA).
Abstract: A new multi-objective genetic algorithm has been proposed for testing crosstalk delay faults in asynchronous sequential circuits that reduces average power dissipation during test application. The proposed Elitist Non-dominated Sorting Genetic Algorithm (ENGA) based Automatic Test Pattern Generation (ATPG) for crosstalk-induced delay faults generates a test pattern set with high fault coverage and low switching activity. Redundancy is introduced in the ENGA-based ATPG by modifying the fault dropping phase, and hence a very good reduction in transition activity is achieved. Tests are generated for several asynchronous SIS benchmark circuits. Experimental results demonstrate that ENGA gives higher fault coverage, reduced transitions and more compact test vectors for most of the asynchronous benchmark circuits when compared with those generated by the Weighted Sum Genetic Algorithm (WSGA).

9 citations
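
The core of an elitist non-dominated sorting GA is the fast non-dominated sort; below is a generic Python sketch of that step, assuming objectives such as fault coverage and switching activity have been mapped to a minimisation form. It is a textbook building block, not the authors' ENGA implementation.

```python
def fast_nondominated_sort(pop):
    """pop: list of objective tuples, all to be minimised
    (e.g. (-fault_coverage, switching_activity)). Returns fronts of indices."""
    n = len(pop)
    S = [[] for _ in range(n)]           # solutions dominated by i
    dom_count = [0] * n                  # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            better = all(a <= b for a, b in zip(pop[i], pop[j]))
            strictly = any(a < b for a, b in zip(pop[i], pop[j]))
            worse = all(b <= a for a, b in zip(pop[i], pop[j]))
            strictly_worse = any(b < a for a, b in zip(pop[i], pop[j]))
            if better and strictly:
                S[i].append(j)
            elif worse and strictly_worse:
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# toy population: (negated fault coverage, transition count)
print(fast_nondominated_sort([(-0.95, 120), (-0.90, 80), (-0.99, 200), (-0.95, 150)]))
```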


Proceedings ArticleDOI
29 Apr 2013
TL;DR: This research builds on recent advances in machine-learning-based microarray gene expression data analysis, applying three feature selection algorithms that play an important role in cancer classification.
Abstract: Analysis of gene expression is important in many fields of biological research in order to retrieve the required information. As time advances, illnesses in general and cancer in particular have become more and more complex and complicated to detect, analyze and cure. Cancer research is one of the major research areas in the medical field. Accurate prediction of different tumor types has great value in providing better treatment and minimizing toxicity for patients. To this end, data mining algorithms are an important tool and the most extensively used approach to classify gene expression data, and they play an important role in cancer classification. One of the major challenges is to discover how to extract useful information from such datasets. This research builds on recent advances in machine-learning-based microarray gene expression data analysis, using three feature selection algorithms.
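
As a hedged illustration of filter-style feature selection on microarray-like data (the paper does not name its three algorithms here), a short scikit-learn sketch using mutual information on a synthetic expression matrix:

```python
# Select the most informative "genes" from a synthetic expression matrix.
# The dataset, the mutual-information criterion and k=50 are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)          # samples x genes, tumour labels
selector = SelectKBest(mutual_info_classif, k=50)   # keep the 50 highest-scoring genes
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                              # (100, 50)
```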

Proceedings ArticleDOI
21 Feb 2013
TL;DR: The proposed modular multiplier is built on a digit-serial architecture using existing mathematical algorithms; it reaches the maximum speed and hence the area is reduced.
Abstract: A high-speed modular multiplier is a primary requirement of multi-core processors because of critical applications such as security and high performance, many of which require efficient and reliable hardware implementations. The classical methods are based on Barrett reduction and Montgomery multiplication, but the intermediate quotient is very large and the main trade-off is speed. To overcome this, Karatsuba multiplication is used to enhance the speed and the potential for parallel processing. The results compare the area and power of the existing algorithms, and the proposed method achieves higher speed than other modular multipliers. The different algorithms are developed as architectures, and the digit-serial concept is applied to each of them, which also increases the potential for parallel implementation. The proposed modular multiplier is realized on the proposed digit-serial architecture using existing mathematical algorithms. This method reaches the maximum speed and hence the area is reduced. The implementation of the proposed modular algorithm also has lower power consumption than counterparts based on similar modular algorithms.
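
Karatsuba multiplication, on which the abstract relies for speed, replaces four half-size multiplications with three. A plain recursive Python sketch of the algorithm (a software illustration only, not the digit-serial hardware):

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of two non-negative integers."""
    if x < 16 or y < 16:                 # small operands: fall back to schoolbook multiply
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    hi_x, lo_x = x >> half, x & ((1 << half) - 1)
    hi_y, lo_y = y >> half, y & ((1 << half) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2   # cross term from one extra multiply
    return (z2 << (2 * half)) + (z1 << half) + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```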

Proceedings ArticleDOI
21 Feb 2013
TL;DR: The concept of a combination table is introduced in the proposed work, and it is found that merging flip-flops reduces the dynamic power by about 23.68% and the total power by about 8.55%.
Abstract: Clock power is the major dynamic power source in sequential VLSI circuits. A number of techniques are available to reduce this power; one of them is the use of multi-bit flip-flops. In this technique the power consumption is reduced by replacing some flip-flops with multi-bit flip-flops. This may cause some performance degradation in the original circuit, so to avoid it the flip-flops to be merged must satisfy certain timing constraints. The concept of a combination table, which lists the flip-flops that can be merged, is introduced in the proposed work. According to the experimental results, merging the flip-flops reduces the dynamic power by about 23.68% and the total power by about 8.55%. It is also found that the global clock buffer is reduced to 37.84%.

Proceedings ArticleDOI
21 Feb 2013
TL;DR: The proposed Kernelized Fuzzy Possibilistic C-Means (KFPCM) algorithm combines the advantages of both FCM and PCM and uses the Kernel-Induced Distance measure, which helps in obtaining better clustering results in case of high dimensional data.
Abstract: Data clustering is commonly used in many applications. Due to the fast development of the internet and its applications, several high-dimensional data clustering algorithms have come into existence, and it is very complicated to handle high-dimensional data clustering using traditional clustering algorithms. Hence, in order to overcome this difficulty, a Kernelized Fuzzy Possibilistic C-Means (KFPCM) algorithm has been proposed for effective clustering results. The proposed KFPCM uses a distance measure based on the kernel-induced distance. FPCM combines the advantages of both FCM and PCM; moreover, the kernel-induced distance measure helps in obtaining better clustering results for high-dimensional data. The proposed KFPCM is evaluated on the UCI Machine Learning Repository (Iris and Wine datasets) in terms of clustering accuracy and execution time. The results prove the effectiveness of the proposed KFPCM.
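
A minimal kernel fuzzy c-means sketch with a Gaussian kernel-induced distance is shown below; it covers only the FCM part of KFPCM (the possibilistic term is omitted) and the kernel width, fuzzifier and data are illustrative assumptions.

```python
import numpy as np

def kernel_fcm(X, c, m=2.0, sigma=1.0, iters=100, seed=0):
    """Kernel fuzzy c-means with a Gaussian kernel: the feature-space distance
    ||phi(x)-phi(v)||^2 = 2*(1 - K(x, v)) drives the membership update."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]                    # initial centres
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None]) ** 2).sum(-1) / sigma ** 2)  # n x c kernel values
        d = np.clip(1.0 - K, 1e-12, None)                          # kernel-induced distance
        inv = d ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)                   # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]                     # kernel-weighted centre update
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, V = kernel_fcm(X, c=2)
print(np.round(V, 2))        # centres near the two synthetic clusters
```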

Proceedings ArticleDOI
21 Feb 2013
TL;DR: The proposed fault-detection method can detect a fault in fewer decoding cycles (almost always in three), and when the data read is error-free, it can clearly reduce memory access time.
Abstract: As a result of technology scaling and higher integration densities, there may be variations in parameters and noise levels which lead to larger error rates at various levels of computation. As far as memory applications are concerned, soft errors and single-event upsets are always a problem. The paper mainly focuses on the design of an efficient Majority Logic Detector/Decoder (MLDD) for fault detection along with fault correction for memory applications, considerably reducing the fault-detection time. The error detection and correction is done by one-step majority-logic decoding and is made effective for Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes. Even though majority-decodable codes can correct a large number of errors, they need a long decoding time for error detection, and the ML decoding method may take the same fault-detection time for both erroneous and error-free code words, which in turn degrades memory performance. The proposed fault-detection method can detect a fault in fewer decoding cycles (almost always in three), and when the data read is error-free, it can clearly reduce memory access time. The technique keeps the area overhead minimal and power consumption low for large code word sizes.

Proceedings ArticleDOI
21 Feb 2013
TL;DR: A low-power CMOS Static Random Access Memory (SRAM) based Field Programmable Gate Array (FPGA) architecture is presented in this paper; it employs fast, low-power SRAM blocks based on 10T SRAM cells.
Abstract: A low-power CMOS Static Random Access Memory (SRAM) based Field Programmable Gate Array (FPGA) architecture is presented in this paper. The architecture is based on CMOS logic and CMOS SRAMs that are used for on-chip dynamic reconfiguration. It employs fast, low-power SRAM blocks based on 10T SRAM cells. These blocks allow fast access to the configuration bits through the shadow-SRAM technique, and the dynamic reconfiguration delay is hidden behind the computation delay through the use of shadow SRAM cells. The combined effect of the SRAM memory cells and the shadow-SRAM scheme helps reduce both delay and power consumption. Experimental results show a delay of about 8.035 ns and power consumption of about 0.015 W for the 10T SRAM memory cell, with an area overhead relative to 4T and 6T SRAM cells. The results also show a delay of about 8.979 ns and power consumption of about 0.052 W for the LB of the FPGA architecture that employs CMOS SRAMs built from 10T SRAM cells.

Journal ArticleDOI
TL;DR: The EFDD (Early False Data Detection) Protocol is analyzed; it addresses the two possibilities in a simple and secure way considering the constraints of sensor nodes, reduces data transmission by dropping false data earlier, and also reduces computation when compared to existing schemes.
Abstract: In a Wireless Sensor Network (WSN), sensors are densely deployed and intruders can compromise some sensor nodes and inject false data in order to raise false alarms, reduce network lifetime, waste bandwidth resources and so on. False data injection can occur in Data Aggregation (DA) and Data Forwarding (DF). This paper analyses the EFDD (Early False Data Detection) Protocol, which addresses these two possibilities in a simple and secure way considering the constraints of sensor nodes. The main idea is the selection of the network structure; this protocol works effectively in a Spatial/Semantic Correlation Tree (SCT) structure. False data detection in DA is done using monitoring nodes which monitor the data aggregator. EFDD on the SCT structure reduces counterfeit data transmission better than on other structures. The results show that EFDD reduces data transmission by dropping false data earlier and also reduces computation when compared to existing schemes.

Journal ArticleDOI
TL;DR: The proposed machine-learning-based imputation method performs well and was able to impute the missing values even in the worst cases with more than 50% missing values.
Abstract: The success of data mining relies on the purity of the data set; before performing data mining, the data has to be cleaned. An unprocessed data set may contain noisy or missing values, which is a critical research issue in the pre-processing stage. Imputation methods are used to solve the missing value problem. In this work, a machine-learning-based imputation method is proposed that uses mutual information by exclusively interpolating two different sections of the same dataset. A radial basis function based neural network has been used to design the proposed model. The performance of the proposed algorithm has been measured with respect to different rates (percentages) of missing values in the data set, and the results have been compared with existing simple and efficient imputation methods. To evaluate the performance, the standard WDBC data set has been used. The proposed algorithm performs well and was able to impute the missing values even in the worst cases with more than 50% missing values. Instead of using a simple quality measure such as the Mean Square Error (MSE) to evaluate the imputed data quality, in this study the quality is measured in terms of classification performance. The results obtained were significant and comparable.
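
A simple numpy sketch of RBF-network-based imputation, assuming the missing values are confined to a single target column and using randomly chosen centres with ridge-regularised output weights; this approximates the general idea only and is not the authors' model or dataset.

```python
import numpy as np

def rbf_impute(X, target_col, sigma=1.0, n_centres=20, ridge=1e-3, seed=0):
    """Fill NaNs in one column using a Gaussian RBF network fitted on the complete rows.
    Assumes the other columns of the affected rows are fully observed."""
    rng = np.random.default_rng(seed)
    mask = np.isnan(X[:, target_col])
    obs_cols = [j for j in range(X.shape[1]) if j != target_col]
    Xtr, ytr = X[~mask][:, obs_cols], X[~mask, target_col]
    C = Xtr[rng.choice(len(Xtr), min(n_centres, len(Xtr)), replace=False)]  # RBF centres

    def design(A):                                   # Gaussian activations w.r.t. the centres
        d2 = ((A[:, None, :] - C[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    Phi = design(Xtr)                                # ridge-regularised least-squares output weights
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ ytr)
    X = X.copy()
    X[mask, target_col] = design(X[mask][:, obs_cols]) @ w
    return X

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 5))
data[:, 4] = data[:, :4].sum(axis=1) + 0.1 * rng.normal(size=200)
data[rng.choice(200, 100, replace=False), 4] = np.nan        # 50% missing in the target column
filled = rbf_impute(data, target_col=4, sigma=2.0)
print(np.isnan(filled).sum())                                # 0 remaining missing values
```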

Proceedings ArticleDOI
20 Mar 2013
TL;DR: In this paper, the authors propose a proportional plus integral (PI) controller working in the transient state in parallel with Iterative Learning Control (ILC), together with Space Vector Pulse Width Modulation (SVPWM), to suppress the speed ripple in a PMSM driven by field-oriented control.
Abstract: Permanent-magnet synchronous motors (PMSM) are mostly used in high-performance industrial servo applications in which torque smoothness is necessary. The main disadvantage of the PMSM is torque ripple, which induces speed pulsations that deteriorate the performance of the drive, particularly at lower speeds. This paper proposes a proportional plus integral (PI) controller working in the transient state in parallel with Iterative Learning Control (ILC) working in the steady state, together with Space Vector Pulse Width Modulation (SVPWM), to suppress the speed ripple in a PMSM driven by field-oriented control. The speed controller compares the reference speed with the actual speed and produces the reference current i_q_ref, which in turn reduces the speed error. According to the i_q_ref value, SVPWM pulses are produced and given to the three-phase voltage source inverter. The proposed speed-ripple-reduction method is designed, analysed and simulated using the Simulink library in MATLAB R2009a. The results obtained from simulation show a significant reduction in speed ripple using the PI controller with plug-in Iterative Learning Control (ILC) and SVPWM. This method shows better speed-ripple reduction in comparison with the conventional method.
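
The SVPWM stage can be illustrated by the standard sector and dwell-time computation; the sketch below uses the textbook formulas with hypothetical voltage inputs, not the paper's Simulink model.

```python
import math

def svpwm_times(v_alpha, v_beta, vdc, ts):
    """Standard SVPWM dwell-time calculation: locate the sector of the reference
    vector and compute active times t1, t2 and zero time t0 within one period ts."""
    theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    sector = int(theta // (math.pi / 3))             # 0..5
    th = theta - sector * math.pi / 3                # angle within the sector
    m = math.sqrt(3) * math.hypot(v_alpha, v_beta) / vdc
    t1 = ts * m * math.sin(math.pi / 3 - th)
    t2 = ts * m * math.sin(th)
    t0 = ts - t1 - t2                                # goes negative in overmodulation
    return sector + 1, t1, t2, t0

# illustrative reference vector, DC-link voltage and switching period
print(svpwm_times(100.0, 60.0, 400.0, 1e-4))
```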

Proceedings ArticleDOI
24 Jul 2013
TL;DR: In this article, iron oxide nanoparticles were synthesized by a simple wet chemical reduction using plant phytochemicals and their antimicrobial activities were evaluated. The resultant particles were analyzed by UV-Visible spectroscopy, particle size analysis by dynamic light scattering (DLS) and zeta potential measurement.
Abstract: Iron oxide nanoparticles have been intensively investigated due to their magnetic characteristics and their potential applications in bioscience and medicine. Iron oxide nanoparticles, with their non-toxic and biocompatible nature, can be used in biomedical applications ranging from drug delivery to hyperthermia for tumor-targeted therapy. Targeted drug delivery is achieved through superparamagnetism, by which the required magnetic alignment is achieved at normal temperature. Surface shaping of the iron oxide nanoparticles is done accordingly for coating with plant extracts, which act as the required drug for cancer therapy. Certain plant extracts contain polyphenols which perform the role of reduction and precipitation; such methods do not harm the environment and fall under “green synthesis”. In this work iron oxide nanoparticles were synthesized by a simple wet chemical reduction using plant phytochemicals and their antimicrobial activities were evaluated. The particles showed an even size distribution and had antimicrobial activity even at minimal concentrations. The resultant particles were analyzed by UV-Visible spectroscopy, particle size analysis by dynamic light scattering (DLS) and zeta potential measurement.

Journal Article
TL;DR: In this paper, a production-distribution inventory model with shortages and unit cost dependent demand has been formulated along with possible constraints, and the model is illustrated with a numerical example.
Abstract: A production-distribution inventory model with shortages and unit-cost-dependent demand has been formulated along with possible constraints. In most real-world situations the cost parameters are imprecise in nature; hence the unit cost is treated here in a fuzzy environment. Due to its complexity, the proposed model has been solved with the LINGO software. The model is also solved without shortages as a special case, and is illustrated with a numerical example.
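
For orientation, the crisp (non-fuzzy) special case of an inventory model with planned shortages has a closed-form optimum; the sketch below computes that classical EOQ-with-backorders solution with made-up parameters, and is not the paper's fuzzy production-distribution model.

```python
import math

def eoq_with_shortages(D, K, h, b):
    """Classical EOQ with planned backorders:
    D = demand rate, K = setup cost per order,
    h = holding cost per unit per period, b = backorder cost per unit per period."""
    Q = math.sqrt(2 * D * K / h) * math.sqrt((h + b) / b)   # optimal order quantity
    S = Q * h / (h + b)                                     # optimal maximum backorder level
    cost = math.sqrt(2 * D * K * h) * math.sqrt(b / (h + b))  # minimum total relevant cost
    return Q, S, cost

print(eoq_with_shortages(D=1200, K=50, h=2.0, b=8.0))       # illustrative numbers only
```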

Journal ArticleDOI
TL;DR: In this paper, the authors conducted many inspections and observations in the chassis assembly of an automobile industry and identified nearly 20 intolerable risks in the assembly line, for which the intolerable risks are converted into tolerable through small projects and those which cannot be converted through Safe Job Procedures (SJP).
Abstract: Chassis assemblies in any automobile industry are vulnerable to several hazards. In order to reduce them, we need to study and carry out Hazard Identification and Risk Assessment in the new chassis assembly, to provide engineering solutions for preventing accidents and dangerous occurrences and to evaluate the risk. We conducted many inspections and observations in the chassis assembly of an automobile industry. From our observations and findings we identified nearly 20 intolerable risks in the chassis line; the intolerable risks are converted into tolerable ones through small projects, and those which cannot be converted are reduced through Safe Job Procedures (SJP). All the significant risks are controlled by the Occupational Health & Safety Management Program, standard operating procedures and operation control procedures. Through this project the hazards are minimized or eliminated, which ensures safety in the assembly line as well as the safety of workers.

Proceedings ArticleDOI
21 Feb 2013
TL;DR: In this project the truncation error is not more than 1 ulp (unit of least position), so there is no need for error compensation circuits and the final output will be precise.
Abstract: Truncated multipliers offer significant improvements in area, delay, and power. The proposed method reduces the number of full adders and half adders during tree reduction, and experiments show that area can be saved using this method. The output is split into LSB and MSB parts, and the LSB part is compressed using operations such as deletion, reduction, truncation, rounding and final addition. Previous related papers reduce the truncation error by adding error compensation circuits. In this project the truncation error is not more than 1 ulp (unit of least position), so there is no need for error compensation circuits and the final output will be precise.
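
The role of the error bound can be seen experimentally for a much simpler truncation scheme: the sketch below exhaustively measures, for an 8x8 multiplier that merely discards the partial-product bits in the lower columns (no compensation, so not the paper's design), the worst-case output error in ulps.

```python
def trunc_error_ulps(n=8):
    """Worst-case error, in output ulps, of an n x n multiplier that keeps only the
    partial-product bits in columns >= n and outputs the n most significant bits."""
    worst = 0
    for a in range(1 << n):
        for b in range(1 << n):
            kept = 0
            for i in range(n):
                if (a >> i) & 1:
                    for j in range(n):
                        if (b >> j) & 1 and i + j >= n:
                            kept += 1 << (i + j)
            exact_hi = (a * b) >> n              # exact product, n MSBs
            worst = max(worst, exact_hi - (kept >> n))
    return worst

print("worst-case truncation error (ulps):", trunc_error_ulps(8))
```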

Proceedings ArticleDOI
21 Feb 2013
TL;DR: A decode-aware compression technique to improve both compression and decompression efficiencies and an efficient combination of bitmask-based compression and run length encoding of repetitive patterns is proposed.
Abstract: Reconfigurable systems use bitstream compression to reduce the bitstream size and the memory requirement, and the communication bandwidth is improved by reducing the reconfiguration time. Existing research has explored either efficient compression with slow decompression or fast decompression at the cost of compression efficiency. This paper proposes a decode-aware compression technique to improve both compression and decompression efficiencies. The three major contributions of this paper are: i) an efficient bitmask selection technique that can create a large set of matching patterns; ii) a bitmask-based compression using the bitmask and dictionary selection technique that can significantly reduce the memory requirement; and iii) an efficient combination of bitmask-based compression and run-length encoding of repetitive patterns. The original bitstream can be regenerated using the decompression engine.
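
A toy version of the bitmask-plus-dictionary matching idea is sketched below (the run-length encoding stage is omitted); the 32-bit word width, the single aligned 4-bit mask and the encoding tuples are assumptions, not the paper's parameters.

```python
def bitmask_compress(words, dictionary, mask_bits=4, width=32):
    """Encode each word as ('D', idx) for an exact dictionary hit, as
    ('B', idx, offset, mask) when it differs from a dictionary entry only inside
    one aligned 4-bit field, and as ('U', word) otherwise."""
    out = []
    for w in words:
        if w in dictionary:
            out.append(('D', dictionary.index(w)))
            continue
        coded = None
        for idx, d in enumerate(dictionary):
            diff = w ^ d
            for off in range(0, width, mask_bits):
                # diff fits entirely inside bits [off, off + mask_bits)
                if (diff >> off) << off == diff and diff < (1 << (off + mask_bits)):
                    coded = ('B', idx, off, diff >> off)
                    break
            if coded:
                break
        out.append(coded if coded else ('U', w))
    return out

dictionary = [0xDEADBEEF, 0x12345678]
stream = [0xDEADBEEF, 0xDEADBE0F, 0x12345678, 0xCAFEBABE]
print(bitmask_compress(stream, dictionary))
```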

Proceedings ArticleDOI
21 Feb 2013
TL;DR: Experimental results indicate that the proposed design can detect multiple bit errors and effectively recover the data with reduced area; the area is reduced from 1665 to 345.
Abstract: Motion Estimation (ME) is the process of determining the motion vectors that describe the transformation from one 2D image to another, usually between adjacent frames in a video sequence. ME is a critical part of any video coding system, as the video quality will be affected if an error occurs in the ME. In order to test motion estimation in a video coding system, an Error Detection and Correction Architecture (EDCA) is designed based on the Residue-and-Quotient (RQ) code. Multiple bit errors in the processing element (PE), the key component of an ME block, can be detected and the original data recovered effectively using the EDCA design. Experimental results indicate that the proposed design can detect multiple bit errors and effectively recover the data with reduced area; the area is reduced from 1665 to 345.

Journal ArticleDOI
TL;DR: The proposed image denoising algorithm is examined in a new shiftable and modified version of the discrete wavelet transform, the dual-tree discrete wavelet transform, and shows better performance in noise suppression and edge preservation.
Abstract: Images are often corrupted by noise owing to channel communication errors, defective image acquisition plans or devices, engine sparks, power interference and atmospheric electrical emissions. In the proposed method a new wavelet shrinkage algorithm based on fuzzy logic and the lifting scheme is used. In particular, the intra-scale dependency within wavelet coefficients is modeled using a fuzzy feature. This fuzzy feature differentiates between important coefficients, namely image discontinuity coefficients, and noisy coefficients, and is used for enhancing the wavelet coefficients' information in the shrinkage step, so that the fuzzy membership function shrinks the wavelet coefficients based on the fuzzy feature. The proposed image denoising algorithm is examined in a new shiftable and modified version of the discrete wavelet transform, the dual-tree discrete wavelet transform. The lifting scheme allows a faster implementation of the wavelet transform and a fully in-place calculation; no extra memory is needed and the original signal can be replaced with its transform. The method transforms an image into the wavelet domain using lifting-based wavelet filters, applies the fuzzy threshold to extract the speckle in the highest sub-bands, and finally transforms the result back into the spatial domain. Experimental results show that the resulting enhanced image has better performance in noise suppression and edge preservation.
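
As a baseline for comparison, a plain wavelet soft-thresholding (VisuShrink-style) denoiser using PyWavelets is sketched below; the paper's fuzzy shrinkage and dual-tree transform are not reproduced here, and the wavelet, level and test image are illustrative choices.

```python
import numpy as np
import pywt

def denoise(img, wavelet='db4', level=2):
    """Soft-threshold the detail sub-bands with the universal (VisuShrink) threshold."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # robust noise estimate (finest diagonal band)
    thr = sigma * np.sqrt(2 * np.log(img.size))               # universal threshold
    shrunk = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode='soft') for c in detail)
                            for detail in coeffs[1:]]
    return pywt.waverec2(shrunk, wavelet)

noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))   # stand-in noisy image
clean = denoise(noisy)
print(clean.shape)
```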

Proceedings ArticleDOI
21 Feb 2013
TL;DR: This method rectifies the problem of segmenting words from lines with the bounding box (BB) method, which suppresses the structure of characters in word segmentation, and solves the issue of spatial measure and threshold.
Abstract: Document image analysis methods fail in the case of freestyle handwritten documents, in which the text is curvilinear and the gaps between words are non-uniform. This paper introduces a relatively simple method which is more tolerant of such cases. The proposed word segmentation method requires the document to already be segmented into text lines. The proposed system begins by pre-processing the scanned image of the handwritten text to increase recognition accuracy, enhancing some features and eliminating some inconsistencies. It solves the issue of the spatial measure and threshold, which are sensitive to the shape of the connected component (CC), by reducing the region of interest to the core region. The method rectifies the problem of segmenting words from lines with the bounding box (BB) method, which suppresses the structure of characters. The trimmed mean (TM) is used to detect the core region and also as the threshold for gap discrimination in this segmentation method. The system was developed in Java and its performance was evaluated on word images selected from the IAM database. Applying the segmentation scheme to 1100 text lines yielded 96.7% accuracy, whereas the BB method produced only 90.1%.
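
The gap-discrimination step can be sketched as follows: column gaps within a binarised text line are thresholded with a trimmed mean (SciPy's trim_mean). This is a simplified illustration only; the paper's core-region extraction is omitted and the trimming proportion is an assumption.

```python
import numpy as np
from scipy.stats import trim_mean

def segment_words(line_img, cut_ratio=0.2):
    """line_img: 2D binary array of one text line (ink = 1). Gaps between ink runs
    wider than the trimmed mean of all gap widths are treated as word boundaries."""
    ink = line_img.sum(axis=0) > 0                   # column-wise ink presence
    gaps, start = [], None
    for x, has_ink in enumerate(ink):
        if not has_ink and start is None:
            start = x
        elif has_ink and start is not None:
            gaps.append((start, x - start))
            start = None
    widths = [w for _, w in gaps if w > 0]
    if not widths:
        return [line_img]
    thr = trim_mean(widths, cut_ratio)               # trimmed mean as the gap threshold
    cuts = [s + w // 2 for s, w in gaps if w > thr]
    bounds = [0] + cuts + [line_img.shape[1]]
    return [line_img[:, a:b] for a, b in zip(bounds, bounds[1:])]

line = np.zeros((20, 100), dtype=int)
line[5:15, 5:30] = 1
line[5:15, 50:75] = 1
line[5:15, 93:98] = 1
print(len(segment_words(line)))                      # 3 word images
```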

Proceedings ArticleDOI
01 Dec 2013
TL;DR: Experimental results show that the proposed LSMC outperforms standard codecs such as Set Partitioning in Hierarchical Trees (SPIHT) and the Set Partitioned Embedded bloCK coder (SPECK) for lossy and lossless compression.
Abstract: In this paper, a novel video codec called the L-Shaped Morpho Codec (LSMC), based on the Lifting Wavelet Transform (LWT), is proposed. Each frame in the video sequence is first decomposed into different sub-bands using the LWT at the maximum decomposition level. The proposed LSMC keeps track of the significant pixels of each sub-band in scan order, left to right and top to bottom. If a pixel is found significant, morphological dilation with an L-shaped structuring element is applied immediately to find further significant pixels. This improves the rate-distortion performance for lossy compression. Experimental results show that the proposed LSMC outperforms standard codecs such as Set Partitioning in Hierarchical Trees (SPIHT) and the Set Partitioned Embedded bloCK coder (SPECK) for lossy and lossless compression. For lossy compression, LSMC performs better at higher bit rates than at lower bit rates. The average bits per pixel (bpp) required for lossless compression by LSMC is lower than for SPECK.
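
The significance-propagation step can be mimicked with SciPy's binary dilation and a small L-shaped structuring element; the exact element and threshold used by LSMC are not specified in the abstract, so the ones below are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

L_SE = np.array([[1, 0],
                 [1, 1]], dtype=bool)       # hypothetical L-shaped structuring element

def significance_map(subband, thr):
    """Mark coefficients above the threshold as significant, then dilate the map
    with the L-shaped element to flag their likely-significant neighbours."""
    sig = np.abs(subband) >= thr
    return binary_dilation(sig, structure=L_SE)

band = np.random.default_rng(0).normal(0, 4, (8, 8))   # stand-in wavelet sub-band
print(significance_map(band, thr=6).astype(int))
```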

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed RABC algorithm can search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method.
Abstract: In this paper, to compute optimum thresholds for the Maximum Tsallis Entropy Thresholding (MTET) model, a new hybrid algorithm is proposed by integrating Artificial Bee Colony optimization (ABC) with Powell's conjugate gradient (PCG) method. Here the ABC with an improved perturbation mechanism (IPM) acts as the main optimizer searching for the near-optimal thresholds, while the PCG method is used to fine-tune the best solutions obtained by the ABC in every iteration. This new multilevel thresholding technique is called the Refined Artificial Bee Colony optimization (RABC) algorithm for MTET. Experimental results over multiple images with different ranges of complexity validate the efficiency of the proposed technique with regard to segmentation accuracy, speed and robustness in comparison with other techniques reported in the literature. The experimental results demonstrate that the proposed RABC algorithm can find multiple thresholds which are very close to the optimal ones obtained by the exhaustive search method.
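
The Tsallis-entropy criterion being maximised can be written down and searched exhaustively for a single threshold, as in the sketch below; the ABC/PCG optimiser itself is not reproduced, and q is an illustrative entropic index.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Exhaustive single-threshold maximisation of the Tsallis criterion
    S_A + S_B + (1 - q) * S_A * S_B over a grey-level histogram."""
    p = hist / hist.sum()
    best_t, best_val = None, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - np.sum((p[:t] / pa) ** q)) / (q - 1)
        sb = (1 - np.sum((p[t:] / pb) ** q)) / (q - 1)
        val = sa + sb + (1 - q) * sa * sb
        if val > best_val:
            best_val, best_t = val, t
    return best_t

img = np.random.default_rng(0).integers(0, 256, (64, 64))    # stand-in grey image
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print("bilevel threshold:", tsallis_threshold(hist.astype(float), q=0.8))
```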

Journal ArticleDOI
TL;DR: The proposed algorithm is compared with the K-Means algorithm in terms of running time, memory usage and accuracy, and is shown to be better in terms of accuracy and execution time.
Abstract: In recent years the privacy-preserving data mining problem has gained considerable importance, due to the vast amount of personal data about individuals that is stored by different commercial vendors and organizations. Privacy-preserving clustering has not been studied as intensively as other data mining techniques such as rule mining and sequence mining. In this paper, we obtain privacy by anonymization: the data is encrypted from the original data along with a secure key, and the secure key is obtained by the Diffie-Hellman key exchange algorithm. The Fuzzy C-Means clustering algorithm, which is suitable for clustering data whose boundaries are ambiguous, is used to cluster the anonymized data. A distance matrix is first calculated, from which a similarity matrix and a dissimilarity matrix are formed: the similarity matrix measures the similarity between each data point and the cluster centroids, while the dissimilarity matrix measures their dissimilarity. The membership matrix constructed from these matrices is used to cluster the anonymized data. The experimental results of the proposed algorithm are compared with the K-Means algorithm in terms of running time, memory usage and accuracy, and it is shown that the efficiency of the proposed algorithm is better in terms of accuracy and execution time.
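
The key-exchange step can be illustrated with a toy Diffie-Hellman round; the small modulus and generator below are purely illustrative assumptions and far too weak for real use, where a large safe prime or an elliptic-curve group is required.

```python
import secrets

# Toy Diffie-Hellman exchange: both parties derive the same shared secret
# from public values without ever transmitting their private keys.
p, g = 4294967291, 5                    # hypothetical small prime modulus and generator
a = secrets.randbelow(p - 2) + 1        # party A's private key
b = secrets.randbelow(p - 2) + 1        # party B's private key
A, B = pow(g, a, p), pow(g, b, p)       # public values exchanged over the network

shared_a = pow(B, a, p)                 # A's view of the shared secret
shared_b = pow(A, b, p)                 # B's view of the shared secret
assert shared_a == shared_b
print("shared secret:", shared_a)
```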