
Showing papers in "Journal of Information Engineering and Applications in 2011"


Journal Article
TL;DR: Two image compression techniques, one based on the Discrete Cosine Transform and one on the Discrete Wavelet Transform, are simulated, and their results are compared across several quality parameters on various images.
Abstract: Image compression is a method for reducing the storage space of images and videos, which helps improve storage and transmission performance. In image compression we concentrate not only on reducing size but also on doing so without losing the quality and information of the image. In this paper, two image compression techniques are simulated: the first is based on the Discrete Cosine Transform (DCT) and the second on the Discrete Wavelet Transform (DWT). The simulation results are shown, and several quality parameters are compared by applying both techniques to various images. Keywords: DCT, DWT, Image compression, Image processing
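The DCT pipeline this abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 8x8 block size and the flat quantization step `q` are assumptions, chosen only to show how coarse quantization of DCT coefficients trades error for sparsity.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors)
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def compress_block(block, q=20):
    # Forward 2-D DCT, coarse quantization (the lossy step), inverse DCT
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    quantized = np.round(coeffs / q)          # most coefficients become 0
    return quantized, D.T @ (quantized * q) @ D

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
coeffs_q, rec = compress_block(block)
sparsity = np.mean(coeffs_q == 0)             # fraction of zeroed coefficients
mse = np.mean((block - rec) ** 2)
```

Because the transform is orthonormal, the per-coefficient quantization error (at most q/2) bounds the pixel-domain mean squared error, which is the size/quality trade-off the abstract refers to.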

64 citations


Journal Article
TL;DR: This project serves as a good indication of how important it is to curb car theft in the country; it specifies a car alarm system and the means of sending data to the vehicle's owner by SMS when the alarm is triggered.
Abstract: This project concerns a surveillance system that uses the phone network for vehicle security and tracking. The surveillance is specific to a car alarm system and the means of sending data to the vehicle's owner by SMS when the alarm is triggered. Because the conventional car security system is inefficient, the possibility of a car being stolen is high: the alarm is limited to audible distance. If there were a way to transmit the alarm to the car owner, track the vehicle, and know immediately that the car has been stolen, without audible-range or line-of-sight limits, the system could be upgraded. SMS is a good choice of communication to replace the conventional alarm because it is easy to implement and does not cost much. Although most people know GPS can provide more security for a car, many do not adopt it because of cost: advanced car security systems and their gadgets are expensive, and users must also pay a monthly service fee. Tracking systems were first developed for the shipping industry, which wanted to determine where each vehicle was at any given time. Passive systems were developed first to fulfill these requirements, but they save location information in internal storage that can only be accessed once the vehicle is available, so they cannot be employed for applications that require real-time location information. Active systems were therefore developed to achieve an automatic vehicle location system that can transmit location information in real time. A real-time vehicle tracking system incorporates a hardware device installed in the vehicle (the In-Vehicle Unit) and a remote tracking server.
The information is transmitted to the tracking server using a GSM/GPRS modem on the GSM network, either by SMS or by a direct TCP/IP connection to the server over GPRS. The tracking server also has a GSM/GPRS modem that receives vehicle location information via the GSM network and stores it in a database. This information is available to authorized users of the system through a website over the internet. Keywords: GPS, GPRS, Sensors

50 citations


Journal Article
TL;DR: The proposed system provides new textural information and segments normal, benign, and malignant tumor images, handling even small tumor regions of CT images efficiently and accurately with less computational time.
Abstract: Segmenting soft tissue from brain computed tomography image data is an important but time-consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in the appearance of tumor tissue among different patients and, in many cases, the similarity between tumor and normal tissue. A computer software system is designed for the automatic segmentation of brain CT images. Image analysis methods were applied to 30 normal, 25 benign, and 25 malignant images. Textural features extracted from the gray level co-occurrence matrix of the brain CT images, together with a bidirectional associative memory, were employed in the design of the system. The best classification accuracy was achieved with four textural features and a BAM-type ANN classifier. The proposed system provides new textural information and segments normal, benign, and malignant tumor images, handling even small tumor regions of CT images efficiently and accurately with less computational time. Keywords: Bidirectional Associative Memory classifier (BAM), Computed Tomography (CT), Gray Level Co-occurrence Matrix (GLCM), Artificial Neural Network (ANN).
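The GLCM features the abstract relies on can be computed directly. This is a minimal sketch, not the authors' system: the 4-level image, the horizontal (dx=1, dy=0) offset, and the particular three features are assumptions for illustration.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    # Count co-occurring gray-level pairs at the given offset, then normalize
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(P):
    # Three classic Haralick-style statistics of the co-occurrence matrix
    i, j = np.indices(P.shape)
    return {
        "contrast":    np.sum(P * (i - j) ** 2),
        "energy":      np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img))
```

In a full system such feature vectors, computed per region, would be the input to the BAM classifier.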

10 citations


Journal Article
TL;DR: A hybrid method for completing images of natural scenery, where the removal of a foreground object creates a hole in the image, using both structure inpainting methods and texture synthesis techniques.
Abstract: Image inpainting or image completion refers to the task of filling in the missing or damaged regions of an image in a visually plausible way. Many works on this subject have been proposed in recent years. We present a hybrid method for completion of images of natural scenery, where the removal of a foreground object creates a hole in the image. The basic idea is to decompose the original image into a structure image and a texture image. Reconstruction of each image is performed separately. The missing information in the structure component is reconstructed using a structure inpainting algorithm, while the texture component is repaired by an improved exemplar-based texture synthesis technique. Taking advantage of both structure inpainting methods and texture synthesis techniques, we designed an effective image reconstruction method. A comparison with some existing methods on different natural images shows the merits of our proposed approach in providing high quality inpainted images.

10 citations


Journal Article
TL;DR: This article focuses on the significant features of routing protocols for vehicular ad hoc networks (VANETs) and on performance comparisons between them.
Abstract: A vehicular ad hoc network (VANET) is one of the most promising applications of MANETs, forming an inter-vehicle communication system. In a VANET the nodes are vehicles that move at high speed and must generally communicate quickly and reliably. When an accident occurs on a road or highway, alarm messages must be disseminated, rather than routed ad hoc, to inform all other vehicles. VANET architecture is combined with cellular technology to achieve intelligent communication and to improve road traffic safety and efficiency. In-vehicle computing systems, vehicle-to-vehicle ad hoc networks, and hybrid architectures have special properties such as high mobility, network partitioning, and constrained topology. There is a great deal of research on VANETs for driving services, traffic information services, and user communication and information services. A VANET can perform effective communication by utilizing routing information, and many researchers have contributed to this area. This article mainly focuses on the significant features of routing protocols for VANETs and on performance comparisons between them. Keywords: VANET, Routing Protocol, PBR, CAR, CBR

10 citations


Journal Article
TL;DR: In this paper, the authors examined the technical efficiency of Libyan manufacturing firms over the 2000 to 2008 time period and used the Data Envelopment Analysis (DEA) technique to analyze production efficiency of firms before and after privatization.
Abstract: This paper examined the technical efficiency of Libyan manufacturing firms over the 2000 to 2008 time period. The study used the Data Envelopment Analysis (DEA) technique to analyze the production efficiency of firms before and after privatization. An inefficiency model is estimated to link the inefficiency of the inputs or resources used to produce output to other factors such as ownership structure, to assess the impact of privatization policy on efficiency. The results indicated that the average efficiency score before privatization was 49.5 percent, improving to 62.3 percent after privatization. This increase of 12.8 percentage points indicates that, on average, there is only minor improvement in the technical efficiency of firms after privatization, and the increase was not statistically significant. The results also indicated no evidence that efficiency levels differ before and after privatization, or that efficiency is a function of ownership structure. Keywords: Libya, Data Envelopment Analysis, technical efficiency, ownership, privatization
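The DEA efficiency scores discussed above come from solving one linear program per firm. The abstract gives no model details, so this sketch assumes the standard input-oriented CCR multiplier model and invented toy data (one input, one output, three firms); `scipy.optimize.linprog` does the optimization.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    # Input-oriented CCR multiplier model for DMU j0:
    #   max u.y0  s.t.  v.x0 = 1,  u.yj - v.xj <= 0 for all j,  u, v >= 0
    m, n = X.shape          # m inputs, n decision-making units (firms)
    s = Y.shape[0]          # s outputs
    # decision vector z = [u (s entries), v (m entries)]; linprog minimizes
    c = np.concatenate([-Y[:, j0], np.zeros(m)])
    A_ub = np.hstack([Y.T, -X.T])           # u.yj - v.xj <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j0]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# hypothetical data: firms 0 and 1 sit on the efficient frontier
X = np.array([[2.0, 4.0, 4.0]])   # inputs
Y = np.array([[2.0, 4.0, 2.0]])   # outputs
scores = [ccr_efficiency(X, Y, j) for j in range(3)]
```

A before/after comparison like the paper's would average such scores over the pre- and post-privatization samples.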

8 citations


Journal Article
TL;DR: In this paper, local information is extracted using angle-oriented discrete cosine transforms with certain normalization techniques; neighborhood pixel information is incorporated to increase the reliability of face detection, and face matching is classified using distance measures such as Euclidean, Manhattan, and Cosine distance, whose recognition rates are compared.
Abstract: Face recognition is one of the wide applications of image processing. In this paper a complete face recognition algorithm is proposed. In the proposed algorithm, local information is extracted using angle-oriented discrete cosine transforms together with certain normalization techniques. To increase the reliability of the face detection process, neighborhood pixel information is incorporated into the proposed method. Methods based on the Discrete Cosine Transform (DCT) are well established in access control and security, where their feature extraction capabilities are utilized, but such algorithms have limitations, including poor discriminatory power and an inability to handle large computational loads. Face matching classification for the proposed system is done using various distance measures, namely Euclidean, Manhattan, and Cosine distance, and the recognition rates for the different measures are compared. The proposed method has been successfully tested on an image database acquired under variable illumination and facial expressions. The results show that these matching methods give a high recognition rate compared with other methods. This study also analyzes and compares the results of the proposed angle-oriented face recognition against a threshold-based face detector, to show the robustness of using texture features in the proposed detector. It was verified that face recognition based on textural features can lead to a more efficient and reliable face detection method than KLT (Karhunen-Loeve Transform), a threshold face detector. Keywords: Angle Oriented, Cosine Similarity, Discrete Cosine Transform, Euclidean Distance, Face Matching, Feature Extraction, Face Recognition, Image texture features.
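The three distance measures compared in the abstract are simple to state precisely. A minimal sketch of nearest-neighbor face matching over feature vectors (the 3-element vectors are invented stand-ins for real DCT feature vectors):

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    return np.sum(np.abs(a - b))

def cosine_dist(a, b):
    # 1 - cosine similarity, so that smaller = more similar for all three
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match(probe, gallery, dist):
    # Return the index of the gallery feature vector closest to the probe
    return min(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))

gallery = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.7, 0.7, 0.0])]
probe = np.array([0.9, 0.1, 0.0])
results = {d.__name__: match(probe, gallery, d)
           for d in (euclidean, manhattan, cosine_dist)}
```

Comparing recognition rates, as the paper does, amounts to running `match` with each distance over a labeled test set and counting correct identifications.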

7 citations


Journal Article
TL;DR: An H-bridge inverter topology with a reduced switch count is introduced, which dramatically reduces control-circuit complexity, cost, and lower-order harmonics, and thus effectively reduces total harmonic distortion.
Abstract: In this paper, an H-bridge inverter topology with a reduced switch count is introduced. This technique reduces the number of controlled switches used in a conventional multilevel inverter. To establish a single-phase system, the proposed multilevel inverter requires one H-bridge and a multi-conversion cell, which consists of three equal voltage sources with three controlled switches and three diodes. In the conventional method, twelve controlled switches are used to obtain seven levels; with twelve switches, the harmonics, switching losses, cost, and total harmonic distortion all increase. The proposed topology achieves the same seven levels with only seven controlled switches. It dramatically reduces control-circuit complexity, cost, and lower-order harmonics, and thus effectively reduces total harmonic distortion. Keywords: Cascaded Multilevel Inverter, H-bridge Inverter, Total Harmonic Distortion, Sinusoidal Pulse Width Modulation, Insulated Gate Bipolar Transistor
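The link between the number of output levels and total harmonic distortion that motivates multilevel designs can be illustrated numerically. This is an illustrative sketch, not the authors' simulation: equal voltage steps, ideal switching, and nearest-level (rather than SPWM) modulation are all assumptions.

```python
import numpy as np

def staircase(levels, n=4096):
    # Quantize one sine period to the nearest of `levels` equal output levels
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ref = np.sin(t)
    steps = np.linspace(-1, 1, levels)
    idx = np.abs(ref[:, None] - steps[None, :]).argmin(axis=1)
    return steps[idx]

def thd(signal):
    # Total harmonic distortion from the FFT: harmonic energy / fundamental
    spec = np.abs(np.fft.rfft(signal))
    return np.sqrt(np.sum(spec[2:] ** 2)) / spec[1]

thd7 = thd(staircase(7))   # seven-level output, as in the proposed inverter
thd3 = thd(staircase(3))   # three-level output for comparison
```

The seven-level staircase tracks the sine reference far more closely, which is why its harmonic content is lower.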

7 citations


Journal Article
TL;DR: The algorithms that predict a vehicle's entire route as it is driven are studied; such predictions are useful for giving the driver warnings about upcoming traffic hazards or information about upcoming points of interest, including advertising.
Abstract: Vehicle-to-vehicle communication is a concept that has been studied extensively in recent years. Vehicles equipped with devices capable of short-range wireless connectivity can form a particular mobile ad hoc network, called a Vehicular Ad-hoc NETwork (VANET). The users of a VANET, drivers or passengers, can be provided with useful information and with a wide range of interesting services. Route prediction is the missing piece in several proposed ideas for intelligent vehicles. In this paper, we study algorithms that predict a vehicle's entire route as it is driven. Such predictions are useful for giving the driver warnings about upcoming traffic hazards or information about upcoming points of interest, including advertising. This paper describes route prediction algorithms using the Markov Model, Hidden Markov Model (HMM), and Variable-order Markov Model (VMM). Keywords: VANET, MANET, ITS, GPS, HMM, VMM, PST
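The simplest of the models surveyed, a first-order Markov model, can be sketched in a few lines: count road-segment transitions from past trips, then predict the most frequent successor. The trip data and segment names are invented for illustration.

```python
from collections import Counter, defaultdict

def train(routes):
    # Count road-segment transitions observed in past trips
    counts = defaultdict(Counter)
    for route in routes:
        for a, b in zip(route, route[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, segment):
    # First-order Markov prediction: most frequent successor segment
    nxt = counts.get(segment)
    return nxt.most_common(1)[0][0] if nxt else None

trips = [["home", "main_st", "highway", "office"],
         ["home", "main_st", "mall"],
         ["home", "main_st", "highway", "gym"]]
model = train(trips)
```

Predicting an *entire* route, as the paper discusses, chains such one-step predictions (or uses the HMM/VMM variants, which condition on longer histories).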

5 citations


Journal Article
TL;DR: In this paper, it is proposed that documents be digitally signed before being sent, as an authentication measure.
Abstract: In an organization with its own private network, it is not enough to transfer documents from one person to another; it must also be ensured that each document retains its integrity, confirms the authenticity of the sender, provides privacy where required, and is safe against repudiation. To address these needs, it is proposed to establish a public key infrastructure (PKI) for digital signatures within the organization. A public key infrastructure provides robust and rigorous security measures to protect user data and credentials. In this paper, it is proposed that documents be digitally signed before being sent, as an authentication measure. The trust between two parties and the digital signatures are reinforced by the components of a PKI, namely public key cryptography, a Certificate Authority (CA), certificates, and a Certificate Repository (CR); a simple application demonstrating this is also attempted. Keywords: OpenSSL, Public Key Infrastructure, Digital signatures, Certificate Authority, Certificate Repository.
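The sign-then-verify flow the abstract proposes can be shown with a deliberately tiny RSA example. This is a teaching sketch only: the primes are toy-sized and the scheme is insecure; a real deployment would use an OpenSSL-backed library with proper padding and certificates, as the paper's PKI does.

```python
import hashlib

# Toy RSA parameters (tiny textbook primes -- illustration only, never use)
p, q = 61, 53
n, e = p * q, 17                     # n = 3233, public exponent e
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(message: bytes) -> int:
    # Sign a digest of the message with the private key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the digest with the public key and compare
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"contract.pdf contents")
```

In the PKI described by the paper, the receiver would obtain the signer's public key `(n, e)` from a CA-issued certificate rather than out of band.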

5 citations


Journal Article
TL;DR: This research gives an idea of reducing development time and effort using clone detection and a clustering process, and shows how this is useful in software development and maintenance.
Abstract: Software module reusability can play an unbeatable role in increasing software productivity, and code clones can be used as one of the parameters for forming clusters of software modules. Cluster analysis is a scheme for cataloging data in which data elements are screened into groups, called clusters, that represent collections of elements grouped by their similarities or dissimilarities. The clustering approach is an important tool in decision making and an effective creativity technique for generating ideas and obtaining solutions. Software development and maintenance are big challenges for the survival of a software company in the market. This research gives an idea of reducing development time and effort using clone detection and a clustering process. Different methods have been applied in this research, such as Hierarchical Clustering (HC) and Non-Hierarchical Clustering (NHC), for software module classification. We show how this research is useful in software development and maintenance. The experiments were done using 13 C++ programs. Keywords: Lines of Code (LOC), Hierarchical Clustering Algorithm (HCA), Non-Hierarchical Clustering Algorithm (NHCA)
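The clone-detection-plus-clustering idea can be sketched minimally: measure similarity between module sources and group modules that exceed a threshold. Everything here is an assumption for illustration: the token-set Jaccard similarity (a crude clone signal), the 0.6 threshold, the single-link grouping, and the module snippets themselves.

```python
def similarity(a: str, b: str) -> float:
    # Jaccard similarity over whitespace tokens -- a crude clone signal
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def cluster(modules, threshold=0.6):
    # Single-link grouping: a module joins a cluster if it is similar
    # enough to any existing member; otherwise it starts a new cluster
    clusters = []
    for name, code in modules.items():
        for c in clusters:
            if any(similarity(code, modules[m]) >= threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

modules = {
    "sort_a.cpp": "for i in range n swap if arr i > arr j",
    "sort_b.cpp": "for i in range n swap if arr i > arr j print",
    "io.cpp":     "open file read lines close file",
}
groups = cluster(modules)
```

A reuse tool would then offer each cluster as a candidate shared module, which is the development-time saving the abstract claims.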

Journal Article
TL;DR: In this paper, the maximal ratio receiver combining (MRRC) diversity technique is evaluated to mitigate the effect of fading in IDMA scheme employing random interleaver and prime interleavers with single transmit two receiving antennas in low rate coded environment.
Abstract: The antenna diversity mechanism is well established for reducing the probability of communication failures (outages) caused by fades. In receiver diversity, multiple antennas are employed at the receiver side; in transmitter diversity, multiple antennas are an integral part of the transmitter section. In this paper, the Maximal Ratio Receiver Combining (MRRC) diversity technique is evaluated to mitigate the effect of fading in an IDMA scheme employing random and prime interleavers, with one transmit and two receive antennas in a low-rate coded environment. For the performance evaluation, the channel is assumed to be a Rayleigh multipath channel with BPSK modulation. Simulation results demonstrate a significant improvement in the BER performance of IDMA with MRRC diversity for both the prime and random interleavers; it has also been observed that the BER performance of the prime interleaver is similar to that of the random interleaver, with reduced bandwidth and memory requirements at the transmitter and receiver. Keywords: Multipath Fading, MRRC diversity, Multi user detection, Interleave-Division Multiple Access (IDMA) Scheme, Random Interleaver, Prime Interleaver
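The bandwidth/memory advantage of the prime interleaver mentioned above comes from it being generated, not stored. A minimal sketch (the length 16 and seed p = 7 are arbitrary choices; the common form pi(i) = (p*i) mod n is assumed here, valid whenever gcd(p, n) = 1):

```python
import math
import random

def prime_interleaver(n, p=7):
    # pi(i) = (p * i) mod n is a permutation whenever gcd(p, n) = 1;
    # only the single seed p must be shared, not the whole table
    assert math.gcd(p, n) == 1
    return [(p * i) % n for i in range(n)]

def random_interleaver(n, seed=0):
    # A random interleaver must be stored/transmitted in full
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

n = 16
prime_pi = prime_interleaver(n)
rand_pi = random_interleaver(n)
```

Both produce valid permutations for user-specific chip interleaving; the simulation result in the paper is that their BER behavior is similar, so the cheaper-to-store prime interleaver is preferable.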

Journal Article
TL;DR: Digital implementations of newly developed multiscale representation systems, namely the Curvelet, Ridgelet, and Contourlet transforms, are used for denoising images; applied to the problem of restoring an image from noisy data, they are compared with well-established methods based on the thresholding of wavelet coefficients.
Abstract: Image reconstruction is one of the most important areas of image processing. Many scientific experiments produce datasets corrupted with noise, either because of the data acquisition process or because of environmental effects, so denoising is necessary as a first pre-processing step in analyzing such datasets. There are several different approaches to denoising images; despite similar visual effects, there are subtle differences between denoising, de-blurring, smoothing, and restoration. Although the discrete wavelet transform (DWT) is a powerful tool in image processing, it has three serious disadvantages: shift sensitivity, poor directionality, and lack of phase information. To overcome these disadvantages, a method is proposed based on the Curvelet transform, which has a very high degree of directional specificity. It provides approximate shift invariance and directionally selective filters while preserving the usual properties of perfect reconstruction and computational efficiency with well-balanced frequency responses, properties that the traditional wavelet transform lacks. Curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. The Curvelet reconstruction does not contain the disturbing artifacts along edges that are seen in wavelet reconstruction. Digital implementations of the newly developed multiscale representation systems, namely the Curvelet, Ridgelet, and Contourlet transforms, are used for denoising the image. We apply these digital transforms to the problem of restoring an image from noisy data and compare our results with those obtained from well-established methods based on the thresholding of wavelet coefficients.
Keywords: Curvelets Transform, Discrete Wavelet Transform, Ridgelet Transform, Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE).
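The wavelet-thresholding baseline the paper compares against can be sketched in 1-D. This is illustrative only: a single-level Haar transform stands in for the paper's unspecified wavelet, and the soft threshold of 0.2 and the noise level are invented for the demonstration.

```python
import numpy as np

def haar1(x):
    # One-level 1-D Haar transform: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar1(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, t):
    # Soft-threshold the detail coefficients, keep the smooth part intact
    a, d = haar1(x)
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    return ihaar1(a, d)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 128))
noisy = clean + 0.1 * rng.standard_normal(128)
rec = denoise(noisy, t=0.2)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_rec = np.mean((rec - clean) ** 2)
```

Curvelet-domain thresholding, the paper's contribution, follows the same shrink-the-coefficients pattern but in a transform whose basis elements are elongated and oriented, which is why it preserves edges better.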

Journal Article
TL;DR: The core components of mobile RFID, advantages and its applications in scenario of smart networks are described, including security, network architecture, operation scenario, and code resolution mechanism.
Abstract: RFID (radio-frequency identification) is basically a wireless communication technology, within the L1 (Layer 1, the physical layer of the OSI 7-layer Reference Model) and L2 scopes, between an RFID tag and a reader. The RFID reader reads the code in the RFID tag and interprets it by communicating with the IS (information services) server via a proper communication network. This is the typical architecture defined by EPCglobal (EPC: Electronic Product Code). RFID networks need to provide value-added services in order to give better visibility into inventory movement across a supply chain, or in closed-loop applications such as asset tracking or work-in-progress tracking. The RFID reader can be stationary or mobile, and a mobile RFID reader affords more applications than a stationary one. Mobile RFID is a newly emerging technology that uses the mobile phone as an RFID reader, integrating RFID and ubiquitous sensor network infrastructure with mobile communication and the wireless internet to provide new, valuable services to the user. Mobile RFID enables businesses to provide new services to mobile customers by securing services and transactions from the end user to a company's existing e-commerce and IT systems. In this paper, I describe the core components of mobile RFID, its advantages, and its applications in smart network scenarios. Although there are several types of mobile RFID readers on the market, I focus on mobile RFID technology with several positive features, including security, network architecture, operation scenarios, and the code resolution mechanism. Keywords: EPC network, RFID, Mobile RFID, Smart RFID network

Journal Article
TL;DR: In this article, the performance of coded and uncoded interleave division multiple access (IDMA) systems with tree-based interleaver and random interleavers is compared.
Abstract: In recent days, on the horizon of the wireless world, a newly proposed multiple access scheme known as Interleave-Division Multiple-Access (IDMA) has made a remarkable impact. Researchers all over the world are working hard to establish the scheme as a potential candidate for 4th-generation wireless communication systems. This paper is concerned with the performance enhancement of iterative IDMA systems in coded and uncoded environments. The performance of an IDMA system can be improved by optimized power allocation techniques. Based on an optimized power allocation technique, we compare the performance of coded and uncoded IDMA systems with random and tree-based interleavers. During the simulation, it has been observed that the tree-based interleaver demonstrates a bit error rate (BER) performance similar to that of the random interleaver, while on other fronts, including bandwidth and memory requirements at the transmitter and receiver ends, it outperforms the random interleaver. Keywords: Tree Based Interleaver, Random Interleaver, IDMA, linear programming, power allocation, BER.

Journal Article
TL;DR: The framework of the Affective Decision Making Engine outlined here provides a blueprint for creating software agents that emulate psychological affect when making decisions in complex and dynamic problem environments.
Abstract: The framework of the Affective Decision Making Engine outlined here provides a blueprint for creating software agents that emulate psychological affect when making decisions in complex and dynamic problem environments. The influence of affect on the agent's decisions is mimicked by measuring the correlation of feature values, possessed by objects and/or events in the environment, against the outcome of goals that are set for measuring the agent's overall performance. The use of correlation in the Affective Decision Making Engine provides a statistical justification for preference when prioritizing goals, particularly when it is not possible to realize all agent goals. The simplification of the agent algorithm retains the function of affect for summarizing feature-rich dynamic environments during decision making. Keywords: Affective decision making, correlative adaptation, affective agents
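The correlation-based preference mechanism described above can be sketched concretely. The episode history, feature values, and goal outcomes below are hypothetical; the sketch assumes Pearson correlation (via `numpy.corrcoef`) between each feature and past goal success, used as the "affect" weight when scoring new options.

```python
import numpy as np

# Hypothetical history: rows = past episodes, columns = feature values
# observed in the environment; y = goal outcome (1 = goal achieved)
X = np.array([[0.9, 0.1],
              [0.8, 0.3],
              [0.2, 0.7],
              [0.1, 0.9]])
y = np.array([1.0, 1.0, 0.0, 0.0])

# Correlation of each feature with goal success acts as its affect weight
weights = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

def prefer(options):
    # Pick the option whose features correlate best with past goal success
    return max(options, key=lambda f: float(np.dot(weights, f)))

best = prefer([np.array([0.85, 0.2]), np.array([0.15, 0.8])])
```

This captures the statistical justification for preference the abstract mentions: features historically associated with goal success raise an option's priority, without modeling the environment in full.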

Journal Article
TL;DR: A novel Selective Mapping (SLM) PAPR reduction technique based on a random-like Irregular Repeat Accumulate encoder, for better performance in both PAPR and Bit Error Rate.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is a promising technique for high-data-rate and reliable communication over fading channels. The main implementation drawback of this system is the possibility of a high Peak to Average Power Ratio (PAPR). In this paper, we develop a novel Selective Mapping (SLM) PAPR reduction technique. In the proposed scheme, the alternative symbol sequences are generated by modulo-2 addition of the data with the rows of a cyclic Hadamard matrix of the same size, inserting the selected row's number to avoid transmitting any side information, and using a random-like Irregular Repeat Accumulate (IRA) encoder for better performance in both PAPR and Bit Error Rate (BER). Keywords: IRA Codes, OFDM, PAPR, SLM method.
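The core SLM idea, generating several candidate OFDM symbols and transmitting the one with the lowest PAPR, can be sketched as follows. This is not the paper's scheme: random QPSK phase rotations stand in for the cyclic Hadamard rows, the 64-subcarrier size and 8 candidates are arbitrary, and no IRA coding is included.

```python
import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a time-domain signal, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(symbols, n_candidates=8, seed=0):
    # Selective mapping: rotate subcarrier phases several ways and keep
    # the candidate with the lowest PAPR (identity rotation tried first)
    rng = np.random.default_rng(seed)
    best_x, best = None, np.inf
    for k in range(n_candidates):
        phases = (np.ones(symbols.size) if k == 0
                  else rng.choice([1, -1, 1j, -1j], size=symbols.size))
        x = np.fft.ifft(symbols * phases)
        if papr_db(x) < best:
            best_x, best = x, papr_db(x)
    return best_x, best

rng = np.random.default_rng(1)
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)  # QPSK
baseline = papr_db(np.fft.ifft(sym))
_, reduced = slm(sym)
```

The paper's refinement is how the chosen candidate is identified at the receiver: embedding the selected row number in the data avoids the explicit side-information channel that naive SLM needs.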

Journal Article
TL;DR: In this article, the authors characterize connected vertex magic total labeling graphs through ideals in topological spaces, where an ideal space is a triplet (X, τ, I) with X a nonempty set, τ a topology, and I an ideal of subsets of X.
Abstract: A graph with v vertices and e edges is said to be a vertex magic total labeling graph if there is a one-to-one map taking the vertices and edges onto the integers 1, 2, ..., v+e with the property that the sum of the label on a vertex and the labels of its incident edges is constant, independent of the choice of vertex. An ideal space is a triplet (X, τ, I), where X is a nonempty set, τ is a topology on X, and I is an ideal of subsets of X. In this paper we characterize connected vertex magic total labeling graphs through ideals in topological spaces. Keywords: Vertex magic total labeling graphs, ideal, topological space, Euler graph.
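The labeling definition above is easy to check mechanically. A minimal sketch (the triangle K3 and its labeling are a standard small example, with magic constant 12 here; the checker itself is not from the paper):

```python
def is_vertex_magic(vertices, edges, label):
    # Labels must be a bijection onto 1..(v+e), and every vertex must have
    # the same weight = its own label + the labels of its incident edges
    items = list(vertices) + list(edges)
    if sorted(label[x] for x in items) != list(range(1, len(items) + 1)):
        return False
    weights = {v: label[v] + sum(label[e] for e in edges if v in e)
               for v in vertices}
    return len(set(weights.values())) == 1

# K3 (triangle): each vertex weight is 12, so the labeling is vertex magic
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]
label = {"a": 1, "b": 2, "c": 3,
         ("a", "b"): 6, ("b", "c"): 4, ("a", "c"): 5}
```

For example, vertex "a" has weight 1 + 6 + 5 = 12, and "b" and "c" also total 12. The paper's contribution is characterizing which connected graphs admit such labelings via topological ideals, not this brute-force check.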

Journal Article
TL;DR: The author suggests a model for web service selection for a library system that selects search material related only to books, in hard copy, soft copy, read-only, and printable forms.
Abstract: Web services provide promising solutions, tailored to needs and requirements in a fast and flexible manner, for information sharing among different people and businesses. The major research issue in web services is the selection process, which is difficult and cumbersome because the increasing number of services cannot meet or fulfill all the non-functional requirements, such as performance, efficiency, reliability, and sensitivity. For web service selection for a library system, the author suggests a model that selects search material related only to books, in hard copy, soft copy, read-only, and printable forms. The author suggests an agent for the selection of these books from the web. When anyone wants to search for a specific book on the web, this service agent shows all the websites where the book is available. The agent generates a list of books together with the user's needs and non-functional requirements; on the basis of these non-functional requirements, the user can pick a book from the document provided by the service agent. Keywords: Service Agent, Non-functional requirement, Web services

Journal Article
TL;DR: A new hybrid algorithm for mining multilevel association rules called AC Tree i.e., AprioriCOFI tree was developed, which helps in mining association rules at multiple concept levels.
Abstract: In recent years, the discovery of association rules among itemsets in large databases has become popular and has gained attention as a research area. Several association rule mining algorithms have been developed for mining frequent itemsets. In this paper, a new hybrid algorithm for mining multilevel association rules, called the AC tree (AprioriCOFI tree), is developed. This algorithm helps in mining association rules at multiple concept levels. The proposed algorithm works faster than traditional association rule mining algorithms and is efficient in mining rules from large text documents. Keywords: Association rules, Apriori, FP tree, COFI tree, Concept hierarchy.
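The Apriori component of the hybrid can be sketched in its plain single-level form (the transactions are invented, and the concept-hierarchy and COFI-tree parts of the paper's AC tree are not modeled):

```python
def apriori(transactions, min_support=2):
    # Level-wise frequent-itemset mining: candidates for level k+1 are
    # unions of frequent k-itemsets that differ in exactly one item
    transactions = [frozenset(t) for t in transactions]

    def support(s):
        return sum(1 for t in transactions if s <= t)

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    current = [frozenset([i]) for i in items]
    while current:
        level = {s: support(s) for s in current if support(s) >= min_support}
        frequent.update(level)
        current = list({a | b for a in level for b in level
                        if len(a | b) == len(a) + 1})
    return frequent

txns = [{"bread", "milk"}, {"bread", "butter"},
        {"bread", "milk", "butter"}, {"milk"}]
freq = apriori(txns)
```

Multilevel mining, the paper's focus, runs this kind of search at each level of a concept hierarchy (e.g. "dairy" before "milk"), so that rules too rare at a specific level can still surface at a more general one.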

Journal Article
TL;DR: Although CBCS is a service dedicated to one organization, an existing CBCS can also be dedicated to other organizations when development companies follow the CBDAMO (Cloud Based Dedicated Customized Application for Multiple Organizations) technique.
Abstract: "Desktop applications should run over the cloud environment" is the slogan of organizations familiar with cloud computing. An organization's members are responsible not only for the smooth running of desktop applications but also bear the burden of running database servers and backup and recovery devices, along with the time and cost of the human effort involved. CBCS (Cloud Based Custom Software) is the solution to such issues. Development companies are responsible for developing an organization's CBCS. Although CBCS is a service dedicated to one organization, an existing CBCS can also be dedicated to other organizations when development companies follow the CBDAMO (Cloud Based Dedicated Customized Application for Multiple Organizations) technique. This six-layer technique gives development companies a smooth way to reuse an existing CBCS for a new CBCS. Keywords: Cloud Computing, CBCS, DBCS, CBDAMO, Development Companies

Journal Article
TL;DR: 3D-DCT video compression algorithm takes a full-motion digital video stream and divides it into groups of 8 frames, considered as a three-dimensional image, which includes 2 spatial components and one temporal component.
Abstract: Image compression addresses the problem of reducing the amount of data required to represent a digital image by removing redundant data; the redundancy exploited here is psychovisual redundancy. The 3D-DCT video compression algorithm takes a full-motion digital video stream and divides it into groups of 8 frames. Each group of 8 frames is treated as a three-dimensional image with two spatial components and one temporal component. Each frame in the group is divided into 8x8 blocks (as in JPEG), forming 8x8x8 cubes, and each cube is then independently encoded using the 3D-DCT pipeline: 3D-DCT, quantizer, and entropy encoder. Image compression minimizes the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level; the reduction in file size allows more images to be stored in a given amount of disk or memory space. Keywords: 2D DCT, 3D DCT, JPEG, GIF, CR

Journal Article
TL;DR: The aim is to show that the miniDES algorithm is efficient and sufficient to provide security for RFID-based systems, with no need for very complex cryptographic algorithms that demand high computational power.
Abstract: Radio frequency identification (RFID) is a generic term for a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. Unlike bar-code technology, RFID does not require contact or line of sight for communication; RFID data can be read through the human body, clothing, and non-metallic materials. RFID systems technically consist of RFID tags, readers, communication protocols, and information systems, which together enable the collection of data on the tagged object or person. In most of today's RFID systems, the data on the tag is accessible to anyone able to operate an RFID reader, so how well are these RFID tags protected? Data read from tags should be readable only by authenticated people. The challenge in providing security for low-cost RFID tags is that they are computationally weak systems, unable to perform even basic symmetric-key cryptographic operations (Ari Juels, 2004). Security for RFID systems has to start at the base of the technology: information on the tags has to be stored securely using a lightweight cryptographic algorithm. The processing capabilities of many RFID-based embedded systems are easily overwhelmed by the computational demands of security processing, leading to failures in sustaining required data rates or numbers of connections (Srivaths Ravi, Anand Raghunathan, Paul Kocher, and Sunil Hattangady). In this paper, I explore whether the miniDES symmetric-key algorithm is suitable for RFID tag security, deployed in a bike renting system, and consider the type of security obtainable in RFID-based devices with a small amount of rewritable memory but very limited computing capability.
My aim is to show that the miniDES algorithm is efficient and sufficient to provide security for RFID-based systems, with no need for very complex cryptographic algorithms that require high computational power. The automation of the bike renting system also definitely enhances the performance of the renting process. Keywords: Radio frequency Identification, Bike renting, miniDES, embedded systems, cryptography, smart card systems
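The abstract does not specify miniDES internals, but DES-like lightweight ciphers share the Feistel structure, which the following toy sketch illustrates. Everything here is hypothetical: the 32-bit block, the multiply-and-XOR round function, and the four round keys are invented, and the construction is for illustration only, not for real tag security.

```python
def round_fn(half: int, key: int) -> int:
    # Toy round function mixing a 16-bit half-block with a round key
    return ((half * 31) ^ key) & 0xFFFF

def feistel(block: int, keys, decrypt=False):
    # Generic Feistel network on a 32-bit block; decryption runs the
    # identical rounds with the key schedule reversed
    L, R = block >> 16, block & 0xFFFF
    for k in (reversed(keys) if decrypt else keys):
        L, R = R, L ^ round_fn(R, k)
    return (R << 16) | L    # pack R first to undo the final swap

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]   # hypothetical round keys
tag_id = 0xDEADBEEF                        # hypothetical tag serial number
ct = feistel(tag_id, keys)                 # value stored on the tag
pt = feistel(ct, keys, decrypt=True)       # recovered by the reader
```

The appeal of the Feistel structure for weak RFID hardware is that the round function need not be invertible and encryption and decryption share the same circuit, keeping gate count and power low, which is the efficiency argument the paper makes for miniDES.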