
Showing papers in "International Journal of Advanced Research in Computer and Communication Engineering in 2015"



Journal ArticleDOI
TL;DR: To provide data security and user authentication, a technique is proposed that implements two concepts: the first is an identity-based signature (IBS) for verification of the user, generated by the verifier; in the second, a key is XOR-ed with the data to obtain the cipher, followed by a binary-level technique for encryption and decryption of the original message.
Abstract: Secure and efficient data transmission is a critical issue for cluster-based wireless sensor networks (WSNs). In cluster-based WSNs, authentication of users is a very important issue, so by authenticating the sending user and the destination user we can achieve secure and efficient data transmission over CWSNs. To provide data security and user authentication we propose a technique that implements two concepts. The first is an identity-based signature (IBS) for verification of the user, generated by the verifier; in the second, a key is XOR-ed with the data to obtain the cipher, and a binary-level technique is then applied for encryption and decryption of the original message. The binary-level technique converts the plain text into binary form, splits the data into blocks, and assigns values to them based on an identification mark (IM) technique that depends on the length of the binary digits; these are divided into two levels, the 1st level of 2 bits and the 2nd level of 4 bits. At the receiving user the cipher text is decrypted using the reverse technique, and the destination user obtains the original message. These techniques improve efficiency while reducing security overhead and energy consumption.
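
A minimal sketch of the XOR keying and binary block-splitting steps this abstract describes, assuming a repeating byte key and 4-bit blocks; the paper's exact IM value-assignment rules are not given, so the helpers below are illustrative only:

```python
# Sketch of the XOR keying and binary-level splitting described above.
# Key handling and block sizes are illustrative assumptions, not the
# authors' exact IM scheme.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with a repeating key; applying it twice decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_binary_blocks(data: bytes, block_bits: int = 4) -> list:
    """Convert data to a bit string and split it into fixed-size blocks
    (the abstract's 2-bit and 4-bit levels)."""
    bits = "".join(format(b, "08b") for b in data)
    return [bits[i:i + block_bits] for i in range(0, len(bits), block_bits)]

key = b"secret"
cipher = xor_cipher(b"hello sensor", key)
blocks = to_binary_blocks(cipher, block_bits=4)
assert xor_cipher(cipher, key) == b"hello sensor"  # the reverse technique recovers the message
```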

45 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that EG-RAODV significantly outperforms AODV, achieving a higher packet delivery ratio, a lower routing request ratio, fewer link failures, and the lowest average end-to-end delay while maintaining reasonable routing control overhead.
Abstract: Vehicular ad hoc networks (VANETs) are a special form of wireless networks formed by vehicles communicating among themselves on roads, including communication among vehicles and between vehicles and roadside units. However, due to high mobility and frequent changes of the network topology, communication links are highly vulnerable to disconnection in VANETs. This paper extends the well-known ad hoc on-demand distance vector (AODV) routing protocol with evolving graph theory to propose the reliable routing protocol EG-RAODV. Simulation results demonstrate that EG-RAODV significantly outperforms AODV, achieving a higher packet delivery ratio, a lower routing request ratio, fewer link failures, and the lowest average end-to-end delay while maintaining reasonable routing control overhead.

43 citations


Journal ArticleDOI
TL;DR: The implementation of image processing operations on the Raspberry Pi is presented and used in the real-time application of MAVs; images captured by MAVs contain unwanted artifacts due to atmospheric conditions, so it is necessary to remove the noise present in MAV images.
Abstract: Image processing is used in various techniques today; this paper presents the implementation of image processing operations on the Raspberry Pi. The Raspberry Pi is a basic embedded system and, being a low-cost single-board computer, is used to reduce the complexity of systems in real-time applications. The platform is mainly programmed in Python, and the Raspberry Pi provides a Camera Serial Interface (CSI) slot to connect the Raspberry Pi camera. Here, dark and low-contrast images captured by the Raspberry Pi camera module are enhanced in order to identify a particular region of the image. This concept is used in the real-time application of MAVs, which capture images and videos through the Raspberry Pi camera module thanks to their credit-card size and low weight. However, images captured by MAVs contain unwanted artifacts due to atmospheric conditions; hence it is necessary to remove the noise present in MAV images.
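
The abstract does not name its enhancement method, so the sketch below uses histogram equalization plus non-local-means denoising in OpenCV as one plausible way to brighten a dark, low-contrast capture and suppress noise; "frame.jpg" is a placeholder for a Pi camera capture:

```python
# One plausible enhancement pipeline for a dark, low-contrast capture;
# the paper's exact method is not specified. 'frame.jpg' is a placeholder.
import cv2

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)     # e.g. a Pi camera capture
if img is not None:
    enhanced = cv2.equalizeHist(img)                    # stretch the intensity histogram
    denoised = cv2.fastNlMeansDenoising(enhanced, h=10) # suppress noise artifacts
    cv2.imwrite("enhanced.jpg", denoised)
```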

36 citations


Journal ArticleDOI
TL;DR: A survey of navigation systems for visually impaired people is presented, highlighting various technologies with their practical usefulness, design and working challenges, and the requirements of blind people, to provide a better understanding and identify important research directions in this increasingly important social area for future research.
Abstract: Mobility of visually impaired people is restricted by their inability to recognize their surroundings. According to the World Health Organization (WHO), in 2012 there were over 285 million visually impaired people out of a global population of 7 billion, of whom 39 million were totally blind, including 19 million children (below 15 years). This means that someone in our world goes blind every five seconds, and a child every minute. Over 90 percent of blind children receive no schooling. According to a recent survey, India now has the world's largest number of blind people: of India's population of 120 crore, 8.90 crore people are visually impaired, and 90% of them cannot travel independently. In this paper, we present a survey of navigation systems for visually impaired people, highlighting various technologies with their practical usefulness, design and working challenges, and the requirements of blind people. The aim of this paper is to provide a better understanding and identify important research directions in this increasingly important social area for future research.

30 citations


Journal ArticleDOI
TL;DR: This paper reviews, analyses and compares the features of existing cloud computing modelling and simulation tools, and suggests ways to improve their quality and accuracy.
Abstract: Cloud computing is the sharing of computer hardware and software resources over the internet so that anyone connected to the internet can access them as a service in a seamless way. As we move more and more towards applying this newly emerging technology, it is essential to study, evaluate and analyse the performance, security and other related problems that might be encountered in cloud computing. Since it is not feasible to directly analyse the behaviour of the cloud on such problems using real hardware and software resources, due to the high costs, modelling and simulation have become very powerful tools to cope with these issues. In this paper, we review, analyse and compare the features of existing cloud computing modelling and simulation tools.

29 citations


Journal ArticleDOI
TL;DR: This paper evaluates a facial representation based on statistical local features, Local Binary Patterns (LBP), for facial expression recognition, and finds that LBP features are effective and efficient for facial expression recognition.
Abstract: LBP is a very powerful method to describe the texture and shape of a digital image, and is therefore well suited for feature extraction in face recognition systems. A face image is first divided into small regions from which LBP histograms are extracted and then concatenated into a single feature vector. This vector forms an efficient representation of the face and can be used to measure similarities between images. Automatic facial expression analysis is a fascinating and challenging problem, with important applications in areas such as human-computer interaction and data-driven animation. Deriving a facial representation from original face images is an essential step for a successful facial expression recognition method. In this paper, we evaluate a facial representation based on statistical local features, Local Binary Patterns, for facial expression recognition. Various machine learning methods are systematically examined on several databases. Broad experiments illustrate that LBP features are effective and efficient for facial expression recognition.
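
A brief sketch of the region-histogram pipeline the abstract outlines, using scikit-image's uniform LBP; the 7x7 grid and (P, R) = (8, 1) parameters are common choices in the LBP literature, not necessarily the paper's:

```python
# Divide the face into regions, compute a uniform-LBP histogram per region,
# and concatenate into one feature vector. Grid and LBP parameters are
# illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(face: np.ndarray, grid=(7, 7), P=8, R=1) -> np.ndarray:
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + "other" bin
    h, w = face.shape
    gh, gw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = lbp[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins))
            hists.append(hist / max(hist.sum(), 1))  # normalized region histogram
    return np.concatenate(hists)                     # single feature vector

vec = lbp_feature(np.random.randint(0, 256, (112, 92)).astype(np.uint8))
print(vec.shape)   # (7*7*10,) = (490,)
```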

27 citations


Journal ArticleDOI
TL;DR: In this paper, three schemes (precompensation, postcompensation and symmetrical-compensation) of dispersion compensation with DCF are proposed, which are compared in terms of Q-factor, BER, eye height and threshold value at the receiver end.
Abstract: In this paper, dispersion compensating fibers (DCF) are used to compensate the positive dispersion accumulated over the length of the fiber, at 10 Gbit/s over 250 km of optical fiber with 50 km of DCF. Three schemes of dispersion compensation with DCF (pre-compensation, post-compensation and symmetrical compensation) are proposed. The simulated transmission system has been analyzed on the basis of different parameters using the OptiSystem 7.0 simulator. The results of the three dispersion compensation methods are compared in terms of four parameters, Q-factor, BER, eye height and threshold value, investigated at the receiver end. Further, it has been observed that the system needs proper matching between the EDFA gain and the length of the fiber for optimum performance.
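
As a worked check of the setup above, the standard dispersion-matching condition D_smf * L_smf + D_dcf * L_dcf = 0 reproduces the paper's 250 km / 50 km split if one assumes typical coefficients of +17 ps/(nm*km) for the transmission fiber and -85 ps/(nm*km) for the DCF (the abstract does not quote its coefficients):

```python
# Worked check of the dispersion-matching condition behind the paper's setup.
# The dispersion coefficients are typical textbook values, not taken from the paper.
D_smf = 17.0     # ps/(nm*km), standard single-mode fiber (assumed)
L_smf = 250.0    # km, transmission span from the abstract
D_dcf = -85.0    # ps/(nm*km), assumed DCF coefficient

L_dcf = -D_smf * L_smf / D_dcf
print(f"required DCF length: {L_dcf:.0f} km")        # 50 km, matching the abstract
residual = D_smf * L_smf + D_dcf * L_dcf
print(f"residual dispersion: {residual:.1f} ps/nm")  # 0.0 when fully compensated
```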

26 citations


Journal ArticleDOI
TL;DR: Two cloud simulators, CloudSim and CloudAnalyst, are presented with an overview of each so that it can easily be decided which one is suitable for a particular research topic; a survey of service broker policies, their issues and available solutions is also presented.
Abstract: Cloud computing is one of the most promising computing fields and has given a new vision to computing. Cloud computing has opened a door as a new model for hosting and delivering services over the Internet, and its main aim is to provide resources as services to the client. In the newer concept of federated cloud computing, multiple datacenters are distributed over different regions. Since the evolution of cloud computing, load balancing, energy management, VM migration, brokerage policies, cost modelling and security issues have been popular research topics in the field. Deployment of a real cloud environment for testing or for commercial use is very costly; cloud simulators help to model various cloud applications and make them easy to analyse. In this survey, two cloud simulators, CloudSim and CloudAnalyst, are presented with an overview of each so that it can easily be decided which one is suitable for a particular research topic. A survey of service broker policies, their issues and available solutions is also presented, because there has always been a requirement to select the appropriate datacenter so that further processing of a request is carried out efficiently and with the least response time. The issue of selecting the appropriate datacenter, known as the service broker policy, is therefore particularly important.

23 citations


Journal ArticleDOI
TL;DR: This paper presents a summary of unstructured data analysis for beginners and people from academia who are interested in analysing unstructured data to extract knowledge that can improve business processes and performance.
Abstract: Recent years have seen the ability to gather massive amounts of data in a large number of domains. As data is collected at an unprecedented rate, the analysis, rather than the storage, of this data becomes a challenge. According to IDC estimates, 90% of data is unstructured, the fastest growing kind, while the remainder is structured; unstructured data refers to information that either does not have a predefined data model or does not fit into a relational database for information access. This unstructured data continuously arrives from various sources such as satellite images, sensor readings, email messages, social media, web logs, survey results, audio, video, etc. Due to the large volume of unstructured data, analysing it and extracting meaningful value from it is currently a big challenge for industry. Traditional methods are adequate for the analysis of structured data, but they are not appropriate for extracting knowledge from large volumes of unstructured data. This paper presents a summary of unstructured data analysis for beginners and people from academia who are interested in analysing unstructured data to extract knowledge that can improve business processes and performance.

21 citations


Journal ArticleDOI
TL;DR: In this paper, the authors highlight technologies used for air pollution monitoring, examine how effective these technologies are, and identify important research directions in this area.
Abstract: Air pollution monitoring is an old but very useful concept in day-to-day life. Air pollution monitoring has evolved from traditional methods to the most sophisticated computer systems for monitoring air quality. Fresh air is necessary for all human beings, and various technologies have been used toward that end, some of which are genuinely useful for providing real-time air quality data. The aim of this paper is to highlight technologies used for air pollution monitoring, examine how effective these technologies are, and identify important research directions in this area.

Journal ArticleDOI
TL;DR: To evaluate the performance of Computer Aided Diagnosis (CAD) for lung cancer using artificial neural intelligence on CT scan images, the region of interest is evaluated using maximum entropy and supervised learning.
Abstract: This work evaluates the performance of Computer Aided Diagnosis (CAD) for lung cancer using artificial neural intelligence on CT scan images. Lung cancer can be characterized by evaluating a region of interest using maximum entropy and supervised learning. Lung cancer is among the most common causes of death throughout the world, and early detection appears to be the only factor that can increase the chance of survival.
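
A generic sketch of maximum-entropy (Kapur) thresholding, one standard way to realize the "maximum entropy" ROI step mentioned above; the random array merely stands in for a CT slice, and this is not the authors' exact CAD code:

```python
# Kapur's maximum-entropy threshold: pick the gray level that maximizes the
# combined entropy of the foreground and background histograms.
import numpy as np

def max_entropy_threshold(img: np.ndarray) -> int:
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-12 or w1 < 1e-12:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1          # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

ct = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a CT slice
roi_mask = ct >= max_entropy_threshold(ct)            # candidate region of interest
```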

Journal ArticleDOI
TL;DR: The Raspberry Pi is used as a standalone platform for hosting image processing, and a face detection algorithm is implemented on the Raspberry Pi to enable live video streaming along with detection of human faces.
Abstract: Video surveillance is important as far as security is concerned these days. Commercial spaces, schools, hospitals, warehouses and other challenging indoor and outdoor environments require high-end cameras with PTZ. Current technologies require RFIDs, which are costly, making the security domain as a whole expensive. This paper describes the use of the low-cost single-board computer Raspberry Pi, running a face detection algorithm written in Python, its default programming environment. This new technology is less expensive, and in this paper it is used as a standalone platform for hosting image processing. The paper aims at developing a system which captures real-time images and displays them in a browser using TCP/IP. The face detection algorithm is implemented on the Raspberry Pi, enabling live video streaming along with detection of human faces.
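
The paper does not name its detector, but a common Python route on the Raspberry Pi is OpenCV's bundled Haar cascade; a minimal capture-detect-annotate sketch, assuming a camera at index 0:

```python
# Minimal face detection sketch with OpenCV's bundled Haar cascade; one common
# approach on the Pi, not necessarily the paper's exact algorithm.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                 # Pi camera / USB webcam (assumed index)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:            # mark each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", frame)    # the frame could instead be streamed over TCP/IP
cap.release()
```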

Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of Handwritten Character Recognition (HCR) in English language.
Abstract: This paper presents a comprehensive review of Handwritten Character Recognition (HCR) in the English language. Handwritten character recognition has been applied in a variety of applications such as banking, health care and other organizations where handwritten documents are dealt with. Handwritten character recognition is the process of converting handwritten text into machine-readable form. Handwritten characters pose difficulties: writing differs from one writer to another, and even when the same person writes the same character there are differences in shape, size and position. Recent research in this area has used different methods, classifiers and features to reduce the complexity of recognizing handwritten text.

Journal ArticleDOI
TL;DR: By using a genetic algorithm, the time required to generate a timetable is reduced, and a timetable is generated that is more accurate, precise and free of human errors.
Abstract: Timetable creation is a very arduous and time-consuming task; it takes a great deal of patience and many man-hours. Timetables are created for various purposes, such as organizing lectures in schools and colleges, creating timing charts for train and bus schedules, and many more. In our paper we have tried to reduce the difficulties of generating a timetable by using a genetic algorithm. The genetic algorithm reduces the time required to generate the timetable and produces a timetable that is more accurate, precise and free of human errors. The first phase contains all the common compulsory classes of the institute, which are scheduled by a central team. The second phase contains the individual departmental classes. Presently this timetable is prepared manually, by manipulating those of earlier years, with the only aim of producing a feasible timetable.
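
A toy sketch of the genetic-algorithm idea: each gene assigns a lecture to a timeslot, fitness counts teacher clashes, and selection with one-point crossover and mutation drives clashes to zero. The course list and GA parameters are illustrative assumptions, not the paper's data:

```python
# Tiny GA for timetabling: minimize teacher clashes. Illustrative data/parameters.
import random

LECTURES = [("math", "T1"), ("physics", "T1"), ("chem", "T2"), ("bio", "T2")]
SLOTS = 3

def clashes(tt):
    """Count cases where the same teacher is scheduled twice in one slot."""
    seen, c = set(), 0
    for (name, teacher), slot in zip(LECTURES, tt):
        if (teacher, slot) in seen:
            c += 1
        seen.add((teacher, slot))
    return c

def evolve(pop_size=30, gens=200):
    pop = [[random.randrange(SLOTS) for _ in LECTURES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=clashes)
        if clashes(pop[0]) == 0:                          # clash-free timetable found
            break
        parents = pop[:pop_size // 2]                     # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LECTURES))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                     # mutation
                child[random.randrange(len(child))] = random.randrange(SLOTS)
            children.append(child)
        pop = parents + children
    return pop[0]

print(evolve())   # e.g. [0, 1, 0, 1] -- a clash-free slot assignment
```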

Journal ArticleDOI
TL;DR: This paper is an attempt to set a benchmark in comparing the performance of MySQL against SQL Server in a Windows environment, and shows that SQL Server is still a significantly better performer than MySQL.
Abstract: The enormous amount of data flow has made Relational Database Management Systems the most important and popular tools for the persistence of data. While open-source RDBMS systems are not as widely used as proprietary systems like Oracle DB or SQL Server, over the years systems like MySQL have gained massive popularity. In a stereotypical view, SQL Server is considered an enterprise-level tool, while MySQL has carved a niche as a backend for website development. This paper is an attempt to set a benchmark comparing the performance of MySQL against SQL Server in a Windows environment. To test and evaluate performance, a resort management system named Repose is considered. The results show that SQL Server is still a significantly better performer than MySQL.

Journal ArticleDOI
TL;DR: This study examines the factors associated with the assessment of teachers' performance and proposes a model to evaluate their performance using data mining techniques such as association and classification rules, to determine ways to help teachers better serve the educational process and, hopefully, improve their performance.
Abstract: This study examines the factors associated with the assessment of teachers' performance. Good prediction of the training courses a teacher should take is one way to reach the highest level of quality in teacher performance, but there is no certainty about accurately determining a teacher's strengths and increasing their efficiency through such courses. In this study, real data was collected for teachers from the Ministry of Education and Higher Education in Gaza City. It contains data on the teachers' academic qualifications as well as their experience and courses. The data covers three years, and a questionnaire contains many questions about the courses and length of service in the ministry. We propose a model to evaluate teacher performance through data mining techniques such as association and classification rules (Decision Tree, Rule Induction, k-NN, Naive Bayes (Kernel)) to determine ways to help teachers better serve the educational process and, hopefully, improve their performance, which is then reflected in classroom performance. For each task, we present the extracted knowledge and describe its importance in the teacher performance domain.
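
A hedged sketch of the classifier comparison listed above, using scikit-learn; the ministry's teacher data is not public, so a synthetic feature matrix stands in for the qualification, experience and course attributes:

```python
# Compare the classifier families named in the abstract (Decision Tree, k-NN,
# Naive Bayes) by cross-validated accuracy. Synthetic stand-in data only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-in teacher features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in performance label

for clf in (DecisionTreeClassifier(), KNeighborsClassifier(), GaussianNB()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"accuracy = {score:.2f}")
```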

Journal ArticleDOI
TL;DR: A mobile sensing system (an Android application) for road irregularity detection using Android OS based smartphone sensors is described, along with research on identifying braking events (frequent braking indicates congested traffic conditions) and bumps on the roads to characterize the type of road.
Abstract: The importance of road infrastructure for society could be compared to the importance of blood vessels for humans. Road surface quality should be monitored and repaired on a regular basis, yet it is very difficult to design an optimal system that gathers road condition data and processes it; a participatory sensing approach is well suited to such data collection. This paper describes a mobile sensing system (an Android application) for road irregularity detection using Android OS based smartphone sensors. Selected data processing algorithms are discussed and their evaluation presented, with a true positive rate as high as 90% on real-world data; optimal parameters for the algorithms are determined, along with recommendations for their application. Continuously keeping track of road and traffic conditions in a city is a widely studied problem, and many methods have been proposed to address it. However, these methods require dedicated hardware such as GPS devices and accelerometers in vehicles, or cameras on the roadside and near traffic signals, all of which are unaffordable to the common man in terms of monetary cost and human effort. We extend a prior study to improve an algorithm based on accelerometer, GPS and magnetometer sensor readings for traffic and road condition detection. We specifically investigated identifying braking events (frequent braking indicates congested traffic conditions) and bumps on the roads to characterize the type of road.
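
A simple sketch of the two detections described above, assuming raw smartphone accelerometer traces: a z-axis threshold flags bumps, and runs of strong deceleration flag braking events. The thresholds are illustrative, not the paper's tuned parameters:

```python
# Threshold-based bump and braking detection on accelerometer traces.
# Thresholds and signals are illustrative assumptions.
import numpy as np

def detect_bumps(z_accel, g=9.81, thresh=3.0):
    """Indices where vertical acceleration deviates sharply from gravity."""
    return np.where(np.abs(z_accel - g) > thresh)[0]

def detect_braking(forward_accel, thresh=-2.5, min_samples=5):
    """True if enough samples show strong deceleration; frequent braking
    suggests congested traffic."""
    return int((forward_accel < thresh).sum()) >= min_samples

z = 9.81 + np.random.normal(0, 0.3, 100)
z[42] += 5.0                         # a pothole-like spike
print(detect_bumps(z))               # -> [42]

fwd = np.random.normal(0, 0.5, 100)
fwd[60:70] = -3.0                    # a sustained hard-braking episode
print(detect_braking(fwd))           # -> True
```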

Journal ArticleDOI
TL;DR: The work done in CLIR and translation techniques for CLIR are described, and different types of translation techniques can be used to achieve Cross Language Information Retrieval.
Abstract: The search for information is no longer limited to the native language of the user; nowadays it extends to other languages. Cross language information retrieval (CLIR) aims to find relevant information written in a language different from the language of the query. CLIR can be used to enhance the ability of users to search and retrieve documents in many languages, and different types of translation techniques can be used to achieve it. This paper describes the work done in CLIR and translation techniques for CLIR.

Journal ArticleDOI
TL;DR: A comprehensive overview of various Peak-to-Average Power Ratio (PAPR) reduction techniques and the principles of OFDM systems used in wireless communications is given.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a special case of multicarrier transmission which transmits a stream of data over a number of lower-rate subcarriers. OFDM splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits the collection of bits, called symbols, in parallel using these subcarriers. This paper gives a comprehensive overview of various Peak-to-Average Power Ratio (PAPR) reduction techniques and the principles of OFDM systems used in wireless communications. The paper also focuses on OFDM behaviors and techniques such as Carrier Frequency Offset (CFO) estimation that improve the performance of OFDM for wireless communications. Finally, the paper surveys a number of wireless communication standards and many of the applications where OFDM systems are used.
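
For reference, the PAPR these reduction techniques target is max|x[n]|^2 / E[|x[n]|^2] over one time-domain OFDM symbol; a short sketch, assuming a 64-subcarrier QPSK symbol (subcarrier count and mapping are illustrative):

```python
# Compute the PAPR of one OFDM symbol: modulate QPSK symbols onto N subcarriers
# via the IFFT, then take peak power over mean power. Parameters are illustrative.
import numpy as np

N = 64                                                  # subcarriers (assumed)
bits = np.random.randint(0, 4, N)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # QPSK constellation points
x = np.fft.ifft(symbols) * np.sqrt(N)                   # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.2f} dB")
```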

Journal ArticleDOI
T Shabana, A Anam, A Rafiya, K Aisha, Saboo Siddik 
TL;DR: This paper aims at developing an email system that will help even a naive visually impaired person to use the services for communication without previous training and is completely based on interactive voice response which will make it user friendly and efficient to use.
Abstract: In today's world, communication has become very easy due to the integration of communication technologies with the internet. However, visually challenged people find it very difficult to utilize this technology, because using it requires visual perception. Even though many advancements have been made to help them use computers efficiently, a naive visually challenged user cannot use this technology as efficiently as a sighted naive user can; unlike normal users, they require practice with the available technologies. This paper aims at developing an email system that will help even a naive visually impaired person use the service for communication without previous training. The system does not require the keyboard; instead it works only on mouse operation and speech-to-text conversion. The system can also be used by any sighted person, for example one who is not able to read. The system is completely based on interactive voice response, which makes it user friendly and efficient to use.

Journal ArticleDOI
TL;DR: The details of the face are taken as blocks and the Discrete Cosine Transform is applied to the face image's blocks; without performing the inverse DCT, Principal Component Analysis (PCA) is applied directly for dimensionality reduction, which makes the system very fast.
Abstract: The speed of procedures has become a necessity in recent years, making computers one of the most important factors for increasing the speed of implementation, especially in security applications such as recognizing people. There are many ways to recognize people; face recognition is one of them. In this work, the details of the face are taken as blocks and the Discrete Cosine Transform (DCT) is applied to the face image's blocks. Then, without performing the inverse DCT, Principal Component Analysis (PCA) is applied directly for dimensionality reduction, which makes the system very fast. The Olivetti Research Laboratory (ORL) database of faces was used to obtain the results. Each face is treated as a numerical sequence of blocks that can easily be modelled by an HMM. The system was evaluated on the 400 face images of the ORL face database, and the experiments showed a recognition rate of 95.211%, using half of the images for training.
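
A rough sketch of the block-DCT-then-PCA idea described above: a 2-D DCT per block, keeping a few coefficients (no inverse DCT), with the concatenated coefficients fed straight into PCA. Block size, coefficient count and the random stand-in images are assumptions, not the paper's settings:

```python
# Block-wise 2-D DCT features reduced with PCA; illustrative parameters and
# random stand-in data instead of the ORL images.
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def block_dct_features(img, block=8, keep=6):
    h, w = img.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = img[i:i + block, j:j + block].astype(float)
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")  # 2-D DCT
            feats.extend(c.flatten()[:keep])   # first coefficients (roughly low frequency)
    return np.array(feats)

faces = np.random.randint(0, 256, (40, 112, 92))   # stand-in for ORL-sized faces
X = np.array([block_dct_features(f) for f in faces])
X_reduced = PCA(n_components=20).fit_transform(X)  # dimensionality reduction, no inverse DCT
print(X_reduced.shape)                             # (40, 20)
```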

Journal ArticleDOI
TL;DR: An improved testability quantification framework using the identified set of potential factors, OOD properties and OOD metrics for software products at an early phase, specifically at design time, is described.
Abstract: The quality of any object oriented design is critical because it has a great influence on the overall quality of the finally delivered software product. Testability quantification early in the software development process is of crucial importance to the software development team. Testability has always been an elusive concept, and its correct measurement, quantification or evaluation is a difficult task because of its many contributing factors. Testability analysis of object oriented software at an initial stage of the software development process has been identified as a key factor for a high-quality product. Well-suited object oriented design (OOD) properties and their associated metrics are helpful when applied in the early stages of the development process. This paper describes an improved testability quantification framework using the identified set of potential factors, OOD properties and OOD metrics for software products at an early phase, specifically at design time. The proposed framework relates OOD properties to high-level quality attributes/constructs using appropriate information to develop a quality product, and it may be used to benchmark software products according to their key attributes. The objective of this research work is to encourage researchers and developers by providing a framework to assess and quantify software testability at an early stage of the development life cycle.

Journal ArticleDOI
TL;DR: The motivation was to create an object tracking application to interact with the computer, and develop a virtual human computer interaction device that will convert the Sign Language into text and audio.
Abstract: Communication is an integral part of human life, but for people who are mute and hearing impaired, communication is a challenge: to understand them, one has to learn their language, i.e. sign language or finger language. The system proposed in this project aims at tackling this problem to some extent. In this paper, the motivation was to create an object tracking application to interact with the computer and develop a virtual human-computer interaction device. The motivation behind the system is two-fold: it has two modes of operation, Teach and Learn. The project uses a webcam to recognize hand positions and the sign made using contour recognition (3), and outputs the corresponding sign language text on the PC for the gesture made. It converts the gesture captured via webcam into audio output, which lets normal people understand what exactly is being conveyed. Thus our project, Sign Language to Speech Converter, aims to convert sign language into text and audio.

Journal ArticleDOI
TL;DR: A new method for face detection and segmentation based on face color, which uses the YCbCr color space to segment the image into many regions and forms a prerequisite for any practical verification system using the face as the main attribute.
Abstract: Detection and segmentation of faces from an image is a crucial problem that has gained importance; face detection and segmentation play the main role in face recognition systems, and many difficulties must be solved to make a face detection and segmentation algorithm successful. Facial skin has a characteristic color range as well as characteristic textures that can be detected using texture recognition algorithms which distinguish skin from the background. In this paper we introduce a new method for face detection and segmentation based on face color, using the YCbCr color space to segment the image into many regions. A gray level co-occurrence matrix (GLCM) is used to extract the important features representing skin, and Tamura texture features are then used to remove the non-skin blobs incorrectly recognized as skin by the GLCM. The proposed algorithm was tested on many images and successfully distinguished images containing faces from images without faces. It is highly efficient at detecting faces and segmenting them from the background, with an accuracy of more than 99% in face detection and segmentation. The proposed algorithm forms a prerequisite for any practical verification system using the face as the main attribute.
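
A sketch of the YCbCr segmentation stage, using commonly cited skin bounds of Cr in [133, 173] and Cb in [77, 127] (not necessarily the paper's exact ranges); the GLCM and Tamura refinement stages are omitted, and "photo.jpg" is a placeholder:

```python
# YCbCr skin-color segmentation with OpenCV; common literature bounds,
# not necessarily the paper's values. 'photo.jpg' is a placeholder.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
if img is not None:
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin_mask = cv2.inRange(ycrcb, lower, upper)     # candidate skin pixels
    segmented = cv2.bitwise_and(img, img, mask=skin_mask)
    cv2.imwrite("skin.jpg", segmented)
```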

Journal ArticleDOI
TL;DR: In most universities and colleges, student attendance is an important factor, and checking attendance is an important issue because all universities evaluate students' attendance when giving final grades, as discussed by the authors.
Abstract: In most universities and colleges, student attendance is an important factor; checking student attendance is an important issue because all universities evaluate students' attendance when giving final grades. Some colleges use paper sheets for student attendance and afterwards enter all this information manually into the college server. This is a time-consuming process: each student must be called and their information filled in, and students give proxies for friends who are absent. Considering all these issues, we developed a system which records and updates attendance in one place. Our paper presents near field communication technology for taking student attendance in schools and colleges. The system is based on NFC technology and runs on mobile as an application; this paper presents the details of the system. Keywords: attendance, near field communication, Android OS, embedded mobile camera

Journal ArticleDOI
TL;DR: It can be concluded that although the extended C4.5 is complex, the decision tree obtained is the most suitable, with a high true positive rate (correct detection of attacks), a low false positive rate (incorrect detections) and high accuracy.
Abstract: The tremendous growth in the use of computers over networks and the development of applications running on various platforms has drawn attention toward network security (1). Intrusion detection has become an important component of network infrastructure protection mechanisms, and the Intrusion Detection System (IDS) plays a vital role in detecting anomalies and attacks in the network (5). In this work, data mining concepts are integrated with an IDS to identify relevant, hidden data of interest to the user effectively and with less execution time. In the proposed system, we first preprocess the dataset (KDD '99 cup). We then study different decision tree algorithms (C4.5 and its extension) from data mining for the task of detecting intrusions and compare their relative performance. Based on this study, it can be concluded that although the extended C4.5 is complex, the decision tree obtained is the most suitable, with a high true positive rate (correct detection of attacks), a low false positive rate (incorrect detections) and high accuracy.
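
A hedged sketch of the decision-tree experiment: scikit-learn's entropy-criterion tree approximates C4.5's information-gain splitting (it is not C4.5 itself), and synthetic data stands in for the preprocessed KDD '99 cup set; the TPR and FPR metrics match those named in the abstract:

```python
# Entropy-criterion decision tree as a C4.5-like stand-in, evaluated with the
# true/false positive rates the abstract emphasizes. Synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))             # stand-in connection features
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # stand-in attack/normal label

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy").fit(Xtr, ytr)
tn, fp, fn, tp = confusion_matrix(yte, tree.predict(Xte)).ravel()
print(f"TPR = {tp/(tp+fn):.2f}, FPR = {fp/(fp+tn):.2f}")   # the paper's key metrics
```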

Journal ArticleDOI
TL;DR: A robot designed for agricultural purposes that performs basic elementary functions such as picking, harvesting, weeding, pruning, planting and grafting.
Abstract: Agribot is a robot designed for agricultural purposes. Following the 21st-century trend toward automation and intelligence in agricultural machinery, all kinds of agricultural robots have been researched and developed to carry out a number of agricultural production tasks in many countries. This bot can perform basic elementary functions like picking, harvesting, weeding, pruning, planting and grafting.

Journal ArticleDOI
TL;DR: This paper investigates the use of data mining techniques in forecasting maximum temperature, rainfall, evaporation and wind speed, and notes that hybrid systems have emerged as weather forecast expert systems integrate and share resources.
Abstract: Weather forecasting is an important application in meteorology and has been one of the most scientifically and technologically challenging problems around the world. In this paper, we investigate the use of data mining techniques in forecasting maximum temperature, rainfall, evaporation and wind speed. Weather prediction approaches are challenged by complex weather phenomena with limited observations and past data; weather phenomena have many parameters that are impossible to enumerate and measure. Developments in communication systems have enabled weather forecast expert systems to integrate and share resources, and thus hybrid systems have emerged. Despite these improvements, weather forecast expert systems cannot be fully reliable, since weather forecasting remains an inherently difficult problem.

Journal ArticleDOI
TL;DR: India is the second largest cellular market in the world after China, with a massive subscriber base of 867.80 million as of March 2013.
Abstract: India is the second largest cellular market in the world after China, with a massive subscriber base of 867.80 million as of March 2013. The majority of smartphone users are still on 2G networks; budget 4G smartphones coupled with affordable plans could very well drive 4G growth in India. The most obvious mobile commerce trend is further development: yearly m-commerce sales are forecast to increase fourfold in the next few years, and businesses are beginning to realize that m-commerce is key to enhancing their brand, boosting sales, and keeping up with competitors. India's retail market is expected to cross 1.3 trillion USD by 2020 from the current market size of 500 billion USD, and modern retail, with a penetration of only 5%, is expected to grow about six times, from the current 27 billion USD to 220 billion USD, across all categories and segments. India is set to witness the proliferation of fourth-generation wireless data services, or 4G services, shortly, with slashed data plans. Being the second largest mobile market in the world, India needs to take its place at the forefront of providing innovative services and applications to its citizens. According to a recent eMarketer study, by the year 2017 more than 25% of all online retail transactions will happen in the mobile paradigm. Adweek explains that statistic with the observation that 18-34 year olds are very likely to use their mobile devices as a shopping tool: they visit their favorite retail stores not to shop but to view a product, compare prices at various online locations using their phones, and then buy the product using their mobile device. The future looks very bright for mobile commerce, although businesses are still experimenting with how to use the mobile commerce concept to their best advantage.