
Showing papers in "Journal of Global Research in Computer Science" in 2013


Journal Article
TL;DR: The aim of this work is to present the mobility strategies, defences and countermeasures, and the control, management and governance aspects to consider when implementing a BYOD strategy in an organization.
Abstract: The growth of mobile technology, with regard to the availability of 3G/4G services and devices such as smartphones, has created a new phenomenon for communication and the data processing ability to do business. One such phenomenon that has emerged in the business environment is BYOD (Bring Your Own Device), which means that employees use their personal devices to access company resources for work, inside or outside the organizational environment. This new phenomenon brings with it new opportunities, but it also has many risks associated with it. Using mobile devices for personal as well as professional work brings risks that need to be mitigated. The aim of this work is to present the mobility strategies, defences and countermeasures, and the control, management and governance aspects to consider when implementing a BYOD strategy in an organization.

109 citations


Journal Article
TL;DR: In image processing, noise reduction and image restoration are expected to improve both the qualitative inspection of an image and the performance of quantitative image analysis techniques.
Abstract: In image processing, noise reduction and image restoration are expected to improve both the qualitative inspection of an image and the performance of quantitative image analysis techniques. A digital image is prone to a variety of noise, which affects its quality. The main purpose of de-noising an image is to restore the detail of the original image as much as possible. The choice of noise removal method depends on the type of noise by which the image is corrupted. Several types of linear and nonlinear filtering techniques have been proposed for reducing image noise. Different approaches for noise reduction and image enhancement have been considered, each of which has its own limitations and advantages.

84 citations
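
As an illustration of the nonlinear filtering techniques surveyed above, the sketch below applies a median filter, a standard nonlinear de-noising method, to an image corrupted with impulse noise. It is a minimal example using SciPy, not code from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic test image corrupted with salt-and-pepper (impulse) noise.
rng = np.random.default_rng(0)
image = np.full((64, 64), 128, dtype=np.uint8)
noisy = image.copy()
mask = rng.random(image.shape) < 0.05            # corrupt 5% of the pixels
noisy[mask] = rng.choice([0, 255], size=mask.sum())

# Median filtering: each pixel is replaced by the median of its 3x3
# neighbourhood, which removes impulse noise while preserving edges
# better than linear (mean) filtering.
denoised = median_filter(noisy, size=3)
print("pixels restored to original value:", int(np.sum(denoised == 128)))
```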


Journal Article
TL;DR: This paper mainly discusses various colour spaces, how they are organized, and colour conversion algorithms such as CMYK to RGB, RGB to CMYK, HSL to RGB, RGB to HSL, HSV to RGB, RGB to HSV, YUV to RGB and RGB to YUV.
Abstract: A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components (e.g. RGB and CMYK are color models). However, a color model with no associated mapping function to an absolute color space is a more or less arbitrary color system with no connection to any globally understood system of color interpretation. This paper mainly discusses various colour spaces, how they are organized, and colour conversion algorithms such as CMYK to RGB, RGB to CMYK, HSL to RGB, RGB to HSL, HSV to RGB, RGB to HSV, YUV to RGB and RGB to YUV.

36 citations
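
As a worked example of one of the conversions the paper covers, here is a minimal RGB-to-HSV routine following the standard hue/saturation/value formulas. It is an illustrative sketch, not the paper's own code.

```python
def rgb_to_hsv(r, g, b):
    """Convert r, g, b in [0, 1] to (hue in degrees, saturation, value)."""
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    if delta == 0:                        # achromatic: hue is undefined, use 0
        h = 0.0
    elif mx == r:
        h = (60.0 * ((g - b) / delta)) % 360.0
    elif mx == g:
        h = 60.0 * ((b - r) / delta) + 120.0
    else:                                 # mx == b
        h = 60.0 * ((r - g) / delta) + 240.0
    s = 0.0 if mx == 0 else delta / mx    # saturation: chroma relative to value
    v = mx                                # value: the largest component
    return h, s, v

print(rgb_to_hsv(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0)
```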


Journal Article
TL;DR: This paper focuses on the concept, process and applications of text mining.
Abstract: With the advancement of technology, more and more data is available in digital form, and most of it (approximately 85%) is in unstructured textual form. It has therefore become essential to develop better techniques and algorithms to extract useful and interesting information from this large amount of textual data. Hence, text mining and information extraction have become popular areas of research. This paper focuses on the concept, process and applications of text mining.

34 citations
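
A toy sketch of the extraction step the abstract describes: tokenize unstructured text and count term frequencies, the usual starting point of a text mining pipeline. This is a generic illustration, not the paper's method; the stopword list is a made-up placeholder.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "is", "in", "to", "a", "from"}  # tiny illustrative list

def term_frequencies(text):
    # Lowercase, split on non-letters, drop stopwords and very short tokens.
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)

doc = "Text mining extracts useful information from unstructured textual data."
print(term_frequencies(doc).most_common(3))
```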


Journal Article
TL;DR: This paper takes a closer look at six requirements prioritization techniques and puts them in a controlled experiment with the objective of understanding differences regarding ease of use, total time taken, scalability, accuracy, and total number of comparisons required to make decisions.
Abstract: There are many requirements prioritization techniques, and selecting the most appropriate one is a decision problem in its own right. This paper takes a closer look at six requirements prioritization techniques and puts them in a controlled experiment with the objective of understanding differences regarding ease of use, total time taken, scalability, accuracy, and total number of comparisons required to make decisions. These five criteria combined indicate which technique is more suitable. The results of the experiment show that Value Oriented Prioritization (VOP) yields accurate results, can scale up, and requires the least amount of time.

32 citations


Journal Article
TL;DR: The focus of this paper is centrality measure analysis carried out on a co-authorship network using Gephi, a social network analysis tool.
Abstract: The study of social networks reveals communication patterns which are of interest to researchers. A co-authorship network is one type of social network. These networks represent the publication work carried out by researchers. Co-authorship network analysis is useful in understanding the structure of scientific collaborations and the status of individual authors. Centrality measure calculation is one of the many tasks of social network analysis. The focus of this paper is centrality measure analysis carried out on a co-authorship network using Gephi, a social network analysis tool.

22 citations
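
The paper computes centrality measures in Gephi, a GUI tool; the equivalent computation can be sketched programmatically with NetworkX on a toy co-authorship graph. The author names below are made up.

```python
import networkx as nx

# Toy co-authorship network: an edge means two authors co-wrote a paper.
G = nx.Graph()
G.add_edges_from([
    ("Asha", "Bhavin"), ("Asha", "Chen"), ("Bhavin", "Chen"),
    ("Chen", "Devi"), ("Devi", "Emre"),
])

# Three classic centrality measures used in co-authorship analysis.
print("degree:     ", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
print("closeness:  ", nx.closeness_centrality(G))
```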


Journal Article
TL;DR: This article mainly focuses on the important features, performance improvements and an overview of routing protocols in vehicular ad hoc networks (VANETs), and surveys previous work in the area.
Abstract: Security is one of the main problems in VANETs. Cooperation between inter-vehicular networks and sensor networks placed inside vehicles or on the road needs to be further investigated and analysed. As the number of vehicles grows, the trust between them must also be maintained for flexible communication. There is a great deal of research on VANETs for driving services, traffic information services, user communication and information services. This network, with its huge size, plays a critical role in communication because all types of people use it for the services required in their daily routine. A small error in these systems can cause great disaster on the roads. If an attacker took control of a worldwide VANET-based network, he would be able to break it and cause chaos on all roads. VANETs perform effective communication by utilizing routing information. Several researchers have contributed a great deal in the area of VANETs. This article mainly focuses on the important features, performance improvements and an overview of routing protocols in vehicular ad hoc networks (VANETs), and surveys previous work in the area.

21 citations


Journal Article
TL;DR: The proposed system segments and classifies oral cancers at an early stage; the accuracy obtained for the proposed system is 92.5%.
Abstract: Oral cancer is among the most common cancers found in both men and women. The proposed system segments and classifies oral cancers at an early stage. The tumor is detected using marker-controlled watershed segmentation. The features extracted using the Gray Level Co-occurrence Matrix (GLCM) are energy, contrast, entropy, correlation and homogeneity. The extracted features are fed into a Support Vector Machine (SVM) classifier to classify the tumor as benign or malignant. The accuracy obtained for the proposed system is 92.5%.

20 citations
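
A condensed sketch of the feature-extraction and classification pipeline described above, using scikit-image for the GLCM features and scikit-learn for the SVM. The images and labels here are synthetic placeholders; the segmentation stage and the paper's exact GLCM offsets and SVM settings are assumptions, not taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img):
    # GLCM over one distance/angle; the paper's exact offsets are not given.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = [graycoprops(glcm, p)[0, 0]
             for p in ("energy", "contrast", "correlation", "homogeneity")]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not provided by graycoprops
    return feats + [entropy]

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(20, 32, 32), dtype=np.uint8)  # placeholder ROIs
labels = np.array([0] * 10 + [1] * 10)               # 0 = benign, 1 = malignant
X = np.array([glcm_features(im) for im in images])

clf = SVC(kernel="rbf").fit(X, labels)
print("prediction for first ROI:", clf.predict(X[:1]))
```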


Journal Article
TL;DR: In this paper, the authors review the limitations of satellite remote sensing and the problems of image fusion techniques, and conclude that remote sensing still lacks software tools for effective information extraction from remote sensing data.
Abstract: Remote sensing has proven to be a powerful tool for monitoring the Earth's surface, and the drive to improve our perception of our surroundings has led to unprecedented developments in sensor and information technologies. However, technologies for effective use of the data and for extracting useful information from remote sensing data are still very limited, since no single sensor combines optimal spectral, spatial and temporal resolution. This paper briefly reviews the limitations of satellite remote sensing, as well as the problems of image fusion techniques. According to the literature, remote sensing still lacks software tools for effective information extraction from remote sensing data. The trade-off between spectral and spatial resolution will remain, and new advanced data fusion approaches are needed to make optimal use of remote sensors and extract the most useful information.

19 citations


Journal Article
TL;DR: This research work aims to help software engineers identify the use of formal methods at different stages of software development, with special reference to the requirements phase.
Abstract: There is an increasing demand for current information systems to incorporate a higher degree of formalism in the development process. Formal methods consist of a set of tools and techniques based on mathematical models and formal logic that are used to specify and verify requirements and designs for hardware and software systems. This paper presents a detailed analysis of formal methods along with their goals and benefits, followed by their limitations. This research work aims to help software engineers identify the use of formal methods at different stages of software development, with special reference to the requirements phase.

Journal Article
TL;DR: The design of a simple hardware circuit enables every user to use this wireless home security system, with a PIR sensor, gas sensor, smoke sensor and mains fuse failure detector, at home and in industry.
Abstract: Security and automation are prime concerns in our day-to-day life. The approach to home and industrial automation and security system design is almost standardized nowadays. In this paper, we have tried to raise these standards by combining new design techniques, and we have developed a low-cost home and industrial automated security system. Everyone wants to be as secure as possible. The design of a simple hardware circuit enables every user to use this wireless home security system, with a PIR sensor, gas sensor, smoke sensor and mains fuse failure detector, at home and in industry [6]. The system is fully controlled by the 8-bit P89V51RD2 microcontroller. All the sensors and detectors are connected to the microcontroller through various types of interface circuits. The microcontroller continuously monitors all the sensors, and if it senses any security problem, it sends an SMS to the user's mobile phone through a GSM modem. The microcontroller also turns electrical appliances in the home and industry ON and OFF based on SMS received from the user.
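
The firmware described runs in C on the 8-bit P89V51RD2; purely to illustrate the monitoring logic (poll every sensor, alert the user by SMS on a trigger), here is a behavioral sketch in Python with stand-in functions for the sensor and GSM-modem interfaces. All names and behaviors below are assumptions for illustration.

```python
import time

SENSORS = ("PIR", "gas", "smoke", "mains fuse")

def read_sensor(name):
    """Stand-in for the microcontroller's sensor interface circuits."""
    return name == "smoke"          # pretend the smoke sensor has triggered

def send_sms(message):
    """Stand-in for the AT-command dialogue with the GSM modem."""
    print("GSM modem ->", message)

def poll_once():
    # The firmware continuously monitors every sensor and raises an SMS
    # alert to the user's phone on any trigger; appliance switching on
    # incoming SMS commands is omitted from this sketch.
    for name in SENSORS:
        if read_sensor(name):
            send_sms(f"ALERT: {name} sensor triggered")

for _ in range(3):                  # the real firmware loops forever
    poll_once()
    time.sleep(1)
```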

Journal Article
TL;DR: The objective of this paper is to perform automation testing using the software testing tool Selenium, a web testing tool with which test cases are automatically recorded in the background while the tester enters data in a web application screen.
Abstract: Testing is a very important activity in the software development process. It involves examining and modifying source code. Effective testing produces high-quality software. This paper deals with the significant and vital issue of software testing. Testing can be conducted manually as well as automatically, and these techniques have their own advantages and disadvantages. The objective of this paper is to perform automation testing using the software testing tool Selenium. With this web testing tool, test cases are automatically recorded in the background while the tester enters data in a web application screen.
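
A minimal Selenium script in Python showing the kind of automated web test the paper discusses. The URL, field names and element id here are hypothetical placeholders; the paper itself relies on Selenium's record-and-playback facility rather than hand-written scripts.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                       # requires a local ChromeDriver
try:
    driver.get("https://example.com/login")       # hypothetical page under test
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()  # hypothetical element id
    assert "Dashboard" in driver.title            # expected post-login title
finally:
    driver.quit()
```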

Journal Article
TL;DR: This paper discusses the different steps involved in face recognition using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the different distance measures that can be used in face recognition.
Abstract: Face recognition has become a major field of interest these days. Face recognition algorithms are used in a wide range of applications such as security control, crime investigation, entrance control in buildings, access control at automatic teller machines, passport verification, and identifying faces in a given database. This paper discusses the different steps involved in face recognition using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the different distance measures that can be used in face recognition.
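
A compact sketch of the pipeline the paper describes: project flattened face images with PCA (eigenfaces), follow with LDA (fisherfaces), then match by a distance measure. The data below is a random placeholder; a real run would load an actual face database, and the component counts are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((40, 32 * 32))          # 40 flattened 32x32 face images (placeholder)
y = np.repeat(np.arange(10), 4)        # 10 subjects, 4 images each

pca = PCA(n_components=20).fit(X)      # eigenfaces: top principal components
X_pca = pca.transform(X)
lda = LinearDiscriminantAnalysis(n_components=9).fit(X_pca, y)
X_lda = lda.transform(X_pca)           # fisherfaces: class-separating projection

def nearest(probe, gallery, labels):
    # Euclidean distance; cosine or Manhattan are common alternatives.
    d = np.linalg.norm(gallery - probe, axis=1)
    return labels[np.argmin(d)]

print("identified as subject:", nearest(X_lda[0], X_lda, y))
```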

Journal Article
TL;DR: This paper points out some of the major issues affecting the security and reliability of the cloud.
Abstract: Cloud computing is one of the most widely used and accepted techniques. Because of this, it has come to the attention of various groups of people, so the security and maintenance of the cloud is a major issue. In this paper we point out some of the major issues affecting the security and reliability of the cloud.

Journal Article
TL;DR: This paper discusses the changes in the existing SDLC and suggests appropriate steps which can lead to lower carbon emissions, power and paper use, thus helping organizations to move towards greener and sustainable software development.
Abstract: The software development lifecycle (SDLC) currently focuses on the systematic execution and maintenance of software by dividing the software development process into various phases, including requirements gathering, design, implementation, testing, deployment and maintenance. The problem is that certain important decisions taken in these phases, such as the use of paper, generation of e-waste, power consumption, and an increased carbon footprint from travel, air-conditioning, etc., may harm the environment directly or indirectly. There is a dearth of models that define how software can be developed and maintained in an environmentally friendly way. This paper discusses changes to the existing SDLC and suggests appropriate steps which can lead to lower carbon emissions and reduced power and paper use, thus helping organizations move towards greener and sustainable software development.

Journal Article
TL;DR: Results obtained with brain MRI indicate that the proposed meta-heuristic methods can improve the sensitivity and reliability of systems for the automated detection of brain tumors.
Abstract: Segmentation is a fundamental technique used in image processing to extract suspicious regions from a given image. This paper proposes meta-heuristic methods, namely Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), for segmenting brain tumors in 3D magnetic resonance images. The work is divided into two stages. In the first stage, preprocessing and enhancement are performed: tracking algorithms are used in preprocessing to suppress artifacts and remove unwanted skull portions from the brain MRI, and the images are enhanced using a weighted median filter. The enhancement is evaluated by the Peak Signal-to-Noise Ratio (PSNR) and Average Signal-to-Noise Ratio (ASNR) of the filters. In the second stage, intelligent segmentation is performed: three algorithms based on ACO, GA and PSO are implemented for identifying and segmenting the suspicious region, and their performance is studied. The proposed algorithms are tested on 21 pairs of MRI from a real patients' brain database, and the performance of the proposed method is evaluated. The results obtained indicate that these methods can improve the sensitivity and reliability of systems for the automated detection of brain tumors.
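
The PSNR used above to evaluate the enhancement stage has a standard definition; a minimal implementation (not the paper's code) is:

```python
import numpy as np

def psnr(original, filtered, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - filtered.astype(float)) ** 2)
    if mse == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy(); b[0, 0] = 110             # one perturbed pixel
print(f"PSNR = {psnr(a, b):.2f} dB")
```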

Journal Article
TL;DR: A variety of studies across several different countries, industries and areas have been taken into account for identifying the failure factors of IS.
Abstract: Information system (IS) project management is a critical issue for companies due to its high failure rate. The objective of this paper is to explore the reasons for failures of information systems. The failures of IS are not confined to any particular industry; rather, they happen in every country, in small and large companies, in commercial, non-profit and governmental organizations, and without regard to status or reputation. To develop an understanding of the failure factors of IS, an in-depth review of the existing literature has been done. A variety of studies across several different countries, industries and areas have been taken into account for identifying the failure factors of IS. Most of these studies confirm that not all failures are due to technical aspects; they also arise from the social aspects of the system, as an IS is a socio-technical system. This paper presents the critical failure factors for information systems.

Journal Article
TL;DR: This paper explores the key challenges in providing security in cognitive radio networks, discusses the current security posture of the emerging IEEE 802.22 cognitive radio standard, and identifies security threats and vulnerabilities along with countermeasures and solutions.
Abstract: In light of the latest developments in wireless communication, spectrum shortage has become a pressing problem. One of the most practical, scientific and systematic challenges for future networks is the opportunistic use of spectrum across licensed and unlicensed wireless networks, with full, limited or no rules. Since different wireless networks use different frequency bands, there is a need to use licensed bands when there is no activity on them. Cognitive radio is a new technology which addresses these problems through dynamic utilization of rules and spectrum. Several spectrum sharing schemes have been proposed. Nowadays, security in cognitive radio networks has become a major and challenging issue, and attackers have more opportunities in cognitive radio technology than in wireless networks in general. In cognitive radio, the mobile station may switch to any available frequency band: it builds a list of available free channels and makes handoff decisions accordingly. So whenever a handoff is made, whether soft or hard, there is a chance that a malicious attacker may hijack ongoing traffic, or may even interrupt established traffic by mounting any kind of passive or active attack, such as interception, spoofing, or denial of service. This paper explores the key challenges in providing security in cognitive radio networks, discusses the current security posture of the emerging IEEE 802.22 cognitive radio standard, and identifies security threats and vulnerabilities along with countermeasures and solutions.

Journal Article
TL;DR: This research paper proposes a scheme for image compression using DCT and DWT, named the hybrid compression technique, which aims to achieve higher compression rates while preserving the quality of the reconstructed image.
Abstract: Digital images in their uncompressed form require an enormous amount of storage capacity, which in turn demands large transmission bandwidth for transmission over a network. Image compression reduces the storage space of an image while maintaining its quality. Various compression techniques are available in the literature. The Discrete Cosine Transform (DCT) is one of the most widely used image compression methods, and the Discrete Wavelet Transform (DWT) is another, which provides improvements in picture quality. This research paper proposes a scheme for image compression using DCT and DWT, named the hybrid compression technique. DCT has a high energy compaction property and often requires fewer computational resources, while DWT is a multi-resolution transformation. The goal is to achieve higher compression rates while preserving the quality of the reconstructed image.
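
One plausible reading of such a hybrid scheme, sketched below with PyWavelets and SciPy: decompose the image with a one-level DWT, apply the DCT to the approximation band, and discard small coefficients. The specific combination and the threshold rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

image = np.random.default_rng(0).random((64, 64))   # placeholder image

# 1) One-level 2-D DWT: approximation band cA plus detail bands.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# 2) DCT on the approximation band, then keep only large coefficients.
coeffs = dctn(cA, norm="ortho")
threshold = 0.05 * np.abs(coeffs).max()             # assumed threshold rule
compressed = np.where(np.abs(coeffs) > threshold, coeffs, 0.0)
print("coefficients kept:", np.count_nonzero(compressed), "of", compressed.size)

# 3) Reconstruction: inverse DCT, then inverse DWT.
cA_rec = idctn(compressed, norm="ortho")
restored = pywt.idwt2((cA_rec, (cH, cV, cD)), "haar")
print("max reconstruction error:", float(np.abs(restored - image).max()))
```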

Journal Article
TL;DR: This paper outlines wavelets, wavelet families and types of compression techniques, which are essential for medical image compression in hospitals and for data transfer for diagnosis.
Abstract: Medical image compression is essential for huge database storage in hospitals and for data transfer for diagnosis. Wavelets provide one such approach for compression. A wavelet is a wave-like oscillation whose amplitude starts at zero, increases, and then decreases back to zero. Compression techniques are divided into lossless and lossy techniques. A lossless technique allows exact reconstruction of the image but yields a poor compression ratio, while a lossy technique gives a higher compression ratio. This paper outlines wavelets, wavelet families and types of compression techniques.

Journal Article
TL;DR: A new kernel function called polynomial radial basis function (PRBF) that could improve the classification accuracy of support vector machines (SVMs) is introduced and it is shown that the proposed kernel converges faster than the Gauss and Polynomial kernels.
Abstract: In this paper, we introduce a new kernel function called polynomial radial basis function (PRBF) that could improve the classification accuracy of support vector machines (SVMs). The proposed kernel function combines both Gauss (RBF) and Polynomial (POLY) kernels and is stated in general form. It is shown that the proposed kernel converges faster than the Gauss and Polynomial kernels. The accuracy of the proposed algorithm is compared to algorithms based on both Gaussian and polynomial kernels by application to a variety of non-separable data sets with several attributes. We noted that the proposed kernel gives good classification accuracy in nearly all the data sets, especially those of high dimensions.
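
The paper's exact PRBF formula is not reproduced here; as an illustration of how a combined Gauss/polynomial kernel can be plugged into an SVM, the sketch below uses the elementwise product of the two Gram matrices (a product of valid kernels is itself a valid kernel). The combination and parameters are assumptions; the published PRBF may differ.

```python
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.datasets import make_moons

def prbf_kernel(X, Y, gamma=0.5, degree=2, coef0=1.0):
    # Hypothetical combination: elementwise product of the RBF and
    # polynomial Gram matrices.
    return rbf_kernel(X, Y, gamma=gamma) * polynomial_kernel(
        X, Y, degree=degree, coef0=coef0)

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # non-separable data
clf = SVC(kernel=prbf_kernel).fit(X, y)                      # callable custom kernel
print("training accuracy:", clf.score(X, y))
```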

Journal Article
TL;DR: This paper presents the weighted intuitionistic fuzzy Delphi method, in which communication with experts is the same as in the fuzzy Delphi method but an improved and more elaborate statistical tool is used to reach better conclusions.
Abstract: This paper presents the weighted intuitionistic fuzzy Delphi method. In real-life usage of the Delphi method, information communicated by experts may not be used to its full and complete potential; hence, highly accurate and realistic conclusions cannot always be obtained. In the intuitionistic fuzzy Delphi method, communication with experts is the same as in the fuzzy Delphi method, yet an improved and more elaborate statistical tool is used to reach better conclusions. Again, the experts use their individual competency and subjectivity, and competency and the ability to predict successfully vary extensively among experts. Thus different importance, and hence different weights, should be assigned to them by the decision maker. In this way a more realistic and accurate prediction is obtained.
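
A small sketch of the kind of weighted aggregation such a method relies on: the standard intuitionistic fuzzy weighted averaging (IFWA) operator combines expert judgments, given as (membership, non-membership) pairs, using expert weights. That this specific operator matches the paper's scheme is an assumption; the values below are made up.

```python
import math

def ifwa(judgments, weights):
    """Aggregate intuitionistic fuzzy values (mu, nu) with weights summing to 1.

    IFWA: mu = 1 - prod((1 - mu_i)^w_i),  nu = prod(nu_i^w_i).
    """
    mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(judgments, weights))
    nu = math.prod(n ** w for (_, n), w in zip(judgments, weights))
    return mu, nu

# Three experts' (membership, non-membership) estimates and their weights.
experts = [(0.7, 0.2), (0.5, 0.4), (0.8, 0.1)]
weights = [0.5, 0.2, 0.3]   # more competent experts get larger weights
print(ifwa(experts, weights))
```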

Journal Article
TL;DR: This paper describes the evolution process of genetic algorithms, describes active contours for detecting the boundaries of objects whose boundaries are not well defined, and then describes the use of the genetic algorithm with active contours in image segmentation.
Abstract: The genetic algorithm is a search technique used in computing to find approximate solutions to optimization and search problems. As a search strategy, the genetic algorithm has been applied successfully in many fields. This paper first describes the evolution process of genetic algorithms. It then describes active contours, which detect the boundaries of objects whose boundaries are not well defined. Finally, it describes the use of the genetic algorithm with active contours in image segmentation.
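
A minimal genetic algorithm loop showing the evolution process the paper reviews (selection, crossover, mutation), here maximizing a toy one-dimensional fitness function rather than a contour-energy term; all parameters are illustrative.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy objective, maximized at x = 3

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):
    # Selection: keep the fitter half of the population as parents.
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                       # crossover: blend parents
        if random.random() < 0.2:                   # mutation: random jitter
            child += random.gauss(0.0, 1.0)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(f"best solution after 50 generations: {best:.3f}")
```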

Journal Article
TL;DR: Modified MWFES-2 can be applied to encrypt any short message, password, confidential message or other important document, and the results show that the present method is free from standard attacks such as the differential attack and the known plain text attack.
Abstract: Nath et al. recently developed the Multi Way Feedback Encryption Standard Version-I (MWFES-I) [18], in which the authors used both forward and backward feedback, from left to right and from right to left, on the plain text along with the key. In MWFES-I, the ASCII value of the plain text is added to the key, the forward feedback (FF) and the backward feedback (BF) to obtain an intermediate cipher text. The initial FF and BF are taken to be 0. The intermediate cipher text is reduced modulo 256 to get the cipher text, which is taken as feedback for the next column. In the second round, the cipher text is calculated from the right-hand side. In MWFES-II, the authors used a much more general approach: the FF and BF are applied using skips of n columns, where n can be 0 to any number less than the length of the plain text. In the present method (Modified Multi Way Feedback Encryption Standard, Version-2), the authors introduce two different skips, n1 and n2: n1 skips for the forward feedback (that is, from the left) and n2 skips for the backward feedback (that is, from the right). n1 and n2 may or may not be equal, and each is taken as a function of the generated keypad. The authors applied the present method to some standard plain texts, such as 1024 ASCII '0', 1024 ASCII '1', 1024 ASCII '2' and 1024 ASCII '3', and frequency analysis shows that the encrypted texts are totally random. Initially, the user has to enter a secret key (seed). The key-expansion algorithm generates from the seed an enlarged keypad of the size of the plain text, which is used for further encryption and decryption. The present method is very effective, as the encrypted text changes drastically on varying the skips n1 and n2. Modified MWFES-2 can be applied to encrypt any short message, password, confidential message or other important document. The results show that the present method is free from standard attacks such as the differential attack, the known plain text attack, etc.
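
To make the feedback idea concrete, here is a deliberately simplified single-pass sketch: each cipher byte is (plain + key + forward feedback) mod 256, and the cipher byte becomes the next feedback. The full published scheme adds a backward pass, the skips n1/n2 and key expansion, all of which this sketch omits; it is an illustration, not MWFES itself.

```python
def forward_feedback_encrypt(plain: bytes, keypad: bytes) -> bytes:
    """One forward pass only; a simplified illustration of the feedback idea."""
    ff = 0                                   # initial forward feedback is 0
    out = []
    for p, k in zip(plain, keypad):
        c = (p + k + ff) % 256               # plain + key + feedback, mod 256
        out.append(c)
        ff = c                               # cipher byte feeds the next column
    return bytes(out)

def forward_feedback_decrypt(cipher: bytes, keypad: bytes) -> bytes:
    ff = 0
    out = []
    for c, k in zip(cipher, keypad):
        out.append((c - k - ff) % 256)       # invert: subtract key and feedback
        ff = c                               # feedback comes from the cipher text
    return bytes(out)

msg, pad = b"attack at dawn", b"secretkey~1234"   # keypad must match length
enc = forward_feedback_encrypt(msg, pad)
assert forward_feedback_decrypt(enc, pad) == msg
print(enc.hex())
```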

Journal Article
TL;DR: The purpose of this paper is to review various load balancing algorithms for heterogeneous networks like the grid, to identify the metrics they use, and to identify gaps between them.
Abstract: A heterogeneous environment like the grid allows the use of geographically widely distributed and multi-owner resources to solve large-scale applications. To maintain the balance of workload in an emerging infrastructure like the grid, load balancing algorithms are important: jobs need to be equally balanced between the various computing nodes. The aim of a load balancing algorithm is to fully utilize the resources of the heterogeneous network, increase computation, and improve the overall throughput of the system. The purpose of this paper is to review various load balancing algorithms for heterogeneous networks like the grid, to identify the metrics they use, and to identify gaps between them. Many load balancing algorithms have already been implemented that address issues like heterogeneity and scalability. Different load balancing algorithms for the grid environment work on various metrics such as makespan, time, average resource utilization rate, communication overhead, reliability, stability and fault tolerance.
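
As a concrete instance of the balancing problem, the sketch below implements the simple greedy heuristic of always assigning the next job to the least-loaded node, and reports the makespan, one of the metrics listed above. It is a generic illustration, not one of the surveyed algorithms.

```python
import heapq

def greedy_balance(job_costs, n_nodes):
    """Assign each job to the currently least-loaded node."""
    heap = [(0.0, node) for node in range(n_nodes)]  # (load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for job, cost in enumerate(job_costs):
        load, node = heapq.heappop(heap)             # least-loaded node
        assignment[node].append(job)
        heapq.heappush(heap, (load + cost, node))
    loads = [sum(job_costs[j] for j in jobs) for jobs in assignment.values()]
    return assignment, loads

jobs = [5, 3, 8, 2, 7, 4, 1, 6]                      # job costs (placeholder)
assignment, loads = greedy_balance(jobs, n_nodes=3)
print("loads per node:", loads, "| makespan:", max(loads))
```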

Journal Article
TL;DR: This paper proposes an improvised technique for implementing a smart ration card, which will lead to a database without duplicate entries and ghost cards and will help avoid illegal and bogus claims and fraud in the distribution of rations.
Abstract: This paper proposes an improvised technique for implementing a smart ration card. The main objectives of the smart ration card are to provide food grains and other essential items to vulnerable sections of society at reasonable (subsidized) prices, and to eradicate inefficiency in the targeting of beneficiaries and the resulting leakage of subsidies, which is the main disadvantage of the present PDS (Public Distribution System). These objectives can be achieved by creating a unique database of residents in India, putting together the best technologies and processes for this purpose. This will lead to a database without duplicate entries and ghost cards, which will help avoid illegal and bogus claims and fraud in the distribution of rations.

Journal Article
TL;DR: This paper describes a man-in-the-middle attack based on ARP spoofing and proposes a new method to secure the exchange of public keys in SSP; consequently, the process of SSP will be secure and reliable and will provide protection against Man-In-The-Middle (MITM) attacks.
Abstract: This paper explains different types of MITM attacks, their consequences, techniques and solutions under different circumstances, giving users options to choose from among various solutions. The Man-In-The-Middle (MITM) attack is one of the primary techniques in computer-based hacking. A MITM attack can successfully invoke attacks such as denial of service, DNS spoofing and port stealing. MITM attacks of every kind have many surprising consequences in store for users, such as stealing an online account user id and password, or stealing a local FTP id or telnet session. The man-in-the-middle attack is widely used as a method of attacking the network. To show how this type of attack works, this paper describes a method of man-in-the-middle attack based on ARP spoofing and proposes a method of preventing such attacks. A new method is also proposed to secure the exchange of public keys in SSP. By adopting the proposed technique, the exchange of public keys becomes more secure, and consequently the process of SSP will be secure and reliable and will provide protection against Man-In-The-Middle (MITM) attacks.
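
One widely used detection idea for the ARP-spoofing variant discussed above is to check ARP replies against a known IP-to-MAC table. The sketch below uses Scapy with a hand-maintained table; the addresses are placeholders, and this illustrates detection only, not the paper's SSP key-exchange scheme. Packet sniffing typically requires root privileges.

```python
from scapy.all import ARP, sniff  # pip install scapy; run with root privileges

# Trusted IP -> MAC bindings (placeholders for a real, maintained table).
KNOWN_HOSTS = {"192.168.1.1": "aa:bb:cc:dd:ee:ff"}

def check_arp(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:        # op 2 = ARP reply ("is-at")
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        expected = KNOWN_HOSTS.get(ip)
        if expected and mac.lower() != expected:
            print(f"possible ARP spoofing: {ip} claims {mac}, expected {expected}")

sniff(filter="arp", prn=check_arp, store=False, count=100)
```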

Book Chapter
TL;DR: A scheme that discusses secure rank based keyword search over an encrypted cloud data that can be extended to support Boolean search and Fuzzy keyword search techniques is presented.
Abstract: We present a scheme for secure rank-based keyword search over encrypted cloud data. The data to be outsourced is encrypted using a symmetric encryption algorithm for data confidentiality. The index file of the keyword set to be searched is outsourced to the local trusted server, where the keyword set generated from the data files is also stored. This is done so that an un-trusted server cannot learn about the data with the help of the index. The index is created with the help of the Aho-Corasick multiple string matching algorithm, which matches the predefined set of keywords against the information in the data files to index them, and the relevant data is stored in B+ trees. Whenever the user searches for a keyword, the request is sent to the local trusted server and the indexed data is consulted. The files are listed based on certain relevance criteria. The user then requests the required files from the un-trusted server. The parameters required for ranking are obtained from the data stored during indexing. Based on the ranking, the files are retrieved from the un-trusted server and displayed to the user. The proposed system can be extended to support Boolean search and fuzzy keyword search techniques.
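
A small sketch of the indexing step described above, using the pyahocorasick library to match a predefined keyword set against document text in one pass. The document contents and keywords are placeholders, and the B+ tree storage, encryption and ranking are omitted.

```python
import ahocorasick  # pip install pyahocorasick

keywords = ["cloud", "encryption", "keyword search"]
documents = {
    "doc1.txt": "secure keyword search over encrypted cloud data",
    "doc2.txt": "symmetric encryption for data confidentiality",
}

# Build the Aho-Corasick automaton once for the whole keyword set.
automaton = ahocorasick.Automaton()
for i, kw in enumerate(keywords):
    automaton.add_word(kw, (i, kw))
automaton.make_automaton()

# Indexing: one scan per document matches all keywords simultaneously.
index = {}
for name, text in documents.items():
    for _, (i, kw) in automaton.iter(text):
        index.setdefault(kw, []).append(name)

print(index)  # keyword -> documents containing it
```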

Journal Article
TL;DR: This research paper tries to evaluate the challenges in enforcing security for cloud computing services and describes the necessity of a regularization of SLAs.
Abstract: Cloud computing is a cumulative collection of technologies. It shares on-demand computing resources that are positioned and disposed of efficiently. In spite of its several benefits, numerous challenges remain, such as data security, performance, data lock-in, access control, bandwidth costs, Internet dependency, data confidentiality, auditability, application and availability. The SLA (Service Level Agreement) plays a vital role in cloud computing. Each service is associated with a specific SLA, which is a collaboration between service provider and consumer. Therefore, the SLA has to define the level of security and its intricacy, established on the services, to make the consumer understand the implementation of security policies. This research paper tries to evaluate the challenges in enforcing security for cloud computing services and describes the necessity of a regularization of SLAs.