
Showing papers in "Journal of Global Research in Computer Sciences in 2011"


Journal Article
TL;DR: This paper provides a critical review of steganography and analyzes the characteristics of various cover media, namely image, text, audio and video, with respect to the fundamental concepts, the progress of steganographic methods and the development of the corresponding steganalysis schemes.
Abstract: The staggering growth in communication technology and the usage of public domain channels (i.e. the Internet) has greatly facilitated the transfer of data. However, such open communication channels are more vulnerable to security threats causing unauthorized information access. Traditionally, encryption is used to realize communication security. However, important information is no longer protected once decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication. Important information is first hidden in host data, such as a digital image, text, video or audio file, and then transmitted secretly to the receiver. Steganalysis is another important topic in information hiding; it is the art of detecting the presence of steganography. This paper provides a critical review of steganography and analyzes the characteristics of various cover media, namely image, text, audio and video, with respect to the fundamental concepts, the progress of steganographic methods and the development of the corresponding steganalysis schemes.
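
The review treats image steganography at a conceptual level; as a minimal illustration of the simplest image technique such a review covers, the sketch below hides message bits in the least-significant bits of pixel values. The function names and the toy image are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, message_bits: list) -> np.ndarray:
    """Hide message bits in the least-significant bit of successive pixels."""
    stego = cover.flatten().copy()
    if len(message_bits) > stego.size:
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear the LSB, then set it to the bit
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> list:
    """Read the hidden bits back from the least-significant bits."""
    flat = stego.flatten()
    return [int(flat[i] & 1) for i in range(n_bits)]

# Toy usage: an 8-bit grayscale "image" and a 4-bit message
cover = np.array([[120, 200], [33, 47]], dtype=np.uint8)
bits = [1, 0, 1, 1]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, 4) == bits
```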

80 citations


Journal Article
TL;DR: The IHS sharpening technique is one of the most commonly used sharpening techniques; this paper explores different IHS transformation techniques and evaluates them for IHS-based image fusion.
Abstract: The IHS sharpening technique is one of the most commonly used techniques for sharpening. Different transformations have been developed to transfer a color image from the RGB space to the IHS space. The literature shows that various scientists have proposed alternative IHS transformations; many papers report good results whereas others report poor ones, and often it is not stated which formula of the IHS transformation was used. In addition, many papers show different formulas for the transformation matrix of the IHS transformation. This leads to confusion: what is the exact formula of the IHS transformation? Therefore, the main purpose of this work is to explore different IHS transformation techniques and to test them in IHS-based image fusion. In this study, image fusion performance was evaluated quantitatively, using various methods, to estimate the quality and the degree of information improvement of a fused image.
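
Since the abstract's point is that several IHS formulas coexist, the sketch below uses only the simplest additive variant, with I = (R + G + B) / 3, to show how an IHS-style fusion injects pan-band detail into the intensity component. The function name, array shapes and values are illustrative assumptions, not one of the transformations compared in the paper.

```python
import numpy as np

def simple_ihs_fusion(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Pan-sharpen by substituting the intensity component with the PAN band.

    Uses the simple additive form of IHS fusion: each band is shifted by
    (PAN - I), where I = (R + G + B) / 3.  Real systems use one of the many
    IHS transformation matrices the paper compares.
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)                # I = (R + G + B) / 3
    delta = pan.astype(np.float64) - intensity  # detail injected by the PAN band
    fused = rgb + delta[..., np.newaxis]        # add the same delta to R, G and B
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy usage: a 2x2 colour image and a higher-detail panchromatic band
rgb = np.array([[[100, 120, 140], [90, 95, 100]],
                [[30, 60, 90],    [200, 180, 160]]], dtype=np.uint8)
pan = np.array([[130, 90], [70, 190]], dtype=np.uint8)
print(simple_ihs_fusion(rgb, pan))
```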

80 citations


Journal Article
TL;DR: A comparative study of three recent methods for face recognition: eigenfaces, fisherfaces and elastic bunch graph matching.
Abstract: The technology of face recognition has matured within the last few years, and systems using face recognition have become a reality in everyday life. In this paper, we present a comparative study of three recent methods for face recognition: eigenfaces, fisherfaces and elastic bunch graph matching. After implementing the three methods, we discuss the advantages and disadvantages of each approach and the difficulties of its implementation.
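
Of the three compared methods, eigenfaces is the easiest to sketch. The fragment below shows the standard PCA-based training and nearest-neighbour matching it relies on; this is the generic technique, not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def train_eigenfaces(faces: np.ndarray, k: int):
    """Compute the top-k eigenfaces from a matrix of flattened face images.

    faces: (n_samples, n_pixels) training matrix.  Returns the mean face and a
    (k, n_pixels) basis; each face is then represented by k projection
    coefficients instead of raw pixels.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    return basis @ (face - mean)

def nearest_neighbour(probe, gallery_codes, labels, mean, basis):
    """Classify a probe face by the closest gallery face in eigenface space."""
    code = project(probe, mean, basis)
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage with random 64-pixel "images"
rng = np.random.default_rng(0)
train = rng.random((10, 64))
labels = list(range(10))
mean, basis = train_eigenfaces(train, k=5)
codes = np.array([project(f, mean, basis) for f in train])
print(nearest_neighbour(train[3], codes, labels, mean, basis))  # -> 3
```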

31 citations


Journal Article
TL;DR: The International Data Encryption Algorithm (IDEA) is one of the strongest secret-key block ciphers as discussed by the authors, and it can be expressed in a simpler way.
Abstract: There are several symmetric and asymmetric data encryption algorithms. IDEA (International Data Encryption Algorithm) is one of the strongest secret-key block ciphers. In this article, I try to represent the existing IDEA algorithm in a different way. In the following illustration, we would see how the encryption can be expressed in a simpler way.
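
The abstract does not reproduce the cipher itself; as background, the sketch below shows the three 16-bit group operations that IDEA combines (XOR, addition modulo 2^16, and multiplication modulo 2^16 + 1 with the word 0 standing for 2^16), which any re-expression of the algorithm has to preserve. The helper names are illustrative, not the author's notation.

```python
MOD_ADD = 1 << 16          # addition modulus 2^16
MOD_MUL = (1 << 16) + 1    # multiplication modulus 2^16 + 1 (a prime)

def xor16(a: int, b: int) -> int:
    """Bitwise XOR of two 16-bit words."""
    return (a ^ b) & 0xFFFF

def add16(a: int, b: int) -> int:
    """Addition modulo 2^16."""
    return (a + b) % MOD_ADD

def mul16(a: int, b: int) -> int:
    """IDEA multiplication modulo 2^16 + 1, where the word 0 stands for 2^16."""
    a = a if a != 0 else MOD_ADD
    b = b if b != 0 else MOD_ADD
    r = (a * b) % MOD_MUL
    return r if r != MOD_ADD else 0

# Mixing these incompatible group operations is what gives IDEA its strength
print(xor16(0x0123, 0x4567), add16(0x0123, 0x4567), mul16(0x0123, 0x4567))
```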

30 citations


Journal Article
TL;DR: To provide a comprehensive survey, this work not only categorizes existing biometric techniques but also presents details of representative methods within each category.
Abstract: BIOMETRICS is the measurement of biological data. The term biometrics is commonly used today to refer to the authentication of a person by analyzing physical characteristics, such as fingerprints, or behavioral characteristics, such as signatures. Since many physical and behavioral characteristics are unique to an individual, biometrics provides a more reliable system of authentication than ID cards, keys, passwords, or other traditional systems. The word biometrics comes from two Greek words and means life measure. To provide a comprehensive survey, we not only categorize existing biometric techniques but also present details of representative methods within each category. Any characteristic can be used as a biometric identifier if (1) every person possesses the characteristic, (2) it varies from person to person, (3) its properties do not change considerably over time, and (4) it can be measured manually or automatically. Physical characteristics commonly used in biometric authentication include face, fingerprints, handprints, eyes, and voice. Biometric authentication can be used to control the security of computer networks, electronic commerce and banking transactions, and restricted areas in office buildings and factories. It can help prevent fraud by verifying the identities of voters and holders of driver's licenses or visas. In authentication, a sensor captures a digital image of the characteristic being used to verify the user's identity. A computer program extracts a pattern of distinguishing features from the digital image. Another program compares this pattern with the one representing the user that was recorded earlier and stored in the system database. If the patterns match well enough, the biometric system concludes that the person is who he or she claims to be.
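
The verification flow described at the end of the abstract (capture, feature extraction, comparison against a stored template, threshold decision) can be sketched as below; the cosine-similarity matcher and the 0.8 threshold are illustrative choices, not from the survey, and real systems use modality-specific matchers (minutiae pairing, iris Hamming distance, and so on).

```python
import numpy as np

def verify(probe_features: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the claimed identity if the probe is close enough to the template."""
    sim = float(np.dot(probe_features, enrolled_template) /
                (np.linalg.norm(probe_features) * np.linalg.norm(enrolled_template)))
    return sim >= threshold

# Toy usage: the stored template and a fresh capture of the same user
template = np.array([0.9, 0.1, 0.4, 0.7])
probe = np.array([0.85, 0.15, 0.42, 0.68])
print(verify(probe, template))  # -> True
```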

29 citations


Journal Article
TL;DR: This paper explores a method to identify tumours for brain disorder diagnosis in MR images, using image segmentation to extract the abnormal tumour portion of the brain.
Abstract: Brain tumours are among the most common causes of fatality in today's health care scenario. Computational applications are gaining significant importance in day-to-day life. Specifically, the usage of computer-aided systems for computational biomedical applications has been explored to a great extent. Automated brain disorder diagnosis from MR images is one such medical image analysis methodology. Image segmentation is used to extract the abnormal tumour portion of the brain. This paper explores a method to identify tumours for brain disorder diagnosis in MR images.
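
As an illustration of the segmentation step the abstract mentions, the sketch below applies a simple global intensity threshold to flag hyper-intense tissue. The paper's actual method is not specified in the abstract, so this is only a stand-in under that assumption; clinical pipelines add skull stripping, smoothing and morphological clean-up around such a step.

```python
import numpy as np

def segment_tumour(mri_slice: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return a binary mask of abnormally bright tissue in an MR slice,
    using a global mean + k * std threshold as a stand-in segmentation rule."""
    img = mri_slice.astype(np.float64)
    threshold = img.mean() + k * img.std()
    return img > threshold

# Toy usage: a synthetic slice with one bright region
slice_ = np.full((8, 8), 40.0)
slice_[2:4, 5:7] = 220.0              # simulated hyper-intense lesion
mask = segment_tumour(slice_)
print(mask.sum(), "pixels flagged")   # -> 4 pixels flagged
```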

23 citations


Journal Article
TL;DR: The usability of alphanumeric passwords is studied, and it is found that they are difficult for people to remember, with the consequence that users often write them down.
Abstract: User authentication is the process of determining whether a user should be authorized to access a particular system or resource. Alphanumeric passwords are the most common mechanism for authorizing computer users, even though it is well known that users generally choose passwords that are vulnerable to dictionary attacks, brute-force attacks and guessing attacks. Until recent years, the security problem was formulated as a purely scientific problem. However, it is now widely accepted that security is also a human-computer interaction (HCI) problem. Most security mechanisms cannot be effective without taking the user into account. HCI matters in two ways: one is the usability of the security systems themselves, and the other is the interaction of the security systems with user practices and motivations. We have studied the usability of alphanumeric passwords and found that they are difficult for people to remember, with the consequence that users often write them down. We discuss the usability-versus-security trade-offs and identify different inherent weaknesses of alphanumeric passwords. We also discuss alternative solutions that can be used instead of alphanumeric passwords.

22 citations


Journal Article
TL;DR: The outcome of this paper is a cross-platform method that can effectively hide a message inside a digital image file, selecting pixels with the HIOP (Higher Intensity Of Pixel) algorithm.
Abstract: As communication increases day by day, the value of security over the network also increases. There are many ways to hide information or to transmit information secretly; in this sense, steganography is the best way of sending information secretly. Steganography has gained a wider audience in the last few years, and the technology has certainly been a topic of widespread discussion in the IT community. It is the art of writing a message or information in such a way that no one apart from the intended recipient knows the meaning of the message. For hiding secret information in images, there exists a large variety of steganography techniques; some are more complex than others, and all of them have their respective strong and weak points. The outcome of this paper is a cross-platform method that can effectively hide a message inside a digital image file. We select pixels with the HIOP (Higher Intensity Of Pixel) algorithm: we divide the image into blocks, determine the higher-intensity pixel in each block, and use Strassen multiplication in each block to create more dispersion among the selected pixels. As a result, the security level is increased both in hiding the data and in recovering the ciphertext. The method also tries not to degrade image quality and, as far as possible, does not change the image size.
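
The abstract outlines the HIOP idea (block-wise selection of the brightest pixel) without full detail; the fragment below is one plausible reading of that selection step, leaving out the Strassen-multiplication dispersion stage. Names and the block size are illustrative assumptions.

```python
import numpy as np

def highest_intensity_positions(gray: np.ndarray, block: int = 4):
    """For each block x block tile, return the coordinates of its brightest pixel.

    This follows the abstract's idea of choosing embedding positions by the
    highest intensity in each block; the paper's dispersion step based on
    Strassen multiplication is not reproduced here.
    """
    h, w = gray.shape
    positions = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = gray[r:r + block, c:c + block]
            dr, dc = np.unravel_index(np.argmax(tile), tile.shape)
            positions.append((r + int(dr), c + int(dc)))
    return positions

# Toy usage on a random 8x8 grayscale image
rng = np.random.default_rng(1)
print(highest_intensity_positions(rng.integers(0, 256, (8, 8))))
```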

19 citations


Journal Article
TL;DR: A steganographic model combining the features of both text-based and image-based steganography techniques for communicating information more securely between two locations is proposed, and the authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security.
Abstract: All over the world, maintaining the security of secret data has been a great challenge. One way to do this is to encrypt the message before sending it. However, encrypted messages sent frequently through a communication channel like the Internet draw the attention of third parties, hackers and crackers, perhaps causing attempts to break and reveal the original messages. Steganography is an emerging area which is used for secured data transmission over any public media. Steganography is of Greek origin and means "covered or hidden writing". A considerable amount of work has been carried out by different researchers on steganography. In this paper, a steganographic model combining the features of both text-based and image-based steganography techniques for communicating information more securely between two locations is proposed. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. As a further improvement of the security level, the information is encoded through SSCE values and embedded into the cover text using the proposed text steganography method to form the stego text; this encoding technique is used at both ends in order to achieve a high level of security. Next, the stego text is embedded through the PMM method into the cover image to form the stego image. At the receiver side, the reverse operations are carried out to recover the original information.

17 citations


Journal Article
TL;DR: The performance study shows that the FP-growth method is efficient and scalable and is about an order of magnitude faster than the Apriori algorithm.
Abstract: Association rule mining is one of the most popular data mining methods. However, mining association rules often results in a very large number of found rules, leaving the analyst with the task of going through all the rules and discovering the interesting ones. In this paper, we present a performance comparison of the Apriori and FP-growth algorithms. The performance is analyzed based on the execution time for different numbers of instances and confidence values in a supermarket data set. These algorithms are presented together with some experimental data. Our performance study shows that the FP-growth method is efficient and scalable and is about an order of magnitude faster than the Apriori algorithm.
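
A comparison like the one described can be reproduced with off-the-shelf implementations; the sketch below assumes the third-party mlxtend library and a toy transaction set, and simply times both algorithms at the same minimum support. The transactions and support value are illustrative, not the paper's supermarket data set.

```python
import time
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpgrowth

transactions = [["milk", "bread"], ["milk", "eggs"],
                ["bread", "eggs", "milk"], ["bread", "butter"]]
te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

for name, algo in [("Apriori", apriori), ("FP-growth", fpgrowth)]:
    start = time.perf_counter()
    frequent = algo(df, min_support=0.5, use_colnames=True)
    print(f"{name}: {len(frequent)} frequent itemsets "
          f"in {time.perf_counter() - start:.4f}s")
```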

16 citations


Journal Article
TL;DR: The analysis in this survey enables a better understanding of the basic concepts of virtualization, virtualization techniques, technologies and services, and provides a detailed framework for Information Technology (IT) staff in companies working in different industrial sectors.
Abstract: Today virtualization is not just a possibility; it is becoming mainstream in data centers across the business world. Virtualization technology is popular today for hosting Internet and cloud-based computer services. Technology is evolving at a rapid rate, and virtualization is no longer just about consolidation and cost savings: it is about the agility and flexibility needed for service delivery in data centers, including the production environment and the infrastructure that supports the most mission-critical applications. Virtualization is a rapidly evolving technology that provides a range of benefits to computing systems, such as improved resource utilization and management, application isolation and portability, and system reliability. Among these features, live migration, resource management and IT infrastructure consolidation are core functions. Virtualization technology also enables IT personnel to respond to business needs more rapidly, with lower cost and improved operational efficiency. Virtualization technologies are changing the IT delivery model to provide on-demand self-service access to a shared pool of computing resources via a broad network. The analysis in this survey enables a better understanding of the basic concepts of virtualization, virtualization techniques, technologies and services, and provides a detailed framework for Information Technology (IT) staff in companies working in different industrial sectors. The evolution of data center transformation from the traditional approach to virtualization, i.e. from dedicated processing to pooled processing, together with the associated strategic business values and methodologies, is analyzed in detail in this paper. This paper also discusses what virtualization is and how organizations can benefit from adopting virtualization into future IT plans.

Journal Article
TL;DR: This paper identifies the types of testing that can be applied to check a particular quality attribute and summarizes which testing types are applicable in which phases of the software development life cycle.
Abstract: Software testing is a technique aimed at evaluating an attribute or capability of a program or system and determining that it meets its required quality. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to the limited understanding of the principles of software. Software testing is an important technique for assessing the quality of a software product. In this paper, various types of software testing techniques and various attributes of software quality are explained. Identifying the types of testing that can be applied to check a particular quality attribute is one aim of this paper. Not all types of testing can be applied in all phases of the software development life cycle; which testing types are applicable in which phases of the life cycle is also summarized.

Journal Article
TL;DR: An effective hash-based algorithm for candidate set generation is proposed, and the number of candidate 2-itemsets generated by the proposed algorithm is smaller than that of previous methods, thus resolving the performance bottleneck.
Abstract: In this paper we describe an implementation of hash-based Apriori. We analyze, theoretically and experimentally, the principal data structure of our solution. This data structure is the main factor in the efficiency of our implementation. We propose an effective hash-based algorithm for candidate set generation. Explicitly, the number of candidate 2-itemsets generated by the proposed algorithm is, in orders of magnitude, smaller than that of previous methods, thus resolving the performance bottleneck. Our approach scans the database once, utilizing an enhanced version of the Apriori algorithm. Note that the generation of smaller candidate sets enables us to effectively trim the transaction database size at a much earlier stage of the iterations, thereby significantly reducing the computational cost of later iterations.
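
The hash-based filtering of candidate 2-itemsets follows the well-known DHP idea; the sketch below shows that first-pass bucket counting and the subsequent pruning, with an illustrative bucket count and without the transaction-trimming step the abstract also mentions.

```python
from itertools import combinations
from collections import defaultdict

def hashed_pair_counts(transactions, n_buckets: int = 7):
    """First pass of a DHP-style hash filter for candidate 2-itemsets:
    every 2-item combination in every transaction is hashed into a small
    table of buckets and the bucket counts are accumulated."""
    buckets = defaultdict(int)
    for items in transactions:
        for pair in combinations(sorted(items), 2):
            buckets[hash(pair) % n_buckets] += 1
    return buckets

def candidate_pairs(transactions, buckets, min_support, n_buckets: int = 7):
    """Keep only pairs that fall into a sufficiently frequent bucket;
    everything else can be skipped in the next Apriori pass."""
    keep = set()
    for items in transactions:
        for pair in combinations(sorted(items), 2):
            if buckets[hash(pair) % n_buckets] >= min_support:
                keep.add(pair)
    return keep

# Toy usage
txns = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "d"]]
counts = hashed_pair_counts(txns)
print(candidate_pairs(txns, counts, min_support=2))
```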

Journal Article
TL;DR: This approach uses the idea of structural and feature changes to the cover carrier that are not visibly distinguishable from the original to human observers, and it may be adapted for other Indian languages as well.
Abstract: Recent years have witnessed the rapid development of the Internet and telecommunication techniques. However, due to the hostile environment of the Internet, the need for confidentiality of information has increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved. Steganography is an emerging area which is used for secured data transmission over any public media. Steganography is of Greek origin and means "covered or hidden writing". A considerable amount of work has been carried out by different researchers on steganography. In this paper the authors propose a novel text steganography method based on changing the pattern of English alphabet letters. Considering the structure of the English alphabet, the secret message is mapped through small structural modifications of some of the letters of the cover text. This approach uses the idea of structural and feature changes to the cover carrier that are not visibly distinguishable from the original to human observers, and it may be adapted for other Indian languages as well. This solution is independent of the nature of the data to be hidden and produces a stego text with minimum degradation. The quality of the stego text is analyzed through the trade-off with the number of bits used for mapping. The efficiency of the proposed method is illustrated by exhaustive experimental results and comparisons.

Journal Article
TL;DR: An elasticity analysis of the traffic sharing pattern among operators is presented, and it is found that elasticity values depend on market position.
Abstract: The Internet service is managed by operators, and each one tries to capture a larger proportion of Internet traffic. This tendency causes inherent competition in the market. The location of the market is also an important factor. This paper assumes two different markets with two operators in competition. It is found that elasticity values depend on market position; the priority-position market has a higher level. This paper presents an elasticity analysis of the traffic sharing pattern among operators. A simulation study is performed to analyze the impact of elasticity on traffic sharing.

Journal Article
TL;DR: This paper proposes a new effective dynamic RR algorithm SMDRR (Subcontrary Mean Dynamic Round Robin) based on dynamic time quantum where the subcontrary mean or harmonic mean is used to find the time quantum.
Abstract: The Round Robin (RR) algorithm is considered optimal in a time-shared environment because CPU time is shared equally among the processes. If the time quantum is static, however, CPU performance degrades and many context switches occur. In this paper, we propose a new effective dynamic RR algorithm, SMDRR (Subcontrary Mean Dynamic Round Robin), based on a dynamic time quantum, where we use the subcontrary mean, or harmonic mean, to find the time quantum. The idea of this approach is to adjust the time quantum repeatedly according to the burst times of the currently running processes. Our experimental analysis shows that SMDRR performs better than the RR algorithm in terms of reducing the number of context switches, the average turnaround time and the average waiting time.
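
A minimal sketch of the quantum computation described (the harmonic, or subcontrary, mean of the ready processes' burst times), under the assumption that the mean is recomputed at the start of each round; the exact recomputation schedule is not spelled out in the abstract.

```python
def harmonic_mean(values):
    """Subcontrary (harmonic) mean: n / sum(1 / x_i)."""
    return len(values) / sum(1.0 / v for v in values)

def smdrr_time_quantum(ready_burst_times):
    """Time quantum for the next round: the harmonic mean of the remaining
    burst times of the processes currently in the ready queue."""
    return harmonic_mean(ready_burst_times)

# Example round: three processes with remaining bursts of 4, 8 and 12 ms
print(round(smdrr_time_quantum([4, 8, 12]), 2))   # -> 6.55
```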

Journal Article
TL;DR: The simulation result demonstrates that H-SEP achieves longer lifetime and more effective data packets in comparison with the SEP and LEACH protocol.
Abstract: In this paper, heterogeneous energy-efficient data gathering protocols for extending the lifetime of wireless sensor networks are reported. The main requirements of a wireless sensor network are to prolong the network lifetime and to be energy efficient. Here, H-SEP, a Stable Election Protocol for clustered heterogeneous wireless sensor networks, is proposed to prolong the network lifetime. The impact of heterogeneity in terms of node energy in wireless sensor networks is also discussed. Finally, the simulation results demonstrate that H-SEP achieves a longer lifetime and more effective data packets in comparison with the SEP and LEACH protocols.

Journal Article
TL;DR: This paper reviews the recent steganography techniques utilizing DNA sequences that have appeared in the literature, noting that DNA sequences possess some interesting properties which can be utilized to hide data.
Abstract: Steganography is the art and science of secret communication, aiming to conceal the existence of a communication; it has been used by the military and perhaps by terrorists. Steganography in the modern-day sense of the word usually refers to information or a file that has been concealed inside a digital picture, video or audio file. In steganography, the actual information is not maintained in its original format; it is converted into an alternative equivalent multimedia file such as an image, video or audio file, which in turn is hidden within another object. Information security is becoming an inseparable part of data communication, and steganography plays an important role in addressing it. Digital media steganalysis is divided into three domains: image steganalysis, audio steganalysis, and video steganalysis. DNA sequences possess some interesting properties, which can be utilized to hide data. This paper is a review of the recent steganography techniques utilizing DNA sequences that have appeared in the literature.
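
Many of the surveyed DNA-based schemes start from a fixed two-bits-per-nucleotide encoding of the payload; the sketch below shows that common mapping. The particular A/C/G/T assignment is a convention chosen here for illustration, not a specific scheme from the review.

```python
# 2-bit binary-to-nucleotide mapping (one common convention)
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def text_to_dna(message: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(sequence: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

print(text_to_dna("hi"))                      # -> CGGACGGC
assert dna_to_text(text_to_dna("hi")) == "hi"
```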

Journal Article
TL;DR: The proposed technique finds finer edges and suppresses pixels that do not belong to an edge, using fuzzy logic to handle the uncertainties that exist in many aspects of image processing.
Abstract: Edges are a basic feature of an image. Image edges carry rich information that is very significant for obtaining image characteristics for object recognition. Edge detection is one of the most commonly used techniques in image processing. This paper therefore presents a modified rule-based fuzzy logic technique, since fuzzy logic is well suited to handling the uncertainties that exist in many aspects of image processing. First, the gradient and the standard deviation are calculated and used as inputs to the fuzzy system. Traditional algorithms such as Sobel, Prewitt and LoG are implemented, their results are compared with the modified algorithm, and it is concluded that the proposed technique finds finer edges and suppresses pixels that do not belong to an edge.
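
A minimal sketch of the two fuzzy inputs the abstract names (gradient magnitude and local standard deviation), together with a toy "both inputs high implies edge" rule; the paper's actual membership functions and rule base are not reproduced, and the window size and thresholds are illustrative assumptions.

```python
import numpy as np

def edge_inputs(gray: np.ndarray, win: int = 3):
    """Compute the two inputs fed to the fuzzy system: the gradient magnitude
    and the local standard deviation of each pixel."""
    gy, gx = np.gradient(gray.astype(np.float64))
    grad = np.hypot(gx, gy)
    pad = win // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    std = np.empty_like(grad)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            std[r, c] = padded[r:r + win, c:c + win].std()
    return grad, std

def fuzzy_edge_score(grad, std, g_hi=60.0, s_hi=20.0):
    """A toy 'gradient high AND local std high -> edge' rule."""
    mu_g = np.clip(grad / g_hi, 0, 1)    # membership of 'gradient is high'
    mu_s = np.clip(std / s_hi, 0, 1)     # membership of 'local std is high'
    return np.minimum(mu_g, mu_s)        # min as the fuzzy AND

# Toy usage on a synthetic step edge
img = np.hstack([np.zeros((6, 3)), np.full((6, 3), 200.0)])
grad, std = edge_inputs(img)
print(fuzzy_edge_score(grad, std).round(2))
```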

Journal Article
TL;DR: A two-processor-based CPU scheduling (TPBCS) algorithm is proposed, where one processor is reserved exclusively for CPU-intensive processes and the other exclusively for I/O-intensive processes.
Abstract: The performance and efficiency of multitasking operating systems depend mainly on the CPU scheduling algorithm used. In a time-shared system, Round Robin (RR) scheduling gives an optimal solution, but it may not be suitable for real-time systems because it produces more context switches and larger waiting and turnaround times. In this paper a two-processor-based CPU scheduling (TPBCS) algorithm is proposed, where one processor is reserved exclusively for CPU-intensive processes and the other exclusively for I/O-intensive processes. This approach dispatches processes to the appropriate processor according to their percentage of CPU or I/O requirement. After the processes are dispatched to their respective processors, the time quantum is calculated and the processes are executed in increasing order of their burst times. Experimental analysis shows that our proposed algorithm performs better, reducing the average waiting time and average turnaround time.
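
A sketch of the dispatching idea, assuming each process carries its CPU-work fraction and burst time; the 0.5 cut-off and the tuple layout are illustrative assumptions rather than values taken from the paper.

```python
def dispatch(processes, cpu_queue, io_queue):
    """Route each process to the CPU-bound or I/O-bound processor by its
    fraction of CPU work, then let each processor run its queue in
    increasing burst-time order, as the abstract describes."""
    for pid, cpu_fraction, burst in processes:
        (cpu_queue if cpu_fraction >= 0.5 else io_queue).append((pid, burst))
    cpu_queue.sort(key=lambda p: p[1])
    io_queue.sort(key=lambda p: p[1])

# Toy usage: (pid, cpu_fraction, burst_time)
cpu_q, io_q = [], []
dispatch([("P1", 0.9, 12), ("P2", 0.2, 5), ("P3", 0.7, 4)], cpu_q, io_q)
print(cpu_q, io_q)   # -> [('P3', 4), ('P1', 12)] [('P2', 5)]
```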

Journal Article
TL;DR: A new fair-share scheduling algorithm with a weighted time slice is proposed and analyzed, which calculates the time quantum in each round based on a novel approach that makes the time quantum repeatedly adjustable according to the burst times of the currently running processes.
Abstract: The performance and efficiency of multitasking operating systems depend mainly on the CPU scheduling algorithm used. In a time-shared system, Round Robin (RR) scheduling gives an optimal solution, but it is not suitable for real-time systems because it produces a larger number of context switches and larger waiting and turnaround times. In this paper a new fair-share scheduling algorithm with a weighted time slice is proposed and analyzed, which calculates the time quantum in each round. Our proposed algorithm is based on a novel approach which makes the time quantum repeatedly adjustable according to the burst times of the currently running processes. The algorithm assigns a weight to each process, and the process having the least burst time is assigned the largest weight. The process having the largest weight is executed first, then the next largest weight, and so on. Experimental analysis shows that our proposed algorithm gives better results, reducing the average waiting time, the average turnaround time and the number of context switches.

Journal Article
TL;DR: Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications.
Abstract: Over the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes: (1) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques. In the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications.

Journal Article
TL;DR: Experimental results suggest that the proposed method to segment nucleus and cytoplasm of white blood cells performs well in identifying blood cell types regardless of their irregular shapes, sizes, and orientation.
Abstract: The analysis of blood cells in microscope images can provide useful information concerning the health of patients; however, manual classification of blood cells is time-consuming and susceptible to error due to the different morphological features of the cells. Therefore, a fast and automated method for identifying the different blood cells is required. In this paper, we propose a method to segment the nucleus and cytoplasm of white blood cells (WBC). A method that segments cell images with varying background and illumination conditions is designed. The segmentation results show better performance in comparison to conventional methods. Experimental results suggest that the proposed method performs well in identifying blood cell types regardless of their irregular shapes, sizes, and orientations.

Journal Article
TL;DR: A solution for automatic lip contour detection when a frontal view of the face is available is proposed; it has been tested on a database containing face images of different people and was found to have a maximum success rate of 85%.
Abstract: Lip contour detection and tracking is the most important prerequisite for computerized speech reading. Several approaches have been proposed for lip tracking once the lip contour has been accurately initialized on the first frame. Detection and tracking of the lip contour is an open issue in speech reading. A relatively large class of lip reading algorithms is based on lip contour analysis; in these cases, lip contour extraction is needed as the first step. By lip contour extraction, we usually refer to the process of lip contour detection in the first frame of an audio-visual image sequence, while obtaining the lip contour in subsequent frames is usually referred to as lip tracking. While there are well developed techniques and algorithms to perform lip contour tracking automatically, lip contour extraction in the first frame is different: it is a much more difficult task than tracking, due to the lack of good a-priori information about the mouth position in the image, the mouth size, the approximate shape of the mouth, mouth opening, etc. In this paper we propose a solution for automatic lip contour detection when a frontal view of the face is available. The proposed method has been tested on a database containing face images of different people and was found to have a maximum success rate of 85%.

Journal Article
TL;DR: This review paper is intended to summarize and compare the methods of automatic detection of brain tumors in magnetic resonance images (MRI) used in the different stages of a computer-aided detection (CAD) system.
Abstract: The segmentation of brain tumors in magnetic resonance images (MRI) is a challenging and difficult task because of the variety of their possible shapes, locations and image intensities. This review paper is intended to summarize and compare the methods of automatic detection of brain tumors in magnetic resonance images used in the different stages of a computer-aided detection (CAD) system. Brain image classification techniques are studied. Existing methods are classically divided into region-based and contour-based methods; these are usually dedicated to fully enhanced tumors or to specific types of tumors. The amount of resources required to describe the large set of data is simplified, and features are selected for tissue segmentation.

Journal Article
TL;DR: This paper provides a critical analysis of six of the most common encryption algorithms, namely DES, 3DES, RC2, Blowfish, AES (Rijndael) and RC6, and uses the comparison to identify the best symmetric encryption algorithm.
Abstract: Cryptology is a science that deals with codes and passwords. Cryptology is divided into cryptography and cryptanalysis: cryptography produces methods to protect data, while cryptanalysis tries to break the protected data. Cryptography provides solutions for four different security areas - confidentiality, authentication, integrity and control of interaction between the different parties involved in data exchange - which together lead to the security of information. Encryption algorithms play a key role in information security systems. This paper provides a critical analysis of six of the most common encryption algorithms, namely DES, 3DES, RC2, Blowfish, AES (Rijndael) and RC6. A comparative study has been carried out for these six encryption algorithms in terms of encryption key size, block size, number of rounds, encryption/decryption time, CPU process time, CPU clock cycles (in the form of throughput) and power consumption. These comparisons are used to identify the best symmetric encryption algorithm.
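
A comparison in this spirit can be timed with a general-purpose crypto library; the sketch below assumes the third-party pycryptodome package (which does not ship RC6, so only five of the six surveyed ciphers appear) and measures raw ECB throughput on a fixed buffer. The keys and buffer size are illustrative, and this is far simpler than the paper's full benchmark.

```python
import time
from Crypto.Cipher import AES, DES, DES3, Blowfish, ARC2

payload = b"\x00" * (1 << 20)          # 1 MiB of block-aligned data
ciphers = {
    "DES":      DES.new(b"8bytekey", DES.MODE_ECB),
    "3DES":     DES3.new(b"0123456789abcdefGHIJKLMN", DES3.MODE_ECB),
    "AES-128":  AES.new(b"0123456789abcdef", AES.MODE_ECB),
    "Blowfish": Blowfish.new(b"blowfishsecret", Blowfish.MODE_ECB),
    "RC2":      ARC2.new(b"rc2-key-16bytes!", ARC2.MODE_ECB),
}
for name, cipher in ciphers.items():
    start = time.perf_counter()
    cipher.encrypt(payload)
    elapsed = time.perf_counter() - start
    print(f"{name:9s} {len(payload) / elapsed / 1e6:7.1f} MB/s")
```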

Journal Article
TL;DR: This paper presents to the reader an investigation into the individual strengths and weaknesses of the most common techniques, including feature-based methods, PCA-based eigenfaces, LDA-based fisherfaces, ICA, Gabor wavelet-based methods, neural networks and hidden Markov models.
Abstract: Face recognition is an example of advanced object recognition. The process is influenced by several factors such as shape, reflectance, pose, occlusion and illumination, which make it even more difficult. Today there exist many well-known techniques to try to recognize a face, and experiments with implementations of the different methods have shown that they have individual strengths and weaknesses. We present to the reader an investigation into the individual strengths and weaknesses of the most common techniques, including feature-based methods, PCA-based eigenfaces, LDA-based fisherfaces, ICA, Gabor wavelet-based methods, neural networks and hidden Markov models. Hybrid systems try to combine the strengths and suppress the weaknesses of the different techniques, either in a parallel or a serial manner. The aim of the paper is to evaluate the different techniques and to consider different combinations of them. Here we compare and evaluate template-based and geometry-based face recognition, and also give a comprehensive survey of face recognition methods.

Journal Article
TL;DR: The architecture for mapping a Hindi language query entered by the user into an SQL query is discussed, which will help people more comfortable with the Hindi language to use database applications with ease.
Abstract: The need for a Hindi language interface has become increasingly acute as native speakers use databases for storing data. A large number of e-governance applications, such as agriculture, weather forecasting, railways and legacy matters, use databases. So, to use such database applications with ease, people who are more comfortable with the Hindi language require these applications to accept a simple sentence in Hindi and process it to generate an SQL query, which is then executed on the database to produce the results. Therefore, any interface in the Hindi language will be an asset to these people. This paper discusses the architecture for mapping a Hindi language query entered by the user into an SQL query.

Journal Article
TL;DR: This work proposes some novel approaches for converting a color image into a grayscale image, based on varying pixel depth, in order to improve the quality of black-and-white images.
Abstract: Color photography was originally rare and expensive, and color images became popular only in the middle of the 20th century. Color has indeed become ever more popular since, yet black-and-white remains a niche for people who use the medium for artistic purposes. Even in today's modern era, grayscale and black-and-white images remain important. Since the advent of color, black-and-white has connoted something nostalgic, historic and anachronistic. There are three basic and primitive techniques to convert a color image into its gray-level equivalent - the lightness method, the average method and the luminosity method. Here, we propose some novel approaches for converting a color image into a grayscale image, based on varying pixel depth.
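
The three primitive conversions the abstract names are standard formulas; a small sketch of them follows, using the commonly cited luminosity weights (which the abstract does not specify) as an assumption.

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "luminosity") -> np.ndarray:
    """The three primitive colour-to-grayscale conversions named in the paper:
    lightness  : (max(R,G,B) + min(R,G,B)) / 2
    average    : (R + G + B) / 3
    luminosity : 0.21 R + 0.72 G + 0.07 B  (weights favour green, to which
                 the human eye is most sensitive)
    """
    rgb = rgb.astype(np.float64)
    if method == "lightness":
        gray = (rgb.max(axis=2) + rgb.min(axis=2)) / 2
    elif method == "average":
        gray = rgb.mean(axis=2)
    elif method == "luminosity":
        gray = rgb @ np.array([0.21, 0.72, 0.07])
    else:
        raise ValueError(f"unknown method: {method}")
    return np.clip(gray, 0, 255).astype(np.uint8)

# The same pixel under the three rules
pixel = np.array([[[200, 100, 50]]], dtype=np.uint8)
for m in ("lightness", "average", "luminosity"):
    print(m, to_gray(pixel, m)[0, 0])   # -> 125, 116, 117
```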