
Showing papers in "Journal of Computer Science in 2010"


Journal ArticleDOI
TL;DR: The average time taken by the K-Means algorithm is greater than the time taken by the K-Medoids algorithm for both normal and uniform distributions, and the results proved to be satisfactory.
Abstract: Problem statement: Clustering is one of the most important research areas in the field of data mining. Clustering means creating groups of objects based on their features in such a way that the objects belonging to the same group are similar and those belonging to different groups are dissimilar. Clustering is an unsupervised learning technique. The main advantage of clustering is that interesting patterns and structures can be found directly from very large data sets with little or no background knowledge. Clustering algorithms can be applied in many domains. Approach: In this research, the most representative algorithms, K-Means and K-Medoids, were examined and analyzed based on their basic approach. The best algorithm in each category was found based on its performance. The input data points were generated in two ways, one by using a normal distribution and the other by applying a uniform distribution. Results: The randomly distributed data points were taken as input to these algorithms and clusters were found for each algorithm. The algorithms were implemented in the Java language and their performance was analyzed based on clustering quality. The execution time for the algorithms in each category was compared over different runs. The accuracy of the algorithms was investigated during different executions of the program on the input data points. Conclusion: The average time taken by the K-Means algorithm is greater than the time taken by the K-Medoids algorithm for both normal and uniform distributions. The results proved to be satisfactory.

211 citations
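A minimal, hedged sketch of the kind of timing comparison described above (not the authors' Java implementation): scikit-learn's KMeans is timed against a naive alternating-update K-Medoids on normally and uniformly distributed points. The cluster count, data sizes and the simple K-Medoids update rule are illustrative choices, so the timings will not reproduce the paper's figures.

```python
# Time K-Means against a simple K-Medoids on normal and uniform point clouds.
import time
import numpy as np
from sklearn.cluster import KMeans

def k_medoids(X, k, n_iter=20, seed=0):
    """Naive K-Medoids: assign points to the nearest medoid, then pick the member
    minimising total intra-cluster distance as the new medoid."""
    rng = np.random.default_rng(seed)
    medoids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            intra = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2).sum(axis=1)
            medoids[j] = members[intra.argmin()]
    return labels, medoids

for name, X in {"normal": np.random.default_rng(1).normal(size=(2000, 2)),
                "uniform": np.random.default_rng(1).uniform(size=(2000, 2))}.items():
    t0 = time.perf_counter()
    KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    t1 = time.perf_counter()
    k_medoids(X, k=3)
    t2 = time.perf_counter()
    print(f"{name}: k-means {t1 - t0:.3f}s, k-medoids {t2 - t1:.3f}s")
```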


Journal ArticleDOI
TL;DR: A real time vision system for hand gesture based computer interaction to control an event like navigation of slides in Power Point Presentation is proposed.
Abstract: Problem statement: With the development of ubiquitous computing, current user interaction approaches with keyboard, mouse and pen are not sufficient. Due to the limitation of these devices the useable command set is also limited. Direct use of hands can be used as an input device for providing natural interaction. Approach: In this study, Gaussian Mixture Model (GMM) was used to extract hand from the video sequence. Extreme points were extracted from the segmented hand using star skeletonization and recognition was performed by distance signature. Results: The proposed method was tested on the dataset captured in the closed environment with the assumption that the user should be in the Field Of View (FOV). This study was performed for 5 different datasets in varying lighting conditions. Conclusion: This study specifically proposed a real time vision system for hand gesture based computer interaction to control an event like navigation of slides in Power Point Presentation.

91 citations
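As a rough illustration of the pipeline above (GMM-based hand extraction followed by extreme-point detection), the sketch below uses OpenCV's MOG2 background subtractor as a readily available GMM stand-in and takes the extreme points of the largest foreground contour in place of star skeletonization; the webcam source and parameters are assumptions, not the authors' setup.

```python
# GMM-style foreground extraction plus extreme points of the largest blob.
import cv2

cap = cv2.VideoCapture(0)  # assumes a webcam in the field of view, as in the paper's FOV constraint
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        # Extreme points (leftmost, rightmost, topmost, bottommost) of the hand blob
        pts = [hand[hand[:, :, 0].argmin()][0], hand[hand[:, :, 0].argmax()][0],
               hand[hand[:, :, 1].argmin()][0], hand[hand[:, :, 1].argmax()][0]]
        for p in pts:
            cv2.circle(frame, (int(p[0]), int(p[1])), 6, (0, 0, 255), -1)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```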


Journal ArticleDOI
TL;DR: A classifier for fish images recognition is developed based on the combination between robust features extraction and neural network associated with the back-propagation algorithm to recognize an isolated pattern of interest in the image.
Abstract: Problem statement: Image recognition is a challenging problem that researchers have investigated for a long time, especially in recent years, due to distortion, noise, segmentation errors, overlap and occlusion of objects in digital images. Pattern recognition arises in many fields, for example, fingerprint verification, face recognition, iris discrimination, chromosome shape discrimination, optical character recognition, texture discrimination and speech recognition. A system for recognizing an isolated pattern of interest may serve as an approach for dealing with such applications. Scientists and engineers with interests in image processing and pattern recognition have developed various approaches to digital image recognition problems, such as neural networks, contour matching and statistics. Approach: In this study, our aim was to recognize an isolated pattern of interest in the image based on a combination of robust extracted features, which depend on size and shape measurements obtained from distance and geometrical measurements. Results: We present a system prototype for dealing with this problem. The system starts by acquiring an image containing a pattern of fish, then image feature extraction is performed relying on size and shape measurements. Our system was applied to 20 different fish families, each family having a different number of fish types, and our sample consists of 350 distinct fish images. These images were divided into two datasets: 257 training images and 93 testing images. An overall accuracy of 86% was obtained on the test dataset using a neural network trained with the back-propagation algorithm. Conclusion: We developed a classifier for fish image recognition. We chose a feature extraction method to fit our demands, and the classifier was designed and implemented to perform efficiently. The classifier is able to categorize a given fish into its cluster, to categorize the clustered fish as poisonous or non-poisonous and to categorize the poisonous and non-poisonous fish into their families.

88 citations
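A hedged sketch of the classification stage only: a backpropagation-trained MLP (scikit-learn) on size/shape feature vectors with the 257/93 train/test split mentioned above. The feature matrix here is random placeholder data, since the paper's distance and geometrical measurements are not reproduced.

```python
# Backprop-trained MLP over placeholder size/shape features for 20 fish families.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(350, 8))       # placeholder: 8 size/shape measurements per fish image
y = rng.integers(0, 20, size=350)   # placeholder: 20 fish families

X_train, X_test, y_train, y_test = X[:257], X[257:], y[:257], y[257:]

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```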


Journal ArticleDOI
TL;DR: The aim of this work was to evaluate the ability of the proposed skin texture recognition algorithm to discriminate between healthy and infected skin, taking the psoriasis disease as an example.
Abstract: Problem statement: In this study a skin disease diagnosis system was developed and tested. The system was used for diagnosis of the psoriasis skin disease. Approach: The present study relied on both skin color and texture features (features derived from the GLCM) to give better and more efficient recognition accuracy of skin diseases. We used feed-forward neural networks to classify input images as psoriasis-infected or non-infected. Results: The system gave very encouraging results during the neural network training and generalization phases. Conclusion: The aim of this work was to evaluate the ability of the proposed skin texture recognition algorithm to discriminate between healthy and infected skin, and we took the psoriasis disease as an example.

67 citations
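The sketch below illustrates the kind of GLCM-plus-classifier pipeline described, using scikit-image's graycomatrix/graycoprops and a small feed-forward network from scikit-learn; the synthetic patches stand in for healthy and psoriasis-infected skin regions and are not the study's data.

```python
# GLCM texture descriptors fed to a small feed-forward classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(gray_u8):
    """GLCM contrast/homogeneity/energy/correlation averaged over two directions."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(0)
# Placeholder patches standing in for healthy vs. psoriasis skin regions
healthy = [rng.integers(90, 130, (64, 64), dtype=np.uint8) for _ in range(20)]
infected = [rng.integers(40, 220, (64, 64), dtype=np.uint8) for _ in range(20)]

X = np.array([texture_features(p) for p in healthy + infected])
y = np.array([0] * 20 + [1] * 20)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```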


Journal ArticleDOI
TL;DR: This study showed that semi-supervised genetic algorithm-based clustering techniques can be applied to summarize relational data more effectively and efficiently.
Abstract: Problem statement: In solving a classification problem in relational data mining, traditional methods, for example, the C4.5 and its variants, usually require data transformations from datasets stored in multiple tables into a single table. Unfortunately, we may lose some information when we join tables with a high degree of one-to-many association. Therefore, data transformation becomes a tedious trial-and-error work and the classification result is often not very promising especially when the number of tables and the degree of one-to-many association are large. Approach: We proposed a genetic semi-supervised clustering technique as a means of aggregating data stored in multiple tables to facilitate the task of solving a classification problem in relational databases. This algorithm is suitable for classification of datasets with a high degree of one-to-many associations. It can be used in two ways. One is user-controlled clustering, where the user may control the result of clustering by varying the compactness of the spherical cluster. The other is automatic clustering, where a non-overlap clustering strategy is applied. In this study, we use the latter method to dynamically cluster multiple instances, as a means of aggregating them, and illustrate the effectiveness of this method using the semi-supervised genetic algorithm-based clustering technique. Results: It was shown in the experimental results that using the reciprocal of the Davies-Bouldin Index for cluster dispersion and the reciprocal of the Gini Index for cluster purity, as the fitness function in the Genetic Algorithm (GA), finds solutions with much greater accuracy. The results obtained in this study showed that automatic clustering (seeding), by optimizing the cluster dispersion or cluster purity alone using the GA, provides good results compared to traditional k-means clustering. However, the best result can be achieved by optimizing the combination of both the cluster dispersion and the cluster purity, putting more weight on the cluster purity measurement. Conclusion: This study showed that semi-supervised genetic algorithm-based clustering techniques can be applied to summarize relational data more effectively and efficiently.

65 citations
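A small sketch of the fitness measure described above: the reciprocal Davies-Bouldin index (cluster dispersion) combined with the reciprocal Gini index (cluster purity), with more weight on purity. The 0.6 purity weight is illustrative, not the paper's value; a GA would call this function to score each candidate clustering.

```python
# Fitness = weighted sum of reciprocal Davies-Bouldin and reciprocal Gini indices.
import numpy as np
from sklearn.metrics import davies_bouldin_score
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

def gini_index(cluster_labels, class_labels):
    """Weighted Gini impurity of the class distribution inside each cluster (numpy arrays)."""
    total, gini = len(class_labels), 0.0
    for c in np.unique(cluster_labels):
        members = class_labels[cluster_labels == c]
        _, counts = np.unique(members, return_counts=True)
        p = counts / counts.sum()
        gini += (len(members) / total) * (1.0 - np.sum(p ** 2))
    return gini

def fitness(X, cluster_labels, class_labels, w_purity=0.6, eps=1e-9):
    dispersion = 1.0 / (davies_bouldin_score(X, cluster_labels) + eps)
    purity = 1.0 / (gini_index(cluster_labels, class_labels) + eps)
    return (1.0 - w_purity) * dispersion + w_purity * purity

# Tiny demonstration on synthetic data (a GA would evaluate many candidate labelings)
X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("fitness:", fitness(X, labels, y_true))
```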


Journal ArticleDOI
TL;DR: This study begins with a brief overview of the machine translation scenario in India, drawing on data and previous research on machine translation, and notes that several machine translation systems have been developed in India for translation from English to Indian languages using different approaches.
Abstract: Problem statement: In a large multilingual society like India, there is a great demand for translation of documents from one language to another. Approach: Most of the state governments work in their provincial languages, whereas the central government's official documents and reports are in English and Hindi. Results: In order to have appropriate communication there is a need to translate these documents and reports into the respective provincial languages. Natural Language Processing (NLP) and Machine Translation (MT) tools are upcoming areas of study in the field of computational linguistics. Machine translation is the application of computers to the translation of texts from one natural language into another natural language. It is an important sub-discipline of the wider field of artificial intelligence. Conclusion/Recommendations: Certain machine translation systems have been developed in India for translation from English to Indian languages using different approaches. It is with this perspective that we broach this study, launching our theme with a brief on the machine translation systems scenario in India through data and previous research on machine translation.

62 citations


Journal ArticleDOI
TL;DR: The results indicated that an ANN-based IPS can provide accuracy and precision that are quite adequate for the development of indoor LBS while using the already available Wi-Fi infrastructure; the proposed method for collecting the training data can also help in addressing noise and interference, which are among the major factors affecting the accuracy of IPS.
Abstract: Problem statement: Location knowledge in indoor environments using Indoor Positioning Systems (IPS) has become very useful and popular in recent years. A number of Location Based Services (LBS) have been developed which are based on IPS; these LBS include asset tracking, inventory management and security based applications. Many next-generation LBS applications such as social networking, local search, advertising and geo-tagging are expected to be used in urban and indoor environments where GNSS either underperforms in terms of fix times or accuracy, or fails altogether. The objective was to develop an IPS based on Wi-Fi Received Signal Strength (RSS) using Artificial Neural Networks (ANN), which should use the already available Wi-Fi infrastructure in a heterogeneous environment. Approach: This study discussed the use of ANN for IPS using RSS in an indoor wireless facility which has varying human activity, wall materials and types of Wireless Access Points (WAP), hence simulating a heterogeneous environment. The proposed system used the backpropagation method with 4 input neurons, 2 output neurons and 4 hidden layers. The model was trained with three different types of training data. The accuracy assessment for each training data set was performed by computing the distance error and average distance error. Results: The results of the experiments showed that using ANN with the proposed method of collecting training data, a maximum accuracy of 0.7 m can be achieved, with 30% of the distance errors less than 1 m and 60% of the distance errors within the range of 1-2 m, whereas a maximum accuracy of 1.01 m can be achieved with the commonly used method of collecting training data. The proposed model also showed 67% more accuracy as compared to a probabilistic model. Conclusion: The results indicated that an ANN based IPS can provide accuracy and precision which are quite adequate for the development of indoor LBS while using the already available Wi-Fi infrastructure; the proposed method for collecting the training data can also help in addressing the noise and interference which are among the major factors affecting the accuracy of IPS.

55 citations
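A hedged sketch of the regression setup described above (four RSS inputs, two position outputs, backpropagation training), using scikit-learn's MLPRegressor and a synthetic log-distance path-loss model in place of surveyed Wi-Fi fingerprints; the hidden-layer sizes are assumptions, not the paper's architecture.

```python
# RSS-fingerprint regression: four access-point RSS values in, (x, y) position out.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
ap_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # 4 WAP positions (m)

def rss_from(pos):
    """Simple log-distance path-loss model with noise, standing in for measured RSS."""
    d = np.linalg.norm(ap_xy - pos, axis=1) + 0.1
    return -40.0 - 20.0 * np.log10(d) + rng.normal(0, 2.0, size=4)

positions = rng.uniform(0, 10, size=(500, 2))   # training points on a 10 m x 10 m floor
rss = np.array([rss_from(p) for p in positions])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(rss, positions)

test = rng.uniform(0, 10, size=(50, 2))
pred = model.predict(np.array([rss_from(p) for p in test]))
print("mean distance error (m):", np.linalg.norm(pred - test, axis=1).mean())
```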


Journal ArticleDOI
TL;DR: A new text summarizer based on fuzzy logic is developed, which performs better than the MS Word summarizer as far as the semantics of the original text was concerned and utilizes ANN attribute through a connectionist model to achieve the best results.
Abstract: Problem statement: Text summarization takes care of choosing the most significant portions of text and generates coherent summaries that express the main intent of the given document. This study aims to compare the performance of three text summarization systems developed by the authors with some of the existing summarization systems available. These three approaches to text summarization are based on semantic nets, fuzzy logic and evolutionary programming respectively. All three represent approaches to achieve connectionism. Approach: The first approach performs Part of Speech (POS) tagging, semantic and pragmatic analysis and cohesion. The second system under discussion was a new extraction based automated system for text summarization using a decision module that employs fuzzy concepts. The third system under consideration was based on a combination of evolutionary, fuzzy and connectionist techniques. Results: The semantic net approach performed better than the MS Word summarizer as far as the semantics of the original text was concerned. To compare our summaries with those of the well known MS Word, Intellexer and Copernic summarizers, we used DUC's human generated summaries as the benchmark. The results were very encouraging. The second approach, based on fuzzy logic, results in an efficient system since fuzzy logic mimics human decision making. The third system showed more promising results in terms of precision and F-measure than all the other approaches. Conclusion: Our first approach used WordNet, a lexical database for English. Unlike other dictionaries, WordNet does not include information about etymology, pronunciation and the forms of irregular verbs and contains only limited information about usage. To overcome this limitation, we developed a new text summarizer based on fuzzy logic. As the text summarization application requires learning ability based on activation, we utilize ANN attributes through a connectionist model to achieve the best results.

45 citations
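A toy sketch of the fuzzy-decision idea behind the second system described above: each sentence is scored from a few crisp features passed through simple membership functions and a weighted rule, then the top-ranked sentences are extracted. The features, memberships and weights are illustrative, not the authors' rule base.

```python
# Extractive summarization via simple fuzzy-style sentence scoring.
import re
from collections import Counter

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def summarize(text, n_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    scores = []
    for i, s in enumerate(sentences):
        toks = re.findall(r"\w+", s.lower())
        keyword = sum(freq[t] for t in toks) / (max(len(toks), 1) * max(freq.values()))
        length = tri(len(toks), 3, 15, 40)               # prefer medium-length sentences
        position = 1.0 - i / max(len(sentences) - 1, 1)  # earlier sentences score higher
        scores.append(0.5 * keyword + 0.3 * length + 0.2 * position)  # simple weighted "rule"
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:n_sentences])
    return " ".join(sentences[i] for i in top)

print(summarize("Fuzzy logic mimics human decision making. It handles vague features well. "
                "Extractive summarizers rank sentences. The highest ranked sentences form the summary."))
```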


Journal ArticleDOI
TL;DR: In this paper, a review of EMG signal classification methods is presented, focusing on the advances and improvements on different methodologies used for EMG signals with their efficiency, flexibility, and applicability in different applications.
Abstract: Problem statement: The social demands for Quality Of Life (QOL) are increasing with the exponentially expanding silver generation. To improve the QOL of disabled and elderly people, robotics researchers and biomedical engineers have been trying to combine their techniques into rehabilitation systems. Various biomedical signals (biosignals) acquired from a specialized tissue, organ, or cell system like the nervous system are the driving force for the entire system. Examples of biosignals include the Electroencephalogram (EEG), Electrooculogram (EOG), Electroneurogram (ENG) and Electromyogram (EMG). Approach: Among the biosignals, research on EMG signal processing and control is currently expanding in various directions. EMG signal based research is ongoing for the development of simple, robust, user friendly, efficient interfacing devices/systems for the disabled. The advancement can be observed in the areas of robotic devices, prosthetic limbs, exoskeletons, wearable computers, I/O for virtual reality games and physical exercise equipment. An EMG signal based graphical controller or interfacing system enables the physically disabled to use word processing programs, other personal computer software and the internet. Results: Depending on the application, the acquired and processed signals need to be classified for interpretation into mechanical force or machine/computer commands. Conclusion: This study focused on the advances and improvements in the different methodologies used for EMG signal classification, with their efficiency, flexibility and applications. This review will be beneficial to EMG signal researchers as a reference and comparison study of EMG classifiers. For the development of robust, flexible and efficient applications, this study opens a pathway for researchers to perform future comparative studies between different EMG classification methods.

44 citations


Journal ArticleDOI
TL;DR: The study revealed that MIS was adequately used in decision-making during crises and recommended that the MIS units should be maintained to ensure a free flow of information and adequate use of MIS in Decision-making.
Abstract: Problem statement: This study investigated and identified the important role of MIS in the decision-making process during crises at the Directorate General of Border Guard in Saudi Arabia. In addition, it examined obstacles that limit the role of MIS in decision-making during crises. Approach: The study used a descriptive research design of the survey type. Data were collected from a sample of officers in the Directorate General of Border Guard (DGBG) in Saudi Arabia. Respondents consisted of officers holding administrative positions and senior unit heads, selected using a stratified random sampling technique. Results: Data collected were analyzed using frequency counts, percentages, means, standard deviations and Chi-square test statistics. The study revealed that MIS was adequately used in decision-making during crises. Conclusion: The study confirmed that MIS should be used more heavily in the decision process during crises. It was recommended that the MIS units be maintained to ensure a free flow of information and adequate use of MIS in decision-making.

42 citations


Journal ArticleDOI
TL;DR: It is observed that the neural network is capable of detecting skin in complex lighting and background environments and the classifier has the ability to classify the skin pixels belonging to people from different ethnic groups even when they are present simultaneously in an image.
Abstract: Problem statement: Skin color detection is used as a preliminary step in numerous computer vision applications like face detection, nudity recognition, hand gesture detection and person identification. In this study we present a pixel-based skin color classification approach, for detecting skin pixels and non-skin pixels in color images, using a novel neural network symmetric classifier. The neural classifiers used in the literature either use a symmetric model with a single neuron in the output layer or use two separate neural networks (asymmetric model), one for each of the skin and non-skin classes. The novelty of our approach is that it has two output layer neurons, one each for the skin and non-skin classes, instead of using two separate classifiers. Thus, by using a single neural network classifier, we have improved the separability between these two classes, eliminating the additional time complexity needed by an asymmetric classifier. Approach: Skin samples from web images of people from different ethnic groups were collected and used for training. Ground truth skin-segmented images were obtained by using a semiautomatic skin segmentation tool developed by the authors. The ground truth database of skin-segmented images thus obtained was used to evaluate the performance of our NN based classifier. Results: With proper selection of the optimum classification threshold, which varies from image to image, the classifier gave a detection rate of more than 90% with 7% false positives on average. Conclusion/Recommendations: It is observed that the neural network is capable of detecting skin in complex lighting and background environments. The classifier has the ability to classify the skin pixels belonging to people from different ethnic groups even when they are present simultaneously in an image. The proper choice of the optimum classification threshold that varies from image to image is an issue here. Automatic computation of this optimum threshold for each image is desired in practical skin detection applications. This issue can be taken up as a future study, which will enable us to perform fully automatic skin segmentation with reduced false positives.

Journal ArticleDOI
TL;DR: This work aims to propose a new realistic fuzzy logic model to achieve more accuracy in software effort estimation; the proposed model was found to perform better than the original COCOMO II, and the achieved results were closer to the actual effort.
Abstract: Problem statement: Software development effort estimation is the process of predicting the most realistic amount of effort required for developing software based on some parameters. It has been one of the biggest challenges in computer science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques, such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points, require a lengthy estimation process, which nowadays is not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work aims to propose a new realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing input parameters using two-sided Gaussian functions, which give a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than the classical intervals in COCOMO II. Using the advantages of fuzzy logic, such as fuzzy sets, input parameters can be specified by the distribution of their possible values, and these fuzzy sets are represented by membership functions. In this study, to get a smoother transition in the membership function for input parameters, their associated linguistic values were represented by two-sided Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results attained by applying COCOMO II and the proposed fuzzy logic model to the NASA dataset and an artificially created dataset, it was found that the proposed model performed better than the original COCOMO II and the achieved results were closer to the actual effort. The relative error for the proposed model using two-sided Gaussian membership functions is lower than the error obtained using the original COCOMO II. Conclusion: Based on the achieved results, it was concluded that, by using soft computing approaches such as fuzzy logic and their advantages, good prediction, adaptation, understandability and the accuracy of software effort estimation can be achieved, and the estimate can be very close to the actual effort. This novel model will lead researchers to focus on the benefits of non-algorithmic models to overcome estimation problems.
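A minimal sketch of the central idea under stated assumptions: a two-sided Gaussian membership function replaces crisp rating intervals for one COCOMO II effort multiplier, and the defuzzified value feeds the usual effort equation Effort = A × Size^E × EM. The membership anchors, weights and the exponent are illustrative, not calibrated COCOMO II values.

```python
# Two-sided Gaussian membership for an effort multiplier, then the COCOMO II effort equation.
import numpy as np

def gauss2(x, m1, s1, m2, s2):
    """Two-sided Gaussian MF: left Gaussian shoulder, flat top on [m1, m2], right shoulder."""
    x = np.asarray(x, dtype=float)
    left = np.exp(-0.5 * ((x - m1) / s1) ** 2)
    right = np.exp(-0.5 * ((x - m2) / s2) ** 2)
    return np.where(x < m1, left, np.where(x > m2, right, 1.0))

# Illustrative "nominal" and "high" levels for one effort multiplier's rating scale
x = np.linspace(0.7, 1.5, 400)
nominal = gauss2(x, 0.95, 0.05, 1.05, 0.05)
high = gauss2(x, 1.10, 0.05, 1.20, 0.05)

def defuzzify(x, mu):
    return np.sum(x * mu) / np.sum(mu)          # centroid defuzzification

em = defuzzify(x, 0.4 * nominal + 0.6 * high)   # a blended rating between nominal and high
size_ksloc, A, E = 40.0, 2.94, 1.10             # A from COCOMO II; E is an assumed near-nominal exponent
effort_pm = A * size_ksloc ** E * em
print(f"defuzzified multiplier {em:.3f}, estimated effort {effort_pm:.1f} person-months")
```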

Journal ArticleDOI
TL;DR: This study presented the Knowledge Extract, Profiling and Sharing Network (KEPSNet), a framework to facilitate the codification of knowledge and competencies management, adapting knowledge management processes in capturing, storing, sharing and reusing knowledge and competencies.
Abstract: Problem statement: In managing knowledge and competencies as a strategic advantage to an organization, there are difficulties in capturing, storing, sharing and reusing all this knowledge. Researchers have agreed that assessing tacit knowledge is difficult because the know-how of an employee is elusive, let alone assessing it. The problem is compounded when employees leave the organization or become unavailable due to their mobility within the organization. As a result, various approaches to the collection and codification of knowledge have emerged. One of the most important approaches to emerge is knowledge management. Approach: In this study, we presented the Knowledge Extract, Profiling and Sharing Network (KEPSNet), a framework to facilitate the codification of knowledge and competencies management, adapting knowledge management processes in capturing, storing, sharing and reusing knowledge and competencies. Results: We enhanced these processes autonomously by capturing knowledge and competencies in tacit and explicit form from members of a group project implementation, in the form of concept maps, and managed them according to the knowledge management process. A case study in a software development group setting was evaluated, and the knowledge management process outputs generated from the KEPSNet prototype were compared with the results from the project manager in managing the project. Two sets of questionnaires were given to the group members before and after implementing KEPSNet. Conclusion/Recommendations: The result of the evaluation validates the viability of the key concept presented. Codification of tacit knowledge resulted in the codified knowledge and competencies being recognized.

Journal ArticleDOI
TL;DR: This study proposed a method called the Single Pass Seed Selection (SPSS) algorithm, a modification of k-means++ that initializes the first seed and the probable distance for k-means++ based on the point that is close to the largest number of other points in the data set.
Abstract: Problem statement: The k-means method is one of the most widely used clustering techniques for various applications. However, k-means often converges to a local optimum and the result depends on the initial seeds. An inappropriate choice of initial seeds may yield poor results. k-means++ is a way of initializing k-means by choosing initial seeds with specific probabilities. Due to the random selection of the first seed and the minimum probable distance, k-means++ also produces different clusters in different runs with different numbers of iterations. Approach: In this study we proposed a method called the Single Pass Seed Selection (SPSS) algorithm, a modification of k-means++ that initializes the first seed and the probable distance for k-means++ based on the point which is close to the largest number of other points in the data set. Result: We evaluated its performance by applying it to various datasets and comparing with k-means++. The SPSS algorithm is a single pass algorithm yielding a unique solution in fewer iterations compared to k-means++. Experimental results on real data sets (4-60 dimensions, 27-10945 objects and 2-10 clusters) from UCI demonstrated the effectiveness of SPSS in producing consistent clustering results. Conclusion: SPSS performed well on high dimensional data sets. Its efficiency increased with the number of features in the data set; in particular, we suggest the proposed method when the number of features is greater than 10.
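The sketch below contrasts k-means++-style seeding with a deterministic variant in the spirit of SPSS: the first seed is the point closest to all other points, and the remaining seeds are spread deterministically so repeated runs give the same clusters. This is an interpretation of the idea, not the paper's exact SPSS procedure.

```python
# Deterministic "densest point first" seeding versus classic random-first k-means++ seeding.
import numpy as np

def kmeanspp_seeds(X, k, first=None, rng=np.random.default_rng(0)):
    if first is None:
        first = rng.integers(len(X))            # classic k-means++: random first seed
    seeds = [X[first]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - s) ** 2, axis=1) for s in seeds], axis=0)
        # deterministic farthest-point choice instead of sampling proportional to d2,
        # so repeated runs yield the same seeds
        seeds.append(X[np.argmax(d2)])
    return np.array(seeds)

def spss_like_seeds(X, k):
    pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    densest = int(np.argmin(pairwise.sum(axis=1)))   # point nearest to all other points
    return kmeanspp_seeds(X, k, first=densest)

X = np.random.default_rng(1).normal(size=(300, 4))
print(spss_like_seeds(X, k=3))
```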

Journal ArticleDOI
TL;DR: This study used data mining modeling techniques to examine the blood donor classification using the CART derived model along with the extended definition for identifying regular voluntary donors to provide a good classification accuracy based model.
Abstract: Problem statement: This study used data mining modeling techniques to examine blood donor classification. The availability of blood in blood banks is a critical and important aspect of a healthcare system. Blood banks (in the developing countries context) are typically based on a healthy person voluntarily donating blood, which is used for transfusions or made into medications. The ability to identify regular blood donors will enable blood banks and voluntary organizations to plan systematically for organizing blood donation camps in an effective manner. Approach: Identify blood donation behavior using the classification algorithms of data mining. The analysis was carried out using a standard blood transfusion data set and the CART decision tree algorithm implemented in Weka. Results: Numerical experimental results on the UCI ML blood transfusion data with the enhancements helped to identify donor classification. Conclusion: The CART derived model, along with the extended definition for identifying regular voluntary donors, provided a model with good classification accuracy.
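A hedged sketch of the experiment's core: a CART-style decision tree (scikit-learn's DecisionTreeClassifier with Gini splits, standing in for Weka's CART) on the UCI blood-transfusion data; the OpenML dataset name below is assumed to resolve to that data set.

```python
# CART-style decision tree on the blood transfusion (donor) data.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Assumed OpenML name for the UCI Blood Transfusion Service Center data set
data = fetch_openml("blood-transfusion-service-center", version=1, as_frame=True)
X, y = data.data, data.target   # recency, frequency, monetary, time -> donated or not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)  # CART uses Gini splits
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```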

Journal ArticleDOI
TL;DR: It is concluded that TORA performs better when the number of nodes in the network is increased; the three routing protocols were studied on the basis of network throughput with respect to an increasing number of nodes in the network.
Abstract: Problem statement: In this study, an attempt was made to compare the performance of reactive ad-hoc routing protocols using the OPNET modeler with respect to an increasing number of nodes in the network. Approach: In the present study, we compared various reactive routing protocols, namely Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR) and the Temporally Ordered Routing Algorithm (TORA), on the basis of their throughput while increasing the number of nodes in the network. Results: A comparative study of the routing protocols on the basis of network throughput with respect to an increasing number of nodes in the network. Conclusion/Recommendations: The three routing protocols were studied on the basis of network throughput with respect to an increasing number of nodes, and it was concluded that TORA performs better when the number of nodes in the network is increased.

Journal ArticleDOI
TL;DR: The performance of the designed BPNN image compression system can be increased by modifying the network itself, learning parameters and weights, as well as reducing the chance of error occurring during the compressed image transmission through analog or digital channel.
Abstract: Problem statement: The problem inherent to any digital image is the large amount of bandwidth required for transmission or storage. This has driven the research area of image compression to develop algorithms that compress images to lower data rates with better quality. Artificial neural networks are becoming attractive in image processing where high computational performance and parallel architectures are required. Approach: In this research, a three-layered Backpropagation Neural Network (BPNN) was designed for building an image compression/decompression system. The backpropagation algorithm (BP) was used for training the designed BPNN. Many techniques were used to speed up and improve this algorithm by using different BPNN architectures and different values of the learning rate and momentum variables. Results: Experiments were carried out and the results obtained, such as the Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR), were compared for BP with different BPNN architectures and different learning parameters. The efficiency of the designed BPNN comes from reducing the chance of error occurring during the compressed image transmission through an analog or digital channel. Conclusion: The performance of the designed BPNN image compression system can be increased by modifying the network itself, the learning parameters and the weights. Practically, we note that the BPNN has the ability to compress untrained images, but not with the same performance as for trained images.
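An illustrative sketch of backpropagation-based image compression: 8×8 blocks pass through a narrow hidden layer (the compressed code) and are reconstructed at the output, i.e., a small autoencoder trained by backprop. The architecture, learning rate and momentum values are placeholders, and the synthetic image stands in for training data.

```python
# 64 -> 16 -> 64 autoencoder on 8x8 image blocks, trained with backpropagation (SGD).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
image = np.clip(np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
                + rng.normal(0, 0.05, (128, 128)), 0, 1)   # stand-in training image in [0, 1]

blocks = (image.reshape(16, 8, 16, 8)                      # cut into 8x8 blocks ...
               .transpose(0, 2, 1, 3)
               .reshape(-1, 64))                            # ... one 64-dim vector per block

# The 16-unit hidden layer gives a nominal 4:1 compression ratio
net = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic", solver="sgd",
                   learning_rate_init=0.01, momentum=0.9, max_iter=2000, random_state=0)
net.fit(blocks, blocks)                                     # reconstruct the input from the code

recon = net.predict(blocks)
mse = np.mean((recon - blocks) ** 2)
psnr = 10 * np.log10(1.0 / mse)                             # PSNR for unit-range data
print(f"PSNR of reconstructed blocks: {psnr:.2f} dB")
```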

Journal ArticleDOI
TL;DR: Fuzzy based model can represent and manipulate agriculture knowledge that is incomplete or vague and it can be used to determine land limitation rating and effective decisions can be made for land suitability evaluation and crop selection problem.
Abstract: Problem statement: Evaluating land suitability and selecting crops in modern agriculture is of critical importance to every organization. This is because the narrower the area of land, the more effective planting must be in accordance with the requirements of the land. The process of evaluating the land suitability class and selecting plants in accordance with the decision maker's requirements is complex and unstructured. Approach: This study presented a fuzzy-based Decision Support System (DSS) for evaluating land suitability and selecting crops to be planted. Fuzzy rules were developed for evaluating land suitability and selecting the appropriate crops to be planted, considering the decision maker's requirements in crop selection and making efficient use of the powerful reasoning and explanation capabilities of the DSS. The idea of letting the problem to be solved determine the method to be used was incorporated into the DSS development. Results: As a result, effective decisions can be made for the land suitability evaluation and crop selection problem. An example was presented to demonstrate the applicability of the proposed DSS for solving the problem of evaluating land suitability and selecting crops in real world situations. Conclusion: The fuzzy based model can represent and manipulate agricultural knowledge that is incomplete or vague, and it can be used to determine the land limitation rating. The rating value was used to determine the limitation level of the land and to determine the most suitable crops to cultivate for the existing condition of the land.

Journal ArticleDOI
TL;DR: This study had proved that, when given sufficient training samples, LDA is able to provide better discriminant ability in feature extraction for face biometrics.
Abstract: Problem statement: In facial biometrics, face features are used as the required human traits for automatic recognition. Features extracted from face images are significant for face biometric system performance. Approach: In this study, a facial biometric framework was designed based on two subspace methods, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). First, PCA is used for dimension reduction, where original face images are projected into lower-dimensional face representations. Second, LDA was proposed to provide better discrimination. Both PCA and LDA features were presented to a Euclidean distance measurement, which is conveniently used as a benchmark. The algorithms were evaluated in face identification and verification using a standard face database (AT&T) and a locally collected database (CBE), consisting of 400 and 320 images respectively. Results: LDA-based methods outperform PCA for both face identification and verification. For face identification, PCA achieves an accuracy of 91.9% (AT&T) and 76.7% (CBE) while LDA achieves 94.2% (AT&T) and 83.1% (CBE). For face verification, PCA achieves an Equal Error Rate (EER) of 1.15% (AT&T) and 7.3% (CBE) while LDA achieves 0.78% (AT&T) and 5.81% (CBE). Conclusion/Recommendations: This study showed that, when given sufficient training samples, LDA is able to provide better discriminant ability in feature extraction for face biometrics.
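A sketch of the PCA-then-LDA pipeline on the AT&T (ORL) faces, which scikit-learn ships as fetch_olivetti_faces (400 images, 40 subjects); a 1-nearest-neighbour Euclidean matcher stands in for the distance benchmark, and the component counts are assumptions rather than the paper's settings.

```python
# PCA for dimension reduction, LDA for discrimination, Euclidean 1-NN for identification.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pipe = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),      # dimension reduction
    LinearDiscriminantAnalysis(n_components=39),              # at most n_classes - 1 = 39 axes
    KNeighborsClassifier(n_neighbors=1, metric="euclidean"))  # Euclidean-distance matcher
pipe.fit(X_train, y_train)
print("identification accuracy:", pipe.score(X_test, y_test))
```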

Journal ArticleDOI
TL;DR: The proposed iris signature is comparatively small, just 1×24 in size, whereas the iris signature length used by Daugman's algorithm is comparatively very high, a 1×2048 phase vector, so the proposed approach has definitely contributed to improving the speed of the system.
Abstract: Problem statement: In any real time biometric system, processing speed and recognition time are crucial parameters. Reducing processing time involves many parameters like normalization, FAR, FRR, management of eyelid and eyelash occlusions, size of the signature, etc. Normalization consumes a substantial amount of the system's time. This study contributes an improved iris recognition system with reduced processing time, False Acceptance Rate (FAR) and False Rejection Rate (FRR). Approach: To improve the performance and reliability of the biometric system, the iris normalization process used traditionally in iris recognition systems was avoided. The technique proposed here uses different masks to filter the iris out of an eye image. A comparative study of different masks was done and an optimized mask is proposed. The experiment was carried out on the CASIA database consisting of 756 iris images of 108 persons. Each person contributes seven eye images (108×7 = 756 images) to the database. Results: In the proposed method: (1) the normalization step is avoided; (2) computational time is reduced by 0.3342 sec; (3) the iris signature size is reduced; (4) performance parameters are improved (with the reduced feature size, the proposed method achieves 99.4866% accuracy, 0.0069% FAR, 1.0198% FRR and a significant increase in the speed of the system). Conclusion: The iris signature proposed is comparatively small, just 1×24 in size. Though Daugman's method gives the best accuracy of 99.90%, the iris signature length used by that algorithm is comparatively very high, namely a 1×2048 phase vector. Daugman also used phase information in signature formation. Our method gives an accuracy of 99.474% with a signature of comparatively very small length. This has definitely contributed to improving the speed.

Journal ArticleDOI
TL;DR: It was found that the texture features extracted from benign and cystic lesions of an elastogram are more distinct than those of an ultrasound image; classification of breast lesions using these features is under implementation.
Abstract: Problem statement: Elastography has been developed as a quantitative approach to imaging the linear elastic properties of tissues to detect suspicious tumors. We propose an automatic feature extraction method in ultrasound elastography and echography for characterization of breast lesions. Approach: The proposed algorithm was tested on 40 pairs of biopsy-proven ultrasound elastography and echography images, of which 11 are cystic, 16 benign and 13 malignant lesions. Ultrasound elastography and echography images of breast tissue were acquired using a Siemens (Acuson Antares) ultrasound scanner with a 7.3 MHz linear array transducer. The images were preprocessed and subjected to automatic thresholding, resulting in binary images. The contours of a breast tumor from both echographic and elastographic images were segmented using the level set method. Initially, six texture features of segmented lesions were computed from the two image types, followed by three strain and two shape features computed using parameters from the segmented lesions of both elastographic and echographic images. Results: These features were computed to assess their effectiveness at distinguishing benign, malignant and cystic lesions. It was found that the texture features extracted from benign and cystic lesions of an elastogram are more distinct than those of an ultrasound image. The strain and shape features of malignant lesions are distinct from those of benign lesions, but these features do not show much variation between benign and cystic lesions. Conclusion: As strain, shape and texture features are distinct for benign, malignant and cystic lesions, classification of breast lesions using these features is under implementation.

Journal ArticleDOI
TL;DR: This study proposed novel methods of financial time series indexing by considering their zigzag movement and illustrated mostly acceptable performance in tree operations and dimensionality reduction compared to an existing similar technique, the Specialized Binary Tree.
Abstract: The objective of the current study was to establish an efficient and reproducible in vitro plant regeneration protocol using cotyledonary explant for Citrullus lanatus cv. Round Dragon. To achieve optimal conditions for adventitious shoot induction, cotyledon explants of 5-day-old seedlings, 7-day-old seedlings and 9-day-old seedlings were tested for regeneration potential on Murashige and Skoog (MS) media supplemented with 2.3 mg L-1 BAP. Results showed that high frequency of in vitro adventitious shoot regeneration was induced from the proximal region of 5-day-old seedlings (93%) with 19.80±0.99 shoots per responding explant after 6 weeks. Adventitious shoots induced from 5-day-old seedlings after 6 weeks were transferred to MS shoot regeneration medium without plant growth regulator for shoot elongation for 4 weeks. The influence of various concentrations of IBA, IAA and NAA on root initiation was examined on half-strength and full-strength of MS rooting medium. The best response for root initiation was obtained from the microshoots grown in full-strength MS rooting medium compared to the half-strength MS rooting medium. Furthermore, IBA was more efficient in promoting root induction than IAA and NAA, resulting in a higher rate of root initiation (100%) at the concentration of 0.1 mg L-1 IBA. Therefore, elongated shoots were rooted in MS medium supplemented with 0.1 mg L-1 IBA for 3 weeks. Rooted plantlets were acclimatized successfully under ex vitro conditions.

Journal ArticleDOI
TL;DR: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate and achieve a higher degree of robustness than approximated moments against rotation, scaling, flipping, JPEG compression and affine transformation.
Abstract: Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used just as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, JPEG compression and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a dynamic power adjustment protocol that will be used for sending the periodical safety message, based on analysis of the channel status depending on the channel congestion and the power used for transmission.
Abstract: Problem statement: Vehicular Ad hoc Networks (VANET) are one of the most challenging research areas in the field of Mobile Ad Hoc Networks. Approach: In this research we proposed a dynamic power adjustment protocol to be used for sending the periodical safety message (beacon), based on analysis of the channel status depending on the channel congestion and the power used for transmission. Results: The Beacon Power Control (BPC) protocol first senses and examines the percentage of channel congestion; the result obtained is then used to adjust the transmission power for the safety message to reach the optimal power. Conclusion/Recommendations: This will lead to decreased congestion in the channel and achieve good channel performance and beacon dissemination.
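A conceptual sketch of the control loop described above: sense the channel busy ratio each beacon interval, then step the beacon transmit power down when the channel is congested and up when it is idle, within bounds. The thresholds and step size are illustrative, not the BPC protocol's parameters.

```python
# Beacon transmit-power adjustment driven by the sensed channel busy ratio (CBR).
def adjust_beacon_power(tx_power_dbm, channel_busy_ratio,
                        target_cbr=0.6, step_db=1.0,
                        min_dbm=10.0, max_dbm=33.0):
    """Return the transmit power to use for the next beacon interval."""
    if channel_busy_ratio > target_cbr:           # congested: shrink the beacon's footprint
        tx_power_dbm -= step_db
    elif channel_busy_ratio < target_cbr - 0.1:   # under-used: extend awareness range
        tx_power_dbm += step_db
    return min(max(tx_power_dbm, min_dbm), max_dbm)

power = 23.0
for cbr in [0.4, 0.5, 0.7, 0.8, 0.75, 0.55]:      # sensed channel busy ratios per interval
    power = adjust_beacon_power(power, cbr)
    print(f"CBR {cbr:.2f} -> next beacon power {power:.1f} dBm")
```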

Journal ArticleDOI
TL;DR: This study proposed a method based on Taguchi’s robust design method for self-tuning of an autonomous underwater vehicle controller that has very good tracking performance and robustness even in the presence of disturbances and offers a chance to extend the same technique to the three dimensional vehicle tracking control.
Abstract: Problem statement: Conventional Proportional-Integral-Derivative (PID) controllers exhibit moderately good performance once the PID gains are properly tuned. However, when the dynamic characteristics of the system are time dependent or the operating conditions of the system vary, it is necessary to retune the gains to obtain desired performance. This situation has renewed the interest of researchers and practitioners in PID control. Self-tuning of PID controllers has emerged as a new and active area of research with the advent and easy availability of algorithms and computers. This study discusses a self-tuning (auto-tuning) algorithm for control of autonomous underwater vehicles. Approach: A self-tuning mechanism avoids time consuming manual tuning of controllers and promises better results by providing optimal PID controller settings as the system dynamics or operating points change. Most of the self-tuning methods available in the literature were based on frequency response characteristics and search methods. In this study, we proposed a method based on Taguchi's robust design method for self-tuning of an autonomous underwater vehicle controller. The algorithm, based on this method, tuned the controller gains optimally and robustly in real time with less computational effort by using desired and actual state variables. It can be used for Single-Input Single-Output (SISO) systems as well as Multi-Input Multi-Output (MIMO) systems without mathematical models of plants. Results: A simulation study of the AUV control on the horizontal plane (yaw plane control) was used to demonstrate and validate the performance and effectiveness of the proposed scheme. Simulation results of the proposed self-tuning scheme were compared with conventional PID controllers tuned by the Ziegler-Nichols (ZN) and Taguchi tuning methods. These results showed that the Integral Square Error (ISE) is significantly reduced compared to the conventional controllers. The robustness of this proposed self-tuning method was verified and results are presented through numerical simulations using an experimental underwater vehicle model under different working conditions. Conclusion/Recommendations: By using this scheme, the PID controller gains are optimally adjusted automatically online with respect to the system dynamics or operating condition changes. This technique was found to be more effective than conventional tuning methods and is especially convenient when mathematical models of plants are not available. Computer simulations showed that the proposed method has very good tracking performance and robustness even in the presence of disturbances. The simple structure, robustness and ease of computation of the proposed method make it very attractive for real-time implementation for controlling an underwater vehicle, and it offers a chance to extend the same technique to three-dimensional vehicle tracking control as well.
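A skeleton, not the paper's controller: a discrete PID yaw controller with a hook where the online tuner (the paper uses a Taguchi-style robust-design search) would update the gains from desired versus actual state. The first-order yaw dynamics and the gains are illustrative.

```python
# Discrete PID yaw control with a placeholder hook for an online (self-tuning) gain update.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def retune(pid, error_history):
    """Placeholder for the self-tuning step; the paper evaluates candidate gain sets
    with an orthogonal-array (Taguchi) experiment and keeps the most robust one."""
    pass

dt, yaw, yaw_rate, desired = 0.1, 0.0, 0.0, 1.0            # radians
pid, errors, ise = PID(kp=4.0, ki=0.5, kd=1.5, dt=dt), [], 0.0
for _ in range(300):
    error = desired - yaw
    rudder = pid.update(error)
    yaw_rate += dt * (-0.8 * yaw_rate + 0.5 * rudder)      # crude first-order yaw dynamics
    yaw += dt * yaw_rate
    errors.append(error)
    ise += error ** 2 * dt                                  # Integral Square Error criterion
    if len(errors) % 100 == 0:
        retune(pid, errors)                                 # would adjust kp/ki/kd online
print(f"final yaw {yaw:.3f} rad, ISE {ise:.3f}")
```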

Journal ArticleDOI
TL;DR: The key values of Interoperability, durability, compatibility, manageability, dynamic reusability and accessibility in the proposed architecture enhance the future e-Learning systems to communicate more efficiently and share data more easily.
Abstract: Problem statement: Service Oriented Architecture (SOA) defines how to integrate widely disparate applications for a world that is Web based and uses multiple implementation platforms. Using SOA, one can build durable e-learning content regardless of changes or evolutions in technology. This means that new content can be added to existing content without costly redesign, reconfiguration or recoding. Approach: In this study an e-Learning management system with a Web services oriented framework was proposed. The system will be an open source application with a client-scripting facility. It also supports cross-browser use and is fully integrated with different databases: MS SQL Server, MS Access, Oracle and LDAP. Results: The key values of interoperability, durability, compatibility, manageability, dynamic reusability and accessibility in the proposed architecture enable future e-Learning systems to communicate more efficiently and share data more easily. Conclusion/Recommendations: The Web services architecture will provide a standards-based platform for Service-Oriented Computing. It defines itself as a set of specifications that support an open XML-based platform for description, discovery and interoperability of distributed, heterogeneous applications as services.

Journal ArticleDOI
TL;DR: A semantic clustering and feature selection method was proposed to improve the clustering and feature selection mechanism with semantic relations of the text documents, and was designed to identify the semantic relations using an ontology.
Abstract: Problem statement: Text documents are unstructured databases that contain raw data collections. Clustering techniques are used to group text documents with reference to their similarity. Approach: Feature selection techniques were used to improve the efficiency and accuracy of the clustering process. Feature selection was done by eliminating redundant and irrelevant items from the text document contents. Statistical methods were used in the text clustering and feature selection algorithm. The cube size is very high and the accuracy is low in the term-based text clustering and feature selection method. A semantic clustering and feature selection method was proposed to improve the clustering and feature selection mechanism with semantic relations of the text documents. The proposed system was designed to identify semantic relations using an ontology. The ontology was used to represent the term and concept relationship. Results: The synonym, meronym and hypernym relationships were represented in the ontology. Concept weights were estimated with reference to the ontology and used for the clustering process. The system was implemented with two methods: term clustering with feature selection and semantic clustering with feature selection. Conclusion: The performance analysis was carried out with the term clustering and semantic clustering methods. The accuracy and efficiency factors were analyzed in the performance analysis.
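A rough sketch of ontology-backed concept weighting with WordNet (via NLTK) as the ontology: terms are mapped to a synset, expanded through synonym, hypernym and meronym relations, and term frequencies are aggregated into concept weights that a clustering step would then consume. The relation weights (0.5, 0.25) are illustrative.

```python
# Concept weighting from term frequencies using WordNet synonym/hypernym/meronym relations.
from collections import Counter
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def concept_weights(tokens):
    weights = Counter()
    tf = Counter(tokens)
    for term, count in tf.items():
        synsets = wn.synsets(term, pos=wn.NOUN)
        if not synsets:
            weights[term] += count                    # no concept found: keep the raw term
            continue
        concept = synsets[0]
        weights[concept.name()] += count              # synonym terms collapse onto one synset
        for hyper in concept.hypernyms():             # hypernym relation
            weights[hyper.name()] += 0.5 * count
        for mero in concept.part_meronyms():          # meronym relation
            weights[mero.name()] += 0.25 * count
    return weights

print(concept_weights("car engine wheel automobile road".split()))
```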

Journal ArticleDOI
TL;DR: The results showed that the approach of components neighbors-scan for connected component labeling promoted speed, accuracy and simplicity, and the approach has a good performance in terms of accuracy, the time consumed and the simplicity.
Abstract: Problem statement: Many approaches have been proposed previously, such as the classic sequential connected components labeling algorithm, which relies on two subsequent raster scans of a binary image. This method produced good performance in terms of accuracy, but because the implementation of image processing systems now requires faster processing, the speed of this technique has become an important issue. Approach: A computational approach, called the components neighbors-scan labeling algorithm for connected component labeling, was presented in this study. This algorithm requires scanning through an image only once to label connected components. The algorithm starts by scanning from the head of the component's group, before tracing all the component's neighbors by using the main component's information. This algorithm has desirable characteristics: it is simple while promoting accuracy and low time consumption. By using a table of components, this approach also provides other advantages, such as information for the next higher-level process. Results: The approach was tested with a collection of binary images. In practically all cases, the technique successfully gave the desired result. On average, the results show the algorithm increased the speed by around 67.4% compared with the two-pass scanning method. Conclusion: In comparison with the previous method, the approach of components neighbors-scan for connected component labeling promoted speed, accuracy and simplicity. The results showed that the approach has a good performance in terms of accuracy, the time consumed and the simplicity of the algorithm.
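A compact sketch of the single-scan idea described above: raster-scan for an unlabeled foreground pixel (the component's head), then label the whole component by visiting its neighbours from a queue, so no second raster pass is needed. 8-connectivity is assumed; this is an illustration, not the authors' implementation.

```python
# Single-scan connected component labeling: find a head pixel, then trace its neighbours.
from collections import deque
import numpy as np

def label_components(binary):
    labels = np.zeros(binary.shape, dtype=np.int32)
    h, w = binary.shape
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                current += 1                      # found the head of a new component
                labels[y, x] = current
                queue = deque([(y, x)])
                while queue:                      # trace the component's neighbours
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                                labels[ny, nx] = current
                                queue.append((ny, nx))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=bool)
labels, n = label_components(img)
print(n, "components\n", labels)
```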

Journal ArticleDOI
TL;DR: The aim of this study was to compute disaggregate performance measures of universities in Sistan and Baluchestan state (in Iran) in the period 2004-2009, and the findings indicated that the average technical efficiency in academic year 2008-2009 increased by about 15%.
Abstract: Problem statement: The aim of this study was to compute disaggregate performance measures of universities. The traditional models for Data Envelopment Analysis (DEA) type performance measurement are based on thinking about production as a “black box”: inputs are transformed in this box into outputs. One of the drawbacks of these models is the neglect of linking activities. Approach: Network DEA models generally consider processes which represent the main components of the system being studied. Most often the processes are executed in parallel and/or in series. Results: With the network DEA approach, we estimated the efficiency, the impact of each variable on the efficiency and the productivity changes of the universities in Sistan and Baluchestan state (in Iran) in the period 2004-2009; the findings indicated that the average technical efficiency in academic year 2008-2009 increased by about 15%. Conclusion: Network Malmquist indexes showed that the universities have, on average, a 1.1% productivity gain. The main factor of the productivity increase is the progress in technical change.

Journal ArticleDOI
TL;DR: The proposed adaptive thresholding method succeeded in providing improved denoising performance to recover the shape of edges and important detailed components, and proved that the proposed method can obtain a better image estimate than the wavelet based restoration methods.
Abstract: Problem statement: This study introduced an adaptive thresholding method for removing additive white Gaussian noise from digital images. Approach: The curvelet transform employed in the proposed scheme provides a sparse decomposition as compared to wavelet transform methods, which, being non-geometrical, lack sparsity and fail to show an optimal rate of convergence. Results: The different behaviors of the curvelet transform maxima of image and noise across different scales allow us to design the threshold operator adaptively. Multiple thresholds depending on the scale and noise variance are calculated to locally suppress the curvelet transform coefficients, so that the level of the threshold is different at every scale. Conclusion/Recommendations: The proposed algorithm succeeded in providing improved denoising performance to recover the shape of edges and important detailed components. Simulation results proved that the proposed method can obtain a better image estimate than the wavelet based restoration methods.
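Because curvelet transforms are not available in a standard Python package, the hedged sketch below demonstrates the same scale-adaptive soft-thresholding mechanism on a wavelet decomposition (PyWavelets): the threshold depends on the estimated noise level and is largest at the finest scale. It illustrates the mechanism only; the paper's results rely on the curvelet transform.

```python
# Scale-dependent soft thresholding of a multiscale decomposition (wavelets as a stand-in).
import numpy as np
import pywt

def denoise(noisy, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                                   # keep the coarse approximation untouched
    for j, detail in enumerate(coeffs[1:], start=1):    # j = 1 is the coarsest detail level
        t = sigma * np.sqrt(2 * np.log(noisy.size)) / (2 ** (levels - j))  # scale-dependent threshold
        out.append(tuple(pywt.threshold(d, t, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)[:noisy.shape[0], :noisy.shape[1]]

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + rng.normal(0, 0.1, clean.shape)
restored = denoise(noisy)
print("noisy MSE:", np.mean((noisy - clean) ** 2),
      "restored MSE:", np.mean((restored - clean) ** 2))
```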