
Showing papers in "The International Arab Journal of Information Technology in 2013"


Journal Article•
TL;DR: Concluding remarks about the testing criteria set by different authors in the literature reveal that the kernel (K) series of PCA produced better results than simple PCA and 2DPCA on the aforementioned datasets.
Abstract: Face recognition is considered to be one of the most reliable biometrics when security issues are taken into concern. For this, feature extraction becomes a critical problem. Different methods are used for the extraction of facial features, which are broadly classified into linear and nonlinear subspaces. Among the linear methods are Linear Discriminant Analysis (LDA), Bayesian methods (MAP and ML), Discriminative Common Vectors (DCV), Independent Component Analysis (ICA), Tensor faces (Multi-Linear Singular Value Decomposition (SVD)), Two Dimensional PCA (2D-PCA), Two Dimensional LDA (2D-LDA), etc., but Principal Component Analysis (PCA) is considered to be one of the classic methods in this field. Based on this, a brief comparison of the PCA family is drawn, of which PCA, Kernel PCA (KPCA), Two Dimensional PCA (2DPCA) and Two Dimensional Kernel PCA (2DKPCA) are of major concern. Based on the literature review, the recognition performance of the PCA family is analyzed using the YALE, YALE-B, ORL and CMU databases. Concluding remarks about the testing criteria set by different authors in the literature reveal that the kernel series of PCA produced better results than simple PCA and 2DPCA on the aforementioned datasets.
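As a concrete point of reference, a minimal eigenfaces-style baseline is sketched below in Python, assuming flattened face images and a 1-nearest-neighbour matcher; the component count, train/test split and classifier are illustrative assumptions rather than the protocol of any surveyed paper.

```python
# Minimal PCA/KPCA face-recognition sketch (illustrative, not the surveyed papers' setup).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def pca_face_recognition(X, y, n_components=50, kernel=None):
    """X: (n_samples, n_pixels) flattened face images; y: identity labels."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    if kernel is None:
        model = PCA(n_components=n_components)                       # plain PCA (eigenfaces)
    else:
        model = KernelPCA(n_components=n_components, kernel=kernel)  # KPCA variant
    Ztr = model.fit_transform(Xtr)
    Zte = model.transform(Xte)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Ztr, ytr)          # nearest-neighbour matching
    return clf.score(Zte, yte)                                       # recognition accuracy
```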

65 citations


Journal Article•
TL;DR: A combination of DCT and fractal image compression techniques is proposed: DCT is employed to compress the color image, while fractal image compression is employed to avoid the repetitive compression of analogous blocks.
Abstract: Digital images are often used in several domains. Large amount of data is necessary to represent the digital images so the transmission and storage of such images are time-consuming and infeasible. Hence the information in the images is compressed by extracting only the visible elements. Normally the image compression technique can reduce the storage and transmission costs. During image compression, the size of a graphics file is reduced in bytes without disturbing the quality of the image beyond an acceptable level. Several methods such as Discrete Cosine Transform (DCT), DWT, etc. are used for compressing the images. But, these methods contain some blocking artifacts. In order to overcome this difficulty and to compress the image efficiently, a combination of DCT and fractal image compression techniques is proposed. DCT is employed to compress the color image while the fractal image compression is employed to evade the repetitive compressions of analogous blocks. Analogous blocks are found by using the Euclidean distance measure. Here, the given image is encoded by means of Huffman encoding technique. The implementation result shows the effectiveness of the proposed scheme in compressing the color image. Also a comparative analysis is performed to prove that our system is competent to compress the images in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Universal Image Quality Index (UIQI) measurements.
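To make the two ingredients concrete, the sketch below shows block DCT coding plus Euclidean-distance grouping of analogous 8x8 blocks so that only one representative per group needs to be coded; the block size and distance threshold are assumptions, not the authors' parameters, and the Huffman stage is omitted.

```python
# Illustrative DCT-plus-block-matching sketch (not the paper's full codec).
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def iter_blocks(img, size=8):
    h, w = img.shape
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            yield (i, j), img[i:i + size, j:j + size]

def group_similar_blocks(gray, threshold=100.0):
    """Map each block either to an already-stored representative DCT block
    (if within a Euclidean distance threshold) or to a new representative."""
    stored, index = [], {}
    for pos, b in iter_blocks(gray.astype(float)):
        c = dct2(b).ravel()
        for k, ref in enumerate(stored):
            if np.linalg.norm(c - ref) < threshold:   # analogous block found
                index[pos] = k
                break
        else:
            index[pos] = len(stored)                  # new representative block
            stored.append(c)
    return stored, index
```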

54 citations


Journal Article•
TL;DR: The results significantly demonstrate the suitability of the proposed features and classification using MLP and SFAM networks for classifying the acute leukaemia cells in blood sample.
Abstract: Leukaemia is a cancer of the blood that causes more deaths than any other cancer among children and young adults under the age of 20. This disease can be cured if it is detected and treated at an early stage. Based on this argument, the need for fast analysis of blood cells for leukaemia is of paramount importance in the healthcare industry. This paper presents the classification of white blood cells (WBC) inside Acute Lymphoblastic Leukaemia (ALL) and Acute Myelogenous Leukaemia blood samples by using the Multilayer Perceptron (MLP) and Simplified Fuzzy ARTMAP (SFAM) neural networks. Here, the WBCs are classified as lymphoblast, myeloblast and normal cells for the purpose of categorizing acute leukaemia types. Two different training algorithms, namely the Levenberg-Marquardt and Bayesian Regulation algorithms, have been employed to train the MLP network. A total of 42 input features, consisting of size, shape and colour based features, have been extracted from the segmented WBCs and used as the neural network inputs for the classification process. The classification results indicate that all networks produced good classification performance for the overall proposed features. However, the MLP network trained by the Bayesian Regulation algorithm produced the best classification performance, with a testing accuracy of 95.70% for the overall proposed features. Thus, the results significantly demonstrate the suitability of the proposed features and classification using MLP and SFAM networks for classifying acute leukaemia cells in blood samples.
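A minimal sketch of the classification stage is given below, assuming a feature matrix of the 42 extracted descriptors per cell; note that scikit-learn does not provide Levenberg-Marquardt or Bayesian Regulation training, so the standard 'adam' optimiser stands in purely for illustration.

```python
# Illustrative MLP classifier over the 42 cell features (not the authors' exact training setup).
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def classify_wbc(features, labels):
    """features: (n_cells, 42) size/shape/colour descriptors;
    labels: 'lymphoblast' | 'myeloblast' | 'normal'."""
    Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
    clf.fit(Xtr, ytr)
    return clf.score(Xte, yte)   # testing accuracy
```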

41 citations


Journal Article•
TL;DR: This paper is based on a complete survey of face recognition conducted under varying facial expressions and evaluates various existing algorithms while comparing their results in general.
Abstract: Automatic face recognition is one of the most emphasized dilemmas in diverse areas of potential relevance, such as surveillance systems, security systems, and the authentication or verification of individuals such as criminals. Adding dynamic expression to a face causes a broad range of discrepancies in recognition systems. Facial expression not only exposes the sensation or passion of a person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscle-based approaches have been used to handle the facial expression and recognition problem. The analysis has been completed by evaluating various existing algorithms and comparing their results in general. It also expands the scope for other researchers to answer the question of how to deal effectively with such problems.

36 citations


Journal Article•
TL;DR: Based on an experimental comparison of retrieval accuracy, the Laplacian filter with sharpened images gives better performance in the retrieval of JPEG images than the median filter in the DCT frequency domain.
Abstract: An effective Content-Based Image Retrieval (CBIR) system is based on efficient feature extraction and accurate retrieval of similar images. Enhancing images with proper filter methods can also play an important role in image retrieval in a compressed frequency domain, since currently most images are represented in compressed format using Discrete Cosine Transform (DCT) block transformation. In compression, some crucial information is lost and only perceptual information is left, which carries the significant energy used for retrieval in the compressed domain. In this paper, statistical texture features are extracted from the enhanced images in the DCT domain using only the DC and first three AC coefficients of the DCT blocks of the image, which carry the most significant information. We study the effect of filters on image retrieval using texture features. We perform an experimental comparison of the results in terms of accuracy on the basis of median, median with edge extraction and Laplacian filters using quantized histogram texture features in the DCT domain. Experiments on the Corel database using the proposed approach give improved results on the basis of filters; more specifically, the Laplacian filter with sharpened images gives good performance in the retrieval of JPEG format images as compared to the median filter in the DCT frequency domain.
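The feature step can be sketched as follows: for each 8x8 DCT block keep only the DC coefficient and the first three zig-zag AC coefficients, then build quantized histograms over them as texture features; the bin count here is an assumption, not the paper's setting.

```python
# Illustrative DC + first-three-AC texture features from 8x8 DCT blocks.
import numpy as np
from scipy.fftpack import dct

def block_dct_features(gray, block=8, bins=16):
    h, w = gray.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = gray[i:i + block, j:j + block].astype(float)
            d = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            # DC and the first three AC coefficients in zig-zag order
            coeffs.append([d[0, 0], d[0, 1], d[1, 0], d[2, 0]])
    coeffs = np.array(coeffs)
    # one quantized histogram per retained coefficient, concatenated
    feats = [np.histogram(coeffs[:, k], bins=bins)[0] for k in range(4)]
    return np.concatenate(feats).astype(float)
```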

36 citations


Journal Article•
TL;DR: A GIS-based multicriteria evaluation site selection tool is developed in ArcGIS 9.3 using COM technology to achieve software interoperability and a comprehensive discussion of the site selection process and characteristics is presented.
Abstract: Site selection is a complex process for owners and analysts. This process involves not only technical requirements, but also economical, social, environmental and political demands that may result in conflicting objectives. Site selection is the process of finding locations that meet desired conditions set by the selection criteria. Geographic Information Systems (GIS) and Multi Criteria Evaluation techniques (MCE) are the two common tools employed to solve these problems. However, each suffers from serious shortcomings and could not be used alone to reach an optimum solution. This poses the challenge of integrating these tools. Developing and using GIS-based MCE tools for site selection is a complex process that needs well trained GIS developers and analysts who are not often available in most organizations. In this paper, a GIS-based multicriteria evaluation site selection tool is developed in ArcGIS 9.3 using COM technology to achieve software interoperability. This tool can be used by engineers and planners with different levels of GIS and MCE knowledge to solve site selection problems. A typical case study is presented to demonstrate the application of the proposed tool. In addition, the paper presents a comprehensive discussion of the site selection process and characteristics.
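The ArcGIS/COM tool itself cannot be reproduced in a few lines, but the core multi-criteria evaluation step it wraps is essentially a weighted overlay, sketched generically below; the criterion rasters, weights and constraint mask are caller-supplied assumptions.

```python
# Generic weighted-overlay MCE sketch (not the ArcGIS 9.3/COM implementation).
import numpy as np

def mce_suitability(criteria, weights, constraints=None):
    """criteria: list of equally-shaped rasters scaled to [0, 1];
    weights: one weight per criterion; constraints: optional boolean mask."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # normalise weights to sum to 1
    score = sum(w * c for w, c in zip(weights, criteria))
    if constraints is not None:
        score = np.where(constraints, score, 0.0)     # exclude inadmissible cells
    return score                                      # higher value = more suitable site
```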

35 citations


Journal Article•
TL;DR: A new gender classification method that considers three features is proposed; it uses fuzzy logic and a neural network to identify the gender of the speaker.
Abstract: Nowadays, gender classification is one of the most important processes in speech processing. Usually, gender classification is based on pitch as the feature: the pitch value of a female speaker is higher than that of a male. Most recent research works perform gender classification using this condition. In some cases, however, the pitch of a male speaker is high or the pitch of a female speaker is low, and the classification then does not produce the required result. Considering this problem, we propose a new gender classification method that considers three features. The new method uses fuzzy logic and a neural network to identify the gender of the speaker. To train the fuzzy logic and neural network, a training dataset is generated using the above three features. Then the mean value of the results obtained from the fuzzy logic and neural network is calculated. Using this threshold value, the proposed method identifies the gender of the speaker. The implementation results show the performance of the proposed technique in gender classification.
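For orientation, the pitch-only baseline that the abstract argues is insufficient can be sketched as below (autocorrelation pitch estimate plus a fixed threshold); the fuzzy-logic/neural-network fusion of the three features is not reproduced here, and the 165 Hz threshold is an illustrative assumption.

```python
# Naive pitch-threshold baseline; the paper's point is that this rule alone can fail.
import numpy as np

def estimate_pitch(frame, sr, fmin=60, fmax=400):
    """Crude autocorrelation pitch estimate; frame must hold at least sr/fmin samples."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag                                    # fundamental frequency in Hz

def classify_gender(pitch_hz, threshold=165.0):
    # Fails for low-pitched female or high-pitched male voices, motivating extra features.
    return 'female' if pitch_hz > threshold else 'male'
```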

31 citations


Journal Article•
TL;DR: This work proposes a certificateless DVS scheme without bilinear pairings, which greatly reduces the signature running time and is more practical than previous related schemes for practical applications.
Abstract: To solve the key escrow problem in identity-based cryptosystems, Al-Riyami et al. introduced CertificateLess Public Key Cryptography (CL-PKC). As an important cryptographic primitive, the CertificateLess Designated Verifier Signature (CLDVS) scheme has been studied widely. Following the work of Al-Riyami et al., many certificateless Designated Verifier Signature (DVS) schemes using bilinear pairings have been proposed. But the relative computation cost of the pairing is approximately twenty times higher than that of scalar multiplication over an elliptic curve group. In order to improve the performance, we propose a certificateless DVS scheme without bilinear pairings. With the running time of the signature being greatly reduced, our scheme is more practical than the previous related schemes for practical applications.

30 citations


Journal Article•
TL;DR: This work clusters high-dimensional datasets using the Constraint-Partitioning K-Means clustering algorithm, which on its own does not cluster high-dimensional data effectively or efficiently and produces indefinite and inaccurate clusters, and therefore combines it with dimensionality reduction.
Abstract: With the ever-increasing size of data, clustering of large dimensional databases poses a demanding task that should satisfy both the requirements of computation efficiency and result quality. In order to achieve both, clustering of the feature space rather than the original data space has received importance among data mining researchers. Clustering a high-dimensional dataset directly with the Constraint-Partitioning K-Means clustering algorithm does not work well in terms of effectiveness and efficiency, because of the intrinsic sparsity of high-dimensional data, and results in indefinite and inaccurate clusters. Hence, we carry out two steps for clustering a high-dimensional dataset. Initially, we perform dimensionality reduction on the high-dimensional dataset using Principal Component Analysis as a preprocessing step for data clustering. Later, we apply the Constraint-Partitioning K-Means clustering algorithm to the dimension-reduced dataset to produce good and accurate clusters. The performance of the approach is evaluated with high-dimensional datasets such as the Parkinson's dataset and the Ionosphere dataset. The experimental results show that the proposed approach is very effective in producing accurate and precise clusters.
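The two-step pipeline can be sketched as below: PCA for dimensionality reduction followed by K-Means on the reduced data; the constraint-handling part of Constraint-Partitioning K-Means is omitted, so plain KMeans stands in purely for illustration.

```python
# Illustrative PCA-then-cluster pipeline (plain KMeans stands in for CP K-Means).
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def reduce_then_cluster(X, n_components=5, n_clusters=2):
    pipeline = make_pipeline(StandardScaler(),
                             PCA(n_components=n_components),   # dimensionality reduction
                             KMeans(n_clusters=n_clusters, n_init=10, random_state=0))
    return pipeline.fit_predict(X)                              # cluster labels
```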

29 citations


Journal Article•
TL;DR: An Electronic voting (E-voting) system is a voting system in which the election data is recorded, stored and processed primarily as digital information.
Abstract: An Electronic voting (E-voting) system is a voting system in which the election data is recorded, stored and processed primarily as digital information. E-voting may become the quickest, cheapest, and most efficient way to administer elections and count votes, since it consists only of simple processes and procedures and requires few workers.

28 citations


Journal Article•
TL;DR: A novel approach is introduced in the paper for the efficient mapping of DAG-based applications that takes into account the lower and upper bounds for the start times of the tasks.
Abstract: Today's multi-computer systems are heterogeneous in nature, i.e., the machines they are composed of have varying processing capabilities and are interconnected through high speed networks, thus making them suitable for performing diverse sets of computing-intensive applications. In order to exploit the high performance of such a distributed system, efficient mapping of the tasks onto the available machines is necessary. This is an active research topic, and different strategies have been adopted in the literature for the mapping problem. A novel approach is introduced in this paper for the efficient mapping of DAG-based applications, which takes into account the lower and upper bounds for the start times of the tasks. The algorithm is based on the list scheduling approach and has been compared with well known list scheduling algorithms existing in the literature. The comparison results for randomly synthesized graphs as well as graphs from the real world elucidate that the proposed algorithm significantly outperforms the existing ones on the basis of different cost and performance metrics.
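For context, a generic earliest-finish-time list scheduler of the kind the paper compares against is sketched below; the proposed bound-based task priorities are not reproduced, and the average-cost priority used here is only an assumption.

```python
# Generic list-scheduling sketch for a DAG on heterogeneous machines (illustrative baseline).
def list_schedule(tasks, deps, cost, n_machines):
    """tasks: task ids; deps: {task: set of predecessor tasks};
    cost: {(task, machine): execution time}."""
    ready_time = {m: 0.0 for m in range(n_machines)}   # when each machine becomes free
    finish, done = {}, set()
    preds = {t: set(deps.get(t, ())) for t in tasks}
    while len(done) < len(preds):
        # tasks whose predecessors have all finished
        ready = [t for t, p in preds.items() if t not in done and p <= done]
        # simple priority: largest average execution cost first
        ready.sort(key=lambda t: -sum(cost[(t, m)] for m in range(n_machines)))
        t = ready[0]
        est = max((finish[p] for p in preds[t]), default=0.0)
        # assign to the machine giving the earliest finish time
        m = min(range(n_machines),
                key=lambda mm: max(est, ready_time[mm]) + cost[(t, mm)])
        start = max(est, ready_time[m])
        finish[t] = ready_time[m] = start + cost[(t, m)]
        done.add(t)
    return finish                                      # makespan = max(finish.values())
```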

Journal Article•
TL;DR: A new decision making approach for solving GIS software selection problem was proposed by integrating expert systems and multi-criteria decision making techniques and a prototype knowledge-based system was developed.
Abstract: Building a new GIS project is a major investment. Choosing the right GIS software package is critical to the success or failure of such an investment. Because of the complexity of the problem, a number of decision making tools must be deployed to arrive at the proper solution. In this study, a new decision making approach for solving the GIS software selection problem was proposed by integrating expert systems and multi-criteria decision making techniques. To implement the proposed decision-making approach, a prototype knowledge-based system was developed in which expert systems and the Analytic Hierarchy Process (AHP) are successfully integrated using the Component Object Model (COM) technology. A typical case study is also presented to demonstrate the application of the prototype system.
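The AHP side of the integration can be made concrete with the standard principal-eigenvector priority computation and consistency check sketched below; the expert-system rules and the COM glue code are outside the scope of this snippet.

```python
# Standard AHP priority vector + consistency ratio (illustrative; no expert-system/COM part).
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """pairwise: reciprocal n x n comparison matrix on the Saaty 1-9 scale (n >= 3)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                        # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                    # priority vector
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)               # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n, 0) > 0 else 0.0
    return w, cr                                       # CR < 0.1 is conventionally acceptable
```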

Journal Article•
TL;DR: The goal of this work is to design a framework to extract several objects of interest simultaneously from Computed Tomography (CT) images by using a priori knowledge and the properties of agents in a multi-agent environment.
Abstract: Image segmentation techniques have been an invaluable task in many domains such as quantification of tissue volumes, medical diagnosis, anatomical structure study, treatment planning, etc. Image segmentation is still a debatable problem due to some issues. Firstly, most image segmentation solutions are problem-based. Secondly, medical image segmentation methods generally have restrictions because medical images have very similar gray levels and textures among the objects of interest. The goal of this work is to design a framework to extract several objects of interest simultaneously from Computed Tomography (CT) images by using a priori knowledge. Our method uses the properties of agents in a multi-agent environment. The input image is divided into several sub-images, and each local agent works on a sub-image and tries to mark each pixel as belonging to a specific region by means of the given a priori knowledge. During this time, the local agent marks each cell of the sub-image individually. A moderator agent checks the outcome of all agents' work to produce the final segmented image. The experimental results for CT images demonstrate a segmentation accuracy of around 91% and a processing time of 7 seconds.

Journal Article•
TL;DR: This study designs an IDS using the eXtended Classifier System (XCS) with an internal modification of the classifier generator to gain a better Detection Rate (DR) and achieves a promising performance compared with other systems for detecting intrusions.
Abstract: Due to increasing incidents of cyber attacks, building effective Intrusion Detection Systems (IDS) is essential for protecting information systems security, and yet it remains an elusive goal and a great challenge. Current IDS examine all data features to detect intrusion or misuse patterns. Some of the features may be redundant or of low importance during the detection process. The purpose of this study is to identify important input features in building an IDS to gain a better Detection Rate (DR). To that end, two stages are proposed for designing the intrusion detection system. In the first stage, we propose a filtering process that combines the best set of features for each type of network attack, implemented using an Artificial Neural Network (ANN). Next, we design an IDS using the eXtended Classifier System (XCS) with an internal modification of the classifier generator to gain a better DR. In the experiments, we choose the KDD '99 dataset to train and examine the proposed work. The experimental results show that XCS with its modifications achieves a promising performance compared with other systems for detecting intrusions.
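The two-stage idea can be sketched generically as feature scoring followed by a detector trained on the reduced feature set; mutual information and a random forest stand in here for the paper's ANN-based filter and XCS classifier, which are not reproduced.

```python
# Illustrative feature-selection + detection pipeline (stand-ins for ANN filtering and XCS).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline

def build_ids(X_train, y_train, k_features=10):
    """X_train: connection records (e.g., KDD'99 features); y_train: attack/normal labels."""
    model = make_pipeline(SelectKBest(mutual_info_classif, k=k_features),
                          RandomForestClassifier(n_estimators=100, random_state=0))
    return model.fit(X_train, y_train)   # model.score(X_test, y_test) approximates the detection rate
```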

Journal Article•
TL;DR: An improved approach for indicating visually salient regions of an image based upon a known visual search task by employing a robust model of instantaneous visual attention combined with a pixel probability map derived from the automatic detection of a previously-seen object.
Abstract: This paper presents an improved approach for indicating visually salient regions of an image based upon a known visual search task. The proposed approach employs a robust model of instantaneous visual attention (i.e., "bottom-up") combined with a pixel probability map derived from the automatic detection of a previously-seen object (task-dependent, i.e., "top-down"). The objects to be recognized are parameterized quickly in advance by a viewpoint-invariant spatial distribution of Speeded Up Robust Features (SURF) interest points. The bottom-up and top-down object probability images are fused to produce a task-dependent saliency map. The proposed approach is validated using observer eye-tracker data collected under object search-and-count tasking. The proposed approach shows 13% higher overlap with true attention areas under task compared to bottom-up saliency alone. The new combined saliency map is further used to develop a new intelligent compression technique which is an extension of Discrete Cosine Transform (DCT) encoding. The proposed approach is demonstrated on surveillance-style footage throughout.
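Only the fusion step is sketched below: both maps are normalised to [0, 1] and linearly blended into a task-dependent saliency map; the blending weight is an assumption, and the SURF-based object detection and eye-tracker validation are omitted.

```python
# Illustrative bottom-up / top-down saliency fusion (weighting is an assumption).
import numpy as np

def fuse_saliency(bottom_up, top_down, alpha=0.5):
    """bottom_up, top_down: 2-D arrays of identical shape."""
    def norm01(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return alpha * norm01(bottom_up) + (1 - alpha) * norm01(top_down)
```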

Journal Article•
TL;DR: This study has led to the modification of the current LSB substitution algorithm by delivering a new algorithm namely sequential colour cycle to help in improving the payload of the secret data at the same time retaining the quality of the stego-image produced within an acceptance threshold.
Abstract: Several problems arise among the existing LSB-based image steganographic schemes due to distortion in the stego-image and limited payload capacity. Thus, a scheme has been developed with the aim of improving the payload of the secret data while retaining the quality of the produced stego-image within an acceptable threshold. This study has led to the modification of the current LSB substitution algorithm by delivering a new algorithm, namely the sequential colour cycle. For achieving higher security, multi-layered steganography can be performed by embedding the secret data into multiple layers of cover-images. The performance evaluation has shown that an embedding ratio of 1:2 can be achieved with the proposed algorithm while the image quality does not fall below the distortion threshold.
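A plain one-bit-per-channel LSB embedding over a fixed R, G, B cycle is sketched below to illustrate the substitution idea; the paper's sequential colour cycle ordering, multi-layer embedding and capacity figures are not reproduced.

```python
# Plain LSB substitution sketch (not the proposed sequential-colour-cycle algorithm itself).
import numpy as np

def embed_lsb(cover, bits):
    """cover: uint8 array of shape (H, W, 3); bits: iterable of 0/1 message bits."""
    stego = cover.copy()
    flat = stego.reshape(-1, 3)                 # view into the copy
    it = iter(bits)
    for px in range(flat.shape[0]):
        for ch in range(3):                     # R -> G -> B cycle per pixel
            b = next(it, None)
            if b is None:
                return stego                    # message exhausted
            flat[px, ch] = (int(flat[px, ch]) & 0xFE) | (b & 1)
    return stego
```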

Journal Article•
TL;DR: Two enhancements to the earlier work are proposed by adding two more features to the existing one: a discounting approach is introduced to form a summary with less redundancy among sentences, and a position weight mechanism is adopted to preserve the importance of sentences based on the position they occupy.
Abstract: Summarizing documents catering to the needs of a user is tricky and challenging. Though there is a variety of approaches, graphical methods have been quite popularly investigated for summarizing document contents. This paper focuses its attention on two graphical methods, namely LexRank (threshold) and LexRank (continuous), proposed by Erkan and Radev. This paper proposes two enhancements to the above work by adding two more features to the existing one. Firstly, a discounting approach is introduced to form a summary which ensures less redundancy among sentences. Secondly, a position weight mechanism is adopted to preserve the importance of sentences based on the position they occupy. Intrinsic evaluation has been done with two data sets. Data set 1 was created manually from newspaper documents collected by us for the experiments. Data set 2 is from the DUC 2002 data, which is commercially available and distributed or accessed through the National Institute of Standards and Technology (NIST). We have shown that the results, based upon precision and recall parameters, were comprehensively better as compared to the earlier algorithms.
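A continuous-LexRank-style ranking with a simple position weight is sketched below to make the enhancements concrete; the exact discounting scheme and weighting constants of the paper are not reproduced, and the values used here are assumptions.

```python
# Illustrative LexRank-style sentence ranking with a position bonus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, damping=0.85, position_bonus=0.1, iters=50):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = (tfidf @ tfidf.T).toarray()                  # cosine similarity (tf-idf rows are L2-normalised)
    np.fill_diagonal(sim, 0.0)
    row_sums = sim.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    P = sim / row_sums                                 # row-stochastic transition matrix
    n = len(sentences)
    score = np.ones(n) / n
    for _ in range(iters):                             # power iteration (PageRank-style)
        score = (1 - damping) / n + damping * (P.T @ score)
    position = 1.0 + position_bonus / (1.0 + np.arange(n))   # earlier sentences get a small boost
    return score * position                            # pick the top-scoring sentences for the summary
```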

Journal Article•
TL;DR: A framework for soccer event detection through collaborative analysis of the textual, visual and aural modalities is presented, demonstrating that despite considering simple features, and by averting the use of labeled training examples, event detection can be achieved at very high accuracy.
Abstract: This paper presents a framework for soccer event detection through collaborative analysis of the textual, visual and aural modalities. The basic notion is to decompose a match video into smaller segments until ultimately the desired eventful segment is identified. Simple features are considered namely the minute-by-minute reports from sports websites (i.e. text), the semantic shot classes of far and closeup-views (i.e. visual), and the low-level features of pitch and log-energy (i.e. audio). The framework demonstrates that despite considering simple features, and by averting the use of labeled training examples, event detection can be achieved at very high accuracy. Experiments conducted on ~30-hours of soccer video show very promising results for the detection of goals, penalties, yellow cards and red cards.

Journal Article•
TL;DR: The proposed method results in more detailed images, giving radiologists additional information about thoracic cage details including the clavicles, ribs and costochondral junctions, and clarifies the vascular impression in the hilar regions of regular X-ray images.
Abstract: The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. This paper presents a novel hybrid method for enhancing digital X-ray radiograph images by seeking optimal combinations of spatial and frequency domain image enhancement. The selected methods from the spatial domain include: negative transform, histogram equalization and power-law transform. Selected enhancement methods from the frequency domain include: Gaussian low and high pass filters and Butterworth low and high pass filters. Over 80 possible combinations have been tested, where some of the combinations have yielded an optimal enhancement compared to the original image, according to radiologist subjective assessments. Medically, the proposed methods have clarified the vascular impression in the hilar regions of regular X-ray images. This can help radiologists in diagnosing vascular pathology, such as pulmonary embolism in the case of a thrombus lodged in the pulmonary trunk, which will appear as a filling defect. The proposed method results in more detailed images, hence giving radiologists additional information about thoracic cage details including the clavicles, ribs, and costochondral junctions.
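One spatial-plus-frequency combination of the kind searched over can be sketched as histogram equalisation followed by a Butterworth high-pass filter; the cutoff and order below are illustrative, not the radiologist-selected optimum reported in the paper.

```python
# One example spatial + frequency enhancement combination (parameters are assumptions).
import numpy as np

def hist_equalize(img):
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    return cdf[img.astype(np.uint8)].astype(np.uint8)

def butterworth_highpass(img, cutoff=30.0, order=2):
    h, w = img.shape
    u = np.arange(h) - h / 2
    v = np.arange(w) - w / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = 1.0 / (1.0 + (cutoff / np.maximum(D, 1e-6)) ** (2 * order))   # Butterworth HPF
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

def enhance(xray):
    return butterworth_highpass(hist_equalize(xray))
```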

Journal Article•
TL;DR: This paper discusses the security of a simple and efficient three-party password-based authenticated key exchange protocol proposed by Huang most recently and proposes an enhanced protocol that can defeat the attacks described and yet is reasonably efficient.
Abstract: This paper discusses the security of a simple and efficient three-party password-based authenticated key exchange protocol proposed by Huang most recently. Our analysis shows her protocol is still vulnerable to three kinds of attacks: 1) undetectable on-line dictionary attacks, 2) key-compromise impersonation attacks. Thereafter we propose an enhanced protocol that can defeat the attacks described and yet is reasonably efficient.

Journal Article•
TL;DR: It is shown in this paper that in addition to the above parameters, the type of the environment has a major effect on the noise rise in the uplink as well as the total power in the downlink.
Abstract: Third generation networks like UMTS offer multimedia applications and services that meet end-to-end quality of service requirements. The load factor in the uplink is critical and is one of the important parameters which has a direct impact on resource management as well as on cell performance. In this paper, the fractional load factor in the uplink and the total downlink power are derived taking into account the multipath propagation in different environments. The analysis is based on changing the parameters that affect the QoS as well as the performance, such as: service activity factor, energy-to-noise ratio Eb/N0, interference factor, and the non-orthogonality factor of the codes. The impact of these parameters on the performance and the capacity as well as the total throughput of the cell is also investigated. It is shown in this paper that, in addition to the above parameters, the type of the environment has a major effect on the noise rise in the uplink as well as the total power in the downlink. The investigation is based on different types of services, i.e., voice (conversational: 12.2 kbit/s) and packet switched services with rates (streaming: 64 and 128 kbit/s) and (interactive: 384 kbit/s). Additionally, the results obtained in this paper are compared with similar results in the literature.
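For reference, analyses of this kind typically start from the textbook WCDMA expressions below for the uplink load factor and the resulting noise rise, where W is the chip rate, R_j and \nu_j are the bit rate and activity factor of user j, and i is the other-to-own-cell interference ratio; the paper's environment-dependent, multipath-aware extensions are not reproduced here.

```latex
\eta_{\mathrm{UL}} = (1+i)\sum_{j=1}^{N}\frac{1}{1+\dfrac{W}{(E_b/N_0)_j\,R_j\,\nu_j}},
\qquad
\text{Noise rise (dB)} = -10\log_{10}\!\left(1-\eta_{\mathrm{UL}}\right)
```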

Journal Article•
TL;DR: A method that focuses on the use of Arabic script in the generation of CAPTCHA, which exploits the limitations of Arabic OCRs in reading Arabic text to protect internet resources in Arabic speaking countries.
Abstract: Bots are programs that crawl through the web site and make auto registrations. CAPTCHAs, using Latin script, are widely used to prevent automated bots from abusing online services on the World Wide Web. However, many of the existing English based CAPTCHAs have some inherent problems and cannot assure the security of these websites. This paper proposes a method that focuses on the use of Arabic script in the generation of CAPTCHA. The proposed scheme uses specific Arabic font types in CAPTCHA generation. Such CAPTCHA exploits the limitations of Arabic OCRs in reading Arabic text. The proposed scheme is beneficial in Arabic speaking countries and is very useful in protecting internet resources. A survey has been conducted to find the usability of the scheme, which was satisfactory. In addition, experiments were carried out to find the robustness of the scheme against OCR. The results were encouraging. Moreover, a comparative study of our CAPTCHA and Persian CAPTCHA scheme shows its advancement over Persian CAPTCHA.

Journal Article•
TL;DR: This paper presents a semantic technique on queries for retrieving more relevant results in CLIR that concentrates on Arabic, Malay or English query translation (a dictionary based method) to retrieve documents according to the query translation.
Abstract: Cross Language Information Retrieval (CLIR) produces highly ambiguous results because of polysemy problems. The semantic approach addresses the polysemy problem, in which the same word may have different meanings according to the context of the sentence. This paper presents a semantic technique on queries for retrieving more relevant results in CLIR that concentrates on Arabic, Malay or English query translation (a dictionary based method) to retrieve documents according to the query translation. Semantic ontology significantly improves and expands the single query itself with more synonyms and related words. The query, however, is meant to retrieve relevant documents across language boundaries. Therefore, this study is conducted with the purpose of investigating the English-Malay-Arabic query translation approach, and vice versa, against keywords and querywords based on total retrieved and relevant documents. Keyword and queryword retrieval are evaluated in the experiments in terms of precision and recall. In order to produce more significant results, the semantic technique is therefore applied to improve the performance of CLIR.
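The query-expansion idea can be illustrated for the English side with WordNet synonyms, as sketched below (requires the NLTK WordNet corpus); the paper's Arabic/Malay dictionaries and its specific ontology are not publicly available and are not reproduced here.

```python
# Illustrative synonym-based query expansion (English/WordNet only; not the paper's ontology).
from nltk.corpus import wordnet   # needs: nltk.download('wordnet')

def expand_query(terms):
    expanded = set(terms)
    for term in terms:
        for syn in wordnet.synsets(term):
            expanded.update(lemma.name().replace('_', ' ') for lemma in syn.lemmas())
    return sorted(expanded)        # expanded term list fed to the retrieval engine
```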

Journal Article•
TL;DR: In this article, the optimisation technique Genetic Algorithm (GA) is utilised, and as a similarity measure the Squared Euclidean Distance (SED) is used for comparing the query image feature set and the database image feature sets.
Abstract: With the aid of image content, relevant images can be extracted from an image collection in a Content Based Image Retrieval (CBIR) system. Concise feature sets limit retrieval efficiency; to overcome this, shape, colour, texture and contourlet features are extracted. For retrieving relevant images, the optimisation technique Genetic Algorithm (GA) is utilised, and as a similarity measure the Squared Euclidean Distance (SED) is utilised for comparing the query image feature set and the database image feature sets. Hence, from the GA based similarity measure, relevant images are retrieved and evaluated by querying different images.
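Only the similarity step is sketched below: the squared Euclidean distance between the query feature set and every database feature set, with the smallest distances retrieved; the GA-based optimisation is omitted.

```python
# Squared-Euclidean-distance retrieval sketch (GA-based feature optimisation omitted).
import numpy as np

def retrieve(query_features, db_features, top_k=10):
    """query_features: (d,); db_features: (n_images, d)."""
    sed = np.sum((db_features - query_features) ** 2, axis=1)   # squared Euclidean distance
    order = np.argsort(sed)                                     # smaller distance = more similar
    return order[:top_k], sed[order[:top_k]]
```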

Journal Article•
TL;DR: A new technique, Bi-Level Weighted Histogram Equalization (BWHE) is proposed in this paper for the purpose of better brightness preservation and contrast enhancement of any input image.
Abstract: A new technique, Bi-Level Weighted Histogram Equalization (BWHE), is proposed in this paper for the purpose of better brightness preservation and contrast enhancement of any input image. This technique applies a bi-level weighting procedure on Brightness preserving Bi-Histogram Equalization (BBHE) to enhance the input images. The core idea of this method is to first segment the histogram of the input image into two, based on its mean, and then apply weighting constraints to each of the sub-histograms separately. Finally, those two histograms are equalized independently and their union produces a brightness preserved and contrast enhanced output image. This technique is found to preserve the brightness and enhance the contrast of input images better than its contemporary methods.
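The BBHE backbone that BWHE builds on can be sketched as below: split the histogram at the image mean and equalise each part within its own intensity range; the paper's bi-level weighting constraints on the sub-histograms are not reproduced.

```python
# BBHE sketch (mean-split bi-histogram equalization; BWHE's weighting step omitted).
import numpy as np

def equalize_range(img, mask, lo, hi):
    """Histogram-equalize the pixels selected by `mask` within the range [lo, hi]."""
    out = np.zeros_like(img, dtype=np.uint8)
    vals = img[mask].astype(int)
    hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum().astype(float)
    cdf = lo + (hi - lo) * (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    out[mask] = cdf[vals - lo].astype(np.uint8)
    return out

def bbhe(img):
    """Brightness-preserving bi-histogram equalization of an 8-bit grayscale image."""
    mean = int(np.clip(img.mean(), 1, 254))        # split point, kept away from the extremes
    lower, upper = img <= mean, img > mean
    return equalize_range(img, lower, 0, mean) + equalize_range(img, upper, mean + 1, 255)
```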

Journal Article•
TL;DR: This paper presents a 3D reconstruction technique with a fast stereo correspondence matching which is robust in tackling additive noise and the additive noise is eliminated using a fuzzy filtering technique.
Abstract: Stereo correspondence matching is a key problem in many applications like computer and robotic vision to determine Three-Dimensional (3D) depth information of objects, which is essential for 3D reconstruction. This paper presents a 3D reconstruction technique with a fast stereo correspondence matching which is robust in tackling additive noise. The additive noise is eliminated using a fuzzy filtering technique. Experiments with ground truth images prove the effectiveness of the proposed algorithms.
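A basic sum-of-absolute-differences block-matching disparity sketch is given below to illustrate the correspondence stage; the paper's fast matching strategy and the fuzzy noise filter are not reproduced, and the window size and disparity range are assumptions.

```python
# Basic SAD block-matching disparity sketch (illustrative; not the proposed fast matcher).
import numpy as np

def disparity_map(left, right, window=5, max_disp=32):
    """left, right: rectified grayscale images of identical shape."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    L, R = left.astype(np.float32), right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - R[y - half:y + half + 1,
                                      x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)          # disparity with the lowest SAD cost
    return disp                                    # depth is proportional to 1/disparity
```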

Journal Article•
TL;DR: This paper employs a Quantum-Behaved Particle Swarm Optimization (QPSO) model to select the best portfolio among 50 top Tehran Stock Exchange companies in order to optimize the objectives of the rate of return, systematic and non-systematic risk, return skewness, liquidity and Sharpe ratio.
Abstract: One of the popular methods for optimizing combinatorial problems such as the portfolio selection problem is swarm-based methods. In this paper, we propose an approach based on Quantum-Behaved Particle Swarm Optimization (QPSO) for the portfolio selection problem. Particle Swarm Optimization (PSO) is a well-known population-based swarm intelligence algorithm. QPSO was proposed by combining the classical PSO philosophy with quantum mechanics to improve the performance of PSO. Generally, in portfolio selection, investors simultaneously consider such contradictory objectives as the rate of return, risk and liquidity. We employed the QPSO model to select the best portfolio among 50 top Tehran Stock Exchange companies in order to optimize the objectives of the rate of return, systematic and non-systematic risk, return skewness, liquidity and Sharpe ratio. Finally, the obtained results were compared with the Markowitz classic and Genetic Algorithm (GA) models; the comparison indicated that although the return of the QPSO portfolio was less than that of Markowitz's classic model, the QPSO model had some advantages in decreasing risk, in the sense that it fully accounts for the rate of return, leads to better results and proposes more versatile portfolios compared with the other models. Therefore, we conclude that as far as selection of the best portfolio is concerned, the QPSO model can lead to better results and may help investors to make the best portfolio selection.
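A generic QPSO loop applied to a toy mean-variance portfolio objective is sketched below; the paper's multi-objective formulation (skewness, liquidity, Sharpe ratio) and its Tehran Stock Exchange data are not reproduced, and all parameter values are assumptions.

```python
# Generic QPSO sketch on a toy long-only mean-variance portfolio objective.
import numpy as np

def qpso_portfolio(mu, cov, n_particles=30, iters=200, beta=0.75, risk_aversion=2.0):
    """mu: expected returns (n_assets,); cov: return covariance matrix."""
    rng = np.random.default_rng(0)
    n = len(mu)

    def project(w):                                # long-only, fully-invested weights
        w = np.clip(w, 0, None)
        s = w.sum()
        return w / s if s > 0 else np.full(n, 1.0 / n)

    def fitness(w):                                # maximise return minus a risk penalty
        return mu @ w - risk_aversion * (w @ cov @ w)

    X = np.array([project(rng.random(n)) for _ in range(n_particles)])
    pbest, pfit = X.copy(), np.array([fitness(w) for w in X])
    for _ in range(iters):
        gbest = pbest[np.argmax(pfit)]
        mbest = pbest.mean(axis=0)                 # mean of the personal bests
        for i in range(n_particles):
            phi = rng.random(n)
            attractor = phi * pbest[i] + (1 - phi) * gbest
            u = 1.0 - rng.random(n)                # u in (0, 1]
            sign = np.where(rng.random(n) < 0.5, 1.0, -1.0)
            X[i] = project(attractor + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u))
            f = fitness(X[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = X[i], f
    return pbest[np.argmax(pfit)]                  # best portfolio weights found
```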

Journal Article•
TL;DR: A countermeasure technique that utilises a Metaheuristic Randomisation Algorithm (MRA) is proposed to address the FOA attack and is able to provide a larger password space compared to the benchmarking scheme.
Abstract: This paper addresses a newly discovered security threat named the Frequency of Occurrence Analysis (FOA) attack in the searchmetics password authentication scheme. A countermeasure technique that utilises a Metaheuristic Randomisation Algorithm (MRA) is proposed to address the FOA attack. The proposed algorithm is presented, and an offline FOA attack simulation tool is developed to verify the effectiveness of the proposed method. In addition, shoulder surfing testing is conducted to evaluate the effectiveness of the proposed method in terms of mitigating shoulder surfing attacks. The experimental results show that MRA is able to prevent FOA and mitigate shoulder surfing attacks. Moreover, the proposed method is able to provide a larger password space compared to the benchmarking scheme.

Journal Article•
TL;DR: A framework for summarizing an XML document based both on the document itself and on the schema is given, which applies the schema to summarize XML documents because much important semantic and structural information is implied by the schema.
Abstract: eXtensible Markup Language (XML) has become one of the de facto standards for data exchange and representation in many applications. An XML document is usually too complex and large for a human being to understand and use. A summarized version of the original document is useful in such cases. Three standards are given to evaluate the final summarized XML document: document size, information content, and information importance. A framework is given for summarizing an XML document based both on the document itself and on the schema; it applies the schema to summarize XML documents because much important semantic and structural information is implied by the schema. In our framework, redundant data are first removed by means of abnormal functional dependencies and the schema structure. Then the tags and values of the XML document are summarized based on the document itself and the schema. Our framework is a semi-automatic approach which can help users to summarize an XML document, in the sense that some parameters must be specified by the users. Experiments show that the framework can produce a summarized XML document with a good balance of document size, information content, and information importance compared with the original one.

Journal Article•
TL;DR: The top-down Model Driven Architecture (MDA) approach can be used to provide an efficient way to write specifications, develop applications, and separate business functions and the application from the technical platform to be used.
Abstract: In recent years, Multi-Agent Systems (MAS) have started gaining widespread acceptance in the field of information technology. This prompted many researchers to attempt to find ways to facilitate their development process, which typically includes building different models. The transformation of system specifications into models and their subsequent translation into code is often performed by relying on unstandardized methods, hindering adaptation to rapid changes in technology. Furthermore, there is a big gap between the analysis, the design and the implementation in the methodologies of multi-agent systems development. On the other hand, we have seen that the top-down Model Driven Architecture (MDA) approach can be used to provide an efficient way to write specifications, develop applications, and separate business functions and the application from the technical platform to be used. In this work, we propose using the MDA approach for developing MAS. We demonstrate several different approaches, resulting in a variety of methods for developing MAS. This, in turn, increases the flexibility and ease of the development of MAS, and avoids any previously imposed restrictions.