
Showing papers presented at "International Conference on Innovations in Information Technology in 2008"


Proceedings ArticleDOI
01 Dec 2008
TL;DR: The authors introduce a new light stemming technique for Arabic, compare it with other widely used stemmers, and show how it improves search effectiveness.
Abstract: Building an effective stemmer for the Arabic language has always been a hot research topic in the IR field, since Arabic has a structure very different from, and more difficult than, other languages: it is a very rich language with complex morphology. Many linguistic and light stemmers have been developed for Arabic, but many weaknesses and problems remain. In this paper we introduce a new light stemming technique, compare it with other widely used stemmers, and show how it improves search effectiveness.

44 citations
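The light-stemming idea summarized above can be illustrated with a minimal sketch: strip one common prefix and one common suffix when enough of the word remains. The affix lists and the minimum-length guard below are illustrative assumptions, not the paper's actual technique.

```python
# Minimal light-stemming sketch: illustrative Arabic affix lists,
# not the paper's actual rule set.
PREFIXES = ["ال", "وال", "بال", "كال", "فال", "لل", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ه", "ة", "ي"]

def light_stem(word, min_len=3):
    # Try longest prefixes first so e.g. "وال" wins over "و".
    for p in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= min_len:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_len:
            word = word[:-len(s)]
            break
    return word
```

The `min_len` guard is the usual safeguard in light stemmers against stripping affixes from short words where the "affix" is really part of the stem.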


Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper presents a rule-based technique to generate Arabic phonetic dictionaries for a large vocabulary speech recognition system using classic Arabic pronunciation rules, common pronunciation rules of Modern Standard Arabic, as well as morphologically driven rules.
Abstract: Phonetic dictionaries are essential components of large-vocabulary, speaker-independent natural language speech recognition systems. This paper presents a rule-based technique to generate Arabic phonetic dictionaries for a large vocabulary speech recognition system. The system uses classic Arabic pronunciation rules, common pronunciation rules of Modern Standard Arabic, as well as morphologically driven rules. The paper explains these rules in detail and gives their formal mathematical presentation. The rules were used to generate a dictionary for a 5.4-hour corpus of broadcast news. The phonetic dictionary contains 23,841 definitions corresponding to about 14,232 words. The generated dictionary was evaluated on an actual Arabic speech recognition system. The pronunciation rules and the phone set were validated by test cases. The Arabic speech recognition system achieves a word error rate of 11.71% on fully diacritized transcriptions of about 1.1 hours of Arabic broadcast news.

38 citations
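The rule-based generation described above can be sketched as a letter-to-phone table plus contextual rules. The toy romanization, phone names, and the single gemination rule below are hypothetical; the paper's actual rule set is far richer.

```python
# Toy grapheme-to-phoneme sketch: a letter-to-phone table plus one
# contextual rule (a '~' shadda marker geminates the preceding
# phone). Romanization and phone names are illustrative assumptions.
LETTER_TO_PHONE = {
    "b": "B", "t": "T", "s": "S", "m": "M", "k": "K", "l": "L",
    "a": "AE", "i": "IH", "u": "UH", "A": "AA",
}

def to_phones(word):
    phones = []
    for ch in word:
        if ch == "~":                    # shadda: double the previous phone
            phones.append(phones[-1])
        else:
            phones.append(LETTER_TO_PHONE[ch])
    return " ".join(phones)
```

A real system would add many more contextual rules (e.g. sun-letter assimilation, tanween) on top of this table-driven core.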


Proceedings ArticleDOI
01 Dec 2008
TL;DR: An efficient routing algorithm for large-scale cluster-based wireless sensor networks using the ant colony optimization (ACO) algorithm, a biologically inspired optimization paradigm, to provide smooth operation more effectively.
Abstract: In this paper, we propose an efficient routing algorithm for large-scale cluster-based wireless sensor networks. The technique uses two routing levels. In the first level (intra-cluster), cluster members send data directly to their cluster head. In the second level (inter-cluster), the cluster heads use the ant colony optimization (ACO) algorithm, a biologically inspired optimization paradigm, to find a route to the base station. As only cluster heads participate in the inter-cluster routing operation, the method can provide smooth operation more effectively. The delay of the algorithm is minimized by using ant colony optimization along with clustering. To assess the efficiency of the proposed algorithm, we compare the method with some previous routing algorithms. The results show lower power consumption and better load balancing.

36 citations
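The inter-cluster ACO step described above can be sketched with the two standard ACO ingredients: probabilistic next-hop selection weighted by pheromone and distance, and pheromone evaporation plus reinforcement. Names and constants are illustrative, not the paper's exact formulation.

```python
import random

# Standard ACO next-hop rule: pick neighbor n with probability
# proportional to pheromone[n]^alpha * (1/distance[n])^beta.
def choose_next_hop(neighbors, pheromone, distance, alpha=1.0, beta=2.0, rng=random):
    weights = [(pheromone[n] ** alpha) * ((1.0 / distance[n]) ** beta)
               for n in neighbors]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]

def evaporate_and_reinforce(pheromone, path, rho=0.1, deposit=1.0):
    # Standard ACO update: evaporate everywhere, deposit on the used path.
    for n in pheromone:
        pheromone[n] *= (1.0 - rho)
    for n in path:
        pheromone[n] += deposit
```

Over repeated rounds, low-delay paths toward the base station accumulate pheromone and are chosen more often, which is the behavior the abstract attributes to the ACO level.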


Proceedings ArticleDOI
01 Dec 2008
TL;DR: A new approach for lip contour extraction based on fuzzy clustering is proposed that employs a stochastic cost function to partition a color image into lip and non-lip regions such that the joint probability of the two regions is maximized.
Abstract: Lip feature extraction is one of the most challenging tasks affecting lip reading systems' performance. In this paper, a new approach for lip contour extraction based on fuzzy clustering is proposed. The algorithm employs a stochastic cost function to partition a color image into lip and non-lip regions such that the joint probability of the two regions is maximized. First, the mouth location is determined, and then the lip region is preprocessed using a pseudo-hue transformation. Fuzzy c-means clustering is applied to each transformed image along with the b component of the CIELAB color space. To delete the clustered pixels around the lips, an ellipse and a Gaussian mask were used. To show the performance of the proposed method, it is compared with pseudo-hue segmentation and with fuzzy c-means clustering without preprocessing. The compared methods were applied to the VidTIMIT and M2VTS databases, and the results show the superiority of the proposed method in comparison with the other methods.

23 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: Performance evaluations show that watermark composition, as well as the embedding and extraction algorithms of S-SGW and FWC, are much faster than those of Guo et al.
Abstract: Wireless sensors typically operate in uncontrolled and possibly hostile environments. Thus, sensors have a high risk of being captured and compromised by an adversary. Traditional security schemes are computationally expensive, because they introduce overhead that shortens the life of the sensors. Watermarking schemes are usually lightweight and do not require extensive computing and power resources in comparison with traditional security techniques; thus, they can be an attractive alternative for wireless sensor applications. This paper proposes two fragile watermarking schemes, S-SGW and FWC, to provide integrity for sensor data. The proposed scheme S-SGW is a simplification of a technique proposed by Guo et al. for data streams. S-SGW and FWC require less computing power than the original technique and are thus more suitable for WSNs, yet they provide the same sensitivity to malicious updates. Data elements generated by sensors are organized into groups of variable sizes. In S-SGW, a secret watermark is generated from every two consecutive groups. The watermark is then embedded in the earlier group by replacing the least significant bits of its data elements. The performance evaluations show that watermark composition, as well as the embedding and extraction algorithms of S-SGW and FWC, are much faster than those of Guo et al.

23 citations
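The S-SGW embedding described above (a keyed watermark computed over two consecutive groups and embedded in the LSBs of the earlier group) can be sketched as follows; the use of SHA-256, the group sizes, and the serialization are illustrative assumptions, not the paper's scheme.

```python
import hashlib

# Sketch: derive a keyed watermark from two consecutive groups of
# integer sensor readings and embed it in the LSBs of the earlier
# group. Key handling and hashing are illustrative assumptions.
def watermark_bits(key, group_a, group_b, nbits):
    msg = key + b"|" + repr((group_a, group_b)).encode()
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(nbits)]

def embed(key, group_a, group_b):
    # Clear each element's LSB first, so the verifier can recompute
    # the watermark from the embedded data itself.
    cleared = [v & ~1 for v in group_a]
    bits = watermark_bits(key, cleared, group_b, len(cleared))
    return [v | b for v, b in zip(cleared, bits)]

def verify(key, group_a_marked, group_b):
    cleared = [v & ~1 for v in group_a_marked]
    expected = watermark_bits(key, cleared, group_b, len(cleared))
    actual = [v & 1 for v in group_a_marked]
    return actual == expected
```

Because embedding only touches the LSBs, each reading is distorted by at most 1, while any other modification to either group breaks the keyed watermark — the fragility property the abstract relies on.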


Proceedings ArticleDOI
01 Dec 2008
TL;DR: An effective, robust, and imperceptible video watermarking algorithm is proposed by applying a cascade of two powerful mathematical transforms: the discrete wavelet transform (DWT) and the singular value decomposition (SVD).
Abstract: Video watermarking is a relatively new technology that has been proposed to solve the problem of illegal manipulation and distribution of digital video. By virtue of this technology, copyright information is embedded in the video data to provide ownership verification. In this paper, we propose an effective, robust, and imperceptible video watermarking algorithm. The effectiveness of the algorithm comes from applying a cascade of two powerful mathematical transforms: the discrete wavelet transform (DWT) and the singular value decomposition (SVD). Reported experimental results demonstrate the effectiveness of the proposed algorithm.

20 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: A novel multi-swarm algorithm, which enables each swarm to converge to a different optimum and use the FGBF technique distinctively, and a multi-dimensional extension of the moving peaks benchmark (MPB), which is publicly available for testing optimization algorithms in a multi-modal dynamic environment.
Abstract: The particle swarm optimization (PSO) was introduced as a population based stochastic search and optimization process for static environments; however, many real problems are dynamic, meaning that the environment and the characteristics of the global optimum can change over time. Thanks to its stochastic and population based nature, PSO can avoid being trapped in local optima and find the global optimum. However, this is never guaranteed, and as the complexity of the problem rises, it becomes more probable that the PSO algorithm gets trapped in a local optimum due to premature convergence. In this paper, we propose novel techniques which successfully address several major problems in the field of particle swarm optimization (PSO) and promise efficient and robust solutions for multi-dimensional and dynamic problems. The first one, so-called multi-dimensional (MD) PSO, re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. To address the premature convergence problem, we then propose the fractional global best formation (FGBF) technique, which basically collects all the best dimensional components and fractionally creates an artificial global-best particle (aGB) that has the potential to be a better "guide" than the PSO's native gbest particle. To establish follow-up of (current) local optima, we then introduce a novel multi-swarm algorithm, which enables each swarm to converge to a different optimum and use the FGBF technique distinctively.
We then propose a multi-dimensional extension of the moving peaks benchmark (MPB), which is publicly available for testing optimization algorithms in a multi-modal dynamic environment. On this extended benchmark, an extensive set of experiments shows that MD PSO using the FGBF technique with multi-swarms exhibits an impressive performance and tracks the global maximum peak with minimum error.

20 citations
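MD PSO and FGBF extend the standard PSO update, which for reference can be sketched on a one-dimensional sphere function; the inertia and acceleration constants below are common textbook choices, not the paper's settings.

```python
import random

# Baseline (fixed-dimension, single-swarm) PSO sketch that the MD PSO
# and FGBF techniques above generalize. Constants are illustrative.
def pso(f, n_particles=10, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                  # per-particle best positions
    gbest = min(x, key=f)         # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Velocity: inertia + pull toward pbest + pull toward gbest.
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest
```

FGBF replaces the single `gbest` with an artificial particle assembled from the best components across the swarm, and MD PSO additionally lets particles move between search-space dimensions.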


Proceedings ArticleDOI
01 Dec 2008
TL;DR: An exploratory experiment in which visual attention on the Web is compared for people with different cognitive abilities, indicating marked differences between the visual scan paths of dyslexic and non-dyslexic Web users.
Abstract: In this paper, we describe an exploratory experiment in which visual attention on the Web is compared for people with different cognitive abilities. Eye tracking can measure the direction, sequence and duration of a Web user's gaze over time. Eye movements of participants, with and without dyslexia, were recorded by means of a remote eye tracking device. Participants completed nine tasks on each of six different web sites. Findings indicate marked differences between the visual scan paths of dyslexic and non-dyslexic Web users. Results also provide insights as to how eye tracking can be applied to assess the usability of interfaces for people with special needs and inform the design of accessible interactive systems.

19 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: The experimental results indicate the effectiveness of the Gabor-based features and SVM for Arabic (Indian) digits recognition.
Abstract: Arabic (Indian) handwritten digit recognition is useful in a large variety of banking and business applications, as well as in postal zip code reading and data entry applications. In this paper we present a technique for the automatic recognition of Arabic (Indian) handwritten digits using Gabor-based features and support vector machines (SVMs). A database consisting of 21,120 samples written by 44 writers is used; 70% of the data is used for training and the remaining 30% for testing. Several scales and orientations are used to extract the Gabor-based features. The achieved average recognition rates are 99.85% and 97.94% using 3 scales & 5 orientations and 4 scales & 6 orientations, respectively. The experimental results indicate the effectiveness of the Gabor-based features and SVMs for Arabic (Indian) digit recognition.

19 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper proposes an implementation of the minimum connected dominating set (MCDS), taking into account the safety-application constraints and the specifics of the VANET context, and demonstrates the good performance and robustness of the protocol compared to other ones.
Abstract: Broadcast mechanisms are widely used in self-organizing wireless networks as support for other network layer protocols. In this paper we investigate the broadcast techniques proposed in the literature for vehicular ad hoc networks (VANETs). For safety applications, the broadcast protocol has to guarantee performance and reliability. We propose an implementation of the minimum connected dominating set (MCDS), taking into account the safety-application constraints and the specifics of the VANET context. Simulation results demonstrate the good performance and robustness of the protocol compared to other ones.

19 citations
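The MCDS backbone used above can be approximated with a simple greedy heuristic: grow the set from a high-degree node and always add the frontier node that covers the most uncovered nodes, which keeps the set connected. This generic sketch is not the paper's exact construction.

```python
# Greedy connected dominating set heuristic over an adjacency-list
# graph. Illustrative of the MCDS backbone idea, not the paper's
# algorithm; assumes the graph is connected.
def greedy_cds(adj):
    start = max(adj, key=lambda n: len(adj[n]))   # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != set(adj):
        # Only nodes adjacent to the current CDS keep it connected.
        frontier = {n for c in cds for n in adj[c]} - cds
        best = max(frontier, key=lambda n: len(set(adj[n]) - covered))
        cds.add(best)
        covered |= {best} | set(adj[best])
    return cds
```

In a VANET broadcast protocol, only the nodes in this backbone would rebroadcast a safety message, which is what reduces redundant transmissions.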


Proceedings ArticleDOI
01 Dec 2008
TL;DR: This review will provide answers to public concerns about the risks of using cell phones and shows that long-term exposure to EMF radiation from a cell phone could cause health effects, such as brain cancer.
Abstract: The growth in the use of cellular phones has raised concerns about the possible interaction between electromagnetic field (EMF) radiation and biological effects on human tissues, particularly the brain and the human immune system. These concerns have induced a large volume of research studies. However, most of the previous review studies concentrated on negative effects, and no published work took into consideration all possible effects caused by the use of cell phones. In this paper we aim to provide a review of studies which investigated the possible negative and positive biological effects of cell phone radiation on human tissues. This review will provide answers to public concerns about the risks of using cell phones. Our conclusion shows that long-term exposure to EMF radiation from a cell phone could cause health effects, such as brain cancer. Some positive health effects due to exposure to EMF radiation, such as improved bone healing and reduced toxic effects of chemotherapy, are highlighted. Finally, some studies have also shown no effect due to exposure to EMF. More long-term studies and analysis are much needed.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper simulates the JPEG compression attack with different quality standards to check the robustness of a watermarking method in still images using Haar, Daubechies, and biorthogonal wavelets, and proves the authenticity of the digital content.
Abstract: In this paper we analyze the robustness of a watermarking method in still images using Haar, Daubechies, and biorthogonal wavelets. The embedding process uses the Canny edge detection method and hides the watermark with perceptual considerations on different modalities of images. The extraction scheme uses a non-blind method to retrieve the watermark. We simulate the JPEG compression attack with different quality standards to check the robustness and prove the authenticity of the digital content.
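The Haar case of the wavelet embedding compared above can be sketched in pure Python: a one-level Haar DWT, additive embedding of watermark bits into the approximation coefficients, and non-blind extraction against the original signal. The embedding strength and the additive rule are illustrative, not the paper's scheme.

```python
# One-level Haar DWT on a 1-D signal of even length, plus a toy
# additive watermark in the approximation band with non-blind
# extraction. Illustrative sketch only.
def haar_dwt(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed_additive(signal, wm_bits, alpha=2.0):
    approx, detail = haar_dwt(signal)
    marked = [a + alpha * b for a, b in zip(approx, wm_bits)]
    return haar_idwt(marked, detail)

def extract_nonblind(marked_signal, original_signal, alpha=2.0):
    # Non-blind: the extractor needs the original to recover the bits.
    ma, _ = haar_dwt(marked_signal)
    oa, _ = haar_dwt(original_signal)
    return [1 if (m - o) / alpha > 0.5 else 0 for m, o in zip(ma, oa)]
```

Embedding in the low-frequency (approximation) band is what gives such schemes their robustness to JPEG compression, which mostly discards high-frequency detail.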

Proceedings ArticleDOI
Z. Kchaou1, S. Kanoun1
01 Dec 2008
TL;DR: This work proposes an approach to stemming Arabic words similar to the approach of Khoja, but with two dictionaries, one of roots and another of radicals, which has the advantage of reducing words derived from radicals to their radical.
Abstract: We propose an approach to stemming Arabic words similar to the approach of Khoja, but with two dictionaries, one of roots and another of radicals. Our approach has the advantage of reducing words derived from radicals to their radical and words derived from roots to their root with great reliability and consistency, and it solves the problem of radicals and roots that Khoja's approach handles poorly. We tested our approach on a large corpus of Arabic texts covering several areas.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: The proposed summarization-based steganography (Sumstega) methodology takes advantage of recent advances in automatic summarization techniques to generate a text-cover that looks legitimate, and pursues the variations among the outputs of autosummarization techniques to conceal data.
Abstract: Steganography is the science and art of avoiding the arousal of suspicion in covert communications. This paper presents a novel steganography methodology that pursues text summarization in order to hide messages. The proposed summarization-based steganography (Sumstega) methodology takes advantage of recent advances in automatic summarization techniques to generate a text-cover. Sumstega neither exploits noise (errors) to embed a message nor produces detectable noise. Instead, it pursues the variations among the outputs of autosummarization techniques to conceal data. Basically, Sumstega manipulates the parameters of automatic summarization tools, e.g. how word frequency is weighted in sentence selection, and employs other contemporary techniques such as paraphrasing, reordering, etc., to generate a summary-cover that looks legitimate. The popular use of text summaries in business, science, education, news, etc., renders the summary an attractive steganographic carrier and averts an adversary's suspicion. The validation results demonstrate the effectiveness of Sumstega.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A semantic query expansion algorithm for medical information retrieval, consisting of identifying MeSH concepts in the user's query and applying an expansion algorithm to them; results show improvements over the classic method, query expansion using a general-purpose ontology, and a number of other approaches.
Abstract: Domain-specific ontologies can be used to improve both precision and recall of information retrieval systems. One approach in this regard is using query expansion techniques; another is introducing a semantic similarity measure for concepts in an ontology. Although each approach has its own benefits and drawbacks, query expansion techniques are preferred when the corpus volume is so huge that examining concept pairs between query and documents is not reasonable. In this paper a semantic query expansion algorithm for medical information retrieval is introduced. The proposed approach consists of identifying MeSH (Medical Subject Headings) concepts in the user's query and applying an expansion algorithm to them. The expansion algorithm is based on the location of concepts in the MeSH hierarchy, the number of synonyms of each concept, and the number of terms the concept is made of. Results show improvements over the classic method, query expansion using a general-purpose ontology, and a number of other approaches.
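The expansion step described above can be sketched with a toy MeSH-like hierarchy: each recognized concept is expanded with its synonyms and its narrower concepts. The hierarchy, synonym lists, and the absence of weighting are illustrative simplifications, not actual MeSH data or the paper's scoring.

```python
# Hand-made, MeSH-like toy data for illustration only.
HIERARCHY = {
    "neoplasms": ["brain neoplasms", "lung neoplasms"],
    "brain neoplasms": [],
    "lung neoplasms": [],
}
SYNONYMS = {
    "neoplasms": ["tumors", "cancer"],
    "brain neoplasms": ["brain tumors"],
    "lung neoplasms": ["lung tumors"],
}

def expand_query(terms):
    expanded = []
    for t in terms:
        expanded.append(t)
        if t in HIERARCHY:                 # recognized concept
            expanded += SYNONYMS.get(t, [])
            expanded += HIERARCHY[t]       # narrower concepts
    return expanded
```

A real implementation would weight each added term (by hierarchy depth, synonym count, and term length, as the abstract indicates) rather than adding all of them with equal weight.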

Proceedings ArticleDOI
01 Dec 2008
TL;DR: It is demonstrated that the current default configuration of the Linux networking subsystem (a.k.a. NAPI) is not suitable for Snort's performance, and the performance of Snort can be improved significantly by tuning certain configuration parameters.
Abstract: Snort is one of the most popular Network Intrusion Detection Systems (NIDS) that exist today. Snort needs to be highly effective to keep up with today's high traffic of gigabit networks. An intrusion detection system that fails to perform packet inspection at high rate will allow malicious packets to enter the network undetected. In this paper we demonstrate that the current default configuration of the Linux networking subsystem (a.k.a. NAPI) is not suitable for Snort's performance. We show that the performance of Snort can be improved significantly by tuning certain configuration parameters. In particular, we experimentally study the performance impact of choosing different NAPI budget values on Snort's throughput. We conclude that a small budget would enhance the performance significantly.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper formulates the problem of determining optimal burst schedules for broadcasting multiple TV channels and proposes a practical simplification of the general problem, which allows TV channels to have different bit rates, but at power-of-two increments.
Abstract: We consider energy optimization in mobile TV networks in which a base station broadcasts multiple TV channels to mobile receivers over a common wireless medium. In these systems, the base station broadcasts TV channels in bursts with bit rates much higher than the encoding rates of the video streams. Thus, mobile devices can receive a burst of traffic and then turn off their radio frequency circuits till the next burst in order to save energy. To achieve this energy saving, the base station must carefully construct the burst schedule for all TV channels. In this paper, we formulate the problem of determining the optimal burst schedules for broadcasting multiple TV channels. We show that this problem is NP-complete for channels with arbitrary bit rates. We then propose a practical simplification of the general problem, which allows TV channels to have different bit rates, but at power of two increments. Using this simplification, we propose and analyze an optimal burst scheduling algorithm. We implement our algorithm in a mobile TV testbed and we present empirical results to demonstrate its efficiency and optimality.
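The power-of-two simplification above is what makes the scheduling tractable: a channel that needs 1/2^k of the air time can be assigned every 2^k-th slot at a distinct offset, like nested dyadic intervals, so bursts never collide. The allocator below illustrates that idea; it is not the paper's optimal algorithm and assumes the shares sum to at most one.

```python
# Assign each channel a (period, offset) pair: the channel transmits
# in every slot t with t % period == offset. Splitting a free class
# (p, o) into (2p, o) and (2p, o+p) keeps all classes disjoint.
def schedule(channels):
    """channels: {name: k} where the channel needs 1/2**k of the slots."""
    free = [(1, 0)]                  # free (period, offset) slot classes
    out = {}
    for name, k in sorted(channels.items(), key=lambda kv: kv[1]):
        period, offset = free.pop(0)
        while period < 2 ** k:       # split until the class is small enough
            free.append((period * 2, offset + period))
            period *= 2
        out[name] = (period, offset)
    return out
```

With shares 1/2 + 1/4 + 1/4, for example, the three channels tile every slot exactly once, so each receiver can sleep between its own bursts — the energy saving the abstract describes.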

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A wavelet-based image multi-watermarking algorithm is applied to implement three issues of great importance to the management of medical images; image source authentication, image annotation, and image retrieval.
Abstract: Medical images of all types are produced in hospitals and health centers more than ever. The production of medical images has been facilitated by advancements in information and communication technologies. Traditional methods used to manage medical images fall short in coping with the great amount of information inherent in medical images. Therefore, there remains an immediate need to develop efficient management strategies that work well with all types of medical images. In this paper, we apply a wavelet-based image multi-watermarking algorithm to address three issues of great importance to the management of medical images: image source authentication, image annotation, and image retrieval. Performance of the proposed algorithm was evaluated using four types of medical images: ultrasound, X-ray, CT, and MRI.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This publication presents selected aspects of modelling cognitive processes designed for the in-depth interpretation of image data, and a proposal for employing UBIAS cognitive information systems to cognitively interpret and analyze image-type data.
Abstract: This publication presents selected aspects of modelling the cognitive processes designed for the in-depth interpretation of image data. This type of cognitive process can be observed when executing complex data analysis and interpretation. This publication also presents a proposal for employing UBIAS cognitive information systems (Understanding Based Image Analysis Systems) to cognitively interpret and analyze image-type data. This analysis is modelled on the cognitive processes occurring in the human brain. The application of such processes to automatic, computer-based data analysis is illustrated with the example of a cognitive analysis of medical data showing various types of foot deformities. The process of analyzing, interpreting and reasoning on the basis of the semantic contents of a given image has led to rolling out a new class of cognitive systems designed for in-depth image data analysis, and it is this type of process that is presented here.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A potential probing technique for remotely discovering the last-matching rules of the security policy of a firewall, those rules that are located at the bottom of the ruleset and would require the most processing time by the firewall.
Abstract: In this paper we identify a potential probing technique for remotely discovering the last-matching rules of the security policy of a firewall. The last-matching rules are those rules that are located at the bottom of the ruleset of a firewall's security policy, and would require the most processing time by the firewall. If these rules are discovered, an attacker can potentially launch an effective low-rate DoS attack to trigger worst-case or near worst-case processing, and thereby overwhelming the firewall and bringing it to its knees. As a proof of concept, we developed a prototype program that implements the detection algorithm and validated its effectiveness experimentally.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper takes upon the well-established Dublin Core metadata standard and suggests a proper Semantic Web OWL ontology, coping with discrepancies and incompatibilities, indicative of such attempts, in novel ways.
Abstract: Metadata applications have evolved in time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Scarcely however are these semantics being communicated in machine readable and understandable ways. At the same time, the process for transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this paper we take upon the well-established Dublin Core metadata standard and suggest a proper Semantic Web OWL ontology, coping with discrepancies and incompatibilities, indicative of such attempts, in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual Dublin Core metadata, originating from the live DSpace installation of the University of Patras institutional repository.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A state of the art of Arabic text classification techniques and existing recognition systems, and a discussion of how evaluation methods and competitions help to support the development of text recognition systems and methods.
Abstract: Arabic character and text recognition methods for printed or handwritten characters have been known for many years. In the first part of this paper we present a state of the art of Arabic text classification techniques and existing recognition systems. In the second part we discuss how evaluation methods and competitions help to support the development of text recognition systems and methods. On the basis of the results of the last Arabic handwriting recognition competition we show a concept for developing efficient recognition systems. Based on the current state of research, future trends are described in the last part of this paper.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper explains why interleave division multiple access (IDMA) stands out among present-day multiple access systems and recommends an interleaver size of 1024 bits, at which it provides better efficiency and minimum processing delay.
Abstract: Beyond third generation (B3G) and fourth generation (4G) communication systems require bandwidth efficiency and low-complexity receivers to accommodate high data rates and a large number of users per cell. This paper explains why interleave division multiple access (IDMA) stands out among present-day multiple access systems. Although IDMA is a special case of code division multiple access (CDMA), it simply trounces CDMA as far as cell capacity is concerned. In IDMA, interleavers are used as the only means of user separation. The interleaver size can be varied for a set of users, but the recommended size is 1024 bits, at which IDMA provides better efficiency and minimum processing delay. The investigations carried out throughout this paper use 1024-bit interleavers. IDMA exhibits the ability to handle a huge number of users.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper proposes a novel DAWG based on a compacted double-array, which overcomes the drawbacks of traditional ones; experimental results show that the novel DAWG is more efficient than traditional ones.
Abstract: String matching is fundamental to various text-processing applications such as text mining and content filtering systems. This paper describes a fast string matching algorithm using a compact pattern matching machine, the DAWG. A directed acyclic word graph (DAWG) is traditionally implemented with a two-dimensional linked list or a matrix. However, DAWGs with these structures have drawbacks: the linked-list version is slow to look up, and the matrix version requires a large amount of space. Therefore, this paper proposes a novel DAWG based on a compacted double-array, which overcomes the drawbacks of the traditional structures. Experimental results show that the novel DAWG is more efficient than the traditional ones.
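The compacted double-array stores a trie's transition table in two parallel arrays (BASE and CHECK) so that each transition is a single array probe. The sketch below shows the underlying trie lookup with plain dicts; the double-array is an array-compressed version of the same automaton, and the pattern set here is illustrative.

```python
# Dict-based trie and naive multi-pattern matching: the logical
# structure that a (compacted) double-array stores compactly.
def build_trie(patterns):
    trie = {}
    for p in patterns:
        node = trie
        for ch in p:
            node = node.setdefault(ch, {})
        node["$"] = p                 # terminal marker stores the pattern
    return trie

def find_all(text, trie):
    hits = []
    for start in range(len(text)):
        node = trie
        for ch in text[start:]:
            if ch not in node:
                break
            node = node[ch]
            if "$" in node:
                hits.append((start, node["$"]))
    return hits
```

Restarting from every text position, as here, is what DAWG- and Aho-Corasick-style machines avoid by adding failure/suffix links; the double-array contribution is orthogonal, making each transition a constant-time array access.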

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A simulation study of the enhanced distributed channel access method (EDCA) in the new IEEE802.11e standard protocol is presented to verify that it achieves superior QoS performance for real-time applications compared with the earlier legacy IEEE 802.11 DCF access method.
Abstract: Wireless local area networks (WLANs) are increasingly popular because of their flexibility. This spread of WLANs comes with an increasing use of multimedia applications. Such applications are bandwidth sensitive and require a quality of service (QoS) that guarantees high-performance transmission of continuous data. This requirement is the focus of the new enhanced IEEE 802.11e standard protocol for WLANs. This paper presents a simulation study of the enhanced distributed channel access method (EDCA) in the new IEEE 802.11e standard protocol. The protocol is evaluated to verify that it achieves superior QoS performance for real-time applications compared with the earlier legacy IEEE 802.11 DCF access method. Simulation experiments using the network simulator NS-2 are carried out to compare the performance of both protocols in terms of throughput, delay, and jitter.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A novel approach for privacy preserving clustering (PPC) over centralized data is introduced, which uses Haar wavelet transform and scaling data perturbation to protect the underlying numerical attribute values subjected to clustering analysis.
Abstract: Despite the benefits of data mining in a wide range of applications, the technique has raised some issues related to the privacy and security of individuals. Due to these issues, data owners may refuse to share their sensitive information with data miners. In this paper, we introduce a novel approach for privacy-preserving clustering (PPC) over centralized data. The proposed technique uses the Haar wavelet transform (HWT) and scaling data perturbation (SDP) to protect the underlying numerical attribute values subjected to clustering analysis. In addition, some experimental results are presented which demonstrate that the proposed technique is effective and achieves a good tradeoff between clustering utility and data privacy.
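The SDP half of the technique above can be sketched directly: multiplying attributes by secret factors hides the raw values, while uniform factors preserve the relative distances that clustering depends on. The data and factors below are illustrative, and the Haar wavelet step of the paper is omitted.

```python
# Scaling data perturbation sketch: scale each attribute by a secret
# factor. With uniform factors, all pairwise squared distances are
# scaled by the same constant, so cluster structure is preserved.
def perturb(records, factors):
    return [[v * f for v, f in zip(rec, factors)] for rec in records]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

Since every squared distance is multiplied by the same factor, any distance-based clustering (e.g. k-means) produces the same partition on the perturbed data as on the original.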

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper describes a sample scenario of computer usage operations in an English composition class that shows how the semantic gap between teachers and students can be filled using ontologies.
Abstract: In this paper, we show how ontologies can be utilized to support distant communication. Although augmented with audio and video functions and screen-sharing facilities, the users of real-time communication tools face difficulties due to the tools' inability to provide appropriate meaning for keywords or conversation topics. Our system fills this semantic gap and supplies knowledge of keywords related to a specific topic. This paper presents an extended approach that uses different domain ontologies to carry out mapping by agents. Assisting a teacher by providing appropriate information to students, as an intelligent tutoring system does, is one example application. This paper describes a sample scenario of computer usage operations in an English composition class that shows how the semantic gap between teachers and students can be filled using ontologies.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: A systemic framework combining techniques from soft systems methodology, the unified modelling language (UML) and the naked objects approach to implementing interactive software systems to improve the quality of business process modelling and implementation is proposed.
Abstract: This paper proposes a systemic framework combining techniques from soft systems methodology (SSM), the unified modelling language (UML), and the naked objects approach to implementing interactive software systems. The framework supports the full development cycle from business process modelling to software implementation: SSM is used to explore the problem situation, UML-based techniques are used for detailed design, and the Naked Objects framework is used for implementation. We argue that combining the three techniques improves the quality of both business process modelling and implementation. The proposed systemic framework is explained and justified using Mingers' multimethodology ideas, and the approach is being evaluated through a series of action research projects based on real-world case studies.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: Efficient routing and collision-free centralized scheduling algorithms for a single-channel, single-transceiver WiMAX mesh network are proposed, introducing a cross-layer concept between the network layer and the media access control (MAC) layer.
Abstract: In the last few years, demand for high-speed Internet access has increased greatly, so the IEEE 802.16 working group on broadband wireless access (BWA) is developing the worldwide interoperability for microwave access (WiMAX) standard for wireless metropolitan area networks (MANs), which aims to provide broadband wireless last-mile access, easy deployment, and high data rates over a large coverage area. This paper proposes efficient routing and collision-free centralized scheduling (CS) algorithms for a single-channel, single-transceiver WiMAX mesh network, introducing a cross-layer design between the network layer and the media access control (MAC) layer. The proposed approach improves system performance in terms of scheduling length, channel utilization ratio (CUR), and system throughput.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: This paper compares the clustering quality of ART1-based clustering with the k-means technique and the self-organizing map (SOM) neural network in terms of intra-cluster distance, and shows that the clusters formed by the ART1 approach are much more compact and isolated than those formed by SOM and k-means.
Abstract: In this paper, we present and compare three clustering approaches that group fingerprints according to the locations of their minutiae points. The best technique for grouping fingerprints is based on the ART1 clustering algorithm. We compare the clustering quality of ART1-based clustering with the k-means technique and the self-organizing map (SOM) [1] neural network in terms of intra-cluster distance. Our results show that the average intra-cluster distance of the clusters formed by SOM and k-means varies from 83.36 to 127.372 and from 33.925 to 58.17, respectively, while that of the clusters formed by the ART1-based technique varies from 4.55 to 13.06, indicating that the ART1 clusters are much more compact and isolated than those produced by the SOM- and k-means-based approaches.
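The abstract compares methods by average intra-cluster distance without stating its definition. One common definition, assumed here, is the mean distance from each member point to its cluster centroid, averaged per cluster; smaller values mean more compact clusters, which is the sense in which the ART1 results above outperform SOM and k-means.

```python
import numpy as np

def avg_intra_cluster_distance(points, labels):
    """For each cluster, the mean Euclidean distance of its
    members to the cluster centroid. Returns one value per
    cluster; smaller = more compact."""
    dists = []
    for k in np.unique(labels):
        members = points[labels == k]
        centroid = members.mean(axis=0)
        dists.append(np.linalg.norm(members - centroid, axis=1).mean())
    return dists

# Two tight, well-separated clusters of 2D points
# (stand-ins for minutiae-derived feature vectors).
pts = np.array([[0.0, 0.0], [0.0, 2.0], [10.0, 10.0], [10.0, 12.0]])
labels = np.array([0, 0, 1, 1])
print(avg_intra_cluster_distance(pts, labels))  # [1.0, 1.0]
```

Under this metric, the ranges reported in the abstract (4.55–13.06 for ART1 versus 33.925–58.17 and 83.36–127.372 for k-means and SOM) directly quantify how much tighter the ART1 clusters are.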