
Showing papers presented at "International Conference on Software and Data Technologies in 2006"


Book ChapterDOI
11 Sep 2006
TL;DR: The present work provides a summary of the state of the art in software measures by means of a systematic review of the current literature, showing the trends in the software measurement field and the software processes on which measurement efforts have focused.
Abstract: The present work provides a summary of the state of the art in software measures by means of a systematic review of the current literature. Nowadays, many companies need to answer the following questions: how to measure, when to measure, and what to measure? There have been many efforts made to attempt to answer these questions, and this has resulted in a large amount of data that is sometimes confusing and unclear. This needs to be properly processed and classified in order to provide a better overview of the current situation. We have used a Measurement Software Ontology to classify and organize the large amount of data in this field. We have also analyzed the results of the systematic review to show the trends in the software measurement field and the software processes on which measurement efforts have focused. This has allowed us to discover which parts of the process are not sufficiently supported by measurements, and thus to motivate future research in those areas.

64 citations


Proceedings Article
01 Jan 2006
TL;DR: A taxonomy is developed that helps classify existing application scenarios along the dimensions of domain, design, and integration, and an insight is provided into the area of semantic integration and how metamodels can be brought together with ontologies in this context.
Abstract: This paper strives to demonstrate “metamodels in action”, which means showing concrete applications of this concept. Based on a literature survey, we develop a taxonomy that helps classify existing application scenarios along the dimensions of domain, design, and integration, and we briefly describe some of the existing work we came across. Furthermore, we provide an insight into the area of semantic integration and how metamodels can be brought together with ontologies in this context. The paper is concluded with an outlook on important future work in the field of metamodeling.

55 citations


Book ChapterDOI
11 Sep 2006
TL;DR: The aims of this article are to describe the characteristics of systems with Ambient Intelligence, to provide examples of their applications and to highlight the challenges that lie ahead, especially for the Software Engineering and Knowledge Engineering communities.
Abstract: Ambient Intelligence is a multi-disciplinary approach which aims to enhance the way environments and people interact with each other. The ultimate goal of the area is to make the places we live and work in more beneficial to us. Smart Homes are one example of such systems, but the idea can also be applied to hospitals, public transport, factories and other environments. The achievement of Ambient Intelligence largely depends on the technology deployed (sensors and devices interconnected through networks) as well as on the intelligence of the software used for decision-making. The aims of this article are to describe the characteristics of systems with Ambient Intelligence, to provide examples of their applications and to highlight the challenges that lie ahead, especially for the Software Engineering and Knowledge Engineering communities.

47 citations


Proceedings Article
01 Dec 2006
TL;DR: The purpose of this paper is to explain the notion of agility and to suggest a definition of agile methods that would help in the ranking or differentiation of agile methods from other available methods.
Abstract: There are a number of agile and traditional methodologies for software development. Agilists provide agile principles and agile values to characterize the agile methods, but there is no clear and inclusive definition of agile methods; subsequently it is not feasible to draw a clear distinction between traditional and agile software development methods in practice. The purpose of this paper is to explain the notion of agility in general and then to suggest a definition of agile methods that would help in the ranking or differentiation of agile methods from other available methods.

32 citations


Proceedings Article
14 Sep 2006
TL;DR: Proceedings of the International Conference on Software and Data Technologies (ICSOFT 2006), Setubal, Portugal, 11-14 September 2006.
Abstract: International Conference on Software and Data Technologies (ICSOFT 2006), Setubal, Portugal, 11-14 September 2006

28 citations


Book ChapterDOI
11 Sep 2006
TL;DR: This work presents a skeleton for branch & bound problems for MIMD machines with distributed memory based on a distributed work pool and discusses some implementation aspects such as termination detection as well as overlapping computation and communication.
Abstract: Algorithmic skeletons are predefined components for parallel programming. We will present a skeleton for branch & bound problems for MIMD machines with distributed memory. This skeleton is based on a distributed work pool. We discuss two variants, one with supply-driven work distribution and one with demand-driven work distribution. This approach is compared to a simple branch & bound skeleton with a centralized work pool, which has been used in a previous version of our skeleton library Muesli. Based on experimental results for two example applications, namely the n-puzzle and the traveling salesman problem, we show that the distributed work pool is clearly better and enables good runtimes and in particular scalability. Moreover, we discuss some implementation aspects such as termination detection as well as overlapping computation and communication.

24 citations
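
The skeleton idea above can be conveyed with a short sketch: the problem-specific parts (branching, bounding, solution test) are passed in as functions, and the skeleton manages the work pool and the pruning. This is a minimal, sequential Python sketch with a single centralized pool, not the C++/Muesli skeleton from the paper; the function names are illustrative only.

import heapq, itertools

def branch_and_bound(root, branch, lower_bound, is_solution):
    """Generic best-first branch & bound; the problem-specific parts are plugged in."""
    counter = itertools.count()               # tie-breaker so the heap never compares nodes
    best_value, best_node = float("inf"), None
    pool = [(lower_bound(root), next(counter), root)]   # the (here centralized) work pool
    while pool:
        bound, _, node = heapq.heappop(pool)
        if bound >= best_value:
            continue                          # prune: this subproblem cannot improve the incumbent
        if is_solution(node):                 # for solutions the bound is taken as the objective value
            best_value, best_node = bound, node
            continue
        for child in branch(node):
            b = lower_bound(child)
            if b < best_value:
                heapq.heappush(pool, (b, next(counter), child))
    return best_value, best_node

In the distributed variants compared in the paper, each worker keeps such a pool locally; roughly, subproblems are pushed to other workers when a pool has surplus work (supply-driven) or requested when a pool runs empty (demand-driven).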


Proceedings Article
01 Jan 2006
TL;DR: A new learning-by-examples PCA-based algorithm is developed for extracting skeleton information from data, to assure both good recognition performance and generalization capabilities.
Abstract: The aim of the paper is to develop a new learning-by-examples PCA-based algorithm for extracting skeleton information from data, to assure both good recognition performance and generalization capabilities. Here the generalization capabilities are viewed twofold: on one hand, to identify the right class for new samples coming from one of the classes taken into account and, on the other hand, to identify the samples coming from a new class. The classes are represented in the measurement/feature space by continuous repartitions, that is, the model is given by the family of density functions {f_h, h ∈ H}, where H stands for the finite set of hypotheses (classes). The basis of the learning process is represented by samples of possibly different sizes coming from the considered classes. The skeleton of each class is given by the principal components obtained for the corresponding sample. 1 PRINCIPAL COMPONENTS The starting point for PCA is an n-dimensional random vector X. There is available a sample X(1), X(2), ..., X(T) from this random vector. No explicit assumptions on the probability density of the vectors are made in PCA, as long as the first-order and second-order statistics are known or can be estimated from the sample. Also, no generative model is assumed for vector X. In the PCA transform, the vector X is first centered by subtracting its mean, X = X - E(X). In practice, the mean of the n-dimensional vector X is estimated from the available sample. In the following, we assume that the vector X is centered. Next, X is linearly transformed to another vector Y with m elements, m ≤ n.

18 citations
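
As a rough illustration of the PCA computation sketched above (centering with the sample mean, estimating second-order statistics, keeping the m leading eigenvectors as a class "skeleton"), here is a generic numpy sketch; it is not the authors' algorithm, and the function name and toy data are invented.

import numpy as np

def pca_skeleton(samples, m):
    """Return the m leading principal directions of a class sample and the projected data.

    samples: array of shape (T, n) whose rows are observations X(1), ..., X(T).
    """
    X = samples - samples.mean(axis=0)        # center: X <- X - E(X), mean estimated from the sample
    cov = np.cov(X, rowvar=False)             # second-order statistics
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh, since the covariance matrix is symmetric
    order = np.argsort(eigvals)[::-1][:m]     # indices of the m largest eigenvalues
    W = eigvecs[:, order]                     # n x m matrix of principal directions
    return W, X @ W                           # the "skeleton" W and the transformed vectors Y

rng = np.random.default_rng(0)
class_sample = rng.normal(size=(100, 5))      # one such sample per class in the original setting
W, Y = pca_skeleton(class_sample, m=2)
print(W.shape, Y.shape)                       # (5, 2) (100, 2)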


Book ChapterDOI
11 Sep 2006
TL;DR: An insight is provided into the important area of semantic integration and interoperability, showing how metamodels can be brought together with ontologies in this context.
Abstract: This paper aims to provide an overview of existing applications of the metamodeling concept in the area of computer science. In order to do so, a literature survey has been performed, which indicates that metamodeling is basically applied for two main purposes: design and integration. In the course of describing these two applications we also briefly describe some of the existing work we came across. Furthermore, we provide an insight into the important area of semantic integration and interoperability, and we show how metamodels can be brought together with ontologies in this context. The paper is concluded with an outlook on relevant future work in the field of metamodeling.

18 citations


Proceedings Article
01 Jan 2006
TL;DR: A lightweight, near-English modelling language called SBOML (Smart Business Object Modelling Language) is proposed to model Smart Business Objects (SBOs), a novel concept denoting web-ready business objects that can be modelled at a higher level of abstraction than with traditional modelling approaches.
Abstract: At present, there is a growing need to accelerate the development of web applications and to support continuous evolution of web applications due to evolving business needs. The object persistence and web interface generation capabilities in contemporary MVC (Model View Controller) web application development frameworks and the model-to-code generation capability in Model-Driven Development tools have simplified the modelling of business objects for developing web applications. However, there is still a mismatch between the current technologies and the essential support for high-level, semantic-rich modelling of web-ready business objects for rapid development of modern web applications. Therefore, we propose a novel concept called Smart Business Object (SBO) to solve the above-mentioned problem. In essence, SBOs are web-ready business objects. SBOs have high-level, web-oriented attributes such as email, URL, video, image, document, etc. This allows SBOs to be modelled at a higher level of abstraction than with traditional modelling approaches. A lightweight, near-English modelling language called SBOML (Smart Business Object Modelling Language) is proposed to model SBOs. We have created a toolkit to streamline the creation (modelling) and consumption (execution) of SBOs. With these tools, we are able to build fully functional web applications in a very short time without any coding.

14 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper presents the usage tracking language – UTL, designed to be generic and an instantiation of a part of it is presented with IMS-Learning Design, the representation model the authors chose for their three years of experiments.
Abstract: In the context of distance learning and teaching, the re-engineering process needs feedback on the learners' usage of the learning system. The feedback is given by numerous vectors, such as interviews, questionnaires, videos or log files. We consider that it is important to interpret tracks in order to compare the designer’s intentions with the learners’ activities during a session. In this paper, we present the usage tracking language – UTL. This language is designed to be generic and we present an instantiation of a part of it with IMS-Learning Design, the representation model we chose for our three years of experiments.

Book ChapterDOI
11 Sep 2006
TL;DR: In the absence of hardware to perform IEEE 754R decimal floating-point operations, this new software package that will be fully compliant with the standard proposal should be an attractive option for various financial computations.
Abstract: The IEEE Standard 754-1985 for Binary Floating-Point Arithmetic [1] is being revised [2], and an important addition to the current text is the definition of decimal floating-point arithmetic [3]. This is aimed mainly to provide a robust, reliable framework for financial applications that are often subject to legal requirements concerning rounding and precision of the results in the areas of banking, telephone billing, tax calculation, currency conversion, insurance, or accounting in general. Using binary floating-point calculations to approximate decimal calculations has led in the past to the existence of numerous proprietary software packages, each with its own characteristics and capabilities. New algorithms are presented in this paper which were used for a generic implementation in software of the IEEE 754R decimal floating-point arithmetic, but may also be suitable for a hardware implementation. In the absence of hardware to perform IEEE 754R decimal floating-point operations, this new software package that will be fully compliant with the standard proposal should be an attractive option for various financial computations. The library presented in this paper uses the binary encoding method from [2] for decimal floating-point values. Preliminary performance results show one to two orders of magnitude improvement over a software package currently incorporated in GCC, which operates on values encoded using the decimal method from [2].
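
To see why the financial applications mentioned above need decimal rather than binary floating point, the snippet below uses Python's decimal module purely as a motivating illustration (it follows the same General Decimal Arithmetic model the revised standard builds on); the paper itself presents a compliant software library that uses the binary encoding of decimal values, which is not what is shown here.

from decimal import Decimal, ROUND_HALF_UP

# Rounding a price of 2.675 to cents: the binary double nearest to 2.675 lies slightly
# below it, so binary rounding yields 2.67, while decimal arithmetic yields 2.68.
print(round(2.675, 2))                                                      # 2.67
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 2.68

# Accumulating ten payments of 0.10 is exact in decimal but not in binary.
print(sum([0.1] * 10) == 1.0)                                               # False
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))                         # True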

Proceedings Article
01 Jan 2006
TL;DR: This paper presents a meta-modelling framework that automates the labor-intensive, and therefore expensive, process of modelling software systems using declarative models.
Abstract: Declarative models are a commonly used approach to deal with software complexity: by abstracting away the intricacies of the implementation these models are often easier to understand than the underlying code. Popular modeling languages such as UML can however become complex to use when modeling systems in

Proceedings Article
01 Jan 2006
TL;DR: The main objective of this research is a rigorous investigation of an architectural approach for developing and evolving reactive autonomic systems, and for continuous monitoring of their quality.
Abstract: The main objective of this research is a rigorous investigation of an architectural approach for developing and evolving reactive autonomic (self-managing) systems, and for continuous monitoring of their quality. In this paper, we draw upon our research experience and the experience of other autonomic computing researchers to discuss the main aspects of the Autonomic Systems Timed Reactive Model (AS-TRM) architecture and demonstrate its reactive, distributed and autonomic computing nature. To our knowledge, ours is the first attempt to model reactive behavior in autonomic systems.

Proceedings Article
01 Jan 2006
TL;DR: The paper presents the form type concept, which generalizes the screen forms that users utilize to communicate with an information system; the concept is semantically rich enough to enable specifying an initial set of constraints that makes it possible to generate application prototypes together with the related implementation database schema.
Abstract: The paper presents the form type concept, which generalizes the screen forms that users utilize to communicate with an information system. The concept is semantically rich enough to enable specifying an initial set of constraints that makes it possible to generate application prototypes together with the related implementation database schema. IIS*Case is a CASE tool based on the form type concept that supports conceptual modelling of an information system and its database schema. The paper outlines how this tool can generate XML specifications of application prototypes of an information system. The aim is to improve IIS*Case through the implementation of a generator which can automatically produce an executable prototype of an information system.

Proceedings Article
13 Sep 2006
TL;DR: This paper surveys existing and future directions regarding language-based solutions to the problem that few actual applications use service-oriented architectures.
Abstract: The fast evolution of the Internet has popularized service-oriented architectures (SOA) with their promise of dynamic IT-supported inter-business collaborations. Yet this popularity does not reflect on the number of actual applications using the architecture. Programming models in use today make a poor match for the distributed, loosely-coupled, document-based nature of SOA. The gap is actually increasing. For example, interoperability between different organizations requires contracts to reduce risks. Thus, high-level models of contracts are making their way into service-oriented architectures, but application developers are still left to their own devices when it comes to writing code that will comply with a contract. This paper surveys existing and future directions regarding language-based solutions to the above problem.

Book ChapterDOI
11 Sep 2006
TL;DR: Results show that training is necessary to achieve orthogonal and effective classifications and agreement between subjects, that efficiency averages five minutes per defect classification, and that there is affinity between some categories.
Abstract: Defect categorization is the basis of many works that relate to software defect detection. The assumption is that different subjects assign the same category to the same defect. Because this assumption was questioned, our next decision was to study the phenomenon, with the aim of providing empirical evidence. Because defects can be categorized by using different criteria, and the experience of the involved professionals in using such a criterion could affect the results, our further decisions were: (i) to focus on the IBM Orthogonal Defect Classification (ODC); (ii) to involve professionals after having stabilized the process and materials with students. This paper is concerned with our basic experiment. We analyze a benchmark including more than two thousand data points that we obtained through twenty-four segments of code, each segment seeded with one defect, and one hundred twelve sophomores, trained for six hours and then assigned to classify those defects in a controlled environment for three continuous hours. The focus is on discrepancy among categorizers, and on orthogonality, affinity, effectiveness, and efficiency of categorizations. Results show that: (i) training is necessary to achieve orthogonal and effective classifications and to obtain agreement between subjects; (ii) efficiency is five minutes per defect classification on average; (iii) there is affinity between some categories.

Book ChapterDOI
11 Sep 2006
TL;DR: The OMG’s Business Process Definition Metamodel (BPDM) has been identified as the standard that will be key for the application of MDA to BPM.
Abstract: Due to the rapid change in the business processes of organizations, Business Process Management (BPM) has come into being. BPM helps business analysts to manage all concerns related to business processes, but the gap between these analysts and the people who build the applications is still large. The organization’s value chain changes very rapidly, and it is impossible to modify simultaneously the systems that support the business management process. MDE (Model Driven Engineering) is a good support for transferring these business process changes to the systems that implement these processes. Thus, by using any MDE approach, such as MDA, the alignment between business people and software engineering should be improved. To discover the different proposals that exist in this area, a systematic review was performed. As a result, the OMG’s Business Process Definition Metamodel (BPDM) has been identified as the standard that will be key for the application of MDA to BPM.

Proceedings Article
01 Jan 2006
TL;DR: The paper considers the issues of the author’s approach to these problems, describes specific heuristics for them, and gives some common methods and algorithms related to such clustering.
Abstract: The present work is a continuation of several of the author's preceding works dedicated to a specific multiheuristic approach to discrete optimization problems. This paper considers those issues of the multiheuristic approach which relate to the problems of clustering situations: in particular, the author’s approach to these problems and the description of specific heuristics for them. We give the description of a particular example from the group of hierarchical clustering algorithms, which we use for clustering situations. We also give descriptions of some common methods and algorithms related to such clustering. There are two examples of metrics on sets of situations for two different problems; one of the problems is a classical discrete optimization problem and the other comes from game-playing programming.

Proceedings Article
01 Jan 2006
TL;DR: This paper aims at designing data mining methods for evaluating the seismic vulnerability of regions in the built infrastructure, based on k-nearest neighbor graphs, which have the advantage of taking into account any distribution of training instances and also the data topology.
Abstract: This paper aims at designing some data mining methods for evaluating the seismic vulnerability of regions in the built infrastructure. A supervised clustering methodology is employed, based on k-nearest neighbor graphs. Unlike other classification algorithms, the method has the advantage of taking into account any distribution of training instances and also the data topology. For the particular problem of seismic vulnerability analysis using a Geographic Information System, the gradual formation of clusters (for different values of k) allows a decision-making stakeholder to visualize more clearly the details of the cluster areas. The performance of the k-nearest neighbor graph method is tested on three classification problems, and finally it is applied to a sample from a digital map of Iași, a large city located in the north-eastern part of Romania.
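
The graph-based clustering idea can be illustrated with a generic sketch: connect every point to its k nearest neighbours, symmetrize the relation, and read clusters off the connected components; varying k shows the gradual cluster formation mentioned above. This is plain numpy, not the authors' supervised methodology or GIS data, and the function name and toy points are invented.

import numpy as np

def knn_graph_components(points, k):
    """Connected components of a symmetric k-nearest-neighbour graph."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:   # skip position 0, the point itself
            adj[i].add(int(j))
            adj[int(j)].add(i)                    # symmetrize the neighbour relation

    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:                              # depth-first search over the graph
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        components.append(comp)
    return components

pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11]], dtype=float)
print(knn_graph_components(pts, k=1))             # two components for this small k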

Proceedings Article
01 Jan 2006
TL;DR: A tool for retrospectively identifying pre-requirements traces by working backwards from requirements to the documented records of the elicitation process such as interview transcripts or ethnographic reports is presented.
Abstract: Pre-requirements specification tracing concerns the identification and maintenance of relationships between requirements and the knowledge and information used by analysts to inform the requirements’ formulation. However, such tracing is often not performed as it is a time-consuming process. This paper presents a tool for retrospectively identifying pre-requirements traces by working backwards from requirements to the documented records of the elicitation process such as interview transcripts or ethnographic reports. We present a preliminary evaluation of our tool's performance using a case study. One of the key goals of our work is to identify requirements that have weak relationships with the source material. There are many possible reasons for this, but one is that they embody tacit knowledge. Although we do not investigate the nature of tacit knowledge in RE, we believe that even helping to identify the probable presence of tacit knowledge is useful. This is particularly true in circumstances where requirements’ sources need to be understood during, for example, the handling of change requests.

Proceedings Article
01 Jan 2006
TL;DR: This paper presents a fully implemented language proposal that integrates FOP and generics in order to combine the strengths of both approaches with respect to program customization, and facilitates two-staged program customization.
Abstract: With feature-oriented programming (FOP) and generics, programmers have proper means for structuring software so that its elements can be reused and extended. This paper addresses the issue of whether both approaches are equivalent. While FOP targets large-scale building blocks and compositional programming, generics provide fine-grained customization at the type level. We contribute an analysis that reveals the individual capabilities of both approaches with respect to program customization. Therefrom, we extract guidelines for programmers on which approach suffices in which situations. Furthermore, we present a fully implemented language proposal that integrates FOP and generics in order to combine their strengths. Our approach facilitates two-staged program customization: (1) selecting sets of features; (2) parameterizing features subsequently. This allows a broader spectrum of code reuse to be covered, reflected by proper language-level mechanisms. We underpin our proposal by means of a case study.
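
The two-staged customization described above, first selecting a set of features and then parameterizing them, can be approximated in plain Python, with mixin classes standing in for features and typing.Generic standing in for generics. This is only an analogy to convey the idea, not the paper's integrated language proposal; all names below are invented.

from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):                 # the generic base building block
    def __init__(self) -> None:
        self._items: list[T] = []
    def push(self, item: T) -> None:
        self._items.append(item)
    def pop(self) -> T:
        return self._items.pop()

class BoundedFeature:                    # a feature, realized as a mixin refining push()
    CAPACITY = 16
    def push(self, item):
        if len(self._items) >= self.CAPACITY:
            raise OverflowError("stack is full")
        super().push(item)

class LoggingFeature:                    # another optional feature
    def push(self, item):
        print(f"push({item!r})")
        super().push(item)

def compose(*features, base=Stack):
    """Stage 1: select a set of features and compose them onto the base."""
    return type("ComposedStack", (*features, base), {})

BoundedLoggingStack = compose(BoundedFeature, LoggingFeature)
s: Stack[int] = BoundedLoggingStack()    # stage 2: fix the element type for this use
s.push(42)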

Book ChapterDOI
11 Sep 2006
TL;DR: This paper presents an algorithm that minimizes the communication cost by performing the group-by operation before redistribution where only tuples that will be present in the join result are redistributed, thus reducing the Input/Output cost of join intermediate results.
Abstract: SQL queries involving join and group-by operations are frequently used in many decision support applications. In these applications, the size of the input relations is usually very large, so the parallelization of these queries is highly recommended in order to obtain a desirable response time. The main drawbacks of the parallel algorithms presented in the literature for this kind of query are that they are very sensitive to data skew and involve expensive communication and Input/Output costs in the evaluation of the join operation. In this paper, we present an algorithm that minimizes the communication cost by performing the group-by operation before redistribution, where only tuples that will be present in the join result are redistributed. In addition, it evaluates the query without the need of materializing the result of the join operation, thus reducing the Input/Output cost of join intermediate results. The performance of this algorithm is analyzed using the scalable and portable BSP (Bulk Synchronous Parallel) cost model, which predicts a near-linear speed-up even for highly skewed data.
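
A toy, single-process simulation of the core idea, aggregating locally before redistribution and shipping only tuples whose keys can appear in the join result, is sketched below. It ignores the BSP cost model and the skew handling of the actual algorithm; the relation fragments, the semi-join filter s_keys and the function names are all invented for illustration.

from collections import defaultdict

def local_preaggregate(fragment, join_keys):
    """Group-by locally BEFORE redistribution; drop tuples whose key cannot join."""
    partial = defaultdict(int)
    for key, val in fragment:
        if key in join_keys:             # only tuples that will be present in the join result
            partial[key] += val          # local aggregation shrinks the volume to redistribute
    return partial

def redistribute(partials, num_procs):
    """Hash-partition the already aggregated tuples among the processors."""
    buckets = [defaultdict(int) for _ in range(num_procs)]
    for partial in partials:
        for key, val in partial.items():
            buckets[hash(key) % num_procs][key] += val
    return buckets

# two fragments of relation R as (key, value) pairs, and the set of join keys occurring in S
fragments = [[("a", 1), ("b", 2), ("a", 3)], [("b", 4), ("c", 5)]]
s_keys = {"a", "b"}                      # "c" never joins, so it is never sent

partials = [local_preaggregate(f, s_keys) for f in fragments]
print(redistribute(partials, num_procs=2))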

Book ChapterDOI
11 Sep 2006
TL;DR: This paper identifies main strategic (architectural), tactical (engineering), and operational (managerial) imperatives for building adaptiveness into solutions resulting from integration projects.
Abstract: Whether application integration is internal to the enterprise or takes the form of external Business-to-Business (B2B) automation, the main integration challenge is similar – how to ensure that the integration solution has the quality of adaptiveness (i.e. it is understandable, maintainable, and scalable)? This question is hard enough for stand-alone application developments, let alone integration developments in which the developers may have little control over participating applications. This paper identifies main strategic (architectural), tactical (engineering), and operational (managerial) imperatives for building adaptiveness into solutions resulting from integration projects.


Book ChapterDOI
11 Sep 2006
TL;DR: A data mining approach for learning probabilistic user behavior models from database usage logs is investigated, based on a combination of a decision tree classification algorithm and an empirical time-dependent feature map motivated by potential functions theory.
Abstract: The problem of user behavior modeling arises in many fields of computer science and software engineering. In this paper we investigate a data mining approach for learning probabilistic user behavior models from database usage logs. We propose a procedure for translating database traces into a representation suitable for applying data mining methods. However, most existing data mining methods rely on the order of actions and ignore the time intervals between actions. To avoid this problem we propose a novel method based on a combination of a decision tree classification algorithm and an empirical time-dependent feature map, motivated by potential functions theory. The performance of the proposed method was experimentally evaluated on real-world data. The comparison with existing state-of-the-art data mining methods has confirmed the outstanding performance of our method in predictive user behavior modeling and has demonstrated competitive results in anomaly detection.
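
As a rough sketch of the kind of pipeline described above, the code below maps a timestamped database-action trace to a fixed-length vector with an exponential time decay (a potential-function-style weighting; the kernel and the decay constant are arbitrary choices made here) and trains an off-the-shelf decision tree on the result. It is not the authors' feature map, and the toy traces are invented.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

ACTIONS = ["SELECT", "INSERT", "UPDATE", "DELETE"]

def session_features(events, now, tau=60.0):
    """Each action type contributes sum(exp(-(now - t) / tau)) over its occurrences,
    so the time intervals, not just the raw counts, influence the feature values."""
    feats = np.zeros(len(ACTIONS))
    for action, t in events:
        feats[ACTIONS.index(action)] += np.exp(-(now - t) / tau)
    return feats

# toy training data: traces labelled with the kind of user that produced them
sessions = [
    ([("SELECT", 0.0), ("SELECT", 30.0), ("SELECT", 55.0)], "analyst"),
    ([("INSERT", 5.0), ("UPDATE", 20.0), ("INSERT", 50.0)], "operator"),
]
X = np.array([session_features(events, now=60.0) for events, _ in sessions])
y = [label for _, label in sessions]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([session_features([("UPDATE", 58.0)], now=60.0)]))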

Book ChapterDOI
11 Sep 2006
TL;DR: This paper proposes a hybrid structured and unstructured topology in order to take advantage of both kinds of networks, and guarantees that if a content item is anywhere in the network, it will be reachable with probability one.
Abstract: Over the Internet today, there has been much interest in emerging Peer-to-Peer (P2P) networks because they provide a good substrate for creating data sharing, content distribution, and application layer multicast applications. There are two classes of P2P overlay networks: structured and unstructured. Structured networks can efficiently locate items, but the searching process is not user friendly. Conversely, unstructured networks have efficient mechanisms to search for content, but the lookup process does not take advantage of the distributed nature of the system. In this paper, we propose a hybrid structured and unstructured topology in order to take advantage of both kinds of networks. In addition, our proposal guarantees that if a content item is anywhere in the network, it will be reachable with probability one. Simulation results show that the behaviour of the network is stable and that the network distributes contents efficiently to avoid network congestion.

Proceedings Article
01 Jan 2006
TL;DR: The experiences and lessons learned by two teams who adopted test driven development methodology for software systems developed at TransCanada are described.
Abstract: The tests needed to prove, verify, and validate a software application are determined before the software application is developed. This is the essence of test driven development, an agile practice built upon sound software engineering principles. When applied effectively, this practice can have many benefits. The question becomes how to effectively adopt test driven development. This paper describes the experiences and lessons learned by two teams who adopted test driven development methodology for software systems developed at TransCanada. The overall success of test driven methodology is contingent upon the following key factors: experienced team champion, well-defined test scope, supportive database environment, repeatable software design pattern, and complementary manual testing. All of these factors and the appropriate test regime will lead to a better chance of success in a test driven development project.

Proceedings Article
01 Jan 2006
TL;DR: The Linguistic Matching component of a schema matching and integration system called SASMINT is the focus of this paper; it makes effective use of NLP techniques for linguistic matching and proposes a weighted usage of several syntactic and semantic similarity metrics.
Abstract: In order to deal with the problem of semantic and schematic heterogeneity in collaborative networks, matching components among database schemas need to be identified and heterogeneity needs to be resolved, by creating the corresponding mappings in a process called schema matching. One important step in this process is the identification of the syntactic and semantic similarity among elements from different schemas, usually referred to as Linguistic Matching. The Linguistic Matching component of a schema matching and integration system, called SASMINT, is the focus of this paper. Unlike other systems, which typically utilize only a limited number of similarity metrics, SASMINT makes an effective use of NLP techniques for the Linguistic Matching and proposes a weighted usage of several syntactic and semantic similarity metrics. Since it is not easy for the user to determine the weights, SASMINT provides a component called Sampler as another novelty, to support automatic generation of weights.
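
The weighted combination of syntactic and semantic similarity can be conveyed with a small, self-contained sketch; the weights, the tiny synonym table and the use of difflib as the string metric are placeholders invented here, not SASMINT's actual metrics or its Sampler-derived weights.

import difflib

SYNONYMS = {("employee", "worker"), ("salary", "wage")}   # stand-in for a thesaurus such as WordNet

def syntactic_sim(a, b):
    """String-level similarity (difflib ratio as a stand-in for Levenshtein, n-grams, etc.)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a, b):
    """Meaning-level similarity; a real system would consult a lexical resource."""
    if a.lower() == b.lower():
        return 1.0
    return 1.0 if tuple(sorted((a.lower(), b.lower()))) in SYNONYMS else 0.0

def combined_sim(a, b, w_syn=0.4, w_sem=0.6):
    """Weighted sum of the syntactic and semantic scores."""
    return w_syn * syntactic_sim(a, b) + w_sem * semantic_sim(a, b)

print(combined_sim("Employee", "Worker"))        # matches mainly through the semantic part
print(combined_sim("EmpName", "EmployeeName"))   # matches mainly through the syntactic part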

Proceedings Article
01 Jan 2006
TL;DR: A development process model will be presented that illustrates, in brief, how these principles can be combined, and a publicly available Web based expert system called Landfill Operation Management Advisor was developed.
Abstract: The Web has become the ubiquitous platform for distributing information and computer services. The tough Web competition, the way people and organizations rely on Web applications, and the increasing user requirements for better services have raised their complexity. Expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. This paper argues that Web engineering and expert systems principles should be combined when developing Web based expert systems. A development process model will be presented that illustrates, in brief, how these principles can be combined. Based on this model, a publicly available Web based expert system called Landfill Operation Management Advisor (LOMA) was developed. In addition, the results of an accessibility evaluation on LOMA – the first ever reported on Web based expert systems – will be presented. Based on this evaluation, some thoughts on accessibility guidelines specific to Web based expert systems will be reported.