Author

Thair Khdour

Other affiliations: Higher Colleges of Technology
Bio: Thair Khdour is an academic researcher from Al-Balqa` Applied University. The author has contributed to research in topics: Software development process & Systems development life cycle. The author has an h-index of 5 and has co-authored 15 publications receiving 123 citations. Previous affiliations of Thair Khdour include Higher Colleges of Technology.

Papers
Journal Article
TL;DR: This paper investigates the state of risk and risk management in the most popular software development process models (i.e. waterfall, v-model, incremental development, spiral, and agile development) and helps project managers adopt the methodology that best suits their projects.
Abstract: Different software development methodologies exist. Choosing the methodology that best fits a software project depends on several factors. One important factor is how risky the project is. Another is the degree to which each methodology supports risk management. Indeed, the literature is rich in studies that compare the currently available software development process models from different perspectives. In contrast, little effort has been spent on comparing the available process models in terms of their support for risk management. In this paper, we investigate the state of risk and risk management in the most popular software development process models (i.e. waterfall, V-model, incremental development, spiral, and agile development). Such a study is expected to serve several purposes. Technically, it helps project managers adopt the methodology that best suits their projects. It also paves the way for further studies that aim at improving the software development process.

47 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm could achieve an excellent compression ratio without losing data when compared to the standard compression algorithms.
Abstract: The development of multimedia and digital imaging has led to a high quantity of data being required to represent modern imagery. This requires large disk space for storage and a long time for transmission over computer networks, both of which are relatively expensive. These factors demonstrate the need for image compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image, yielding a compact representation of the image and thereby reducing its storage and transmission time requirements. The key idea is to remove the redundancy present within an image to reduce its size without affecting its essential information. In this paper we are concerned with lossless image compression. Our proposed approach is a mix of a number of existing techniques and works as follows: first, we apply the well-known Lempel-Ziv-Welch (LZW) algorithm to the image at hand. The output of the first step is forwarded to the second step, where the Bose, Chaudhuri and Hocquenghem (BCH) error correction and detection algorithm is used. To improve the compression ratio, the proposed approach applies the BCH algorithm repeatedly until "inflation" is detected. The experimental results show that the proposed algorithm achieves an excellent compression ratio without losing data when compared to standard compression algorithms.
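The control flow described above (LZW first, then repeated BCH passes until the output stops shrinking) can be sketched as follows. This is a minimal illustration and not the authors' implementation: the bch_round() helper is a hypothetical placeholder for the paper's BCH step, and the 24-bit code packing is an arbitrary choice.

```python
# Minimal control-flow sketch of the LZW-then-BCH pipeline; bch_round() is a
# hypothetical placeholder, not the paper's actual BCH encoder.

def lzw_compress(data: bytes) -> list:
    """Plain dictionary-based LZW over a byte string, returning a list of codes."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(table[w])
    return codes


def bch_round(stream: bytes) -> bytes:
    """Hypothetical single BCH-based recoding pass (stand-in for the paper's step)."""
    raise NotImplementedError


def compress(image_bytes: bytes) -> bytes:
    # Naive 24-bit packing of the LZW codes, then repeated BCH passes.
    stream = b"".join(c.to_bytes(3, "big") for c in lzw_compress(image_bytes))
    while True:
        candidate = bch_round(stream)
        if len(candidate) >= len(stream):  # inflation detected: keep the shorter stream
            return stream
        stream = candidate
```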

29 citations

Journal ArticleDOI
TL;DR: This paper presents a comprehensive theoretical study of the major risk factors that threaten each of the SDLC phases, producing an exhaustive list of 100 risk factors.
Abstract: Each phase of the Software Development Life Cycle (SDLC) is vulnerable to different types of risk factors. Identifying and understanding these risks is a preliminary stage for managing them successfully. This paper presents a comprehensive theoretical study of the major risk factors that threaten each of the SDLC phases. An exhaustive list of 100 risk factors was produced; it reflects the most frequently occurring risk factors that are common to most software development projects.

20 citations

Proceedings ArticleDOI
20 Apr 2010
TL;DR: This paper proposes a semantic-based Web service registry filtering mechanism that narrows down the set of Web service descriptions to be checked in detail to only the advertisements relevant to the client request, together with an extension to the Web Ontology Language for Services (OWL-S) that makes such semantic filtering possible.
Abstract: One of the major challenges of the Service-Oriented Computing (SOC) paradigm is Web service discovery, also known as matchmaking. The mechanisms used to match Web services can still be improved, and considering the semantics of the Web service descriptions is a must for automating the process of discovering Web services. Current Web service matchmaking approaches check the capabilities of the requested Web service against the capabilities of all advertised Web services in the registry. As the number of advertised Web services in the registry can be huge, checking all of them against a single client query can be time consuming. In this paper, we propose a semantic-based Web service registry filtering mechanism that narrows down the number of Web service descriptions to be checked in detail to only the advertisements relevant to the client request. Our proposed filtering mechanism picks the advertisements that are relevant to the client request and ignores, at an early stage and before checking the details of the descriptions, the advertisements that cannot satisfy it. We also propose an extension to the Web Ontology Language for Services (OWL-S). With this extension, filtering the Web service registry based on the semantics of the available descriptions becomes possible.
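To make the two-stage idea concrete, here is a small illustrative sketch, not the paper's OWL-S extension: a cheap semantic filter keeps only the advertisements that share at least one ontology concept with the request, so the expensive matchmaker only examines the survivors. The data model and the is_relevant() rule are assumptions for illustration.

```python
# Illustrative two-stage discovery: a coarse semantic filter prunes the registry
# before the detailed capability matchmaker runs. The concept-set data model and
# is_relevant() rule are assumptions, not the paper's OWL-S extension.

from dataclasses import dataclass, field


@dataclass
class ServiceDescription:
    name: str
    concepts: set = field(default_factory=set)  # ontology concepts annotating the service


def is_relevant(request: ServiceDescription, advert: ServiceDescription) -> bool:
    # Keep an advertisement only if it shares at least one concept with the request.
    return bool(request.concepts & advert.concepts)


def discover(request, registry, detailed_match):
    # detailed_match(request, advert) stands for the expensive matchmaker;
    # it only ever sees the pruned candidate list.
    candidates = [ad for ad in registry if is_relevant(request, ad)]
    return [ad for ad in candidates if detailed_match(request, ad)]
```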

8 citations

Proceedings ArticleDOI
19 Nov 2019
TL;DR: This paper proposes a modern architecture that utilizes a multi-agent system not only to help in choosing the best resources but also to generate the negotiation protocol between grid users and providers, so as to fully exploit the capacity of grid computing.
Abstract: Grid computing refers to systems and applications that integrate and manage distributed services and resources to solve scientific or industrial problems. Over the past few years, developers have realized the need for an automatic mechanism that can be used to harness the grid's power, improve its operation, and enhance its productivity. Multi-agent systems are generally employed to solve problems using decentralized techniques, in which a set of agents collaborates to resolve a challenge; hence, they are considered suitable for open systems that change frequently. In this paper, we propose a modern architecture that utilizes a multi-agent system not only to help in choosing the best resources but also to generate the negotiation protocol between grid users and providers, so as to fully exploit the capacity of grid computing. Moreover, the proposed architecture can play a central role in monitoring the list of users' jobs as they are being handled.
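As a rough illustration of the agent roles mentioned above (provider agents that make offers, a broker that selects an acceptable resource and tracks job status), the toy sketch below shows one possible shape of such a system. All class names, the offer/accept exchange, and the monitoring dictionary are assumptions, not the paper's architecture or negotiation protocol.

```python
# Toy sketch of broker/provider agents for grid resource selection; names and
# the negotiation rule are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Offer:
    provider: str
    price: float
    cpus: int


class ProviderAgent:
    def __init__(self, name: str, cpus: int, price_per_cpu: float):
        self.name, self.cpus, self.price_per_cpu = name, cpus, price_per_cpu

    def make_offer(self, required_cpus: int):
        # Offer only if enough capacity is available.
        if self.cpus >= required_cpus:
            return Offer(self.name, self.price_per_cpu * required_cpus, required_cpus)
        return None


class BrokerAgent:
    """Selects a resource on the user's behalf and monitors submitted jobs."""

    def __init__(self, providers):
        self.providers = providers
        self.jobs = {}  # job id -> status, a stand-in for job monitoring

    def negotiate(self, job_id: str, required_cpus: int, budget: float):
        offers = [o for p in self.providers if (o := p.make_offer(required_cpus))]
        affordable = [o for o in offers if o.price <= budget]
        if not affordable:
            self.jobs[job_id] = "rejected"
            return None
        best = min(affordable, key=lambda o: o.price)
        self.jobs[job_id] = f"scheduled on {best.provider}"
        return best
```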

6 citations


Cited by

Journal ArticleDOI
TL;DR: This paper proposes a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD), which simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework.
Abstract: Recent years have witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs suffer from huge data sizes and a mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most conventional image encoding algorithms focus mainly on the spatial dimensions and do not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed into a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. Extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms traditional image compression approaches and other tensor-based methods.
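The low-rank step of PLTD can be sketched with plain NumPy: a cluster of similar patches is stacked into a fourth-order tensor and approximated with a truncated Tucker (HOSVD) decomposition, giving a small core tensor plus one dictionary matrix per mode. The ranks and the random example cluster below are placeholders; the paper's patch extraction, clustering, and optimization are not reproduced here.

```python
# NumPy sketch of a truncated Tucker (HOSVD) approximation of one patch cluster.
# Ranks and data are placeholder assumptions, not the paper's algorithm.

import numpy as np


def unfold(tensor, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)


def hosvd(tensor, ranks):
    """Truncated HOSVD: per-mode dictionary (factor) matrices plus a core tensor."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for mode, u in enumerate(factors):
        # Project the current mode onto its truncated basis.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors


def reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t


# Example: 50 similar 8x8 patches with 100 spectral bands form one cluster tensor.
cluster = np.random.rand(8, 8, 100, 50)               # (height, width, bands, patches)
core, dictionaries = hosvd(cluster, ranks=(4, 4, 10, 10))
approx = reconstruct(core, dictionaries)              # low-rank approximation of the cluster
```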

138 citations

Journal ArticleDOI
TL;DR: This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks, where the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key; the performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio.
Abstract: In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the image's perceptual and diagnostic quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces its payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
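A minimal sketch of the payload idea follows: concatenate the ROI bytes with the secret key, compress the result losslessly, and rank candidate codecs by compression ratio. Python's standard zlib, bz2, and lzma modules stand in for the comparison here, since LZW itself (the paper's chosen codec) is not in the standard library; the payload layout is an assumption.

```python
# Sketch: build the ROI + key watermark payload and rank standard lossless codecs
# by compression ratio. zlib/bz2/lzma are stand-ins; the paper uses LZW.

import bz2
import lzma
import zlib


def build_watermark(roi_pixels: bytes, secret_key: bytes) -> bytes:
    return roi_pixels + secret_key


def rank_codecs(payload: bytes):
    compressed = {
        "zlib": zlib.compress(payload, 9),
        "bz2": bz2.compress(payload, 9),
        "lzma": lzma.compress(payload),
    }
    # Compression ratio = original size / compressed size (higher is better).
    ratios = {name: len(payload) / len(blob) for name, blob in compressed.items()}
    best = max(ratios, key=ratios.get)
    return best, ratios
```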

74 citations

Journal ArticleDOI
TL;DR: Data processing with multiple domains is an important concept in any platform; it deals with multimedia and textual information and focuses on a structured or unstructured approach to data processing.
Abstract: Data processing with multiple domains is an important concept in any platform; it deals with multimedia and textual information. Where textual data processing focuses on a structured or unstructured...

45 citations

Journal ArticleDOI
TL;DR: The purpose of image compression is to decrease the redundancy and irrelevance of image data so that it can be recorded or sent in an efficient form, which decreases transmission time over the network and raises the transmission speed.
Abstract: Image compression is an application of data compression that encodes the actual image with fewer bits. The purpose of image compression is to decrease the redundancy and irrelevance of image data so that it can be recorded or sent in an efficient form. Hence, image compression decreases transmission time over the network and raises the transmission speed. In the lossless technique of image compression, no data are lost during compression. Various techniques are used to address these issues. Two questions naturally arise: how to perform image compression, and which type of technology to use. For this reason, two types of approaches are commonly described, namely lossless and lossy image compression. These techniques are simple in their application and consume very little memory. An algorithm has also been introduced and applied to compress images and decompress them back using the Huffman encoding technique.
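Since the abstract names Huffman encoding as the lossless scheme, here is a minimal Huffman encoder over pixel values as an illustration; a real image codec would also have to store the code table and pack the bit string into bytes, and none of this is taken from the paper itself.

```python
# Minimal Huffman encoder over pixel values; illustrative only, not the paper's code.

import heapq
from collections import Counter


def huffman_codes(pixels):
    """Map each pixel value to a prefix-free bit string based on its frequency."""
    freq = Counter(pixels)
    if len(freq) == 1:                       # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]


def encode(pixels):
    codes = huffman_codes(pixels)
    return "".join(codes[p] for p in pixels), codes
```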

36 citations