Showing papers by "National University of Computer and Emerging Sciences published in 2005"


Proceedings ArticleDOI
01 Aug 2005
TL;DR: Different quality parameters related to the release of a product are analyzed, the chosen model that supports those parameters is discussed, and a controlled environment is tested through the use of this model.
Abstract: Software release management is a key technology for delivering a project or product to the customer. The success of any software product depends on how carefully it is released to the customer. Traditional SCM systems do not guarantee handling of the release management issues of a complex system. Complex systems involve complex databases and N-tier architectures, to name a few concerns, and each kind of application requires special technical consideration from a release perspective. In this paper, we analyze different quality parameters related to the release of a product. These parameters should be handled through a software release model. A model that supports those parameters is chosen and discussed, and a controlled environment is tested for those parameters through the use of this model.

10 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: This work is novel in terms of scalability, item search order, and its two projection techniques (horizontal and vertical), and it presents a maximal algorithm using this hybrid database representation approach.
Abstract: In this paper we present a novel hybrid (array-based layout and vertical bitmap layout) database representation approach for mining complete maximal frequent itemsets (MFI) on sparse and large datasets. Our work is novel in terms of scalability, item search order, and its two projection techniques (horizontal and vertical). We also present a maximal algorithm using this hybrid database representation approach. Experimental results on real and sparse benchmark datasets show that our approach outperforms previous state-of-the-art maximal algorithms.
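
The paper's hybrid layout is not reproduced in the abstract; as a rough illustration of its vertical-bitmap ingredient, the following sketch encodes each item's occurrences as one bit per transaction and counts itemset support with bitwise AND. All names here are hypothetical, not taken from the paper.

    # Illustrative sketch only: a vertical bitmap layout for support counting,
    # one ingredient of the hybrid representation described in the paper.
    transactions = [
        {"a", "b", "c"},
        {"a", "c"},
        {"b", "d"},
        {"a", "b", "c", "d"},
    ]

    def bitmap_for(item):
        # One bit per transaction: bit i is set iff transaction i contains item.
        bits = 0
        for i, t in enumerate(transactions):
            if item in t:
                bits |= 1 << i
        return bits

    def support(itemset):
        # Intersecting bitmaps (bitwise AND) yields the transactions that
        # contain every item; the popcount gives the support.
        bits = ~0
        for item in itemset:
            bits &= bitmap_for(item)
        return bin(bits & ((1 << len(transactions)) - 1)).count("1")

    print(support({"a", "c"}))  # 3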

8 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: An improved parallel thinning algorithm is presented that can be easily extended to cursive and non-cursive languages alike by introducing a modified set of preservation rules via pixel arrangement grid templates, making it both noise-robust and fast.
Abstract: One of the most crucial phases in the process of text recognition is thinning of characters to a single-pixel notation. The success measure of any thinning algorithm lies in its ability to retain the original character shape; such skeletons are also called unit-width skeletons. No agreed universal thinning algorithm exists to produce character skeletons for different languages, although thinning is a pre-process for all subsequent phases of character recognition such as segmentation, feature extraction, and classification. Written natural languages can be classified by their intrinsic properties as cursive or non-cursive. Thinning algorithms, when applied to cursive languages such as Arabic, Sindhi, or Urdu, face greater complexity due to those languages' non-isolated boundaries and complex character shapes. Such algorithms can easily be extended to parallel implementations. Selecting certain pixel arrangement grid templates over other pixel patterns for generating character skeletons exploits parallel programming; the key to success is determining the right pixel arrangement grids, which can reduce the cost of the iterations required to evaluate each pixel for thinning or preservation. This paper presents an improved parallel thinning algorithm that can be easily extended to cursive and non-cursive languages alike by introducing a modified set of preservation rules via pixel arrangement grid templates, making it both noise-robust and fast. Experimental results show its success on cursive languages like Arabic, Sindhi, and Urdu, on non-cursive languages like English and Chinese, and even on numerals, making it arguably a universal thinning algorithm.
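
The modified preservation rules themselves are not given in the abstract; as a point of reference for the family of 3x3 grid-template tests they refine, here is a minimal sketch of one sub-iteration of the classic Zhang-Suen parallel thinning pass (a standard algorithm, not the paper's own rules):

    # Sketch of one Zhang-Suen-style parallel thinning sub-iteration on a
    # binary image (list of lists of 0/1). The paper modifies preservation
    # tests of this general kind, which are not reproduced here.
    def thinning_pass(img, first_subiter=True):
        rows, cols = len(img), len(img[0])
        to_delete = []
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                if img[y][x] != 1:
                    continue
                # Neighbors P2..P9, clockwise from the pixel above.
                p = [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                     img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]
                b = sum(p)                                   # black neighbors
                a = sum(p[i] == 0 and p[(i + 1) % 8] == 1    # 0->1 transitions
                        for i in range(8))
                if first_subiter:
                    cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                else:
                    cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    to_delete.append((y, x))
        # "Parallel": all flagged pixels are removed at once, so every
        # decision in this pass was based on the same input image.
        for y, x in to_delete:
            img[y][x] = 0
        return bool(to_delete)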

7 citations


Proceedings Article
14 Jul 2005
TL;DR: Experimental results show that the proposed adaptive multithresholding technique (AMTT) reduces the color content while keeping information loss to a minimum, and hence the dissimilarity between the original and binarized image is reduced.
Abstract: Applying a binarization technique to a colored image can yield an image quite different from the original one if multi-thresholding is used. The proposed solution is an adaptive multi-thresholding technique (AMTT) for binarization of color images that adapts to the nature of the image. The technique improves the perception of the binarized images based on color hue, and hence the dissimilarity between the original and binarized image is reduced. It compensates for differences in illumination and shade by including information content in the thresholding calculation. AMTT calculates information loss in the image with respect to human perception on the basis of color hue. Experimental results show that the proposed AMTT reduces the color content while keeping information loss to a minimum.
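
AMTT's information-loss calculation is not specified in the abstract; purely as an assumed illustration of hue-based thresholding for color binarization, the sketch below builds a hue histogram and selects a threshold with Otsu's criterion. The use of Otsu and every name below are assumptions, not the paper's method:

    # Rough sketch of hue-based binarization: build a hue histogram and pick
    # a threshold with Otsu's criterion. AMTT's actual adaptive,
    # information-loss-driven multi-thresholding is not reproduced here.
    import colorsys

    def binarize_by_hue(pixels):
        # pixels: iterable of (r, g, b) with components in 0..255.
        hues = [colorsys.rgb_to_hsv(r/255, g/255, b/255)[0] for r, g, b in pixels]
        hist = [0] * 64
        for h in hues:
            hist[min(int(h * 64), 63)] += 1
        total = len(hues)
        grand_mean = sum(i * c for i, c in enumerate(hist)) / total
        best_t, best_var = 0, -1.0
        cum, cum_mean = 0, 0.0
        for t in range(64):
            cum += hist[t]
            cum_mean += t * hist[t]
            if cum == 0 or cum == total:
                continue
            w0 = cum / total
            m0 = cum_mean / cum
            m1 = (grand_mean * total - cum_mean) / (total - cum)
            var_between = w0 * (1 - w0) * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        threshold = best_t / 64
        return [1 if h > threshold else 0 for h in hues]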

7 citations


15 Sep 2005
TL;DR: A server-independent agent architecture is proposed and implemented to perform intrusion detection and to check for the weaknesses of a target host that attackers normally exploit to harm the network.
Abstract: Mobile agents are an effective choice for many research and application areas for several reasons, including improvements in the latency and bandwidth of client-server applications, reduced network load, and threat assessment. Intrusion detection systems and vulnerability assessment systems are used to monitor network traffic and to measure and prioritize the risks associated with network- and host-based systems. These systems monitor suspicious activities and alert the system or network administrator; all inbound and outbound traffic can be monitored either for the whole network or for an individual host. Amalgamating the two technologies, i.e. mobile agents and network security systems, provides many benefits to administrators, as agents can autonomously roam and assess the network and its systems. Though many agent models are available to provide agent management services, they are mostly server/platform dependent. These models may fail when the intended host is to be targeted for security assessment and the supporting compatible environment, i.e. the server, is not available there. This paper surveys some server-dependent agent models, implements them, and tests the results; it also points out the flaws of different server-dependent agent models. We also propose and implement a server-independent agent architecture to perform intrusion detection and to check for the weaknesses of a target host that attackers normally exploit to harm the network.

7 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: This work is an attempt to resolve two fundamental and pressing security vulnerabilities in these multihop MANETs: the identification of malicious nodes, and the design of a robust security model that can be implemented, even in a hostile environment, in the presence of a number of non-colluding nodes.
Abstract: The traditional stipulation of security services in the mobile ad hoc network (MANET) context faces a set of challenges specific to this new technology. While a multihop wireless mobile ad hoc network, as an extended MANET, can be formed without any pre-existing infrastructure and with minimum setup and administration costs, security is undeniably a major "roadblock" to the commercial application of this technology at production scale. Unlike other commercial-grade wireless networking technologies and wired networks, which rely mainly on existing infrastructure and favor urban areas with varying degrees of in-place security infrastructure and centralized administration, these multihop MANETs are deployed to remote geographical locations, even inside hostile territories. The security challenges of such networks therefore include not just the security of data and communication channels, but the physical security of the hardware resources as well. This work is an attempt to resolve two fundamental and pressing security vulnerabilities in these multihop MANETs: the identification of malicious nodes, and the design of a robust security model that can be implemented even in a hostile environment in the presence of a number of non-colluding nodes. The solutions are proposed by exploiting mobile agent technologies. The only requirement is that any nodes that wish to communicate securely must simply establish a priori a shared set of mobile agents to be used subsequently by their communicating protocol suites.

6 citations


Proceedings ArticleDOI
27 Aug 2005
TL;DR: The talk will elaborate different metrics that may be used in the requirement elicitation process and hence may help improve the requirement document, resulting in improved productivity in the overall software development.
Abstract: The current software engineering practices observed in the local software industry generally lack the software metrics planning part. The specific planning required to improve software practices and to gauge the productivity of individuals and of the different phases of the software development life cycle (SDLC) is generally missing. These organizations do not have any plan to gauge and control business development issues and improve their productivity. During the development phase, most of these software houses do collect data about different aspects of the project, such as project planning, requirement specification, testing, and bugs found. However, these organizations generally do not understand the importance of such data elements: how to maintain the data, how to build a measurement plan, and, most importantly, how to use this data to improve their processes and productivity. In fact, most of the time the available data gets destroyed and is hence never used. The current talk will discuss the need for and importance of software metrics and the role of metrics in different phases of the SDLC, particularly the requirement phase. Requirement elicitation is considered the most important aspect of software development and, if not handled properly, may cause severe productivity and quality issues. The talk will elaborate different metrics that may be used in the requirement elicitation process and hence may help improve the requirement document, resulting in improved productivity in the overall software development.

5 citations


Book ChapterDOI
11 Sep 2005
TL;DR: This paper compares two techniques for biclustering of gene expression data, a recent technique based on the crossing minimization paradigm and the Order Preserving Sub Matrix technique, with the main evaluation parameter being the quality of the results in the presence of noise in gene expression data.
Abstract: Production of a gene expression chip involves a large number of error-prone steps that lead to a high level of noise in the corresponding data. Given the variety of available biclustering algorithms, one of the problems faced by biologists is the selection of the algorithm most appropriate for a given gene expression data set. This paper compares two techniques for biclustering of gene expression data: a recent technique based on the crossing minimization paradigm, and the Order Preserving Sub Matrix (OPSM) technique. The main evaluation parameter is the quality of the results in the presence of noise in gene expression data. The evaluation is based on simulated data as well as real data. Several limitations of OPSM were exposed during the analysis, the key one being its susceptibility to noise.
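
Neither algorithm is reproduced here; to make the noise evaluation concrete, the sketch below embeds an additive bicluster in simulated data, perturbs it with Gaussian noise, and scores the bicluster with the mean squared residue, a standard bicluster quality measure that is assumed here rather than taken from the paper:

    # Sketch of a noise-robustness test bed: embed an additive bicluster in
    # random data, add noise, and score it by mean squared residue (lower
    # is better). Assumed setup, not the paper's own evaluation code.
    import random

    def mean_squared_residue(m, rows, cols):
        sub = [[m[r][c] for c in cols] for r in rows]
        n_r, n_c = len(rows), len(cols)
        row_mean = [sum(row) / n_c for row in sub]
        col_mean = [sum(sub[i][j] for i in range(n_r)) / n_r for j in range(n_c)]
        all_mean = sum(row_mean) / n_r
        return sum((sub[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
                   for i in range(n_r) for j in range(n_c)) / (n_r * n_c)

    random.seed(0)
    m = [[random.gauss(0, 1) for _ in range(30)] for _ in range(30)]
    rows, cols = range(5), range(5)
    for i in rows:                       # embed an additive bicluster
        for j in cols:
            m[i][j] = i + j
    for noise in (0.0, 0.5, 1.0):        # increasing noise levels
        noisy = [[v + random.gauss(0, noise) for v in row] for row in m]
        print(noise, round(mean_squared_residue(noisy, rows, cols), 3))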

4 citations


Proceedings Article
17 Nov 2005
TL;DR: An extension to the Learning Real-Time A* (LRTA*) algorithm is presented that utilizes a color-coded coordination scheme; the results suggest that the proposed modification yields an improvement over LRTA*.
Abstract: In this paper we present an extension to the Learning Real-Time A* (LRTA*) algorithm that utilizes a color-coded coordination scheme. The new algorithm (C3LRTA*) has been applied to solve randomly generated mazes with multiple problem solvers. Our results suggest that the proposed modification yields an improvement over LRTA*. Multiple agents coordinate their actions by using color codes, a heritage left by agents that have previously traversed the current state. We have evaluated this coordination scheme on a large number of test cases with random obstacles and varying obstacle ratios. Experimentation has shown that C3LRTA* performs better than LRTA*. In addition, as the number of agents and/or the obstacle ratio increases, solution quality improves compared to LRTA*.
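
The color-coding rules are not detailed in the abstract; the sketch below shows the standard LRTA* value update that C3LRTA* builds on, with a hypothetical color-based tie-breaker standing in for the paper's coordination scheme:

    # Core LRTA* step with a hypothetical color-based tie-breaker: among
    # the lowest-cost successors, prefer states not yet marked by another
    # agent. The actual C3LRTA* coordination rules are not reproduced here.
    def lrta_star_step(state, successors, h, cost, colors, my_color):
        # successors: callable state -> iterable of neighbor states
        # h: dict of heuristic estimates, mutated in place (the "learning")
        # cost: callable (s, s2) -> edge cost
        scored = [(cost(state, s2) + h.get(s2, 0), s2) for s2 in successors(state)]
        best_f = min(f for f, _ in scored)
        h[state] = max(h.get(state, 0), best_f)   # LRTA* update rule
        candidates = [s2 for f, s2 in scored if f == best_f]
        # Hypothetical coordination: prefer an uncolored successor, so
        # agents spread out instead of retracing each other's paths.
        fresh = [s2 for s2 in candidates if s2 not in colors]
        nxt = (fresh or candidates)[0]
        colors[nxt] = my_color
        return nxt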

4 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: C3LRTA* is effective in both search time and solution quality and can be made more efficient if the number of agents and/or obstacle ratio is increased.
Abstract: In this paper, we have modified the original LRTA* and C3LRTA* algorithms to use multiple targets, instead of a single target, in randomly generated mazes. Both modified algorithms have been applied to solve randomly generated mazes with multiple targets. We have evaluated both modified algorithms on a large number of test cases with random obstacles and varying obstacle ratios. Through simulation experiments, we have shown that C3LRTA* is effective in both search time and solution quality. In addition, the strategy used in C3LRTA* becomes more effective as the number of agents and/or the obstacle ratio is increased.
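
The abstract gives no formula for the multi-target case; one natural formulation (assumed here, not quoted from the paper) is to drive the heuristic from the nearest remaining target:

    # Minimal sketch: with several targets on a grid, an admissible
    # heuristic is the Manhattan distance to the nearest remaining target.
    # Assumed formulation; the paper's exact scheme is not shown.
    def multi_target_h(state, targets):
        x, y = state
        return min(abs(x - tx) + abs(y - ty) for tx, ty in targets)

    targets = {(0, 9), (9, 0), (5, 5)}
    print(multi_target_h((2, 2), targets))   # 6, via target (5, 5)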

3 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: An improvement to the Multisync system that allows for hash-based comparisons of files is presented, and it is conjectured that the proposed approach to file synchronization results in a performance improvement.
Abstract: The file synchronization problem has been solved using many different approaches. One recent approach is to use intelligent agents to synchronize files; Multisync is such a system. The original system implemented file synchronization across machines using agents. However, the system did not cater for virus attacks or for files with different timestamps but identical contents, which can result in extraneous synchronization. The current paper presents an improvement to the system by allowing for hash-based comparisons of files. Using MD5, the paper demonstrates that a marked improvement is achieved in file synchronization, and it is conjectured that the proposed approach to file synchronization results in a performance improvement. Experimental results and statistics are also presented.
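
The Multisync internals are not shown in the abstract, but the hash comparison itself is straightforward and can be sketched with Python's standard hashlib; the function names are illustrative:

    # Sketch of the hash-based comparison: two files are treated as already
    # synchronized when their MD5 digests match, even if timestamps differ.
    import hashlib

    def md5_of(path, chunk_size=65536):
        digest = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def needs_sync(path_a, path_b):
        # Content comparison avoids extraneous copies for files whose
        # timestamps differ but whose bytes are identical.
        return md5_of(path_a) != md5_of(path_b)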

Proceedings ArticleDOI
01 Dec 2005
TL;DR: A model is proposed to embed primitive intelligence in network intrusion detection systems, based on Quinlan's ID3 algorithm for decision tree construction and inductive learning, that can be very useful for detecting unknown attacks.
Abstract: The classifiers of contemporary network intrusion detection systems do not use any inductive learning technique to draw inferences from the available independent data and arrive at a classification of unknown threats. This makes the systems vulnerable to new attacks. The author proposes a model to embed primitive intelligence in network intrusion detection systems. The model is based on Quinlan's ID3 algorithm for decision tree construction and inductive learning. It can be very useful for detecting unknown attacks because it builds an optimized decision tree from the available training set and can draw inferences from known (test) data to classify unknown patterns by adding new rules to the rule set.
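
As a reminder of the ID3 machinery the model relies on, the sketch below computes entropy and information gain to choose a split attribute over labeled connection records; the record fields are invented for illustration:

    # Sketch of ID3's attribute selection: pick the attribute with the
    # highest information gain over labeled records. The record fields
    # ("proto", "flag", "label") are invented for illustration.
    from math import log2
    from collections import Counter

    def entropy(records, target="label"):
        counts = Counter(r[target] for r in records)
        total = len(records)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def info_gain(records, attr, target="label"):
        total = len(records)
        remainder = 0.0
        for value in {r[attr] for r in records}:
            subset = [r for r in records if r[attr] == value]
            remainder += len(subset) / total * entropy(subset, target)
        return entropy(records, target) - remainder

    records = [
        {"proto": "tcp", "flag": "SYN", "label": "attack"},
        {"proto": "tcp", "flag": "ACK", "label": "normal"},
        {"proto": "udp", "flag": "SYN", "label": "attack"},
        {"proto": "udp", "flag": "ACK", "label": "normal"},
    ]
    best = max(("proto", "flag"), key=lambda a: info_gain(records, a))
    print(best)  # "flag" splits the labels perfectly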

Proceedings ArticleDOI
01 Dec 2005
TL;DR: A software architecture reconstruction methodology based on pattern mining from the source code using an expert system is presented, which extracts the design decisions, which lead to the quality attributes of the system.
Abstract: Software architecture reconstruction is required for many purposes. There exist many approaches to reconstructing the architecture of a system; however, they do not extend the extraction process up to the design decisions and the quality attributes of the system. This paper presents a software architecture reconstruction methodology based on pattern mining from the source code using an expert system. From the reconstructed architecture, we extract the design decisions that lead to the quality attributes of the system.

Proceedings ArticleDOI
01 Dec 2005
TL;DR: It is indicated that recent methods can be efficiently exploited and, given such drastic improvements, how physically based methods can be more practically adopted for 3D games and in the animation industry.
Abstract: Earlier, hardware-accelerated stenciled shadow volume (SSV) techniques had not been widely adopted by 3D games and applications, due in large part to the lack of robustness of the described techniques. This situation persisted despite widely available hardware support. Since this perception has changed, we aim to provide an exhaustive study; in particular we discuss the advantages, limitations, rendering quality, and cost of each algorithm. Stencil shadow volumes remain popular for their simplicity, adaptability to different approaches, hybrid nature, robustness, support by graphics hardware, and real-time physically based rendering capabilities, which make them highly in demand in the graphics and gaming industries. We also address the rendering quality and cost of recent algorithms. Finally, we indicate that recent methods can be efficiently exploited and, given such drastic improvements, how physically based methods can be more practically adopted for 3D games and in the animation industry, discussing the optimizations, workflow, and scene management employed in commercial 3D engines.
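
For readers unfamiliar with the technique, the sketch below outlines the widely documented z-fail ("Carmack's reverse") stencil passes using PyOpenGL; the render_* callables are placeholders for an application's own drawing code, and the sketch assumes a GL context with a stencil buffer:

    # Sketch of the z-fail stencil shadow volume passes. Not taken from the
    # paper; this is the standard multi-pass structure the survey discusses.
    from OpenGL.GL import *

    def draw_with_stencil_shadows(render_scene_ambient, render_shadow_volumes,
                                  render_scene_lit):
        # Pass 1: lay down depth and ambient color for the whole scene.
        render_scene_ambient()

        # Pass 2: rasterize shadow volumes into the stencil buffer only.
        glEnable(GL_STENCIL_TEST)
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
        glDepthMask(GL_FALSE)
        glDisable(GL_CULL_FACE)
        glStencilFunc(GL_ALWAYS, 0, 0xFF)
        # z-fail: count volume faces that lie behind the scene geometry.
        glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP)
        glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP)
        render_shadow_volumes()

        # Pass 3: add light only where stencil == 0, i.e. outside volumes.
        glEnable(GL_CULL_FACE)
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
        glStencilFunc(GL_EQUAL, 0, 0xFF)
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP)
        glDepthFunc(GL_LEQUAL)
        glEnable(GL_BLEND)
        glBlendFunc(GL_ONE, GL_ONE)          # additive light contribution
        render_scene_lit()
        glDisable(GL_BLEND)
        glDepthMask(GL_TRUE)
        glDisable(GL_STENCIL_TEST)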

Proceedings ArticleDOI
01 Dec 2005
TL;DR: This research proposes a new approach based on public- and private-key encryption that enforces more stringent requirements that are much harder for an intruder to break.
Abstract: A wireless network provides a significant advantage over the traditional wired network. The primary advantages are convenience and quick deployment at places with limited support. This benefit is, however, not easy to realize without examining several points that require more stringent analysis. One significant factor is that, due to its very nature, wireless communication is more prone to unauthorized access by a user who happens to be in the vicinity and is able to break the security barriers. Most authentication mechanisms that use a shared-key mechanism carry the risk of losing the key during key distribution or during the authentication process, and the highest risk is present at the access points. Current wireless networks are based on the 802.11b wired equivalent privacy (WEP) protocol, which relies on shared-key authentication. The protocol was originally designed keeping the requirements of a wired network in mind; it works fine under normal conditions, but in special circumstances it fails to provide data security. This research proposes a new approach based on public- and private-key encryption. Once a session is established, a key is generated for that particular session, and this session key is used throughout the communication. The proposed approach is implemented on a simulator and, in our findings, enforces more stringent requirements that are much harder for an intruder to break.
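
The abstract does not detail the handshake; a common shape for such a scheme, assumed here, is hybrid encryption: a random per-session key is exchanged under the access point's RSA public key and then used for symmetric encryption. A sketch with the third-party cryptography package:

    # Sketch of a hybrid handshake of the kind the paper proposes: a random
    # per-session key is wrapped under an RSA public key, then used with
    # AES-GCM. This is an assumed shape, not the paper's exact protocol.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Access point's long-term key pair (generated once, out of band).
    ap_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ap_public = ap_private.public_key()

    # Client side: make a fresh session key and wrap it for the AP.
    session_key = AESGCM.generate_key(bit_length=128)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped = ap_public.encrypt(session_key, oaep)

    # AP side: unwrap, then both ends encrypt traffic with the session key.
    key_at_ap = ap_private.decrypt(wrapped, oaep)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key_at_ap).encrypt(nonce, b"frame payload", None)
    assert AESGCM(session_key).decrypt(nonce, ciphertext, None) == b"frame payload"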

Proceedings ArticleDOI
01 Dec 2005
TL;DR: The system Urdu morphological guesser (UMG) uses morphological guessing on Urdu while utilizing ranking and clues to refine and improve the guesses.
Abstract: Analysis of the morphology of Urdu is a preliminary task for several NLP tasks. Since it is impractical to maintain updated lexicons due to the coinage of new words, morphological guessing is also used. However, guessing depends entirely on affixes, and many undesired answers emerge. Our system, the Urdu morphological guesser (UMG), applies morphological guessing to Urdu while utilizing ranking and clues to refine and improve the guesses. It covers open-class words and enables guessing of part of speech, number, gender, causation, case, and language of origin.
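
UMG's affix tables and ranking heuristics are not given in the abstract; the sketch below only shows the general shape of affix-based guessing with ranked candidates, using a tiny invented suffix table in Latin transliteration:

    # Sketch of affix-driven morphological guessing with ranking: strip
    # known suffixes, emit candidate analyses, order them by suffix weight.
    # The suffix table is a tiny invented stand-in for UMG's real rules.
    SUFFIXES = [
        # (suffix, analysis, weight) -- transliterated, illustrative only
        ("on", {"pos": "noun", "number": "plural", "case": "oblique"}, 0.9),
        ("in", {"pos": "noun", "number": "plural"},                    0.7),
        ("a",  {"pos": "noun", "gender": "masculine"},                 0.5),
        ("i",  {"pos": "noun", "gender": "feminine"},                  0.5),
    ]

    def guess(word):
        candidates = []
        for suffix, analysis, weight in SUFFIXES:
            if word.endswith(suffix) and len(word) > len(suffix):
                stem = word[: -len(suffix)]
                candidates.append((weight, stem, analysis))
        # Rank: higher-weight (more reliable) suffixes come first.
        return sorted(candidates, key=lambda c: c[0], reverse=True)

    for weight, stem, analysis in guess("kitabon"):   # "books", oblique plural
        print(stem, analysis, weight)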

Proceedings ArticleDOI
01 Dec 2005
TL;DR: Eight QAs, seventeen sub-QAs, and thirty-one architectural mechanisms (AMs) are presented, along with the justification for adopting them.
Abstract: Large systems like grids and grid monitoring tools (GMTs) experience frequent revisions and must be analyzed for steadiness, manageability, longer life span, and indispensability. Many tools have been developed for grid monitoring, but no specific tool exists that caters for all required QAs. It has also been observed that some QAs are not properly handled in most of the existing tools. Hence there is a need to compile a list of the necessary QAs and the means to achieve them at the architectural level. Eight QAs, seventeen sub-QAs, and thirty-one architectural mechanisms (AMs) are presented, along with the justification for adopting them.