
Showing papers in "IBM Systems Journal in 2001"


Journal ArticleDOI
TL;DR: The inherent strengths of biometrics-based authentication are outlined, the weak links in systems employing biometric authentication are identified, and new solutions for eliminating these weak links are presented.
Abstract: Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as e-commerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.

1,709 citations


Journal ArticleDOI
TL;DR: It is argued that the social capital resident in communities of practice leads to behavioral changes, which in turn positively influence business performance, and is linked to the basic dimensions of social capital.
Abstract: As organizations grow in size, geographical scope, and complexity, it is increasingly apparent that sponsorship and support of communities of practice--groups whose members regularly engage in sharing and learning, based on common interests--can improve organizational performance. Although many authors assert that communities of practice create organizational value, there has been relatively little systematic study of the linkage between community outcomes and the underlying social mechanisms that are at work. To build an understanding of how communities of practice create organizational value, we suggest thinking of a community as an engine for the development of social capital. We argue that the social capital resident in communities of practice leads to behavioral changes, which in turn positively influence business performance. We identify four specific performance outcomes associated with the communities of practice we studied and link these outcomes to the basic dimensions of social capital. These dimensions include connections among practitioners who may or may not be co-located, relationships that build a sense of trust and mutual obligation, and a common language and context that can be shared by community members. Our conclusions are based on a study of seven organizations where communities of practice are acknowledged to be creating value.

953 citations


Journal ArticleDOI
A. D. Marwick1
TL;DR: The strongest contribution to current solutions is made by technologies that deal largely with explicit knowledge, such as search and classification; this paper serves as an introduction to the subject for the papers in this issue that discuss technology.
Abstract: Selected technologies that contribute to knowledge management solutions are reviewed using Nonaka's model of organizational knowledge creation as a framework. The extent to which knowledge transformation within and between tacit and explicit forms can be supported by the technologies is discussed, and some likely future trends are identified. It is found that the strongest contribution to current solutions is made by technologies that deal largely with explicit knowledge, such as search and classification. Contributions to the formation and communication of tacit knowledge, and support for making it explicit, are currently weaker, although some encouraging developments are highlighted, such as the use of text-based chat, expertise location, and unrestricted bulletin boards. Through surveying some of the technologies used for knowledge management, this paper serves as an introduction to the subject for those papers in this issue that discuss technology.

567 citations


Journal ArticleDOI
Laurence Prusak1
TL;DR: This essay reviews the history of knowledge management and offers insights into what knowledge management means today and where it may be headed in the future.
Abstract: In this essay I look at the history of knowledge management and offer insights into what knowledge management means today and where it may be headed in the future. This is an updated version of an article first published in Knowledge Directions, the journal of the Institute for Knowledge Management, fall 1999.

502 citations


Journal ArticleDOI
P. Gongla1, C. R. Rizzuto1
TL;DR: An evolution model based on observations of more than 60 communities over a five-year period is presented, and the evolution is discussed in terms of people and organization behavior, supporting processes, and enabling technology factors.
Abstract: In 1995, IBM Global Services began implementing a business model that included support for the growth and development of communities of practice focused on the competencies of the organization. This paper describes our experience working with these communities over a five-year period, concentrating specifically on how the communities evolved. We present an evolution model based on observing over 60 communities, and we discuss the evolution in terms of people and organization behavior, supporting processes, and enabling technology factors. Also described are specific scenarios of communities within IBM Global Services at various stages of evolution.

396 citations


Journal ArticleDOI
TL;DR: It is argued that it is essential for those designing knowledge management systems to consider the human and social factors at play in the production and use of knowledge.
Abstract: Knowledge management is often seen as a problem of capturing, organizing, and retrieving information, evoking notions of data mining, text clustering, databases, and documents. We believe that this view is too simple. Knowledge is inextricably bound up with human cognition, and the management of knowledge occurs within an intricately structured social context. We argue that it is essential for those designing knowledge management systems to consider the human and social factors at play in the production and use of knowledge. We review work—ranging from basic research to applied techniques—that emphasizes cognitive and social factors in knowledge management. We then describe two approaches to designing socially informed knowledge management systems, social computing and knowledge socialization.

346 citations


Journal ArticleDOI
Laura M. Haas1, Peter Schwarz1, P. Kodali, E. Kotlar2, Julia E. Rice1, William C. Swope1 
TL;DR: The DiscoveryLink offering is described, focusing on two key elements, the wrapper architecture and the query optimizer, and how it can be used to integrate the access to life sciences data from heterogeneous data sources.
Abstract: Vast amounts of life sciences data reside today in specialized data sources, with specialized query processing capabilities. Data from one source often must be combined with data from other sources to give users the information they desire. There are database middleware systems that extract data from multiple sources in response to a single query. IBM's DiscoveryLink is one such system, targeted to applications from the life sciences industry. DiscoveryLink provides users with a virtual database to which they can pose arbitrarily complex queries, even though the actual data needed to answer the query may originate from several different sources, and none of those sources, by itself, is capable of answering the query. We describe the DiscoveryLink offering, focusing on two key elements, the wrapper architecture and the query optimizer, and illustrate how it can be used to integrate the access to life sciences data from heterogeneous data sources.
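To make the wrapper architecture concrete, the sketch below shows a hypothetical wrapper contract of the kind a federated engine could call; the interface and class names are invented for illustration and are not the actual DiscoveryLink API.

```java
import java.util.List;
import java.util.Map;

public class WrapperSketch {
    // Hypothetical wrapper contract: each specialized source advertises which query
    // fragments it can evaluate and executes the ones the federated optimizer assigns to it.
    interface SourceWrapper {
        String sourceName();
        boolean canHandle(String fragment);
        List<Map<String, Object>> execute(String fragment);
    }

    // Toy wrapper over an in-memory table, standing in for a real specialized source.
    static SourceWrapper inMemory(String name, List<Map<String, Object>> rows) {
        return new SourceWrapper() {
            public String sourceName() { return name; }
            public boolean canHandle(String fragment) { return fragment.startsWith("SCAN"); }
            public List<Map<String, Object>> execute(String fragment) { return rows; }
        };
    }

    public static void main(String[] args) {
        SourceWrapper proteins = inMemory("proteinDB",
                List.of(Map.of("id", "P01308", "name", "insulin")));
        // The middleware would route each fragment of a larger query to a capable wrapper.
        if (proteins.canHandle("SCAN proteins")) {
            System.out.println(proteins.execute("SCAN proteins"));
        }
    }
}
```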

295 citations


Journal ArticleDOI
TL;DR: An overview of the Blue Gene project at IBM Research is provided; the project aims to advance the understanding of the mechanisms behind protein folding via large-scale simulation and to explore novel ideas in massively parallel machine architecture and software.
Abstract: In December 1999, IBM announced the start of a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. This project should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include: how to most effectively utilize this novel platform to meet our scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets, with reasonable cost, through novel machine architectures. This paper provides an overview of the Blue Gene project at IBM Research. It includes some of the plans that have been made, the intended goals, and the anticipated challenges regarding the scientific work, the software application, and the hardware design.

290 citations


Journal ArticleDOI
TL;DR: A domain ontology for molecular biology and bioinformatics is used in a retrieval-based information integration system for biologists, in which the ontology is used both to drive a visual query interface and as a global schema against which complex intersource queries are expressed.
Abstract: This paper describes the Transparent Access to Multiple Bioinformatics Information Sources project, known as TAMBIS, in which a domain ontology for molecular biology and bioinformatics is used in a retrieval-based information integration system for biologists. The ontology, represented using a description logic and managed by a terminology server, is used both to drive a visual query interface and as a global schema against which complex intersource queries are expressed. These source-independent declarative queries are then rewritten into collections of ordered source-dependent queries for execution by a middleware layer. In bioinformatics, the majority of data sources are not databases but tools with limited accessible interfaces. The ontology helps manage the interoperation between these resources. The paper emphasizes the central role that is played by the ontology in the system. The project distinguishes itself from others in the following ways: the ontology, developed by a biologist, is substantial; the retrieval interface is sophisticated; the description logic is managed by a sophisticated terminology server. A full pilot application is available as a Java™ applet integrating five sources concerned with proteins. This pilot is currently undergoing field trials with working biologists and is being used to answer real questions in biology, one of which is used as a case study throughout the paper.

271 citations


Journal ArticleDOI
T. Nasukawa1, Tohru Nagano1
TL;DR: When the prototype system named TAKMI (Text Analysis and Knowledge Mining) is applied to textual databases in PC help centers, it can automatically detect product failures; determine issues that have led to rapid increases in the number of calls and their underlying reasons; and analyze help center productivity and changes in customers' behavior involving a particular product, without reading any of the text.
Abstract: Large text databases potentially contain a great wealth of knowledge. However, text represents factual information (and information about the author's communicative intentions) in a complex, rich, and opaque manner. Consequently, unlike numerical and fixed field data, it cannot be analyzed by standard statistical data mining methods. Relying on human analysis results in either huge workloads or the analysis of only a tiny fraction of the database. We are working on text mining technology to extract knowledge from very large amounts of textual data. Unlike information retrieval technology that allows a user to select documents that meet the user's requirements and interests, or document clustering technology that organizes documents, we focus on finding valuable patterns and rules in text that indicate trends and significant features about specific topics. By applying our prototype system named TAKMI (Text Analysis and Knowledge Mining) to textual databases in PC help centers, we can automatically detect product failures; determine issues that have led to rapid increases in the number of calls and their underlying reasons; and analyze help center productivity and changes in customers' behavior involving a particular product, without reading any of the text. We have verified that our framework is also effective for other data such as patent documents.
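One of the outcomes described, spotting topics whose call volume rises sharply, can be approximated by counting keyword mentions per week and flagging steep increases. The sketch below assumes simple keyword matching in place of TAKMI's deeper text analysis; all names and data are illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CallTrendSketch {
    // Count how many call records in each week mention a given concept keyword.
    static Map<Integer, Integer> weeklyCounts(List<String> records, List<Integer> weeks, String keyword) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int i = 0; i < records.size(); i++) {
            if (records.get(i).toLowerCase().contains(keyword)) {
                counts.merge(weeks.get(i), 1, Integer::sum);
            }
        }
        return counts;
    }

    // Flag a concept whose mentions this week are at least `factor` times last week's.
    static boolean rapidIncrease(Map<Integer, Integer> counts, int week, double factor) {
        int now = counts.getOrDefault(week, 0);
        int before = counts.getOrDefault(week - 1, 0);
        return before > 0 && now >= factor * before;
    }

    public static void main(String[] args) {
        List<String> records = List.of("modem drops connection", "modem will not dial", "modem error 678");
        List<Integer> weeks = List.of(1, 2, 2);
        Map<Integer, Integer> counts = weeklyCounts(records, weeks, "modem");
        System.out.println(rapidIncrease(counts, 2, 2.0));  // true: 2 mentions in week 2 vs. 1 in week 1
    }
}
```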

235 citations


Journal ArticleDOI
TL;DR: This paper reports on the experiences with two systems that were developed at the University of Pennsylvania: K2, a view integration implementation, and GUS, a data warehouse.
Abstract: The integrated access to heterogeneous data sources is a major challenge for the biomedical community. Several solution strategies have been explored: link-driven federation of databases, view integration, and warehousing. In this paper we report on our experiences with two systems that were developed at the University of Pennsylvania: K2, a view integration implementation, and GUS, a data warehouse. Although the view integration and the warehouse approaches each have advantages, there is no clear "winner." Therefore, in selecting the best strategy for a particular application, users must consider the data characteristics, the performance guarantees required, and the programming resources available. Our experiences also point to some practical tips on how database updates should be published, and how XML can be used to facilitate the processing of updates in a warehousing environment.

Journal ArticleDOI
TL;DR: This paper presents the "conflicting entities" administration paradigm for the specification of static and dynamic separation of duty requirements in the workflow environment, and argues that RBAC does not support the complex work processes often associated with separation of duty requirements, particularly dynamic separation of duty.
Abstract: Separation of duty, as a security principle, has as its primary objective the prevention of fraud and errors. This objective is achieved by disseminating the tasks and associated privileges for a specific business process among multiple users. This principle is demonstrated in the traditional example of separation of duty found in the requirement of two signatures on a check. Previous work on separation of duty requirements often explored implementations based on role-based access control (RBAC) principles. These implementations are concerned with constraining the associations between RBAC components, namely users, roles, and permissions. Enforcement of the separation of duty requirements, although an integrity requirement, thus relies on an access control service that is sensitive to the separation of duty requirements. A distinction between separation of duty requirements that can be enforced in administrative environments, namely static separation of duty, and requirements that can only be enforced in a run-time environment, namely dynamic separation of duty, is required. It is argued that RBAC does not support the complex work processes often associated with separation of duty requirements, particularly with dynamic separation of duty. The workflow environment, being primarily concerned with the facilitation of complex work processes, provides a context in which the specification of separation of duty requirements can be studied. This paper presents the "conflicting entities" administration paradigm for the specification of static and dynamic separation of duty requirements in the workflow environment.
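The static form of the requirement can be illustrated with a small check: refuse any assignment that gives one user both members of a declared conflicting role pair. This is a minimal sketch of that idea only (the dynamic, run-time case the paper also treats is omitted), with invented names.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class StaticSoDCheck {
    // Returns true if the user's role set contains both halves of any conflicting pair,
    // e.g. the role that prepares a check and the role that approves it.
    static boolean violatesStaticSoD(Set<String> rolesOfUser,
                                     Set<Map.Entry<String, String>> conflictingPairs) {
        for (Map.Entry<String, String> pair : conflictingPairs) {
            if (rolesOfUser.contains(pair.getKey()) && rolesOfUser.contains(pair.getValue())) {
                return true;   // one user holds both conflicting roles
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> roles = new HashSet<>(Set.of("prepare-check", "approve-check"));
        Set<Map.Entry<String, String>> conflicts =
                Set.of(Map.entry("prepare-check", "approve-check"));
        System.out.println(violatesStaticSoD(roles, conflicts));  // true: this assignment should be refused
    }
}
```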

Journal ArticleDOI
Robert L. Mack1, Yael Ravin1, Roy J. Byrd1
TL;DR: The role knowledge portals play in supporting knowledge work tasks and the component technologies embedded in portals, such as the gathering of distributed document information, indexing and text search, and categorization, are described; new functionality for future inclusion in knowledge portals is also discussed.
Abstract: A fundamental aspect of knowledge management is capturing knowledge and expertise created by knowledge workers as they go about their work and making it available to a larger community of colleagues. Technology can support these goals, and knowledge portals have emerged as a key tool for supporting knowledge work. Knowledge portals are single-point-access software systems intended to provide easy and timely access to information and to support communities of knowledge workers who share common goals. In this paper we discuss knowledge portal applications we have developed in collaboration with IBM Global Services, mainly for internal use by Global Services practitioners. We describe the role knowledge portals play in supporting knowledge work tasks and the component technologies embedded in portals, such as the gathering of distributed document information, indexing and text search, and categorization; and we discuss new functionality for future inclusion in knowledge portals. We share our experience deploying and maintaining portals. Finally, we describe how we view the future of knowledge portals in an expanding knowledge workplace that supports mobility, collaboration, and increasingly automated project workflow.

Journal ArticleDOI
TL;DR: It is argued that in order to capture and internalize knowledge obtained through an alliance, a firm must have an alliance learning capability and that what is important is not necessarily a particular alliance strategy, but rather an alignment between alliance strategy and business strategy.
Abstract: Strategic alliances are no longer a strategic option but a necessity in many markets and industries. Dynamic markets for both end products and technologies, coupled with the increasing costs of doing business, have resulted in a significant increase in the use of alliances. Yet, managers are finding it increasingly difficult to capture value from alliances. In this paper, we present a model that describes the knowledge resource exchange between alliance partners. This model focuses on the different dimensions of knowledge resources (tacitness, specificity, and complexity) and their associated value implications, as well as the different roles of the partner based on its position within an industry network (complementor, competitor, supplier, customer, or other). We also argue that in order to capture and internalize knowledge obtained through an alliance, a firm must have an alliance learning capability. We illustrate the use of this model in the computer industry by analyzing the publicly announced alliances of Dell Computer Corporation and Sun Microsystems, Inc. By applying our resource exchange model, we were able to analyze the alliance strategy for each firm and to understand the alignment between the announced business strategy and alliance strategy for each firm. The findings suggest that what is important is not necessarily a particular alliance strategy, but rather an alignment between alliance strategy and business strategy.

Journal ArticleDOI
TL;DR: This paper takes a process perspective and reflects upon the value e-business knowledge contributes in the enhancement of three core operating processes: customer relationship management, supply chain management, and product development management.
Abstract: The new business landscape ushered in by e-business has revolutionized business operations but, to date, has not integrated well with internal knowledge management initiatives. Through the development of e-business focused knowledge, organizations can accomplish three critical tasks: (1) evaluate what type of work organizations are doing in the e-business environment (know-what); (2) understand how they are doing it (know-how); and (3) determine why certain practices and companies are likely to undergo change for the foreseeable future (know-why). In this paper we take a process perspective and reflect upon the value e-business knowledge contributes in the enhancement of three core operating processes: customer relationship management, supply chain management, and product development management. Understanding how e-business impacts these core processes and the subprocesses within them, and then leveraging that knowledge to enhance these processes, is key to an organization's success in deriving superior marketplace results. In this paper, therefore, we highlight the central role knowledge management plays in diagnosing and managing e-business-driven changes in organizations.

Journal ArticleDOI
TL;DR: This paper considers the problem in the light of commercially available secure coprocessors--whose internal memory is still much, much smaller than the typical database size--and constructs an algorithm that both provides asymptotically optimal performance and also promises reasonable performance in real implementations.
Abstract: What does it take to implement a server that provides access to records in a large database, in a way that ensures that this access is completely private--even to the operator of this server? In this paper, we examine the question: Using current commercially available technology, is it practical to build such a server, for real databases of realistic size, that offers reasonable performance--scaling well, parallelizing well, working with the current client infrastructure, and enabling server operators of otherwise unknown credibility to prove their service has these privacy properties? We consider this problem in the light of commercially available secure coprocessors--whose internal memory is still much, much smaller than the typical database size--and construct an algorithm that both provides asymptotically optimal performance and also promises reasonable performance in real implementations. Preliminary prototypes support this analysis, but leave many areas for further work.
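The baseline that achieves this privacy is to have the secure coprocessor touch every record on every query, so the host learns nothing from which records are read; the paper's algorithm improves on this linear scan. The sketch below shows only that naive baseline, assuming fixed-size records, and is not the paper's asymptotically optimal construction.

```java
public class PrivateScanSketch {
    // Inside the secure coprocessor: touch every record and keep only the wanted one.
    // The host sees an identical full scan no matter which record was requested,
    // so the access pattern leaks nothing about the query.
    static byte[] privateFetch(byte[][] records, int wantedIndex) {
        byte[] result = new byte[records[0].length];
        for (int i = 0; i < records.length; i++) {
            boolean wanted = (i == wantedIndex);
            for (int j = 0; j < result.length; j++) {
                result[j] = wanted ? records[i][j] : result[j];  // constant work per record
            }
        }
        return result;
    }

    public static void main(String[] args) {
        byte[][] db = { {1, 1}, {2, 2}, {3, 3} };
        System.out.println(privateFetch(db, 1)[0]);  // prints 2, but every record was read
    }
}
```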

Journal ArticleDOI
TL;DR: Experimental results show the high precision of the proposed classifier and the complementarity of the bioinformatics tools studied in the paper.
Abstract: In this paper we propose new techniques to extract features from protein sequences. We then use the features as inputs for a Bayesian neural network (BNN) and apply the BNN to classifying protein sequences obtained from the PIR (Protein Information Resource) database maintained at the National Biomedical Research Foundation. To evaluate the performance of the proposed approach, we compare it with other protein classifiers built based on sequence alignment and machine learning methods. Experimental results show the high precision of the proposed classifier and the complementarity of the bioinformatics tools studied in the paper.
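As an illustration of turning a variable-length sequence into a fixed-length numeric input for a classifier, the sketch below computes dipeptide (2-gram) frequencies, one common encoding; it is not necessarily the feature set proposed in the paper.

```java
public class DipeptideFeatures {
    static final String AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY";

    // Map a protein sequence to a 400-dimensional vector of dipeptide frequencies,
    // a fixed-length input suitable for a neural network classifier.
    static double[] encode(String sequence) {
        double[] features = new double[AMINO_ACIDS.length() * AMINO_ACIDS.length()];
        int pairs = 0;
        for (int i = 0; i + 1 < sequence.length(); i++) {
            int a = AMINO_ACIDS.indexOf(sequence.charAt(i));
            int b = AMINO_ACIDS.indexOf(sequence.charAt(i + 1));
            if (a >= 0 && b >= 0) {
                features[a * AMINO_ACIDS.length() + b]++;
                pairs++;
            }
        }
        if (pairs > 0) {
            for (int k = 0; k < features.length; k++) features[k] /= pairs;
        }
        return features;
    }

    public static void main(String[] args) {
        double[] v = encode("MKTAYIAKQR");
        System.out.println(v.length);  // 400 features, ready as classifier input
    }
}
```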

Journal ArticleDOI
TL;DR: This paper compares and contrasts different approaches to content adaptation, including authoring different versions to accommodate different environments, using application server technology such as JavaServer Pages™ (JSP™) to create multiple versions of dynamic applications, and dynamically transcoding information generated by a single application.
Abstract: The promise of e-business is coming true: both businesses and individuals are using the Web to buy products and services. Both want to extend the reach of e-business to new environments. Customers want to check accounts, access information, and make purchases with their cellular phones, pagers, and personal digital assistants (PDAs). Banks, airlines, and retailers are competing to provide the most ubiquitous, convenient service for their customers. Web applications designed to take advantage of the rich rendering capabilities of advanced desktop browsers on large displays do not generally render effectively on the small screens available on phones and PDAs. Some devices have little or no graphics capability, or they require different markup languages, such as Wireless Markup Language (WML), for text presentation. Transcoding is technology for adapting content to match constraints and preferences associated with specific environments. This paper compares and contrasts different approaches to content adaptation, including authoring different versions to accommodate different environments, using application server technology such as JavaServer Pages™ (JSP™) to create multiple versions of dynamic applications, and dynamically transcoding information generated by a single application. For dynamic transcoding, the paper describes several different transcoding methodologies employed by the IBM WebSphere™ Transcoding Publisher product, including HyperText Markup Language (HTML) simplification, Extensible Markup Language stylesheet selection and application, HTML conversion to WML, WML deck fragmentation, and image transcoding. The paper discusses how to decide whether transcoding should be performed at the content source or in a network intermediary. It also describes a means of identifying the device and network characteristics associated with a request and using that information to decide how to transcode the response. Finally, the paper discusses the need for new networking benchmarks to characterize the server load and performance characteristics for dynamic transcoding.
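The device-dependent decision described above can be reduced to inspecting request headers and choosing markup and image treatment accordingly. The sketch below uses invented device classes and header checks for illustration; it is not the WebSphere Transcoding Publisher API.

```java
public class ContentAdaptationSketch {
    enum Device { DESKTOP_BROWSER, PDA, WAP_PHONE }

    // Very rough device detection from the User-Agent and Accept headers.
    static Device classify(String userAgent, String accept) {
        if (accept != null && accept.contains("text/vnd.wap.wml")) return Device.WAP_PHONE;
        if (userAgent != null && userAgent.contains("Windows CE")) return Device.PDA;
        return Device.DESKTOP_BROWSER;
    }

    // Pick markup and an image budget for the detected device.
    static String adaptationPlan(Device device) {
        switch (device) {
            case WAP_PHONE: return "markup=WML, images=dropped";
            case PDA:       return "markup=simplified HTML, images=scaled to 150px";
            default:        return "markup=full HTML, images=original";
        }
    }

    public static void main(String[] args) {
        Device d = classify("Nokia7110/1.0", "text/vnd.wap.wml");  // example WAP request
        System.out.println(d + " -> " + adaptationPlan(d));
    }
}
```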

Journal ArticleDOI
TL;DR: The broader scope of virtual screening is discussed and the recent work in docking one million compounds into the estrogen hormone receptor is described in order to highlight the technical feasibility of performing very large-scale virtual screening as a route to identifying novel drug leads.
Abstract: Virtual screening, or in silico screening, is a new approach attracting increasing levels of interest in the pharmaceutical industry as a productive and cost-effective technology in the search for novel lead compounds. Although the principles involved--the computational analysis of chemical databases to identify compounds appropriate for a given biological receptor--have been pursued for several years in molecular modeling groups, the availability of inexpensive high-performance computing platforms has transformed the process so that increasingly complex and more accurate analyses can be performed on very large data sets. The virtual screening technology of Protherics Molecular Design Ltd. is based on its integrated software environment for receptor-based drug design, called Prometheus. In particular, molecular docking is used to predict the binding modes and binding affinities of every compound in the data set to a given biological receptor. This method represents a very detailed and relevant basis for prioritizing compounds for biological screening. This paper discusses the broader scope of virtual screening and, as an example, describes our recent work in docking one million compounds into the estrogen hormone receptor in order to highlight the technical feasibility of performing very large-scale virtual screening as a route to identifying novel drug leads.
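Whatever docking engine supplies the scores, the screening step itself is a ranking problem: score every compound against the receptor and keep the best candidates for assay. The sketch below shows only that outer loop, with a placeholder scoring function standing in for the docking calculation.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleFunction;

public class VirtualScreeningSketch {
    record Compound(String id) {}

    // Rank compounds by predicted binding score (lower = stronger binding here)
    // and return the top candidates for experimental screening.
    static List<Compound> screen(List<Compound> library,
                                 ToDoubleFunction<Compound> dockingScore,
                                 int keep) {
        return library.stream()
                .sorted(Comparator.comparingDouble(dockingScore))
                .limit(keep)
                .toList();
    }

    public static void main(String[] args) {
        List<Compound> library = List.of(new Compound("c1"), new Compound("c2"), new Compound("c3"));
        // Placeholder score: a real run would dock each compound into the receptor model.
        List<Compound> hits = screen(library, c -> c.id().hashCode() % 100, 2);
        System.out.println(hits);
    }
}
```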

Journal ArticleDOI
TL;DR: This paper describes the basic principles of electronic TPAs, followed by an overview of the proposed TPA language, and describes examples of solutions constructed using TPAs and BPF.
Abstract: In business-to-business interactions spanning electronic commerce, supply chain management, and other applications, the terms and conditions describing the electronic interactions between businesses can be expressed as an electronic contract or trading partner agreement (TPA). From the TPA, configuration information and code that embody the terms and conditions can be generated automatically at each trading partner's site. The TPA expresses the rules of interaction between the parties to the TPA while maintaining complete independence of the internal processes at each party from the other parties. It represents a long-running conversation that comprises a single unit of business. This paper summarizes the needs of interbusiness electronic interactions. Then it describes the basic principles of electronic TPAs, followed by an overview of the proposed TPA language. The business-to-business protocol framework (BPF) provides various tools and run-time services for supporting TPA-based interaction and integration with business applications. Finally, we describe examples of solutions constructed using TPAs and BPF.

Journal ArticleDOI
TL;DR: This analysis suggests that the TPC benchmarks tend to exercise the following aspects of the system differently than the production workloads: concurrency control mechanism, workload-adaptive techniques, scheduling and resource allocation policies, and I/O optimizations for temporary and index files.
Abstract: There has been very little empirical analysis of any real production database workloads. Although the Transaction Processing Performance Council benchmarks C (TPC-C™) and D (TPC-D™) have become the standard benchmarks for on-line transaction processing and decision support systems, respectively, there has not been any major effort to systematically analyze their workload characteristics, especially in relation to those of real production database workloads. In this paper, we examine the characteristics of the production database workloads of ten of the world's largest corporations, and we also compare them to TPC-C and TPC-D. We find that the production workloads exhibit a wide range of behavior. In general, the two TPC benchmarks complement one another in reflecting the characteristics of the production workloads, but some aspects of real workloads are still not represented by either of the benchmarks. Specifically, our analysis suggests that the TPC benchmarks tend to exercise the following aspects of the system differently than the production workloads: concurrency control mechanism, workload-adaptive techniques, scheduling and resource allocation policies, and I/O optimizations for temporary and index files. We also reexamine Amdahl's rule of thumb for a typical data processing system and discover that both the TPC benchmarks and the production workloads generate on the order of 0.5 to 1.0 bit of logical I/O per instruction, surprisingly close to the much earlier figure.
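The closing figure can be reproduced with simple arithmetic: divide the logical I/O traffic in bits by the instructions executed over the same interval. A minimal sketch with made-up example numbers:

```java
public class IoPerInstruction {
    // Bits of logical I/O generated per instruction executed over one interval.
    static double bitsPerInstruction(long logicalIoBytes, long instructions) {
        return (logicalIoBytes * 8.0) / instructions;
    }

    public static void main(String[] args) {
        // Example: 1 GB of logical I/O against 10 billion instructions
        // gives about 0.86 bits per instruction, inside the 0.5-1.0 range reported.
        System.out.println(bitsPerInstruction(1L << 30, 10_000_000_000L));
    }
}
```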

Journal ArticleDOI
TL;DR: Max, a working prototype that includes a high-throughput crystallization and evaluation setup in the wet laboratory and an intelligent software system in the computer laboratory, is described, able to prepare and evaluate over 40 thousand crystallization experiments a day.
Abstract: Current structural genomics projects are likely to produce hundreds of proteins a year for structural analysis. The primary goal of our research is to speed up the process of crystal growth for proteins in order to enable the determination of protein structure using single crystal X-ray diffraction. We describe Max, a working prototype that includes a high-throughput crystallization and evaluation setup in the wet laboratory and an intelligent software system in the computer laboratory. A robotic setup for crystal growth is able to prepare and evaluate over 40 thousand crystallization experiments a day. Images of the crystallization outcomes captured with a digital camera are processed by an image-analysis component that uses the two-dimensional Fourier transform to perform automated classification of the experiment outcome. An information repository component, which stores the data obtained from crystallization experiments, was designed with an emphasis on correctness, completeness, and reproducibility. A case-based reasoning component provides support for the design of crystal growth experiments by retrieving previous similar cases, and then adapting these in order to create a solution for the problem at hand. While work on Max is still in progress, we report here on the implementation status of its components, discuss how our work relates to other research, and describe our plans for the future.
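The retrieval half of the case-based reasoning component can be pictured as nearest-neighbor search over numeric vectors of crystallization conditions. The sketch below assumes that representation; the field names and distance measure are illustrative, not those of Max.

```java
import java.util.Comparator;
import java.util.List;

public class CaseRetrievalSketch {
    // A stored crystallization case: normalized condition vector plus its observed outcome.
    record CrystalCase(double[] conditions, String outcome) {}

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    // Retrieve the k most similar past cases to adapt into a new experiment design.
    static List<CrystalCase> retrieve(List<CrystalCase> caseBase, double[] query, int k) {
        return caseBase.stream()
                .sorted(Comparator.comparingDouble(c -> distance(c.conditions(), query)))
                .limit(k)
                .toList();
    }

    public static void main(String[] args) {
        List<CrystalCase> base = List.of(
                new CrystalCase(new double[] {7.0, 0.2}, "needle crystals"),
                new CrystalCase(new double[] {5.5, 1.0}, "clear drop"));
        System.out.println(retrieve(base, new double[] {6.8, 0.3}, 1).get(0).outcome());
    }
}
```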

Journal ArticleDOI
TL;DR: A recent microsecond-length molecular dynamics simulation on a small protein, villin headpiece subdomain, with an explicit atomic-level representation of both protein and solvent, has marked the beginning of direct and realistic simulations of the folding processes.
Abstract: Understanding the mechanism of protein folding is often referred to as the second half of genetics. Computational approaches have been instrumental in these efforts. Simplified models have been applied to understand the physical principles governing the folding processes and will continue to play important roles in the endeavor. Encouraging results have been obtained from all-atom molecular dynamics simulations of protein folding. A recent microsecond-length molecular dynamics simulation on a small protein, villin headpiece subdomain, with an explicit atomic-level representation of both protein and solvent, has marked the beginning of direct and realistic simulations of the folding processes. With growing computer power and increasingly accurate representations together with the advancement of experimental methods, such approaches will help us to achieve a detailed understanding of protein folding mechanisms.
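To make the notion of a molecular dynamics step concrete, the sketch below shows the textbook velocity Verlet update for a single particle; the villin simulation applies the same kind of update to every atom with a full force field, and this is not the specific integrator or code used in that work.

```java
public class VelocityVerletSketch {
    // One velocity Verlet step in one dimension:
    // x(t+dt) = x + v*dt + 0.5*a*dt^2,   v(t+dt) = v + 0.5*(a_old + a_new)*dt
    interface Force { double at(double x); }   // force as a function of position

    static double[] step(double x, double v, double mass, double dt, Force force) {
        double aOld = force.at(x) / mass;
        double xNew = x + v * dt + 0.5 * aOld * dt * dt;
        double aNew = force.at(xNew) / mass;
        double vNew = v + 0.5 * (aOld + aNew) * dt;
        return new double[] { xNew, vNew };
    }

    public static void main(String[] args) {
        // Harmonic oscillator (F = -kx) as a toy system; a protein simulation
        // performs the same update for every atom with far more elaborate forces.
        Force spring = x -> -1.0 * x;
        double[] state = { 1.0, 0.0 };
        for (int i = 0; i < 5; i++) state = step(state[0], state[1], 1.0, 0.01, spring);
        System.out.printf("x=%.4f v=%.4f%n", state[0], state[1]);
    }
}
```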

Journal ArticleDOI
TL;DR: As e-business matures, companies require enterprise-scalable functionality for their corporate Internet and intranet environments, and successful companies recognize that their security infrastructures need to address the e- business challenge.
Abstract: As e-business matures, companies require enterprise-scalable functionality for their corporate Internet and intranet environments. To support the expansion of their computing boundaries, businesses have embraced Web application servers. These servers support servlets, JavaServer Pages™, and Enterprise JavaBeans™ technologies, providing simplified development and flexible deployment of Web-based applications. However, securing this malleable model presents a challenge. Successful companies recognize that their security infrastructures need to address the e-business challenge. They are aware of the types of attacks that malevolent entities can launch against their servers and can plan appropriate defenses.

Journal ArticleDOI
Messaoud Benantar1
TL;DR: The details of the Internet public key infrastructure, which provides the secure digital certification required to establish a network of trust for public commerce, are explored.
Abstract: Long before the advent of electronic systems, different methods of information scrambling were used. Early attempts at data security in electronic computers employed some of the same transformations. Modern secret key cryptography brought much greater security, but eventually proved vulnerable to brute-force attacks. Public key cryptography has now emerged as the core technology for modern computing security systems. By associating a public key with a private key, many of the key distribution problems of earlier systems are avoided. The Internet public key infrastructure provides the secure digital certification required to establish a network of trust for public commerce. This paper explores the details of the infrastructure.
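The core public-key operation underlying the infrastructure can be shown with the standard java.security API: sign with a private key, verify with the matching public key. The sketch omits the certificates and trust hierarchy that the PKI layers on top; the message is an invented example.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignVerifySketch {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair: the private key stays secret, the public key is shared.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] message = "order #1234: pay 100 USD".getBytes("UTF-8");

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Anyone holding the public key can verify; a PKI certificate binds
        // that public key to an identity so the verifier knows whose key it is.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}
```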

Journal ArticleDOI
TL;DR: This paper describes several in-process metrics whose usefulness has been proven with ample implementation experiences at the IBM Rochester AS/400® software development laboratory and contends that most of them are applicable to most software projects and should be an integral part of software testing.
Abstract: In-process tracking and measurements play a critical role in software development, particularly for software testing. Although there are many discussions and publications on this subject and numerous proposed metrics, few in-process metrics are presented with sufficient experiences of industry implementation to demonstrate their usefulness. This paper describes several in-process metrics whose usefulness has been proven with ample implementation experiences at the IBM Rochester AS/400® software development laboratory. For each metric, we discuss its purpose, data, interpretation, and use and present a graphic example with real-life data. We contend that most of these metrics, with appropriate tailoring as needed, are applicable to most software projects and should be an integral part of software testing.
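One familiar in-process metric of the kind discussed is the defect arrival pattern during test: defects opened per week, which should rise, peak, and decline before release. The sketch below computes that curve from hypothetical defect records; it is not one of the paper's exact metric definitions.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DefectArrivalSketch {
    record Defect(int weekOpened, String severity) {}

    // Defects opened per test week; a healthy pattern rises, peaks, then declines.
    static Map<Integer, Long> arrivalsPerWeek(List<Defect> defects) {
        Map<Integer, Long> arrivals = new TreeMap<>();
        for (Defect d : defects) {
            arrivals.merge(d.weekOpened(), 1L, Long::sum);
        }
        return arrivals;
    }

    public static void main(String[] args) {
        List<Defect> defects = List.of(new Defect(1, "high"), new Defect(2, "low"), new Defect(2, "high"));
        System.out.println(arrivalsPerWeek(defects));  // {1=1, 2=2}
    }
}
```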

Journal ArticleDOI
James J. Whitmore1
TL;DR: This paper describes a systematic approach for defining, modeling, and documenting security functions within a structured design process in order to facilitate greater trust in the operation of resulting IT solutions.
Abstract: The task of developing information technology (IT) solutions that consistently and effectively apply security principles has many challenges, including: the complexity of integrating the specified security functions within the several underlying component architectures found in computing systems, the difficulty in developing a comprehensive set of baseline requirements for security, and a lack of widely accepted security design methods. With the formalization of security evaluation criteria into an international standard known as Common Criteria, one of the barriers to a common approach for developing extensible IT security architectures has been lowered; however, more work remains. This paper describes a systematic approach for defining, modeling, and documenting security functions within a structured design process in order to facilitate greater trust in the operation of resulting IT solutions.

Journal ArticleDOI
TL;DR: This paper describes the benefits of an intermediary-based transcoding approach and presents a formal framework for document transcoding that is meant to simplify the problem of composing transcoding operations.
Abstract: With the rapid increase in the amount of content on the World Wide Web, it is now becoming clear that information cannot always be stored in a form that anticipates all of its possible uses. One solution to this problem is to create transcoding intermediaries that convert data, on demand, from one form into another. Up to now, these transcoders have usually been stand-alone components, converting one particular data format to another particular data format. A more flexible approach is to create modular transcoding units that can be composed as needed. In this paper, we describe the benefits of an intermediary-based transcoding approach and present a formal framework for document transcoding that is meant to simplify the problem of composing transcoding operations.
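Composable transcoding can be pictured as chaining transcoders whose output feeds the next one's input. The sketch below uses plain function composition with two toy transcoders (a crude HTML simplifier and an HTML-to-WML converter); the regular expressions are illustrative, not production conversion rules.

```java
import java.util.function.Function;

public class TranscoderCompositionSketch {
    // Each modular transcoder is a function from one document representation to another.
    static Function<String, String> simplifyHtml =
            html -> html.replaceAll("(?s)<script.*?</script>", "");
    static Function<String, String> htmlToWml =
            html -> "<wml><card><p>" + html.replaceAll("<[^>]+>", "") + "</p></card></wml>";

    public static void main(String[] args) {
        // Compose on demand: simplify first, then convert markup for a WAP phone.
        Function<String, String> pipeline = simplifyHtml.andThen(htmlToWml);
        System.out.println(pipeline.apply("<html><script>x()</script><b>hello</b></html>"));
    }
}
```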

Journal ArticleDOI
H. Kreger1
TL;DR: The JMX technology and application program interfaces, which allow management of Java technologies as well as management through Java technologies, are discussed in depth using examples pertinent to today's application developer.
Abstract: Modern enterprise systems are composed of both centralized and distributed applications. Many of these applications are business-critical, creating the need for their control and management by existing management systems. A single suite of uniform instrumentation for manageability is needed to make this cost-effective. The Java™ Management Extensions Agent and Instrumentation Specification, v 1.0, describes an isolation layer between an information technology resource and an arbitrary (enterprise-specific) set of management interfaces and systems. It includes a simple, yet sophisticated and extensible management agent that can accommodate communication with private or acquired enterprise management systems. The application programming interface is simple enough that manageability can be achieved in three to five lines of code. Yet, it is flexible enough that complex, distributed applications can be managed, allowing management of Java technologies as well as management through Java technologies. This paper includes an overview of application management issues and technologies. The JMX technology and application program interfaces are discussed in depth using examples pertinent to today's application developer.
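The "three to five lines of code" refers to the standard MBean pattern: declare a management interface named after the class with an MBean suffix, implement it, and register an instance with an MBeanServer. The sketch below uses the javax.management API; the ObjectName and the counter resource are invented for the example.

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

public class JmxSketch {
    // Management interface: by convention named after the implementation class plus "MBean".
    public interface CounterMBean {
        int getCount();
        void reset();
    }

    public static class Counter implements CounterMBean {
        private int count;
        public int getCount() { return count; }
        public void reset() { count = 0; }
        public void increment() { count++; }
    }

    public static void main(String[] args) throws Exception {
        Counter counter = new Counter();
        // The few lines that make the resource manageable:
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(counter, new ObjectName("com.example:type=Counter"));
        counter.increment();
        System.out.println("registered; the count attribute is now visible to JMX clients");
    }
}
```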

Journal ArticleDOI
TL;DR: The value of the GeneMine system is that it automatically brings together and uncovers important functional information from a much wider range of sources than a given specialist would normally think to query, resulting in insights that the researcher was not planning to look for.
Abstract: As genome data and bioinformatics resources grow exponentially in size and complexity, there is an increasing need for software that can bridge the gap between biologists with questions and the worldwide set of highly specialized tools for answering them. The GeneMine system for small- to medium-scale genome analysis provides: (1) automated analysis of DNA (deoxyribonucleic acid) and protein sequence data using over 50 different analysis servers via the Internet, integrating data from homologous functions, tissue expression patterns, mapping, polymorphisms, model organism data and phenotypes, protein structural domains, active sites, motifs and other features, etc., (2) automated filtering and data reduction to highlight significant and interesting patterns, (3) a visual data-mining interface for rapidly exploring correlations, patterns, and contradictions within these data via aggregation, overlay, and drill-down, all projected onto relevant sequence alignments and three-dimensional structures, (4) a plug-in architecture that makes adding new types of analysis, data sources, and servers (including anything on the Internet) as easy as supplying the relevant URLs (uniform resource Locators), (5) a hypertext system that lets users create and share "live" views of their discoveries by embedding three-dimensional structures, alignments, and annotation data within their documents, and (6) an integrated database schema for mining large GeneMine data sets in a relational database. The value of the GeneMine system is that it automatically brings together and uncovers important functional information from a much wider range of sources than a given specialist would normally think to query, resulting in insights that the researcher was not planning to look for. In this paper we present the architecture of the software for integrating and mining very diverse biological data, and cross-validation of gene function predictions. The software is freely available at http://www.bioinformatics.ucla.edu/genemine.