
Showing papers on "Software as a service published in 1999"


Journal ArticleDOI
TL;DR: This paper demonstrates that there is a strategic reason why software firms have followed consumers' desire to drop software protection, and shows that when network effects are strong, unprotecting is an equilibrium for a noncooperative industry.
Abstract: This paper demonstrates that there is a strategic reason why software firms have followed consumers' desire to drop software protection. We analyze software protection policies in a price-setting duopoly software industry selling differentiated software packages, where consumers' preference for particular software is affected by the number of other consumers who (legally or illegally) use the same software. Increasing network effects make software more attractive to consumers, thereby enabling firms to raise prices. However, they also generate a competitive effect resulting from fiercer competition for market shares. We show that when network effects are strong, unprotecting is an equilibrium for a noncooperative industry.

281 citations
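
A note on the model: the listing gives only a verbal summary, and the paper's exact specification is not shown. A minimal sketch of how such network effects are commonly formalized, with all symbols illustrative rather than the authors' own:

```latex
% Illustrative only; not the paper's actual specification.
% Utility of a consumer buying package i: stand-alone value v_i, a network
% benefit theta * x_i from the installed base x_i (legal plus illegal users),
% minus the price p_i.
\[
  u_i = v_i + \theta x_i - p_i , \qquad i \in \{1, 2\}
\]
% Dropping protection enlarges x_i (pirates join the installed base), raising
% paying consumers' willingness to pay; the paper's claim is that for strong
% network effects (large theta) this makes "no protection" an equilibrium.
```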


Journal ArticleDOI
TL;DR: Hecker, a developer at Netscape, discusses the business of commercial open-source software, including why a company might adopt an open-source model, how open-source licensing works, what business models might be usable and how various objections relating to open source might be answered.
Abstract: Hecker, a developer at Netscape, discusses the business of commercial open-source software, including why a company might adopt an open-source model, how open-source licensing works, what business models might be usable and how various objections relating to open source might be answered.

235 citations


Proceedings ArticleDOI
16 May 1999
TL;DR: The Software Dock framework creates a distributed, agent-based deployment framework to support the ongoing cooperation and negotiation among software producers themselves and among software producers and software consumers.
Abstract: Software deployment is an evolving collection of interrelated processes such as release, install, adapt, reconfigure, update, activate, deactivate, remove, and retire. The connectivity of large networks, such as the Internet, is affecting how software deployment is performed. It is necessary to introduce new software deployment technologies that leverage this connectivity. The Software Dock framework creates a distributed, agent-based deployment framework to support the ongoing cooperation and negotiation among software producers themselves and among software producers and software consumers. This deployment framework is enabled by the use of a standardized deployment schema for describing software systems, called the Deployable Software Description (DSD) format. The Software Dock also employs agents that traverse between software producers and consumers in order to perform software deployment activities by interpreting the descriptions of software systems. The Software Dock infrastructure allows software producers to offer their customers high-level deployment services that were previously not possible.

218 citations
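
The abstract names the Deployable Software Description (DSD) format without showing it. Purely as a hypothetical illustration of what a standardized deployment schema lets a field agent do (this is not the actual DSD syntax, and all names are invented):

```python
# Hypothetical descriptor in the spirit of a standardized deployment schema;
# the actual DSD format is not reproduced in this listing.
descriptor = {
    "name": "example-app",
    "version": "1.2.0",
    "dependencies": [{"name": "libfoo", "version": ">=2.0"}],
    "activities": ["install", "update", "adapt", "reconfigure", "remove"],
    "platform": {"os": ["linux", "win32"], "min_ram_mb": 64},
}

def can_deploy(desc: dict, site: dict) -> bool:
    """A deployment agent interprets the descriptor against the consumer
    site's state to decide whether an activity can proceed."""
    plat = desc["platform"]
    return site["os"] in plat["os"] and site["ram_mb"] >= plat["min_ram_mb"]

print(can_deploy(descriptor, {"os": "linux", "ram_mb": 128}))  # True
```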


Patent
29 Mar 1999
TL;DR: In this paper, the authors present a web-based application that enables users to proactively manage and accurately predict strategic software development and deliverables using distributed data collectors, an application server and a browser interface.
Abstract: A computer software application in the form of a software management and task completion and prediction apparatus by which project completion can be ascertained and management of a project can be maintained with high efficiency and accuracy. The software application is a web-based application which enables users to proactively manage and accurately predict strategic software development and deliverables. This application and delivery management system comprises distributed data collectors, an application server and a browser interface. The data collectors automatically gather data already being generated by various tools within the organization, such as scheduling, defect tracking, requirements management and software quality tools. This data is constantly being collected and fed into the application server, thereby providing objective and updated information. New tools can be easily added without disrupting operations. The data collected is fed into the application server, which is the brain of the apparatus. The application server analyzes the data collected by the data collectors to generate a statistically significant probability curve. This curve is then compared to the original planned schedule of product delivery to determine if the project is meeting its targets. Based upon this comparison, the software application predicts a probable delivery date based upon the various inputs and variables from the development process. In addition, the application server also generates early warning alerts, as needed and indicated. At such time as the software apparatus identifies a potential problem, the alerts automatically inform the user, so as to mitigate a crisis and assist with resolution of the problem. The alerts are communicated to the designated user by e-mail.

218 citations
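
The patent describes turning the collected schedule data into a "statistically significant probability curve" that is compared against the planned delivery date, but the listing does not specify the statistical method. A minimal Monte Carlo sketch of that idea, with invented task estimates and a hypothetical alert threshold:

```python
import random

def simulate_durations(tasks, trials=10_000):
    """Each task is a (best, likely, worst) estimate in days; returns sorted
    simulated project durations, from which a probability curve can be read."""
    outcomes = []
    for _ in range(trials):
        # Tasks are assumed sequential here; a real schedule would model
        # dependencies and use the organization's actual tracking data.
        outcomes.append(sum(random.triangular(lo, hi, mode)
                            for lo, mode, hi in tasks))
    return sorted(outcomes)

def completion_probability(durations, deadline_days):
    """Fraction of simulated runs finishing on or before the deadline."""
    return sum(d <= deadline_days for d in durations) / len(durations)

tasks = [(5, 8, 15), (10, 14, 25), (3, 4, 9)]      # hypothetical estimates
curve = simulate_durations(tasks)
if completion_probability(curve, deadline_days=30) < 0.8:
    print("early warning: delivery date at risk")  # the patent e-mails alerts
```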


Patent
07 Jun 1999
TL;DR: Open, horizontal service platforms are described in this paper, where service providers can, via network operators, load software associated with a service onto a dedicated service platform server, which is connected, via a LAN, to one or more remote devices (e.g., sensors, transducers, processors, etc.).
Abstract: Open, horizontal service platforms are described. Service providers can, via network operators, load software associated with a service onto a dedicated service platform server. The service platform server is connected, via a LAN, to one or more remote devices (e.g., sensors, transducers, processors, etc.). The functionality associated with the service can be distributed among two or more of the entities involved in the architecture. The distributed software operates and/or monitors these remote devices to implement the subscribed service.

182 citations


Journal ArticleDOI
TL;DR: Studying two companies, the authors found that Netscape was using a version of the Microsoft-style synchronize and stabilize process for PC software, but adapting it to build Internet browser and server products, and Microsoft's Internet groups were modifying their standard process to increase development speed and flexibility.
Abstract: There is probably little debate that Internet software companies must use more flexible development techniques and introduce new products faster than companies with more stable technology, established customer needs, and longer product cycles. Internet and PC software firms favor a more flexible style. The basic idea is to give programmers the autonomy to evolve designs iteratively but to force team members to synchronize their work frequently and then periodically stabilize their design changes or feature innovations. Studying two companies, the authors found that Netscape was using a version of the Microsoft-style synchronize and stabilize process for PC software, but adapting it to build Internet browser and server products. They also found that Microsoft's Internet groups were modifying their standard process to increase development speed and flexibility. The goal was to balance flexibility and speed with professional engineering discipline.

127 citations


Proceedings ArticleDOI
05 Jan 1999
TL;DR: A framework for evaluating data mining tools is presented and a methodology for applying this framework is described, which represents the first-hand experience of using many of the leading data mining tools against real business data at the Center for Data Insight at Northern Arizona University.
Abstract: As data mining evolves and matures, more and more businesses are incorporating this technology into their business practices. However, data mining and decision support software is currently expensive, and selection of the wrong tools can be costly in many ways. This paper provides direction and decision-making information to the practicing professional. A framework for evaluating data mining tools is presented and a methodology for applying this framework is described. Finally, a case study demonstrating the method's effectiveness is presented. This methodology represents the first-hand experience of using many of the leading data mining tools against real business data at the Center for Data Insight (CDI) at Northern Arizona University (NAU). This is not a comprehensive review of commercial tools but instead provides a method and a point-of-reference for selecting the best software tool for a particular problem. Experience has shown that there is not one best data mining tool for all purposes. This instrument is designed to accommodate differences in environments and problem domains. It is expected that this methodology will be used to publish tool comparisons and benchmarking results.

77 citations
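
The paper's actual evaluation instrument is not reproduced in the listing; the sketch below shows only the general shape of a weighted-criteria comparison, with hypothetical criteria, weights, and ratings:

```python
# Hypothetical criteria and weights; the paper's point is precisely that the
# instrument must be adapted per environment and problem domain.
WEIGHTS = {"accuracy": 0.4, "scalability": 0.25, "usability": 0.2, "cost": 0.15}

def score_tool(ratings):
    """Weighted sum of per-criterion ratings (0-10 scale assumed)."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

tools = {
    "tool_a": {"accuracy": 8, "scalability": 6, "usability": 9, "cost": 4},
    "tool_b": {"accuracy": 7, "scalability": 9, "usability": 5, "cost": 7},
}
best = max(tools, key=lambda name: score_tool(tools[name]))
print(f"best fit for this problem: {best} ({score_tool(tools[best]):.2f})")
```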


Journal ArticleDOI
TL;DR: The authors propose PM-Net, a model that captures the concurrent, iterative, and evolutionary nature of software development, which adopts the basic concepts of Petri nets, graphical models of information flow, with extensions to represent both decisions and artifacts.
Abstract: Technical and managerial complexity increasingly overwhelm project managers. To rein in that complexity, the authors propose PM-Net, a model that captures the concurrent, iterative, and evolutionary nature of software development. It adopts the basic concepts of Petri nets, graphical models of information flow, with extensions to represent both decisions and artifacts.

46 citations


Proceedings ArticleDOI
James Beck, Alain Gefflaut, Nayeem Islam
01 Aug 1999
TL;DR: Unique features of MOCA include a distributed service discovery model, use of a single registry for both local and remote services, and a lazy service loading policy that minimizes memory consumption.
Abstract: MOCA is an adaptable service framework targeting mobile computing devices with limited memory footprint. To ensure portability across a large spectrum of these devices, it is written in Java. MOCA is based on the notion of services, and assumes that applications can be decomposed into sets of cooperating services. A service is a loadable software component that performs a specific function such as data encryption or caching. The MOCA framework is composed of a service registry and a set of essential services. The registry provides life-cycle management of services including dynamic registration and look-up. Essential services, stored on the device, provide the minimum functionality required to establish a generic secure computing environment on top of a Java Virtual Machine (JVM). In particular, MOCA securely supports multiple applications as well as optional services running on a single JVM. Optional services and applications can reside locally on the device or be dynamically downloaded from remote locations. MOCA also allows a device to adapt to its environment by enabling dynamic discovery and registration of remote services published by surrounding devices. A single mechanism is used to support both local and remote services, which allows a device to access remote services on other devices as if these services were local to the device itself. Unique features of MOCA include a distributed service discovery model, use of a single registry for both local and remote services, and a lazy service loading policy that minimizes memory consumption. Keywords: component software, service framework, service discovery, mobile device, Java

42 citations
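
MOCA itself is written in Java; as a language-neutral illustration (Python, with all module and factory names invented), the following sketch shows the lazy loading idea: a service's code is imported only on first lookup, so unused services consume no memory.

```python
import importlib

class ServiceRegistry:
    """Sketch of a MOCA-style registry: services are registered as
    (module, factory) specs but loaded only when first looked up."""
    def __init__(self):
        self._specs = {}    # name -> (module_path, factory_name)
        self._loaded = {}   # name -> instantiated service object

    def register(self, name, module_path, factory_name):
        self._specs[name] = (module_path, factory_name)

    def lookup(self, name):
        if name not in self._loaded:                  # lazy load on demand
            module_path, factory_name = self._specs[name]
            module = importlib.import_module(module_path)
            self._loaded[name] = getattr(module, factory_name)()
        return self._loaded[name]

# One registry serves local and remote services alike, as in MOCA; a remote
# entry would register a proxy factory instead of a local module (not shown).
registry = ServiceRegistry()
registry.register("crypto", "myapp.services.crypto", "make_service")  # invented names
```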


Journal ArticleDOI
TL;DR: A generic framework that incorporates database and knowledge-base tools, a formal set of software test and evaluation metrics, and a suite of advanced analytic techniques for extracting information and knowledge from available data can enable highly efficient and cost-effective management of large and complex software projects.
Abstract: The construction and maintenance of large, high-quality software projects is a complex, error-prone and difficult process. Tools employing software database metrics can play an important role in the efficient execution and management of such large projects. In this paper, we present a generic framework to address this problem. This framework incorporates database and knowledge-base tools, a formal set of software test and evaluation metrics, and a suite of advanced analytic techniques for extracting information and knowledge from available data. The proposed combination of critical metrics and analytic tools can enable highly efficient and cost-effective management of large and complex software projects. The framework has the potential for greatly reducing venture risks and enhancing production quality in the domain of large-scale software project management.

33 citations


Journal ArticleDOI
TL;DR: It seems that the open-source movement-Linux, Perl, Apache, and their many cousins-has finally hit the big time, but my, how the world has stayed the same.
Abstract: My, how the world has changed. IBM is now backing Apache, Netscape has put an extraordinary amount of useful software out into the open, and vendors such as Metrowerks, Sybase and Oracle have released versions of their tools to run on a give-away operating system. It seems that the open-source movement-Linux, Perl, Apache, and their many cousins-has finally hit the big time. But my, how the world has stayed the same. EGCS (a derivative of the Free Software Foundation's GNU C++) is one of the few compilers around that has kept pace with the ANSI standard, but CVS, the open-source version control system, is 10 years behind equivalent commercial offerings. Linux is now more robust than some commercial varieties of Unix, but it's impossible to compare the reliability of open-source project management tools to that of Microsoft Project because the former don't exist.

Proceedings ArticleDOI
05 Sep 1999
TL;DR: A model to alleviate the problem of software evolution by making different distributed service versions substitutable, which allows flexible interoperability between different versions of client and server software.
Abstract: Software evolution is one of the problematic areas in software management. In a distributed environment this problem is harder to tackle because the dispersal of software makes it difficult to control a change, as well as the propagation of that change to whoever is using the evolving service. The paper presents a model to alleviate this problem by making different distributed service versions substitutable. The mechanism comprises a mediator that enables clients of an old-version service to successfully issue requests to an instance of a new-version service. The mediator considers functionality compatibility, rather than operation signature compatibility, when mediating the request. Thus instead of forcing change on the client side, this model allows flexible interoperability between different versions of client and server software. To support the model, existing distributed object architectures may require some extension to their type repositories to maintain the mapping information necessary for the work of the mediator.
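
A minimal sketch of the mediator idea in Python, assuming a hypothetical service whose v2 renamed an operation: the mapping table stands in for the extended type repository the abstract mentions, and old clients keep calling the old operation name.

```python
class VersionMediator:
    """Maps an old-version operation name onto a functionally compatible
    operation of a new-version service instance."""
    def __init__(self, new_service, mapping):
        self._svc = new_service
        self._mapping = mapping   # old op name -> adapter over the new service

    def invoke(self, old_op, *args, **kwargs):
        return self._mapping[old_op](self._svc, *args, **kwargs)

class ServiceV2:
    """Hypothetical new version: fetch() was renamed to get() and gained
    a timeout parameter."""
    def get(self, key, timeout=5.0):
        return f"value-for-{key}"

mediator = VersionMediator(ServiceV2(), {
    # Functionality-compatible mapping, not a signature-identical one.
    "fetch": lambda svc, key: svc.get(key, timeout=5.0),
})
print(mediator.invoke("fetch", "user42"))   # old client code keeps working
```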

Journal ArticleDOI
TL;DR: A software system for the management of geographically distributed high-performance computers that co-ordinates the co-operative use of resources in autonomous computing sites.
Abstract: We present a software system for the management of geographically distributed high‐performance computers. It consists of three components: 1. The Computing Center Software (CCS) is a vendor‐independent resource management software for local HPC systems. It controls the mapping and scheduling of interactive and batch jobs on massively parallel systems; 2. The Resource and Service Description (RSD) is used by CCS for specifying and mapping hardware and software components of (meta‐)computing environments. It has a graphical user interface, a textual representation and an object‐oriented API; 3. The Service Coordination Layer (SCL) co‐ordinates the co‐operative use of resources in autonomous computing sites. It negotiates between the applications' requirements and the available system services.

Proceedings ArticleDOI
05 Jan 1999
TL;DR: The approach to mixed-initiative agent team management, some representational issues involved in identifying compatible agent team members, and the capabilities needed to monitor team execution are discussed.
Abstract: The rapid growth in research and development of agent-based software systems has led to concerns about how human users will control the activities of teams of agents that must actively collaborate. We believe that practical multi-agent systems developed will often be comprised of small teams of heterogeneous agents, under direct supervision by users acting as "team leaders". We are now developing an environment for investigating approaches to controlling small to medium-sized groups of agents as coordinated teams. This environment will be used to explore mixed-initiative approaches to planning for the activities of agent teams and managing them during execution. Our approach arises out of a long-standing interest in mixed-initiative planning systems. In this paper, we discuss our approach to mixed-initiative agent team management, some representational issues involved in identifying compatible agent team members and the capabilities needed to monitor team execution.

01 Jan 1999
TL;DR: Computing through resource coalitions will create novel architectural challenges and opportunities and require new degrees of autonomy and automation in order to identify, compose, and track the resources.
Abstract: Widespread use of the Internet is enabling a fundamentally new approach to software development: computing through dynamically formed, task-specific, coalitions of distributed autonomous resources. The resources may be information, calculation, communication, control, or services. Unlike traditional software systems, the coalitions lack direct control over the incorporated resources, which are independently created and managed. Moreover, the resources may be transient, either because of the resource manager’s actions or because of service interruptions. Development tools for resource coalitions will require new degrees of autonomy and automation in order to identify, compose, and track the resources. An economically viable reward structure will be required to establish a rich population of available resources. Evaluation will require new models of adequacy rather than classical full correctness. Computing through resource coalitions will thus create novel architectural challenges and opportunities.

Journal ArticleDOI
TL;DR: Existing software radio research is reviewed, the SpectrumWare software radio system is described, and some important research challenges that must be addressed in order to apply software radio research to mobile networking applications are identified.
Abstract: A software radio is a wireless communications device in which some or all of the physical layer functions are implemented in software. The flexibility provided by the software implementation enables a single device to interoperate with other devices using different wireless physical layer technologies, by simply invoking the appropriate software. A mobile computing device equipped with a software radio would have access to a wide range of connectivity options including cellular, wireless LAN and satellite systems. This would not only enable seamless anytime, anywhere connectivity, but also provide users the flexibility of choosing from the available connectivity options to best suit their price/performance requirements. Most software radio research to date has been driven by military and commercial cellular applications. Mobile networking applications require additional functionality and present new software radio design constraints. This paper reviews existing software radio research, describes the SpectrumWare software radio system and identifies some important research challenges that must be addressed in order to apply software radio research to mobile networking applications.

Patent
23 Mar 1999
TL;DR: In this article, the problem of providing expandable applications by calling the install function for a software module and calling a subordination function giving a notice of the service of application that the software module requests is addressed.
Abstract: PROBLEM TO BE SOLVED: To obtain the technology for providing expandable applications by calling the install function for a software module and calling a subordinate functions for a software module giving a notice of the service of application that the software module requests. SOLUTION: A method implemented by a computer includes the reception of a software module 905 added to application 901. The software module 905 provides service for the application 901. In the application 901, the install function for the software module 905 generating an execution context for a software component can be called. Furthermore, the subordination function for the software module 905 notifying the service of the application 901 that the software module 905 requests can be called. COPYRIGHT: (C)2000,JPO
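
A minimal sketch of the scheme the patent describes, with all names invented: the application calls a module's install function to create its execution context, then its subordination function to learn which application services the module requests.

```python
class Application:
    def __init__(self):
        self.services = {"logging": print}   # services the application offers
        self.modules = []

    def add_module(self, module):
        context = module.install(self)       # create the execution context
        required = module.subordination()    # module declares needed services
        bindings = {name: self.services[name] for name in required}
        self.modules.append((module, context, bindings))

class EncryptionModule:
    """Hypothetical add-on module extending the application."""
    def install(self, app):
        return {"app": app, "state": {}}     # per-module execution context

    def subordination(self):
        return ["logging"]                   # application services requested

app = Application()
app.add_module(EncryptionModule())           # application is now extended
```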

Proceedings ArticleDOI
16 May 1999
TL;DR: By increasing the opportunity for buying and customizing software instead of building it from scratch, DSE attacks Brooks's "essential" difficulties of software development.
Abstract: Reducing the costs and risks associated with changing complex software systems has been a principal concern of software engineering research and development. One facet of this effort concerns decentralized software evolution (DSE), which, simply stated, enables third parties to evolve a software application independent of the organization that originally developed it. Popular approaches to DSE include application programming interfaces or APIs, software plug-ins, and scripting languages. Application vendors employ DSE as a means of attracting additional users to their applications-and, consequentially, increasing their market share-since it opens up the possibility that a third-party modified version of the application would satisfy the needs of end-users unsatisfied with the original version. This benefits everyone involved: the original application vendor sells more product since customization constitutes use; third-party developers deliver a product in less time and with lower cost by reusing software as opposed to building it from scratch; and customers receive a higher quality product, customized to suit their needs, in less time and with lower cost. By increasing the opportunity for buying and customizing software instead of building it from scratch, DSE attacks Brooks's "essential" difficulties of software development.

Journal ArticleDOI
TL;DR: This research proposes the development of a software architecture intended to support designers of large software systems in the early stages of software design, specifically conceptual design, based on principles derived from cognitive engineering.
Abstract: With the proliferation of large, complex software systems, reuse of previous software designs and software artifacts, such as operation concepts, requirements, specifications and source code, is an important issue for both industry and government. Reuse has long been expected to result in substantial productivity and quality gains. To date, this expectation has been largely unmet. One reason may be the lack of tools to support software reuse. This research proposes the development of one such tool, the Design Browser. The Design Browser is a software architecture intended to support designers of large software systems in the early stages of software design, specifically conceptual design. The Design Browser is based on principles derived from cognitive engineering (e.g. Woods & Roth, 1988a); naturalistic decision-making, particularly Klein's (1989) recognition-primed decision making model; and Kolodner's (1993) approach to case-based reasoning. As a proof-of-concept demonstration, the Design Browser was implemented for a NASA satellite control sub-system, the command management system (CMS). An empirical evaluation was conducted. It used the CMS Design Browser and participants who were part of the three user groups often involved in large-scale commercial software development. These groups are the software design team, the users and management. The results of the evaluation show that all three groups found the CMS Design Browser quite useful as demonstrated by actual performance and subjective rating.

Proceedings ArticleDOI
24 Mar 1999
TL;DR: The quality of service parameters, software architecture used in e-commerce, experimental data about transaction processing in the Internet, characteristics of digital library databases used in E-commerce and communication measurements for such data are presented.
Abstract: The performance of network and communication software is a major concern for making the electronic commerce applications in a distributed environment a success. The quality of service in electronic commerce can generically be measured by convenience, privacy/security, response time, throughput, reliability, timeliness, accuracy, and precision. We present the quality of service parameters, software architecture used in e-commerce, experimental data about transaction processing in the Internet, characteristics of digital library databases used in e-commerce and communication measurements for such data. We present a summary of e-commerce companies and their status and give an example of electronic trading as an application.

Proceedings ArticleDOI
10 Nov 1999
TL;DR: This paper presents a background of software components together with a framework for realizing World Wide Web-based learning components and exemplifies some of the common framework services and discusses how these can be adapted to a specific organization or extended to achieve discipline specific services.
Abstract: As an emerging technology, distributed software components hold promise for software interoperability, composition and reuse. This paper reports on applying distributed components as a paradigm for realizing technology enriched learning. We present a background of software components together with a framework for realizing World Wide Web-based learning components. Primarily, the web provides a data-centric interface to learning participants. An activity-centric view is more typical in object-based systems and for many learners. We show how automated support for workflow can be applied to achieve activity-based learning components on the web. One of the primary goals of the framework is that it is open to utilize various services already commonly in place in a university setting. We exemplify some of the common framework services and discuss how these can be adapted to a specific organization or extended to achieve discipline specific services.

Proceedings ArticleDOI
31 May 1999
TL;DR: A software framework that facilitates the development of adaptive applications by allowing the installation of application-dependent policies which govern the adaptive behaviour of the application.
Abstract: We present a software framework that facilitates the development of adaptive applications. This framework allows the installation of application-dependent policies which govern the adaptive behaviour of the application. These policies take into account user preferences and resource availability. The framework provides a generic interface to a variety of re-usable resource classes. We describe an application built using this framework.
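
The paper's actual interfaces are not given in the listing; a minimal sketch of a policy-driven adaptation loop, with invented policy and resource names, in which installed policies weigh user preferences against current resource availability:

```python
class AdaptationFramework:
    """Sketch: application-dependent policies are installed into a generic
    framework and consulted whenever resource availability changes."""
    def __init__(self, preferences):
        self.preferences = preferences
        self.policies = []

    def install_policy(self, policy):
        self.policies.append(policy)

    def on_resources_changed(self, resources):
        for policy in self.policies:
            policy(self.preferences, resources)

def video_policy(prefs, resources):
    # Degrade gracefully when bandwidth drops below the user's threshold.
    if resources["bandwidth_kbps"] < prefs["min_video_kbps"]:
        print("adapting: switching to audio-only")

fw = AdaptationFramework({"min_video_kbps": 300})
fw.install_policy(video_policy)
fw.on_resources_changed({"bandwidth_kbps": 120})
```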

Book ChapterDOI
11 Oct 1999
TL;DR: The presented system deals with network planning and provisioning according to network usage predictions based on customer subscription requests and shows how component technology is used to provide a flexible and extensible telecommunication business solution.
Abstract: Solutions in the network and service management layers of telecommunications management architectures are currently fragmented both in terms of standards and products. It is often, therefore, difficult for the developers of management systems to reuse and integrate management software from different sources in cost-effective solutions spanning the various TMN layers. This paper presents the analysis, architecture and design of a system that integrates service and network management building blocks in order to satisfy business process requirements for service order fulfilment. The presented system deals with network planning and provisioning according to network usage predictions based on customer subscription requests. It shows how component technology is used to provide a flexible and extensible telecommunication business solution.

Proceedings ArticleDOI
05 Jan 1999
TL;DR: This paper examines a software prototyping project conducted by one organization through the use of structured interviews, group sessions, and scenario analysis techniques, and provides a discussion of gathering information requirements by tapping into the knowledge of upper management, middle managers, and end-users, leading to the development of an effective information system.
Abstract: It is well understood that a large percentage of software development costs are incurred during the earlier phases of the software development process, namely information requirements analysis. Given the importance of information requirements analysis during the software development process, it is surprising that there is limited research in this area to advance the knowledge for better equipping information systems project managers and analysts. This paper examines a software prototyping project conducted by one organization through the use of structured interviews, group sessions, and scenario analysis techniques. As suggested by Davis (1982) and Byrd et al. (1992), combining multiple elicitation techniques at more than one level in the organization allowed explicit and tacit knowledge to be surfaced. A discussion of gathering information requirements by tapping into the knowledge of upper management, middle managers, and end-users is provided, leading to the development of an effective information system.

22 Sep 1999
TL;DR: In this general expansion of software's role in telecommunications, two trends are accelerating the transfer of power from traditional equipment vendors and network operators to new software-oriented companies.
Abstract: If traditional telecom equipment vendors don't have a strategy for the software business, they may be in the wrong business altogether. Deregulation may have prompted the restructuring of the telecommunications industry, but technological innovation is what keeps the process going. And for technological innovation, there is no business like the software business. In the 1980s, the computer industry was transformed when microprocessing technologies allowed new companies such as Microsoft to create a software industry that was largely independent of hardware. In the near future, a series of technological changes could likewise transform the structure of the telecommunications industry by shifting much of the value it creates from hardware to software. For incumbents, the implications are far-reaching. Traditional providers of telecom equipment, already challenged by newcomers, will now have to struggle against more and tougher software-oriented competitors, though it is possible that the traditional providers will themselves seize emerging software opportunities. Changes in the software arena will have a more subtle impact on network operators, which may lose control over many of the services provided over their networks as the balance of power shifts toward software. Yet the impending transformation also offers network operators novel business opportunities--in network management, for example, and hosting software applications on their networks. Advances in software will eventually blur the distinction between network operators and equipment providers. But in the immediate future, the advances are likely to turn the equipment providers' territory into a battleground. Which software developments will matter most in telecommunications? How might equipment providers address them? The increasing importance of software: today's public switched telephone networks (PSTNs) already rely on millions of lines of software code to control their basic functions: switching traffic across networks, allocating capacity, identifying faults, and billing customers. In the coming generation of telecom applications and services, software will be even more important. New advanced telephony services--such as call forwarding, personalized numbering (calls rerouted to a person's office, home, mobile phone, or voice mail as appropriate), and interactive services--are created almost entirely in software. So are such recent innovations as capacity reservation and smart routing, which improve the reliability of networks based on Internet Protocol (IP). Moreover, on the Internet and other IP networks, it is software that provides the means of transporting pages of information and underpins enhanced services, such as streamed audio and video. Software allows service providers to deploy new services quickly and cheaply, for once it has been developed, it can be downloaded to networks immediately, with no need to reengineer the underlying hardware. And because software can be repeatedly duplicated at almost no additional cost, it offers huge economies of scale. Software also makes it easier to manage ever more complex systems. Today, many corporations run wide-area networks (WANs) and intranets linking offices across the globe. Such WANs and intranets are in turn connected to the Internet and to the WANs and intranets of other corporations. Managing these agglomerations of networks would be impossible without new kinds of network management software.
In this general expansion of software's role in telecommunications, two trends are accelerating the transfer of power from traditional equipment vendors and network operators to new software-oriented companies. The first such trend is the migration of software from the core of the network to its periphery. In PSTNs, the software controlling the network is tied closely to the switches and related equipment at its center. Because the network operators manage this equipment themselves, they and their equipment providers determine who programs the network with software. …

Dissertation
01 Jan 1999
TL;DR: This thesis lays out a framework for describing and reasoning about adaptive systems in Open Service Architectures, with a special emphasis on coordination, mainly meant for analysis and design, but some of the ideas presented are also suitable as metaphors for implementations.
Abstract: An Open Service Architecture (OSA) is a software structure that makes an open set of information services available to an open set of users. The World Wide Web constitutes the most outstanding example of an OSA as of today. An important feature of an OSA is personalization, i.e. adapting the user interface, functionality, and information of services to its users. However, designers of such a feature are facing many problems, perhaps the biggest one being coordination. If services fail to coordinate how they adapt to users, chances are that the whole point of performing the adaptation, i.e. helping the user, is lost. In this thesis, I lay out a framework for describing and reasoning about adaptive systems in Open Service Architectures, with a special emphasis on coordination. This framework is mainly meant for analysis and design, but some of the ideas presented are also suitable as metaphors for implementations. An implementation of an adaptive system that was designed using this framework, adaptive help in the KIMSAC system, is also described. Supervisor: Annika Waern. Examiner: Lars Thalmann.

Journal ArticleDOI
Deependra Moitra
TL;DR: This paper considers how some small and midsize software organizations, through innovative software engineering and management practices, have been successful in today's demanding and highly competitive business environment.
Abstract: Innovations in software development techniques-including improvements to process, method, and management-will distinguish success from failure in the years ahead. The paper considers how some small and midsize software organizations, through innovative software engineering and management practices, have been successful in today's demanding and highly competitive business environment.

Journal Article
TL;DR: A component-based, distributed software architecture for building a service and network management system for emerging multimedia applications is developed, along with the specification of QoS parameters and their mapping onto ATM MIB objects.
Abstract: In this paper, we develop a component-based, distributed software architecture to build a service and network management system for the emerging multimedia applications. The architecture is constructed by using a generic software component model. The generic software component is instantiated to derive specific core service and management components. The software architecture is instantiated for QoS service and management. The specification of QoS parameters and their mapping onto ATM MIB objects are developed and explained. Prototype implementation issues are addressed.

Journal ArticleDOI
TL;DR: The paper considers how faulty customer/seller communication causes many software development problems and how developers must help customers realize that software development is an inherently complex and uncertain activity that can only succeed through close cooperation.
Abstract: The paper considers how faulty customer/seller communication causes many software development problems. At least part of this communications breakdown stems from customers' lack of comprehension regarding their role in the development process. Developers must help customers realize that software development is an inherently complex and uncertain activity that can only succeed through close cooperation.

ReportDOI
15 Nov 1999
TL;DR: A set of software acquisition and software engineering best practices that addresses the issues raised in the Senate Report is recommended, based upon the experience of The Aerospace Corporation in supporting the United States Air Force and the National Reconnaissance Office in the acquisition of DoD space systems.
Abstract: The purpose of this white paper is to address the issues raised in the recently published Senate Armed Services Committee Report 106-50 concerning Software Management Improvements for the Department of Defense (DoD). The text, titled Software Management Improvements, extracted from Title 8 (Acquisition Policy, Acquisition Management, and Related Issues) of Senate Report 106-50, is given for reference in Table 1-1 of the body of this report. This paper recommends a set of software acquisition and software engineering best practices that addresses the issues raised in the Senate Report. These recommendations are based upon the experience of The Aerospace Corporation in supporting the United States Air Force (USAF) and the National Reconnaissance Office (NRO) in the acquisition of DoD space systems. The domain of application of the recommended best practices, therefore, is the acquisition and development of large, software-intensive, mission-critical systems, such as space systems, which are for the most part unprecedented.