Showing papers on "Windows NT published in 1998"


Journal ArticleDOI
TL;DR: The paper describes the design of TAO, a high-performance, real-time CORBA 2.0-compliant implementation that runs on a range of OS platforms with real-time features, including VxWorks, Chorus, Solaris 2.x, and Windows NT, and presents TAO's real-time scheduling service, which can provide QoS guarantees for deterministic real-time CORBA applications.

588 citations


Journal ArticleDOI
TL;DR: A new innovation from Digital allows most x86 Windows applications to run on Alpha platforms with good performance.
Abstract: A new innovation from Digital allows most x86 Windows applications to run on Alpha platforms with good performance.

224 citations


Proceedings ArticleDOI
16 Apr 1998
TL;DR: This paper examines the performance of desktop applications running on the Microsoft Windows NT operating system on Intel x86 processors, contrasts these applications with the programs in the integer SPEC95 benchmark suite, and shows that the desktop applications have similar characteristics to the integer SPEC95 benchmarks for many of these metrics.
Abstract: This paper examines the performance of desktop applications running on the Microsoft Windows NT operating system on Intel x86 processors, and contrasts these applications to the programs in the integer SPEC95 benchmark suite. We present measurements of basic instruction set and program characteristics, and detailed simulation results of the way these programs use the memory system and processor branch architecture. We show that the desktop applications have similar characteristics to the integer SPEC95 benchmarks for many of these metrics. However, compared to the integer SPEC95 applications, desktop applications have larger instruction working sets, execute instructions in a greater number of unique functions, cross DLL boundaries frequently, and execute a greater number of indirect calls.

149 citations


Proceedings ArticleDOI
23 Jun 1998
TL;DR: A detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service is provided, and features added to make it easier to implement and manage fault-tolerant applications on MSCS are described.
Abstract: Microsoft Cluster Service (MSCS) extends the Windows NT operating system to support high-availability services. The goal is to offer an execution environment where off-the-shelf server applications can continue to operate, even in the presence of node failures. Later versions of MSCS will provide scalability via a node and application management system which allows applications to scale to hundreds of nodes. In this paper we provide a detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service. The paper also describes how some major applications use the MSCS features, and describes features added to make it easier to implement and manage fault-tolerant applications on MSCS.

126 citations


Proceedings ArticleDOI
01 Oct 1998
TL;DR: This work implements performance isolation in the Silicon Graphics IRIX operating system for three important system resources: CPU time, memory, and disk bandwidth, and shows that the proposed scheme succeeds at providing workstation-like isolation under heavy load, SMP-like latency under light load, and SMP-like throughput in all cases.
Abstract: Shared-memory multiprocessors (SMPs) are being extensively used as general-purpose servers. The tight coupling of multiple processors, memory, and I/O provides enormous computing power in a single system, and enables the efficient sharing of these resources. The operating systems for these machines (UNIX or Windows NT) provide very few controls for sharing the resources of the system among the active tasks or users. This unconstrained sharing model is a serious limitation for a server because the load placed by one user can adversely affect other users' performance in an unpredictable manner. We show that this lack of isolation is caused by the resource allocation scheme (or lack thereof) carried over from single-user workstations. Multi-user multiprocessor systems require more sophisticated resource management, and we show how the proposed "performance isolation" scheme can address the current weaknesses of these systems. We have implemented performance isolation in the Silicon Graphics IRIX operating system for three important system resources: CPU time, memory, and disk bandwidth. Running a number of workloads, we show that our proposed scheme is successful at providing workstation-like isolation under heavy load, SMP-like latency under light load, and SMP-like throughput in all cases.

119 citations


Book
01 May 1998
TL;DR: This landmark work on object-oriented software design presents a catalog of simple and succinct solutions to common design problems, created by four experienced designers, and is now available on CD-ROM.
Abstract: From the Publisher: First published in 1995, this landmark work on object-oriented software design presents a catalog of simple and succinct solutions to common design problems. Created by four experienced designers, the 23 patterns contained herein have become an essential resource for anyone developing reusable object-oriented software. In response to reader demand, the complete text and pattern catalog are now available on CD-ROM. System requirements: Netscape 2.0+ or IE 3.0 (the search engine requires Java support); 8 MB of memory minimum, 16 MB preferred; Windows 3.1, Windows NT 3.51+, Windows 95, Mac, or UNIX.

91 citations


Journal ArticleDOI
TL;DR: Software for gel image analysis and base-calling in fluorescence-based sequencing is described, consisting of two primary programs: BaseFinder, an extensible framework for trace processing, analysis, and base-calling, and GelImager, a framework for gel image manipulation.
Abstract: Software for gel image analysis and base-calling in fluorescence-based sequencing consisting of two primary programs, BaseFinder and GelImager, is described. BaseFinder is a framework for trace processing, analysis, and base-calling. BaseFinder is highly extensible, allowing the addition of trace analysis and processing modules without recompilation. Powerful scripting capabilities combined with modularity and multilane handling allow the user to customize BaseFinder to virtually any type of trace processing. We have developed an extensive set of data processing and analysis modules for use with the program in fluorescence-based sequencing. GelImager is a framework for gel image manipulation. It can be used for gel visualization, lane retracking, and as a front end to the Washington University Getlanes program. The programs were designed using a cross-platform development environment, currently allowing them to run in Windows NT, Windows 95, Openstep/Mach, and Rhapsody. Work is ongoing to deploy the software on additional platforms, including Solaris, Linux, and MacOS. This software has been thoroughly tested and debugged in the analysis of >2 million bp of raw sequence data from human chromosome 19 region q13. Overall sequencing accuracy was measured using a significant subset of these data, consisting of approximately 600 sequences, by comparing the individual shotgun sequences against the final assembled contigs. Also, results are reported from experiments that analyzed the accuracy of the software and two other well-known base-calling programs for sequencing the M13mp18 vector sequence. [The sequence data described in this paper have been submitted to the GenBank data library under accession no. AF025422]

85 citations


Proceedings ArticleDOI
03 Jun 1998
TL;DR: The results of this study should provide system designers with guidelines, as well as insight, into the design of an architecture based on NT for supporting applications with components having real-time constraints.
Abstract: Windows NT was not designed as a real-time operating system, but market forces and the acceptance of NT in industrial applications have generated a need for achieving real-time functionality with NT. As its use for real-time applications proliferates, we quantitatively characterize, based on an experimental evaluation, the obstacles NT places in the way. From these observations, we provide a set of recommendations for users to consider while building real-time applications on NT. These are validated by using NT for a prototype application involving real-time control that includes multimedia information processing. The results of this study should provide system designers with guidelines, as well as insight, into the design of an architecture based on NT for supporting applications with components having real-time constraints.

74 citations


Journal ArticleDOI
TL;DR: Spike, as discussed in this paper, is a performance tool developed by DIGITAL to optimize Alpha executables on the Windows NT operating system; it is designed for large, call-intensive programs and uses interprocedural optimization and profile feedback.
Abstract: Spike is a performance tool developed by DIGITAL to optimize Alpha executables on the Windows NT operating system. This optimization system has two main components: the Spike Optimizer and the Spike Optimization Environment. The Spike Optimizer reads in an executable, optimizes the code, and writes out the optimized version. The Optimizer uses profile feedback from previous runs of an application to guide its optimizations. Profile feedback is not commonly used in practice because it is difficult to collect, manage, and apply profile information. The Spike Optimization Environment provides a user-transparent profile feedback system that solves most of these problems, allowing a user to easily optimize large applications composed of many executables and dynamic link libraries (DLLs). Optimizing an executable image after it has been compiled and linked has several advantages. The Spike Optimizer can see the entire image and perform interprocedural optimizations, particularly with regard to code layout. The Optimizer can use profile feedback easily, because the executable that is profiled is the same executable that is optimized; no awkward mapping of profile data back to the source language takes place. Also, Spike can be used when the sources to an application are not available, which is beneficial when DIGITAL is working with independent software vendors (ISVs) to tune applications. Applications can be loosely classified into two categories: loop-intensive programs and call-intensive programs. Conventional compiler technology is well suited to loop-intensive programs. The important loops in a program in this category are within a single procedure, which is typically the unit of compilation. The control flow is predictable, and the compiler can use simple heuristics to determine the frequently executed parts of the procedure. Spike is designed for large, call-intensive programs; it uses interprocedural optimization and profile feedback. In call-intensive programs, the important loops span multiple procedures, and the loop bodies contain procedure calls. Consequently, optimizations on the loops must be interprocedural.

73 citations


Journal ArticleDOI
TL;DR: An open and reconfigurable modular tool kit is presented for the design of CNC systems for machine tools and machining process monitoring, based on a fully integrated, open, real-time, preemptive DSP operating system and a Windows NT application.

59 citations


Proceedings ArticleDOI
04 Nov 1998
TL;DR: A methodology and architecture are presented for performing intelligent black-box analysis of software that runs on the Windows NT platform, with the goals of developing intelligent robustness-testing techniques for commercial off-the-shelf (COTS) software and benchmarking the robustness of NT software in handling anomalous events.
Abstract: To date, most studies on the robustness of operating system software have focused on Unix-based systems. The paper develops a methodology and architecture for performing intelligent black-box analysis of software that runs on the Windows NT platform. The goals of the research are threefold: first, to develop intelligent robustness-testing techniques for commercial off-the-shelf (COTS) software; second, to benchmark the robustness of NT software in handling anomalous events; and finally, to identify robustness gaps to permit fortification for fault tolerance. The random and intelligent data design library environment (RIDDLE) is a tool for analyzing operating system software, system utilities, desktop applications, component-based software, and network services. RIDDLE was used to assess the robustness of native Windows NT system utilities as well as Win32 ports of the GNU utilities. Experimental results comparing the relative performance of the ported utilities versus the native utilities are presented.
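
The black-box approach above lends itself to a compact harness. The following C fragment is a minimal sketch of the general technique, not RIDDLE itself: launch a command-line utility with an anomalous argument, then classify the run as a hang, a crash, or a clean exit. The target utility, the over-long argument, and the 4-second timeout are illustrative assumptions.

/* Illustrative black-box robustness probe in the spirit of RIDDLE (not the
 * actual tool): run a utility with an anomalous argument and classify the
 * outcome.  The target name, argument, and timeout are assumptions. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

static void probe(const char *utility, const char *arg)
{
    char cmdline[8192];
    STARTUPINFO si = { sizeof si };
    PROCESS_INFORMATION pi;
    DWORD code;

    _snprintf(cmdline, sizeof cmdline, "%s %s", utility, arg);
    if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL,
                       &si, &pi)) {
        printf("%s: could not launch\n", utility);
        return;
    }
    if (WaitForSingleObject(pi.hProcess, 4000) == WAIT_TIMEOUT) {
        printf("%s: HANG\n", utility);           /* unresponsive */
        TerminateProcess(pi.hProcess, 1);
    } else {
        GetExitCodeProcess(pi.hProcess, &code);
        if (code == 0xC0000005)                  /* STATUS_ACCESS_VIOLATION */
            printf("%s: CRASH\n", utility);
        else
            printf("%s: exit %lu\n", utility, code);
    }
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
}

int main(void)
{
    char nasty[4096];            /* an over-long argument: one classic
                                    anomalous input among many */
    memset(nasty, 'A', sizeof nasty - 1);
    nasty[sizeof nasty - 1] = '\0';
    probe("sort.exe", nasty);
    return 0;
}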

MonographDOI
28 Sep 1998
TL;DR: WHOI Cable, as described by the authors, is a numerical program for analyzing the statics and dynamics of oceanographic cable structures; its nonlinear solver includes the effects of geometric and material nonlinearities, bending stiffness for seamless modeling of slack cables, and a model for the interaction of cable segments with the seafloor.
Abstract: A new computer program is described for analyzing the statics and dynamics of oceanographic cable structures. The numerical program, WHOI Cable, features a nonlinear solver that includes the effects of geometric and material nonlinearities, bending stiffness for seamless modeling of slack cables, and a model for the interaction of cable segments with the seafloor. The program solves surface and subsurface single-point mooring problems, systems with multiple anchored ends, and towing and drifter problems. Forcing includes waves, current, wind, ship speed, and pay-out of cable. The programs that make up WHOI Cable run under Unix, DOS, and Windows. A familiar Windows-style interface is available for Windows 95 and Windows NT platforms. The mathematical framework, numerical algorithm, and interface for WHOI Cable are described, and example applications from design and validation studies are presented.

Proceedings Article
03 Aug 1998
TL;DR: A set of components collectively named NT-SwiFT (Software Implemented Fault Tolerance) is described, which facilitates building fault-tolerant and highly available applications on Windows NT.
Abstract: More and more highly available applications are implemented on Windows NT. However, the current version of Windows NT (NT4) does not provide some facilities that are needed to implement these fault-tolerant applications. In this paper, we describe a set of components collectively named NT-SwiFT (Software Implemented Fault Tolerance) which facilitates building fault-tolerant and highly available applications on Windows NT. NT-SwiFT provides components for automatic error detection and recovery, checkpointing, event logging and replay, communication error recovery, incremental data replication, IP packet re-routing, etc. SwiFT components were originally designed on UNIX. The UNIX version was first ported to NT to run on UWIN [Korn97]. Gradually, a large portion of the software has been re-implemented to take advantage of native NT system services. This paper describes these components and compares the differences between the UNIX and NT implementations. We also describe some applications using these components and discuss how to leverage NT system services and cope with some missing features.

Proceedings ArticleDOI
M.A. King, J.F. Elder, B. Gomolka, E. Schmidt, M. Summers, K. Toop
11 Oct 1998
TL;DR: This paper summarizes a lengthy technical report (Gomolka et al., 1998), which details the evaluation procedure and the scoring of all component criteria, and should be useful to analysts selecting data mining tools to employ, as well as to developers aiming to produce better data mining products.
Abstract: Fourteen desktop data mining tools (or tool modules) ranging in price from US$75 to $25,000 (median <$1,000) were evaluated by four undergraduates inexperienced at data mining, a relatively experienced graduate student, and a professional data mining consultant. The tools ran under the Microsoft Windows 95, Microsoft Windows NT, or Macintosh System 7.5 operating systems, and employed decision trees, rule induction, neural networks, or polynomial networks to solve two binary classification problems, a multi-class classification problem, and a noiseless estimation problem. Twenty evaluation criteria and a standardized procedure for assessing tool qualities were developed and applied. The traits were collected in five categories: capability, learnability/usability, interoperability, flexibility, and accuracy. Performance in each of these categories was rated on a six-point ordinal scale, to summarize their relative strengths and weaknesses. This paper summarizes a lengthy technical report (Gomolka et al., 1998), which details the evaluation procedure and the scoring of all component criteria. This information should be useful to analysts selecting data mining tools to employ, as well as to developers aiming to produce better data mining products.

Book
01 Nov 1998
TL;DR: A comprehensive treatment of implementation considerations, Windows NT Thin Client Solutions: Implementing Terminal Server and Citrix Metaframe is an invaluable resource for system architects, system engineers, and network administrators who are integrating thin client technology into their networks.
Abstract: From the Publisher: A comprehensive treatment of implementation considerations, Windows NT Thin Client Solutions: Implementing Terminal Server and Citrix MetaFrame is an invaluable resource for system architects, system engineers, and network administrators who are integrating thin client technology into their networks. This book is a critical resource to help you evaluate the potential benefits of thin client technology for your specific network environment. Each stage of implementation is covered in detail, including: solutions for a wide variety of corporate networks, from single-server operations to multiple-server international enterprise networks; expert advice on determining desktop, client infrastructure, and wide area network requirements; vital information on integrating Citrix MetaFrame with Microsoft Terminal Server, a critical task for networks that use other operating systems in addition to Windows NT; and authoritative material on how to successfully install, optimize, and troubleshoot applications in a multi-user environment.

Book
01 Nov 1998
TL;DR: This book will approach the topic from the standpoint that a device driver is really an operating system extension, and will begin with an introduction to the general Windows NT operating system concepts relevant to drivers, then progress to more detailed information about the operating system.
Abstract: From the Publisher: The definitive, comprehensive, and technically accurate resource on Windows NT device drivers from internationally recognized experts in the field. The book approaches the topic from the standpoint that a device driver is really an operating system extension. Windows NT Device Drivers is the definitive technical reference in this topic area. The book begins with an introduction to the general Windows NT operating system concepts relevant to drivers, then progresses to more detailed information about the operating system, such as interrupt management and synchronization issues. Next, the I/O subsystem and how it interacts with drivers are explored, followed by detailed information on the implementation of standard kernel-mode drivers. Finally, alternative Windows NT driver architectures, such as SCSI, NDIS, video miniport, and WDM minidrivers, are discussed. The book provides NT network developers with the definitive resource on NT device drivers; it is more technically rigorous, comprehensive, and accurate than The Windows NT Device Driver Book by Baker, and is written by widely acknowledged and highly respected experts in the field.

Proceedings Article
03 Aug 1998
TL;DR: This paper investigates the performance of reading and writing large sequential files using the Windows NT 4.0 file system and shows that NTFS out-of-the-box performance is quite good, but overheads for small requests can be quite high.
Abstract: Large-scale database, data mining, and multimedia applications require large, sequential transfers and have bandwidth as a key requirement. This paper investigates the performance of reading and writing large sequential files using the Windows NT 4.0 file system. The study explores the performance of Intel Pentium Pro based memory and I/O subsystems, including the processor bus, the PCI bus, the SCSI bus, the disk controllers, and the disk media in a typical server or high-end desktop system. We provide details of the overhead costs at each level of the system and examine a variety of the available tuning knobs. We show that NTFS out-of-the-box performance is quite good, but overheads for small requests can be quite high. The best performance is achieved by using large requests, bypassing the file system cache, spreading the data across many disks and controllers, and using deep asynchronous requests. This combination allows us to reach or exceed the half-power point of all the individual hardware components.
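
The tuning recipe in the abstract maps onto a handful of Win32 flags. The C sketch below illustrates the pattern under stated assumptions (the 256 KB request size and a depth of four outstanding requests are illustrative choices, not the paper's measured optimum): open the file with buffering disabled and overlapped I/O enabled, then keep several large, aligned asynchronous reads in flight.

/* Sketch of the recipe: large sector-aligned requests, file-system cache
 * bypassed, several asynchronous reads outstanding.  REQ_SIZE and DEPTH
 * are illustrative choices. */
#include <windows.h>
#include <string.h>

#define DEPTH    4
#define REQ_SIZE (256 * 1024)   /* must be a multiple of the sector size */

void sequential_read(const char *path)
{
    HANDLE f = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING,
                          FILE_FLAG_NO_BUFFERING |  /* bypass the cache */
                          FILE_FLAG_OVERLAPPED,     /* allow async I/O  */
                          NULL);
    OVERLAPPED ov[DEPTH];
    void *buf[DEPTH];
    LONGLONG offset = 0;
    DWORD got;
    int i;

    if (f == INVALID_HANDLE_VALUE) return;
    for (i = 0; i < DEPTH; i++) {              /* prime the pipeline */
        /* VirtualAlloc returns page-aligned buffers, satisfying the
         * alignment rule that FILE_FLAG_NO_BUFFERING imposes */
        buf[i] = VirtualAlloc(NULL, REQ_SIZE, MEM_COMMIT, PAGE_READWRITE);
        memset(&ov[i], 0, sizeof ov[i]);
        ov[i].hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        ov[i].Offset     = (DWORD)offset;
        ov[i].OffsetHigh = (DWORD)(offset >> 32);
        ReadFile(f, buf[i], REQ_SIZE, NULL, &ov[i]);  /* issue, don't wait */
        offset += REQ_SIZE;
    }
    for (;;) {                                 /* steady state */
        for (i = 0; i < DEPTH; i++) {
            /* wait for the oldest request; FALSE means EOF or error */
            if (!GetOverlappedResult(f, &ov[i], &got, TRUE) || got == 0)
                goto done;
            /* ... consume buf[i] here ... */
            ov[i].Offset     = (DWORD)offset;  /* re-issue further ahead */
            ov[i].OffsetHigh = (DWORD)(offset >> 32);
            ReadFile(f, buf[i], REQ_SIZE, NULL, &ov[i]);
            offset += REQ_SIZE;
        }
    }
done:
    CloseHandle(f);
}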

Patent
01 Oct 1998
TL;DR: In this paper, the authors present a system and method which allows the interchange of Cookie information and standard Common Gateway Interface (CGI) variables between a user system and an On-Line Transaction Processing (OLTP) enterprise server.
Abstract: A system and method which allows the interchange of Cookie information and standard Common Gateway Interface (CGI) variables between a user system and an On-Line Transaction Processing (OLTP) enterprise server. The present invention also discloses a specialized form of a transaction gateway, known as a security gateway, which runs on a Windows NT or UnixWare Web Server machine, and is built as a client application to interoperate with an enterprise-based OLTP security service. Finally, the present invention discloses an enterprise-based OLTP security service, which is used in conjunction with the security gateway described above, which processes user generated authentication requests, and if successful, calls an end service requested by a user.

Proceedings Article
03 Aug 1998
TL;DR: This paper describes the solutions for several problems that any network protocol implementation for Windows NT will encounter and comments on the utility of access to the source code for the Windows NT product.
Abstract: We have created a publicly-available implementation of IPv6 for Windows NT. Because we have made our source code available, we hope that our implementation can serve as a base for networking research and supply sample code for other implementations. In this paper we describe our solutions for several problems that any network protocol implementation for Windows NT will encounter. Based on our experience, we also comment on the utility of access to the source code for the Windows NT product.

Proceedings Article
03 Aug 1998
TL;DR: The design and implementation of a soft real-time CPU server for time-sensitive multimedia applications in the Windows NT environment is presented; the server is based on a careful manipulation of the real-time (RT) priority class and does not require any modifications to the kernel.
Abstract: We present the design and implementation of a soft real-time CPU server for time-sensitive multimedia applications in the Windows NT environment. The server is a user-level daemon process from which multimedia applications can request and acquire periodic processing time in the well-known form of (processing time per period). Our server is based on a careful manipulation of the real-time (RT) priority class, and it does not require any modifications to the kernel. It provides (1) the rate monotonic scheduling algorithm, (2) support for multiple processors (SMP model), (3) limited overrun protection among real-time (RT) processes, (4) fair allocation between the RT and time-sharing (TS) processes so that TS processes are not starved for processing time, (5) accessibility with normal user privileges, and (6) an efficient implementation. We have implemented the CPU scheduling server on top of the Windows NT 4.0 operating system with dual Pentium processors, and we have shown through experiments that our CPU scheduling server provides good soft real-time support for multimedia applications.
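
The enabling observation is that the RT priority class is reachable from user level, so a reservation can be enforced by boosting and demoting a client thread from an ordinary daemon. The C fragment below sketches only that mechanism, under the assumptions noted in the comments; the paper's actual server adds rate monotonic ordering, SMP support, overrun protection, and fairness toward time-sharing processes.

/* Minimal sketch of the underlying mechanism: enforce one reservation of
 * 'budget' ms every 'period' ms for thread 'client', purely from user
 * level.  Assumes the client's process is also in the REALTIME class and
 * that 'client' was opened with THREAD_SET_INFORMATION access. */
#include <windows.h>

void serve(HANDLE client, DWORD budget, DWORD period)
{
    /* the server outranks its clients so it always regains control */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    for (;;) {
        SetThreadPriority(client, THREAD_PRIORITY_HIGHEST); /* dispatch   */
        Sleep(budget);              /* server sleeps; the client runs     */
        SetThreadPriority(client, THREAD_PRIORITY_LOWEST);  /* slice over */
        Sleep(period - budget);     /* remainder of the period            */
    }
}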

Proceedings Article
03 Aug 1998
TL;DR: This article will discuss the experiences porting the GNU development tools to the Win32 host and explore the development and architecture of the Cygwin32 library.
Abstract: Cygwin32 is a full-featured Win32 porting layer for UNIX applications, compatible with all Win32 hosts (currently Microsoft Windows NT, Windows 95, and Windows 98). It was invented in 1995 by Cygnus Solutions as part of the answer to the question of how to port the GNU development tools to the Win32 host. The Win32-hosted GNUPro compiler tools that use the library are available for a variety of embedded processors as well as a native version for writing Win32 programs. By basing this technology on the GNU tools, Cygnus provides developers with a high-performance, feature-rich 32-bit code development environment, including a graphical source-level debugger. Cygwin32 is a Dynamic-Linked Library (DLL) that provides a large subset of the system calls found in common UNIX implementations. The current release includes all POSIX.1/90 calls except for setuid and mkfifo, all ANSI C standard calls, and many common BSD and SVR4 services including Berkeley sockets. This article will discuss our experiences porting the GNU development tools to the Win32 host and explore the development and architecture of the Cygwin32 library.
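
The practical effect is that ordinary POSIX code builds for Win32 without source changes. As a small illustration (ours, not the paper's): fork() and pipe() have no direct Win32 equivalents, yet under Cygwin32 the program below compiles and runs as-is because the DLL implements those calls on top of Win32 primitives.

/* Plain POSIX code, no Win32 calls: under Cygwin32 this compiles and runs
 * unchanged because the DLL supplies fork(), pipe(), read(), and write(). */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[32];
    ssize_t n;

    pipe(fd);
    if (fork() == 0) {                          /* child writes to the pipe */
        write(fd[1], "hello from POSIX\n", 17);
        _exit(0);
    }
    wait(NULL);                                 /* parent reaps the child */
    n = read(fd[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}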

Journal ArticleDOI
TL;DR: The upcoming Windows NT 5.0 release of Windows NT Clustering Service will improve ease of use through a wizard that guides the user through the creation of cluster resources.
Abstract: The Windows NT Clustering Service supports high-availability file servers, databases, and generic applications and services. A cluster is a collection of computer nodes-independent, self-contained computer systems-that work together to provide a more reliable and powerful system than a single node. In general, the goal of a cluster is to distribute a computing load over several systems, without users or system administrators being aware of the independent systems running the services. The Windows NT Clustering Service detects and restarts failed hardware or software components or migrates the failed component's functionality to another node if local restart is not possible. It also offers a much simpler user and programming interface. Microsoft Cluster Service for Windows NT has been shipping for about a year on Windows NT version 4.0. The upcoming Windows NT 5.0 release of Windows NT Clustering Service will improve ease of use through a wizard that guides the user through the creation of cluster resources.

Proceedings ArticleDOI
20 Apr 1998
TL;DR: An implementation model for TMO support mechanisms in CORBA-compliant commercial-off-the-shelf (COTS) platforms and an implementation of the proposed model realized on top of the Windows NT operating system and the Orbix object request broker are discussed.
Abstract: Object-oriented analysis and design methodologies have become popular in the development of non-real-time business data processing applications. However, conventional object-oriented techniques have had minimal impact on the development of real-time applications, mainly because these techniques do not explicitly address key characteristics of real-time systems, in particular timing requirements. The Time-triggered Message-triggered Object (TMO) structuring is in our view the most natural extension of object-oriented design and implementation techniques that allows the system designer to explicitly specify the timing characteristics of the data and function components of an object. To facilitate TMO-based design of real-time systems in the most cost-effective manner, it is essential to provide execution support mechanisms in well-established commercial software/hardware platforms compliant with industry standards. In this paper, we present an implementation model for TMO support mechanisms in CORBA-compliant commercial-off-the-shelf (COTS) platforms. We first introduce a natural and simple mapping between TMOs and CORBA objects. Then, we identify the services to be provided by the TMO support subsystem and an efficient way these services should be implemented. The rest of the paper discusses an implementation of the proposed model realized on top of the Windows NT operating system and the Orbix object request broker.

Book ChapterDOI
01 Jun 1998
TL;DR: An open system architecture is described that allows independently developed hard real-time applications to run together and supports their reconfiguration at run-time, along with the design and implementation of the open system within the framework of the Windows NT operating system.
Abstract: This paper describes an open system architecture that allows independently developed hard real-time applications to run together and supports their reconfiguration at run-time. In the open system, each real-time application is executed by a server and scheduled by a two-level hierarchical scheduler. At the lower level, the OS scheduler schedules all the servers on an EDF basis. At the upper level, the server scheduler of each server schedules the ready jobs of the application that the server executes, according to the algorithm chosen for the application. The two-level scheduler never accepts a real-time application that may not be schedulable in the open system, and once it accepts a real-time application, it guarantees the schedulability of the application regardless of the behavior of other applications in the system. The paper also describes the design and implementation of the open system within the framework of the Windows NT operating system. The implementation consists of three key components: the two-level hierarchical kernel scheduler, common system service providers, and a real-time application programming interface.
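
A hypothetical sketch of the dispatch decision this architecture implies is shown below; the types and the pick_job callback are illustrative assumptions, not the paper's interfaces. The lower level selects the eligible server with the earliest deadline, and the upper level then defers to that server's application-chosen policy.

/* Hypothetical sketch of the two-level dispatch described above: the
 * lower-level pass is EDF across servers; the chosen server then picks a
 * job by whatever policy its application registered.  Types and the
 * pick_job callback are illustrative. */
#include <stddef.h>

typedef struct job Job;

typedef struct server {
    unsigned long long deadline;            /* current server deadline */
    int has_ready_jobs;
    Job *(*pick_job)(struct server *);      /* per-application policy  */
} Server;

/* lower level: earliest-deadline-first over all eligible servers */
Job *dispatch(Server *servers, size_t n)
{
    Server *best = NULL;
    size_t i;
    for (i = 0; i < n; i++) {
        if (!servers[i].has_ready_jobs)
            continue;
        if (best == NULL || servers[i].deadline < best->deadline)
            best = &servers[i];
    }
    /* upper level: the winning server's own scheduler chooses the job */
    return best ? best->pick_job(best) : NULL;
}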

01 Jan 1998
TL;DR: The findings illustrate that general-purpose operating systems like Windows NT and Solaris are not yet suited to meet the demands of applications with stringent QoS requirements, and LynxOS does enable predictable and efficient ORB performance, thereby making it a compelling OS platform for real-time CORBA applications.
Abstract: There is increasing demand to extend Object Request Broker (ORB) middleware to support distributed applications with stringent real-time requirements. However, lack of proper OS support can yield substantial inefficiency and unpredictability for ORB middleware. This paper provides two contributions to the study of OS support for real-time ORBs. First, we empirically compare and evaluate the suitability of real-time operating systems, VxWorks and LynxOS, and general-purpose operating systems with real-time extensions, Windows NT, Solaris, and Linux, for real-time ORB middleware. While holding the hardware and ORB constant, we vary the operating system and measure platform-specific variations, such as latency, jitter, operation throughput, and CPU processing overhead. Second, we describe key areas where these operating systems must improve to support predictable, efficient, and scalable ORBs. Our findings illustrate that general-purpose operating systems like Windows NT and Solaris are not yet suited to meet the demands of applications with stringent QoS requirements. However, LynxOS does enable predictable and efficient ORB performance, thereby making it a compelling OS platform for real-time CORBA applications. Linux provides good raw performance, though it is not a real-time operating system. Surprisingly, VxWorks does not scale robustly. In general, our results underscore the need for a measure-driven methodology to pinpoint sources of priority inversion and non-determinism in real-time ORB endsystems.

Book
22 Dec 1998
TL;DR: This text provides a series of 12 lab exercises that ask students to write programs for NT's Win32 API; each exercise contains an introduction to the relevant NT concepts needed.
Abstract: This text shows how basic operating system concepts are designed and implemented on Windows NT. It provides a series of 12 lab exercises that ask students to write programs for NT's Win32 API. Each exercise contains an introduction to the relevant NT concepts needed.
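
For flavor, an exercise in this style might ask for something like the following (an illustrative example of ours, not one of the book's twelve): create worker threads through the Win32 API and wait for them, exercising the kernel-object and synchronization concepts the text introduces.

/* Illustrative Win32 threading exercise: spawn workers, join them all. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    printf("worker %d running\n", (int)(INT_PTR)arg);
    return 0;
}

int main(void)
{
    HANDLE h[4];
    int i;
    for (i = 0; i < 4; i++)
        h[i] = CreateThread(NULL, 0, worker, (LPVOID)(INT_PTR)i, 0, NULL);
    WaitForMultipleObjects(4, h, TRUE, INFINITE);   /* join all workers */
    for (i = 0; i < 4; i++)
        CloseHandle(h[i]);
    return 0;
}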

Proceedings ArticleDOI
07 Dec 1998
TL;DR: Purple Penelope is a prototype that extends Windows NT security with discretionary labelling, easy-to-use role-based access controls, and effective accounting and auditing measures for shared files.
Abstract: Modern interconnected computer systems handling classified information can be built using mainstream COTS software platforms. The technique provides each user with a private desktop in which to work, along with services for sharing data. Within a desktop, the user is helped to label their data. When data is shared, labelling prevents accidental compromise, but other measures defend against other forms of compromise. Purple Penelope is a prototype that extends Windows NT security to support this approach. It adds discretionary labelling, easy-to-use role-based access controls and effective accounting and auditing measures to shared files.

Proceedings ArticleDOI
17 Jun 1998
TL;DR: This work describes a framework for asynchronous as well as synchronous collaboration, allowing the available participants to share the view and the control of the session simultaneously, and to record the screen images or frame buffers so that absent participants can retrieve and play back the session at a later stage with VCR-like controls.
Abstract: For the last decade, research in CSCW (computer-supported cooperative work) has focused on synchronous collaboration, which requires the participants involved in common tasks to remotely share computer display workspaces simultaneously without leaving their workplaces. However, to support truly global cooperative work, asynchronous collaboration is equally prominent, in order to accommodate participants who may not be available for the synchronous CSCW session. These participating individuals, whether working synchronously or asynchronously, may be mobile and may have to connect to and disconnect from the session repeatedly with ubiquitous systems. We describe a framework for asynchronous as well as synchronous collaboration. The framework provides facilities to transfer the screen images or frame buffers of the ongoing CSCW session to remote users, allowing the available participants to share the view and the control of the session simultaneously, and to record the screen images or frame buffers for the absent participants to retrieve and play back the session at a later stage with VCR-like controls (i.e., fast forward, rewind, play, and stop). The frame buffers are transferred and recorded in units of rectangles containing pixel values of the screen images. These rectangles are platform independent and can be dynamically directed to and displayed by heterogeneous systems such as X Windows or Windows NT, or by Web browsers such as Netscape.
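
Because the unit of transfer and recording is a rectangle of pixel values, a session recorder can be a thin layer over a stream of tagged headers. The C sketch below uses a hypothetical encoding, with the field layout assumed for illustration rather than taken from the paper; the per-rectangle timestamp is what makes VCR-style replay positioning possible.

/* Hypothetical on-wire/on-disk unit for the session recorder: a screen
 * rectangle plus its pixels.  The field layout is an assumption; a real
 * implementation would also pin down byte order and pixel format so that
 * heterogeneous viewers (X Windows, Windows NT, a browser) can decode it. */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

typedef struct {
    uint16_t x, y;           /* top-left corner on the shared screen      */
    uint16_t width, height;  /* extent of the updated region              */
    uint32_t timestamp_ms;   /* offset into the session, for VCR controls */
} RectHeader;

/* append one rectangle update to an in-memory recording buffer;
 * returns the number of bytes written */
size_t record_rect(uint8_t *out, const RectHeader *hdr,
                   const uint32_t *pixels)
{
    size_t n = (size_t)hdr->width * hdr->height;
    memcpy(out, hdr, sizeof *hdr);               /* header                */
    memcpy(out + sizeof *hdr, pixels, n * 4);    /* 32-bit pixel payload  */
    return sizeof *hdr + n * 4;
}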

Journal Article
TL;DR: It is argued that for traces of today’s workloads to be accurate, they must capture the operating system execution as well as the native application execution, which has been a driving force behind the development and use of software tools such as the PatchWrx dynamic execution-tracing toolset.
Abstract: The computer architecture research community commonly uses trace-driven simulation in pursuing answers to a variety of design issues. Architects spend a significant amount of time studying the characteristics of benchmark programs by examining traces, i.e., samples taken from program execution. Popular benchmark programs include the SPEC and the BYTEmark benchmark test suites. Since the underlying assumption is that these programs generate workloads that represent user applications, today's computer designs have been optimized based on the characteristics of these benchmark programs. Although the authors of popular benchmarks are well intentioned, the resulting workloads lack operating system execution and consequently do not represent some of the most prevalent desktop applications, e.g., Microsoft Word, Microsoft Visual C/C++, and Microsoft Internet Explorer. Such applications make heavy use of application programming interfaces (APIs), which in turn execute many instructions in the operating system. As a result, the overall performance of many desktop applications depends on efficient operating system interaction. Clearly, operating system overhead can greatly reduce the benefits of a new computer design feature. Past architectural studies, however, have generally ignored operating system interaction because few tools can generate operating system–rich traces. This paper discusses the ongoing joint efforts of Northeastern University and Compaq Computer Corporation to capture operating system–rich traces on DIGITAL Alpha-based machines running the Microsoft Windows NT operating system. We argue that for traces of today's workloads to be accurate, they must capture the operating system execution as well as the native application execution. This need to capture complete program trace information has been a driving force behind the development and use of software tools such as the PatchWrx dynamic execution-tracing toolset, which we describe in this paper. The PatchWrx toolset was originally developed by Sites and Perl at Digital Equipment Corporation's Systems Research Center, who described PatchWrx, as developed for Windows NT version 3.5, in "Studies of Windows NT Performance Using Dynamic Execution Traces."