
Showing papers in "Ibm Journal of Research and Development in 1996"


Journal ArticleDOI
J. F. Ziegler1
TL;DR: The terrestrial flux of nucleons can be attenuated by shielding, making a significant reduction in the electronic system soft-error rate, and estimates of such attenuation are made.
Abstract: This paper reviews the basic physics of those cosmic rays which can affect terrestrial electronics. Cosmic rays at sea level consist mostly of neutrons, protons, pions, muons, electrons, and photons. The particles which cause significant soft fails in electronics are those that interact via the strong force: neutrons, protons, and pions. At sea level, about 95% of these particles are neutrons. The quantitative flux of neutrons can be estimated to within 3×, and the relative variation in neutron flux with latitude, altitude, diurnal time, earth's sidereal position, and solar cycle is known with even higher accuracy. The possibility of two particles of a cascade interacting with a single circuit to cause two simultaneous errors is discussed. The terrestrial flux of nucleons can be attenuated by shielding, making a significant reduction in the electronic system soft-error rate. Estimates of such attenuation are made.
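
As a rough illustration of the shielding estimate, the sketch below assumes a simple exponential attenuation model; the attenuation length and the unshielded flux are placeholder values, not numbers taken from the paper.

# Hedged illustration: exponential shielding attenuation of the sea-level
# neutron flux. The attenuation length and unshielded flux below are
# placeholder assumptions, not values from the paper.
import math

def shielded_flux(unshielded_flux, areal_density_g_cm2, atten_length_g_cm2=150.0):
    """Attenuated flux behind a shield of given areal density (g/cm^2)."""
    return unshielded_flux * math.exp(-areal_density_g_cm2 / atten_length_g_cm2)

if __name__ == "__main__":
    phi0 = 20.0  # assumed sea-level neutron flux, neutrons/cm^2/hour (placeholder)
    for thickness_cm in (0, 30, 100, 200):   # concrete shield thickness
        areal = thickness_cm * 2.3           # concrete density ~2.3 g/cm^3
        print(f"{thickness_cm:4d} cm concrete -> {shielded_flux(phi0, areal):.2f} n/cm^2/h")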

528 citations


Journal ArticleDOI
J. F. Ziegler1, Huntington W. Curtis1, H. P. Muhlfeld1, C. J. Montrose1, B. Chin 
TL;DR: The experimental work at IBM over the last fifteen years in evaluating the effect of cosmic rays on terrestrial electronic components became a significant factor in IBM's efforts toward improved product reliability.
Abstract: This review paper has described the experimental work at IBM over the last fifteen years in evaluating the effect of cosmic rays on terrestrial electronic components. This work originated in 1978, went through several years of research to verify its magnitude, and became a significant factor in IBM's efforts toward improved product reliability.

470 citations


Journal ArticleDOI
TL;DR: A review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions shows that cosmic rays are an important source of the ionizing radiation that causes soft errors.
Abstract: This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips.
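
The separation logic can be illustrated with a toy calculation (all numbers invented): underground measurements see essentially only the alpha-particle component, and the cosmic component inferred at sea level should scale with the known altitude dependence of the neutron flux.

# Toy illustration (invented numbers) of separating alpha-particle and
# cosmic-ray contributions to the soft-error rate (SER) from measurements
# taken underground, at sea level, and at altitude.
def decompose_ser(ser_underground, ser_sea_level, flux_ratio_altitude):
    """Underground shielding removes the cosmic component, so the underground
    SER estimates the alpha contribution; the cosmic part should scale with
    the neutron-flux ratio between altitude and sea level."""
    ser_alpha = ser_underground
    ser_cosmic_sea = ser_sea_level - ser_alpha
    predicted_altitude = ser_alpha + flux_ratio_altitude * ser_cosmic_sea
    return ser_alpha, ser_cosmic_sea, predicted_altitude

alpha, cosmic, predicted = decompose_ser(
    ser_underground=120.0,    # FIT, assumed
    ser_sea_level=500.0,      # FIT, assumed
    flux_ratio_altitude=5.0,  # assumed neutron-flux ratio vs. sea level
)
print(alpha, cosmic, predicted)  # compare 'predicted' with a measured altitude SER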

170 citations


Journal ArticleDOI
L. B. Freeman1
TL;DR: It is concluded that a range in values of the critical charge of a cell due to normal manufacturing and operating tolerances must be considered when calculating soft-error rates for a chip.
Abstract: The critical charge, Q_crit, of a memory array storage cell is defined as the largest charge that can be injected without changing the cell's logic state. The Q_crit of a Schottky-coupled complementary bipolar SRAM array is evaluated in detail. An operational definition of critical charge is made, and the critical charge for the cell is determined by circuit simulation. The dependence of critical charge on upset-pulse wave shape, statistical variations of power supply voltage, temperature gradients, manufacturing process tolerances, and design-related influences due to word- and drain-line resistance was also calculated. A 2× range in Q_crit is obtained for the SRAM memory array cell from simulations of normal (±3σ) variations in manufacturing process tolerances, the shape of the upset current pulse, and local cell temperatures. Using SEMM, a 60× variation of the soft-error rate (SER) is obtained for this range in Q_crit. The calculated SER is compared to experimental values for the chip obtained by accelerated testing. It is concluded that a range in values of the critical charge of a cell due to normal manufacturing and operating tolerances must be considered when calculating soft-error rates for a chip.
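
To see why a 2× spread in critical charge can translate into a much larger spread in SER, the sketch below uses a commonly assumed exponential sensitivity model, SER ≈ A exp(-Q_crit/Q_s); this is not the SEMM calculation of the paper, and the charge values are invented.

# Hedged illustration of why a modest spread in critical charge produces a
# large spread in SER, using a simple empirical exponential sensitivity model.
# This is NOT the SEMM calculation; the numbers are assumed for illustration.
import math

def ser_ratio(q_low, q_high, q_s):
    """Ratio of SER at the low and high ends of a critical-charge range."""
    return math.exp((q_high - q_low) / q_s)

q_low = 50.0                   # fC, assumed lower end of the Qcrit range
q_high = 2 * q_low             # the paper reports roughly a 2x range
q_s = q_low / math.log(60.0)   # charge scale chosen so a 2x Qcrit range gives ~60x SER
print(f"SER spread over the Qcrit range: ~{ser_ratio(q_low, q_high, q_s):.0f}x")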

148 citations


Journal ArticleDOI
P. C. Murley1, G. R. Srinivasan1
TL;DR: SEMM (Soft-Error Monte Carlo Modeling) calculates the soft-error rate (SER) of semiconductor chips due to ionizing radiation, used primarily to determine whether chip designs meet SER specifications.
Abstract: The application of a computer program, SEMM (Soft-Error Monte Carlo Modeling), is described. SEMM calculates the soft-error rate (SER) of semiconductor chips due to ionizing radiation. Used primarily to determine whether chip designs meet SER specifications, the program requires detailed layout and process information and circuit Q_crit values.
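
A minimal event-by-event Monte Carlo sketch in the spirit of such a simulator is shown below; it is not SEMM, and the simplified charge-collection model and event rate are invented placeholders.

# Minimal event-by-event Monte Carlo sketch in the spirit of an SER simulator.
# This is NOT SEMM; the charge-collection model and rates are invented.
import random

def simulate_ser(q_crit_fc, events_per_hour, n_events=100_000, seed=1):
    """Estimate upsets/hour by drawing collected charge for random events
    and counting those that exceed the node's critical charge."""
    rng = random.Random(seed)
    upsets = 0
    for _ in range(n_events):
        # Assumed charge-collection model: exponentially distributed
        # collected charge with a 10 fC mean (placeholder).
        q_collected = rng.expovariate(1.0 / 10.0)
        if q_collected > q_crit_fc:
            upsets += 1
    return events_per_hour * upsets / n_events

print(simulate_ser(q_crit_fc=50.0, events_per_hour=1e-3))  # upsets/hour (toy numbers)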

132 citations


Journal ArticleDOI
TL;DR: The main steelmaking processes are described and it is shown how scheduling affects the effectiveness of plant operations, and several different approaches for computerized scheduling solutions are described.
Abstract: This paper describes primary production scheduling in the steel industry—the problem and the approaches to the solution. The scheduling problem in steel plants is known to be among the most difficult of several industrial scheduling problems. We first describe the main steelmaking processes and show how scheduling affects the effectiveness of plant operations. We characterize the problems associated with scheduling steelmaking activities to achieve business objectives of delivering quality steel on time to customers, while minimizing operating costs. We then describe several different approaches for computerized scheduling solutions. They include application of techniques in operations research, artificial intelligence, and a hybrid of these two. We conclude by describing advanced techniques for integrated scheduling of steel plants.

117 citations


Journal ArticleDOI
H. H. K. Tang1
TL;DR: The key issues of cosmic-ray-induced soft-error rates, SER, in microelectronic devices are discussed from the viewpoint of fundamental atomic and nuclear interactions between high-energy particles and semiconductors.
Abstract: The key issues of cosmic-ray-induced soft-error rates, SER (also referred to as single-event upset, SEU, rates) in microelectronic devices are discussed from the viewpoint of fundamental atomic and nuclear interactions between high-energy particles and semiconductors. From sea level to moderate altitudes, the cosmic ray spectrum is dominated by three particle species: nucleons (protons and neutrons), pions, and muons. The characteristic features of high-energy nuclear reactions of these particles with light elements are reviewed. A major cause of soft errors is identified as the electron-hole pairs generated by ionization from the secondary nuclear fragments produced in inelastic collisions between cosmic-ray particles and nuclei in the host material. A state-of-the-art nuclear spallation reaction model, NUSPA, is developed to simulate these reactions. This model is tested and validated by a large set of nuclear experiments. It is used to generate the crucial database for the soft-error simulators which are currently used throughout IBM for device and circuit analysis. The relative effectiveness of nucleons, pions, and muons as soft-error-inducing agents is evaluated on the basis of nuclear reaction rate calculations and energy-deposition analysis.

114 citations


Journal ArticleDOI
TL;DR: An overview of the Vatican Library Project, a description of the system being developed to satisfy its needs, and a discussion of how the technical challenges are being addressed are provided.
Abstract: The Vatican Library is an extraordinary repository of rare books and manuscripts. Among its 150,000 manuscripts are early copies of works by Aristotle, Dante, Euclid, Homer, and Virgil. Yet today access to the Library is limited. Because of the time and cost required to travel to Rome, only some 2000 scholars can afford to visit the Library each year. Through the Vatican Library Project, we are exploring the practicality of providing digital library services that extend access to portions of the Library's collections to scholars worldwide, as an early example of providing digital library services that extend and complement traditional library services. A core goal of the project is to provide access via the Internet to some of the Library's most valuable manuscripts, printed books, and other sources to a scholarly community around the world. A multinational, multidisciplinary team is addressing the technical challenges raised by that goal, including
• Development of a multiserver system suitable for providing information to scholars worldwide.
• Capture of images of the materials with faithful color and sufficient detail to support scholarly study.
• Protection of the on-line materials, especially images, from misappropriation.
• Development of tools to enable scholars to locate desired materials.
• Development of tools to enable scholars to scrutinize images of manuscripts.
In this paper, we provide an overview of the project, a description of the system being developed to satisfy its needs, and a discussion of how the technical challenges are being addressed.

99 citations


Journal ArticleDOI
TL;DR: The BooleDozer logic synthesis system has been widely used within IBM to successfully synthesize processor and ASIC designs and is described, including its organization, main algorithms, and how it fits into the design process.
Abstract: Logic synthesis is the process of automatically generating an optimized logic-level representation from a high-level description. With the rapid advances in integrated circuit technology and the resultant growth in design complexity, designers increasingly rely on logic synthesis to shorten the design time while achieving performance objectives. This paper describes the IBM logic synthesis system BooleDozer™, including its organization, main algorithms, and how it fits into the design process. The BooleDozer logic synthesis system has been widely used within IBM to successfully synthesize processor and ASIC designs.

93 citations


Journal ArticleDOI
TL;DR: Simple relationships derived from measurement of more than 80 different chips manufactured over 20 years allow total cosmic soft-error rate (SER) to be estimated after only limited testing.
Abstract: This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test and with feedback to design allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish the chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1Mb of a 16Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurement of more than 80 different chips manufactured over 20 years allow total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar tests determined by “field testing” (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to a factor of 2.
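
The partial-array projection can be illustrated with a toy scaling calculation (numbers invented): the accelerated upset rate measured on the working fraction of the array is scaled up to the full chip and then divided by the beam acceleration factor.

# Toy illustration (assumed numbers) of projecting a full-chip soft-error rate
# from a partly functional part, e.g. only 1 Mb of a 16 Mb memory working.
def project_full_chip_ser(measured_upsets, beam_hours, working_bits, total_bits,
                          acceleration_factor):
    """Scale the accelerated, partial-array upset rate to a full-chip,
    field-equivalent rate, assuming working cells are representative."""
    partial_rate = measured_upsets / beam_hours          # upsets/hour, accelerated
    full_chip_rate = partial_rate * (total_bits / working_bits)
    return full_chip_rate / acceleration_factor          # field-equivalent upsets/hour

rate = project_full_chip_ser(measured_upsets=40, beam_hours=2.0,
                             working_bits=1 * 2**20, total_bits=16 * 2**20,
                             acceleration_factor=1e6)    # all values assumed
print(f"{rate:.2e} upsets/hour (field-equivalent, toy numbers)")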

93 citations


Journal ArticleDOI
G. R. Srinivasan1
TL;DR: The paper emphasizes the need for SER simulation using the actual chip circuit model, which includes device, process, and technology parameters, as opposed to using either the discrete device simulation or the generic circuit simulation that is commonly employed in SER modeling.
Abstract: This paper is an overview of the concepts and methodologies used to predict soft-error rates (SER) due to cosmic and high-energy particle radiation in integrated circuit chips. The paper emphasizes the need for SER simulation using the actual chip circuit model, which includes device, process, and technology parameters, as opposed to using either the discrete device simulation or the generic circuit simulation that is commonly employed in SER modeling. Concepts such as funneling, event-by-event simulation, nuclear history files, critical charge, and charge sharing are examined. Also discussed are the relative importance of elastic and inelastic nuclear collisions, rare event statistics, and device vs. circuit simulations. The semi-empirical methodologies used in the aerospace community to arrive at SERs [also referred to as single-event upset (SEU) rates] in integrated circuit chips are reviewed. This paper is one of four in this special issue relating to SER modeling. Together, they provide a comprehensive account of this modeling effort, which has resulted in a unique modeling tool called the Soft-Error Monte Carlo Model, or SEMM.

Journal ArticleDOI
TL;DR: A new mode of multiple encryption—triple-DES external feedback cipher block chaining with output feedback masking is proposed to provide increased protection against certain attacks (dictionary attacks and matching ciphertext attacks) which exploit the short message-block size of DES.
Abstract: We propose a new mode of multiple encryption—triple-DES external feedback cipher block chaining with output feedback masking. The aim is to provide increased protection against certain attacks (dictionary attacks and matching ciphertext attacks) which exploit the short message-block size of DES. The new mode obtains this protection through the introduction of secret masking values that are exclusive-ORed with the intermediate outputs of each triple-DES encryption operation. The secret mask value is derived from a fourth encryption operation per message block, in addition to the three used in previous modes. The new mode is part of a suite of encryption modes proposed in the ANSI X9.F.1 triple-DES draft standard (X9.52).
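
The sketch below illustrates the general construction described in the abstract: an outer CBC chain around each triple-DES operation, with a per-block mask generated by a fourth DES encryption in output-feedback fashion and XORed onto the intermediate results. It is not the exact ANSI X9.52 CBCM definition; the key usage and mask placement shown are assumptions, and pycryptodome supplies the DES primitive.

# Illustrative sketch only: triple-DES in an outer (external-feedback) CBC
# chain, with a per-block secret mask, produced by a fourth DES encryption
# running in output-feedback fashion, XORed onto the intermediate results of
# each EDE operation. This is NOT the exact ANSI X9.52 CBCM specification;
# key usage and mask placement here are assumptions.
# Requires: pip install pycryptodome
from Crypto.Cipher import DES

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbcm_like_encrypt(plain_blocks, k1, k2, k3, kmask, iv, mask_iv):
    e1 = DES.new(k1, DES.MODE_ECB)
    d2 = DES.new(k2, DES.MODE_ECB)
    e3 = DES.new(k3, DES.MODE_ECB)
    em = DES.new(kmask, DES.MODE_ECB)
    chain, mask, out = iv, mask_iv, []
    for p in plain_blocks:                       # each block is 8 bytes
        mask = em.encrypt(mask)                  # OFB-style mask stream (the 4th encryption)
        t = e1.encrypt(xor(p, chain))            # E_K1 with external CBC feedback
        t = d2.decrypt(xor(t, mask))             # mask the first intermediate output
        c = e3.encrypt(xor(t, mask))             # mask the second intermediate output
        out.append(c)
        chain = c                                # ciphertext feedback (external CBC)
    return out

blocks = [b"8bytes!!", b"more8by!"]              # toy 8-byte message blocks
keys = [bytes(range(i, i + 8)) for i in (1, 11, 21, 31)]  # toy keys, for illustration only
print(cbcm_like_encrypt(blocks, *keys, iv=b"\x00" * 8, mask_iv=b"\x11" * 8))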

Journal ArticleDOI
TL;DR: A number of innovations in thin-film technology materials and processes associated with this type of head are reviewed and efforts to predict the performance of thin- film inductive heads and to understand and control head instabilities and noise are described.
Abstract: The development of IBM thin-film inductive recording heads is traced from 1964, through their first introduction in 1979, to the present. We review a number of innovations in thin-film technology materials and processes associated with this type of head. Design and technology changes made since 1979 have led to the development and implementation of heads in several successful recording systems. We also describe efforts to predict the performance of thin-film inductive heads and to understand and control head instabilities and noise.

Journal ArticleDOI
TL;DR: This paper surveys past and current research in computer-integrated surgery at the IBM Thomas J. Watson Research Center, presenting a research strategy and highlights of work in orthopaedics, craniofacial surgery, minimally invasive surgery, and medical modeling.
Abstract: This paper describes some past and current research activities at the IBM Thomas J. Watson Research Center. We begin with a brief overview of the emerging field of computer-integrated surgery, followed by a research strategy that enables a computer-oriented research laboratory such as ours to participate in this emerging field. We then present highlights of our past and current research in four key areas—orthopaedics, craniofacial surgery, minimally invasive surgery, and medical modeling—and elaborate on the relationship of this work to emerging topics in computer-integrated surgery.

Journal ArticleDOI
K. E. Johnson1, C. M. Mate1, J. A. Merz1, R. L. White1, A. W. Wu1 
TL;DR: Design concepts and manufacturing considerations for high-areal-density thin-film disks are described, with reflections on future enhancements required for media of the next generation.
Abstract: In the last ten years, the fundamental technology of recording media has evolved from particulate to thin film. The introduction of magnetoresistive heads in 1990 has had further impact on the design of thin-film media. Changes in substrates, magnetic films, overcoats, and lubrication form the basis of the evolution. Design concepts and manufacturing considerations for high-areal-density thin-film disks are described in this paper, with reflections on future enhancements required for media of the next generation.

Journal ArticleDOI
TL;DR: The IBMLZ1 compression algorithm was designed not only for robust and highly efficient compression, but also for extremely high reliability, because once compression removes redundancy from the source, the compressed data become extremely vulnerable to data corruption.
Abstract: Data compression allows more efficient use of storage media and communication bandwidth, and standard compression offerings for tape storage have been well established since the late 1980s. Compression technology lowers the cost of storage without changing applications or data access methods. The desire to extend these cost/performance benefits to higher-data-rate media and broader media forms, such as DASD storage subsystems, motivated the design and development of the IBMLZ1 compression algorithm and its implementing technology. The IBMLZ1 compression algorithm was designed not only for robust and highly efficient compression, but also for extremely high reliability. Because compression removes redundancy in the source, the compressed data become extremely vulnerable to data corruption. Key design objectives for the IBMLZ1 development team were efficient hardware execution, efficient use of silicon technology, and minimum system-integration overhead. Through new observations of pattern matching, match-length distribution, and the use of graph vertex coloring for evaluating data flows, the IBMLZ1 compression algorithm and the chip family achieved the above objectives.
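
LZ1 denotes the Lempel-Ziv 1977 sliding-window family; the sketch below is a minimal greedy LZ77-style encoder for illustration only, and is not the IBMLZ1 algorithm, its coding format, or its hardware string-matching design.

# Minimal greedy LZ77-style (sliding-window) encoder for illustration only.
# It shows the literal/back-reference idea behind the LZ1 family; it is NOT
# the IBMLZ1 algorithm, its coding format, or its hardware matching scheme.
def lz77_encode(data, window=512, min_match=3, max_match=18):
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):          # naive search; hardware uses parallel match logic
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= min_match:
            out.append(("copy", best_dist, best_len))   # back-reference token
            i += best_len
        else:
            out.append(("lit", data[i]))                # literal token
            i += 1
    return out

print(lz77_encode(b"abcabcabcabd"))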

Journal ArticleDOI
TL;DR: The DFT methodologies used for IBM ASICs and the design automation support that enables designers to be so productive with these methodologies are discussed.
Abstract: IBM manufactures a very large number of different application-specific integrated circuit (ASIC) chips each year. Although these chips are designed by many different customers having various levels of test experience and all having tight deadlines, IBM ASICs have a reputation for their high quality. This quality is due in large part to the heavy focus on design for test (DFT) and the use of design automation to help ensure that customers' chips can be manufactured, tested, and diagnosed with minimal engineering effort. Prospective customers of IBM ASIC technologies find an explicit set of DFT methodologies to follow which provide a relatively painless, almost push-button approach to the generation of high-quality, “sign-off” test vectors for their chips. This paper discusses the DFT methodologies used for IBM ASICs and the design automation support that enables designers to be so productive with these methodologies. Data are given for several recently processed chips, some designed outside IBM.

Journal ArticleDOI
TL;DR: A hierarchical design planning system, consisting of a tightly integrated set of design and analysis tools, assists in achieving timing closure in high-performance designs and is in production use at ASIC design centers both inside and outside IBM.
Abstract: Design planning is emerging as a solution to some of the most difficult challenges of the deep-submicron VLSI design era. Reducing design turnaround time for extremely large designs with ever-increasing clock speeds, while ensuring first-pass implementation success, is exhausting the capabilities of traditional design tools. To solve this problem, we have designed and implemented a hierarchical design planning system that consists of a tightly integrated set of design and analysis tools. The integrated run-time environment, with its rich set of hierarchical, timing-driven design planning and implementation functions, provides an advanced platform for realizing a variety of ASIC and custom methodologies. One of the system's particular strengths is its tight integration with an incremental, static timing engine that assists in achieving timing closure in high-performance designs. The design planner is in production use at ASIC design centers both inside and outside IBM.
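
The role of the static timing engine can be illustrated with a minimal longest-path arrival-time calculation over a gate-level DAG; a real incremental engine updates only the affected portion after each change, and the netlist and delays below are invented.

# Minimal static-timing sketch: longest-path arrival times over a combinational
# DAG. A real incremental engine updates timing selectively after each design
# change; this toy version recomputes everything, with an invented netlist.
def arrival_times(gates, primary_inputs):
    """gates: {output_pin: (delay_ns, [fanin_pins])}. Returns latest arrival times."""
    memo = {pin: 0.0 for pin in primary_inputs}
    def arrival(pin):
        if pin not in memo:
            delay, fanins = gates[pin]
            memo[pin] = delay + max(arrival(f) for f in fanins)
        return memo[pin]
    return {pin: arrival(pin) for pin in gates}

netlist = {                        # output pin: (gate delay in ns, fanin pins)
    "n1": (0.3, ["a", "b"]),
    "n2": (0.5, ["n1", "c"]),
    "out": (0.2, ["n1", "n2"]),
}
print(arrival_times(netlist, primary_inputs=["a", "b", "c"]))
# slack at an endpoint would then be clock period - arrival - setup time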

Journal ArticleDOI
R. M. Jessani1, C. H. Olson2
TL;DR: The IBM PowerPC 603e FPU is an on-chip functional unit supporting IEEE 754 standard single- and double-precision binary floating-point arithmetic operations, designed to be a low-cost, low-power, high-performance engine in a single-chip superscalar microprocessor.
Abstract: The IBM PowerPC 603e floating-point unit (FPU) is an on-chip functional unit to support IEEE 754 standard single- and double-precision binary floating-point arithmetic operations. The design objectives are to be a low-cost, low-power, high-performance engine in a single-chip superscalar microprocessor. Using less than 15 mm² of the available silicon area on the chip (the size of the PowerPC 603e microprocessor is 98 mm²) and operating at the peak clock frequency of 100 MHz, an average single-pumping multiply-add fused instruction has one-cycle throughput and four-cycle latency. An average double-pumping multiply-add fused instruction has two-cycle throughput and five-cycle latency. The estimated SPECfp92 performance at 100 MHz is 105.
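
The quoted figures imply the following peak-rate arithmetic; a fused multiply-add is commonly counted as two floating-point operations, and the tally below is an illustration rather than a benchmark result.

# Worked arithmetic from the quoted figures: at a 100 MHz peak clock, a
# one-cycle-throughput fused multiply-add can retire up to 100 million
# multiply-adds per second (commonly counted as 2 flops each), while the
# single-pumped latency of 4 cycles corresponds to 40 ns.
clock_hz = 100e6
throughput_cycles, latency_cycles = 1, 4            # single-pumped multiply-add
ops_per_second = clock_hz / throughput_cycles
print(f"peak {ops_per_second / 1e6:.0f} M multiply-adds/s "
      f"(~{2 * ops_per_second / 1e6:.0f} MFLOPS), latency {latency_cycles / clock_hz * 1e9:.0f} ns")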

Journal ArticleDOI
Sandeep Gopisetty1, Raymond A. Lorie1, Jianchang Mao1, M. Mohiuddin1, Alexander Sorin1, E. Yair1 
TL;DR: A software system for the machine reading of forms data from their scanned images is described, with major components: form recognition and “dropout,” intelligent character recognition (ICR), and contextual checking.
Abstract: While document-image systems for the management of collections of documents, such as forms, offer significant productivity improvements, the entry of information from documents remains a labor-intensive and costly task for most organizations. In this paper, we describe a software system for the machine reading of forms data from their scanned images. We describe its major components: form recognition and “dropout,” intelligent character recognition (ICR), and contextual checking. Finally, we describe applications for which our automated forms reader has been successfully used.

Journal ArticleDOI
TL;DR: This paper describes the IBM ASIC design methodology, and focuses on the key areas of the methodology that enable a customer to exploit the technology in terms of performance, density, and testability, all in a fast-time-to-market ASIC paradigm.
Abstract: The IBM ASIC design methodology enables a product developer to fully incorporate the high-density, high-performance capabilities of the IBM CMOS technologies in the design of leading-edge products. The methodology allows the full exploitation of technology density, performance, and high testability in an ASIC design environment. The IBM ASIC design methodology builds upon years of experience within IBM in developing design flows that optimize performance, testability, chip density, and time to market for internal products. It has also been achieved by using industry-standard design tools and system design approaches, allowing IBM ASIC products to be marketed externally as well as to IBM internal product developers. This paper describes the IBM ASIC design methodology, and then focuses on the key areas of the methodology that enable a customer to exploit the technology in terms of performance, density, and testability, all in a fast-time-to-market ASIC paradigm. Also emphasized are aspects of the methodology that allow IBM to market its design experience and intellectual property.

Journal ArticleDOI
TL;DR: The basic physics of data recording and readout and the engineering of the primary building blocks of an optical drive (the optical head, the servo system, and the data channel) are discussed.
Abstract: Optical disk drives provide an effective solution to the growing need for removable high-capacity storage. In this paper, we review the technology behind the optical disk drives used in IBM's optical storage systems. The basic physics of data recording and readout and the engineering of the primary building blocks of an optical drive (the optical head, the servo system, and the data channel) are discussed. We also outline the technological directions of future optical drives as they must continue to improve in performance and capacity.

Journal ArticleDOI
TL;DR: This paper describes Serial Storage Architecture (SSA), a definition and general specification of a high-performance serial link for the attachment of input/output devices.
Abstract: This paper describes Serial Storage Architecture (SSA), a definition and general specification of a high-performance serial link for the attachment of input/output devices. An overview is given of the architecture itself, followed by a general description of the hardware implementation of a dual-port SSA node on a single chip.

Journal ArticleDOI
Akio Yamashita1, Tomio Amano1, Y. Hirayama1, N. Itoh1, Shin Katoh, T. Mano, Kazuharu Toyokawa 
TL;DR: Examples of successful applications (entry into a text database, creation of an electronic catalog, entry of family registration data, and entry of tag data in a manufacturing process) provide evidence of the processing accuracy and robustness of the framework.

Abstract: This paper describes a document entry system called the Document Recognition System (DRS), which facilitates the conversion of printed documents into electronic form. DRS was developed on a personal computer (PC) with an adapter card for recognizing more than 3000 Kanji characters. It provides a flexible framework for object-oriented management of data and processing modules. The framework allows the user to change the combination of processing modules and to select pipelining (parallel processing) or sequential processing. DRS includes processing modules for layout analysis functions such as blob detection, block segmentation, and model matching, and for character recognition functions such as Kanji character recognition, Japanese postprocessing, postprocessing by a user, and error correction through a user interface. The character recognition functions on the card and the other processing-related recognition functions on the PC work cooperatively in the proposed framework. Within the basic framework, we have customized DRS for practical applications. Examples of successful applications (entry into a text database, creation of an electronic catalog, entry of family registration data, and entry of tag data in a manufacturing process) provide evidence of the processing accuracy and robustness of the framework.
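
The pluggable-module idea can be sketched as below (sequential composition only); the class names and interfaces are invented for illustration and are not the DRS APIs.

# Minimal sketch of a pluggable document-processing pipeline in the spirit of
# the framework described (sequential composition only). Module names and
# interfaces are invented for illustration; they are not the DRS APIs.
class Module:
    def process(self, page):        # each module transforms a shared "page" dict
        raise NotImplementedError

class BlockSegmentation(Module):
    def process(self, page):
        page["blocks"] = [{"bbox": (0, 0, 100, 20), "type": "text"}]  # stub result
        return page

class CharacterRecognition(Module):
    def process(self, page):
        page["text"] = ["(recognized character string)" for _ in page["blocks"]]  # stub result
        return page

def run_pipeline(page, modules):
    for m in modules:               # the real framework also supports a pipelined mode
        page = m.process(page)
    return page

print(run_pipeline({"image": "scan-001"}, [BlockSegmentation(), CharacterRecognition()]))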

Journal ArticleDOI
Thomas R. Bednar1, R. A. Piro1, Douglas W. Stout1, L. Wissel1, Paul S. Zuchowski1 
TL;DR: A library strategy has been developed to enable IBM Microelectronics ASIC development to keep pace with rapid technology enhancements and to offer leading-edge performance to ASIC customers.
Abstract: A library strategy has been developed to enable IBM Microelectronics ASIC development to keep pace with rapid technology enhancements and to offer leading-edge performance to ASIC customers. Library elements are designed using migratable design rules to allow designs to be reused in future advanced technologies; and library contents, design methodology, test methodology, and packaging offerings for the ASICs also are consistent between current and future technologies. The benefit to the ASIC customer is an ASIC with a rich library of logic functions, arrays, and I/Os for today's designs, and with a ready migration path into future designs.

Journal ArticleDOI
T. W. McDaniel1, P. C. Arnett1
TL;DR: Technical issues in developing high-quality optical disk recording media are reviewed, including groove and format design, optimization of the thin-film structure for optical, thermal, and magnetic characteristics, substrate birefringence control, and mechanical, life, and recording-performance testing.
Abstract: In this paper, we review many of the technical issues that must be addressed in developing high-quality optical disk recording media. The geometric design of tracking grooves and embossed data on the disk substrate must be optimized to the characteristics of the optical drive to support robust data seeking, track following, and data addressing. To ensure adequate recording performance and drive compatibility, we have used modeling to optimize the media thin-film structure for proper optical, thermal, and magnetic characteristics. Because low substrate birefringence is a necessary media characteristic for high recording densities, we discuss measurements which assess whether birefringence has been adequately controlled in disk production. Testing is used throughout media development and manufacture to measure that the quality goals of the design are achieved. The mechanical integrity of the disk and its cartridge, particularly for autoload libraries, is confirmed by load/unload stress testing. Media life and data archivability are established by a combination of time-zero and accelerated life stress tests of media mechanical and recording performance. This attention to a broad range of technical details is essential to ensure reliable storage of data on removable optical disk media.

Journal ArticleDOI
TL;DR: Chip optimization tools are used to physically optimize the clock trees and scan connections, both to improve clock skew and to improve wirability.
Abstract: Recent advances in integrated circuit technology have imposed new requirements on the chip physical design process. At the same time that performance requirements are increasing, the effects of wiring on delay are becoming more significant. Larger chips are also increasing the chip wiring demand, and the ability to efficiently process these large chips in reasonable time and space requires new capabilities from the physical design tools. Circuit placement is done using algorithms which have been used within IBM for many years, with enhancements as required to support additional technologies and larger data volumes. To meet timing requirements, placement may be run iteratively using successively refined timing-derived constraints. Chip optimization tools are used to physically optimize the clock trees and scan connections, both to improve clock skew and to improve wirability. These tools interchange sinks of equivalent nets, move and create parallel copies of clock buffers, add load circuits to balance clock net loads, and generate balanced clock tree routes. Routing is done using a grid-based, technology-independent router that has been used over the years to wire chips. There are numerous user controls for specifying router behavior in particular areas and on particular interconnection levels, as well as adjacency restrictions.

Journal ArticleDOI
TL;DR: A portable Faraday cup design is described for the accurate measurement of large-diameter, low-current, and high-energy proton beams traveling in air, for the accelerated testing of LSI parts.
Abstract: A portable Faraday cup design is described for the accurate measurement of large-diameter, low-current, and high-energy proton beams traveling in air. The unit has been tested with protons from 4 to 300 MeV. The unit has an accuracy of 10% for beams of 1 pA, improving to about 2% accuracy for ion currents of 20 pA to 1 µA. For the accelerated testing of LSI parts, the Faraday cup is particularly critical, since this application requires very low currents, typically 10–100 pA (1 pA = 6 × 10^6 protons/s), operating in an electronically noisy accelerator environment.
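
The conversion quoted in parentheses follows directly from dividing the beam current by the elementary charge, as in this worked check.

# Worked check of the conversion quoted in the abstract: beam current divided
# by the elementary charge gives the proton rate (1 pA ~ 6e6 protons/s).
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def protons_per_second(current_amperes):
    return current_amperes / ELEMENTARY_CHARGE

for current_pA in (1, 100, 1e6):   # 1 pA, 100 pA, 1 uA
    print(f"{current_pA:>9} pA -> {protons_per_second(current_pA * 1e-12):.2e} protons/s")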

Journal ArticleDOI
Ryutarou Ohbuchi1, T. Miyazawa1, Masaki Aono1, A. Koide1, M. Kimura1, R. Yoshida1 
TL;DR: A large, heterogeneous network of supercomputers, workstations, and personal computers for clinicians and researchers at the National Cancer Center in Tokyo is described, including a medical-image database, a patient-record database, visualization of 3D medical images, computer-assisted diagnosis, genome and protein databases, and proof-of-concept synthetic environments.
Abstract: This paper describes a large, heterogeneous network of supercomputers, workstations, and personal computers for clinicians and researchers at the National Cancer Center in Tokyo. Intended uses of the system include a medical-image database, a patient-record database, visualization of 3D medical images, computer-assisted diagnosis, genome and protein databases, and proof-of-concept synthetic environments. An overview of the system is followed by detailed descriptions of two subsystems: the medical-image-database subsystem and the synthetic-environment subsystem. The medical-image-database subsystem integrates a large number of image-acquisition devices, such as X-ray computed tomography and ultrasound echography equipment. The image database is designed for a high image-data input rate by means of a storage hierarchy and high-speed network connections. The synthetic-environment subsystem includes two prototype applications (a neurosurgery simulator and an anatomy viewer) that explore the possibilities of synthetic-environment technology in medicine.

Journal ArticleDOI
B. Schlatter1
TL;DR: Dimensional management, an engineering methodology combined with software tools, was implemented in disk drive development engineering at IBM in San Jose to predict and optimize critical parameters in disk drives to ensure robust designs.
Abstract: Disk drives are multicomponent products in which product build variations directly affect quality. Dimensional management, an engineering methodology combined with software tools, was implemented in disk drive development engineering at IBM in San Jose to predict and optimize critical parameters in disk drives. It applies statistical simulation techniques to predict the amount of variation that can occur in the disk drive due to the specified design tolerances, fixturing tolerances, and assembly variations. This paper presents statistics describing the measurement values produced during simulations, a histogram showing the measurement values graphically, and an analysis of the process capability, C_pk, to ensure robust designs. Additionally, it describes how modeling can determine the location(s) of the predicted variation, the contributing factors, and their percent of contribution. Although a complete 2.5-in. disk drive was modeled and all critical variations such as suspension-to-disk gaps, disk-stack envelope, and merge clearances were analyzed, this paper presents for illustration only one critical disk real estate parameter. The example shows the capability of this methodology. VSA®-3D software by Variation Systems Analysis was used.
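
The statistical-simulation idea can be illustrated with a toy Monte Carlo tolerance stack-up that computes a clearance and its C_pk; all dimensions, tolerances, and specification limits below are invented and are not from the disk-drive model in the paper.

# Toy Monte Carlo tolerance stack-up in the spirit of dimensional management:
# simulate a clearance built from several toleranced dimensions and compute
# Cpk against spec limits. All numbers are invented; this is not the VSA-3D
# disk-drive model described in the paper.
import random, statistics

def simulate_gap(n=20_000, seed=7):
    rng = random.Random(seed)
    gaps = []
    for _ in range(n):
        base = rng.gauss(10.00, 0.02)      # nominal spacer height, mm (assumed)
        disk = rng.gauss(0.80, 0.01)       # disk thickness, mm (assumed)
        susp = rng.gauss(8.95, 0.03)       # suspension stack, mm (assumed)
        gaps.append(base - disk - susp)    # resulting clearance
    return gaps

def cpk(samples, lsl, usl):
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

gaps = simulate_gap()
print(f"mean gap {statistics.mean(gaps):.3f} mm, Cpk {cpk(gaps, lsl=0.10, usl=0.40):.2f}")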