
Showing papers in "IBM Journal of Research and Development in 1982"


Journal ArticleDOI
TL;DR: The requirements and components for a proposed Document Analysis System, which assists a user in encoding printed documents for computer processing, are outlined; several critical functions have been investigated and the technical approaches are discussed.
Abstract: This paper outlines the requirements and components for a proposed Document Analysis System, which assists a user in encoding printed documents for computer processing. Several critical functions have been investigated and the technical approaches are discussed. The first is the segmentation and classification of digitized printed documents into regions of text and images. A nonlinear, run-length smoothing algorithm has been used for this purpose. By using the regular features of text lines, a linear adaptive classification scheme discriminates text regions from others. The second technique studied is an adaptive approach to the recognition of the hundreds of font styles and sizes that can occur on printed documents. A preclassifier is constructed during the input process and used to speed up a well-known pattern-matching method for clustering characters from an arbitrary print source into a small sample of prototypes. Experimental results are included.
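
As a rough illustration of the nonlinear run-length smoothing step described above (not the authors' implementation; the thresholds and the binary NumPy representation are assumptions), the following Python sketch smears short background runs horizontally and vertically and ANDs the results to form candidate text/image blocks:

import numpy as np

def rls_smooth(line, threshold):
    # Fill runs of background (0) shorter than `threshold` with foreground (1).
    out = line.copy()
    run_start = None
    for i, v in enumerate(line):
        if v == 0 and run_start is None:
            run_start = i
        elif v == 1 and run_start is not None:
            if i - run_start < threshold:
                out[run_start:i] = 1
            run_start = None
    return out

def rlsa(image, h_thresh=30, v_thresh=50):
    # Nonlinear run-length smoothing: smear rows, smear columns, then AND
    # the two smeared images to obtain candidate text/image regions.
    horiz = np.array([rls_smooth(row, h_thresh) for row in image])
    vert = np.array([rls_smooth(col, v_thresh) for col in image.T]).T
    return horiz & vert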

718 citations


Journal ArticleDOI
TL;DR: This system has successfully detected all but a few timing problems for the IBM 3081 Processor Unit (consisting of almost 800 000 circuits) prior to the hardware debugging of timing.
Abstract: Timing Analysis is a design automation program that assists computer design engineers in locating timing problems in a clocked, sequential machine. The program is effective for large machines because, in part, the running time is proportional to the number of circuits. This is in contrast to alternative techniques such as delay simulation, which requires large numbers of test patterns, and path tracing, which requires tracing of all paths. The output of Timing Analysis includes "Slack" at each block to provide a measure of the severity of any timing problem. The program also generates standard deviations for the times so that a statistical timing design can be produced rather than a worst case approach. This system has successfully detected all but a few timing problems for the IBM 3081 Processor Unit (consisting of almost 800 000 circuits) prior to the hardware debugging of timing. The 3081 is characterized by a tight statistical timing design.
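
A minimal sketch of the slack idea in the worst-case (non-statistical) form, not the Timing Analysis program itself; the block names, delays, and cycle time below are illustrative assumptions:

from collections import defaultdict

def timing_slacks(blocks, delays, edges, cycle_time):
    # Compute arrival time, required time, and slack for each block of a
    # combinational network.  `blocks` must be in topological order;
    # negative slack flags a path that misses the cycle time.
    fanin, fanout = defaultdict(list), defaultdict(list)
    for src, dst in edges:
        fanin[dst].append(src)
        fanout[src].append(dst)

    arrival = {}
    for b in blocks:
        arrival[b] = delays[b] + max((arrival[p] for p in fanin[b]), default=0.0)

    required = {}
    for b in reversed(blocks):
        succ = fanout[b]
        required[b] = cycle_time if not succ else min(required[s] - delays[s] for s in succ)

    return {b: required[b] - arrival[b] for b in blocks}

# Illustrative three-block chain with a 10-ns cycle: every block has 4 ns of slack.
print(timing_slacks(["a", "b", "c"], {"a": 2, "b": 3, "c": 1},
                    [("a", "b"), ("b", "c")], cycle_time=10))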

342 citations


Journal ArticleDOI
A. J. Blodgett1, D. R. Barbour1
TL;DR: The thermal conduction module (TCM) as mentioned in this paper utilizes a 90 × 90-mm MLC substrate to interconnect up to 118 LSI devices and provides a cooling capacity of up to 300 W. The TCM is compared to prior technologies to illustrate the improvements in packaging density, reliability, and performance.
Abstract: Innovations in package design coupled with major advances in multilayer ceramic (MLC) technology provide a high-performance LSI package for the IBM 3081 Processor Unit. The thermal conduction module (TCM) utilizes a 90 × 90-mm MLC substrate to interconnect up to 118 LSI devices. The substrate, which typically contains 130 m of impedance-controlled wiring, provides an array of 121 pads for solder connections to each device and an array of 1800 pins for interconnection with the next-level package. A unique thermal design provides a cooling capacity of up to 300 W. This paper describes the TCM design and outlines the processes for fabrication of these modules. The TCM is compared to prior technologies to illustrate the improvements in packaging density, reliability, and performance.

209 citations


Journal ArticleDOI
Charles C. Tappert1
TL;DR: A major advantage of this procedure is that it combines letter segmentation and recognition in one operation by, in essence, evaluating recognition at all possible segmentations, thus avoiding the usual segmentation-then-recognition philosophy.
Abstract: Dynamic programming has been found useful for performing nonlinear time warping for matching patterns in automatic speech recognition. Here, this technique is applied to the problem of recognizing cursive script. The parameters used in the matching are derived from time sequences of x-y coordinate data of words handwritten on an electronic tablet. Chosen for their properties of invariance with respect to size and translation of the writing, these parameters are found particularly suitable for the elastic matching technique. A salient feature of the recognition system is the establishment, in a training procedure, of prototypes by each writer using the system. In this manner, the system is tailored to the user. Processing is performed on a word-by-word basis after the writing is separated into words. Using prototypes for each letter, the matching procedure allows any letter to follow any letter and finds the letter sequence which best fits the unknown word. A major advantage of this procedure is that it combines letter segmentation and recognition in one operation by, in essence, evaluating recognition at all possible segmentations, thus avoiding the usual segmentation-then-recognition philosophy. Results on cursive writing are presented where the alphabet is restricted to the lower-case letters. Letter recognition accuracy is over 95 percent for each of three writers.
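
A minimal sketch of the dynamic-programming elastic match on one-dimensional feature sequences; the actual system matches multi-dimensional parameters derived from tablet coordinates, and the distance function and sample values here are assumptions:

import math

def elastic_match(sample, prototype, dist=lambda a, b: abs(a - b)):
    # Dynamic-programming elastic match (nonlinear time warping) between two
    # feature sequences; returns the minimum cumulative distance.
    n, m = len(sample), len(prototype)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(sample[i - 1], prototype[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Recognition would pick the letter prototype (or letter sequence) with the
# smallest elastic-match distance to the unknown word's feature sequence.
print(elastic_match([0.1, 0.4, 0.9, 0.8], [0.1, 0.5, 0.9]))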

188 citations


Journal ArticleDOI
TL;DR: In this article, a multichip module for future VLSI computer packages on which an array of silicon chips is directly attached and interconnected by high-density thin-film lossy transmission lines is discussed.
Abstract: This paper discusses a multichip module for future VLSI computer packages on which an array of silicon chips is directly attached and interconnected by high-density thin-film lossy transmission lines. Since the high-performance VLSI chips contain a large number of off-chip driver circuits which are allowed to switch simultaneously in operation, low-inductance on-module capacitors are found to be essential for stabilizing the on-module power supply. Novel on-module capacitor structures are therefore proposed, discussed, and evaluated. Material systems and processing techniques for both the thin-film interconnection lines and the capacitor structures are also briefly discussed in the paper. Development of novel defect detection and repair techniques has been identified as essential for fabricating the Thin-Film Module with practical yields.

171 citations


Journal ArticleDOI
E. E. Davidson1
TL;DR: A methodology for optimizing the design of an electrical packaging system for a high speed computer is described and a set of rules is generated for driving a computer aided design system.
Abstract: A methodology for optimizing the design of an electrical packaging system for a high speed computer is described. The pertinent parameters are first defined and their sensitivities are derived so that the proper design trade-offs can ultimately be made. From this procedure, a set of rules is generated for driving a computer aided design system. Finally, there is a discussion of design optimization and circuit and package effects on machine performance.

134 citations


Journal ArticleDOI
TL;DR: The Boolean comparison technique was used on the IBM 3081 project to establish that hardware flowcharts and the detailed hardware logic design were functionally equivalent.
Abstract: Boolean comparison is a design verification technique in which two logic networks are compared for functional equivalence using analysis rather than simulation. Boolean comparison was used on the IBM 3081 project to establish that hardware flowcharts and the detailed hardware logic design were functionally equivalent. Hardware flowcharts are a graphic form of a hardware description language which describes the logical behavior of the machine in terms of the inputs, outputs, and latches. The logical correctness of the hardware flowcharts was previously established via cycle simulation. The concepts and techniques of Boolean comparison as used on the IBM 3081 project are described.
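
For orientation only, a toy functional-equivalence check by exhaustive enumeration of input patterns; the 3081 Boolean comparison analyzes the networks rather than enumerating patterns, so this sketch merely illustrates the question being answered, with hypothetical two-output networks:

from itertools import product

def equivalent(net_a, net_b, num_inputs):
    # Check that two combinational networks (functions from an input tuple to
    # an output tuple) compute the same Boolean function, returning a
    # counterexample input pattern if they do not.
    for bits in product((0, 1), repeat=num_inputs):
        if net_a(bits) != net_b(bits):
            return False, bits
    return True, None

# Example: two structurally different networks for the same function a AND (b OR c).
net1 = lambda x: (x[0] & (x[1] | x[2]),)
net2 = lambda x: ((x[0] & x[1]) | (x[0] & x[2]),)
print(equivalent(net1, net2, 3))   # (True, None)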

100 citations


Journal ArticleDOI
R. C. Chu1, U. P. Hwang1, Robert E. Simons1
TL;DR: An innovative conduction-cooling approach using He gas encapsulation, developed in response to the new LSI technology requirements, is discussed, and the basic challenges encountered in building a thermal bridge from individual chips to the module and cold plate are described; the companion paper by Oktay and Kammerer treats the more general multi-dimensional approach.
Abstract: The introduction of LSI packaging has significantly increased the number of circuits per silicon chip, and at the same time has greatly increased their heat flux density. In comparison to earlier MST (monolithic systems technology) products, the heat flux which must be removed from the new multi-chip substrates (100 or more chips) has increased by an order of magnitude or more. This paper discusses an innovative conduction-cooling approach using He gas encapsulation which has been developed in response to the new LSI technology requirements. Background is provided on the liquid-encapsulated-module technology which preceded the new approach, and the basic challenges encountered in building a thermal bridge from individual chips to the module and cold plate are described. The underlying theory of operation is presented using one-dimensional mathematical and discrete analog models. The effects of various factors such as geometry, chip tilt, He concentration, air leakage, and materials are illustrated using these models. A thermal sensitivity analysis is performed to determine variations in junction temperatures and the contributions of the major parameters. The companion paper by Oktay and Kammerer which follows this one treats the more general "multi-dimensional" approach using numerical analysis techniques.
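
A minimal sketch of the one-dimensional series-resistance view of such a thermal bridge; the chip power, resistance values, and coolant temperature below are illustrative assumptions, not data from the paper:

def junction_temperature(power_w, resistances_k_per_w, coolant_temp_c):
    # One-dimensional series model: chip heat flows through a chain of thermal
    # resistances (chip-to-gas gap, piston, module hat, cold plate) to the
    # coolant, so T_junction = T_coolant + P * sum(R_i).
    return coolant_temp_c + power_w * sum(resistances_k_per_w)

# Illustrative values only: a 4-W chip and assumed resistances in K/W.
print(junction_temperature(4.0, [8.0, 2.0, 1.5, 2.5], 24.0))   # 80.0 degrees C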

95 citations


Journal ArticleDOI
TL;DR: Experimental plating and etching techniques that use a focused laser beam to define the localized plating or etching region are described, and a thermal model is developed to describe the observed effects over the entire overpotential (polarization) curve.
Abstract: We have developed experimental electroplating, electroless plating, and etching techniques that use a focused laser beam to define the localized plating or etching region. Enhancements in plating (etching) rates up to ≅10³ to 10⁴, compared to background rates, have been observed in the region of laser irradiation. A thermal model has been developed to describe the observed effects over the entire overpotential (polarization) curve. In the low overpotential region the enhancement is dominated by the increase in the local charge-transfer kinetics due to the local increase in temperature produced by absorption of the laser energy by the cathode (anode). At higher overpotentials, in the mass-transport-limited region, the main enhancement occurs due to hydrodynamic stirring caused by the large local temperature gradients. Examples of gold, nickel, and copper electroplating are described to illustrate the value of this technique for micron-sized circuit personalization and repair. Additional examples of electroless laser-enhanced plating and exchange plating are also described.
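
The low-overpotential enhancement can be pictured with a simple Arrhenius estimate of how a local temperature rise speeds the charge-transfer kinetics; the activation energy and temperatures below are assumptions for illustration, and the paper's full thermal model is not reproduced:

import math

def kinetic_enhancement(activation_energy_j_per_mol, t_background_k, t_spot_k):
    # Arrhenius estimate of the charge-transfer rate enhancement produced by a
    # local laser-induced temperature rise (low-overpotential regime).
    R = 8.314  # gas constant, J/(mol*K)
    return math.exp(activation_energy_j_per_mol / R *
                    (1.0 / t_background_k - 1.0 / t_spot_k))

# Assumed 40 kJ/mol activation energy, 300 K bath, 500 K laser-heated spot.
print(kinetic_enhancement(40e3, 300.0, 500.0))   # roughly 6e2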

92 citations


Journal ArticleDOI
Sevgin Oktay1, H. C. Kammerer1
TL;DR: This paper describes the development and implementation of a novel packaging concept, referred to as the thermal conduction module (TCM), which meets the stringent and highly interactive demands on cooling, reliability, and reworkability of LSI technology.
Abstract: The advent of LSI chip technology makes possible significantly increased performance and circuit densities by means of large-scale packaging of multiple devices on a single multi-layer ceramic (MLC) substrate. Integration at the chip and module levels has resulted in circuit densities as high as 2.5×10⁷ circuits per cubic meter, with the necessity of removing heat fluxes on the order of 100 kW/m². This paper describes the development and implementation of a novel packaging concept which meets the stringent and highly interactive demands on cooling, reliability, and reworkability of LSI technology. These requirements resulted in an innovative packaging approach, referred to as the thermal conduction module (TCM). The TCM uses individually spring-loaded “pistons” that contact each chip, with helium gas as the conducting medium for removing heat efficiently. A dismountable hermetic seal makes multiple access possible for device and substrate rework, while ensuring mechanical and environmental protection of critical components. A wide range of thermal, mechanical, and environmental experiments are described with analytical and computer models. The one-dimensional approach used in the previous paper by Chu et al. is extended to three-dimensional computer modeling. Simulations of expected chip temperature distributions in the IBM 3081 Processor Unit are discussed. Enhanced thermal performance of the advanced packaging concept for future applications is also indicated.

82 citations


Journal ArticleDOI
TL;DR: Word Autocorrelation Redundancy Match (WARM) is an intelligent facsimile technology which compresses the image of textual documents at nominally 145:1 by use of complex symbol matching on both the word and character level.

Abstract: Word Autocorrelation Redundancy Match (WARM) is an intelligent facsimile technology which compresses the image of textual documents at nominally 145:1 by use of complex symbol matching on both the word and character level. At the word level, the complex symbol match rate is enhanced by the redundancy of the word image. This creates a unique image compression capability that allows a document to be scanned for the 150 most common words, which make up roughly 50% of the text by usage, and upon their match the words are replaced for storage/transmission by a word identification number. The remaining text is scanned to achieve compaction at the character level and compared to both a previously stored library and a dynamically built library of complex symbol (character) shapes. Applying the complex symbol matching approach at both the word and character levels results in greater efficiency than is achievable by state-of-the-art CCITT methods.
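
A schematic sketch of the word-level pass just described; the matching function, library format, and identifiers are assumptions, and the actual complex-symbol matching is not reproduced:

def warm_encode(word_images, library, match):
    # Word-level pass of a WARM-style coder: each scanned word image is
    # compared with a library of common-word templates; a match is replaced by
    # its identification number, everything else is passed to the
    # character-level stage.
    encoded, residual = [], []
    for img in word_images:
        for word_id, template in library.items():
            if match(img, template):
                encoded.append(("WORD_ID", word_id))
                break
        else:
            residual.append(img)                 # handled by character matching
            encoded.append(("RESIDUAL", len(residual) - 1))
    return encoded, residual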

Journal ArticleDOI
TL;DR: The IBM 3687 Supermarket Scanner as mentioned in this paper exploits the functional advantages of holography to create a dense, multiple-focal-plane scan pattern with small spot size and large depth of field.
Abstract: The IBM 3687 Supermarket Scanner is described, with emphasis on the holographic deflector disk used to create the scan pattern. The scanner exploits the functional advantages of holography to create a dense, multiple-focal-plane scan pattern with small spot size and large depth of field. The optical design of the holographic disk is discussed and basic disk fabrication concepts are introduced.

Journal ArticleDOI
TL;DR: In this paper, a scheme for optical information storage using photochemical hole burning (PHB) in amorphous systems is evaluated, and limits imposed by the nature of PHB in polymers and glasses and its dependence on temperature are discussed.
Abstract: A scheme for optical information storage using photochemical hole burning (PHB) in amorphous systems is evaluated. Limits imposed by the nature of PHB in polymers and glasses and its dependence on temperature are discussed. It is demonstrated that optical information storage can be multiplexed by a factor of 10³ using the frequency dimension and PHB.

Journal ArticleDOI
TL;DR: A halftoning algorithm is presented in which novel concepts are combined, resulting in an output image in which moire patterns are suppressed and, at the same time, the edges are enhanced.

Abstract: Most printers and some display devices are bilevel (black or white) and therefore not capable of reproducing continuous tone pictures. Digital halftoning algorithms transform digital gray scale images into bilevel ones which give the appearance of containing various shades of gray. A halftoning algorithm is presented in which novel concepts are combined, resulting in an output image in which moire patterns are suppressed and, at the same time, the edges are enhanced. Various other artifacts associated with the halftoning process, such as contouring due to coarse quantization or to textural changes, are also absent from the output images in the proposed scheme. The algorithm separates the image into many small clusters which are processed independently and, therefore, it is capable of parallel implementation.
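
For context, the simplest form of the bilevel-rendering problem addressed here can be written as ordered-dither thresholding; this is a conventional baseline for comparison, not the authors' cluster-based, moire-suppressing, edge-enhancing algorithm:

import numpy as np

# 4x4 Bayer matrix scaled to thresholds in [0, 255].
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * (255.0 / 16.0)

def ordered_dither(gray):
    # Threshold each pixel of an 8-bit gray image against a tiled dither
    # matrix to produce a bilevel image approximating the original gray levels.
    h, w = gray.shape
    thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8)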

Journal ArticleDOI
L. J. Fried1, Janos Havas1, John S. Lechaton1, Joseph S. Logan1, G. Paal1, P. A. Totta1 
TL;DR: In this article, the design and process used to fabricate the interconnections on IBM's most advanced bipolar devices are described, including thin film metallurgy and contacts, e-beam lithography and associated resist technology, a high temperature lift-off stencil for metal pattern definition, planarized rf sputtered SiO2 insulation/passivation, the zero-overlap via hole innovation, in situ rf cleaning of vias prior to metallization, and area array solder terminals.

Abstract: The ability to interconnect large numbers of integrated silicon devices on a single chip has been greatly aided by a three-level wiring capability and large numbers of solderable input/output terminals on the face of the chip. This paper describes the design and process used to fabricate the interconnections on IBM's most advanced bipolar devices. Among the subjects discussed are thin film metallurgy and contacts, e-beam lithography and associated resist technology, a high temperature lift-off stencil for metal pattern definition, planarized rf sputtered SiO2 insulation/passivation, the “zero-overlap” via hole innovation, in situ rf sputter cleaning of vias prior to metallization, and area array solder terminals.

Journal ArticleDOI
Peter A. Franaszek1
TL;DR: Algorithms are described for constructing synchronous (fixed rate) codes for discrete noiseless channels where the constraints can be modeled by finite state machines and yield two classes of codes with minimum delay or look-ahead.
Abstract: Algorithms are described for constructing synchronous (fixed rate) codes for discrete noiseless channels where the constraints can be modeled by finite state machines. The methods yield two classes of codes with minimum delay or look-ahead.
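
The paper's construction algorithms are not reproduced here, but the standard feasibility check behind fixed-rate codes for such channels is the capacity of the constraint's finite state machine; a small sketch, with the (1,3) run-length-limited constraint chosen as an assumed example:

import numpy as np

def channel_capacity(transition_matrix):
    # Capacity (bits per channel symbol) of a constrained noiseless channel
    # described by a finite-state transition matrix: log2 of its largest
    # eigenvalue; a fixed-rate p/q code can exist only if p/q <= capacity.
    eigvals = np.linalg.eigvals(np.asarray(transition_matrix, dtype=float))
    return float(np.log2(max(abs(eigvals))))

# (d,k) = (1,3) run-length-limited constraint; states count the 0s written
# since the last 1.
rll_1_3 = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [1, 0, 0, 1],
           [1, 0, 0, 0]]
print(channel_capacity(rll_1_3))   # about 0.55, so a fixed rate of 1/2 is feasible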

Journal ArticleDOI
TL;DR: In this paper, a method for overcoming the inability of current spectroscopic techniques to detect the minute amount of material present in these thin-film assemblies has been successfully demonstrated by using integrated optics, where the material whose spectrum was desired was made into an asymmetric slab waveguide or a composite waveguide structure.
Abstract: Studies of submicron films and molecular monolayers with infrared and Raman spectroscopy have been hampered by the inability of current spectroscopic techniques to detect the minute amount of material present in these thin-film assemblies. A method for overcoming this problem by using integrated optics has been successfully demonstrated. In the case of Raman studies, the material whose spectrum was desired was made into an asymmetric slab waveguide or a composite waveguide structure in which both the optical field intensity of the in-coupled laser source and the scattering volume of the sample have been significantly increased. Using this technique we have obtained Raman spectra of thin polymer films (<80 nm) and the resonant Raman spectra of single dye monolayers (2.7 nm). Estimates of molecular orientation within the two-dimensional films have been made based on the results of polarized Raman measurements. In addition, the results of overcoating experiments illustrate the versatility and applicability of this technique to a wide variety of surface and thin-film studies.

Journal ArticleDOI
Javier Jiménez1, Jose L. Navalon1
TL;DR: It is argued that the fractal nature of these scenes precludes some of the savings in storage expected from vector over raster representation, although considerable savings still result.
Abstract: The application of vectorization algorithms to digital images derived from natural scenes is discussed. It is argued that the fractal nature of these scenes precludes some of the savings in storage expected from vector over raster representation, although considerable savings still result. Experimental results are given. Algorithms for contour following, line thinning, and polygonal approximation well adapted to complex images are presented. Finally, the Map Manipulation System, an experimental program package designed to explore the interaction between vector and raster information, is described briefly.
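
As an illustration of the polygonal-approximation step mentioned above, a standard split-based (Douglas-Peucker style) routine; this is not necessarily the algorithm used in the paper, and the tolerance is an assumption:

import math

def point_segment_distance(p, a, b):
    # Distance from point p to the segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def polygonal_approx(points, tol):
    # Keep the endpoints; recurse on the point farthest from the chord if it
    # exceeds the tolerance, otherwise replace the span by a single segment.
    if len(points) < 3:
        return list(points)
    dists = [point_segment_distance(p, points[0], points[-1]) for p in points[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[k - 1] <= tol:
        return [points[0], points[-1]]
    return polygonal_approx(points[:k + 1], tol)[:-1] + polygonal_approx(points[k:], tol)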

Journal ArticleDOI
K. Jain1, C. G. Willson1, Burn Jeng Lin1
TL;DR: In this paper, a new technique for speckle-free, fine-line high-speed lithography using high-power pulsed excimer lasers is described and demonstrated.
Abstract: A new technique for speckle-free, fine-line high-speed lithography using high-power pulsed excimer lasers is described and demonstrated. Use of stimulated Raman shifting is proposed for obtaining the most desirable set of spectral lines for any resist. This permits, for the first time, the optimization of the exposure wavelengths for a given resist, rather than the reverse situation. Excellent-quality images are obtained in 1-µm-thick diazo-type photoresists such as AZ-2400® and a diazonaphthoquinone-Novolak® resist system by means of contact printing with a XeCl laser at 308 nm and a KrF laser at 248 nm. Resolution down to 1000 line pairs per millimeter is experimentally demonstrated. These images are comparable to state-of-the-art contact lithography obtained with conventional lamps. The major difference is that the excimer laser technique is approximately two orders of magnitude faster. Tests on reciprocity failure in several resists indicate a decrease in sensitivity by only a factor of three, despite the ≅10⁸ times larger power density used in the laser exposures. The possibility of photochemical reactions being different from those taking place in the case of lamp exposures is discussed in view of these results.

Journal ArticleDOI
C.H. Stapper1, P. P. Castrucci1, R. A. Maeder1, W. E. Rowe1, R. A. Verhelst1 
TL;DR: The methods developed at IBM to manage and improve the yield of some of its newer FET semiconductor products are described in this article, where the results are applied not only to day-to-day control of the manufacturing lines, but also in the long-range forecasting and planning of future semiconductor integrated circuit products.
Abstract: The methods developed at IBM to manage and improve the yield of some of its newer FET semiconductor products are described. A number of visual inspection and electric monitoring techniques have evolved since discrete semiconductors were manufactured. The data obtained with these techniques are used in self-checking yield models to give the relative yields for all the yield components. The results are applied not only to day-to-day control of the manufacturing lines, but also in the long-range forecasting and planning of future semiconductor integrated circuit products. An example is given comparing the actual and planned yield of a 64K-bit random access memory chip as a function of time. The results show the yield enhancement that was obtained with redundant circuits and additionally with the use of partially functional products. Another example shows the decrease in fault levels over a span of more than ten years.
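
The self-checking yield models themselves are not reproduced here; as background, the standard forms such component models build on are the Poisson and clustered (negative binomial) yield expressions, sketched below with purely illustrative parameters:

import math

def poisson_yield(area_cm2, defect_density_per_cm2):
    # Simplest random-defect model: Y = exp(-A * D).
    return math.exp(-area_cm2 * defect_density_per_cm2)

def negative_binomial_yield(area_cm2, defect_density_per_cm2, clustering_alpha):
    # Clustered-defect model: Y = (1 + A*D/alpha) ** (-alpha).
    return (1.0 + area_cm2 * defect_density_per_cm2 / clustering_alpha) ** (-clustering_alpha)

# Illustrative numbers only: a 0.3 cm^2 chip at 2 defects/cm^2.
print(poisson_yield(0.3, 2.0), negative_binomial_yield(0.3, 2.0, 0.5))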

Journal ArticleDOI
Donald P. Seraphim1
TL;DR: In this article, a new set of printed circuit technologies have been developed which permit construction of printed-circuit panels with several kilometers of controlled-impedance interconnections, including vacuum lamination, electroless plating, photosensitive dielectric, laser drilling, automatic twisted-pair wire bonding and other new approaches to printed circuits.
Abstract: A new set of printed-circuit technologies has been developed which permits construction of printed-circuit panels with several kilometers of controlled-impedance interconnections. Communications between internal layers of signal planes are achieved through small plated vias (drilled with a laser), while plated through-holes are used for the logic service terminals for cable terminations and module terminals. The panels are the largest currently known in the industry, 600 × 700 mm, and have the most layers, 20. This paper describes new package designs which are achievable with the exceptional versatility that the new technologies provide. These technologies encompass vacuum lamination, electroless plating, photosensitive dielectric, laser drilling, automatic twisted-pair wire bonding, and other new approaches to printed circuits.

Journal ArticleDOI
D. C. Bossen1, M. Y. Hsiao1
TL;DR: The model developed in this paper allows the system designer to project the dynamic error-detection and fault-isolation coverages of the system as a function of the failure rates of components and the types and placement of error checkers, which has resulted in significant improvements to both detection and isolation in the IBM 3081 Processor Unit.
Abstract: As computer technologies advance to achieve higher performance and density, intermittent failures become more dominant than solid failures, with the result that the effectiveness of any diagnostic procedure which relies on reproducing failures is greatly reduced. This problem is solved at the system level by a new strategy of dynamic error detection and fault isolation based on error checking and analysis of captured information. The model developed in this paper allows the system designer to project the dynamic error-detection and fault-isolation coverages of the system as a function of the failure rates of components and the types and placement of error checkers, which has resulted in significant improvements to both detection and isolation in the IBM 3081 Processor Unit. The model has also resulted in new probabilistic isolation strategies based on the likelihood of failures. Our experiences with this model on several IBM products, including the 3081, show good correlation between the model and practical experiments.
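
A minimal sketch of the weighting idea behind projecting system-level detection coverage; the failure rates and checker coverages below are illustrative assumptions rather than the paper's model:

def system_coverage(components):
    # Failure-rate-weighted average of per-component error-detection coverages;
    # `components` is a list of (failure_rate, detection_coverage) pairs.
    total_rate = sum(rate for rate, _ in components)
    return sum(rate * cov for rate, cov in components) / total_rate

# Illustrative rates (failures per unit time) and checker coverages.
print(system_coverage([(5.0, 0.98), (2.0, 0.90), (1.0, 0.75)]))   # about 0.93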

Journal ArticleDOI
Michael Monachino1
TL;DR: The design verification methodology presented here saved some 66% from the 3081 product schedule, when compared with a schedule utilizing a conventional verification method, on almost 800 000 LSI logic circuits.
Abstract: This paper describes the changing environment of large-scale hardware designs as influenced by technology advancements and the growing use of design verification in the design implementation process. The design verification methodology presented here saved some 66% from the 3081 product schedule, when compared with a schedule utilizing a conventional verification method, on almost 800 000 LSI logic circuits. The paper discusses the use of software modeling techniques to verify LSI hardware designs, methods used for deciding when modeling should be stopped and hardware can be built with sufficient assurance to permit additional verification to continue on the hardware, methods for testing the hardware as it is assembled into a very large processor complex, and the organization of the design verification system to avoid duplicate creation of test cases for different stages of the design process. Experiences encountered in designing and verifying the 3081 system, a discussion of some shortcomings, and an endorsement of certain techniques and improvements for use in future designs are also presented.

Journal ArticleDOI
TL;DR: The concepts of automated diagnostics that were developed for and that are implemented in the IBM 3081 Processor Complex are presented and very good correlation between projected and measured effectiveness is found.
Abstract: The concepts of automated diagnostics that were developed for and that are implemented in the IBM 3081 Processor Complex are presented in this paper. Significant features of the 3081 diagnostics methodology are the capability to isolate intermittent as well as solid hardware failures, and the automatic isolation of a failure to the failing field-replaceable unit (FRU) in a high percentage of the cases. These features, which permit a considerable reduction in the time to repair a failure as compared to previous systems, are achieved by designing a machine which has a very high level of error-detection capability as well as special functions to facilitate fault isolation using Level-Sensitive Scan Design (LSSD), and which includes a Processor Controller to implement diagnostic microprograms. Intermittent failures are isolated by analyzing data captured at the detection of the error, and the analysis is concurrent with customer operations if the error is recoverable. A further improvement in the degree of isolation is achieved for solid failures by using automatically generated validation tests which detect and isolate stuck faults in the logic. The diagnostic package was designed to meet a specified value of isolation effectiveness, stated as the average number of FRUs replaced per failure. The technique used to estimate the isolation effectiveness of the diagnostic package and to evaluate proposals for improving isolation is described. Testing of the diagnostic package by hardware bugging indicates very good correlation between projected and measured effectiveness.
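
The stated effectiveness metric, the average number of FRUs replaced per failure, is an expectation over isolation outcomes; a small sketch with assumed probabilities, not figures from the paper:

def average_frus_per_failure(isolation_outcomes):
    # Expected number of field-replaceable units replaced per failure, given
    # (probability, FRUs_replaced) pairs for the possible isolation outcomes.
    return sum(p * n for p, n in isolation_outcomes)

# Assumed distribution: 85% of failures isolate to 1 FRU, 12% to 2, 3% to 3.
print(average_frus_per_failure([(0.85, 1), (0.12, 2), (0.03, 3)]))   # 1.18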

Journal ArticleDOI
R. N. Gustafson1, F. J. Sparacio1
TL;DR: The design aspects and the characteristics of the 3081 Processor Unit are described and the tradeoffs that were made due to the implementation of LSI are presented and some thoughts concerning VLSI implementations are explored.
Abstract: Significantly new challenges were presented for the design of the 3081 Processor Unit since it was the first IBM large system implemented in LSI technology. Solutions had to be found for a new set of problems in order to achieve the required product objectives while maintaining an acceptable development cost and schedule. In this paper, the design aspects and the characteristics of the 3081 Processor Unit are described and the tradeoffs that were made due to the implementation of LSI are presented. A design strategy was chosen that included tradeoffs covering the areas of machine organization, performance level, implementation costs, testing and servicing aids, and development schedules. An innovative verification effort was introduced into the design process, capitalizing on a hardware flowcharting discipline and rigorous design rules. On the basis of this development experience, some thoughts concerning VLSI implementations are explored.

Journal ArticleDOI
George G. Collins1, Cary W. Halsted1
TL;DR: In this article, a single-step chlorobenzene liftoff process using a diazo-type resist to manufacturing lines produced problems not encountered during development and pilot-line work.
Abstract: Introduction of the single-step chlorobenzene liftoff process using a diazo-type resist to manufacturing lines produced problems not encountered during development and pilot-line work. Variances in the structure of the photoresist liftoff image are the result of complex interactions among exposure, chlorobenzene soaking, development, and post-application baking conditions. Effects produced by these variables can be controlled by monitoring the linewidth, overhang, and height of the liftoff resist structure using a scanning electron microscope (SEM). Loss of resist thickness during the chlorobenzene soak is used instead of penetration, as measured on SEM photographs, to monitor the soaking process. Data are presented on the creation and stability of the overhang structure, the process controls required to achieve that stability, and the interactions among the process variables. The process, as practiced in a manufacturing mode, was found to have greatest reproducibility at low exposure, with a combination of long soaking times and high post-application baking temperatures.

Journal ArticleDOI
TL;DR: In this paper, a prototype of an electron-beam proximity printer is described which shadow-projects patterns of chip-size transmission masks onto wafers, and experiments to replicate mask patterns are directed at demonstrating the applicability of this lithographic method to high-speed printing of repetitive patterns on wafer.
Abstract: A laboratory prototype of an electron-beam proximity printer is described which shadow-projects patterns of chip-size transmission masks onto wafers. Electron-beam transmission masks with physical holes at transparent areas have been fabricated with the smallest structures down to 0.3 µm. Experiments to replicate mask patterns were directed at demonstrating the applicability of this lithographic method to high-speed printing of repetitive patterns on wafers. Linewidth resolution and positional accuracy, as well as exposure speed, meet the requirements for micron and submicron lithography.

Journal ArticleDOI
T. J. Chuang1
TL;DR: In this article, the laser-enhanced chemical etching of Si, SiO2, Ta, and Te films with halogen-containing gases excited by a pulsed CO2 laser and a continuous-wave (cw) Ar+ laser has been studied.
Abstract: The laser-enhanced chemical etching of Si, SiO2, Ta, and Te films with halogen-containing gases excited by a pulsed CO2 laser and a continuous-wave (cw) Ar+ laser has been studied. Detailed measurements of the etch rates as functions of the laser frequency, the laser intensity, and the gas pressure have been performed for some of the gas-solid systems. The enhanced surface reactions have been classified into three categories: those activated by the vibrational excitation of the etchant molecules, those with radicals generated by photodissociation, and those induced by laser excitation of solid substrates. Examples which illustrate the effects of laser radiation on these surface photochemical processes are given. Achievable etch rates and spatial resolutions for the various reaction mechanisms are also examined.

Journal ArticleDOI
TL;DR: An overview of the advances in technology and in the design process, as well as enhancements in the system design that were associated with the development of the IBM 3081 are presented.
Abstract: The IBM 3081 Processor Complex consists of a 3081 Processor Unit and supporting units for processor control, power, and cooling. The Processor Unit, completely implemented in LSI technology, has a dyadic organization of two central processors, each with a 26-ns machine cycle time, and executes System/370 instructions at approximately twice the rate of the IBM 3033. This paper presents an overview of the advances in technology and in the design process, as well as enhancements in the system design that were associated with the development of the IBM 3081. Application of LSI technology to the design of the 3081 Processor Unit, which contains almost 800 000 logic circuits, required extensions to silicon device packaging, interconnection, and cooling technologies. A key achievement in the 3081 is the incorporation of the thermal conduction module (TCM), which contains as many as 45 000 logic circuits, provides a cooling capacity of up to 300 W, and allows the elimination of one complete level of packaging--the card level. Reliability and serviceability objectives required new approaches to error detection and fault isolation. Innovations in system packaging and organization, and extensive design verification by software and hardware models, led to the realization of the increased performance characteristics of the system.

Journal ArticleDOI
TL;DR: The basic idea of the algorithm is to "coat" the borders between the regions from both sides in two separate border-following procedures called island following and object following, which can be considerably simplified for the binary image case.
Abstract: This paper presents a new segmentation and coding algorithm for nonbinary images. The algorithm performs contour coding of regions of equally valued and connected pixels. It consists of two distinct phases: raster scanning and border following. In this sense it is similar to algorithms presented by Kruse. However, the algorithm of this paper is considerably improved since it correctly segments truly nonbinary images. The basic idea of the algorithm is to "coat" (color, label) the borders (the cracks) between the regions from both sides in two separate border-following procedures called island following and object following. Thus, all adjacencies between the objects are systematically explored and noted. Furthermore, the raster scanner, which exhaustively searches the image for new regions, can easily determine from existing/nonexisting coating which boundaries have been traced out and which have not. The algorithm can be considerably simplified for the binary image case.
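
A minimal sketch of the raster-scan region grouping that such a contour coder starts from (4-connected regions of equally valued pixels); the two-sided border "coating" and crack-following steps of the paper are not reproduced here:

from collections import deque

def label_regions(image):
    # Raster-scan the image for unlabeled pixels and grow each new region by
    # breadth-first search over 4-connected neighbors of equal value.
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y0 in range(h):
        for x0 in range(w):
            if labels[y0][x0]:
                continue
            next_label += 1
            value, queue = image[y0][x0], deque([(y0, x0)])
            labels[y0][x0] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not labels[ny][nx] \
                            and image[ny][nx] == value:
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels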