
Showing papers in "IBM Journal of Research and Development in 2001"


Journal ArticleDOI
TL;DR: This paper reviews in more detail related work that originated at IBM during the last four years and has led to the fabrication of high-performance organic transistors on flexible, transparent plastic substrates requiring low operating voltages.
Abstract: In this paper we review recent progress in materials, fabrication processes, device designs, and applications related to organic thin-film transistors (OTFTs), with an emphasis on papers published during the last three years. Some earlier papers that played an important role in shaping the OTFT field are included, and a number of previously published review papers that cover that early period more completely are referenced. We also review in more detail related work that originated at IBM during the last four years and has led to the fabrication of high-performance organic transistors on flexible, transparent plastic substrates requiring low operating voltages.

1,192 citations


Journal ArticleDOI
TL;DR: This paper provides an overview of the synthetic techniques used to prepare colloidal nanocrystals of controlled composition, size, shape, and internal structure, and of the methods for manipulating these NCs.
Abstract: This paper provides an overview of the synthetic techniques used to prepare colloidal nanocrystals (NCs) of controlled composition, size, shape, and internal structure and the methods for manipulating these NCs.

1,013 citations


Journal ArticleDOI
TL;DR: This paper reviews the crystal structures and physical properties of one family of crystalline, self-assembling, organic-inorganic hybrids based on the layered perovskite framework.
Abstract: Organic-inorganic hybrid materials enable the integration of useful organic and inorganic characteristics within a single molecular-scale composite. Unique electronic and optical properties have been observed, and many others can be envisioned for this promising class of materials. In this paper, we review the crystal structures and physical properties of one family of crystalline, self-assembling, organic-inorganic hybrids based on the layered perovskite framework. In addition to exhibiting a number of potentially useful properties, the hybrids can be deposited as thin films using simple and inexpensive techniques, such as spin coating or single- source thermal ablation. The relatively new field of "organic-inorganic electronics" offers a variety of exciting technological opportunities. Several recent demonstrations of electronic and optical devices based on organic-inorganic perovskites are presented as examples.

641 citations


Journal ArticleDOI
TL;DR: A high-resolution printing technique based on transferring a pattern from an elastomeric stamp to a solid substrate by conformal contact is developed; it is an attempt to enhance the accuracy of classical printing to a precision comparable with optical lithography, creating a low-cost, large-area, high-resolution patterning process.
Abstract: We are developing a high-resolution printing technique based on transferring a pattern from an elastomeric stamp to a solid substrate by conformal contact. This is an attempt to enhance the accuracy of classical printing to a precision comparable with optical lithography, creating a low-cost, large-area, high-resolution patterning process. First, we introduce the components of this technique, called soft lithography, and review its evolution. Topics described in detail are the stamp material, stamp architecture, pattern design rules, and printing tools. The accuracy of the prints made by thin patterned elastomeric layers supported on a stiff and flexible backplane is then assessed, and defects are characterized using a new electrical metrology approach. This is followed by a discussion of various printing processes used in our laboratory: 1) thiol printing for high-resolution patterns of noble metals that may also be used as sacrificial masks; 2) confined contact processing with liquids in cavities or channels to chemically convert a substrate or deposit layers of materials or biomolecules; 3) printing of catalysts to mediate patterned deposition of metals; and 4) structured, light-guiding stamps for transferring high-resolution patterns into photoresists. Finally, we compare classical and high-resolution printing approaches, and describe their potential for emerging micro- and nanoscale patterning technologies.

557 citations


Journal ArticleDOI
TL;DR: This group has developed techniques that detect the occurrence of software aging due to resource exhaustion, estimate the time remaining until the exhaustion reaches a critical level, and automatically perform proactive software rejuvenation of an application, process group, or entire operating system.
Abstract: Software failures are now known to be a dominant source of system outages. Several studies and much anecdotal evidence point to "software aging" as a common phenomenon, in which the state of a software system degrades with time. Exhaustion of system resources, data corruption, and numerical error accumulation are the primary symptoms of this degradation, which may eventually lead to performance degradation of the software, crash/hang failure, or other undesirable effects. "Software rejuvenation" is a proactive technique intended to reduce the probability of future unplanned outages due to aging. The basic idea is to pause or halt the running software, refresh its internal state, and resume or restart it. Software rejuvenation can be performed by relying on a variety of indicators of aging, or on the time elapsed since the last rejuvenation. In response to the strong desire of customers to be provided with advance notice of unplanned outages, our group has developed techniques that detect the occurrence of software aging due to resource exhaustion, estimate the time remaining until the exhaustion reaches a critical level, and automatically perform proactive software rejuvenation of an application, process group, or entire operating system, depending on the pervasiveness of the resource exhaustion and our ability to pinpoint the source. This technology has been incorporated into the IBM Director for xSeries servers. To quantitatively evaluate the impact of different rejuvenation policies on the availability of cluster systems, we have developed analytical models based on stochastic reward nets (SRNs). For time-based rejuvenation policies, we determined the optimal rejuvenation interval based on system availability and cost. We also analyzed a rejuvenation policy based on prediction, and showed that it can further increase system availability and reduce downtime cost. These models are very general and can capture a multitude of cluster system characteristics, failure behavior, and performability measures, which we are just beginning to explore.

307 citations
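The prediction step described above lends itself to a compact illustration: sample a resource metric periodically, fit a least-squares trend, and estimate the time remaining until a critical level is reached. The Python sketch below is a hypothetical stand-in, not the IBM Director implementation; the metric, threshold, and sampling interval are invented for the example.

```python
# Hypothetical illustration of trend-based exhaustion prediction; not the
# IBM Director implementation described in the abstract above.
from statistics import mean

def time_to_exhaustion(samples, interval_s, critical_level):
    """Fit a least-squares line to resource-usage samples and estimate
    how many seconds remain until usage reaches critical_level.
    Returns None if usage is flat or decreasing (no predicted exhaustion)."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope_num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
    slope_den = sum((x - x_bar) ** 2 for x in xs)
    slope = slope_num / slope_den          # usage units per sample
    if slope <= 0:
        return None
    intercept = y_bar - slope * x_bar
    samples_left = (critical_level - intercept) / slope - (n - 1)
    return max(samples_left, 0.0) * interval_s

# Example: swap usage (MB) sampled every 60 s, trending upward.
usage = [100, 112, 119, 131, 140, 152]
eta = time_to_exhaustion(usage, interval_s=60, critical_level=512)
if eta is not None and eta < 3600:         # less than an hour of headroom
    print(f"~{eta / 60:.0f} min to exhaustion; scheduling rejuvenation")
```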


Journal ArticleDOI
J. M. Shaw, Paul Seidler
TL;DR: The increased resolution capability of photoresists combined with optical tool enhancements has enabled the fabrication of 1.2 million transistors/cm² with feature sizes of 180 nm, significantly smaller than the 248-nm exposure wavelength of the current optical exposure tool—an achievement that was not considered possible a few years ago.
Abstract: For the past forty years inorganic silicon and gallium arsenide semiconductors, silicon dioxide insulators, and metals such as aluminum and copper have been the backbone of the semiconductor industry. However, there has been a growing research effort in “organic electronics” to improve the semiconducting, conducting, and light-emitting properties of organics (polymers, oligomers) and hybrids (organic–inorganic composites) through novel synthesis and self-assembly techniques. Performance improvements, coupled with the ability to process these “active” materials at low temperatures over large areas on materials such as plastic or paper, may provide unique technologies and generate new applications and form factors to address the growing needs for pervasive computing and enhanced connectivity. If we review the growth of the electronics industry, it is clear that innovative organic materials have been essential to the unparalleled performance increase in semiconductors, storage, and displays at the consistently lower costs that we see today. However, the majority of these organic materials are either used as sacrificial stencils (photoresists) or passive insulators and take no active role in the electronic functioning of a device. They do not conduct current to act as switches or wires, and they do not emit light. For semiconductors, two major classes of passive organic materials have made possible the current cost/performance ratio of logic chips: photoresists and insulators. Photoresists are the key materials that define chip circuitry and enable the constant shrinking of device dimensions [1–3]. In the late 1960s, photoresist materials limited the obtainable resolution of the optical tools to ∼5.0 µm (∼500 transistors/cm²). As optical tools continued to improve, owing to unique lens design and light sources, new resists had to be developed to continue lithographic scaling. Chemists created unique photosensitive polymers to satisfy the resolution, sensitivity, and processing needs of each successive chip generation, and now photoresist materials improve upon the resolution that could normally be provided by an optical exposure tool. The increased resolution capability of photoresists combined with optical tool enhancements has enabled the fabrication of 1.2 million transistors/cm² with feature sizes of 180 nm, significantly smaller than the 248-nm exposure wavelength of the current optical exposure tool—an achievement that was not considered possible a few years ago. Polymeric insulators have also been essential to the performance and reliability of semiconductor devices. They were first used in the packaging of semiconductor chips, where low-cost epoxy materials found applications as insulation for wiring in the fabrication of printed wiring boards and as encapsulants to provide support/protection and hence reliability for the chips [4, 5]. Although the first polymeric dielectrics were used in the packaging of chips, IBM recently introduced a polymer that replaces the silicon dioxide dielectric typically used on-chip throughout the industry as an insulator. The seven levels of metal wiring required to connect the millions of transistors on a chip can significantly affect chip performance because of signal propagation delay and crosstalk between wiring. Improvement in interconnect performance requires reduction of the resistance (R) and capacitance (C). IBM was the first to use copper to replace aluminum wiring as a low-resistivity metal, and the first to use a low-k

286 citations


Journal ArticleDOI
Marie Angelopoulos1
TL;DR: This paper reviews some of these potential applications and briefly describes possible future applications of conducting polymers for use as interconnections or for electronic devices.
Abstract: Conjugated polymers in the nondoped and doped conducting state have an array of potential applications in the microelectronics industry. Conducting polymers are effective discharge layers as well as conducting resists in electron beam lithography, find applications in metallization (electrolytic and electroless) of plated through-holes for printed circuit board technology, provide excellent electrostatic discharge protection for packages and housings of electronic equipment, provide excellent corrosion protection for metals, and may have applications in electromagnetic interference shielding. This paper reviews some of these applications and briefly describes possible future applications of conducting polymers for use as interconnections or for electronic devices.

282 citations


Journal ArticleDOI
TL;DR: This architecture is the first of its kind to employ real-time main-memory content compression at a performance competitive with the best the market has to offer.
Abstract: Several technologies are leveraged to establish an architecture for a low-cost, high-performance memory controller and memory system that more than double the effective size of the installed main memory without significant added cost. This architecture is the first of its kind to employ real-time main-memory content compression at a performance competitive with the best the market has to offer. A large low-latency shared cache exists between the processor bus and a content-compressed main memory. High-speed, low-latency hardware performs real-time compression and decompression of data traffic between the shared cache and the main memory. Sophisticated memory management hardware dynamically allocates main-memory storage in small sectors to accommodate storing the variable-sized compressed data without the need for "garbage" collection or significant wasted space due to fragmentation. Though the main-memory compression ratio is limited to the range 1:1 to 64:1, typical ratios range between 2:1 and 6:1, as measured in "real-world" system applications.

195 citations
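The sector-based allocation idea can be sketched in a few lines. The toy model below assumes a 256-byte sector, a simple free list, and a per-line directory; the actual controller implements this in hardware with different structures, so treat it only as an illustration of why variable-size compressed lines need neither garbage collection nor compaction.

```python
# Toy model of sector-based allocation for variable-size compressed lines.
# The sector size and directory layout are illustrative assumptions, not
# the actual memory-controller design described in the abstract.
SECTOR_BYTES = 256

class SectoredMemory:
    def __init__(self, total_sectors):
        self.free = list(range(total_sectors))   # free list of sector ids
        self.directory = {}                      # line address -> sector ids

    def store(self, line_addr, compressed_len):
        """Allocate just enough sectors to hold one compressed line."""
        needed = -(-compressed_len // SECTOR_BYTES)   # ceiling division
        if needed > len(self.free):
            raise MemoryError("out of sectors")
        self.directory[line_addr] = [self.free.pop() for _ in range(needed)]

    def release(self, line_addr):
        """Return a line's sectors to the free list."""
        self.free.extend(self.directory.pop(line_addr))

mem = SectoredMemory(total_sectors=16)
mem.store(0x1000, compressed_len=300)    # 300 B -> 2 sectors
mem.store(0x2000, compressed_len=90)     # 90 B  -> 1 sector
mem.release(0x1000)                      # sectors reused immediately
```

Because every sector is interchangeable, freeing a line never leaves unusable holes, which is the property the abstract credits for avoiding fragmentation.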


Journal ArticleDOI
TL;DR: This paper presents an overview of several resolution-enhancement techniques being developed and implemented in IBM for its leading-edge CMOS logic and memory products.
Abstract: Advances in lithography have contributed significantly to the advancement of the integrated circuit technology. While nonoptical next-generation lithography (NGL) solutions are being developed, optical lithography continues to be the workhorse for high-throughput very-large-scale integrated (VLSI) lithography. Extending optical lithography to the resolution levels necessary to support today’s aggressive product road maps increasingly requires the use of resolution-enhancement techniques. This paper presents an overview of several resolution-enhancement techniques being developed and implemented in IBM for its leading-edge CMOS logic and memory products.

163 citations
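For orientation, resolution-enhancement work is usually framed in terms of the Rayleigh scaling relations below; these are textbook results rather than results of the paper, and RETs can be read as ways of lowering the effective process factor k1.

```latex
% Standard Rayleigh scaling relations (textbook background, not a result
% of the paper): R is the minimum printable feature, DOF the depth of
% focus, \lambda the exposure wavelength, NA the numerical aperture.
% Resolution-enhancement techniques work by lowering the effective k_1.
\[
  R = k_1 \frac{\lambda}{\mathrm{NA}}, \qquad
  \mathrm{DOF} = k_2 \frac{\lambda}{\mathrm{NA}^2}
\]
```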


Journal ArticleDOI
TL;DR: This paper describes the adaptive multilevel finite element solution of the Poisson-Boltzmann equation for a microtubule on the NPACI Blue Horizon, a massively parallel IBM RS/6000® SP with eight POWER3 SMP nodes.
Abstract: By using new methods for the parallel solution of elliptic partial differential equations, the teraflops computing power of massively parallel computers can be leveraged to perform electrostatic calculations on large biological systems. This paper describes the adaptive multilevel finite element solution of the Poisson-Boltzmann equation for a microtubule on the NPACI Blue Horizon, a massively parallel IBM RS/6000® SP with eight POWER3 SMP nodes. The microtubule system is 40 nm in length and 24 nm in diameter, consists of roughly 600,000 atoms, and has a net charge of -1800 e. Poisson-Boltzmann calculations are performed for several processor configurations, and the algorithm used shows excellent parallel scaling.

125 citations
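For reference, the equation being solved has the standard nonlinear Poisson-Boltzmann form shown below; this is the textbook statement, and the paper's exact scaling and units may differ.

```latex
% The nonlinear Poisson-Boltzmann equation in its standard biophysics
% form (textbook statement; the paper's normalization may differ).
% \epsilon(x): dielectric coefficient; \bar{\kappa}(x): ion-screening
% coefficient; u(x): dimensionless potential; the sum runs over the
% fixed atomic point charges z_i at positions x_i.
\[
  -\nabla \cdot \bigl(\epsilon(x)\,\nabla u(x)\bigr)
  + \bar{\kappa}^{2}(x)\,\sinh u(x)
  = \sum_{i=1}^{N} z_i\,\delta(x - x_i)
\]
```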


Journal ArticleDOI
TL;DR: This paper provides an overview of recent work in the authors' laboratory on the chemical and physical processes that occur during post-exposure baking (PEB) of positive-tone CA resists, giving a clearer understanding of how this critical step in the lithographic imaging process will affect extendibility of the CA resist concept to nanoscale feature sizes.
Abstract: Chemically amplified (CA) resists are in widespread use for the fabrication of leading-edge microelectronic devices, and it is anticipated that they will see use well into the future. The refinement and optimization of these materials to allow routine imaging at dimensions that will ultimately approach the molecular scale will depend on an improved in-depth understanding of the materials and their processing. We provide here an overview of recent work in our laboratory on the chemical and physical processes that occur during post-exposure baking (PEB) of positive-tone CA resists. Our results provide a clearer understanding of how this critical step in the lithographic imaging process will affect extendibility of the CA resist concept to nanoscale feature sizes.
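As orientation for readers outside lithography, PEB in positive-tone CA resists is commonly modeled as acid diffusion coupled to acid-catalyzed deprotection. The generic textbook form is shown below; it is not claimed to be the specific model developed in the paper.

```latex
% Generic reaction-diffusion model of PEB in a positive-tone CA resist
% (textbook form; not claimed to be the paper's specific model).
% A(x,t): photogenerated acid; M(x,t): protected (insoluble) sites;
% D: acid diffusivity; k: deprotection rate constant.
\[
  \frac{\partial A}{\partial t} = \nabla \cdot \bigl(D\,\nabla A\bigr),
  \qquad
  \frac{\partial M}{\partial t} = -\,k\,A\,M
\]
```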

Journal ArticleDOI
TL;DR: Results show that hardware compression of main memory carries a negligible penalty compared with an uncompressed main memory and significantly increases performance for memory-starved applications; the memory content of an application can usually be compressed by a factor of 2.
Abstract: A novel memory subsystem called Memory Expansion Technology (MXT) has been built for fast hardware compression of main-memory content. This allows a memory expansion to present a "real" memory larger than the physically available memory. This paper provides an overview of the memory-compression architecture, its OS support under Linux and Windows®, and an analysis of the performance impact of memory compression. Results show that the hardware compression of main memory has a negligible penalty compared to an uncompressed main memory, and for memory-starved applications it increases performance significantly. We also show that the memory content of an application can usually be compressed by a factor of 2.
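The factor-of-2 claim is easy to sanity-check in software on any buffer of interest. The sketch below uses zlib as a rough LZ-family stand-in; it is not the MXT hardware algorithm, and the sample data are invented.

```python
# Rough software stand-in for gauging memory compressibility; zlib is NOT
# the MXT hardware algorithm, just a convenient LZ-family proxy.
import os
import zlib

def compression_factor(buf: bytes) -> float:
    """Return original_size / compressed_size for one buffer."""
    return len(buf) / len(zlib.compress(buf, level=1))

# Text-like data compresses well; random-looking data does not.
page = (b"struct page { unsigned long flags; void *mapping; } " * 80)[:4096]
print(f"text-like page: {compression_factor(page):.1f}x")
print(f"random page:    {compression_factor(os.urandom(4096)):.2f}x")
```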

Journal ArticleDOI
TL;DR: This paper reviews machine learning techniques based on the use of hidden Markov models (HMMs) for investigating biomolecular sequences and describes gene-prediction HMMs and protein family HMMs.
Abstract: The vast increase of data in biology has meant that many aspects of computational science have been drawn into the field. Two areas of crucial importance are large-scale data management and machine learning. The field between computational science and biology is variously described as "computational biology" or "bioinformatics." This paper reviews machine learning techniques based on the use of hidden Markov models (HMMs) for investigating biomolecular sequences. The approach is illustrated with brief descriptions of gene-prediction HMMs and protein family HMMs.
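To make the HMM approach concrete, here is a toy two-state model (coding vs. intergenic) decoded with the Viterbi algorithm. All probabilities are invented, and real gene-prediction HMMs use far richer state structures (codon position, splice sites, and so on).

```python
# Toy two-state HMM with made-up probabilities, decoded with the Viterbi
# algorithm; a real gene-prediction HMM has many more states.
import math

states = ("coding", "intergenic")
start  = {"coding": 0.5, "intergenic": 0.5}
trans  = {"coding":     {"coding": 0.9, "intergenic": 0.1},
          "intergenic": {"coding": 0.1, "intergenic": 0.9}}
emit   = {"coding":     {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
          "intergenic": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq):
    """Return the most probable state path for seq (log space)."""
    v = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for ch in seq[1:]:
        col, ptr = {}, {}
        for s in states:
            best_prev = max(states,
                            key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = (v[-1][best_prev] + math.log(trans[best_prev][s])
                      + math.log(emit[s][ch]))
            ptr[s] = best_prev
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi("GCGCGCATATATAT"))  # GC-rich prefix decodes as "coding"
```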

Journal ArticleDOI
TL;DR: The funnel-like nature of energy functions used for protein structure prediction determines their quality and can be quantified using landscape theory and multiple histogram sampling methods.
Abstract: Protein structure prediction is beginning to be, at least partially, successful. Evaluating predictions, however, has many elements of subjectivity, making it difficult to determine the nature and extent of improvements that are most needed. We describe how the funnel-like nature of energy functions used for protein structure prediction determines their quality and can be quantified using landscape theory and multiple histogram sampling methods. Prediction algorithms exhibit a "caldera"-like landscape rather than a perfectly funneled one. Estimates are made of the expected number of effectively distinct structures produced by a prediction algorithm.

Journal ArticleDOI
W. E. Howard, O. F. Prache
TL;DR: The requirements for microdisplays are reviewed, and the case is made that organic light-emitting diodes (OLEDs) are the best candidate transducer technology for meeting these requirements.
Abstract: Microdisplays, some of which exploit the dense electronic circuitry in a silicon chip, are enabling a new wave of ultraportable information products, including headsets for viewing movies and cell phones with full-screen Internet access. This paper reports the approach to microdisplay development at eMagin Corporation. The requirements for microdisplays are reviewed, and the case is made that organic light-emitting diodes (OLEDs) are the best candidate transducer technology for meeting these requirements. A 1280 × 1024 (SXGA) monochrome OLED microdisplay is described as an example.

Journal ArticleDOI
Alessandro Curioni, Wanda Andreoni
TL;DR: The physical and chemical properties of Alq3, one of the organic materials most commonly used as the light-emitting layer of OLEDs, and the interface with possible metal cathodes are investigated by means of first-principles computer simulations.
Abstract: The physical and chemical properties of tris(8-hydroxyquinolinato)aluminum (Alq3), one of the organic materials most commonly used as the light-emitting layer of OLEDs, and the interface with possible metal cathodes are investigated by means of first-principles computer simulations. A number of new insights have emerged from this study, and we emphasize the consequences of the properties thus discovered with respect to the functioning of OLED devices. In particular, computations can aid the design of novel Alq3 derivatives that should enhance the intrinsic luminescence of the pristine material.

Journal ArticleDOI
TL;DR: A new hybrid theoretical method for modeling the reactivities of large molecular systems is reviewed; the limitations of these models are described, along with how they may be further improved to reliably model the reactions of complicated metalloenzymes.
Abstract: Electronic structure theory, which in recent years has been actively and effectively applied to the modeling of chemical reactions involving transition-metal complexes, is now also being applied to the modeling of biological processes involving metalloenzymes. In the first part of this paper, we review our recent electronic structure studies using relatively simple models of two metalloenzymes--methane monooxygenase and ribonucleotide reductase. In the second part of the paper, we review a new hybrid theoretical method we have developed for modeling the reactivities of large molecular systems. We describe the limitations of these models and indicate how they may be further improved to reliably model the reactivities of complicated metalloenzymes.


Journal ArticleDOI
TL;DR: Transient experiments show that the delay time of electroluminescence at low voltages in these multilayer devices is controlled by the buildup of internal space charges, which facilitates electron injection, rather than by charge-carrier transport through the organic layers.
Abstract: Trapped and interfacial charges have significant impact on the performance of organic light-emitting devices (OLEDs). We have studied devices consisting of 20 nm copper phthalocyanine (CuPc) as the buffer and hole-injection layer, 50 nm N,N′-di(naphthalene-1-yl)-N,N′-diphenyl-benzidine (NPB) as the hole transport layer, and 65 nm tris(8-hydroxyquinolinato)aluminum (Alq3) as the electron transport and emitting layer sandwiched between a high-work-function metal and a semitransparent Ca electrode. Current-voltage measurements show that the device characteristics in the negative bias direction and at low positive bias below the built-in voltage are influenced by trapped charges within the organic layers. This is manifested by a strong dependence of the current in this range on the direction and speed of the voltage sweep. Low-frequency capacitance-voltage and static charge measurements reveal a voltage-independent capacitance in the negative bias direction and a significant increase between 0 and 2 V in the given device configuration, indicating the presence of negative interfacial charges at the NPB/Alq3 interface. Transient experiments show that the delay time of electroluminescence at low voltages in these multilayer devices is controlled by the buildup of internal space charges, which facilitates electron injection, rather than by charge-carrier transport through the organic layers. To summarize, our results clearly demonstrate that the tailoring of internal barriers in multilayer devices leads to a significant improvement in device performance.

Journal ArticleDOI
TL;DR: This paper reviews the issues involved in EST clustering and estimates that there are some 70,000 human transcription units; the total number of human proteins is estimated to be at least 85,000, and will be higher because of post-translational modification.
Abstract: A current question of considerable interest to both the medical and nonmedical communities concerns the number of human transcription units (which, for the purposes of this paper, are "genes") and proteins. Even with the recent announcement of the completion of the draft sequence of the human genome, it is still extremely difficult to predict the number of genes present in the genome. There are several methods for gene prediction, all involving computational tools. One way to approach this question, involving both computation and experiment, is to look at copies of fragments of messenger ribonucleic acid (mRNA) called expressed sequence tags (ESTs). The mRNA comes only from a gene being expressed, or transcribed, into RNA; by clustering mRNA fragments, we can try to reconstruct the expressed gene. While the final result is a very rough representation of the "true expressed transcript," it is probably within 20% of the real number. Here, we review the issues involved in EST clustering and present an estimate of the total number of human genes. Our results to date indicate that there are some 70,000 transcription units, with an average of 1.2 different transcripts per transcription unit. Thus, we estimate the total number of human proteins to be at least 85,000. The total number of proteins will be higher because of post-translational modification.
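A back-of-the-envelope check of how the abstract's figures fit together (our reading; the abstract does not spell out the rounding):

```latex
% Back-of-the-envelope check of the abstract's figures:
\[
  70{,}000 \ \text{transcription units}
  \times 1.2 \ \text{transcripts per unit}
  \approx 84{,}000 \ \text{transcripts},
\]
% of the same order as the stated lower bound of 85,000 proteins
% (roughly one protein per transcript, before post-translational
% modification).
```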

Journal ArticleDOI
TL;DR: This paper is an overview of work in the IBM Microelectronics Division to extend electron-beam lithography technology to the projection level for use in next-generation lithography; the approach being explored, Projection Reduction Exposure with Variable Axis Immersion Lenses (PREVAIL), combines the high exposure efficiency of massively parallel pixel projection with scanning-probe-forming systems to dynamically correct for aberrations.
Abstract: This paper is an overview of work in the IBM Microelectronics Division to extend electron-beam lithography technology to the projection level for use in next-generation lithography. The approach being explored, Projection Reduction Exposure with Variable Axis Immersion Lenses (PREVAIL), combines the high exposure efficiency of massively parallel pixel projection with scanning-probe-forming systems to dynamically correct for aberrations. In contrast to optical lithography systems, electron-beam lithography systems are not diffraction-limited, and their ultimate attainable resolution is, for practical purposes, unlimited. However, their throughput has been, and continues to be, the major challenge in electron-beam lithography. The work described here, currently continuing, has been undertaken to address that challenge. Novel electron optical methods have been used and their feasibility ascertained by means of a Proof-Of-Concept (POC) system containing a Curvilinear Variable Axis Lens (CVAL) for achieving large-distance (>20 mm at a reticle) beam scanning at a resolution of <100 nm, and a high-emittance electron source for achieving uniform illumination of a 1-mm² section of the reticle. A production-level prototype PREVAIL system, an "alpha" system, for the 100-nm node has been under development jointly with the Nikon Corporation. At the writing of this paper, its electron-optics subsystem had been brought up to basic operation and was being prepared for integration with its mechanical and vacuum subsystem, under development at Nikon facilities.

Journal ArticleDOI
TL;DR: This research was done to explore the feasibility of computer architectures in which data are decompressed/compressed on cache misses/writebacks; the results led to and were implemented in IBM Memory Expansion Technology (MXT), which for typical systems yields a factor-of-2 expansion in effective memory size with generally minimal effect on performance.
Abstract: An overview of a set of algorithms and data structures developed for compressed-memory machines is given. These include 1) very fast compression and decompression algorithms, for relatively small fixed-size lines, that are suitable for hardware implementation; 2) methods for storing variable-size compressed lines in main memory that minimize overheads due to directory size and storage fragmentation, but that are simple enough for implementation as part of a system memory controller; 3) a number of operating system modifications required to ensure that a compressed-memory machine never runs out of memory as the compression ratio changes dynamically. This research was done to explore the feasibility of computer architectures in which data are decompressed/compressed on cache misses/writebacks. The results led to and were implemented in IBM Memory Expansion Technology (MXT), which for typical systems yields a factor of 2 expansion in effective memory size with generally minimal effect on performance.

Journal ArticleDOI
Hiroshi Ito1
TL;DR: This paper describes and compares the dissolution behavior of different polymers employed in chemically amplified imaging at 248, 193, and 157 nm, including polymers bearing a hexafluoroisopropanol functionality for base solubility.
Abstract: The aqueous base development step is one of the most critical processes in modern lithographic imaging technology. Sinusoidal modulation of the exposing light intensity must be converted to a step function in the resist film during the development process. Thus, in designing high-performance resists, controlling the dissolution behavior of the resist polymer film in aqueous developer is of the utmost importance. This paper describes and compares the dissolution behavior of different polymers employed in chemically amplified imaging at 248, 193, and 157 nm. The polymers discussed in this paper are polyhydroxystyrene derivatives (248 nm), functionalized poly(cycloolefins) containing carboxylic acid (193 nm), and polymers bearing a hexafluoroisopropanol functionality for base solubility (157 nm).

Journal ArticleDOI
TL;DR: In this paper, a low-activation-energy chemically amplified resist based on ketal-protected poly(hydroxystyrene) is proposed for advanced mask-fabrication applications using the 75-kV IBM EL4+ vector scan e-beam exposure system.
Abstract: Resists for advanced mask-making with high-voltage electron-beam writing tools have undergone dramatic changes over the last three decades. From PMMA and the other early chain-scission resists for micron dimensions to the aqueous-base-developable, dry-etchable chemically amplified systems being developed today, careful tuning of the chemistry and processing conditions of these resist systems has allowed the patterning of photomasks of increasing complexity containing increasingly finer features. Most recently, our research efforts have been focused on a low-activation-energy chemically amplified resist based on ketal-protected poly(hydroxystyrene). These ketal resist systems, or KRSs, have undergone a series of optimization and evaluation cycles in order to fine-tune their performance for advanced mask-fabrication applications using the 75-kV IBM EL4+ vector scan e-beam exposure system. The experiments have led to an optimized formulation, KRS-XE, that exhibits superior lithographic performance and has a high level of processing robustness. In addition, we describe advanced formulations of KRS-XE incorporating organometallic species, which have shown superior dry-etch resistance to novolak-based resists in the Cr etch process while maintaining excellent lithographic performance. Finally, current challenges facing the implementation of a chemically amplified resist in the photomask manufacturing process are outlined, along with current approaches being pursued to extend the capabilities of KRS technology.

Journal ArticleDOI
TL;DR: This paper discusses a class of algorithms for subset selection rooted in the principles of multiobjective optimization, and employs an objective function that encodes all of the desired selection criteria, and then uses a simulated annealing or evolutionary approach to identify the optimal subset from among the vast number of possibilities.
Abstract: Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: Only one property or measure seldom defines the quality of the design. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
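The selection scheme the abstract describes, simulated annealing over a scalarized multiobjective score, can be sketched briefly. The scoring terms, weights, and cooling schedule below are illustrative stand-ins, not the paper's actual criteria.

```python
# Minimal simulated-annealing subset selection over a weighted multi-
# objective score; the objective and schedule are illustrative stand-ins
# for the paper's diversity/activity/cost criteria.
import math
import random

def anneal_subset(items, k, score, steps=20000, t0=1.0, t_end=0.01):
    """Pick a k-subset of items (nearly) maximizing score(subset)."""
    current = random.sample(items, k)
    cur_s = score(current)
    best, best_s = list(current), cur_s
    for i in range(steps):
        t = t0 * (t_end / t0) ** (i / steps)             # geometric cooling
        cand = list(current)
        pool = [x for x in items if x not in current]    # swap move:
        cand[random.randrange(k)] = random.choice(pool)  # one out, one in
        s = score(cand)
        if s >= cur_s or random.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = list(cand), s
    return best, best_s

# Hypothetical two-term objective: reward spread-out values ("diversity")
# and penalize total cost; the weights are arbitrary.
random.seed(0)
library = [(random.uniform(0, 10), random.uniform(1, 5)) for _ in range(60)]

def score(sub):
    vals = sorted(v for v, _ in sub)
    diversity = min(b - a for a, b in zip(vals, vals[1:]))
    cost = sum(c for _, c in sub)
    return 2.0 * diversity - 0.1 * cost

print(anneal_subset(library, k=8, score=score))
```

Accepting occasional downhill moves with probability exp((s - cur_s)/t) is what lets the search escape local optima, which matters because subset-selection landscapes of this kind are highly multimodal.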

Journal ArticleDOI
Santos F. Alvarado, L. Rossi, Peter Müller, Paul Seidler, Walter Riess
TL;DR: This paper presents an overview of the current status of work on scanning-tunneling-microscope-based (STM) spectroscopy and electroluminescence (EL) excitation to study the physical and electronic structure of organic materials used in organic light-emitting devices (OLEDs).
Abstract: We present an overview of the current status of our work on scanning-tunneling-microscope-based (STM) spectroscopy and electroluminescence (EL) excitation to study the physical and electronic structure of organic materials used in organic light-emitting devices (OLEDs). By these means we probe the critical device parameters in charge-carrier injection and transport, i.e., the height of the barrier for charge-carrier injection at interfaces between different materials and the energy gap between positive and negative polaronic states. In combination with optical absorption measurements, we gauge the exciton binding energy, a parameter that determines energy transport and EL efficiency. In STM experiments involving organic EL excitation, the tip functions as an OLED electrode in a highly localized fashion, allowing one to map the spatial distribution of the EL intensity across thin-film samples with nanometer lateral resolution as well as to measure the local EL emission spectra and the influence of thin-film morphology.

Journal ArticleDOI
TL;DR: This paper considers the design of a compressed random-access memory (C-RAM) used at the lowest level of a system's main-memory hierarchy; the work forms the basis for the memory organization of IBM Memory Expansion Technology (MXT).
Abstract: The design of a compressed random-access memory (C-RAM) is considered. Using a C-RAM at the lowest level of a system's main-memory hierarchy, cache lines are stored in a compressed format and dynamically decompressed to handle cache misses at the next higher level of memory. The requirement that compression/decompression, address translation, and memory management be performed by hardware has implications for the directory structure and storage allocation designs used within the C-RAM. Various new approaches, summarized here, are necessary in these areas in order to have methods that are amenable to hardware implementation. Furthermore, there are numerous design issues for the directory and storage management architectures. We consider a number of these issues, and present the results of evaluations of various approaches using analytic methods and simulations. This research was done as part of a project to explore the feasibility of compressed-memory systems; it forms the basis for the memory organization of IBM Memory Expansion Technology (MXT).

Journal ArticleDOI
TL;DR: This paper describes an analysis approach for evaluating finite cache penalties, based on miss rates and queuing theory combined with empirical relations between the levels of a memory hierarchy; the approach has been implemented in a spreadsheet and used successfully to perform early engineering tradeoffs for many uniprocessor and multiprocessor memory hierarchies.
Abstract: Advances in technology have provided a continuing improvement in processor speed and capacity of attached main memory. The increasing gap between main memory and processor cycle times has required increasingly more levels of caching to prevent performance degradation. The net result is that the inherent delay of a memory hierarchy associated with any computing system is becoming the major performance-determining factor and has inspired many types of analysis methods. While an accurate performance-evaluation tool requires the use of trace-driven simulators, good approximations and significant insight can be obtained by the use of analytical models to evaluate finite cache penalties based on miss rates (or miss ratios) and queuing theory combined with empirical relations between various levels of a memory hierarchy. Such tools make it possible to readily determine trends in performance vs. changes in input parameters. This paper describes such an analysis approach--one which has been implemented in a spreadsheet and used successfully to perform early engineering tradeoffs for many uniprocessor and multiprocessor memory hierarchies.
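The miss-rate style of analysis the paper builds on can be illustrated with the classic recursive average-memory-access-time relation. The sketch below is not the paper's spreadsheet tool, which also folds in queuing effects; all numbers in the example are invented.

```python
# Classic recursive average-memory-access-time (AMAT) model, in the spirit
# of the miss-rate analysis described above; not the paper's spreadsheet
# tool, which additionally models queuing effects.

def avg_access_time(levels, memory_time):
    """levels: list of (hit_time_cycles, miss_rate) ordered L1 -> Ln.
    AMAT_i = hit_i + miss_i * AMAT_{i+1}, with memory as the final level."""
    amat = memory_time
    for hit, miss in reversed(levels):
        amat = hit + miss * amat
    return amat

# Example: L1 is 1 cycle with 5% misses; 20% of L1 misses also miss in a
# 10-cycle L2; memory costs 200 cycles.
print(avg_access_time([(1, 0.05), (10, 0.20)], memory_time=200))
# -> 1 + 0.05 * (10 + 0.20 * 200) = 3.5 cycles
```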

Journal ArticleDOI
TL;DR: The results indicate that MXT improves price/performance by 25% to 70%.
Abstract: Memory Expansion Technology (MXT™) has been discussed in a number of forums. It is a hardware-implemented means for software-transparent on-the-fly compression of the main-memory content of a computer system. For a very broad set of workloads, it provides a compression ratio of 2:1 or better. This ability to compress and store data in fewer bytes effectively doubles the apparent capacity of memory at minimal cost. While it is clear that a doubling of memory at little cost is bound to improve the price/performance of a system that can be offered to customers, the magnitude of the impact of MXT on price/performance has not been quantified. This paper estimates the range of price/performance improvements for typical workloads from available data. To summarize, the results indicate that MXT improves price/performance by 25% to 70%. The competitive impact of such a large step function in price/performance from a single technology is profound; it is comparable to the entire gross margin in the competitive market for "PC servers."
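The direction of the estimate can be seen with deliberately hypothetical numbers (the memory fraction m below is not taken from the paper):

```latex
% Illustrative arithmetic only; the memory fraction m is hypothetical.
% If memory is a fraction m of system price, matching MXT's doubled
% effective capacity conventionally costs a factor (1 + m) more at equal
% performance, so
\[
  \frac{(\text{price/perf})_{\text{no MXT}}}{(\text{price/perf})_{\text{MXT}}}
  \approx 1 + m,
  \qquad m = 0.3 \;\Rightarrow\; \text{a } 30\% \ \text{improvement,}
\]
% which falls inside the paper's stated 25%-70% range.
```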

Journal ArticleDOI
TL;DR: It is shown that salt bridges and ion pairs with less optimal geometry often interconvert between being stabilizing and destabilizing, leading to the conclusion that the stabilizing, or destabilizing, contribution of a salt bridge to protein structure is conformer-dependent.
Abstract: In this paper we address the interrelationship between electrostatic interactions and protein flexibility. Protein flexibility may imply small conformational changes due to the movement of backbone and of side-chain atoms, and/or large-scale molecular motions, in which parts of the protein move as rigid bodies with respect to one another. In particular, we focus on oppositely charged side chains interacting to form salt bridges. The paper has two parts: In the first, we illustrate that the majority of the salt bridges are formed within the independently folding, compact hydrophobic units (HFUs) of the proteins. On the other hand, salt bridges forming across the HFUs, where one amino acid resides in one HFU and its pairing "spouse" in a second, appear to be avoided. In the second part of the paper, we address electrostatic interactions in conformational isomers around the native state. We pick the protein Cyanovirin-N as an example. We show that salt bridges and ion pairs, with less optimal geometry, often interconvert between being stabilizing and destabilizing. We conclude that the stabilizing, or destabilizing, contribution of a salt bridge to protein structure is conformer-dependent.