
Showing papers in "Lawrence Berkeley National Laboratory in 2001"


Journal Article
TL;DR: In this article, a bipartite graph based data clustering method is proposed, where terms and documents are simultaneously grouped into semantically meaningful co-clusters.
Abstract: Bipartite Graph Partitioning and Data Clustering. Hongyuan Zha, Xiaofeng He, Dept. of Comp. Sci. & Eng., Penn State Univ., State College, PA 16802, {zha,xhe}@cse.psu.edu; Chris Ding, Horst Simon, NERSC Division, Berkeley National Lab., Berkeley, CA 94720, {chqding,hdsimon}@lbl.gov; Ming Gu, Dept. of Math., U.C. Berkeley, Berkeley, CA 94720, mgu@math.berkeley.edu. ABSTRACT: Many data types arising from data mining applications can be modeled as bipartite graphs; examples include terms and documents in a text corpus, customers and purchasing items in market basket analysis, and reviewers and movies in a movie recommender system. In this paper, we propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. We show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. We point out the connection of our clustering algorithm to correspondence analysis used in multivariate analysis. We also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, we apply our clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency. 1. INTRODUCTION. Cluster analysis is an important tool for exploratory data mining applications arising from many diverse disciplines. Informally, cluster analysis seeks to partition a given data set into compact clusters so that data objects within a cluster are more similar than those in distinct clusters. The literature on cluster analysis is enormous, including contributions from many research communities (see [6, 9] for recent surveys of some classical approaches). Many traditional clustering algorithms are based on the assumption that the given dataset consists of covariate information (or attributes) for each individual data object, and cluster analysis can be cast as a problem of grouping a set of n-dimensional vectors, each representing a data object in the dataset. A familiar example is document clustering using the vector space model [1]. Here each document is represented by an n-dimensional vector, and each coordinate of the vector corresponds to a term in a vocabulary of size n. This formulation leads to the so-called term-document matrix A = (a_ij) for the representation of the collection of documents, where a_ij is the so-called term frequency, i.e., the number of times term i occurs in document j. In this vector space model terms and documents are treated asymmetrically, with terms considered as the covariates or attributes of documents. It is also possible to treat both terms and documents as first-class citizens in a symmetric fashion, and consider a_ij as the frequency of co-occurrence of term i and document j, as is done, for example, in probabilistic latent semantic indexing [12]. In this paper, we follow this basic principle and propose a new approach to model terms and documents as vertices in a bipartite graph, with edges of the graph indicating the co-occurrence of terms and documents. In addition, we can optionally use edge weights to indicate the frequency of this co-occurrence.
Cluster analysis for document collections in this context is based on a very intuitive notion: documents are grouped by topics; on one hand, documents in a topic tend to more heavily use the same subset of terms which form a term cluster, and on the other hand, a topic usually is characterized by a subset of terms, and those documents heavily using those terms tend to be about that particular topic. It is this interplay of terms and documents which gives rise to what we call bi-clustering, by which terms and documents are simultaneously grouped into semantically meaningful clusters. Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Clustering; G.1.3 [Numerical Linear Algebra]: Singular value decomposition; G.2.2 [Graph Theory]: Graph algorithms. General Terms: Algorithms, theory. Keywords: document clustering, bipartite graph, graph partitioning, spectral relaxation, singular value decomposition, correspondence analysis. (Part of this work was done while Xiaofeng He was a graduate research assistant at NERSC, Berkeley National Lab. CIKM '01, November 5-10, 2001, Atlanta, Georgia, USA.) Our clustering algorithm computes an approximate global optimal solution, while probabilistic latent semantic indexing relies on the EM algorithm and therefore might be prone to local minima even with the help of some annealing process.
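The spectral relaxation described above reduces bi-clustering to a partial SVD of a degree-normalized term-document matrix. The following is a minimal sketch of that idea (illustrative only, not the authors' code); the function and variable names are assumptions, and it splits terms and documents into just two co-clusters using the sign of the second singular vectors.

```python
# Minimal sketch of spectral bipartite co-clustering via partial SVD (illustrative only).
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import svds

def bipartite_bicluster(A):
    """Split terms (rows) and documents (columns) of a nonnegative weight matrix A into two co-clusters."""
    A = csr_matrix(A, dtype=float)
    d_t = np.asarray(A.sum(axis=1)).ravel()              # term degrees
    d_d = np.asarray(A.sum(axis=0)).ravel()              # document degrees
    Dt = diags(1.0 / np.sqrt(np.maximum(d_t, 1e-12)))
    Dd = diags(1.0 / np.sqrt(np.maximum(d_d, 1e-12)))
    An = Dt @ A @ Dd                                      # degree-normalized edge weight matrix
    U, s, Vt = svds(An, k=2)                              # partial SVD: two largest singular triplets
    second = np.argsort(s)[::-1][1]                       # index of the second-largest singular value
    term_labels = (Dt @ U[:, second] > 0).astype(int)     # sign split of the scaled singular vectors
    doc_labels = (Dd @ Vt[second, :] > 0).astype(int)
    return term_labels, doc_labels
```

A multi-way partition would typically cluster the rows of several leading scaled singular vectors (for example with k-means), as is common for spectral methods.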

295 citations


Journal Article
TL;DR: In this article, the centrality dependence of the charged multiplicity, transverse energy, and elliptic flow is studied in a hydrodynamic model, using a variety of different initializations which model the initial energy or entropy production process as a hard or soft process, respectively.
Abstract: Centrality Dependence of Multiplicity, Transverse Energy, and Elliptic Flow from Hydrodynamics. P.F. Kolb (a,b), U. Heinz (a), P. Huovinen (c), K.J. Eskola (d,e), and K. Tuominen (d). (a) Department of Physics, The Ohio State University, 174 West 18th Avenue, Columbus, OH 43210, USA; (b) Institut fur Theoretische Physik, Universitat Regensburg, D-93040 Regensburg, Germany; (c) Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; (d) Department of Physics, University of Jyvaskyla, P.O. Box 35, FIN-40351 Jyvaskyla, Finland; (e) Helsinki Institute of Physics, P.O. Box 64, FIN-00014 University of Helsinki, Finland. (Submitted to Nucl. Phys. A on March 21, 2001.) The centrality dependence of the charged multiplicity, transverse energy, and elliptic flow coefficient is studied in a hydrodynamic model, using a variety of different initializations which model the initial energy or entropy production process as a hard or soft process, respectively. While the charged multiplicity depends strongly on the chosen initialization, the pT-integrated elliptic flow for charged particles as a function of charged particle multiplicity and the pT-differential elliptic flow for charged particles in minimum bias events turn out to be almost independent of the initialization. PACS numbers: 25.75-q, 25.75.Ld. Keywords: relativistic heavy-ion collisions; elliptic flow; hydrodynamic model. I. INTRODUCTION. Elliptic flow [1] is a collective flow pattern which develops in non-central relativistic heavy-ion collisions as a result of the spatial deformation of the initial transverse overlap area. It requires rescattering among the produced particles as a mechanism to map the initial spatial deformation of the reaction zone onto the finally observed hadron momentum distributions. It is quantified by the second harmonic coefficient v_2 of an azimuthal Fourier decomposition of the measured spectrum dN/(dy pT dpT dφ) [2]. Its magnitude v_2 and its shape v_2(pT) as a function of the hadron transverse momentum are sensitive to the scattering rate among the produced secondaries, especially during the dense early stage of the expansion [3,4]. The largest elliptic flow signal, especially at high pT, arises in hydrodynamic simulations [1,5,6] which assume local thermal equilibrium at every space-time point, i.e. essentially instantaneous thermalization or infinite scattering rate. Surprisingly, such hydrodynamic simulations [5] are in very good agreement with first results from √s = 130 A GeV Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC) [7-11], up to transverse momenta of 1.5-2 GeV/c. It was recently suggested [8,9,12] that a combined analysis of the full set of hadronic single particle spectra and their elliptic flow as a function of collision centrality should allow to outline the domain of applicability of the hydrodynamic approach. Inside this domain such an analysis would constrain the initial baryon and energy density and the final freeze-out temperature sufficiently well to become sensitive to details of the equation of state (EOS) of the fireball matter [9]. In this context v_2 provides access to the EOS during the early expansion stage [3-5] which, at RHIC energies, is presumably in the quark-gluon plasma (QGP) phase. Direct verification of the phase transition between QGP at high energy density and a hadron resonance gas at lower energy density should then become possible by accurately measuring the excitation function of radial and elliptic flow [5].
In the hydrodynamic limit the EOS affects the elliptic flow signal through the velocity of sound, c_s = √(dP/de) [1]. However, the sensitivity of v_2 to c_s is not very strong, and even a first order phase transition, where c_s vanishes in the mixed phase, affects v_2 only at the 20% level [5]. This makes accurate measurements and systematic theoretical checks indispensable. One possible source of ambiguity, which has not been systematically investigated in previous studies [1,5,6,8,9,12], is the sensitivity of the radial and elliptic flow pattern to the shape of the initial transverse density profile. The latter depends on the scaling of secondary particle production with the number of colliding nucleons, which itself is controlled by the collision centrality. In the present work, we investigate five options which are expected to span the realistic range of possibilities. In the first four, either the initial energy or the initial entropy density are assumed to scale with either the number of wounded nucleons ("soft" or non-perturbative particle production) or the number of binary nucleon-nucleon collisions ("hard" or "perturbative" particle production). In the fifth model perturbative particle production is modified by implementing gluon shadowing [13] in the initial state and by limiting the growth of the production cross section by gluon saturation in the final state ("saturation model" [14,15]). This brings in some non-perturbative elements as well. In the first four parametrizations we normalize the initial energy density profile such that for central collisions we reproduce within errors the total charged multiplicity density at midrapidity, dN_ch/dη (|η| < 1) = 555 ± 10%, as measured by PHOBOS for Au+Au at √s = 130 A GeV [16]. On the other hand,
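For readers unfamiliar with the observable, v_2 is simply the second Fourier coefficient of the azimuthal distribution relative to the reaction plane, v_2 = <cos 2(φ - Ψ_RP)>. The short sketch below (illustrative only, not the authors' hydrodynamic code) estimates v_2 from a sample of particle azimuths when the reaction-plane angle is known, as it is in a model calculation.

```python
# Illustrative estimate of the elliptic flow coefficient v2 = <cos 2(phi - Psi_RP)>.
import numpy as np

def elliptic_flow_v2(phi, psi_rp=0.0, weights=None):
    """phi: particle azimuthal angles in radians; weights: optional (e.g. pT weights)."""
    phi = np.asarray(phi, dtype=float)
    return np.average(np.cos(2.0 * (phi - psi_rp)), weights=weights)

# Toy check: sample dN/dphi proportional to 1 + 2*v2*cos(2*phi) with v2 = 0.05 and recover it.
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200_000)
accept = rng.uniform(0.0, 1.2, phi.size) < 1.0 + 2 * 0.05 * np.cos(2 * phi)
print(round(elliptic_flow_v2(phi[accept]), 3))   # ~0.05
```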

183 citations


ReportDOI
TL;DR: In this article, the authors examined energy use by office equipment and network equipment at the end of 1999 and classified office equipment into 11 types; for each type they estimated annual energy consumption for residential, commercial, and industrial use by combining estimates of stock, power requirements, usage, and saturation of power management.
Abstract: In spite of the recent explosive growth in the use of office and network equipment, there has been no recent study that estimates in detail how much electricity is consumed by that equipment in the United States. In this study, we examined energy use by office equipment and network equipment at the end of 1999. We classified office equipment into 11 types; for each type we estimated annual energy consumption for residential, commercial, and industrial use by combining estimates of stock, power requirements, usage, and saturation of power management. We also classified network equipment into six types and estimated the annual energy consumption for each type. We found that total direct power use by office and network equipment is about 74 TWh per year, which is about 2% of total electricity use in the U.S. When electricity used by telecommunications equipment and electronics manufacturing is included, that figure rises to 3% of all electricity use (Koomey 2000). More than 70% of the 74 TWh/year is dedicated to office equipment for commercial use. We also found that power management currently saves 23 TWh/year, and complete saturation and proper functioning of power management would achieve additional savings of 17 TWh/year. Furthermore, complete saturation of night shutdown for equipment not required to operate at night would reduce power use by an additional 7 TWh/year. Finally, we compared our current estimate with our 1995 forecast for 1999. We found that the total difference between our current estimate and the previous forecast is less than 15% and identified the factors that led to inaccuracies in the previous forecast. We also conducted a sensitivity analysis of the uncertainties in our current forecast and identified the data sets that have the largest impact on our current estimate of energy use.
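The bottom-up accounting sketched in the abstract multiplies stock by per-unit power and hours in each operating mode, with idle time split between full power and low power according to the saturation of power management. A hedged illustration follows; all numbers are made-up placeholders, not the report's data.

```python
# Illustrative bottom-up estimate in the spirit of the study's method (placeholder inputs only):
# annual energy = stock * sum over modes of (power_W * hours_per_year), converted to TWh/yr.
def annual_twh(stock_millions, active_w, sleep_w, off_w,
               h_active, h_idle, h_off, pm_saturation):
    h_sleep = h_idle * pm_saturation                    # idle hours actually spent in low-power mode
    h_full = h_active + h_idle * (1.0 - pm_saturation)  # idle hours left at full power
    kwh_per_unit = (active_w * h_full + sleep_w * h_sleep + off_w * h_off) / 1000.0
    return stock_millions * 1e6 * kwh_per_unit / 1e9    # TWh per year

print(annual_twh(stock_millions=50, active_w=60, sleep_w=5, off_w=2,
                 h_active=2000, h_idle=2000, h_off=4760, pm_saturation=0.5))   # ~9.7 TWh/yr
```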

162 citations


Patent
TL;DR: In this paper, the authors present methods and devices for the knowledge-based discovery and optimization of differences between cell types, and further provide means to optimize culture conditions for specific subpopulations of cells.
Abstract: The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time: quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.

107 citations


Journal Article
TL;DR: This paper discusses several hardware and software design techniques and issues, and then describes their application to an implementation of an enhanced FTP protocol called GridFTP, and results from two applications obtained at the Supercomputing 2000 conference.
Abstract: Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference.
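Two techniques commonly used in this line of work to fill high bandwidth-delay-product paths are enlarged TCP socket buffers and striping a transfer across parallel streams. The sketch below illustrates the pattern only; the endpoint, request format, and buffer sizes are placeholders, and this is not the paper's GridFTP or distributed-cache implementation.

```python
# Sketch of parallel-stream retrieval with tuned socket buffers (placeholder protocol and endpoint).
import socket
import threading

HOST, PORT = "data.example.org", 2811          # placeholder endpoint, not a real service
SOCK_BUF = 4 * 1024 * 1024                     # ~4 MB receive buffer; tune to bandwidth * round-trip time

def fetch_range(offset, length, out, idx):
    with socket.create_connection((HOST, PORT)) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, SOCK_BUF)
        s.sendall(f"GET {offset} {length}\n".encode())   # toy request format
        buf = bytearray()
        while len(buf) < length:
            chunk = s.recv(1 << 20)
            if not chunk:
                break
            buf.extend(chunk)
        out[idx] = bytes(buf)

def parallel_get(total_len, streams=4):
    out = [b""] * streams
    step = total_len // streams
    threads = []
    for i in range(streams):
        length = total_len - i * step if i == streams - 1 else step
        threads.append(threading.Thread(target=fetch_range, args=(i * step, length, out, i)))
    for t in threads: t.start()
    for t in threads: t.join()
    return b"".join(out)
```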

99 citations


ReportDOI
TL;DR: The work described in this paper was funded by the Assistant Secretary of Energy Efficiency and Renewable Energy of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
Abstract: LBNL-48357, Ernest Orlando Lawrence Berkeley National Laboratory. Community Wind Power Ownership Schemes in Europe and their Relevance to the United States. Mark Bolinger, Lawrence Berkeley National Laboratory, 1 Cyclotron Rd., MS 90-4000, Berkeley, California 94720. Environmental Energy Technologies Division, May 2001. Download from: http://eetd.lbl.gov/EA/EMP/ The work described in this study was funded by the Assistant Secretary of Energy Efficiency and Renewable Energy of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.

94 citations


Journal Article
TL;DR: In this article, a detailed analysis of boiler energy use is presented, and the economic potential for saving energy in U.S. industry is estimated at 18-20% of total boiler energy consumption, resulting in energy savings of approximately 1120-1190 TBtu.
Abstract: Steam systems are a part of almost every major industrial process today. Thirty-seven percent of the fossil fuel burned in US industry is burned to produce steam. In this paper we will establish baseline energy consumption for steam systems. Based on a detailed analysis of boiler energy use we estimate current energy use in boilers in U.S. industry at 6.1 Quads (6.4 EJ), emitting almost 66 MtC in CO2 emissions. We will discuss fuels used and boiler size distribution. We also describe potential savings measures, and estimate the economic energy savings potential in U.S. industry (i.e., having a payback period of 3 years or less). We estimate the nationwide economic potential, based on the evaluation of 16 individual measures in steam generation and distribution. The analysis excludes the efficient use of steam and increased heat recovery. Based on the analysis we estimate the economic potential at 18-20% of total boiler energy use, resulting in energy savings of approximately 1120-1190 TBtu (1180-1260 PJ). This results in a reduction of CO2 emissions equivalent to 12-13 MtC.
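As a quick unit check on the figures quoted above (1 Quad = 1000 TBtu ≈ 1.055 EJ, and 1 TBtu ≈ 1.055 PJ):

```python
# Unit check on the quoted baseline and savings figures (straightforward arithmetic, no new data).
baseline_tbtu = 6.1 * 1000                                  # 6.1 Quads ≈ 6100 TBtu (≈ 6.4 EJ)
savings_tbtu = (1120.0, 1190.0)
print([round(s / baseline_tbtu, 3) for s in savings_tbtu])  # [0.184, 0.195] -> the ~18-20% quoted
print([round(s * 1.055) for s in savings_tbtu])             # [1182, 1255] PJ -> the ~1180-1260 PJ quoted
```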

85 citations


ReportDOI
TL;DR: In this article, the authors provide guidance on the design, development, implementation, maintenance, and evaluation of energy-efficiency labels and standards for appliances, equipment, and lighting products.
Abstract: Energy-performance improvements in consumer products are an essential element in any government's portfolio of energy-efficiency and climate change mitigation programs. Governments need to develop balanced programs, both voluntary and regulatory, that remove cost-ineffective, energy-wasting products from the marketplace and stimulate the development of cost-effective, energy-efficient technology. Energy-efficiency labels and standards for appliances, equipment, and lighting products deserve to be among the first policy tools considered by a country's energy policy makers. The U.S. Agency for International Development (USAID) and the United Nations Foundation (UNF) recognize the need to support policy makers in their efforts to implement energy-efficiency standards and labeling programs and have developed this guidebook, together with the Collaborative Labeling and Appliance Standards Program (CLASP), as a primary reference. This guidebook was prepared over the course of the past year with significant contribution from the authors and reviewers mentioned previously. Their diligent participation has made this the international guidance tool it was intended to be. The lead authors would also like to thank the following individuals for their support in the development, production, and distribution of the guidebook: Marcy Beck, Elisa Derby, Diana Dhunke, Ted Gartner, and Julie Osborn of Lawrence Berkeley National Laboratory as well as Anthony Ma of Bevilacqua-Knight, Inc. This guidebook is designed as a manual for government officials and others around the world responsible for developing, implementing, enforcing, monitoring, and maintaining labeling and standards-setting programs. It discusses the pros and cons of adopting energy-efficiency labels and standards and describes the data, facilities, and institutional and human resources needed for these programs. It provides guidance on the design, development, implementation, maintenance, and evaluation of the programs and on the design of the labels and standards themselves. In addition, it directs the reader to references and other resources likely to be useful in conducting the activities described and includes a chapter on energy policies and programs that complement appliance efficiency labels and standards. This guidebook attempts to reflect the essential framework of labeling and standards programs. It is the intent of the authors and sponsors to distribute copies of this book worldwide at no charge for the general public benefit. The guidebook is also available on the web at www.CLASPonline.org and can be downloaded to be used intact or piecemeal for whatever beneficial purposes readers may conceive.

84 citations


ReportDOI
TL;DR: This report describes test measurements of the behavior of two microturbine generators (MTGs) under transient conditions in three different operating modes: grid-connect; stand-alone single MTG with load banks; and two MTGs running in parallel with load banks.
Abstract: This report describes test measurements of the behavior of two microturbine generators (MTGs) under transient conditions. The tests were conducted under three different operating conditions: grid-connect; stand-alone single MTG with load banks; and two MTGs running in parallel with load banks. Tests were conducted with both the Capstone 30-kW and Honeywell Parallon 75-kW MTGs. All tests were conducted at the Southern California Edison/University of California, Irvine (UCI) test facility. In the grid-connected mode, several test runs were conducted with different set-point changes both up and down, and a start-up and shutdown were recorded for each MTG. For the stand-alone mode, load changes were initiated by changing load-bank values (both watts and VARs). For the parallel mode, tests involved changes in the load-bank settings as well as changes in the power set point of the MTG running in grid-connect mode. Detailed graphs of the test results are presented. It should be noted that these tests were done using a specific hardware and software configuration. Use of different software and hardware could result in different performance characteristics for the same units.

64 citations


Journal ArticleDOI
TL;DR: In this paper, the capacity of fluvial brine-bearing formations to sequester CO2 was investigated using numerical simulations of CO2 injection and storage.
Abstract: The capacity of fluvial brine-bearing formations to sequester CO2 is investigated using numerical simulations of CO2 injection and storage. Capacity is defined as the volume fraction of the subsurface available for CO2 storage and is conceptualized as a product of factors that account for two-phase flow and transport processes, formation geometry, formation heterogeneity, and formation porosity. The space and time domains used to define capacity must be chosen with care to obtain meaningful results, especially when comparing different authors' work. Physical factors that impact capacity include permeability anisotropy and relative permeability to CO2, brine/CO2 density and viscosity ratios, the shape of the trapping structure, formation porosity, and the presence of low-permeability layering.
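The product-of-factors definition of capacity can be written out directly. The sketch below uses made-up factor values purely to illustrate how the individual contributions combine; it is not a result from the paper.

```python
# Capacity as a product of the contributions named above (illustrative placeholder values).
def co2_capacity_fraction(c_flow, c_geometry, c_heterogeneity, porosity):
    """Volume fraction of the subsurface available for CO2 storage."""
    return c_flow * c_geometry * c_heterogeneity * porosity

print(co2_capacity_fraction(c_flow=0.3, c_geometry=0.5, c_heterogeneity=0.6, porosity=0.25))
# -> 0.0225, i.e. about 2% of bulk volume available for storage in this made-up case
```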

63 citations


Journal Article
TL;DR: This paper examines the importing of building geometry into energy performance simulation from the users' point of view; the resulting savings from software interoperability may greatly increase the value obtained from simulation and the number of projects in which energy performance simulation is used, and expedite decision making in the design process.
Abstract: Building geometry is essential to any simulation of building performance. This paper examines the importing of building geometry into simulation of energy performance from the users' point of view. It lists performance requirements for graphic user interfaces that input building geometry, and discusses the basic options in moving from two- to three-dimensional definition of geometry and the ways to import that geometry into energy simulation. The obvious answer lies in software interoperability. With the BLIS group of interoperable software one can interactively import building geometry from CAD into EnergyPlus and dramatically reduce the effort otherwise needed for manual input. The resulting savings may greatly increase the value obtained from simulation and the number of projects in which energy performance simulation is used, and expedite decision making in the design process.

ReportDOI
TL;DR: In this paper, the authors present a review of recent literature on electricity reliability costs and describe three distinct end-use approaches for tracking trends in reliability needs: (1) an analysis of the electricity-use requirements of office equipment in different commercial sectors; (2) an examination of the use of aggregate statistical indicators of industrial electricity use and economic activity to identify high reliability-requirement customer market segments; and (3) a case study of cleanrooms, which is a cross-cutting market segment known to have high reliability requirements.
Abstract: During the past three years, working with more than 150 organizations representing public and private stakeholders, EPRI has developed the Electricity Technology Roadmap. The Roadmap identifies several major strategic challenges that must be successfully addressed to ensure a sustainable future in which electricity continues to play an important role in economic growth. Articulation of these anticipated trends and challenges requires a detailed understanding of the role and importance of reliable electricity in different sectors of the economy. This report is intended to contribute to that understanding by analyzing key aspects of trends in the economic value of electricity reliability in the U.S. economy. We first present a review of recent literature on electricity reliability costs. Next, we describe three distinct end-use approaches for tracking trends in reliability needs: (1) an analysis of the electricity-use requirements of office equipment in different commercial sectors; (2) an examination of the use of aggregate statistical indicators of industrial electricity use and economic activity to identify high reliability-requirement customer market segments; and (3) a case study of cleanrooms, which is a cross-cutting market segment known to have high reliability requirements. Finally, we present insurance industry perspectives on electricity reliability as an example of a financial tool for addressing customers' reliability needs.

Journal Article
TL;DR: Schneider et al. as mentioned in this paper used computed tomography to obtain three-dimensional information about the location of specific proteins in cells, based on tilt series of images acquired in the amplitude contrast mode of the TXM.
Abstract: COMPUTED TOMOGRAPHY OF CRYOGENIC CELLS. G. Schneider and E. Anderson, Center for X-ray Optics, Lawrence Berkeley National Laboratory, One Cyclotron Road MS 2-400, Berkeley, CA 94720, USA; S. Vogt, C. Knochel, and D. Weiss, Institut fur Rontgenphysik, Universitat Gottingen, Geiststrasse 11, D-37073 Gottingen, Germany; M. Legros and C. Larabell, Life Sciences, Lawrence Berkeley National Laboratory, One Cyclotron Road MS 6-2100, Berkeley, CA 94720, USA. Soft X-ray microscopy has resolved 30 nm structures in biological cells. To protect the cells from radiation damage caused by X-rays, imaging of the samples has to be performed at cryogenic temperatures, which makes it possible to take multiple images of a single cell. Due to the small numerical aperture of zone plates, X-ray objectives have a depth of focus on the order of several microns. By treating the X-ray microscopic images as projections of the sample absorption, computed tomography (CT) can be performed. Since cryogenic biological samples are resistant to radiation damage, it is possible to reconstruct frozen-hydrated cells imaged with a full-field X-ray microscope. This approach is used to obtain three-dimensional information about the location of specific proteins in cells. To localize proteins in cells, immunolabelling with strongly X-ray absorbing nanoparticles was performed. With the new tomography setup developed for the X-ray microscope XM-1 installed at the ALS, we have performed tomography of immunolabelled frozen-hydrated cells to detect protein distributions inside of cells. As a first example, the distribution of the nuclear protein male specific lethal 1 (MSL-1) in the Drosophila melanogaster cell was studied. 1. Introduction. The structure of proteins can be studied by X-ray crystallography with atomic resolution, but their location in cells remains unknown. With immunolabelling it is possible to localize these proteins in cells. Up to now light microscopy has mainly been used to study their distribution in cells by tagging the investigated protein with fluorophore-conjugated antibodies. While light microscopes allow routine investigation of whole, unsectioned cells, the obtainable resolution is diffraction limited to about 200 nm. In addition, this technique reveals mainly the distribution of the fluorophore-conjugated antibodies, whereas most unlabelled cell structure is not clearly visualized. Electron microscopy can reveal cell structures at much higher resolution, but is limited by the thickness of the sample, i.e. only objects less than 1 µm thick can be imaged. Therefore, no conventional imaging technique exists which can visualize the three-dimensional distribution of proteins inside whole hydrated cells, e.g. in the cell nucleus, with higher than light microscopical resolution. Due to the shorter wavelengths of X-rays compared with visible light, X-ray microscopy provides higher resolving power than light microscopes. By utilizing the natural absorption contrast between protein and water at photon energies of about 0.5 keV, the smallest cell structures, about 30 nm in size, embedded in vitreous ice can be detected in X-ray microscope images [1-3]. The aim of this work is to apply computed tomography, which has already been demonstrated using artificial samples [4], mineralized sheaths of bacteria [5] and frozen-hydrated algae [6], in order to localize specific proteins and organelles in unsectioned, frozen-hydrated cells. 2.
Lateral Resolution and Depth of Focus. The computed-tomography experiments presented in this work are all based on tilt series of images acquired using the amplitude contrast mode of the TXM. In the amplitude contrast mode, the microscope forms enlarged images of the intrinsic photoelectric absorption contrast of the object. However, the obtained image contrast is influenced both by the condenser illuminating the object and by the imaging X-ray objective. The e-beam written condenser zone plate used for these experiments has an outermost zone width of dr_N = 54 nm [7]. At 2.4 nm wavelength, the numerical aperture is given by NA_cond = λ/(2 dr_N) = 0.0222.
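The quoted numbers check out against the standard zone-plate relations. A small hedged calculation follows; the depth-of-focus scaling used here is the usual approximation, and in the microscope the depth of focus is set by the imaging objective rather than the condenser, so treat the second figure as an order-of-magnitude check only.

```python
# Quick check of the zone-plate figures: NA = lambda / (2 * dr_N),
# and the usual depth-of-focus scaling DOF ~ +/- lambda / (2 * NA**2).
wavelength_nm = 2.4
dr_n_nm = 54.0
na = wavelength_nm / (2.0 * dr_n_nm)
dof_um = wavelength_nm / (2.0 * na * na) / 1000.0
print(round(na, 4), round(dof_um, 1))   # -> 0.0222, ~2.4 µm half depth ("several microns")
```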

ReportDOI
TL;DR: Xu et al. as discussed by the authors performed batch reaction modeling of the geochemical evolution of three different aquifer mineralogies in the presence of CO2 at high pressure, and found that the amount of CO2 that may be sequestered by precipitation of secondary carbonates is comparable with, and can be larger than, the effect of CO2 dissolution in pore waters.
Abstract: Analysis of Mineral Trapping for CO2 Disposal in Deep Aquifers. Tianfu Xu, John A. Apps, and Karsten Pruess, Earth Sciences Division, Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720. CO2 disposal into deep aquifers has been suggested as a potential means whereby atmospheric emissions of greenhouse gases may be reduced. However, our knowledge of the geohydrology, geochemistry, geophysics, and geomechanics of CO2 disposal must be refined if this technology is to be implemented safely, efficiently, and predictably. As a prelude to a fully coupled treatment of physical and chemical effects of CO2 injection, we have analyzed the impact of CO2 immobilization through carbonate precipitation. A survey of all major classes of rock-forming minerals, whose alteration would lead to carbonate precipitation, indicated that very few minerals are present in sufficient quantities in aquifer host rocks to permit significant sequestration of CO2. We performed batch reaction modeling of the geochemical evolution of three different aquifer mineralogies in the presence of CO2 at high pressure. Our modeling considered (1) redox processes that could be important in deep subsurface environments, (2) the presence of organic matter, (3) the kinetics of chemical interactions between the host rock minerals and the aqueous phase, and (4) CO2 solubility dependence on pressure, temperature and salinity of the system. The geochemical evolution under both natural background and CO2 injection conditions was evaluated. In addition, changes in porosity were monitored during the simulations. Results indicate that CO2 sequestration by matrix minerals varies considerably with rock type. Under favorable conditions the amount of CO2 that may be sequestered by precipitation of secondary carbonates is comparable with and can be larger than the effect of CO2 dissolution in pore waters. The precipitation of ankerite and siderite is sensitive to the rate of reduction of ferric mineral precursors such as glauconite, which in turn is dependent on the reactivity of associated organic material. The accumulation of carbonates in the rock matrix and induced rock mineral alteration due to the presence of dissolved CO2 lead to a considerable decrease in porosity. The numerical experiments described here provide useful insight into sequestration mechanisms, and their controlling geochemical conditions and parameters.

Journal Article
TL;DR: This study evaluated different modeling approaches for their applicability to model-based FDD of vapor compression chillers and found that the Gordon and Ng Universal Chiller model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values.
Abstract: Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools(TM), which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
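Why "linear in the parameters" matters: with y = Xβ, the least-squares estimates and the parameter covariance (hence uncertainties) are both available in closed form. The sketch below is generic; X and y are placeholder regressors and response, not the Gordon-Ng chiller variables.

```python
# Generic ordinary-least-squares fit for a model that is linear in its parameters.
import numpy as np

def fit_linear_in_parameters(X, y):
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    sigma2 = float(res[0]) / (n - p) if res.size else np.mean((y - X @ beta) ** 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)        # parameter covariance -> standard errors
    return beta, np.sqrt(np.diag(cov))

# Synthetic demonstration with known coefficients.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 0.5, -0.2]) + 0.01 * rng.normal(size=200)
print(fit_linear_in_parameters(X, y))            # recovers ~[1.0, 0.5, -0.2] with small uncertainties
```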

Proceedings ArticleDOI
TL;DR: The Engineering Test Stand (ETS) as mentioned in this paper is a developmental lithography tool designed to demonstrate full-field EUV imaging and provide data for commercial tool development, and it is configured using a developmental projection system, while fabrication of an improved projection system proceeds in parallel.
Abstract: The Engineering Test Stand (ETS) is a developmental lithography tool designed to demonstrate full-field EUV imaging and provide data for commercial-tool development. In the first phase of integration, currently in progress, the ETS is configured using a developmental projection system, while fabrication of an improved projection system proceeds in parallel. The optics in the second projection system have been fabricated to tighter specifications for improved resolution and reduced flare. The projection system is a 4-mirror, 4x-reduction, ring-field design having a numerical aperture of 0.1, which supports 70 nm resolution at a k1 of 0.52. The illuminator produces 13.4 nm radiation from a laser-produced plasma, directs the radiation onto an arc-shaped field of view, and provides an effective fill factor at the pupil plane of 0.7. The ETS is designed for full-field images in step-and-scan mode using vacuum-compatible, magnetically levitated, scanning stages. This paper describes system performance observed during the first phase of integration, including static resist images of 100 nm isolated and dense features.
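The resolution figure follows from the usual lithographic scaling R = k1 * λ / NA:

```python
# Consistency check of the quoted resolution using the standard scaling R = k1 * lambda / NA.
k1, wavelength_nm, na = 0.52, 13.4, 0.1
print(round(k1 * wavelength_nm / na, 1))   # -> 69.7 nm, i.e. the 70 nm resolution quoted
```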

ReportDOI
TL;DR: The key accomplishments of this year's work were the acquisition of increasingly accurate data on DER technologies, including the development of methods for forecasting cost reductions for these technologies, and the creation of a credible example California µGrid for use in this study and in future work.
Abstract: This report describes work completed for the California Energy Commission (CEC) on the continued development and application of the Distributed Energy Resources Customer Adoption Model (DER-CAM). This work was performed at Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab) between July 2000 and June 2001 under the Consortium for Electric Reliability Technology Solutions (CERTS) Distributed Energy Resources Integration (DERI) project. Our research on distributed energy resources (DER) builds on the concept of the microgrid (µGrid), a semiautonomous grouping of electricity-generating sources and end-use sinks that are placed and operated for the benefit of its members. Although a µGrid can operate independent of the macrogrid (the utility power network), the µGrid is usually interconnected, purchasing energy and ancillary services from the macrogrid. Groups of customers can be aggregated into µGrids by pooling their electrical and other loads, and the most cost-effective combination of generation resources for a particular µGrid can be found. In this study, DER-CAM, an economic model of customer DER adoption implemented in the General Algebraic Modeling System (GAMS) optimization software, is used to find the cost-minimizing combination of on-site generation for customers (individual businesses and a µGrid) in a specified test year. DER-CAM's objective is to minimize the cost of supplying electricity to a specific customer by optimizing the installation of distributed generation and the self-generation of part or all of its electricity. Currently, the model only considers electrical loads, but combined heat and power (CHP) analysis capability is being developed under the second year of CEC funding. The key accomplishments of this year's work were the acquisition of increasingly accurate data on DER technologies, including the development of methods for forecasting cost reductions for these technologies, and the creation of a credible example California µGrid for use in this study and in future work. The work performed during this year demonstrates the viability of DER-CAM and of our approach to analyzing adoption of DER.
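DER-CAM itself is a GAMS model; purely to illustrate the kind of cost minimization involved, here is a toy linear program that picks an on-site generation capacity and per-period dispatch against a grid tariff. All data are invented placeholders, not DER-CAM inputs or results.

```python
# Toy cost-minimization in the spirit of DER-CAM (illustrative placeholder data only).
import numpy as np
from scipy.optimize import linprog

load = np.array([400.0, 900.0, 600.0])       # kW demand in three representative periods
hours = np.array([3000.0, 2000.0, 3760.0])   # hours per year in each period
price = np.array([0.06, 0.15, 0.09])         # $/kWh grid tariff by period
capex = 120.0                                # $/kW-yr annualized DER capital cost
varcost = 0.05                               # $/kWh DER fuel and O&M

T = len(load)
# Decision vector x = [capacity K, self-generation g_1..g_T, grid purchases p_1..p_T]
c = np.concatenate(([capex], varcost * hours, price * hours))
A_ub, b_ub = [], []
for t in range(T):
    meet = np.zeros(1 + 2 * T); meet[1 + t] = -1; meet[1 + T + t] = -1
    A_ub.append(meet); b_ub.append(-load[t])            # g_t + p_t >= load_t
    cap = np.zeros(1 + 2 * T); cap[0] = -1; cap[1 + t] = 1
    A_ub.append(cap); b_ub.append(0.0)                  # g_t <= K
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * (1 + 2 * T))
print(res.x[0], res.fun)   # optimal DER capacity (kW) and total annual cost ($)
```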

Journal Article
TL;DR: T.E. McKone and E.G. Hertwich, Environmental Energy Technologies Division, July 2001; as discussed by the authors, this paper was prepared for publication in the Journal of Life Cycle Assessment, with research supported in part by the U.S. Environmental Protection Agency National Exposure Research Laboratory.
Abstract: LBNL-48254, Ernest Orlando Lawrence Berkeley National Laboratory. The Human Toxicity Potential and a Strategy for Evaluating Model Performance in Life-Cycle Impact Assessment. T.E. McKone and E.G. Hertwich, Environmental Energy Technologies Division, July 2001. This paper was prepared for publication in the Journal of Life Cycle Assessment. Research supported in part by: The U.S. Environmental Protection Agency National Exposure Research Laboratory.

Journal Article
TL;DR: In this paper, Tsutakawa and Cooper discuss the mechanisms and consequences of transcription-coupled repair of oxidative DNA damage in human cells.
Abstract: Transcription-Coupled Repair of Oxidative DNA Damage in Human Cells: Mechanisms and Consequences Susan Tsutakawa and Priscilla K. Cooper* Life Sciences Division, Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720 Running Title: Transcription-Coupled Repair of Oxidative Damage * To whom correspondence should be addressed: Priscilla K. Cooper Life Sciences Division, Mail Stop 74-157 Lawrence Berkeley National Laboratory 1 Cyclotron Road Berkeley, CA 94720 Tel: (510) 486-7346 Fax: (510) 486-6816 Email: pkcooper@lbl.gov

ReportDOI
TL;DR: In this paper, the status and performance of fourteen state renewable energy funds supported by system-benefit charge (SBC) has been summarized and compared with a more detailed recent LBNL report that can be downloaded from the web.
Abstract: As competition in the supply and delivery of electricity has been introduced in the United States, states have sought to ensure the continuation of "public benefits" programs traditionally administered or funded by electric utilities. One of the most popular policy mechanisms for ensuring such continued support has been the system-benefits charge (SBC). This paper summarizes the status and performance of fourteen state renewable energy funds supported by system-benefits charges, and is based on a more detailed recent LBNL report that can be downloaded from the web.

Journal Article
TL;DR: In this paper, the authors describe results from highly charged ion extraction experiments at the Electron Beam Ion Trap (EBIT) facility, which is now operated at Lawrence Berkeley National Laboratory after transfer from Lawrence Livermore National Laboratory.
Abstract: We describe results from highly charged ion extraction experiments at the Electron Beam Ion Trap (EBIT) facility, which is now operated at Lawrence Berkeley National Laboratory after transfer from Lawrence Livermore National Laboratory. Requirements on ion source performance for the application of highly charged ions (e.g., Xe^44+) in surface analysis and materials science are discussed.

ReportDOI
TL;DR: In this article, the authors identify and evaluate sources of data for those factors that affect the transport to and concentration of outdoor particles in the indoor environment and provide a synthesis of the currently available information on building air-leakage parameters and their effect on indoor particulate matter concentrations.
Abstract: The process of characterizing human exposure to particulate matter requires information on both particle concentrations in microenvironments and the time-specific activity budgets of individuals among these microenvironments. Because the average amount of time spent indoors by individuals in the US is estimated to be greater than 75%, accurate characterization of particle concentrations indoors is critical to exposure assessments for the US population. In addition, it is estimated that indoor particle concentrations depend strongly on outdoor concentrations. The spatial and temporal variations of indoor particle concentrations as well as the factors that affect these variations are important to health scientists. For them, knowledge of the factors that control the relationship of indoor particle concentrations to outdoor levels is particularly important. In this report, we identify and evaluate sources of data for those factors that affect the transport to and concentration of outdoor particles in the indoor environment. Concentrations of particles indoors depend upon the fraction of outdoor particles that penetrate through the building shell or are transported via the air handling (HVAC) system, the generation of particles by indoor sources, and the loss mechanisms that occur indoors, such as deposition. To address these issues, we (i) identify and assemble relevant information including the behavior of particles during air leakage, HVAC operations, and particle filtration; (ii) review and evaluate the assembled information to distinguish data that are directly relevant to specific estimates of particle transport from those that are only indirectly useful; and (iii) provide a synthesis of the currently available information on building air-leakage parameters and their effect on indoor particulate matter concentrations.
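A single-zone, steady-state mass balance is commonly used to relate the factors listed above (penetration, air exchange, indoor sources, and deposition). The sketch below uses that standard form with illustrative parameter values, not values from the report.

```python
# Standard single-zone, steady-state indoor/outdoor particle mass balance (illustrative values):
#   C_in = (P * a * C_out + S / V) / (a + k)
# P = penetration factor, a = air-exchange rate (1/h), k = deposition rate (1/h),
# S = indoor source strength (ug/h), V = indoor volume (m3).
def indoor_concentration(c_out, penetration=0.8, air_exchange=0.5,
                         deposition=0.2, source=0.0, volume=300.0):
    return (penetration * air_exchange * c_out + source / volume) / (air_exchange + deposition)

print(round(indoor_concentration(c_out=20.0), 1))   # ~11.4 ug/m3 from outdoor particles alone
```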

Journal Article
TL;DR: The application of model-based performance assessment at the whole building level is explored and the potential of simulation programs to act as ''plug-and-play'' components on building control networks is discussed.
Abstract: The application of model-based performance assessment at the whole building level is explored. The information requirements for a simulation to predict the actual performance of a particular real building, as opposed to estimating the impact of design options, are addressed with particular attention to common sources of input error and important deficiencies in most simulation models. The role of calibrated simulations is discussed. The communication requirements for passive monitoring and active testing are identified and the possibilities for using control system communications protocols to link on-line simulation and energy management and control systems are discussed. The potential of simulation programs to act as ''plug-and-play'' components on building control networks is discussed.

Journal Article
TL;DR: In this article, the authors describe recent Chinese government activities to develop industrial sector energy efficiency targets as a market-based mechanism for improving the energy efficiency of key industrial facilities, and note that the Chinese government is still actively developing an overall policy for energy efficiency in the industrial sector modeled after policies in other industrialized countries.
Abstract: Chinese industrial sector energy-efficiency policy has gone through a number of distinct phases since the founding of the People’s Republic in 1949. An initial period of energy supply growth in the 1950s, 1960s, and 1970s was followed by implementation of significant energy efficiency programs in the 1980s. Many of these programs were dismantled in the 1990s during the continuing move towards a market-based economy. In an effort to once again strengthen energy efficiency, the Chinese government passed the Energy Conservation Law in 1997, which provides broad guidance for the establishment of energy efficiency policies. Article 20 of the Energy Conservation Law requires substantial improvement in industrial energy efficiency in the key energy-consuming industrial facilities in China. This portion of the Law declares that “the State will enhance energy conservation management in key energy-consuming entities.” In 1997, the industrial sector consumed nearly 30 EJ, or 76% of China’s primary energy. Even though primary energy consumption has dropped dramatically in recent years, due mostly to a decline in coal consumption, the Chinese government is still actively developing an overall policy for energy efficiency in the industrial sector modeled after policies in a number of industrialized countries. This paper will describe recent Chinese government activities to develop industrial sector energy-efficiency targets as a “market-based” mechanism for improving the energy efficiency of key industrial facilities.

ReportDOI
TL;DR: In this article, the influence of pavement albedo on air temperature in Los Angeles was investigated through laboratory fabrication and exposure of 32 mixes of concrete, including white and gray-cement concrete.
Abstract: Increasing the solar reflectance (albedo) of a paved surface keeps it cooler in the sun, reducing convection of heat from pavement to air and thereby decreasing the ambient air temperature. Simulations of the influence of pavement albedo on air temperature in Los Angeles predict that increasing the albedo of 1,250 km² of pavement by 0.25 would save cooling energy worth $15M yr⁻¹, and reduce smog-related medical and lost-work expenses by $76M yr⁻¹. Most sidewalks and a small fraction of roads and parking areas are paved with portland cement concrete, which can be made quite reflective through suitable choice of cement and aggregate. Variations with composition and environmental exposure of the albedos of portland cement concrete pavements were investigated through laboratory fabrication and exposure of 32 mixes of concrete. Twenty-four mixes yielded substandard, "rough" concretes due to high, unmet aggregate water demand. The albedos of the remaining eight "smooth" concrete mixes ranged from 0.41 to 0.77 (mean 0.59). Simulated weathering, soiling, and abrasion each reduced average concrete albedo (mean decreases 0.06, 0.05, and 0.19, respectively), though some samples became slightly more reflective through weathering or soiling. Simulated rain (wetting) strongly depressed the albedos of concretes (mean decrease 0.23) until their surfaces were dried. Concrete albedo grew as the cement hydration reaction progressed (mean increase 0.08), but stabilized within six weeks of casting. White-cement concretes were on average significantly more reflective than gray-cement concretes. The albedo of the most-reflective white-cement concrete was 0.18 to 0.39 higher than that of the most-reflective gray-cement concrete, depending on state of exposure. Concrete albedo generally correlated with cement albedo and sand albedo, and, after abrasion, with rock albedo. Cement albedo had a disproportionately strong influence on the reflectance of concrete. Efflorescence and surface carbonation whitened some gray-cement mixes.

ReportDOI
TL;DR: The first comprehensive renewable energy certificate program has been put into place to monitor and track RPS compliance in Texas as discussed by the authors, with new wind projects reportedly contracted for under 3 US¢/kWh, in part as a result of a 1.7 US¢/kWh production tax credit and an outstanding wind resource.
Abstract: Texas has rapidly emerged as one of the leading wind power markets in the United States. This development can be largely traced to a well-designed and carefully implemented renewables portfolio standard (RPS). The RPS is a new policy mechanism that has received increasing attention as an attractive approach to support renewable power generation. Though replacing existing renewable energy policies with an as-yet largely untested approach in the RPS is risky, early experience from Texas suggests that an RPS can effectively spur renewables development and encourage competition among renewable energy producers. Initial RPS targets in Texas will be far exceeded by the end of 2001, with as much as 930 MW of wind slated for installation this year. RPS compliance costs appear negligible, with new wind projects reportedly contracted for under 3 US¢/kWh, in part as a result of a 1.7 US¢/kWh production tax credit, an outstanding wind resource, and an RPS that is sizable enough to drive project economies of scale. Obliged retail suppliers have been willing to enter into long-term contracts with renewable generators, reducing important risks for both the developer and the retail supplier. Finally, the country's first comprehensive renewable energy certificate program has been put into place to monitor and track RPS compliance.

Journal Article
TL;DR: This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts.
Abstract: Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability can also offer solutions to building owners where reliable access to the electric grid is a challenge, in both less-developed countries and in industrialized countries where electric generating capacity has not kept pace with growth. We find that when properly designed and executed as part of a complete building solution, advanced facades can provide solutions to many of these challenges in building design today.

ReportDOI
TL;DR: The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel.
Abstract: The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report, the FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA), and a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate and submit comments. Before determining whether to recommend the Yucca Mountain site, the Secretary will consider public and stakeholder comments, as well as the available technical information.

Journal Article
TL;DR: In this paper, the authors show that the Compton recoil event rate can be reduced significantly by the choice of materials in the environment and dewar and by careful shielding, and that radiation sources inside the dewar can be eliminated by judicious choice of materials.
Abstract: The remarkable sensitivity of depleted silicon to ionizing radiation is a nuisance to astronomers. Cosmic rays degrade images because of struck pixels, leading to modified observing strategies and the development of algorithms to remove the unwanted artifacts. In the new-generation CCDs with thick sensitive regions, cosmic-ray muons make recognizable straight tracks and there is enhanced sensitivity to ambient gamma radiation via Compton-scattered electrons ("worms"). Beta emitters inside the dewar, for example high-potassium glasses such as BK7, also produce worm-like tracks. The cosmic-ray muon rate is irreducible and increases with altitude. The gamma rays are mostly by-products of the U and Th decay chains; these elements always appear as traces in concrete and other materials. The Compton recoil event rate can be reduced significantly by the choice of materials in the environment and dewar and by careful shielding. Telescope domes appear to be significantly cleaner than basement laboratories and Coude spectrograph rooms. Radiation sources inside the dewar can be eliminated by judicious choice of materials. Cosmogenic activation during high-altitude flights does not appear to be a problem. Our conclusions are supported by tests at the Lawrence Berkeley National Laboratory low-level counting facilities in Berkeley and at Oroville, California (180 m underground).

Journal Article
TL;DR: Walker et al. as mentioned in this paper evaluated flow hood measurements for residential register flows; the work was supported by the California Institute for Energy Efficiency, a research unit of the University of California, under Contract No. S9902A through the U.S. Department of Energy.
Abstract: LBNL-47382. Evaluation of Flow Hood Measurements for Residential Register Flows. Walker, I.S., Wray, C.P., Dickerhoff, D.J., and Sherman, M.H., Energy Performance of Buildings Group, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720. This work was supported by the California Institute for Energy Efficiency, a research unit of the University of California, under Contract No. S9902A through the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.