Showing papers by "Lawrence Berkeley National Laboratory" published in 1997


Proceedings ArticleDOI
19 Jan 1997
TL;DR: An improved splice site predictor for the genefinding program Genie is presented; with the new splice site sensors, Genie shows significant improvements in the sensitivity and specificity of gene structure identification.
Abstract: We present an improved splice site predictor for the genefinding program Genie. Genie is based on a generalized Hidden Markov Model (GHMM) that describes the grammar of a legal parse of a multi-exon gene in a DNA sequence. In Genie, probabilities are estimated for gene features by using dynamic programming to combine information from multiple content and signal sensors, including sensors that integrate matches to homologous sequences from a database. One of the hardest problems in genefinding is to determine the complete gene structure correctly. The splice site sensors are the key signal sensors that address this problem. We replaced the existing splice site sensors in Genie with two novel neural networks based on dinucleotide frequencies. Using these novel sensors, Genie shows significant improvements in the sensitivity and specificity of gene structure identification. Experimental results in tests using a standard set of annotated genes showed that Genie identified 86% of coding nucleotides correctly with a specificity of 85%, versus 80% and 84% in the older system. In further splice site experiments, we also looked at correlations between splice site scores and intron and exon lengths, as well as at the effect of distance to the nearest splice site on false positive rates.

1,550 citations
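A minimal sketch of the dinucleotide-frequency idea behind such splice site sensors (illustrative only; Genie's actual sensors are neural networks, and the window layout, frequency tables, and pseudo-probability below are hypothetical):

```python
import math

# Toy position-specific dinucleotide frequencies around a candidate donor
# site. These numbers are invented for illustration; real tables would be
# estimated from annotated training genes.
true_freq = [
    {"AG": 0.40, "GG": 0.15},   # exon-side dinucleotide
    {"GT": 0.99},               # the near-invariant GT donor consensus
    {"AA": 0.35, "GA": 0.25},   # intron-side dinucleotide
]
BACKGROUND = 1.0 / 16           # uniform dinucleotide background

def donor_score(seq, start):
    """Log-odds score of a candidate donor site beginning at `start`."""
    score = 0.0
    for i, table in enumerate(true_freq):
        dinuc = seq[start + 2 * i : start + 2 * i + 2]
        p = table.get(dinuc, 0.001)   # small pseudo-probability for unseen pairs
        score += math.log(p / BACKGROUND)
    return score

print(donor_score("AGGTAA", 0))   # high score: matches the donor consensus
```

A neural network, as in Genie, replaces the independence assumption of this log-odds sum with learned interactions between positions.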


Journal ArticleDOI
TL;DR: In this paper, the α6/β4 heterodimer was found to play a significant role in directing polarity and tissue structure of mammary epithelial cells, suggesting the existence of intimate interactions between different integrin pathways as well as adherens junctions.
Abstract: In a recently developed human breast cancer model, treatment of tumor cells in a 3-dimensional culture with inhibitory β1-integrin antibody or its Fab fragments led to a striking morphological and functional reversion to a normal phenotype. A stimulatory β1-integrin antibody proved to be ineffective. The newly formed reverted acini re-assembled a basement membrane, re-established E-cadherin–catenin complexes, and re-organized their cytoskeletons. At the same time they downregulated cyclin D1, upregulated p21cip,waf-1, and stopped growing. Tumor cells treated with the same antibody and injected into nude mice formed tumors of significantly reduced number and size. The tissue distribution of other integrins was also normalized, suggesting the existence of intimate interactions between the different integrin pathways as well as adherens junctions. On the other hand, nonmalignant cells treated with either α6 or β4 function-altering antibodies continued to grow and had disorganized colony morphologies resembling the untreated tumor colonies. This shows a significant role for the α6/β4 heterodimer in directing polarity and tissue structure. The observed phenotypes were reversible when the cells were disassociated and the antibodies removed. Our results illustrate that the extracellular matrix and its receptors dictate the phenotype of mammary epithelial cells, and thus in this model system the tissue phenotype is dominant over the cellular genotype.

1,329 citations


Journal ArticleDOI
TL;DR: In this paper, the impacts of surface albedo, evapotranspiration, and anthropogenic heating on the near-surface climate are discussed, and numerical simulations and field measurements indicate that increasing vegetation cover can be effective in reducing the surface and air temperatures near the ground.

1,282 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a light-curve width-corrected magnitudes as a function of redshift of distant (z = 0.35-0.46) supernovae to obtain a global measurement of the mass density.
Abstract: We have developed a technique to systematically discover and study high-redshift supernovae that can be used to measure the cosmological parameters. We report here results based on the initial seven of more than 28 supernovae discovered to date in the high-redshift supernova search of the Supernova Cosmology Project. We find an observational dispersion in peak magnitudes of σ_MB = 0.27; this dispersion narrows to σ_MB,corr = 0.19 after correcting the magnitudes using the light-curve width-luminosity relation found for nearby (z ≤ 0.1) Type Ia supernovae from the Calán/Tololo survey (Hamuy et al.). Comparing light-curve width-corrected magnitudes as a function of redshift of our distant (z = 0.35-0.46) supernovae to those of nearby Type Ia supernovae yields a global measurement of the mass density, Ω_M = 0.88 (+0.69, −0.60) for a Λ = 0 cosmology. For a spatially flat universe (i.e., Ω_M + Ω_Λ = 1), we find Ω_M = 0.94 (+0.34, −0.28) or, equivalently, a measurement of the cosmological constant, Ω_Λ = 0.06 (+0.28, −0.34) (< 0.51 at the 95% confidence level). For the more general Friedmann-Lemaître cosmologies with independent Ω_M and Ω_Λ, the results are presented as a confidence region on the Ω_M-Ω_Λ plane. This region does not correspond to a unique value of the deceleration parameter q_0. We present analyses and checks for statistical and systematic errors and also show that our results do not depend on the specifics of the width-luminosity correction. The results for Ω_Λ-versus-Ω_M are inconsistent with Λ-dominated, low-density, flat cosmologies that have been proposed to reconcile the ages of globular cluster stars with higher Hubble constant values.

1,272 citations
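The underlying fit compares corrected peak magnitudes against the standard magnitude-redshift relation; for the spatially flat case this takes the textbook form (standard cosmography, not the paper's exact notation):

\[
m_B^{\mathrm{eff}}(z) = \mathcal{M}_B + 5\log_{10}\!\left[H_0\, d_L(z;\Omega_M,\Omega_\Lambda)\right],
\qquad
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^z \frac{dz'}{\sqrt{\Omega_M (1+z')^3 + \Omega_\Lambda}},
\]

with Ω_M + Ω_Λ = 1, where the script-M term absorbs the absolute magnitude and the Hubble constant, so the fit constrains Ω_M and Ω_Λ without requiring H_0.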


Proceedings ArticleDOI
01 Oct 1997
TL;DR: The prevalence of unusual network events such as out-of-order delivery and packet corruption are characterized and a robust receiver-based algorithm for estimating "bottleneck bandwidth" is discussed that addresses deficiencies discovered in techniques based on "packet pair".
Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.

1,240 citations
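The basic packet-pair computation that the receiver-based estimator refines (a simplified sketch; the paper's actual algorithm forms and clusters many such estimates to cope with noise and multi-channel links, which is not shown here):

```python
def packet_pair_estimates(bytes_per_packet, arrival_times):
    """Basic packet-pair bottleneck bandwidth estimates at the receiver.

    If two back-to-back packets queue at the bottleneck link, their spacing
    on arrival reflects the bottleneck's per-packet transmission time, so
    bandwidth ~ packet size / inter-arrival gap.
    """
    estimates = []
    for t0, t1 in zip(arrival_times, arrival_times[1:]):
        gap = t1 - t0
        if gap > 0:
            estimates.append(bytes_per_packet / gap)
    return estimates

# e.g. 512-byte packets arriving 4 ms apart -> ~128 KB/s bottleneck
print(packet_pair_estimates(512, [0.000, 0.004, 0.008]))
```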


Journal ArticleDOI
TL;DR: An adaptive algorithm is demonstrated that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery, and provides good performance over a wide range of underlying topologies.
Abstract: This paper describes scalable reliable multicast (SRM), a reliable multicast framework for light-weight sessions and application level framing. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The SRM framework has been prototyped in wb, a distributed whiteboard application, which has been used on a global scale with sessions ranging from a few to a few hundred participants. The paper describes the principles that have guided the SRM design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.

1,230 citations
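A sketch of the randomized-timer mechanism underlying SRM's loss recovery (the uniform interval scaled by distance follows the framework described above; the constants and function name here are placeholders, and the adaptive algorithm tunes C1 and C2 from prior recovery events):

```python
import random

def schedule_repair_request(delay_to_source, c1=2.0, c2=2.0):
    """Pick a random backoff before multicasting a repair request.

    Receivers farther from the source wait longer on average, so a nearby
    receiver's request usually suppresses duplicate requests from others.
    """
    return random.uniform(c1 * delay_to_source,
                          (c1 + c2) * delay_to_source)

# A receiver 50 ms from the source waits 100-200 ms before requesting.
print(schedule_repair_request(0.050))
```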


Journal ArticleDOI
TL;DR: Four distinct coding mutations in JAG1 are demonstrated, providing evidence that it is the causal gene for Alagille syndrome and supporting the hypothesis that haploinsufficiency for this gene is one of the mechanisms causing the Alagille syndrome phenotype.
Abstract: Alagille syndrome is an autosomal dominant disorder characterized by abnormal development of liver, heart, skeleton, eye, face and, less frequently, kidney. Analyses of many patients with cytogenetic deletions or rearrangements have mapped the gene to chromosome 20p12, although deletions are found in a relatively small proportion of patients (< 7%). We have mapped the human Jagged1 gene (JAG1), encoding a ligand for the developmentally important Notch transmembrane receptor, to the Alagille syndrome critical region within 20p12. The Notch intercellular signalling pathway has been shown to mediate cell fate decisions during development in invertebrates and vertebrates. We demonstrate four distinct coding mutations in JAG1 from four Alagille syndrome families, providing evidence that it is the causal gene for Alagille syndrome. All four mutations lie within conserved regions of the gene and cause translational frameshifts, resulting in gross alterations of the protein product. Patients with cytogenetically detectable deletions including JAG1 have Alagille syndrome, supporting the hypothesis that haploinsufficiency for this gene is one of the mechanisms causing the Alagille syndrome phenotype.

1,188 citations


Journal ArticleDOI
28 Mar 1997-Science
TL;DR: In this paper, the electrical properties of individual bundles of single-walled carbon nanotubes were measured, and the results are interpreted in terms of single-electron charging and resonant tunneling through the quantized energy levels of the nanotubes composing the rope.
Abstract: The electrical properties of individual bundles, or “ropes,” of single-walled carbon nanotubes have been measured. Below about 10 kelvin, the low-bias conductance was suppressed for voltages less than a few millivolts. In addition, dramatic peaks were observed in the conductance as a function of a gate voltage that modulated the number of electrons in the rope. These results are interpreted in terms of single-electron charging and resonant tunneling through the quantized energy levels of the nanotubes composing the rope.

1,173 citations
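The interpretation rests on the standard conditions for Coulomb blockade in a weakly coupled island (textbook expressions, not values from the paper):

\[
E_C = \frac{e^2}{2C} \gg k_B T, \qquad R_t \gg \frac{h}{e^2} \approx 25.8\ \mathrm{k\Omega},
\]

so below about 10 K transport through the rope is blocked except at gate voltages where the N- and (N+1)-electron states are degenerate, producing the observed conductance peaks.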


Journal ArticleDOI
29 Jul 1997-Nature
TL;DR: In this article, the authors present measurements of electrical transport in a single-electron transistor made from a colloidal nanocrystal of cadmium selenide, which enables the number of charge carriers on the nanocrystal to be tuned directly and so permits the measurement of the energy required for adding successive charge carriers.
Abstract: The techniques of colloidal chemistry permit the routine creation of semiconductor nanocrystals1,2 whose dimensions are much smaller than those that can be realized using lithographic techniques3,4,5,6. The sizes of such nanocrystals can be varied systematically to study quantum size effects or to make novel electronic or optical materials with tailored properties7,8,9. Preliminary studies of both the electrical10,11,12,13 and optical properties14,15,16 of individual nanocrystals have been performed recently. These studies show clearly that a single excess charge on a nanocrystal can markedly influence its properties. Here we present measurements of electrical transport in a single-electron transistor made from a colloidal nanocrystal of cadmium selenide. This device structure enables the number of charge carriers on the nanocrystal to be tuned directly, and so permits the measurement of the energy required for adding successive charge carriers. Such measurements are invaluable in understanding the energy-level spectra of small electronic systems, as has been shown by similar studies of lithographically patterned quantum dots3,4,5,6 and small metallic grains17.

1,127 citations
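The quantity such measurements extract is the addition energy, which in the standard single-electron-transistor picture combines charging and level-spacing contributions (generic expression, not the paper's notation):

\[
E_{\mathrm{add}}(N) = \mu(N+1) - \mu(N) = \frac{e^2}{C} + \Delta\varepsilon_N ,
\]

where Δε_N is the single-particle level spacing of the nanocrystal; the gate-voltage spacings between successive conductance peaks map out E_add as carriers are added one by one.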


Journal ArticleDOI
TL;DR: HyTech is a symbolic model checker for linear hybrid automata, a subclass of hybrid automata that can be analyzed automatically by computing with polyhedral state sets; hybrid automata combine automaton transitions, which capture discrete change, with differential equations, which capture continuous change.
Abstract: A hybrid system consists of a collection of digital programs that interact with each other and with an analog environment. Examples of hybrid systems include medical equipment, manufacturing controllers, automotive controllers, and robots. The formal analysis of the mixed digital-analog nature of these systems requires a model that combines the discrete behavior of computer programs with the continuous behavior of environment variables, such as temperature and pressure. Hybrid automata capture both types of behavior by combining finite automata with differential inclusions (i.e., differential inequalities). HyTech is a symbolic model checker for linear hybrid automata, an expressive, yet automatically analyzable, subclass of hybrid automata. A key feature of HyTech is its ability to perform parametric analysis, i.e., to determine the values of design parameters for which a linear hybrid automaton satisfies a temporal requirement.

1,092 citations
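A toy instance of parametric analysis (hypothetical, for illustration only): consider one mode with a single variable obeying the differential inclusion \(\dot{x} \in [1,2]\), initial condition x = 0, and a design parameter p in the requirement that x < p hold for all times t ≤ 3. The states reachable by time t form the polyhedron t ≤ x ≤ 2t, so by t = 3 the variable can reach at most x = 6; intersecting the reachable set with the violation region x ≥ p and projecting onto the parameter yields p ≤ 6 for violation, i.e., the requirement is satisfied exactly when p > 6. HyTech automates this kind of polyhedral computation over all modes and transitions.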


Journal ArticleDOI
TL;DR: The goal of this paper is to demonstrate that AFM is capable of producing atomic-scale knowledge, and to focus upon some of the contributions of the AFM to nanotribology.
Abstract: A few years after the invention of the scanning tunneling microscope (STM), the atomic force microscope (AFM) was developed. Instead of measuring tunneling current, a new physical quantity could be investigated with atomic-scale resolution: the force between a small tip and a chosen sample surface. This paper reviews progress and recent results obtained with AFM and other closely related techniques in the field of nanotribology, and attempts to point out many of the unresolved questions that remain. The goal of this paper is to demonstrate that AFM is capable of producing atomic-scale knowledge. As such, the authors will focus upon some of the contributions of the AFM to nanotribology. They will almost exclusively discuss results that shed light on the actual atomic and molecular processes taking place, as opposed to the more applied investigations of microscale properties which are also carried out with AFM. They will accompany this discussion by mentioning related theoretical efforts and simulations, although their main emphasis will be upon experimental results and the techniques used to obtain them, as well as suggested future directions. In many ways, AFM techniques for quantitative, fundamental nanotribology are only in a nascent stage; certain key issues, such as force calibration, tip characterization, and the effects of the experimental environment, are not fully resolved or standardized. The authors thus begin with a critical discussion of the relevant technical aspects of using AFM for nanotribology. (289 refs.)

Journal ArticleDOI
TL;DR: In this paper, it was shown that perturbations arising from discretization of the equations of self-gravitational hydrodynamics can grow into fragments in multiple-grid simulations, a process referred to as artificial fragmentation.
Abstract: We demonstrate with a new three-dimensional adaptive mesh refinement code that perturbations arising from discretization of the equations of self-gravitational hydrodynamics can grow into fragments in multiple-grid simulations, a process we term "artificial fragmentation." We present star formation calculations of isothermal collapse of dense molecular cloud cores. In simulations of a Gaussian-profile cloud free of applied perturbations, we find artificial fragmentation can be avoided across the isothermal density regime by ensuring that the ratio of cell size to Jeans length, which we call the Jeans number, J ≡ Δx/λ_J, is kept below 0.25. We refer to the constraint that λ_J be resolved as the Jeans condition. When an m=2 perturbation is included, we again find it necessary to keep J ≤ 0.25 to achieve a converged morphology. Collapse to a filamentary singularity occurs without fragmentation of the filament, in agreement with the predictions of Inutsuka & Miyama. Simulation beyond the time of this singularity requires an arresting agent to slow the runaway density growth. Physically, the breakdown of isothermality due to the buildup of opacity acts as this agent, but many published calculations have instead used artificial viscosity for this purpose. Because artificial viscosity is resolution dependent, such calculations produce resolution-dependent results. In the context of the perturbed Gaussian cloud, we show that use of artificial viscosity to arrest collapse results in significant violation of the Jeans condition. We also show that if the applied perturbation is removed from such a calculation, numerical fluctuations grow to produce substantial fragments not unlike those found when the perturbation is included. These findings indicate that calculations that employ artificial viscosity to halt collapse are susceptible to contamination by artificial fragmentation. The Jeans condition has important implications for numerical studies of isothermal self-gravitational hydrodynamics problems insofar as it is a necessary but not, in general, sufficient condition for convergence.
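For reference, the Jeans length of an isothermal gas with sound speed c_s and density ρ, together with the paper's resolution requirement, is

\[
\lambda_J = \left(\frac{\pi c_s^2}{G\rho}\right)^{1/2}, \qquad J \equiv \frac{\Delta x}{\lambda_J} \le 0.25 .
\]

Since λ_J ∝ ρ^{-1/2}, the condition demands ever-finer grids as collapse proceeds, which is why adaptive mesh refinement is essential here.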

Journal ArticleDOI
TL;DR: A new boundary detection approach for shape modeling that detects the global minimum of an active contour model’s energy between two end points and explores the relation between the maximum curvature along the resulting contour and the potential generated from the image.
Abstract: A new boundary detection approach for shape modeling is presented. It detects the global minimum of an active contour model's energy between two end points. Initialization is made easier and the curve is not trapped at a local minimum by spurious edges. We modify the "snake" energy by including the internal regularization term in the external potential term. Our method is based on finding a path of minimal length in a Riemannian metric. We then make use of a new efficient numerical method to find this shortest path. It is shown that the proposed energy, though based only on a potential integrated along the curve, imposes a regularization effect like snakes. We explore the relation between the maximum curvature along the resulting contour and the potential generated from the image. The method is capable of closing contours, given only one point on the object's boundary, by using a topology-based saddle search routine. We show examples of our method applied to real aerial and medical images.
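A sketch of the formulation (following the minimal-path idea summarized above; the notation here is generic):

\[
E(C) = \int \big( w + P(C(s)) \big)\, ds, \qquad \|\nabla U(p)\| = w + P(p),
\]

where w is the regularization weight, P is the image-derived potential, and U(p) is the minimal energy over all paths joining the first end point to p. Solving the Eikonal equation for U (e.g., by a fast wavefront-propagation scheme) and then descending the gradient of U from the second end point recovers the globally minimal contour.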

Journal ArticleDOI
16 May 1997-Science
TL;DR: The versatility of this technology was demonstrated by an example of selective drug delivery, where cells were decorated with biotin through selective conjugation to ketone groups, and selectively killed in the presence of a ricin A chain-avidin conjugate.
Abstract: Cell surface oligosaccharides can be engineered to display unusual functional groups for the selective chemical remodeling of cell surfaces. An unnatural derivative of N-acetyl-mannosamine, which has a ketone group, was converted to the corresponding sialic acid and incorporated into cell surface oligosaccharides metabolically, resulting in the cell surface display of ketone groups. The ketone group on the cell surface can then be covalently ligated under physiological conditions with molecules carrying a complementary reactive functional group such as the hydrazide. Cell surface reactions of this kind should prove useful in the introduction of new recognition epitopes, such as peptides, oligosaccharides, or small organic molecules, onto cell surfaces and in the subsequent modulation of cell-cell or cell-small molecule binding events. The versatility of this technology was demonstrated by an example of selective drug delivery. Cells were decorated with biotin through selective conjugation to ketone groups, and selectively killed in the presence of a ricin A chain-avidin conjugate.


Journal ArticleDOI
TL;DR: It is demonstrated that inappropriate expression of SL-1 initiates a cascade of events that may represent a coordinated program leading to loss of the differentiated epithelial phenotype and gain of some characteristics of tumor cells.
Abstract: Matrix metalloproteinases (MMPs) regulate ductal morphogenesis, apoptosis, and neoplastic progression in mammary epithelial cells. To elucidate the direct effects of MMPs on mammary epithelium, we generated functionally normal cells expressing an inducible autoactivating stromelysin-1 (SL-1) transgene. Induction of SL-1 expression resulted in cleavage of E-cadherin, and triggered progressive phenotypic conversion characterized by disappearance of E-cadherin and catenins from cell–cell contacts, downregulation of cytokeratins, upregulation of vimentin, induction of keratinocyte growth factor expression and activation, and upregulation of endogenous MMPs. Cells expressing SL-1 were unable to undergo lactogenic differentiation and became invasive. Once initiated, this phenotypic conversion was essentially stable, and progressed even in the absence of continued SL-1 expression. These observations demonstrate that inappropriate expression of SL-1 initiates a cascade of events that may represent a coordinated program leading to loss of the differentiated epithelial phenotype and gain of some characteristics of tumor cells. Our data provide novel insights into how MMPs function in development and neoplastic conversion.

Journal ArticleDOI
03 Jan 1997-Science
TL;DR: In this paper, the earliest events associated with excited-state relaxation in tris-(2,2′-bipyridine)ruthenium(II) were observed to occur in ∼300 femtoseconds after the initial excitation.
Abstract: Time-resolved absorption spectroscopy on the femtosecond time scale has been used to monitor the earliest events associated with excited-state relaxation in tris-(2,2′-bipyridine)ruthenium(II). The data reveal dynamics associated with the temporal evolution of the Franck-Condon state to the lowest energy excited state of this molecule. The process is essentially complete in ∼300 femtoseconds after the initial excitation. This result is discussed with regard to reformulating long-held notions about excited-state relaxation, as well as its implication for the importance of non-equilibrium excited-state processes in understanding and designing molecular-based electron transfer, artificial photosynthetic, and photovoltaic assemblies in which compounds of this class are currently playing a key role.

Journal ArticleDOI
01 Mar 1997
TL;DR: This paper addresses the design of reactive real-time embedded systems by reviewing the variety of approaches to solving the specification, validation, and synthesis problems for such embedded systems.
Abstract: This paper addresses the design of reactive real-time embedded systems. Such systems are often heterogeneous in implementation technologies and design styles, for example by combining hardware application-specific integrated circuits (ASICs) with embedded software. The concurrent design process for such embedded systems involves solving the specification, validation, and synthesis problems. We review the variety of approaches to these problems that have been taken.

Journal ArticleDOI
18 Apr 1997-Science
TL;DR: The kinetics of a first-order, solid-solid phase transition were investigated in the prototypical nanocrystal system CdSe as a function of crystallite size and general rules that may be of use in the discovery of new metastable phases are suggested.
Abstract: The kinetics of a first-order, solid-solid phase transition were investigated in the prototypical nanocrystal system CdSe as a function of crystallite size. In contrast to extended solids, nanocrystals convert from one structure to another by single nucleation events, and the transformations obey simple unimolecular kinetics. Barrier heights were observed to increase with increasing nanocrystal size, although they also depend on the nature of the nanocrystal surface. These results are analogous to magnetic phase transitions in nanocrystals and suggest general rules that may be of use in the discovery of new metastable phases.
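"Simple unimolecular kinetics" here means single-exponential decay of the untransformed fraction with an Arrhenius-activated rate (standard forms; the paper's fitted values are not reproduced here):

\[
n(t) = n_0\, e^{-kt}, \qquad k = \nu\, e^{-E_a / k_B T},
\]

with the barrier E_a observed to grow with nanocrystal size, consistent with transformation by single nucleation events rather than the many independent nucleations typical of extended solids.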

Journal ArticleDOI
TL;DR: In this article, the authors estimate potential annual savings and productivity gains of $6 billion to $19 billion from reduced respiratory disease, allergy and asthma symptoms, sick building symptoms, and worker performance.
Abstract: The existing literature contains strong evidence that characteristics of buildings and indoor environments significantly influence rates of respiratory disease, allergy and asthma symptoms, sick building symptoms, and worker performance. Theoretical considerations, and limited empirical data, suggest that existing technologies and procedures can improve indoor environments in a manner that significantly improves health and productivity. At present, we can develop only crude estimates of the magnitude of productivity gains that may be obtained by providing better indoor environments; however, the projected gains are very large. For the U.S., we estimate potential annual savings and productivity gains of $6 billion to $19 billion from reduced respiratory disease; $1 billion to $4 billion from reduced allergies and asthma; $10 billion to $20 billion from reduced sick building syndrome symptoms; and $12 billion to $125 billion from direct improvements in worker performance that are unrelated to health. Sample calculations indicate that the potential financial benefits of improving indoor environments exceed costs by a factor of 18 to 47. The policy implications of the findings are discussed and include a recommendation for additional research.
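Summing the stated ranges gives the overall scale of the estimate (simple arithmetic on the figures above):

\[
(6 + 1 + 10 + 12)\ \text{to}\ (19 + 4 + 20 + 125)\ \Rightarrow\ \$29\ \text{to}\ \$168\ \text{billion per year in the U.S.}
\]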

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the kinetics of the oxygen reduction reaction (ORR) on Pt(hkl) surfaces in a different manner depending on the electrolyte.
Abstract: The kinetics of the oxygen reduction reaction (ORR) on Pt(hkl) surfaces is found to vary with crystal face in a different manner depending on the electrolyte. In perchloric acid, the variation in activity at 0.8 to 0.9 V is relatively small between the three low index faces, with activity increasing in the order (100) < (110) ≈ (111). A similar structure sensitivity was observed in KOH, increasing in the order (100) < (110) < (111), but with larger differences. In sulfuric acid, the variations in activity with crystal face were much larger, with the difference between the most active and the least active being about two orders of magnitude, and increased in the opposite order (111) << (100) < (110). The variations in activity with crystal face in perchloric acid and KOH arise from the structure sensitive inhibiting effect of adsorbed hydroxyl (OHad), i.e., a strongly inhibiting effect on (100) and smaller effects on (110) and (111). The variations in activity with crystal face in sulfuric acid arise from highly structure specific adsorption of sulfate/bisulfate anions in this electrolyte, which has a strongly inhibiting effect on the (111) surface. The crystallite size effect for the ORR reported for supported Pt catalysts in sulfuric acid at ambient temperature is fully explained by applying our single crystal results to classical models of the variation in particle shape with size.

Journal ArticleDOI
TL;DR: In this article, methods are discussed to determine combinations of masses and of branching ratios precisely from experimentally observable distributions, which can greatly constrain the particular supersymmetric model and determine its parameters with an accuracy of a few percent.
Abstract: If supersymmetry exists at the electroweak scale, then it should be discovered at the CERN Large Hadron Collider (LHC). Determining masses of supersymmetric particles, however, is more difficult. In this paper, methods are discussed to determine combinations of masses and of branching ratios precisely from experimentally observable distributions. In many cases such measurements alone can greatly constrain the particular supersymmetric model and determine its parameters with an accuracy of a few percent. Most of the results shown correspond to one year of running at LHC at "low luminosity," 10^33 cm^-2 s^-1.
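A representative observable of this kind is the dilepton invariant-mass edge from the sequential two-body decay χ̃₂⁰ → ℓ̃ℓ → χ̃₁⁰ℓℓ (standard endpoint formula; whether a given analysis uses this exact channel depends on the model point):

\[
\left(m_{\ell\ell}^{\max}\right)^2 = m_{\tilde\chi_2^0}^2
\left(1 - \frac{m_{\tilde\ell}^2}{m_{\tilde\chi_2^0}^2}\right)
\left(1 - \frac{m_{\tilde\chi_1^0}^2}{m_{\tilde\ell}^2}\right),
\]

so a sharp kinematic endpoint in the ℓ⁺ℓ⁻ mass spectrum pins down this combination of masses to the experimental resolution even when no sparticle mass is known individually.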

Journal ArticleDOI
01 Apr 1997
TL;DR: It is demonstrated that binary decision diagrams are an efficient representation for every special-case matrix in common use, notably sparse matrices, and that complete pivoting is no more difficult over these matrices than partial pivoting.
Abstract: In this paper, we discuss the use of binary decision diagrams to represent general matrices. We demonstrate that binary decision diagrams are an efficient representation for every special-case matrix in common use, notably sparse matrices. In particular, we demonstrate that for any matrix, the BDD representation can be no larger than the corresponding sparse-matrix representation. Further, the BDD representation is often smaller than any other conventional special-case representation: for the n×n Walsh matrix, for example, the BDD representation is of size O(log n). No other special-case representation in common use represents this matrix in space less than O(n²). We describe termwise, row, column, block, and diagonal selection over these matrices, standard and Strassen matrix multiplication, and LU factorization. We demonstrate that the complexity of each of these operations over the BDD representation is no greater than that over any standard representation. Further, we demonstrate that complete pivoting is no more difficult over these matrices than partial pivoting. Finally, we consider an example, the Walsh spectrum of a Boolean function.
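The O(log n) size for the Walsh matrix follows from its recursive tensor-product structure (standard definition):

\[
W_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
W_{2^k} = W_2 \otimes W_{2^{k-1}},
\]

so each of the k = log₂ n recursion levels contributes only a constant number of BDD nodes, whereas any explicit row-column representation needs at least n² entries.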

Journal ArticleDOI
01 Oct 1997
TL;DR: In this paper, a fast Fourier transform method for synthesizing approximate self-similar sample paths for one type of self-similar process, Fractional Gaussian Noise, is presented.
Abstract: Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically self-similar processes instead of traditional Poisson processes [LTWW94, PF95]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths for one type of self-similar process, Fractional Gaussian Noise, and assess its performance and validity. We find that the method is as fast or faster than existing methods and appears to generate close approximations to true self-similar sample paths. We also discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
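A minimal sketch of the frequency-domain synthesis, assuming numpy and substituting the simple low-frequency approximation S(f) ∝ f^(1-2H) for the exact FGN spectrum used in the paper:

```python
import numpy as np

def synthesize_fgn(n, hurst, seed=None):
    """Approximate a fractional Gaussian noise sample path of length n."""
    rng = np.random.default_rng(seed)
    # Positive FFT frequencies 1/n .. 1/2 (the DC term is set to zero below).
    f = np.arange(1, n // 2 + 1) / n
    spectrum = f ** (1.0 - 2.0 * hurst)      # S(f) ~ f^(1-2H) for FGN
    # Randomize: exponentially distributed power, uniformly random phases.
    amplitude = np.sqrt(spectrum * rng.exponential(1.0, size=f.size))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    half = np.concatenate(([0.0], amplitude * np.exp(1j * phase)))
    # irfft enforces the Hermitian symmetry a real-valued series requires.
    x = np.fft.irfft(half, n=n)
    return (x - x.mean()) / x.std()

trace = synthesize_fgn(4096, hurst=0.8, seed=1)   # H = 0.8: long-range dependent
```

Cumulatively summing such a trace gives an approximate self-similar arrival-count process for driving traffic simulations.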

Book ChapterDOI
22 Jun 1997
TL;DR: HyTech is a symbolic model checker for linear hybrid automata, an expressive, yet automatically analyzable, subclass of hybrid automata; a key feature of HyTech is its ability to perform parametric analysis, i.e., to determine the values of design parameters for which a linear hybrid automaton satisfies a temporal requirement.
Abstract: A hybrid system consists of a collection of digital programs that interact with each other and with an analog environment. Examples of hybrid systems include medical equipment, manufacturing controllers, automotive controllers, and robots. The formal analysis of the mixed digital-analog nature of these systems requires a model that combines the discrete behavior of computer programs with the continuous behavior of environment variables, such as temperature and pressure. Hybrid automata capture both types of behavior by combining finite automata with differential inclusions (i.e., differential inequalities). HyTech is a symbolic model checker for linear hybrid automata, an expressive, yet automatically analyzable, subclass of hybrid automata. A key feature of HyTech is its ability to perform parametric analysis, i.e., to determine the values of design parameters for which a linear hybrid automaton satisfies a temporal requirement.

Proceedings ArticleDOI
11 Aug 1997
TL;DR: A framework for precisely specifying the context in which statistical objects are defined is introduced, which uses a three-step process to define normalized statistical objects.
Abstract: The summarizability of OLAP (online analytical processing) and statistical databases is an extremely important property, because violating this condition can lead to erroneous conclusions and decisions. In this paper, we explore the conditions for summarizability. We introduce a framework for precisely specifying the context in which statistical objects are defined. We use a three-step process to define normalized statistical objects. Using this framework, we identify three necessary conditions for summarizability. We provide specific tests for each of the conditions that can be verified either from semantic knowledge or by checking the statistical database itself. We also provide the reasoning for our belief that these three summarizability conditions are sufficient as well.
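An illustrative violation (invented numbers): aggregating over categories that are not disjoint silently double counts, so the sum is not a valid summary.

```python
# Patients by diagnosis; 30 patients carry BOTH diagnoses, so the
# categories overlap and the counts are not summarizable by addition.
by_diagnosis = {"diabetes": 120, "hypertension": 200}
distinct_patients = 290

naive_total = sum(by_diagnosis.values())   # 320: overstates by the overlap
assert naive_total != distinct_patients
```

Conditions of the kind the paper identifies (e.g., disjointness of the grouping categories) are precisely what rule out this kind of roll-up.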

Proceedings ArticleDOI
01 Dec 1997
TL;DR: Two key strategies for developing meaningful simulations in the face of the global Internet data network's great heterogeneity are discussed: searching for invariants and judiciously exploring the simulation parameter space.
Abstract: Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the “mix” of different applications used at a site and the levels of congestion (load) seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a look at a collaborative effort to build a common simulation environment for conducting Internet studies.

Journal ArticleDOI
TL;DR: A systematic evaluation of several autofocus functions used for analytical fluorescent image cytometry studies of counterstained nuclei shows that functions based on correlation measures have the best performance for this type of image.
Abstract: This work describes a systematic evaluation of several autofocus functions used for analytical fluorescent image cytometry studies of counterstained nuclei. Focusing is the first step in the automatic fluorescence in situ hybridization analysis of cells. Thirteen functions have been evaluated using qualitative and quantitative procedures. For the last of these procedures, a figure-of-merit (FOM) is defined and proposed. This new FOM takes into account five important features of the focusing function. Our results show that functions based on correlation measures have the best performance for this type of image.
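One widely used correlation-based focus measure is Vollath's F4 autocorrelation function, sketched here with numpy (the paper's exact set of thirteen functions is not listed above, so this is representative rather than verbatim):

```python
import numpy as np

def vollath_f4(image):
    """Vollath's F4 autocorrelation focus measure.

    In-focus images retain strong pixel correlation at lag 1 relative to
    lag 2, so a larger F4 value indicates better focus.
    """
    img = image.astype(np.float64)
    lag1 = np.sum(img[:-1, :] * img[1:, :])
    lag2 = np.sum(img[:-2, :] * img[2:, :])
    return lag1 - lag2

# Autofocus: pick the z-plane whose image maximizes the measure.
# best_plane = max(image_stack, key=vollath_f4)
```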