
Showing papers by "California Institute of Technology" published in 1999


Journal ArticleDOI
TL;DR: In this paper, the author describes the possibility of simulating physics in the classical approximation, a thing which is usually described by local differential equations, and the possibility of an exact simulation, in which the computer will do exactly the same as nature.
Abstract: This chapter describes the possibility of simulating physics in the classical approximation, a thing which is usually described by local differential equations. But the physical world is quantum mechanical, and therefore the proper problem is the simulation of quantum physics by a computer which will give the same probabilities as the quantum system does. The present theory of physics allows space to go down into infinitesimal distances, wavelengths to get infinitely great, terms to be summed in infinite order, and so forth; and therefore, if this proposition is right, physical law is wrong. Quantum theory and quantizing is a very specific type of theory. The chapter also considers the possibility of an exact simulation, in which the computer will do exactly the same as nature. There are interesting philosophical questions about reasoning, relationship, observation, measurement, and so on, which computers have stimulated people to think about anew, with new types of thinking.

7,202 citations


Journal ArticleDOI
TL;DR: These air- and water-tolerant complexes were shown to exhibit an increased ring-closing metathesis activity at elevated temperature when compared to that of the parent complex 2 and the previously developed complex 3.

3,127 citations


Journal ArticleDOI
11 Jun 1999-Science
TL;DR: A laser cavity formed from a single defect in a two-dimensional photonic crystal is demonstrated and pulsed lasing action has been observed at a wavelength of 1.5 micrometers from optically pumped devices with a substrate temperature of 143 kelvin.
Abstract: A laser cavity formed from a single defect in a two-dimensional photonic crystal is demonstrated. The optical microcavity consists of a half wavelength–thick waveguide for vertical confinement and a two-dimensional photonic crystal mirror for lateral localization. A defect in the photonic crystal is introduced to trap photons inside a volume of 2.5 cubic half-wavelengths, approximately 0.03 cubic micrometers. The laser is fabricated in the indium gallium arsenic phosphide material system, and optical gain is provided by strained quantum wells designed for a peak emission wavelength of 1.55 micrometers at room temperature. Pulsed lasing action has been observed at a wavelength of 1.5 micrometers from optically pumped devices with a substrate temperature of 143 kelvin.

2,310 citations
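As a quick plausibility check on the quoted mode volume (a back-of-the-envelope sketch assuming a refractive index of roughly n ≈ 3.4 for the InGaAsP slab, a value not stated in the abstract), a cubic half-wavelength in the material is (λ/2n)^3, so

```latex
2.5\left(\frac{\lambda}{2n}\right)^{3}
  \approx 2.5\times\left(\frac{1.55\ \mu\mathrm{m}}{2\times 3.4}\right)^{3}
  \approx 2.5\times(0.23\ \mu\mathrm{m})^{3}
  \approx 0.03\ \mu\mathrm{m}^{3},
```

consistent with the 0.03 cubic micrometers quoted above.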


Journal ArticleDOI
04 Mar 1999-Nature
TL;DR: In this article, a class of π-conjugated compounds that exhibit large δ (as high as 1,250 × 10^−50 cm^4 s per photon) and enhanced two-photon sensitivity relative to ultraviolet initiators was developed and used to demonstrate a scheme for three-dimensional data storage which permits fluorescent and refractive readout, and the fabrication of three-dimensional micro-optical and micromechanical structures, including photonic bandgap-type structures.
Abstract: Two-photon excitation provides a means of activating chemical or physical processes with high spatial resolution in three dimensions and has made possible the development of three-dimensional fluorescence imaging [1], optical data storage [2,3] and lithographic microfabrication [4-6]. These applications take advantage of the fact that the two-photon absorption probability depends quadratically on intensity, so under tight-focusing conditions, the absorption is confined at the focus to a volume of order λ^3 (where λ is the laser wavelength). Any subsequent process, such as fluorescence or a photoinduced chemical reaction, is also localized in this small volume. Although three-dimensional data storage and microfabrication have been illustrated using two-photon-initiated polymerization of resins incorporating conventional ultraviolet-absorbing initiators, such photopolymer systems exhibit low photosensitivity as the initiators have small two-photon absorption cross-sections (δ). Consequently, this approach requires high laser power, and its widespread use remains impractical. Here we report on a class of π-conjugated compounds that exhibit large δ (as high as 1,250 × 10^−50 cm^4 s per photon) and enhanced two-photon sensitivity relative to ultraviolet initiators. Two-photon excitable resins based on these new initiators have been developed and used to demonstrate a scheme for three-dimensional data storage which permits fluorescent and refractive read-out, and the fabrication of three-dimensional micro-optical and micromechanical structures, including photonic-bandgap-type structures [7].

1,975 citations


Book ChapterDOI
TL;DR: In this paper, the authors review 74 experiments with no, low, or high performance-based financial incentives and find that the modal result is no effect on mean performance, and that higher incentives often improve performance, typically in judgment tasks that are responsive to better effort.
Abstract: We review 74 experiments with no, low, or high performance-based financial incentives. The modal result is no effect on mean performance (though variance is usually reduced by higher payment). Higher incentive does often improve performance, typically in judgment tasks that are responsive to better effort. Incentives also reduce “presentation” effects (e.g., generosity and risk-seeking). Incentive effects are comparable to effects of other variables, particularly “cognitive capital” and task “production” demands, and interact with those variables, so a narrow-minded focus on incentives alone is misguided. We also note that no replicated study has made rationality violations disappear purely by raising incentives.

1,938 citations


Journal ArticleDOI
TL;DR: A more complete understanding of how to target DNA sites with specificity will lead not only to novel chemotherapeutics but also to a greatly expanded ability for chemists to probe DNA and to develop highly sensitive diagnostic agents.
Abstract: The design of small complexes that bind and react at specific sequences of DNA becomes important as we begin to delineate, on a molecular level, how genetic information is expressed. A more complete understanding of how to target DNA sites with specificity will lead not only to novel chemotherapeutics but also to a greatly expanded ability for chemists to probe DNA and to develop highly sensitive diagnostic agents.

1,769 citations


Journal ArticleDOI
TL;DR: In this paper, an energy-based criterion for selecting significant sidechain pairs was used to identify and evaluate cation-pi interactions in protein structures, and it was clearly demonstrated that, when a cationic sidechain (Lys or Arg) is near an aromatic sidechain (Phe, Tyr, or Trp), the geometry is biased toward one that would experience a favorable cation-pi interaction.
Abstract: Cation-pi interactions in protein structures are identified and evaluated by using an energy-based criterion for selecting significant sidechain pairs. Cation-pi interactions are found to be common among structures in the Protein Data Bank, and it is clearly demonstrated that, when a cationic sidechain (Lys or Arg) is near an aromatic sidechain (Phe, Tyr, or Trp), the geometry is biased toward one that would experience a favorable cation-pi interaction. The sidechain of Arg is more likely than that of Lys to be in a cation-pi interaction. Among the aromatics, a strong bias toward Trp is clear, such that over one-fourth of all tryptophans in the data bank experience an energetically significant cation-pi interaction.

1,753 citations


Journal ArticleDOI
TL;DR: The relations for the dispersion and the group velocity of the photonic band of the CROW's are obtained and it is found that they are solely characterized by coupling factor k₁.
Abstract: We propose a new type of optical waveguide that consists of a sequence of coupled high-Q resonators. Unlike other types of optical waveguide, waveguiding in the coupled-resonator optical waveguide (CROW) is achieved through weak coupling between otherwise localized high-Q optical cavities. Employing a formalism similar to the tight-binding method in solid-state physics, we obtain the relations for the dispersion and the group velocity of the photonic band of the CROW's and find that they are solely characterized by coupling factor k₁. We also demonstrate the possibility of highly efficient nonlinear optical frequency conversion and perfect transmission through bends in CROW's.

1,671 citations
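For orientation, the tight-binding analogy translates into a cosine-shaped photonic band. A generic sketch (not necessarily the paper's exact normalization; Ω is the resonance frequency of an isolated cavity and Λ the spacing between resonators) is

```latex
\omega_K \approx \Omega\left[1 + k_1 \cos(K\Lambda)\right],
\qquad
v_g = \frac{d\omega_K}{dK} \approx -\Omega\Lambda\, k_1 \sin(K\Lambda),
```

so both the bandwidth of the band and the group velocity are set by the single inter-resonator coupling factor k₁, which is the sense in which the dispersion is "solely characterized" by it.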


Proceedings ArticleDOI
01 Jul 1999
TL;DR: Methods to rapidly remove rough features from irregularly triangulated data intended to portray a smooth surface are developed and it is proved that these curvature and Laplacian operators have several mathematically-desirable qualities that improve the appearance of the resulting surface.
Abstract: In this paper, we develop methods to rapidly remove rough features from irregularly triangulated data intended to portray a smooth surface. The main task is to remove undesirable noise and uneven edges while retaining desirable geometric features. The problem arises mainly when creating high-fidelity computer graphics objects using imperfectly-measured data from the real world. Our approach contains three novel features: an implicit integration method to achieve efficiency, stability, and large time-steps; a scale-dependent Laplacian operator to improve the diffusion process; and finally, a robust curvature flow operator that achieves a smoothing of the shape itself, distinct from any parameterization. Additional features of the algorithm include automatic exact volume preservation, and hard and soft constraints on the positions of the points in the mesh. We compare our method to previous operators and related algorithms, and prove that our curvature and Laplacian operators have several mathematically-desirable qualities that improve the appearance of the resulting surface. In consequence, the user can easily select the appropriate operator according to the desired type of fairing. Finally, we provide a series of examples to graphically and numerically demonstrate the quality of our results.

1,651 citations
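To make the implicit-integration idea concrete, here is a minimal Python sketch of one backward-Euler smoothing step with a uniform "umbrella" Laplacian; it illustrates the large-time-step stability the paper exploits but deliberately omits the paper's scale-dependent operator, curvature flow, volume preservation, and constraints. Function and parameter names are ours.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_umbrella_smooth(verts, faces, lam=1.0, dt=1.0):
    """One backward-Euler fairing step: solve (I - dt*lam*L) X_new = X.

    verts : (n, 3) float array of vertex positions
    faces : (m, 3) int array of triangle indices
    Uses the uniform "umbrella" Laplacian L = D^{-1} A - I, where A is the
    vertex adjacency matrix and D the diagonal matrix of vertex degrees.
    """
    n = len(verts)
    # Build a binary vertex adjacency matrix from the triangle edges.
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    rows = np.concatenate([e[:, 0], e[:, 1]])
    cols = np.concatenate([e[:, 1], e[:, 0]])
    adj = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    adj.data[:] = 1.0                                   # collapse duplicate edges
    deg = np.asarray(adj.sum(axis=1)).ravel()
    L = sp.diags(1.0 / np.maximum(deg, 1.0)) @ adj - sp.identity(n)
    # Implicit (backward Euler) step: unconditionally stable, so dt can be large.
    system = (sp.identity(n) - dt * lam * L).tocsc()
    return spla.splu(system).solve(np.asarray(verts, dtype=float))
```

Because the step is implicit, the smoothing amount is controlled by the product dt*lam rather than being limited by a stability bound, which is the efficiency argument made in the abstract.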


Book
01 Jan 1999
TL;DR: Rolf Pfeifer and Christian Scheier provide a systematic introduction to this new way of thinking about intelligence and computers and derive a set of principles and a coherent framework for the study of naturally and artificially intelligent systems, or autonomous agents.
Abstract: From the Publisher: Researchers now agree that intelligence always manifests itself in behavior - thus it is behavior that we must understand. An exciting new field has grown around the study of behavior-based intelligence, also known as embodied cognitive science, "new AI," and "behavior-based AI." Rolf Pfeifer and Christian Scheier provide a systematic introduction to this new way of thinking about intelligence and computers. After discussing concepts and approaches such as subsumption architecture, Braitenberg vehicles, evolutionary robotics, artificial life, self-organization, and learning, the authors derive a set of principles and a coherent framework for the study of naturally and artificially intelligent systems, or autonomous agents. This framework is based on a synthetic methodology whose goal is understanding by designing and building.

1,647 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present initial results of a survey for star-forming galaxies in the redshift range 3.8 ≲ z ≲ 4.5, drawn from a total solid angle of 0.23 deg2 to an apparent magnitude of IAB = 25.0.
Abstract: We present initial results of a survey for star-forming galaxies in the redshift range 3.8 ≲ z ≲ 4.5. This sample consists of a photometric catalog of 244 galaxies culled from a total solid angle of 0.23 deg2 to an apparent magnitude of IAB = 25.0. Spectroscopic redshifts in the range 3.61 ≤ z ≤ 4.81 have been obtained for 48 of these galaxies; their median redshift is z = 4.13. Selecting these galaxies in a manner entirely analogous to our large survey for Lyman-break galaxies at smaller redshift (2.7 ≲ z ≲ 3.4) allows a relatively clean differential comparison between the populations and integrated luminosity density at these two cosmic epochs. Over the same range of UV luminosity, the spectroscopic properties of the galaxy samples at z ~ 4 and z ~ 3 are indistinguishable, as are the luminosity function shapes and the total integrated UV luminosity densities [ρUV(z = 3)/ρUV(z = 4) = 1.1 ± 0.3]. We see no evidence at these bright magnitudes for the steep decline in the star formation density inferred from fainter photometric Lyman-break galaxies in the Hubble Deep Field (HDF). The HDF provides the only existing data on Lyman-break galaxy number densities at fainter magnitudes. We have reanalyzed the z ~ 3 and z ~ 4 Lyman-break galaxies in the HDF using our improved knowledge of the spectral energy distributions of these galaxies, and we find, like previous authors, that faint Lyman-break galaxies appear to be rarer at z ~ 4 than at z ~ 3. This might signal a large change in the faint-end slope of the Lyman-break galaxy luminosity function between redshifts z ~ 3 and z ~ 4 or, more likely, be due to significant variance in the number counts within the small volumes probed by the HDF at high redshifts (~160 times smaller than the ground-based surveys discussed here). If the true luminosity density at z ~ 4 is somewhat higher than implied by the HDF, as our ground-based sample suggests, then the emissivity of star formation as a function of redshift would appear essentially constant for all z > 1 once internally consistent corrections for dust are made. This suggests that there may be no obvious peak in star formation activity and that the onset of substantial star formation in galaxies might occur at z > 4.5.

Journal ArticleDOI
TL;DR: Experience-weighted attraction (EWA) learning, as described in this paper, includes reinforcement learning and belief learning as special cases and hybridizes their key elements, allowing attractions to begin and grow flexibly as choice reinforcement does but reinforcing unchosen strategies substantially as belief-based models implicitly do.
Abstract: In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of strategies that were not chosen according to the payoff they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions and the experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases, and hybridizes their key elements. When δ = 0 and ρ = 0, cumulative choice reinforcement results. When δ = 1 and ρ = φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around 0.50, φ around 0.8–1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
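As a concrete reading of the updating rule described above, here is a minimal Python sketch of one EWA step in the spirit of the attraction/experience-weight recursion; variable names and the logit response are illustrative, and the normalization follows the standard published form rather than anything restated in this abstract.

```python
import numpy as np

def ewa_step(A, N, chosen, payoffs, delta, phi, rho, lam=1.0):
    """One experience-weighted attraction (EWA) update for a single player.

    A       : attractions of each strategy after period t-1
    N       : experience weight after period t-1
    chosen  : index of the strategy actually played in period t
    payoffs : payoff each strategy would have earned against opponents' period-t play
    delta   : weight on hypothetical (foregone) payoffs of unchosen strategies
    phi,rho : discount rates on previous attractions and on the experience weight
    lam     : logit response sensitivity
    """
    N_new = rho * N + 1.0
    hit = np.zeros_like(A)
    hit[chosen] = 1.0
    # Chosen strategy is reinforced by its received payoff; unchosen strategies
    # by delta times the payoff they would have yielded.
    reinforcement = (delta + (1.0 - delta) * hit) * payoffs
    A_new = (phi * N * A + reinforcement) / N_new
    probs = np.exp(lam * A_new)
    probs /= probs.sum()                      # logit choice probabilities
    return A_new, N_new, probs

# delta=0, rho=0 recovers cumulative choice reinforcement; delta=1, rho=phi makes
# attractions equal to expected payoffs under weighted fictitious play beliefs.
```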

Journal ArticleDOI
TL;DR: In this paper, it is proposed that only a small fraction of the gas supplied actually falls on to the black hole, and that the binding energy it releases is transported radially outward by the torque so as to drive away the remainder in the form of a wind.
Abstract: Gas supplied conservatively to a black hole at rates well below the Eddington rate may not be able to radiate effectively and the net energy flux, including the energy transported by the viscous torque, is likely to be close to zero at all radii. This has the consequence that the gas accretes with positive energy so that it may escape. Accordingly, we propose that only a small fraction of the gas supplied actually falls on to the black hole, and that the binding energy it releases is transported radially outward by the torque so as to drive away the remainder in the form of a wind. This is a generalization of and an alternative to an 'ADAF' solution. Some observational implications and possible ways to distinguish these two types of flow are briefly discussed.

Journal ArticleDOI
TL;DR: In this article, a wider-angle lens exposes an imposing image of commonality in market microstructure, showing that quoted spreads, quoted depth, and effective spreads co-move with market and industry-wide liquidity.
Abstract: Traditionally and understandably, the microscope of market microstructure has focused on attributes of single assets. Little theoretical attention and virtually no empirical work has been devoted to common determinants of liquidity nor to their empirical manifestation, correlated movements in liquidity. But a wider-angle lens exposes an imposing image of commonality. Quoted spreads, quoted depth, and effective spreads co-move with market- and industry-wide liquidity. After controlling for well-known individual liquidity determinants such as volatility, volume, and price, common influences remain significant and material. Recognizing the existence of commonality is a key to uncovering some suggestive evidence that inventory risks and asymmetric information both affect intertemporal changes in liquidity.
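A minimal sketch of the kind of "market model of liquidity" regression this result suggests (our illustration, not the paper's exact specification, which also includes controls such as volatility, volume, and price, plus industry liquidity):

```python
import numpy as np

def liquidity_beta(d_liq_stock, d_liq_market):
    """Regress a stock's daily proportional change in a liquidity measure (e.g.,
    its quoted spread) on the concurrent market-wide change; a positive,
    significant slope is the commonality in liquidity being documented."""
    X = np.column_stack([np.ones_like(d_liq_market), d_liq_market])
    coef, *_ = np.linalg.lstsq(X, d_liq_stock, rcond=None)
    return coef[1]          # liquidity "beta" on the market factor
```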

Journal ArticleDOI
TL;DR: A mechanism for stabilizing the size of the extra dimension in the Randall-Sundrum scenario is proposed and the minimum of this potential yields a compactification scale that solves the hierarchy problem without fine-tuning of parameters.
Abstract: We propose a mechanism for stabilizing the size of the extra dimension in the Randall-Sundrum scenario. The potential for the modulus field that sets the size of the fifth dimension is generated by a bulk scalar with quartic interactions localized on the two 3-branes. The minimum of this potential yields a compactification scale that solves the hierarchy problem without fine-tuning of parameters.
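Schematically (a hedged sketch with order-one factors suppressed; m is the bulk scalar's mass, k the curvature scale of the warped geometry, and v_h, v_v the scalar values enforced on the hidden and visible branes by the brane-localized quartic terms), integrating out the scalar profile yields an effective potential for the brane separation r_c whose minimum sits near

```latex
k\,r_c \sim \frac{k^{2}}{m^{2}}\,\ln\!\left(\frac{v_h}{v_v}\right),
```

so a modest logarithm combined with a mildly small m²/k² gives kr_c of order ten, which is what the exponential warp factor needs to turn the Planck scale into the TeV scale without fine-tuning.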

Journal ArticleDOI
TL;DR: In this paper, a three-dimensional finite-deformation cohesive element and a class of irreversible cohesive laws are proposed to track dynamically growing cracks in a drop-weight dynamic fracture test.
Abstract: We develop a three-dimensional finite-deformation cohesive element and a class of irreversible cohesive laws which enable the accurate and efficient tracking of dynamically growing cracks. The cohesive element governs the separation of the crack flanks in accordance with an irreversible cohesive law, eventually leading to the formation of free surfaces, and is compatible with a conventional finite element discretization of the bulk material. The versatility and predictive ability of the method is demonstrated through the simulation of a drop-weight dynamic fracture test similar to those reported by Zehnder and Rosakis. The ability of the method to approximate the experimentally observed crack-tip trajectory is particularly noteworthy.
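To illustrate what "irreversible cohesive law" means in this context, here is a generic linear-softening traction-separation sketch in Python (an illustration only, not the specific class of laws developed in the paper; sigma_c and delta_c are hypothetical material parameters):

```python
def cohesive_traction(opening, max_opening, sigma_c=10.0, delta_c=1.0):
    """Generic irreversible linear-softening cohesive law (illustrative sketch).

    opening     : current crack-flank separation (>= 0)
    max_opening : largest separation reached so far (the damage-like history variable)
    Returns (traction, updated max_opening).
    """
    max_opening = max(max_opening, opening)      # damage never heals: irreversibility
    if max_opening >= delta_c:
        return 0.0, max_opening                  # fully decohered: a free surface forms
    envelope = sigma_c * (1.0 - max_opening / delta_c)
    if opening >= max_opening:                   # loading along the softening envelope
        return envelope, max_opening
    # Unloading/reloading: straight back toward the origin through the damaged state.
    return envelope * opening / max_opening, max_opening
```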

Journal ArticleDOI
TL;DR: The major organic components of smoke particles from biomass burning are monosaccharide derivatives from the breakdown of cellulose, accompanied by generally lesser amounts of straight-chain, aliphatic and oxygenated compounds and terpenoids from vegetation waxes, resins/gums, and other biopolymers.

Journal ArticleDOI
TL;DR: This review is focused on a conserved ubiquitin ligase complex known as SCF that plays a key role in marking a variety of regulatory proteins for destruction by the 26S proteasome.
Abstract: Protein degradation is deployed to modulate the steady-state abundance of proteins and to switch cellular regulatory circuits from one state to another by abrupt elimination of control proteins. In eukaryotes, the bulk of the protein degradation that occurs in the cytoplasm and nucleus is carried out by the 26S proteasome. In turn, most proteins are thought to be targeted to the 26S proteasome by covalent attachment of a multiubiquitin chain. Ubiquitination of proteins requires a multienzyme system. A key component of ubiquitination pathways, the ubiquitin ligase, controls both the specificity and timing of substrate ubiquitination. This review is focused on a conserved ubiquitin ligase complex known as SCF that plays a key role in marking a variety of regulatory proteins for destruction by the 26S proteasome.

Journal ArticleDOI
19 Mar 1999-Science
TL;DR: Genetic analysis indicates that CLV1, which encodes a receptor kinase, acts with CLV3 to control the balance between meristem cell proliferation and differentiation in Arabidopsis thaliana plants with loss-of-function mutations in the CLAVATA genes.
Abstract: In higher plants, organogenesis occurs continuously from self-renewing apical meristems. Arabidopsis thaliana plants with loss-of-function mutations in the CLAVATA (CLV1, 2, and 3) genes have enlarged meristems and generate extra floral organs. Genetic analysis indicates that CLV1, which encodes a receptor kinase, acts with CLV3 to control the balance between meristem cell proliferation and differentiation. CLV3 encodes a small, predicted extracellular protein. CLV3 acts nonautonomously in meristems and is expressed at the meristem surface overlying the CLV1 domain. These proteins may act as a ligand-receptor pair in a signal transduction pathway, coordinating growth between adjacent meristematic regions.

Journal ArticleDOI
26 Aug 1999-Nature
TL;DR: Evidence is provided that ngn3 acts as a pro-endocrine gene and that Notch signalling is critical for the decision between the endocrine and progenitor/exocrine fates in the developing pancreas.
Abstract: The pancreas contains both exocrine and endocrine cells, but the molecular mechanisms controlling the differentiation of these cell types are largely unknown. Despite their endodermal origin, pancreatic endocrine cells share several molecular characteristics with neurons, and, like neurons in the central nervous system, differentiating endocrine cells in the pancreas appear in a scattered fashion within a field of progenitor cells. This indicates that they may be generated by lateral specification through Notch signalling. Here, to test this idea, we analysed pancreas development in mice genetically altered at several steps in the Notch signalling pathway. Mice deficient for Delta-like gene 1 (Dll1) or the intracellular mediator RBP-Jκ showed accelerated differentiation of pancreatic endocrine cells. A similar phenotype was observed in mice over-expressing neurogenin 3 (ngn3) or the intracellular form of Notch3 (ref. 13) (a repressor of Notch signalling). These data provide evidence that ngn3 acts as a pro-endocrine gene and that Notch signalling is critical for the decision between the endocrine and progenitor/exocrine fates in the developing pancreas.

Journal ArticleDOI
TL;DR: A disposable microfabricated fluorescence-activated cell sorter (μFACS) for sorting various biological entities and it is shown that the bacteria are viable after extraction from the sorting device.
Abstract: We have demonstrated a disposable microfabricated fluorescence-activated cell sorter (µFACS) for sorting various biological entities. Compared with conventional FACS machines, the µFACS provides higher sensitivity, no cross-contamination, and lower cost. We have used µFACS chips to obtain substantial enrichment of micron-sized fluorescent bead populations of differing colors. Furthermore, we have separated Escherichia coli cells expressing green fluorescent protein from a background of nonfluorescent E. coli cells and shown that the bacteria are viable after extraction from the sorting device. These sorters can function as stand-alone devices or as components of an integrated microanalytical chip.

Journal ArticleDOI
TL;DR: In this paper, a review of nucleosynthesis in AGB stars outlining the development of theoretical models and their relationship to observations is presented, focusing on the new high-resolution codes with high accuracy.
Abstract: We present a review of nucleosynthesis in AGB stars outlining the development of theoretical models and their relationship to observations. We focus on the new high resolution codes with...

Journal ArticleDOI
Ian Dunham, Nobuyoshi Shimizu, Bruce A. Roe, S. Chissoe, +220 more (15 institutions)
02 Dec 1999-Nature
TL;DR: The sequence of the euchromatic part of human chromosome 22 is reported, which consists of 12 contiguous segments spanning 33.4 megabases, contains at least 545 genes and 134 pseudogenes, and provides the first view of the complex chromosomal landscapes that will be found in the rest of the genome.
Abstract: Knowledge of the complete genomic DNA sequence of an organism allows a systematic approach to defining its genetic components. The genomic sequence provides access to the complete structures of all genes, including those without known function, their control elements, and, by inference, the proteins they encode, as well as all other biologically important sequences. Furthermore, the sequence is a rich and permanent source of information for the design of further biological studies of the organism and for the study of evolution through cross-species sequence comparison. The power of this approach has been amply demonstrated by the determination of the sequences of a number of microbial and model organisms. The next step is to obtain the complete sequence of the entire human genome. Here we report the sequence of the euchromatic part of human chromosome 22. The sequence obtained consists of 12 contiguous segments spanning 33.4 megabases, contains at least 545 genes and 134 pseudogenes, and provides the first view of the complex chromosomal landscapes that will be found in the rest of the genome.

Journal ArticleDOI
TL;DR: A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented in this paper, where the impulse sensitivity functions are used to derive expressions for the jitter.
Abstract: A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented. The impulse sensitivity functions are used to derive expressions for the jitter and phase noise of ring oscillators. The effect of the number of stages, power dissipation, frequency of oscillation, and short-channel effects on the jitter and phase noise of ring oscillators is analyzed. Jitter and phase noise due to substrate and supply noise is discussed, and the effect of symmetry on the upconversion of 1/f noise is demonstrated. Several new design insights are given for low jitter/phase-noise design. Good agreement between theory and measurements is observed.
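For context, the white-noise part of the impulse-sensitivity-function (ISF) framework that this analysis builds on predicts single-sideband phase noise in the 1/f² region of roughly (a sketch of the standard ISF result, with Γ_rms the rms value of the ISF, q_max the maximum charge displacement across the node capacitance, and i_n²/Δf the white current-noise density)

```latex
\mathcal{L}(\Delta\omega) \approx 10\log_{10}\!\left(
  \frac{\Gamma_{\mathrm{rms}}^{2}}{q_{\mathrm{max}}^{2}}
  \cdot \frac{\overline{i_{n}^{2}}/\Delta f}{2\,\Delta\omega^{2}} \right),
```

with the corresponding timing jitter accumulating as the square root of the measurement interval for uncorrelated noise sources; waveform symmetry is what suppresses the upconversion of 1/f noise into the close-in 1/f³ region, as the abstract notes.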

Journal ArticleDOI
TL;DR: In the first 371 sq. deg. of actual 2MASS survey data, the authors identified another twenty such objects, spectroscopically confirmed using the Low Resolution Imaging Spectrograph (LRIS) at the W. M. Keck Observatory.
Abstract: Before the 2-Micron All-Sky Survey (2MASS) began, only six objects were known with spectral types later than M9.5 V. In the first 371 sq. deg. of actual 2MASS survey data, we have identified another twenty such objects spectroscopically confirmed using the Low Resolution Imaging Spectrograph (LRIS) at the W.M. Keck Observatory.

Journal ArticleDOI
TL;DR: In this paper, the Shannon capacity of adaptive transmission techniques in conjunction with diversity-combining was studied. And the authors obtained closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation and channel inversion with fixed rate.
Abstract: We study the Shannon capacity of adaptive transmission techniques in conjunction with diversity-combining. This capacity provides an upper bound on spectral efficiency using these techniques. We obtain closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation, and channel inversion with fixed rate. Optimal power and rate adaptation yields a small increase in capacity over just rate adaptation, and this increase diminishes as the average received carrier-to-noise ratio (CNR) or the number of diversity branches increases. Channel inversion suffers the largest capacity penalty relative to the optimal technique; however, the penalty diminishes with increased diversity. Although diversity yields large capacity gains for all the techniques, the gain is most pronounced with channel inversion. For example, the capacity using channel inversion with two-branch diversity exceeds that of a single-branch system using optimal rate and power adaptation. Since channel inversion is the least complex scheme to implement, there is a tradeoff between complexity and capacity for the various adaptation methods and diversity-combining techniques.
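The flavor of these comparisons can be reproduced numerically. Below is a small Monte Carlo sketch (our illustration; the 10 dB average per-branch CNR is an arbitrary assumed value) comparing constant-power rate adaptation, C/B = E[log2(1+γ)], with fixed-rate channel inversion, C/B = log2(1 + 1/E[1/γ]), for maximal-ratio combining over independent Rayleigh branches:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
gamma_bar = 10 ** (10.0 / 10)          # assumed 10 dB average CNR per branch

for branches in (1, 2):
    # Rayleigh fading: per-branch CNR is exponential; MRC sums the branch CNRs.
    gamma = rng.exponential(gamma_bar, size=(n, branches)).sum(axis=1)
    c_rate_adapt = np.mean(np.log2(1.0 + gamma))             # constant power, adaptive rate
    c_inversion = np.log2(1.0 + 1.0 / np.mean(1.0 / gamma))  # channel inversion, fixed rate
    # With a single branch, E[1/gamma] diverges, so the inversion estimate collapses
    # toward zero capacity; diversity combining is what makes inversion viable.
    print(f"{branches}-branch MRC: {c_rate_adapt:.2f} vs {c_inversion:.2f} b/s/Hz")
```

The sketch uses rate-only adaptation rather than the paper's optimal power-and-rate policy, but it still shows two-branch channel inversion overtaking the single-branch adaptive scheme, in line with the example quoted above.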

Journal ArticleDOI
TL;DR: In the afterglows of several gamma-ray bursts (GRBs), rapid temporal decay that is inconsistent with spherical (isotropic) blast-wave models is observed, as reported in this paper.
Abstract: In the afterglows of several gamma-ray bursts (GRBs), rapid temporal decay, which is inconsistent with spherical (isotropic) blast-wave models, is observed. In particular, GRB 980519 had the most rapidly fading of the well-documented GRB afterglows, with t^(-2.05 ± 0.04) in optical as well as in X-rays. We show that such temporal decay is more consistent with the evolution of a jet after it slows down and spreads laterally, for which t^(-p) decay is expected (where p is the index of the electron energy distribution). Such a beaming model would relax the energy requirements on some of the more extreme GRBs by a factor of several hundred. It is likely that a large fraction of the weak- (or no-) afterglow observations are also due to the common occurrence of beaming in GRBs and that their jets have already transitioned to the spreading phase before the first afterglow observations were made. With this interpretation, a universal value of p ≈ 2.4 is consistent with all data.

Journal ArticleDOI
TL;DR: In this article, a two-stage dilution source sampling system was used to quantify gas and particle-phase tailpipe emissions from late-model medium duty diesel trucks using a two stage dilution sampling system.
Abstract: Gas- and particle-phase tailpipe emissions from late-model medium duty diesel trucks are quantified using a two-stage dilution source sampling system. The diesel trucks are driven through the hot-start Federal Test Procedure (FTP) urban driving cycle on a transient chassis dynamometer. Emission rates of 52 gas-phase volatile hydrocarbons, 67 semivolatile and 28 particle-phase organic compounds, and 26 carbonyls are quantified along with fine particle mass and chemical composition. When all C_1−C_(13) carbonyls are combined, they account for 60% of the gas-phase organic compound mass emissions. Fine particulate matter emission rates and chemical composition are quantified simultaneously by two methods: a denuder/filter/PUF sampler and a traditional filter sampler. Both sampling techniques yield the same elemental carbon emission rate of 56 mg km^(-1) driven, but the particulate organic carbon emission rate determined by the denuder-based sampling technique is found to be 35% lower than the organic carbon mass collected by the traditional filter-based sampling technique due to a positive vapor-phase sorption artifact that affects the traditional filter sampling technique. The distribution of organic compounds in the diesel fuel used in this study is compared to the distribution of these compounds in the vehicle exhaust. Significant enrichment in the ratio of unsubstituted polycyclic aromatic hydrocarbons (PAH) to their methyl- and dimethyl-substituted homologues is observed in the tailpipe emissions relative to the fuel. Isoprenoids and tricyclic terpanes are quantified in the semivolatile organics emitted from diesel vehicles. When used in conjunction with data on the hopanes, steranes, and elemental carbon emitted, the isoprenoids and the tricyclic terpanes may help trace the presence of diesel exhaust in atmospheric samples.

Journal ArticleDOI
TL;DR: In this article, an analysis of phase noise in differential cross-coupled inductance-capacitance (LC) oscillators is presented, and the effect of tail current and tank power dissipation on the voltage amplitude is shown.
Abstract: An analysis of phase noise in differential cross-coupled inductance-capacitance (LC) oscillators is presented. The effect of tail current and tank power dissipation on the voltage amplitude is shown. Various noise sources in the complementary cross-coupled pair are identified, and their effect on phase noise is analyzed. The predictions are in good agreement with measurements over a large range of tail currents and supply voltages. A 1.8 GHz LC oscillator with a phase noise of -121 dBc/Hz at 600 kHz is demonstrated, dissipating 6 mW of power using on-chip spiral inductors.
