scispace - formally typeset

Showing papers by "California Institute of Technology", published in 1995


Book
17 Aug 1995
TL;DR: This paper briefly reviews the history of the relationship between modern optimal control and robust control, with H-infinity theory as its primary focus, and concludes that once-controversial notions of robust control have become thoroughly mainstream while optimal control methods permeate robust control theory.
Abstract: This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper.

6,945 citations


Journal ArticleDOI
TL;DR: The generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more, and a new method is offered that is both easier to implement and produces accurate standard errors.
Abstract: We examine some issues in the estimation of time-series cross-section models, calling into question the conclusions of many published studies, particularly in the field of comparative political economy. We show that the generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more. We also provide an alternative estimator of the standard errors that is correct when the error structures show complications found in this type of model. Monte Carlo analysis shows that these “panel-corrected standard errors” perform well. The utility of our approach is demonstrated via a reanalysis of one “social democratic corporatist” model.

5,670 citations
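The panel-corrected standard errors described in the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' published code: it assumes a balanced panel with rows grouped by unit, estimates the contemporaneous cross-unit covariance from the OLS residuals, and plugs it into a sandwich formula. The function name and interface are invented for the example.

```python
import numpy as np

def ols_with_pcse(X, y, n_units, n_periods):
    """Pooled OLS with panel-corrected standard errors (sketch).

    Assumes a balanced panel: rows of X and y are grouped by unit,
    with n_periods consecutive observations per unit.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    E = resid.reshape(n_units, n_periods)      # unit-by-time residual matrix
    Sigma = (E @ E.T) / n_periods              # contemporaneous cross-unit covariance
    Omega = np.kron(Sigma, np.eye(n_periods))  # same Sigma repeated each period
    XtX_inv = np.linalg.inv(X.T @ X)
    cov = XtX_inv @ X.T @ Omega @ X @ XtX_inv  # sandwich covariance of beta-hat
    return beta, np.sqrt(np.diag(cov))
```

The sandwich keeps the OLS point estimates but replaces the homoskedastic covariance with one that respects panel heteroskedasticity and contemporaneous correlation.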


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the use of standard statistical models for quantal choice in a game-theoretic setting, define a quantal response equilibrium (QRE) as a fixed point of the quantal-response mapping, and establish its existence.

2,469 citations
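The fixed-point definition can be illustrated with a small logit-response iteration. This is a generic sketch of computing a logit QRE, not the paper's own procedure; the precision parameter `lam` and the simultaneous-update iteration are illustrative choices.

```python
import numpy as np

def logit_qre(A, B, lam=1.0, iters=1000, tol=1e-10):
    """Logit quantal response equilibrium of a bimatrix game (sketch).

    A, B: payoff matrices for the row and column player. Each player
    plays a logit ("softmax") best response to the other's mixed
    strategy; a fixed point of this map is a QRE.
    """
    p = np.full(A.shape[0], 1.0 / A.shape[0])  # row player's mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])  # column player's mixed strategy
    for _ in range(iters):
        up = A @ q     # expected payoff of each row pure strategy
        uq = B.T @ p   # expected payoff of each column pure strategy
        p_new = np.exp(lam * (up - up.max())); p_new /= p_new.sum()
        q_new = np.exp(lam * (uq - uq.max())); q_new /= q_new.sum()
        done = max(np.abs(p_new - p).max(), np.abs(q_new - q).max()) < tol
        p, q = p_new, q_new
        if done:
            break
    return p, q
```

For matching pennies the logit QRE coincides with the mixed Nash equilibrium (0.5, 0.5) at any precision, by symmetry.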


Journal ArticleDOI
17 Mar 1995-Science
TL;DR: Long-term potentiation could still be elicited in slices previously potentiated by exposure to the neurotrophic factors, which implies that these two forms of plasticity may use at least partially independent cellular mechanisms.
Abstract: The neurotrophins are signaling factors important for the differentiation and survival of distinct neuronal populations during development. To test whether the neurotrophins also function in the mature nervous system, the effects of brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), and neurotrophic factor 3 (NT-3) on the strength of synaptic transmission in hippocampal slices were determined. Application of BDNF or NT-3 produced a dramatic and sustained (2 to 3 hours) enhancement of synaptic strength at the Schaffer collateral-CA1 synapses; NGF was without significant effect. The enhancement was blocked by K252a, an inhibitor of receptor tyrosine kinases. BDNF and NT-3 decreased paired-pulse facilitation, which is consistent with a possible presynaptic modification. Long-term potentiation could still be elicited in slices previously potentiated by exposure to the neurotrophic factors, which implies that these two forms of plasticity may use at least partially independent cellular mechanisms.

1,371 citations



Journal ArticleDOI
TL;DR: Measurements of the birefringence of a single atom strongly coupled to a high-finesse optical resonator are reported, with nonlinear phase shifts observed for an intracavity photon number much less than one.
Abstract: Measurements of the birefringence of a single atom strongly coupled to a high-finesse optical resonator are reported, with nonlinear phase shifts observed for an intracavity photon number much less than one. A proposal to utilize the measured conditional phase shifts for implementing quantum logic via a quantum-phase gate (QPG) is considered. Within the context of a simple model for the field transformation, the parameters of the "truth table" for the QPG are determined.

1,189 citations


Journal ArticleDOI
06 Jul 1995-Nature
TL;DR: In this paper, a computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons.
Abstract: A computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons. The comparison of patterns over sets of analogue variables is done by a network using different delays for different information paths. This mode of computation explains how one scheme of neuroarchitecture can be used for very different sensory modalities and seemingly different computations. The oscillations and anatomy of the mammalian olfactory systems have a simple interpretation in terms of this representation, and relate to processing in the auditory system. Single-electrode recording would not detect such neural computing. Recognition 'units' in this style respond more like radial basis function units than elementary sigmoid units.

1,141 citations


Journal ArticleDOI
03 Mar 1995-Science
TL;DR: NRSF represents the first example of a vertebrate silencer protein that potentially regulates a large battery of cell type-specific genes, and therefore may function as a master negative regulator of neurogenesis.
Abstract: The neuron-restrictive silencer factor (NRSF) binds a DNA sequence element, called the neuron-restrictive silencer element (NRSE), that represses neuronal gene transcription in nonneuronal cells. Consensus NRSEs have been identified in 18 neuron-specific genes. Complementary DNA clones encoding a functional fragment of NRSF were isolated and found to encode a novel protein containing eight noncanonical zinc fingers. Expression of NRSF mRNA was detected in most nonneuronal tissues at several developmental stages. In the nervous system, NRSF mRNA was detected in undifferentiated neuronal progenitors, but not in differentiated neurons. NRSF represents the first example of a vertebrate silencer protein that potentially regulates a large battery of cell type-specific genes, and therefore may function as a master negative regulator of neurogenesis.

1,061 citations


Journal ArticleDOI
TL;DR: An empirical algorithm for the retrieval of soil moisture content and surface root mean square (RMS) height from remotely sensed radar data was developed using scatterometer data; inversion results indicate that significant amounts of vegetation cause the algorithm to underestimate soil moisture and overestimate RMS height.
Abstract: An empirical algorithm for the retrieval of soil moisture content and surface root mean square (RMS) height from remotely sensed radar data was developed using scatterometer data. The algorithm is optimized for bare surfaces and requires two copolarized channels at a frequency between 1.5 and 11 GHz. It gives best results for kh ≤ 2.5, m_v ≤ 35%, and θ ≥ 30°. Omitting the usually weaker hv-polarized returns makes the algorithm less sensitive to system cross-talk and system noise, simplifies the calibration process, and adds robustness to the algorithm in the presence of vegetation. However, inversion results indicate that significant amounts of vegetation (NDVI > 0.4) cause the algorithm to underestimate soil moisture and overestimate RMS height. A simple criterion based on the σ⁰_hv/σ⁰_vv ratio is developed to select the areas where the inversion is not impaired by the vegetation. The inversion accuracy is assessed both on the original scatterometer data sets and on several SAR data sets by comparing the derived soil moisture values with in-situ measurements collected over a variety of scenes between 1991 and 1994. Both spaceborne (SIR-C) and airborne (AIRSAR) data are used in the test. Over this large sample of conditions, the RMS error in the soil moisture estimate is found to be less than 4.2% soil moisture.

1,054 citations


Journal ArticleDOI
TL;DR: Recombinant polymers that combine the beneficial aspects of natural polymers with many of the desirable features of synthetic polymers have been designed, produced, and described.
Abstract: Biomaterials play a pivotal role in the field of tissue engineering. Biomimetic synthetic polymers have been created to elicit specific cellular functions and to direct cell-cell interactions both in implants that are initially cell-free, which may serve as matrices to conduct tissue regeneration, and in implants to support cell transplantation. Biomimetic approaches have been based on polymers endowed with bioadhesive receptor-binding peptides and mono- and oligosaccharides. These materials have been patterned in two and three dimensions to generate model multicellular tissue architectures, and this approach may be useful in future efforts to generate complex organizations of multiple cell types. Natural polymers have also played an important role in these efforts, and recombinant polymers that combine the beneficial aspects of natural polymers with many of the desirable features of synthetic polymers have been designed and produced. Biomaterials have been employed to conduct and accelerate otherwise naturally occurring phenomena, such as tissue regeneration in wound healing in the otherwise healthy subject; to induce cellular responses that might not be normally present, such as healing in a diseased subject or the generation of a new vascular bed to receive a subsequent cell transplant; and to block natural phenomena, such as the immune rejection of cell transplants from other species or the transmission of growth factor signals that stimulate scar formation. This review introduces the biomaterials and describes their application in the engineering of new tissues and the manipulation of tissue responses.

1,041 citations


23 Jan 1995
TL;DR: In this article, the authors proposed a method to map apparent target abundances in the presence of an arbitrary and unknown spectrally mixed background, which allows the target materials to be present in abundances that drive significant portions of scene covariance.
Abstract: A complete spectral unmixing of a complicated AVIRIS scene may not always be possible or even desired. High quality data of spectrally complex areas are very high dimensional and are consequently difficult to fully unravel. Partial unmixing provides a method of solving only that fraction of the data inversion problem that directly relates to the specific goals of the investigation. Many applications of imaging spectrometry can be cast in the form of the following question: 'Are my target signatures present in the scene, and if so, how much of each target material is present in each pixel?' This is a partial unmixing problem. The number of unmixing endmembers is one greater than the number of spectrally defined target materials. The one additional endmember can be thought of as the composite of all the other scene materials, or 'everything else'. Several workers have proposed partial unmixing schemes for imaging spectrometry data, but each has significant limitations for operational application. The low probability detection methods described by Farrand and Harsanyi and the foreground-background method of Smith et al are both examples of such partial unmixing strategies. The new method presented here builds on these innovative analysis concepts, combining their different positive attributes while attempting to circumvent their limitations. This new method partially unmixes AVIRIS data, mapping apparent target abundances, in the presence of an arbitrary and unknown spectrally mixed background. It permits the target materials to be present in abundances that drive significant portions of the scene covariance. Furthermore it does not require a priori knowledge of the background material spectral signatures. The challenge is to find the proper projection of the data that hides the background variance while simultaneously maximizing the variance amongst the targets.
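The "proper projection" idea can be illustrated with the generic matched-filter formulation that related partial-unmixing methods build on. This is a hedged sketch, not the exact algorithm of the abstract: the filter here uses the whole-scene mean and covariance as the background statistics, and the function name and regularization are invented for the example.

```python
import numpy as np

def matched_filter_scores(pixels, target):
    """Generic matched-filter abundance scores (illustrative sketch).

    pixels: (n_pixels, n_bands) scene; target: (n_bands,) spectrum.
    The filter w is normalized so a pixel at the scene mean scores 0
    and a pure target pixel scores 1, while the inverse covariance
    down-weights directions of high background variance.
    """
    mu = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False)
    Ci = np.linalg.inv(C + 1e-9 * np.eye(C.shape[0]))  # small ridge for stability
    d = target - mu
    w = Ci @ d / (d @ Ci @ d)
    return (pixels - mu) @ w
```

The score is a single projection per target, so no a priori knowledge of the background endmember spectra is required, in the spirit of the partial-unmixing question posed above.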

Proceedings ArticleDOI
01 Oct 1995
TL;DR: This paper provides a plausible physical explanation for the occurrence of self-similarity in high-speed network traffic, based on convergence results for processes that exhibit high variability and supported by detailed statistical analyses of real-time traffic measurements from Ethernet LANs at the level of individual sources.
Abstract: A number of recent empirical studies of traffic measurements from a variety of working packet networks have convincingly demonstrated that actual network traffic is self-similar or long-range dependent in nature (i.e., bursty over a wide range of time scales), in sharp contrast to commonly made traffic modeling assumptions. In this paper, we provide a plausible physical explanation for the occurrence of self-similarity in high-speed network traffic. Our explanation is based on convergence results for processes that exhibit high variability (i.e., infinite variance) and is supported by detailed statistical analyses of real-time traffic measurements from Ethernet LANs at the level of individual sources. Our key mathematical result states that the superposition of many ON/OFF sources (also known as packet trains) whose ON-periods and OFF-periods exhibit the Noah Effect (i.e., have high variability or infinite variance) produces aggregate network traffic that features the Joseph Effect (i.e., is self-similar or long-range dependent). There is, moreover, a simple relation between the parameters describing the intensities of the Noah Effect (high variability) and the Joseph Effect (self-similarity). An extensive statistical analysis of two sets of high time-resolution traffic measurements from two Ethernet LANs (involving a few hundred active source-destination pairs) confirms that the data at the level of individual sources or source-destination pairs are consistent with the Noah Effect. We also discuss implications of this simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic for (i) parsimonious traffic modeling, (ii) efficient synthetic generation of realistic traffic patterns, and (iii) relevant network performance and protocol analysis.
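The ON/OFF construction is easy to simulate. The sketch below superposes sources whose ON and OFF durations are Pareto distributed (infinite variance for alpha < 2, i.e. the Noah Effect) and bins the aggregate load over time; all parameters are illustrative, and no statistical test for long-range dependence is attempted here.

```python
import random

def onoff_aggregate(n_sources, horizon, alpha=1.2, bins=100, seed=1):
    """Superpose ON/OFF sources with heavy-tailed period lengths (sketch).

    ON and OFF durations are Pareto(alpha); for alpha < 2 they have
    infinite variance. Returns the aggregate ON-time accumulated in
    each of `bins` equal-width time bins over [0, horizon].
    """
    rng = random.Random(seed)
    width = horizon / bins
    load = [0.0] * bins
    for _ in range(n_sources):
        t, on = 0.0, rng.random() < 0.5       # random initial state
        while t < horizon:
            dur = rng.paretovariate(alpha)
            if on:
                # credit this ON interval's overlap to each bin it touches
                start, end = t, min(t + dur, horizon)
                first = int(start // width)
                last = min(int(end // width), bins - 1)
                for b in range(first, last + 1):
                    lo, hi = b * width, (b + 1) * width
                    load[b] += max(0.0, min(end, hi) - max(start, lo))
            t += dur
            on = not on
    return load
```

Plotting such a series at several aggregation levels shows the burstiness persisting across time scales, which is the qualitative signature of the Joseph Effect.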

Journal ArticleDOI
TL;DR: Analysis of predicted amino acid sequences of these genes revealed strong conservation of both primary and secondary structure, suggesting that the particulate methane monooxygenase and ammonia mono Oxygenase are evolutionarily related enzymes despite their different physiological roles in these bacteria.
Abstract: Genes encoding particulate methane monooxygenase and ammonia monooxygenase share high sequence identity. Degenerate oligonucleotide primers were designed, based on regions of shared amino acid sequence between the 27-kDa polypeptides, which are believed to contain the active sites, of particulate methane monooxygenase and ammonia monooxygenase. A 525-bp internal DNA fragment of the genes encoding these polypeptides (pmoA and amoA) from a variety of methanotrophic and nitrifying bacteria was amplified by PCR, cloned and sequenced. Representatives of each of the phylogenetic groups of both methanotrophs (α- and γ-Proteobacteria) and ammonia-oxidizing nitrifying bacteria (β- and γ-Proteobacteria) were included. Analysis of the predicted amino acid sequences of these genes revealed strong conservation of both primary and secondary structure. Nitrosococcus oceanus AmoA showed higher identity to PmoA sequences from other members of the γ-Proteobacteria than to other AmoA sequences. These results suggest that the particulate methane monooxygenase and ammonia monooxygenase are evolutionarily related enzymes despite their different physiological roles in these bacteria.

Journal ArticleDOI
11 May 1995-Nature
TL;DR: The neuroanatomy of the macaque monkey suggests that, although primates may be aware of neural activity in other visual cortical areas, they are not directly aware of that in area V1 of the neocortex.
Abstract: It is usually assumed that people are visually aware of at least some of the neuronal activity in the primary visual area, V1, of the neocortex. But the neuroanatomy of the macaque monkey suggests that, although primates may be aware of neural activity in other visual cortical areas, they are not directly aware of that in area V1. There is some psychophysical evidence in humans that supports this hypothesis.

Journal ArticleDOI
TL;DR: It is shown that a Lys → Arg conversion at either position 29 or position 48 in the fusion's Ub moiety greatly reduces ubiquitination and degradation of Ub fusions to β-galactosidase, and that structurally different multi-Ub chains have distinct functions in Ub-dependent protein degradation.

Journal ArticleDOI
TL;DR: A novel method, EVENODD, for tolerating up to two disk failures in RAID architectures using only exclusive-OR computations; unlike schemes based on Reed-Solomon error-correcting codes it requires no finite-field arithmetic, and it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording.
Abstract: We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.
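The encoding half of the scheme is simple enough to sketch; the two-erasure decoding algorithm is in the paper and not reproduced here. Assuming a (p−1) × p data array with p prime, the two redundant columns are computed with XORs only:

```python
import numpy as np

def evenodd_encode(D):
    """EVENODD parity computation (encoding sketch, XORs only).

    D is a (p-1) x p array of bits, p prime: one symbol per data disk
    per row. Returns the row-parity column and the diagonal-parity
    column; the diagonal parity folds in the adjuster S computed from
    the diagonal that carries no stored parity.
    """
    rows, p = D.shape
    assert rows == p - 1, "expected a (p-1) x p data array"
    A = np.vstack([D, np.zeros((1, p), dtype=D.dtype)])  # imaginary all-zero row p-1
    row_parity = np.bitwise_xor.reduce(D, axis=1)
    S = 0                                    # adjuster: XOR along one diagonal
    for j in range(1, p):
        S ^= A[p - 1 - j, j]
    diag_parity = np.empty(p - 1, dtype=D.dtype)
    for i in range(p - 1):
        acc = S
        for j in range(p):
            acc ^= A[(i - j) % p, j]         # XOR along diagonal i
        diag_parity[i] = acc
    return row_parity, diag_parity
```

Recovering a single lost data column needs only the row parity (XOR the parity with the surviving data columns); recovering two lost columns requires the full decoding procedure from the paper.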

Journal ArticleDOI
30 Nov 1995-Nature
TL;DR: In this article, the authors reported the discovery of a probable companion to the nearby star Gl 229, with no more than one-tenth the luminosity of the least luminous hydrogen-burning star.
Abstract: Brown dwarfs are star-like objects with masses less than 0.08 times that of the Sun, which are unable to sustain hydrogen fusion in their interiors. They are very hard to detect, as most of the energy of gravitational contraction is radiated away within ~10^8 yr, leaving only a very low residual luminosity. Accordingly, almost all searches for brown dwarfs have been directed towards clusters of young stars, a strategy that has recently proved successful. But there are only modest observable differences between young brown dwarfs and very low-mass stars, making it difficult to identify the former without appealing to sophisticated models. Older brown dwarfs should have a more distinctive appearance, and if they are companions to nearby stars, their luminosity can be determined unambiguously. Here we report the discovery of a probable companion to the nearby star Gl 229, with no more than one-tenth the luminosity of the least luminous hydrogen-burning star. We conclude that the companion, Gl 229B, is a brown dwarf with a temperature of less than 1,200 K, and a mass ~20–50 times that of Jupiter.

Journal ArticleDOI
TL;DR: In this article, the authors simulated a Mw7.0 earthquake on a blind-thrust fault and found that flexible frame and base-isolated buildings would experience severe nonlinear behavior including the possibility of collapse at some locations.
Abstract: Occurrence of large earthquakes close to cities in California is inevitable. The resulting ground shaking will subject buildings in the near-source region to large, rapid displacement pulses which are not represented in design codes. The simulated Mw7.0 earthquake on a blind-thrust fault used in this study produces peak ground displacement and velocity of 200 cm and 180 cm/sec, respectively. Over an area of several hundred square kilometers in the near-source region, flexible frame and base-isolated buildings would experience severe nonlinear behavior including the possibility of collapse at some locations. The susceptibility of welded connections to fracture significantly increases the collapse potential of steel-frame buildings under strong ground motions of the type resulting from the Mw7.0 simulation. Because collapse of a building depends on many factors which are poorly understood, the results presented here regarding collapse should be interpreted carefully.

Journal ArticleDOI
TL;DR: In this article, the effects on neutrino fluxes of each change in the input physics are evaluated separately by constructing a series of solar models with one additional improvement added at each stage.
Abstract: Helium and heavy-element diffusion are both included in precise calculations of solar models. In addition, improvements in the input data for solar interior models are described for nuclear reaction rates, the solar luminosity, the solar age, heavy-element abundances, radiative opacities, helium and metal diffusion rates, and neutrino interaction cross sections. The effects on the neutrino fluxes of each change in the input physics are evaluated separately by constructing a series of solar models with one additional improvement added at each stage. The effective 1 σ uncertainties in the individual input quantities are estimated and used to evaluate the uncertainties in the calculated neutrino fluxes and the calculated event rates for solar neutrino experiments. The calculated neutrino event rates, including all of the improvements, are 9.3 (+1.2/−1.4) SNU for the 37Cl experiment and 137 (+8/−7) SNU for the 71Ga experiments. The calculated flux of 7Be neutrinos is 5.1 (1.00 +0.06/−0.07)×10^9 cm^-2 s^-1 and the flux of 8B neutrinos is 6.6 (1.00 +0.14/−0.17)×10^6 cm^-2 s^-1. The primordial helium abundance found for this model is Y=0.278. The present-day surface abundance of the model is Ys=0.247, in agreement with the helioseismological measurement of Ys=0.242±0.003 determined by Hernandez and Christensen-Dalsgaard (1994). The computed depth of the convective zone is R=0.712R⊙, in agreement with the observed value determined from p-mode oscillation data of R=0.713±0.003R⊙ found by Christensen-Dalsgaard et al. (1991). Although the present results increase the predicted event rate in the four operating solar neutrino experiments by almost 1 σ (theoretical uncertainty), they only slightly increase the difficulty of explaining the existing experiments with standard physics (i.e., by assuming that nothing happens to the neutrinos after they are created in the center of the sun).
For an extreme model in which all diffusion (helium and heavy-element diffusion) is neglected, the event rates are 7.0 (+0.9/−1.0) SNU for the 37Cl experiment and 126 (+6/−6) SNU for the 71Ga experiments, while the 7Be and 8B neutrino fluxes are, respectively, 4.5 (1.00 +0.06/−0.07)×10^9 cm^-2 s^-1 and 4.9 (1.00 +0.14/−0.17)×10^6 cm^-2 s^-1. For the no-diffusion model, the computed value of the depth of the convective zone is R=0.726R⊙, which disagrees with the observed helioseismological value. The calculated surface abundance of helium, Ys=0.268, is also in disagreement with the p-mode measurement. The authors conclude that helioseismology provides strong evidence for element diffusion and therefore for the somewhat larger solar neutrino event rates calculated in this paper.

Journal ArticleDOI
06 Oct 1995-Science
TL;DR: Immunodepletion studies suggested that Myt1 is the predominant threonine-14-specific kinase in Xenopus egg extracts, suggesting that this relative of Wee1 plays a role in mitotic control.
Abstract: Cdc2 is the cyclin-dependent kinase that controls entry of cells into mitosis. Phosphorylation of Cdc2 on threonine-14 and tyrosine-15 inhibits the activity of the enzyme and prevents premature initiation of mitosis. Although Wee1 has been identified as the kinase that phosphorylates tyrosine-15 in various organisms, the threonine-14-specific kinase has not been isolated. A complementary DNA was cloned from Xenopus that encodes Myt1, a member of the Wee1 family that was discovered to phosphorylate Cdc2 efficiently on both threonine-14 and tyrosine-15. Myt1 is a membrane-associated protein that contains a putative transmembrane segment. Immunodepletion studies suggested that Myt1 is the predominant threonine-14-specific kinase in Xenopus egg extracts. Myt1 activity is highly regulated during the cell cycle, suggesting that this relative of Wee1 plays a role in mitotic control.

Journal ArticleDOI
TL;DR: An SL(2, Z) family of string solutions of type IIB supergravity in ten dimensions is constructed; the solutions are labeled by a pair of relatively prime integers that characterize the charges of the three-form field strengths.

Journal ArticleDOI
22 Sep 1995-Science
TL;DR: Double-mutant analysis indicates that ERS acts upstream of the CTR1 protein kinase gene in the ethylene-response pathway.
Abstract: ERS (ethylene response sensor), a gene in the Arabidopsis thaliana ethylene hormone-response pathway, was uncovered by cross-hybridization with the Arabidopsis ETR1 gene. The deduced ERS protein has sequence similarity with the amino-terminal domain and putative histidine protein kinase domain of ETR1, but it does not have a receiver domain as found in ETR1. A missense mutation identical to the dominant etr1-4 mutation was introduced into the ERS gene. The altered ERS gene conferred dominant ethylene insensitivity to wild-type Arabidopsis. Double-mutant analysis indicates that ERS acts upstream of the CTR1 protein kinase gene in the ethylene-response pathway.

Journal ArticleDOI
09 Nov 1995-Nature
TL;DR: The involvement of neurotrophins in the dynamic elaboration of axon terminals is demonstrated, and a direct role for target-derived BDNF during synaptic patterning in the developing central nervous system is suggested.
Abstract: Neurotrophins are thought to be important for the survival and differentiation of vertebrate neurons. Roles have been suggested for target-derived neurotrophins, based both on their expression in target tissues at the time of neuron innervation, and on their effects on axonal sprouting. However, direct in vivo evidence of their involvement in axon arborization has remained elusive. We have used in vivo microscopy to follow individual optic axons over time, and have examined the role of the neurotrophin brain-derived neurotrophic factor (BDNF) in their development. Here we show that injection of BDNF into the optic tectum of live Xenopus laevis tadpoles increased the branching and complexity of optic axon terminal arbors. In contrast, injection of specific neutralizing antibodies to BDNF reduced axon arborization and complexity. The onset of these effects was rapid (within 2 hours) and persisted throughout the 24-hour observation period. Other neurotrophins had little or no significant effects. These results demonstrate the involvement of neurotrophins in the dynamic elaboration of axon terminals, and suggest a direct role for target-derived BDNF during synaptic patterning in the developing central nervous system.

Journal ArticleDOI
TL;DR: It is argued that nonperturbative gravitational effects in the axion theory lead to a strong violation of CP invariance unless they are suppressed by an extremely small factor, and that in string theory there exists an additional suppression of topology change by the factor e^(−8π²/g²).
Abstract: There exists a widely held notion that gravitational effects can strongly violate global symmetries. If this is correct, it may lead to many important consequences. We argue, in particular, that nonperturbative gravitational effects in the axion theory lead to a strong violation of CP invariance unless they are suppressed by an extremely small factor g ≲ 10^−82. One could hope that this problem disappears if one represents the global symmetry of a pseudoscalar axion field as a gauge symmetry of the Ogievetsky-Polubarinov-Kalb-Ramond antisymmetric tensor field. We show, however, that this gauge symmetry does not protect the axion mass from quantum corrections. The amplitude of gravitational effects violating global symmetries could be strongly suppressed by e^−S, where S is the action of a wormhole which may absorb the global charge. Unfortunately, in a wide variety of theories based on the Einstein theory of gravity the action appears to be fairly small, S ∼ 10. However, we find that the existence of wormholes and the value of their action are extremely sensitive to the structure of space on the nearly Planckian scale. We consider several examples (Kaluza-Klein theory, conformal anomaly, R^2 terms) which show that modifications of the Einstein theory on the length scale l ≲ 10 M_P^−1 may strongly suppress violation of global symmetries. We find also that in string theory there exists an additional suppression of topology change by the factor e^(−8π²/g²). This effect is strong enough to save the axion theory for the natural values of the stringy gauge coupling constant.

Journal ArticleDOI
TL;DR: These data demonstrate that Gα15 and Gα16 are unique in that they can be activated by a wide variety of G-protein-coupled receptors and can be a useful tool to understand the mechanism of receptor-induced G- protein activation.

Patent
24 May 1995
TL;DR: In this article, a digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided, where the image file and the digital signature are stored in suitable recording means so they will be available together.
Abstract: A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
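The hash-then-sign flow in the abstract can be illustrated with textbook-sized RSA numbers. This is a toy sketch of the patent's idea, not its implementation: the key (p = 61, q = 53) is deliberately insecure and chosen only so the arithmetic is visible, whereas a real camera would embed a full-strength private key in its processor.

```python
import hashlib

# Textbook RSA key from p = 61, q = 53: n = 3233, e = 17, d = 2753.
# Toy numbers for illustration only; they offer no security.
N, E, D_KEY = 3233, 17, 2753

def sign_image(image_bytes):
    """Camera side: hash the image file, then 'encrypt' the hash with
    the private exponent to form the digital signature."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % N
    return pow(h, D_KEY, N)

def verify_image(image_bytes, signature):
    """Verifier side: decrypt the signature with the public key and
    compare it with a freshly computed hash of the file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % N
    return pow(signature, E, N) == h
```

Because only the camera holds the private exponent, a matching comparison authenticates the file; any post-capture alteration changes the hash and the check fails.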

Journal ArticleDOI
TL;DR: In this article, a finite element multigrid scheme appropriate for large viscosity variations was employed, and convection with viscosity contrasts up to 10^14 was systematically investigated in a 2D square cell with free-slip boundaries.
Abstract: Previous experimental studies of convection in fluids with temperature-dependent viscosity reached viscosity contrasts of the order of 10^5. Although this value seems large, it still might not be large enough for understanding convection in the interiors of Earth and other planets whose viscosity is a much stronger function of temperature. The reason is that, according to theory, above 10^4–10^5 viscosity contrasts, convection must undergo a major transition to stagnant lid convection. This is an asymptotic regime in which a stagnant lid is formed on the top of the layer and convection is driven by the intrinsic, rheological, temperature scale, rather than by the entire temperature drop in the layer. A finite element multigrid scheme appropriate for large viscosity variations is employed and convection with up to 10^14 viscosity contrasts has been systematically investigated in a 2D square cell with free-slip boundaries. We reached the asymptotic regime in the limit of large viscosity contrasts and obtained s...

Journal ArticleDOI
M. S. Alam1, I. J. Kim1, Z. Ling1, A. H. Mahmood1  +195 moreInstitutions (22)
TL;DR: Upper and lower limits on the branching ratio, each at 95% C.L., are B(b → sγ) < 4.2×10^-4 and B(b → sγ) > 1.0×10^-4.
Abstract: We have measured the inclusive b → sγ branching ratio to be (2.32 ± 0.57 ± 0.35)×10^-4, where the first error is statistical and the second is systematic. Upper and lower limits on the branching ratio, each at 95% C.L., are B(b → sγ) < 4.2×10^-4 and B(b → sγ) > 1.0×10^-4. These limits restrict the parameters of extensions of the standard model.

Journal ArticleDOI
TL;DR: This paper gives a straightforward, highly efficient, scalable implementation of common matrix multiplication operations; the algorithms are much simpler than previously published methods, yield better performance, and require less work space.
Abstract: In this paper, we give a straightforward, highly efficient, scalable implementation of common matrix multiplication operations. The algorithms are much simpler than previously published methods, yield better performance, and require less work space. MPI implementations are given, as are performance results on the Intel Paragon system.
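The rank-update structure behind such scalable matrix multiplication can be sketched serially. This is a hedged illustration: the paper's algorithms are distributed MPI implementations, while here the panel loop merely mimics the per-step broadcast-and-update on one process, and the function name and panel width are invented for the example.

```python
import numpy as np

def summa_like_matmul(A, B, panel=2):
    """Serial sketch of a SUMMA-style rank-update matrix multiply.

    C is accumulated as a sum of rank-`panel` updates, one per panel
    of A's columns and the matching panel of B's rows; the distributed
    version broadcasts these panels along process rows and columns at
    each step instead of slicing local arrays.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for s in range(0, k, panel):
        C += A[:, s:s + panel] @ B[s:s + panel, :]  # one rank-update step
    return C
```

Because each step needs only one column panel of A and one row panel of B, the per-process work space stays small, which is the property the abstract highlights.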

Journal ArticleDOI
09 Nov 1995-Nature
TL;DR: It is shown that floating head is the zebrafish homologue of Xnot, a homeobox gene expressed in the amphibian organizer and notochord, and it is proposed that flh regulates notochord precursor cell fate.
Abstract: The notochord is a midline mesodermal structure with an essential patterning function in all vertebrate embryos. Zebrafish floating head (flh) mutants lack a notochord, but develop with prechordal plate and other mesodermal derivatives, indicating that flh functions specifically in notochord development. We show that floating head is the zebrafish homologue of Xnot, a homeobox gene expressed in the amphibian organizer and notochord. We propose that flh regulates notochord precursor cell fate.