
Showing papers by "Chalmers University of Technology" published in 1997


Journal ArticleDOI
09 May 1997-Science
TL;DR: The compound N-(17-hydroxylinolenoyl)-L-glutamine (named here volicitin) was isolated from oral secretions of beet armyworm caterpillars; when applied to damaged corn seedlings, it induces them to emit volatile compounds that attract parasitic wasps, natural enemies of the herbivores.
Abstract: The compound N-(17-hydroxylinolenoyl)-L-glutamine (named here volicitin) was isolated from oral secretions of beet armyworm caterpillars. When applied to damaged leaves of corn seedlings, volicitin induces the seedlings to emit volatile compounds that attract parasitic wasps, natural enemies of the caterpillars. Mechanical damage of the leaves, without application of this compound, did not trigger release of the same blend of volatiles. Volicitin is a key component in a chain of chemical signals and biochemical processes that regulate tritrophic interactions among plants, insect herbivores, and natural enemies of the herbivores.

927 citations


Journal ArticleDOI
TL;DR: A clean algorithm for determining whether a ray intersects a triangle; the triangle's plane equation need not be computed on the fly nor stored, which can amount to significant memory savings for triangle meshes.
Abstract: We present a clean algorithm for determining whether a ray intersects a triangle. The algorithm translates the origin of the ray and then changes the base to yield a vector (t, u, v)^T, where t is the distance to the plane in which the triangle lies and (u, v) represents the coordinates inside the triangle. One advantage of this method is that the plane equation need not be computed on the fly nor be stored, which can amount to significant memory savings for triangle meshes. As we found our method to be comparable in speed to previous methods, we believe it is the fastest ray-triangle intersection routine for triangles that do not have precomputed plane equations.
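
The ray/triangle test itself is short. Below is a minimal NumPy sketch of the Möller-Trumbore formulation described in the abstract; the function name, the epsilon tolerance and the example data are our own choices, not taken from the paper.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return (t, u, v) if the ray origin + t*direction hits triangle (v0, v1, v2), else None.

    t is the distance to the triangle's plane along the ray; (u, v) are the
    coordinates inside the triangle. No plane equation is precomputed or stored.
    """
    e1 = v1 - v0
    e2 = v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)            # determinant of the change-of-basis matrix
    if abs(det) < eps:                # ray (almost) parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t, u, v

# Example: a ray along +z through a triangle lying in the z = 1 plane.
hit = ray_triangle_intersect(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                             np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                             np.array([0.0, 1.0, 1.0]))
print(hit)  # (1.0, 0.2, 0.2)
```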

794 citations


Journal ArticleDOI
TL;DR: It is shown theoretically that viscoelastic layers with thicknesses comparable to the biofilms studied in this work can induce energy dissipation of the same magnitude as the measured ones.
Abstract: We have measured the energy dissipation of the quartz crystal microbalance (QCM), operating in the liquid phase, when mono- or multilayers of biomolecules and biofilms form on the QCM electrode (with a time resolution of ca. 1 s). Examples are taken from protein adsorption, lipid vesicle adsorption and cell adhesion studies. Our results show that even very thin (a few nm) biofilms dissipate a significant amount of energy owing to the QCM oscillation. Various mechanisms for this energy dissipation are discussed. Three main contributions to the measured increase in energy dissipation are considered: (i) a viscoelastic porous structure (the biofilm) that is strained during oscillation; (ii) trapped liquid that moves between, or in and out of, the pores due to the deformation of the film; and (iii) the load from the bulk liquid, which increases the strain of the film. These mechanisms are, in reality, not entirely separable; rather, they constitute an effective viscoelastic load. The biofilms can therefore not be considered rigidly coupled to the QCM oscillation. It is further shown theoretically that viscoelastic layers with thicknesses comparable to the biofilms studied in this work can induce energy dissipation of the same magnitude as the measured ones.
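
For context, the dissipation factor that the QCM technique measures is conventionally defined as (standard definition, not a formula specific to this paper):

$$D = \frac{1}{Q} = \frac{E_{\mathrm{dissipated}}}{2\pi E_{\mathrm{stored}}}$$

where $E_{\mathrm{dissipated}}$ is the energy lost during one oscillation cycle and $E_{\mathrm{stored}}$ the energy stored in the oscillating system; the adsorbed film and the liquid coupled to it increase $D$.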

667 citations


Journal ArticleDOI
TL;DR: In this paper, the tunneling current from a scanning tunneling microscope was used to image and dissociate single $\mathrm{O}_2$ molecules on the Pt(111) surface in the temperature range of 40 to 150 K. After dissociation, the two oxygen atoms are found one to three lattice constants apart.
Abstract: The tunneling current from a scanning tunneling microscope was used to image and dissociate single $\mathrm{O}_2$ molecules on the Pt(111) surface in the temperature range of 40 to 150 K. After dissociation, the two oxygen atoms are found one to three lattice constants apart. The dissociation rate as a function of current was found to vary as $I^{0.8\pm0.2}$, $I^{1.8\pm0.2}$, and $I^{2.9\pm0.3}$ for sample biases of 0.4, 0.3, and 0.2 V, respectively. These rates are explained using a general model for dissociation induced by intramolecular vibrational excitations via resonant inelastic electron tunneling.
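
The quoted exponents describe a power-law dependence $R \propto I^{N}$ of the dissociation rate on the tunneling current. A hedged sketch of how such an exponent and its uncertainty can be extracted from rate-versus-current data by a straight-line fit in log-log space is shown below; the data values are invented for illustration only.

```python
import numpy as np

# Hypothetical (current, rate) measurements at one fixed sample bias -- illustrative only.
current = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # tunneling current, nA
rate = np.array([0.012, 0.021, 0.040, 0.071, 0.135])   # dissociation rate, 1/s

# A power law R = c * I**N is a straight line in log-log space; the slope is N.
(N, log_c), cov = np.polyfit(np.log(current), np.log(rate), 1, cov=True)
print(f"N = {N:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```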

511 citations


Journal ArticleDOI
TL;DR: In this paper, the full supersymmetric and kappa-symmetric actions for Dirichlet p-branes were given, including their coupling to background superfields of ten-dimensional type IIA and IIB supergravity.

495 citations


Journal ArticleDOI
TL;DR: In this paper, a quantitative comparison is made between the conventional resistive and neoclassical theories and the experimental results of several machines, all of which have observed these low-m/n nonideal modes.
Abstract: The maximum normalized beta achieved in long-pulse tokamak discharges at low collisionality falls significantly below both that observed in short-pulse discharges and that predicted by ideal MHD theory. Recent long-pulse experiments, in particular those simulating the International Thermonuclear Experimental Reactor (ITER) [M. Rosenbluth et al., Plasma Physics and Controlled Nuclear Fusion (International Atomic Energy Agency, Vienna, 1995), Vol. 2, p. 517] scenarios with low collisionality $\nu_e^*$, are often limited by low-m/n nonideal magnetohydrodynamic (MHD) modes. The effect of saturated MHD modes is a reduction of the confinement time by 10%-20%, depending on the island size and location, and can lead to a disruption. Recent theories on neoclassical destabilization of tearing modes, including the effects of a perturbed helical bootstrap current, are successful in explaining the qualitative behavior of the resistive modes, and recent results are consistent with the size of the saturated islands. Also, a strong correlation is observed between the onset of these low-m/n modes and sawteeth, edge localized modes (ELMs), or fishbone events, consistent with the seed island required by the theory. We focus on a quantitative comparison between the conventional resistive and neoclassical theories and the experimental results of several machines, which have all observed these low-m/n nonideal modes. This enables us to single out the key issues in projecting the long-pulse beta limits of ITER-size tokamaks and also to discuss possible plasma control methods that can increase the soft beta limit, decrease the seed perturbations, and/or diminish the effects on confinement. © 1997 American Institute of Physics.
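
For reference, the "normalized beta" quoted throughout is the usual Troyon-normalized figure of merit (standard tokamak convention, not defined in the abstract):

$$\beta_N = \frac{\beta_t\,[\%]\; a\,[\mathrm{m}]\; B_T\,[\mathrm{T}]}{I_p\,[\mathrm{MA}]}$$

with $\beta_t$ the toroidal beta, $a$ the minor radius, $B_T$ the toroidal field and $I_p$ the plasma current.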

414 citations


Journal ArticleDOI
TL;DR: The binding of a mixed-sequence pentadecamer PNA (peptide nucleic acid) containing all four nucleobases to the fully complementary as well as various singly mismatched RNA and DNA oligonucleotides has been systematically investigated using thermal denaturation and BIAcore surface-interaction techniques.
Abstract: The binding of a mixed-sequence pentadecamer PNA (peptide nucleic acid) containing all four nucleobases to the fully complementary as well as various singly mismatched RNA and DNA oligonucleotides has been systematically investigated using thermal denaturation and BIAcore surface-interaction techniques. The rate constants for association ($k_a$) and dissociation ($k_d$) of the duplex formation as well as the thermal stability (melting temperature, $T_m$) of the duplexes have been determined. Upon binding to PNA tethered via a biotin linker to streptavidin at the dextran/gold surface, DNA and RNA sequences containing single mismatches at various positions in the center resulted in increased dissociation and decreased association rate constants. $T_m$ values for PNA·RNA duplexes are on average 4 °C higher than for PNA·DNA duplexes and follow quantitatively the same variation with mismatches as do the PNA·DNA duplexes. Also a faster $k_a$ and a slower $k_d$ are found for PNA·RNA duplexes compared to the PNA·DNA duplexes. An overall fair correlation between $T_m$, $k_a$, and $k_d$ is found for a series of PNA·DNA and PNA·RNA duplexes, although the determination of $k_a$ seemed to be prone to artifacts of the method and was not considered capable of providing absolute values representing the association rate constant in bulk solution.
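
For orientation, the measured rate constants and duplex stability are linked through standard kinetics (not a relation derived in this paper):

$$K_D = \frac{k_d}{k_a}$$

so the faster $k_a$ and slower $k_d$ reported for PNA·RNA duplexes correspond to a lower $K_D$, i.e. a more stable duplex, consistent with their higher $T_m$.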

412 citations


Journal ArticleDOI
TL;DR: A method, along with some optimizations, for computing whether or not two triangles intersect is presented; the code is shown to be fast and can be used, for example, in collision detection algorithms.
Abstract: This paper presents a method, along with some optimizations, for computing whether or not two triangles intersect. The code, which is shown to be fast, can be used, for example, in collision detection algorithms.
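
As a hedged illustration of the kind of rejection test such a routine builds on (not the paper's full algorithm), the sketch below checks whether all vertices of one triangle lie strictly on one side of the other triangle's plane, in which case the two triangles cannot intersect. Function names and the epsilon tolerance are our own.

```python
import numpy as np

def separated_by_plane(tri_a, tri_b, eps=1e-9):
    """True if every vertex of tri_b lies strictly on one side of tri_a's plane.

    tri_a, tri_b are (3, 3) arrays of vertices. This is only the rejection
    step: passing it does not by itself prove that the triangles intersect.
    """
    n = np.cross(tri_a[1] - tri_a[0], tri_a[2] - tri_a[0])   # normal of tri_a's plane
    d = (tri_b - tri_a[0]) @ n                               # signed distances of tri_b's vertices
    return bool(np.all(d > eps) or np.all(d < -eps))

def triangles_definitely_disjoint(tri_a, tri_b):
    """Quick reject: disjoint if either triangle lies wholly on one side of the other's plane."""
    return separated_by_plane(tri_a, tri_b) or separated_by_plane(tri_b, tri_a)
```

When neither plane separates the triangles, the full test, as we understand it, intersects both triangles with the line shared by the two planes and checks the resulting scalar intervals for overlap.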

402 citations


Journal ArticleDOI
01 Nov 1997-Stroke
TL;DR: The new automated analyzing system will not only greatly increase the speed of measurements but also reduce the variability between readers; if the same analyzing program is used it should also reduce the variability between laboratories, and it will probably prevent the problem of drift in measurements over time.
Abstract: Background and Purpose A computerized analyzing system with manual tracing of echo interfaces for measurement of intima-media thickness and lumen diameter in carotid and femoral arteries was previously developed by our research group and has been used for many years in several laboratories. However, manual measurements are not only time consuming, but the results from these readings are also dependent on training and subjective judgement. A further problem is the observed drift in measurements over time. A new computerized technique for automatic detection of echo interfaces was therefore developed. The aim of this study was to evaluate the new automated computerized analyzing system. Methods The new system is based on dynamic programming and includes optional interactive modification by the human operator. Local measurements of vessel echo intensity, intensity gradient, and boundary continuity are extracted by image analysis techniques and included as weighted terms in a cost function. The dynamic programming procedure is used for determining the optimal location of the vessel interfaces such that the cost function is minimized. Results With the new automated computerized analyzing system the measurement results were less dependent on the reader's experience, and the variability between readers was less compared with the old manual analyzing system. The measurements were also less time consuming. Conclusions The new automated analyzing system will not only greatly increase the speed of measurements but also reduce the variability between readers. It should also reduce the variability between different laboratories if the same analyzing program is used. Furthermore, the new system will probably prevent the problem of drift in measurements over time.
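
A minimal sketch of the dynamic programming idea is given below. The cost terms and the weights w_int, w_grad and w_cont are illustrative placeholders of our own; the published system's actual cost function and its interactive correction step are not reproduced here.

```python
import numpy as np

def trace_boundary(intensity, gradient, w_int=1.0, w_grad=1.0, w_cont=2.0):
    """Return one boundary row index per image column via dynamic programming.

    intensity and gradient are 2-D arrays (rows x columns). The cost rewards
    bright, high-gradient pixels; a continuity term penalizes vertical jumps.
    """
    rows, cols = intensity.shape
    local = -(w_int * intensity + w_grad * gradient)   # cheap where the echo is strong
    cost = np.empty((rows, cols))
    back = np.zeros((rows, cols), dtype=int)
    cost[:, 0] = local[:, 0]
    row_idx = np.arange(rows)
    for c in range(1, cols):
        # Continuity penalty for jumping from row r_prev (previous column) to row r.
        jump = w_cont * np.abs(row_idx[:, None] - row_idx[None, :])   # jump[r, r_prev]
        total = cost[:, c - 1][None, :] + jump
        back[:, c] = np.argmin(total, axis=1)
        cost[:, c] = local[:, c] + total[row_idx, back[:, c]]
    # Backtrack the minimum-cost path.
    boundary = np.empty(cols, dtype=int)
    boundary[-1] = int(np.argmin(cost[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary
```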

396 citations


Journal ArticleDOI
TL;DR: In this paper, the full supersymmetric and kappa-symmetric action for the Dirichlet three-brane, including its coupling to background superfields of ten-dimensional type IIB supergravity, was given.

372 citations


Journal ArticleDOI
TL;DR: In this article, the density matrix renormalization group (DMRG) was investigated and it was shown that the ground state can be derived through a simple variational ansatz making no reference to the DMRG construction.
Abstract: We investigate the density matrix renormalization group (DMRG) discovered by White and show that in the case where the renormalization eventually converges to a fixed point the DMRG ground state can be simply written as a matrix-product form. This ground state can also be rederived through a simple variational ansatz making no reference to the DMRG construction. We also show how to construct the matrix-product states and how to calculate their properties, including the excitation spectrum. This paper provides details of many results announced earlier.
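
Concretely, the matrix-product form referred to here can be written, in the now-standard notation (which may differ in detail from the paper's own conventions), as

$$|\Psi\rangle = \sum_{s_1,\dots,s_N} \mathrm{Tr}\bigl(A^{s_1} A^{s_2} \cdots A^{s_N}\bigr)\, |s_1 s_2 \cdots s_N\rangle$$

where each $A^{s}$ is an $m \times m$ matrix, with $m$ the number of states kept in the truncation, and the variational ansatz optimizes the entries of $A$ directly.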

Journal ArticleDOI
TL;DR: The authors have developed a model to relate basic forest properties to INSAR observations and show that the coherence and interferometric effective height of a forested area change between image pairs.
Abstract: Properties of ERS-1 C-band repeat-pass interferometric SAR information for a forested area are studied. The intensity information is rather limited, but by including coherence and effective interferometric SAR (INSAR) height, more information about the forest parameters can be obtained via satellite. Such information is also important for correction of INSAR-derived topographic maps. Coherence properties have been used to identify forested/nonforested areas, and the interferometric effective height of the forest was determined by comparison to a DEM of the area. The authors have developed a model to relate basic forest properties to INSAR observations. The observations show that the coherence and interferometric effective height of a forested area change between image pairs. The model demonstrates how these properties are related to the temporal decorrelation and the scattering from the vegetation canopy and the ground surface. Gaps in the vegetation are found to be important in the characterization of boreal forests.
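
The coherence used to separate forested from nonforested areas is the standard complex interferometric correlation magnitude (a textbook definition, not one introduced by this paper):

$$\gamma = \frac{\left|\langle s_1 s_2^{*}\rangle\right|}{\sqrt{\langle |s_1|^{2}\rangle\,\langle |s_2|^{2}\rangle}}, \qquad 0 \le \gamma \le 1$$

where $s_1$ and $s_2$ are the two complex SAR images and the averages are taken over a local estimation window; temporal decorrelation of the canopy between passes lowers $\gamma$ relative to stable ground.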

Journal ArticleDOI
TL;DR: In this article, dilute-acid hydrolyzates from alder, aspen, birch, willow, pine, and spruce were prepared by a one-stage hydrolysis process using sulfuric acid at temperatures between 188 and 234 C and with a holding time of 7 min.
Abstract: Dilute-acid hydrolyzates from alder, aspen, birch, willow, pine, and spruce were fermented without prior detoxification. The hydrolyzates were prepared by a one-stage hydrolysis process using sulfuric acid (5 g/L) at temperatures between 188 and 234 C and with a holding time of 7 min. The fermentations were carried out anaerobically by Saccharomyces cerevisiae (10 g of d.w./L) at a temperature of 30 C and an initial pH of 5.5. The fermentabilities were quite different for the different wood species, and only hydrolyzates of spruce produced at 188 and 198 C, hydrolyzates of pine produced at 188 C, and hydrolyzates of willow produced at 198 C could be completely fermented within 24 h. From the sum of the concentrations of the known inhibitors furfural and 5-(hydroxymethyl)furfural (HMF), a good prediction of the maximum ethanol production rate could be obtained, regardless of the origin of the hydrolyzate. Furthermore, in hydrolyzates that fermented well, furfural and HMF were found to be taken up and converted by the yeast, concomitant with the uptake of glucose.

Journal ArticleDOI
TL;DR: The collected data indicates that the breaches during the standard attack phase are statistically equivalent and that the times between breaches are exponentially distributed, which would actually imply that traditional methods for reliability modeling could be applicable.
Abstract: The paper is based on a conceptual framework in which security can be split into two generic types of characteristics, behavioral and preventive. Here, preventive security denotes the system's ability to protect itself from external attacks. One way to describe the preventive security of a system is in terms of its interaction with the alleged attacker, i.e., by describing the intrusion process. To our knowledge, very little is done to model this process in quantitative terms. Therefore, based on empirical data collected from intrusion experiments, we have worked out a hypothesis on typical attacker behavior. The hypothesis suggests that the attacking process can be split into three phases: the learning phase, the standard attack phase, and the innovative attack phase. The probability for successful attacks during the learning and innovative phases is expected to be small, although for different reasons. During the standard attack phase it is expected to be considerably higher. The collected data indicates that the breaches during the standard attack phase are statistically equivalent and that the times between breaches are exponentially distributed. This would actually imply that traditional methods for reliability modeling could be applicable.
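
If the times between breaches during the standard attack phase are indeed exponentially distributed, that phase can be summarized by a single rate parameter, exactly as in classical reliability models. A hedged sketch of estimating that rate and the mean time between breaches follows; the data values are invented for illustration.

```python
import numpy as np

# Hypothetical times (hours) between successive breaches in the standard attack phase.
inter_breach = np.array([3.2, 11.5, 6.7, 1.9, 24.0, 8.4, 4.1, 15.3])

mtbb = inter_breach.mean()                 # mean time between breaches
rate = 1.0 / mtbb                          # maximum-likelihood estimate of the rate
p_within_10h = 1.0 - np.exp(-rate * 10.0)  # P(next breach within 10 h), exponential model

print(f"lambda = {rate:.3f} /h, MTBB = {mtbb:.1f} h, P(breach within 10 h) = {p_within_10h:.2f}")
```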

Proceedings ArticleDOI
04 May 1997
TL;DR: The classification of intrusion techniques is based on a scheme proposed by Neumann and Parker (1989), with relevant parts of their scheme further refined; the classification of intrusion results is derived from the traditional three aspects of computer security: confidentiality, availability and integrity.
Abstract: This paper presents a classification of intrusions with respect to the technique as well the result. The taxonomy is intended to be a step on the road to an established taxonomy of intrusions for use in incident reporting, statistics, warning bulletins, intrusion detection systems etc. Unlike previous schemes, it takes the viewpoint of the system owner and should therefore be suitable to a wider community than that of system developers and vendors only. It is based on data from a realistic intrusion experiment, a fact that supports the practical applicability of the scheme. The paper also discusses general aspects of classification, and introduces a concept called dimension. After having made a broad survey of previous work in the field, we decided to base our classification of intrusion techniques on a scheme proposed by Neumann and Parker (1989) and to further refine relevant parts of their scheme. Our classification of intrusion results is derived from the traditional three aspects of computer security: confidentiality, availability and integrity.

Journal ArticleDOI
TL;DR: The results showed that, for growth to occur, the concentration of the undissociated form of acetic acid in the medium should not exceed 5 g/L; the presence of acetic acid led to an increased ethanol yield on glucose, while the biomass and glycerol yields decreased by 45 and 33%, respectively.
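
The undissociated fraction referred to here follows from the medium pH through the Henderson-Hasselbalch relation (standard acid-base chemistry, with $\mathrm{p}K_a \approx 4.76$ for acetic acid; not a formula from this entry):

$$\frac{[\mathrm{HAc}]}{[\mathrm{HAc}]+[\mathrm{Ac}^-]} = \frac{1}{1+10^{\,\mathrm{pH}-\mathrm{p}K_a}}$$

so, for example, at pH 5.5 only about 15% of the total acetic acid is in the undissociated, membrane-permeable form.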


Journal ArticleDOI
TL;DR: In this paper, the authors propose a range of feasible allocation procedures that reflect the consequences of inflows and outflows of cascade materials, in order to make LCA a more efficient tool for decision support.
Abstract: If the aim of an LCA is to support decisions or to generate and evaluate ideas for future decisions, the allocation procedure should generally be effect-oriented rather than cause-oriented. It is important that the procedure be acceptable to decision makers expected to use the LCA results. It is also an advantage if the procedure is easy to apply. Applicability appears to be in conflict with accurate reflection of effect-oriented causalities. To make LCA a more efficient tool for decision support, a range of feasible allocation procedures that reflect the consequences of inflows and outflows of cascade materials is required.

Proceedings ArticleDOI
01 Jan 1997
TL;DR: This paper extends a functional language with a construct for writing polytypic functions, and infers the types of all other expressions using an extension of Jones' theories of qualified types and higher-order polymorphism.
Abstract: Many functions have to be written over and over again for different datatypes, either because datatypes change during the development of programs, or because functions with similar functionality are needed on different datatypes. Examples of such functions are pretty printers, debuggers, equality functions, unifiers, pattern matchers, rewriting functions, etc. Such functions are called polytypic functions. A polytypic function is a function that is defined by induction on the structure of user-defined datatypes. This paper extends a functional language (a subset of Haskell) with a construct for writing polytypic functions. The extended language type checks definitions of polytypic functions, and infers the types of all other expressions using an extension of Jones' theories of qualified types and higher-order polymorphism. The semantics of the programs in the extended language is obtained by adding type arguments to functions in a dictionary passing style. Programs in the extended language are translated to Haskell.
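
As a loose analogy only (the paper's construct is a typed extension of a Haskell subset, not Python), the flavour of a function "defined by induction on the structure of datatypes" can be sketched with a generic size function that recurses over the structure of nested containers and dataclass fields; names and the example type are ours.

```python
import dataclasses

def gsize(x):
    """Count the atomic values in an arbitrarily nested structure.

    One definition covers lists, tuples, sets, dicts and any dataclass by
    recursing on the structure of the value rather than on one fixed type.
    """
    if dataclasses.is_dataclass(x) and not isinstance(x, type):
        return sum(gsize(getattr(x, f.name)) for f in dataclasses.fields(x))
    if isinstance(x, (list, tuple, set)):
        return sum(gsize(e) for e in x)
    if isinstance(x, dict):
        return sum(gsize(k) + gsize(v) for k, v in x.items())
    return 1  # an atom

@dataclasses.dataclass
class Tree:
    value: int
    children: list

print(gsize(Tree(1, [Tree(2, []), Tree(3, [Tree(4, [])])])))  # 4
```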

Journal ArticleDOI
TL;DR: In this paper, the authors outline principles for teaching based on the body of empirical phenomenographic research and, on the other hand, on an emerging picture of the nature of human awareness.
Abstract: Phenomenographic research has tackled questions concerning the variation in ways in which people experience the phenomena they meet in the world around them. The empirical work directly addressing educational issues has to a large extent focused on describing qualitatively different ways in which particular sorts of students understand a phenomenon, or experience some aspect of the world, which is central to their education, and setting the results into the educational context of interest. Learning is viewed as being a change in the ways in which one is capable of experiencing some aspect of the world and other research has been linked to attempts to bring about such changes by utilising certain approaches to teaching. This article will outline principles for teaching based, on the one hand, on the body of empirical phenomenographic research and, on the other hand, on an emerging picture of the nature of human awareness. The principles will first be drawn, explicated with the help of a number of ...

Journal ArticleDOI
TL;DR: In this paper, the authors focus on technology-based spin-off firms whose initial product idea originated in the previous employment of the founder, and find that after an initial ten-year period the spin-offs were growing significantly faster than the non-spin-offs.

Journal ArticleDOI
TL;DR: This note describes a simple technique, the Gamma (or Near Neighbour) test, which in many cases can be used to considerably simplify the design process of constructing a smooth data model such as a neural network.
Abstract: This note describes a simple technique, the Gamma (or Near Neighbour) test, which in many cases can be used to considerably simplify the design process of constructing a smooth data model such as a neural network. The Gamma test is a data analysis routine that (in an optimal implementation) runs in time O(M log M) as M → ∞, where M is the number of sample data points, and which aims to estimate the best Mean Squared Error (MSError) that can be achieved by any continuous or smooth (bounded first partial derivatives) data model constructed using the data.
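
A compact sketch of the test as we understand it follows; the use of a k-d tree, the number of neighbours p, and the synthetic example are our own implementation choices, not the authors' code. For each point, take its p nearest neighbours in input space, average the squared input and output distances per neighbour rank, and read the noise-variance estimate Γ off the intercept of a straight-line fit.

```python
import numpy as np
from scipy.spatial import cKDTree

def gamma_test(X, y, p=10):
    """Gamma (Near Neighbour) test: estimate the best achievable MSE for y = f(X) + noise.

    X is (M, d), y is (M,). Returns (Gamma, A): the intercept Gamma estimates the
    noise variance, A is the slope of the regression of gamma(k) on delta(k), k = 1..p.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=p + 1)          # k+1: the nearest point is the point itself
    delta = np.mean(dist[:, 1:] ** 2, axis=0)                         # mean squared input distance per rank
    gamma = np.mean((y[idx[:, 1:]] - y[:, None]) ** 2, axis=0) / 2.0  # half mean squared output distance
    A, Gamma = np.polyfit(delta, gamma, 1)      # gamma ~ Gamma + A * delta
    return Gamma, A

# Example: y = sin(x) plus Gaussian noise of variance 0.01; the intercept should come out near 0.01.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=2000)
print(gamma_test(X, y))
```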

Journal ArticleDOI
TL;DR: This article delineates a set of core principles from the Japanese kaizen concept and illustrates the contingent nature of the design and organization of continuous improvement (CI) processes, especially with respect to product/process standardization and work design.
Abstract: Proposes to delineate a set of core principles from the Japanese kaizen concept and illustrate the contingent nature of the design and organization of continuous improvement (CI) processes, especially with respect to product/process standardization and work design. Given differences in the overall degree of standardization related to product design and process choice, two types of standards to reduce variability at operator work process level should be considered: indirect system standards, e.g. for skills, organization, information and communication; and direct standard operating procedures (SOPs). It is proposed that two team‐based organizational designs for CI (organic CI and wide‐focus CI) are functionally equivalent to the Japanese kaizen model, particularly when combining indirect system standards of skills with a group task design and low degree of product/process standardization. Expert task forces and suggestion systems are complementary organizational designs for improvement processes, particularly when work design is based on individual tasks and direct SOPs.

Journal ArticleDOI
TL;DR: Comparisons with corresponding analyses of commercial implants and electropolished and/or anodically oxidized samples show that the plasma treatment offers superior control of the surface status, but it is also shown that improper control of the plasma process can produce unwanted and irreproducible results.
Abstract: Glow discharge plasma treatment is a frequently used method for cleaning, preparation, and modification of biomaterial and implant surfaces. The merits of such treatments are, however, strongly dependent on the process parameters. In the present work the possibilities, limitations, and risks of plasma treatment for surface preparation of metallic materials are investigated experimentally using titanium as a model system, and also discussed in more general terms. Samples were treated by different low-pressure direct current plasmas and analyzed using Auger electron spectroscopy (AES), x-ray photoelectron spectroscopy (XPS), atomic force microscopy, scanning electron microscopy, and light microscopy. The plasma system is a home-built, ultra-high vacuum-compatible system that allows sample introduction via a load-lock, and precise control of pressure, gas composition and flow rate, etc. This system allows uniform treatment of cylindrical and screw-shaped samples. With appropriate plasma parameters, argon plasma removes all chemical traces from former treatments (adsorbed contaminants and other impurities, and native oxide layers), in effect producing cleaner and more well-controlled surfaces than with conventional preparation methods. Removal (sputtering) rates up to 30 nm/min are possible. However, when inappropriate plasma parameters are used, the result may be increased contamination and formation of unintentional or undesired surface layers (e.g., carbides and nitrides). Plasma-cleaned surfaces provide a clean and reproducible starting condition for further plasma treatments to form well-controlled surface layers. Oxidation in pure O2 (thermally or in oxygen plasmas) results in uniform and stoichiometric TiO2 surface oxide layers of reproducible composition and thicknesses in the range 0.5-150 nm, as revealed by AES and XPS analyses. Titanium nitride layers were prepared by using N2 plasmas. While mild plasma treatments leave the surface microstructure unaffected, heavy plasma treatment can give rise to dramatic morphologic changes. Comparison of these results with corresponding analyses of commercial implants and electropolished and/or anodically oxidized samples shows that the plasma treatment offers superior control of the surface status. However, it is also shown that improper control of the plasma process can produce unwanted and irreproducible results.

Journal ArticleDOI
TL;DR: In this paper, the authors present the System Conditions, a set of first-order principles for sustainability which are complementary (i.e. they do not overlap), all necessary, and applicable at different scales and activities.
Abstract: SUMMARY The enlargement of complexity and effects of environmental problems has increased the need for a 'compass' to point us in the direction of sustainability. The four principles—System Conditions—which we have earlier described, along with a step-by-step approach to meet them, are such a compass. The System Conditions are first order principles for Sustainability: • they do not cover the whole area of Sustainability; • they are complementary, i.e. they do not overlap; • they are all necessary; • they are applicable at different scales and activities. The compass provides a model that not only implies restrictions on business and policy-making, but also opportunities from a self-interest point of view. The model makes it possible to foresee changes regarding demands and costs on the future market. A number of business corporations and municipalities apply the compass as a guiding tool to the future market, asking the following strategic questions for each of the System Conditions: Does this measure ...


Journal ArticleDOI
TL;DR: In this article, the structure of the first azimuthal stationary state is analyzed for a nonlinear medium presenting simultaneously a cubic (focusing) and a quintic (defocusing) dependence of the refractive index on the light intensity.
Abstract: We analyze the structure of the first azimuthal stationary state for a nonlinear medium presenting simultaneously a cubic (focusing) and a quintic (defocusing) dependence of the refractive index on the light intensity. This solution takes the form of a dark vortex of light hosted in a compact light beam. The existence of these modes is guaranteed if the flux exceeds a certain minimum threshold, and the modes are extremely stable for fluxes larger than a critical value that we calculated. We verified the robust nature of this solution by inducing internal oscillations with an initial phase chirp. Using the variational method, we obtain an approximate picture of the beam's internal dynamics. We also studied numerically the interactions between two near-vortex solutions, finding that for a wide range of beam parameters they show elastic collisions.
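
Such cubic-quintic media are commonly modelled by the normalized nonlinear Schrödinger equation below (a standard normalization; the paper's coefficients and scaling may differ):

$$i\,\partial_z A + \tfrac{1}{2}\nabla_{\!\perp}^{2} A + |A|^{2}A - |A|^{4}A = 0$$

where $A$ is the slowly varying field envelope and $z$ the propagation distance; the focusing cubic and defocusing quintic terms compete, and the first azimuthal state carries an $e^{i\theta}$ phase winding around the dark vortex core.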

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of the evolution of the NO-CO reaction on Rh, together with data for the elementary reaction steps, obtained primarily on Rh(111) under UHV conditions.


Journal ArticleDOI
TL;DR: This work isolated the DNA segments in the mouse genome that form the most stable nucleosomes yet characterized, and the selected sequences are shown to be localized at the centromeric regions of mouse metaphase chromosomes.