
Showing papers by "Delft University of Technology", published in 1992


Journal ArticleDOI
01 Jul 1992-Yeast
TL;DR: The stimulatory effect of benzoate on respiration was dependent on the dilution rate: at a low dilution rate the specific oxygen uptake rate increased strongly with increasing benzoate concentration, whereas at high dilution rates respiration was not enhanced by benzoate.
Abstract: Addition of benzoate to the medium reservoir of glucose-limited chemostat cultures of Saccharomyces cerevisiae CBS 8066 growing at a dilution rate (D) of 0.10 h-1 resulted in a decrease in the biomass yield, and an increase in the specific oxygen uptake rate (qO2) from 2.5 to as high as 19.5 mmol g-1 h-1. Above a critical concentration, the presence of benzoate led to alcoholic fermentation and a reduction in qO2 to 13 mmol g-1 h-1. The stimulatory effect of benzoate on respiration was dependent on the dilution rate: at high dilution rates respiration was not enhanced by benzoate. Cells could only gradually adapt to growth in the presence of benzoate: a pulse of benzoate given directly to the culture resulted in wash-out. As the presence of benzoate in cultures growing at low dilution rates resulted in large changes in the catabolic glucose flux, it was of interest to study the effect of benzoate on the residual glucose concentration in the fermenter as well as on the level of some selected enzymes. At D = 0.10 h-1, the residual glucose concentration increased proportionally with increasing benzoate concentration. This suggests that modulation of the glucose flux mainly occurs via a change in the extracellular glucose concentration rather than by synthesis of an additional amount of carriers. Also various intracellular enzyme levels were not positively correlated with the rate of respiration. A notable exception was citrate synthase: its level increased with increasing respiration rate. Growth of S. cerevisiae in ethanol-limited cultures in the presence of benzoate also led to very high qO2 levels of 19-21 mmol g-1 h-1. During growth on glucose as well as on ethanol, the presence of benzoate coincided with an increase in the mitochondrial volume up to one quarter of the total cellular volume. 
Also with the Crabtree-negative yeasts Candida utilis, Kluyveromyces marxianus and Hansenula polymorpha, growth in the presence of benzoate resulted in an increase in qO2 and, at high concentrations of benzoate, in aerobic fermentation. In contrast to S. cerevisiae, the highest qO2 of these yeasts when growing at D = 0.10 h-1 in the presence of benzoate was equal to, or lower than the qO2 attainable at mu(max) without benzoate. Enzyme activities that were repressed by glucose in S. cerevisiae also declined in K. marxianus when the glucose flux was increased by the presence of benzoate.(ABSTRACT TRUNCATED AT 400 WORDS)

1,444 citations


Journal ArticleDOI
TL;DR: The Magic Formula tyre model provides a set of mathematical formulae from which the forces and moment acting from road to tyre can be calculated at longitudinal, lateral and camber slip conditions, which may occur simultaneously.
Abstract: An account is given of the latest version 3 of the Magic Formula tyre model. The model provides a set of mathematical formulae from which the forces and moment acting from road to tyre can be calculated at longitudinal, lateral and camber slip conditions, which may occur simultaneously. The model aims at an accurate description of measured steady-state tyre behaviour. The coefficients of the basic formula represent typifying quantities of the tyre characteristic. By selecting proper values, the characteristics for either side force, aligning torque or fore and aft force can be obtained. The new version of the model contains physically based formulations to avoid the introduction of correction factors. Double-sided, possibly non-symmetric pure slip curves are employed as the basis for combined slip calculations. Suggestions are given to estimate the driving part of the longitudinal slip curve and to represent the characteristic at rolling backwards.
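The basic formula referred to above has a well-known closed form. A minimal sketch, with illustrative coefficient values (B, C, D, E below are not taken from the paper):

```python
import numpy as np

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'Magic Formula': y = D*sin(C*arctan(B*x - E*(B*x - arctan(B*x)))).

    B, C, D, E are illustrative stiffness/shape/peak/curvature factors,
    not coefficients from the paper."""
    Bx = B * np.asarray(slip)
    return D * np.sin(C * np.arctan(Bx - E * (Bx - np.arctan(Bx))))

# Lateral-force characteristic over a range of slip values
slip = np.linspace(-0.3, 0.3, 601)
force = magic_formula(slip)
```

D acts as the peak factor, so the characteristic never exceeds it, and the pure-slip curve is anti-symmetric; the paper's version 3 additionally allows non-symmetric, double-sided curves.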

941 citations


Journal ArticleDOI
TL;DR: A plasticity theory is proposed in which the yield strength depends not only on an equivalent plastic strain measure (hardening parameter) but also on the Laplacian thereof; the consistency condition then results in a differential equation instead of an algebraic equation as in conventional plasticity.
Abstract: A plasticity theory is proposed in which the yield strength not only depends on an equivalent plastic strain measure (hardening parameter), but also on the Laplacian thereof. The consistency condition now results in a differential equation instead of an algebraic equation as in conventional plasticity. To properly solve the set of non-linear differential equations the plastic multiplier is discretized in addition to the usual discretization of the displacements. For appropriate boundary conditions this formulation can also be derived from a variational principle. Accordingly, the theory is complete.
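The dependence described in the abstract can be sketched as a yield condition; the notation below (g for a gradient constant) is generic gradient-plasticity usage, not necessarily the paper's own symbols:

```latex
f(\sigma,\kappa) \;=\; \sigma_{\mathrm{eq}}(\sigma)\;-\;\bar\sigma\!\left(\kappa,\nabla^{2}\kappa\right),
\qquad
\bar\sigma\!\left(\kappa,\nabla^{2}\kappa\right) \;\approx\; \sigma_{y}(\kappa)\;-\;g\,\nabla^{2}\kappa .
```

Because the consistency condition \(\dot f = 0\) then contains \(\nabla^{2}\dot\kappa\), it is a differential rather than an algebraic equation in the plastic multiplier, which is why the multiplier must be discretized alongside the displacements.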

924 citations


Journal ArticleDOI
TL;DR: In this paper, a method for the elimination of all surface-related multiples by means of a process that removes the influence of the surface reflectivity from the data is proposed.
Abstract: The major amount of multiple energy in seismic data is related to the large reflectivity of the surface. A method is proposed for the elimination of all surface-related multiples by means of a process that removes the influence of the surface reflectivity from the data. An important property of the proposed multiple elimination process is that no knowledge of the subsurface is required. On the other hand, the source signature and the surface reflectivity do need to be provided. As a consequence, the proposed process has been implemented adaptively, meaning that multiple elimination is designed as an inversion process where the source and surface reflectivity properties are estimated and where the multiple-free data equals the inversion residue. Results on simulated data and field data show that the proposed multiple elimination process should be considered as one of the key inversion steps in stepwise seismic inversion.
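The feedback model behind such surface-related multiple elimination can be sketched in simplified operator notation (symbols chosen here for illustration: P the recorded data, P0 the multiple-free data, and A an operator absorbing the source signature and surface reflectivity):

```latex
P \;=\; P_{0} \;+\; P_{0}\,A\,P
\quad\Longrightarrow\quad
P_{0}^{(i+1)} \;=\; P \;-\; P_{0}^{(i)}\,A\,P .
```

In an adaptive implementation, A is estimated within the iteration, e.g. by minimizing the energy of the residue, which matches the inversion formulation described in the abstract.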

740 citations


Book
31 Jul 1992
TL;DR: In this book, the authors develop information bounds and nonparametric maximum likelihood estimation for inverse problems such as interval censoring and deconvolution, including convolution and asymptotic minimax theorems and Van der Vaart's differentiability theorem.
Abstract: I. Information Bounds.- 1 Models, scores, and tangent spaces.- 1.1 Introduction.- 1.2 Models P.- 1.3 Scores: Differentiability of the Model.- 1.4 Tangent Sets P0 and Tangent Spaces P.- 1.5 Score Operators.- 1.6 Exercises.- 2 Convolution and asymptotic minimax theorems.- 2.1 Introduction.- 2.2 Finite-dimensional Parameter Spaces.- 2.3 Infinite-dimensional Parameter Spaces.- 2.4 Exercises.- 3 Van der Vaart's Differentiability Theorem.- 3.1 Differentiability of Implicitly Defined Functions.- 3.2 Some Applications of the Differentiability Theorem.- 3.3 Exercises.- II. Nonparametric Maximum Likelihood Estimation.- 1 The interval censoring problem.- 1.1 Characterization of the non-parametric maximum likelihood estimators.- 1.2 Exercises.- 2 The deconvolution problem.- 2.1 Decreasing densities and non-negative random variables.- 2.2 Convolution with symmetric densities.- 2.3 Exercises.- 3 Algorithms.- 3.1 The EM algorithm.- 3.2 The iterative convex minorant algorithm.- 3.3 Exercises.- 4 Consistency.- 4.1 Interval censoring, Case 1.- 4.2 Convolution with a symmetric density.- 4.3 Interval censoring, Case 2.- 4.4 Exercises.- 5 Distribution theory.- 5.1 Interval censoring, Case 1.- 5.2 Interval censoring, Case 2.- 5.3 Deconvolution with a decreasing density.- 5.4 Estimation of the mean.- 5.5 Exercises.- References.
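For interval censoring "Case 1" (current-status data), the nonparametric MLE of the distribution function is known to be the isotonic regression of the censoring indicators on the ordered inspection times, computable with the pool-adjacent-violators algorithm. A minimal sketch (function name and data are illustrative):

```python
def npmle_current_status(times, deltas):
    """NPMLE of F(t) for current-status data: isotonic (non-decreasing)
    regression of the indicators on the sorted inspection times,
    via the pool-adjacent-violators algorithm (PAVA)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    # each block holds [sum of indicators, number of points]
    blocks = []
    for i in order:
        blocks.append([float(deltas[i]), 1])
        # pool adjacent blocks while their means violate monotonicity
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return fitted

print(npmle_current_status([1, 2, 3, 4, 5], [0, 1, 0, 1, 1]))
# → [0.0, 0.5, 0.5, 1.0, 1.0]
```

The pooled means are exactly the slopes of the greatest convex minorant of the cumulative-sum diagram, the object behind the book's "iterative convex minorant algorithm".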

638 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present two algorithms to realize a finite dimensional, linear time-invariant state-space model from input-output data, which are classified as one of the subspace model identification schemes.
Abstract: In this paper, we present two novel algorithms to realize a finite dimensional, linear time-invariant state-space model from input-output data. The algorithms have a number of common features. They are classified as one of the subspace model identification schemes, in that a major part of the identification problem consists of calculating specially structured subspaces of spaces defined by the input-output data. This structure is then exploited in the calculation of a realization. Another common feature is their algorithmic organization: an RQ factorization followed by a singular value decomposition and the solution of an overdetermined set (or sets) of equations. The schemes assume that the underlying system has an output-error structure and that a measurable input sequence is available. The latter characteristic indicates that both schemes are versions of the MIMO Output-Error State Space model identification (MOESP) approach. The first algorithm is denoted in particular as the (elementary MOESP scheme)...

624 citations


Proceedings ArticleDOI
30 Aug 1992
TL;DR: It is shown that a large fraction of the parameters (the weights of neural networks) are of less importance and do not need to be measured with high accuracy; the reported experiments therefore seem more realistic from a classical point of view.
Abstract: In the field of neural network research a number of the experiments described seem to be in contradiction with classical pattern recognition or statistical estimation theory. The authors attempt to give some experimental understanding of why this could be possible by showing that a large fraction of the parameters (the weights of neural networks) are of less importance and do not need to be measured with high accuracy. The remaining part is capable of implementing the desired classifier and, because this is only a small fraction of the total number of weights, the reported experiments seem more realistic from a classical point of view.

524 citations


Journal ArticleDOI
TL;DR: In this paper, a numerical model is presented for simulating fracture in heterogeneous materials such as concrete and rock, and the typical failure mechanism, crack face bridging, found in concrete and other materials is simulated by use of a lattice model.
Abstract: In this paper a numerical model is presented for simulating fracture in heterogeneous materials such as concrete and rock. The typical failure mechanism, crack face bridging, found in concrete and other materials is simulated by use of a lattice model. The model can be used at a small scale, where the particles in the grain structure are generated and aggregate, matrix and bond properties are assigned to the lattice elements. Simulations at this scale are useful for studying the influence of material composition. In addition the model seems a promising tool for simulating fracture in structures. In this case the microstructure of the material is not mimicked in detail but rather the lattice elements are given tensile strengths which are randomly chosen out of a certain distribution. Realistic crack patterns are found compared with experiments on laboratory-scale specimens. The present results indicate that fracture mechanisms are simulated realistically. This is very important because it simplifies the tuning of the model.

392 citations


Journal ArticleDOI
TL;DR: In this paper, a common basis of reaction-rate theories is discussed for isothermal and non-isothermal transformations, and compatible recipes are obtained for the extraction of kinetic parameters from isothermally and non-isothermally conducted experiments; so-called Kissinger-like methods, originally derived for homogeneous reactions, can also be applied to heterogeneous reactions and constitute (only) a special case of the general analysis proposed.
Abstract: A common basis of reaction-rate theories is discussed for isothermal and non-isothermal transformations. As a result, compatible recipes have been obtained for the extraction of kinetic parameters from isothermally and non-isothermally conducted experiments. It follows that so-called Kissinger-like methods for non-isothermal kinetic analysis, originally derived for homogeneous reactions, can also be applied to heterogeneous reactions, and that these methods constitute (only) a special case of the general analysis proposed. The recipes presented are illustrated by a series of examples taken from recent research on solid-state transformations; among other things, isothermal and non-isothermal analyses of the same transformation are compared and use of the notion of "effective activation energy", which varies during the course of an overall transformation, is discussed.
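A Kissinger-type analysis extracts an activation energy from the shift of the transformation-peak temperature with heating rate, via ln(beta/Tp^2) = const - E/(R*Tp). A sketch on synthetic data (the activation energy and pre-exponential factor below are assumed, not values from the paper):

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
E_true = 150e3   # assumed activation energy, J/mol
A_pre = 1e12     # assumed pre-exponential factor, 1/s

# Pick peak temperatures and compute the heating rates that the Kissinger
# peak condition  E*beta/(R*Tp^2) = A*exp(-E/(R*Tp))  would produce.
Tp = np.array([600.0, 610.0, 620.0, 630.0, 640.0])
beta = (A_pre * R * Tp**2 / E_true) * np.exp(-E_true / (R * Tp))

# Kissinger plot: ln(beta/Tp^2) against 1/Tp is a line of slope -E/R
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
E_est = -slope * R
print(E_est)     # recovers the assumed activation energy, ~150e3 J/mol
```

With real, heterogeneous-reaction data the fitted slope yields an effective activation energy in the sense discussed in the abstract.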

372 citations


Journal ArticleDOI
TL;DR: In this paper, the crack face bridges are flexural ligaments between overlapping crack tips, and the failure of these ligaments occurs in a stable and controlled manner because the two overlapping crack tips shield each other.
Abstract: In this paper experimental evidence of fracture toughening of concrete and mortar through a mechanism called crack face bridging is presented. The classical explanation for softening of concrete, viz. the formation of a zone of discontinuous microcracking ahead of a continuous macrocrack, seems only partially true. Instead, crack face bridging in the wake of the macrocrack tip seems a physically sounder explanation. The crack face bridges are flexural ligaments between overlapping crack tips. The failure of the flexural ligaments occurs in a stable and controlled manner because the two overlapping crack tips shield each other. The cohesive stress over the macrocrack is directly related to the size of the crack face bridges, which depends on the heterogeneity of the material. The typical failure mechanism can be simulated using a simple numerical lattice model. First, the grain structure of the material is generated either by manual methods or by adopting a random generator. Secondly, a triangular lattice of brittle breaking beam elements is projected on the grain structure. Aggregate, matrix and bond properties are assigned to the lattice elements at the respective locations, and a simple algorithm allows for crack growth simulation. The main conclusion is that the crack patterns and the associated load-deformation response are largely governed by the properties of the constituents. The bond between aggregates and matrix is the weakest link in the system, and variation of this parameter leads to profoundly different crack patterns.

365 citations


Journal ArticleDOI
TL;DR: It is shown that nonexponential decay can be parametrized by HSVD (and LPSVD), and that under certain conditions the computation time of SVD can be reduced very significantly.

Journal ArticleDOI
TL;DR: In this paper, a method for reconstructing the complex index of refraction of a bounded two-dimensional inhomogeneous object of known geometric configuration from measured scattered field data is presented, which is an extension of recent results on the direct scattering problem wherein the governing domain integral equation was solved iteratively by a successive over-relaxation technique.

Book
01 Jan 1992
TL;DR: Unified predictive controller design; analysis of design parameters; predictive control with controller output constraints; applications; conclusions and suggestions.
Abstract: Unified predictive controller design; analysis of design parameters; predictive control with controller output constraints; applications; conclusions and suggestions.

Journal ArticleDOI
TL;DR: The elementary MOESP algorithm presented in the first part of this series of papers is analysed and the asymptotic properties of the estimated state-space model when only considering zero-mean white noise perturbations on the output sequence are studied.
Abstract: The elementary MOESP algorithm presented in the first part of this series of papers is analysed in this paper. This is done in three different ways. First, we study the asymptotic properties of the estimated state-space model when only considering zero-mean white noise perturbations on the output sequence. It is shown that, in this case, the MOESP1 implementation yields asymptotically unbiased estimates. An important constraint to this result is that the underlying system must have a finite impulse response and subsequently the size of the Hankel matrices, constructed from the input and output data at the beginning of the computations, depends on the number of non-zero Markov parameters. This analysis, however, leads to a second implementation of the elementary MOESP scheme, namely MOESP2. The latter implementation has the same asymptotic properties without the finite impulse response constraint. Secondly, we compare the MOESP2 algorithm with a classical state space model identification scheme. The latter...

Journal ArticleDOI
01 Jan 1992
TL;DR: In this paper, a multipath nested structure which overcomes the bandwidth reduction in the conventional nested structure by appending an independent feedforward path for high frequencies is presented, and the two regions of operation can be independently and closely matched, unaffected by variations in the parameters of the IC process in which the amplifier is fabricated.
Abstract: A multipath nested structure which overcomes the bandwidth reduction in the conventional nested structure by appending an independent feedforward path for high frequencies is presented. At low frequencies, the opamp behaves as a three-stage nested Miller-compensated amplifier, while at high frequencies the opamp has the nature and bandwidth of a two-stage amplifier with simple pole splitting. In this fashion, the concept circumvents the classical DC-gain/high-frequency-performance dilemma. No pole-zero doublets occur, because the two regions of operation can be independently and closely matched, unaffected by variations in the parameters of the IC process in which the amplifier is fabricated.

Journal ArticleDOI
TL;DR: In this paper, a weak form of the singular Green's function has been used by introducing its spherical mean, and the spatial convolution can be carried out numerically using a trapezoidal integration rule.
Abstract: The problem of electromagnetic scattering by a three-dimensional dielectric object can be formulated in terms of a hypersingular integral equation, in which a grad-div operator acts on a vector potential. The vector potential is a spatial convolution of the free space Green's function and the contrast source over the domain of interest. A weak form of the integral equation for the relevant unknown quantity is obtained by testing it with appropriate testing functions. The vector potential is then expanded in a sequence of the appropriate expansion functions and the grad-div operator is integrated analytically over the scattering object domain only. A weak form of the singular Green's function has been used by introducing its spherical mean. As a result, the spatial convolution can be carried out numerically using a trapezoidal integration rule. This method shows excellent numerical performance.

Journal ArticleDOI
TL;DR: In this article, the authors consider the non-linear inversion of marine seismic refraction waveforms and show that genetic algorithms are inherently superior to random search techniques and can also perform better than iterative matrix inversion which requires a good starting model.
Abstract: Recently a new class of methods for solving non-linear optimization problems has generated considerable interest in the field of Artificial Intelligence. These methods, known as genetic algorithms, are able to solve highly non-linear and non-local optimization problems and belong to the class of global optimization techniques, which includes Monte Carlo and Simulated Annealing methods. Unlike local techniques, such as damped least squares or conjugate gradients, genetic algorithms avoid all use of curvature information on the objective function. This means that they do not require any derivative information and therefore one can use any type of misfit function equally well. Most iterative methods work with a single model and find improvements by perturbing it in some fashion. Genetic algorithms, however, work with a group of models simultaneously and use stochastic processes to guide the search for an optimal solution. Both Simulated Annealing and genetic algorithms are modelled on natural optimization systems. Simulated Annealing uses an analogy with thermodynamics; genetic algorithms have an analogy with biological evolution. This evolution leads to an efficient exchange of information between all models encountered, and allows the algorithm to rapidly assimilate and exploit the information gained to find better data fitting models. To illustrate the power of genetic algorithms compared to Monte Carlo, we consider a simple multidimensional quadratic optimization problem and show that its relative efficiency increases dramatically as the number of unknowns is increased. As an example of their use in a geophysical problem with real data we consider the non-linear inversion of marine seismic refraction waveforms. The results show that genetic algorithms are inherently superior to random search techniques and can also perform better than iterative matrix inversion which requires a good starting model.
This is primarily because genetic algorithms are able to combine both local and global search mechanisms into a single efficient method. Since many forward and inverse problems involve solving an optimization problem, we expect that the genetic approach will find applications in many other geophysical problems; these include seismic ray tracing, earthquake location, non-linear data fitting and, possibly seismic tomography.
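A quadratic test problem like the one mentioned above can be attacked with a few lines of a real-coded genetic algorithm; everything below (population size, tournament selection, mutation width) is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(x):
    """Multidimensional quadratic objective to be minimized."""
    return np.sum(x ** 2, axis=-1)

ndim, npop = 5, 60
pop = rng.uniform(-10.0, 10.0, size=(npop, ndim))

for gen in range(300):
    f = misfit(pop)
    # tournament selection: keep the better of two random individuals
    i, j = rng.integers(npop, size=(2, npop))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    # uniform crossover between each parent and its neighbour in the list
    mask = rng.random((npop, ndim)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation
    children += rng.normal(0.0, 0.1, size=children.shape)
    # elitism: carry the best individual over unchanged
    children[0] = pop[np.argmin(f)]
    pop = children

best = misfit(pop).min()
```

Note that no derivative of `misfit` is ever evaluated, which is exactly the property the abstract highlights over damped least squares or conjugate gradients.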

Journal ArticleDOI
TL;DR: It is argued that the synergistic results of integration can best be understood as a within-role increase of uncertainty reduction, and a between-role convergence of functional uncertainty reduction.
Abstract: Technological innovation within the firm can be modelled as a process of uncertainty reduction. The four major sources of uncertainty are user needs, technological environments, competitive environments, and organizational resources. Reducing these uncertainties is the responsibility of the marketing and R&D functions within the firm. Because these functions are reciprocally interdependent, their success in reducing uncertainty requires integration and collaboration between them. A contingency framework is developed which shows the effect and the determinants of interfunctional information transfer. It is argued that the synergistic results of integration can best be understood as a within-role increase of uncertainty reduction, and a between-role convergence of functional uncertainty reduction. The implications of the model are discussed.

Journal ArticleDOI
TL;DR: A simple correlation is found which provides the Gibbs energy dissipation/C-mol biomass as a function of the nature of the C-source (expressed as the carbon chain length and the degree of reduction), which is much more useful than heat production/C-mol biomass, which is strongly dependent on the electron acceptor used.
Abstract: Correlations for the prediction of biomass yields are valuable, and many proposals based on a number of parameters (Y(ATP), Y(Ave), eta(o), Y(c), Gibbs energy efficiencies, and enthalpy efficiencies) have been published. This article critically examines the properties of the proposed parameters with respect to the general applicability to chemotrophic growth systems, a clear relation to the Second Law of Thermodynamics, the absence of intrinsic problems, and a requirement of only black box information. It appears that none of the proposed parameters satisfies all these requirements. Particularly, the various energetic efficiency parameters suffer from major intrinsic problems. However, this article will show that the Gibbs energy dissipation per amount of produced biomass (kJ/C-mol) is a parameter which satisfies the requirements without having intrinsic problems. A simple correlation is found which provides the Gibbs energy dissipation/C-mol biomass as a function of the nature of the C-source (expressed as the carbon chain length and the degree of reduction). This dissipation appears to be nearly independent of the nature of the electron acceptor (e.g., O(2), NO(3)(-), fermentation). Hence, a single correlation can describe a very wide range of microbial growth systems. In this respect, Gibbs energy dissipation is much more useful than heat production/C-mol biomass, which is strongly dependent on the electron acceptor used. Evidence is presented that even a net heat-uptake can occur in certain growth systems. The correlation of Gibbs energy dissipation thus obtained shows that dissipation/C-mol biomass increases for C-sources with smaller chain length (C(6) --> C(1)), and increases for both higher and lower degrees of reduction than 4.
It appears that the dissipation/C-mol biomass can be regarded as a simple thermodynamic measure of the amount of biochemical "work" required to convert the carbon source into biomass by the proper irreversible carbon-carbon coupling and oxidation/reduction reactions. This is supported by the good correlation between the theoretical ATP requirement for biomass formation on different C-sources and the dissipation values (kJ/C-mol biomass) found. The established correlation for the Gibbs energy dissipation allows the prediction of the chemotrophic biomass yield on substrate with an error of 13% in the yield range 0.01 to 0.80 C-mol biomass/C-mol substrate for aerobic/anaerobic/denitrifying growth systems.

Journal ArticleDOI
TL;DR: In this paper, a combination of statistical and discharge parameters was used to discriminate between different discharge sources; tests showed that several parameters are characteristic for different types of discharges and offer good discrimination between different defects.
Abstract: Making use of a computer-aided discharge analyzer, a combination of statistical and discharge parameters was studied to discriminate between different discharge sources. Tests on samples with different discharge sources revealed that several parameters are characteristic for different types of discharges and offer good discrimination between different defects.
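Statistical operators of this kind are typically moments of the phase-resolved discharge distribution, e.g. its skewness and (excess) kurtosis. A self-contained sketch on a synthetic, symmetric pattern (the data and the choice of operators are illustrative, not taken from the paper):

```python
import numpy as np

def weighted_skewness(x, w):
    """Skewness of a distribution given by positions x and weights w."""
    p = np.asarray(w, float) / np.sum(w)
    mu = np.sum(p * x)
    m2 = np.sum(p * (x - mu) ** 2)
    m3 = np.sum(p * (x - mu) ** 3)
    return m3 / m2 ** 1.5

def weighted_kurtosis(x, w):
    """Excess kurtosis (0 for a Gaussian) of the same distribution."""
    p = np.asarray(w, float) / np.sum(w)
    mu = np.sum(p * x)
    m2 = np.sum(p * (x - mu) ** 2)
    m4 = np.sum(p * (x - mu) ** 4)
    return m4 / m2 ** 2 - 3.0

# Synthetic phase-resolved pattern: mean discharge magnitude per phase
# window, here a symmetric hump around 90 degrees of the AC cycle.
phases = np.linspace(30.0, 150.0, 61)
pattern = np.exp(-0.5 * ((phases - 90.0) / 30.0) ** 2)

sk = weighted_skewness(phases, pattern)
ku = weighted_kurtosis(phases, pattern)
```

An asymmetric defect pattern shifts the skewness away from zero, which is the kind of fingerprint such analyzers exploit to separate discharge sources.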

Journal ArticleDOI
30 Aug 1992
TL;DR: Experiments indicate that the performance of the Kohonen projection method is comparable to or better than that of Sammon's method for the purpose of classifying clustered data.
Abstract: A nonlinear projection method is presented to visualize high-dimensional data as a 2D image. The proposed method is based on the topology preserving mapping algorithm of Kohonen. The topology preserving mapping algorithm is used to train a 2D network structure. Then the interpoint distances in the feature space between the units in the network are graphically displayed to show the underlying structure of the data. Furthermore, we present and discuss a new method to quantify how well a topology preserving mapping algorithm maps the high-dimensional input data onto the network structure. This is used to compare our projection method with a well-known method of Sammon (1969). Experiments indicate that the performance of the Kohonen projection method is comparable to or better than that of Sammon's method for the purpose of classifying clustered data. Its time-complexity depends only on the resolution of the output image, and not on the size of the dataset. A disadvantage, however, is the large amount of CPU time required.
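A minimal version of the pipeline — train a small 2D Kohonen map, then display the feature-space distances between neighbouring units — can be sketched as follows (map size, schedules, and the toy data are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two well-separated clusters
data = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                  rng.normal(3.0, 0.3, (100, 2))])

side = 6                                   # 6x6 map of units
grid = np.array([(i, j) for i in range(side) for j in range(side)], float)
w = rng.uniform(data.min(), data.max(), (side * side, 2))

steps = 2000
for t in range(steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((w - x) ** 2, axis=1))        # best-matching unit
    lr = 0.5 * (1.0 - t / steps)                          # decaying learning rate
    sigma = 2.0 * (1.0 - t / steps) + 0.3                 # shrinking neighbourhood
    nb = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
    w += lr * nb[:, None] * (x - w)                       # pull neighbourhood toward x

# U-matrix-style display value: mean feature-space distance of each
# unit to its direct grid neighbours; large values mark cluster borders.
umat = np.empty(side * side)
for k in range(side * side):
    neigh = np.sum(np.abs(grid - grid[k]), axis=1) == 1
    umat[k] = np.mean(np.linalg.norm(w[neigh] - w[k], axis=1))
```

Rendering `umat` as a 6x6 image gives the interpoint-distance display the abstract describes; its cost depends on the map resolution, not on the dataset size.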

Journal ArticleDOI
TL;DR: A dynamical finite-element model of the shoulder mechanism, consisting of thorax, clavicula, scapula and humerus, is presented together with a set of parameters for each cadaver describing very precisely the geometry of the shoulder mechanism; this allows positioning of muscle force vectors a posteriori and recalculation of position coordinates and moment arms for any position.

Journal ArticleDOI
TL;DR: In this paper, a frequency-response identification technique and a robust control design method are used to set up such an iterative scheme, where each identification step uses the previously designed controller to obtain new data from the plant and the associated identification problem has been solved by means of a coprime factorization of the unknown plant.
Abstract: If approximate identification and model-based control design are used to accomplish a high-performance control system, then the two procedures must be treated as a joint problem. Solving this joint problem by means of separate identification and control design procedures practically entails an iterative scheme. A frequency-response identification technique and a robust control design method are used to set up such an iterative scheme. Each identification step uses the previously designed controller to obtain new data from the plant. The associated identification problem has been solved by means of a coprime factorization of the unknown plant. The technique's utility is illustrated by an example.

Journal ArticleDOI
TL;DR: In this article, five technologies for in-situ product recovery have been compared on the basis of design parameters and energy efficiency: stripping, adsorption, liquid-liquid extraction, pervaporation and membrane solvent extraction.

Journal ArticleDOI
TL;DR: In this article, the self-imaging property of a homogeneous multimoded planar optical waveguide has been applied in the design of passive planar monomode optical couplers based on multimode interference (MMI).
Abstract: The self-imaging property of a homogeneous multimoded planar optical waveguide has been applied in the design of passive planar monomode optical couplers based on multimode interference (MMI). Based on these designs, 3 dB and cross couplers were fabricated in SiO2/Al2O3/SiO2 channel waveguides on Si substrates. Theoretical predictions and experimental results at 1.52 µm wavelength are presented which demonstrate that MMI couplers offer high performance: on-chip excess loss better than 0.5 dB, high reproducibility, low polarization dependence and small device size.

Journal ArticleDOI
TL;DR: The genus Trichosporon was found to be fairly homogeneous on the basis of a phylogenetic analysis of partial 26S rRNA sequences, with the exception of T. pullulans, which proved to be unrelated.
Abstract: The genus Trichosporon was revised using characters of morphology, ultrastructure, physiology, ubiquinone systems, mol% G + C of DNA, DNA/DNA reassociations and 26S ribosomal RNA partial sequences. A total of 101 strains was used, including all available type and authentic cultures of previously described taxa. Nineteen taxa could be distinguished, 15 of which have Q-9 coenzyme systems and 4 of which have Q-10. Sixteen previously described names were reduced to synonymy. One new species was described. The genus is characterized by the presence of arthroconidia. Few species possess further diagnostic morphological characters, such as the presence of appressoria, macroconidia or meristematic conidiation. The septa of two species were found to be non-perforate, while those of the remaining species contained dolipores at variable degrees of differentiation, with or without vesicular or tubular parenthesomes. All species were able to assimilate a large number of carbon compounds; visible CO2 production was absent. The genus was found to be fairly homogeneous on the basis of a phylogenetic analysis of partial 26S rRNA sequences, with the exception of T. pullulans which proved to be unrelated. Most taxa were found to occupy well-defined ecological niches. Within the group of taxa isolated from humans, a distinction could be made between those involved in systemic mycoses and those which mainly caused pubic or non-pubic white piedras, respectively. One species was consistently associated with animals, while others came mainly from soil or water. One species was mesophilic and another psychrophilic.

Journal ArticleDOI
TL;DR: The results suggest that ferric iron may be an important electron acceptor for the oxidation of sulfur compounds in acidic environments.
Abstract: The obligately autotrophic acidophile Thiobacillus ferrooxidans was grown on elemental sulfur in anaerobic batch cultures, using ferric iron as an electron acceptor. During anaerobic growth, ferric iron present in the growth media was quantitatively reduced to ferrous iron. The doubling time in anaerobic cultures was approximately 24 h. Anaerobic growth did not occur in the absence of elemental sulfur or ferric iron. During growth, a linear relationship existed between the concentration of ferrous iron accumulated in the cultures and the cell density. The results suggest that ferric iron may be an important electron acceptor for the oxidation of sulfur compounds in acidic environments.
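Assuming complete oxidation of elemental sulfur to sulfate, the ferric-iron respiration described here can be balanced as the textbook redox couple below (a standard balance, not an equation quoted from the paper):

```latex
\mathrm{S^{0}} \;+\; 6\,\mathrm{Fe^{3+}} \;+\; 4\,\mathrm{H_{2}O}
\;\longrightarrow\;
\mathrm{SO_{4}^{2-}} \;+\; 6\,\mathrm{Fe^{2+}} \;+\; 8\,\mathrm{H^{+}}
```

Charge balances at +18 on both sides, sulfur donates six electrons per atom, and the eight protons released per sulfur are consistent with the acidic environments mentioned in the abstract.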

Journal ArticleDOI
TL;DR: In this paper, the authors examined the influence of modification of the chemical structure of the carbon by oxidation or high-temperature treatment and found that the maximum adsorption capacity cannot be attributed to a single property of the activated carbon.
Abstract: Heteropolyacids (HPAs), such as dodecatungstophosphoric and -silicic acid, adsorb strongly onto activated carbon. The adsorption is believed to involve proton transfer from the HPA to the carbon. Adsorption isotherms of heteropolyacids from aqueous and organic solutions are given and the influence of modification of the chemical structure of the carbon by oxidation or high-temperature treatment is examined. Some types of carbon give rise to a shell-type adsorption as shown by SEM-EDX photographs. The maximum adsorption capacity cannot be attributed to a single property of the activated carbon. Probably a complex of factors determines the adsorption capacity.
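Adsorption isotherms of this kind are often summarized with a Langmuir fit; the sketch below generates ideal Langmuir data and recovers the parameters from the usual linearization (the parameter values are made up for illustration, not taken from the paper):

```python
import numpy as np

# Langmuir isotherm: q = q_max * K * c / (1 + K * c)
q_max_true, K_true = 0.8, 2.5      # illustrative capacity and affinity

c = np.linspace(0.1, 5.0, 20)                       # equilibrium concentrations
q = q_max_true * K_true * c / (1.0 + K_true * c)    # adsorbed amounts

# Linearized form: c/q = c/q_max + 1/(q_max*K), so a straight-line fit
# of c/q against c recovers both parameters.
slope, intercept = np.polyfit(c, c / q, 1)
q_max_est = 1.0 / slope
K_est = slope / intercept
```

For real shell-type adsorption data a single Langmuir curve may fit poorly, which is one way the "complex of factors" mentioned in the abstract would show up.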

Journal ArticleDOI
TL;DR: A number of high-resolution direction finding methods for determining the two-dimensional DOA (directions of arrival) of a number of plane waves impinging on a sensor array are discussed.
Abstract: A number of high-resolution direction finding methods for determining the two-dimensional DOA (directions of arrival) of a number of plane waves impinging on a sensor array are discussed. The array consists of triplets of sensors that are identical, as an extension of the one-dimensional ESPRIT scenario to two dimensions. Algorithms that yield the correct parameter pairs while avoiding an extensive search over the two separate one-dimensional parameter sets are devised.
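The one-dimensional scenario that the paper extends can be sketched in a few lines: ESPRIT on a uniform line array exploits the rotational invariance between two identical, shifted subarrays (the array size, angles, and snapshot count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

M = 8                                        # sensors, half-wavelength spacing
true_deg = np.array([-20.0, 10.0])           # directions of arrival
phase = np.pi * np.sin(np.deg2rad(true_deg)) # inter-sensor phase shifts
A = np.exp(1j * np.outer(np.arange(M), phase))

# Noise-free snapshots from two uncorrelated random sources
S = rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))
X = A @ S

# Signal subspace from the sample covariance
Rxx = X @ X.conj().T / X.shape[1]
_, eigvec = np.linalg.eigh(Rxx)
Us = eigvec[:, -2:]                          # two dominant eigenvectors

# Rotational invariance between subarrays (sensors 0..M-2 and 1..M-1)
Psi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
est_deg = np.sort(np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Psi)) / np.pi)))
print(est_deg)                               # close to [-20, 10]
```

The eigenvalues of the small matrix Psi carry the angle information directly, so no spectral search is needed; the paper's contribution is pairing two such parameter sets correctly in the two-dimensional, sensor-triplet case.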

Journal ArticleDOI
TL;DR: This paper presents an approach in which aspect models are used to store view-specific information in the form of a building reference model, which consists of a general kernel and view-dependent aspect models.