
Showing papers by "Eindhoven University of Technology" published in 1990


Journal ArticleDOI
TL;DR: A force field based on both microscopic and macroscopic data is derived, which combines accuracy with transferability to other polymorphs and the possibility of parametrizing other elements is demonstrated.
Abstract: We address the problem of finding interatomic force fields for silicas from ab initio calculations on small clusters. It is shown that the force field cannot be determined from cluster data alone; incorporation of bulk-system information into the force field remains essential. Bearing this in mind, we derive a force field based on both microscopic (ab initio) and macroscopic (experimental) data. This force field combines accuracy with transferability to other polymorphs. The possibility of parametrizing other elements is also demonstrated.
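The abstract does not spell out the functional form of the force field. As an illustrative sketch only, the following shows a Buckingham-plus-Coulomb pair potential of the kind commonly used for silicas; the function name, parameters, and values are placeholders, not the paper's fit.

```python
import math

def pair_energy(r, qi, qj, A, b, c):
    """Hypothetical silica-style pair potential: a Coulomb term plus a
    short-range Buckingham term,
        V(r) = qi*qj / r + A*exp(-b*r) - c / r**6.
    Parameter names and values are placeholders, not the paper's fit."""
    return qi * qj / r + A * math.exp(-b * r) - c / r**6

# With the short-range terms switched off, only the Coulomb part remains.
coulomb_only = pair_energy(2.0, 1.0, -1.0, A=0.0, b=1.0, c=0.0)
```

Fitting the parameters to both ab initio cluster data and bulk experimental data is exactly the part the paper addresses; the form above only illustrates what is being parametrized.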

1,385 citations


Journal ArticleDOI
TL;DR: It is proved that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP, and a complexity classification is obtained for all special cases with a fixed number of processing times.
Abstract: We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time p_ij. The objective is to find a schedule that minimizes the makespan. Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints. In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP. We finally obtain a complexity classification for all special cases with a fixed number of processing times.
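To make the problem concrete, here is a minimal Python sketch: a brute-force optimum for tiny instances and a simple list-scheduling heuristic for comparison. The heuristic is not the paper's LP-rounding 2-approximation; the instance and names are invented for illustration.

```python
from itertools import product

def makespan(p, assignment):
    """Makespan of an assignment: the largest total load on any machine.
    p[i][j] is the processing time of job j on machine i; assignment[j]
    is the machine chosen for job j."""
    loads = [0] * len(p)
    for j, i in enumerate(assignment):
        loads[i] += p[i][j]
    return max(loads)

def brute_force_optimum(p):
    """Exact minimum makespan by enumerating all m**n assignments
    (feasible only for tiny instances)."""
    m, n = len(p), len(p[0])
    return min(makespan(p, a) for a in product(range(m), repeat=n))

def list_schedule(p):
    """Simple heuristic: give each job to the machine on which it would
    finish earliest. NOT the paper's LP-rounding algorithm; it just
    illustrates the problem being solved."""
    m, n = len(p), len(p[0])
    loads, assignment = [0] * m, []
    for j in range(n):
        i = min(range(m), key=lambda k: loads[k] + p[k][j])
        loads[i] += p[i][j]
        assignment.append(i)
    return assignment

# Hypothetical instance: 2 machines, 4 jobs.
p = [[2, 3, 4, 1],
     [3, 1, 2, 2]]
opt = brute_force_optimum(p)
heur = makespan(p, list_schedule(p))
```

On this instance the heuristic happens to match the optimum; in general only the paper's LP-rounding method carries the factor-2 guarantee.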

953 citations


Journal ArticleDOI
TL;DR: In this paper, simulated thermal desorption spectra have been used to test the performance of ten commonly applied analysis procedures for obtaining the correct activation energy, preexponential factor, and order of desorption as a function of adsorbate coverage.

353 citations


Journal ArticleDOI

349 citations


Journal ArticleDOI
TL;DR: In this article, the authors follow and continue the line of Biberman, Fujimoto, and Seaton by trying to find a classification in these results and an (analytical) relation between the atomic state distribution function and the underlying plasma properties.

308 citations


Journal ArticleDOI
TL;DR: In this article, a new criterion is introduced for the onset of shear thickening in a concentrated dispersion, which follows from the assumption that shear forces overrule the interparticle forces.
Abstract: A new criterion is introduced for the onset of shear thickening in a concentrated dispersion. The criterion follows from the assumption that shear thickening occurs when the shear forces overrule the interparticle forces. A force balance at small interparticle distances is used to predict the dependence of this critical shear rate on the volume fraction. It is deduced that the critical shear rate is proportional to the interparticle distance, has a linear dependence on the magnitude of the stabilizing force, an inverse linear dependence on the dispersion medium viscosity, and an inverse linear dependence on the particle radius. The model is confirmed experimentally with viscosity measurements on various electrostatically stabilized dispersions. The validity of the model is also checked with data obtained from the literature. Experiments indicate that polydisperse dispersions exhibit pronounced shear thickening but with a less dramatic increase in viscosity than monodisperse dispersions.
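The stated proportionalities can be collected into a single scaling relation. A minimal sketch, with a hypothetical prefactor C absorbing all constants and units (C is not given in the abstract):

```python
def critical_shear_rate(F, h, eta, a, C=1.0):
    """Scaling relation implied by the abstract: the critical shear rate
    for the onset of shear thickening grows linearly with the stabilizing
    force F and the interparticle distance h, and falls inversely with
    the medium viscosity eta and the particle radius a. C is a
    hypothetical prefactor absorbing all constants."""
    return C * F * h / (eta * a)

base = critical_shear_rate(1.0, 1.0, 1.0, 1.0)
```

Halving the particle radius, for instance, doubles the predicted onset shear rate under this scaling.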

292 citations


Journal ArticleDOI
TL;DR: The use and usefulness of stated preference and choice models in recreation/leisure research are discussed in this article, with particular attention to comparisons with models based on choices observed in real markets.
Abstract: This article discusses the use and usefulness of stated preference and choice models in recreation/leisure research. Stated preference and choice models require one to design decision experiments to study recreational and leisure decisions made in hypothetical or simulated markets. Historically, such experiments were uncommon in recreation and leisure research; therefore we pay particular attention to comparisons of the stated preference modeling approaches with modeling approaches based on observations of choices made in real markets, such as the analysis of discrete choices using conditional and nested multinomial logit models. The conceptual and theoretical bases of stated preference and choice models are discussed; and procedures for developing such models, including different design strategies, are outlined. Potential uses of these models in recreation research are illustrated with reference to several recent empirical applications.
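The conditional multinomial logit model mentioned above assigns choice probabilities proportional to the exponentiated utilities of the alternatives. A minimal sketch (the utility values are illustrative):

```python
import math

def choice_probabilities(utilities):
    """Conditional (multinomial) logit choice probabilities:
    P_i = exp(V_i) / sum_j exp(V_j). The maximum utility is subtracted
    before exponentiating, for numerical stability."""
    m = max(utilities)
    w = [math.exp(v - m) for v in utilities]
    s = sum(w)
    return [x / s for x in w]

# Three hypothetical recreation alternatives with illustrative utilities.
probs = choice_probabilities([1.0, 2.0, 0.5])
```

In a stated-preference design the utilities are estimated from choices in hypothetical markets rather than observed ones; the probability formula itself is the same.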

243 citations


Book ChapterDOI
01 Oct 1990
TL;DR: In this article, an abstract stochastic algorithm for combinatorial optimization problems is proposed, which generalizes and unifies genetic algorithms and simulated annealing, such that any GA or SA algorithm at hand is an instance of the abstract algorithm.
Abstract: In this paper we take a step towards a concise theory of genetic algorithms (GAs) and simulated annealing (SA). First, we set up an abstract stochastic algorithm for treating combinatorial optimization problems. This algorithm generalizes and unifies genetic algorithms and simulated annealing, such that any GA or SA algorithm at hand is an instance of our abstract algorithm. Second, we define the evolution belonging to the abstract algorithm as a Markov chain and find conditions implying that the evolution finds an optimum with probability 1. The results obtained can be applied when designing the components of a genetic algorithm.
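As one concrete instance of such an abstract stochastic algorithm, here is a minimal simulated-annealing sketch on a toy problem; all parameter choices (cooling schedule, step count, seed) are assumptions, not taken from the paper.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T0=10.0, alpha=0.95, steps=2000, seed=0):
    """Minimise f by simulated annealing: propose a random neighbour,
    always accept improvements, accept deteriorations with the Metropolis
    probability exp(-(fy - fx) / T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha  # geometric cooling schedule
    return best, fbest

# Toy problem: minimise (x - 3)^2 over the integers 0..10, starting at 9.
f = lambda x: (x - 3) ** 2
step = lambda x, rng: min(10, max(0, x + rng.choice((-1, 1))))
best, val = simulated_annealing(f, 9, step)
```

A genetic algorithm would be another instance of the same abstract scheme: both evolve a Markov chain over candidate solutions, differing only in how the next state is generated and accepted.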

213 citations


Book ChapterDOI
01 Oct 1990
TL;DR: In this paper, the authors review previous attempts to generate near-optimal solutions of the Traveling Salesman Problem by applying Genetic Algorithms and discuss some possibilities for speeding up classical Local Search algorithms by casting them into a genetic frame.
Abstract: We briefly review previous attempts to generate near-optimal solutions of the Traveling Salesman Problem by applying Genetic Algorithms. Following the lines of Johnson [1990] we discuss some possibilities for speeding up classical Local Search algorithms by casting them into a genetic frame. In an experimental study two such approaches, viz. Genetic Local Search with 2-Opt neighbourhoods and Lin-Kernighan neighbourhoods, respectively, are compared with the corresponding classical multi-start Local Search algorithms, as well as with Simulated Annealing and Threshold Accepting, using 2-Opt neighbourhoods. As is to be expected, a genetic organization of Local Search algorithms can considerably improve performance, though the genetic components alone can hardly counterbalance a poor choice of the neighbourhoods.
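The 2-Opt Local Search building block referred to above can be sketched as follows; the genetic organization around it is omitted, and the example instance is invented.

```python
import math

def tour_length(cities, tour):
    """Total length of a closed tour through the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(cities, tour):
    """Classical 2-Opt Local Search: repeatedly reverse a segment of the
    tour as long as doing so shortens it; stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cities, new) + 1e-12 < tour_length(cities, tour):
                    tour, improved = new, True
    return tour

# Four cities on a unit square; the starting tour crosses itself.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = two_opt(cities, [0, 2, 1, 3])
```

In Genetic Local Search, routines like this serve as the improvement operator applied to each offspring tour; the genetic frame supplies the population, recombination, and selection around it.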

199 citations


Proceedings ArticleDOI
12 Mar 1990
TL;DR: Results show that circuits can be sped up by a factor of 2 at a cost of only 10 to 30% extra power; the method has proven feasible for circuits of up to several thousand cells.
Abstract: In this paper a solution is presented to tune the delay of a circuit composed of cells to a prescribed value, while minimizing power consumption. The tuning is performed by adapting the load drive capabilities of the cells. This optimization problem is mapped onto a linear program, which is then solved by the Simplex algorithm. This approach guarantees to find the global optimum, and has proven feasible for circuits of up to several thousand cells. The method can be used with any convex delay model. Results show that circuits can be sped up by a factor of 2 at a cost of only 10 to 30% of extra power.

170 citations


Journal ArticleDOI
TL;DR: In this article, the 1/f noise in normally-on MODFETs biased at low drain voltages is investigated, and the experimentally observed relative noise in the drain current S/sub I/I/sup 2/ versus the effective gate voltage V/sub G/=V/sub GS/-V/Sub off/ shows three regions which are explained.
Abstract: The 1/f noise in normally-on MODFETs biased at low drain voltages is investigated. The experimentally observed relative noise in the drain current S/sub I//I/sup 2/ versus the effective gate voltage V/sub G/=V/sub GS/-V/sub off/ shows three regions which are explained. The observed dependencies are S/sub I//I/sup 2/ varies as V/sub G//sup m/ with the exponents m=-1, -3, 0 with increasing values of V/sub G/. The model explains m=-1 as the region where the resistance and the 1/f noise stem from the 2-D electron gas under the gate electrode; the region with m=0 at large V/sub G/ or V/sub GS/ equivalent to 0 is due to the dominant contribution of the series resistance. In the region at intermediate V/sub G/, m=-3, the 1/f noise stems from the channel under the gate electrode, and the drain-source resistance is already dominated by the series resistance. >

Journal ArticleDOI
TL;DR: In this article, a classification scheme is proposed for a class of models that arise in the area of vehicle routing and scheduling and illustrated on a number of problems that have been considered in the literature.

Journal ArticleDOI
01 Dec 1990
TL;DR: In this article, the authors define a model that allows for communication delays between precedence-related tasks, and propose a classification of various submodels to address certain types of scheduling problems that arise when a parallel computation is to be executed on a multiprocessor.
Abstract: This paper addresses certain types of scheduling problems that arise when a parallel computation is to be executed on a multiprocessor. We define a model that allows for communication delays between precedence-related tasks, and propose a classification of various submodels. We also review complexity results and optimization and approximation algorithms that have been presented in the literature.

Book
01 Jan 1990
TL;DR: This book discusses the Predicate Calculus, the Guarded Command Language, and general programming techniques, as well as techniques for designing efficient programs.
Abstract: * Predicate Calculus * The Guarded Command Language * Quantifications * General Programming Techniques * Designing Efficient Programs * Searching * Segment Problems * Slope Search * Mixed Problems * Array Manipulations * Sorting * Auxiliary Arrays
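As a taste of one of the listed topics, here is a minimal sketch of slope search (saddleback search) on a function that is ascending in both arguments; the example grid is illustrative.

```python
def slope_search(f, X, Y, target):
    """Slope (saddleback) search: given f ascending in both arguments on
    the grid [0, X) x [0, Y), find (x, y) with f(x, y) == target in at
    most X + Y evaluations, or return None. Start at the top-left corner
    and discard one row or one column per step."""
    x, y = 0, Y - 1
    while x < X and y >= 0:
        v = f(x, y)
        if v == target:
            return x, y
        if v < target:
            x += 1      # all remaining cells of column x are <= v < target
        else:
            y -= 1      # all remaining cells of row y are >= v > target
    return None

hit = slope_search(lambda x, y: x + 2 * y, 5, 5, 7)
```

The invariant-driven derivation of exactly this kind of program is what the book teaches via the guarded command language.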

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of thermal comfort in buildings, examine what variations in indoor temperature may be acceptable given man's thermoregulatory system, and discuss experimental findings on periodic and on ramp (or drift) variations in room temperature.

Journal ArticleDOI
TL;DR: The spatial distribution of platinum (Pt) in the kidney was studied by an autoradiographic technique, in which cisplatin was replaced by 195mPt-labeled CDDP, and by proton-induced X-ray emission (PIXE); the highest Pt concentrations were always found on the periphery of tumors, indicating that the periphery was exposed to a higher drug concentration than the center.
Abstract: The spatial distribution of platinum (Pt) in the kidney was studied by an autoradiographic technique, in which cisplatin (CDDP) was replaced by 195mPt-labeled CDDP, and by proton-induced X-ray emission (PIXE). Although both studies demonstrated comparable spatial distribution patterns, PIXE had the advantage that Pt concentrations could be determined quantitatively, in contrast to the relative information obtained by the autoradiographic technique. Using PIXE, the distribution of Pt in i.p. tumors was studied after i.p. administration of CDDP. The highest Pt concentrations were always found on the periphery of tumors, indicating that the periphery was exposed to a higher drug concentration than the center. Dose was correlated to the concentration of CDDP at both the center and the periphery (r = 0.99). The Pt concentration in the periphery was usually higher by a factor of 2-3 after i.p. administration than after i.v. treatment, whereas in the center of the tumor no concentration difference could be detected. The penetration depth of CDDP lay between 1 and 2 mm and was calculated from the differences in Pt concentration after i.p. and i.v. treatment. This indicates that the effective advantage of i.p. chemotherapy with CDDP in cases of cancers limited to the peritoneal cavity is accentuated at the periphery of the tumor.

Journal ArticleDOI
TL;DR: By means of finite element analysis it was concluded that a more uniform stress distribution was obtained around the implant with a low E-modulus of the stress-absorbing element, and that the peak stresses in the bone surrounding the natural tooth decreased.
Abstract: By means of finite element analysis, calculations were made of the stress distribution in bone around implants with and without stress-absorbing elements. Two cases were considered: a freestanding implant and an implant connected with a natural tooth. For the freestanding implant, it was concluded that the variation in the E-modulus of the stress-absorbing element had no effect on the stresses in bone, and that changing the shape of the stress-absorbing element had little effect on the stresses in cortical bone. For the implant connected with a natural tooth, it was concluded that a more uniform stress was obtained around the implant with a low E-modulus of the stress-absorbing element. It was also concluded that the bone surrounding the natural tooth showed a decrease in the height of the peak stresses.

Journal ArticleDOI
TL;DR: In this article, a partial ordering in the class of observables (∼ positive operator-valued measures, introduced by Davies and by Ludwig) is explored, and the ordering is interpreted as a form of nonideality, and it allows one to compare ideal and nonideal versions of the same observable.
Abstract: A partial ordering in the class of observables (∼ positive operator-valued measures, introduced by Davies and by Ludwig) is explored. The ordering is interpreted as a form of nonideality, and it allows one to compare ideal and nonideal versions of the same observable. Optimality is defined as maximality in the sense of the ordering. The framework gives a generalization of the usual (implicit) definition of self-adjoint operators as optimal observables (von Neumann), but it can, in contrast to this latter definition, be justified operationally. The nonideality notion is compared to other quantum estimation theoretic methods. Measures for the amount of nonideality are derived from information theory.

Journal ArticleDOI
TL;DR: In this paper, the elasticity modulus of the bed structure arising from interparticle forces has been investigated for homogeneous fluidized beds and used to correlate bed expansion and bubble size during heterogeneous fluidization.

01 Jan 1990
TL;DR: In this paper, a generalization of the compensation procedure was proposed for the asymmetric shortest queue problem, where the equilibrium distribution of the lengths of the two queues can be exactly represented by an infinite sum of product form solutions by using an elementary compensation procedure.
Abstract: In this paper we study a system consisting of two parallel servers with different service rates. Jobs arrive according to a Poisson stream and generate an exponentially distributed workload. On arrival a job joins the shortest queue and in case both queues have equal lengths, he joins the first queue with probability q and the second one with probability 1- q, where q is an arbitrary number between 0 and 1. In a previous paper we showed for the symmetric problem, that is for equal service rates and q = ½, that the equilibrium distribution of the lengths of the two queues can be exactly represented by an infinite sum of product form solutions by using an elementary compensation procedure. The main purpose of the present paper is to prove a similar product form result for the asymmetric problem by using a generalization of the compensation procedure. Furthermore, it is shown that the product form representation leads to a numerically efficient algorithm. Essentially, the method exploits the convergence properties of the series of product forms. Because of the fast convergence an efficient method is obtained with upper and lower bounds for the exact solution. For states further away from the origin the convergence is faster. This aspect is also exploited in the paper. Keywords: bounds, difference equation, product form, queues in parallel, stationary queue length distribution, shortest queue problem.
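The queueing model itself (not the compensation procedure) can be sketched as a small Monte-Carlo simulation; the rates, horizon, and seed below are illustrative assumptions.

```python
import random

def simulate_jsq(lam, mu1, mu2, q, horizon, seed=1):
    """Monte-Carlo sketch of the model in the abstract: Poisson arrivals
    at rate lam, two exponential servers with rates mu1 and mu2; an
    arrival joins the shorter queue and, on a tie, joins queue 1 with
    probability q. Returns the time-average length of each queue."""
    rng = random.Random(seed)
    t, n1, n2 = 0.0, 0, 0
    area1 = area2 = 0.0
    while t < horizon:
        r1 = mu1 if n1 > 0 else 0.0       # server 1 works only when busy
        r2 = mu2 if n2 > 0 else 0.0
        total = lam + r1 + r2
        dt = rng.expovariate(total)       # time to the next event
        area1 += n1 * dt
        area2 += n2 * dt
        t += dt
        u = rng.random() * total
        if u < lam:                       # arrival: join the shortest queue
            if n1 < n2 or (n1 == n2 and rng.random() < q):
                n1 += 1
            else:
                n2 += 1
        elif u < lam + r1:                # service completion at server 1
            n1 -= 1
        else:                             # service completion at server 2
            n2 -= 1
    return area1 / t, area2 / t

# Symmetric case (equal rates, q = 1/2); service capacity 2 > arrival rate 1.
L1, L2 = simulate_jsq(1.0, 1.0, 1.0, 0.5, horizon=500.0)
```

The paper's contribution is to compute this equilibrium distribution exactly, as a fast-converging sum of product forms with upper and lower bounds, rather than by simulation.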

Journal ArticleDOI
TL;DR: Estimates are presented of the bit error rates feasible with a standard radio receiver as a function of the separation distance, together with results of experimental eavesdropping.

Journal ArticleDOI
TL;DR: In this paper, a toughening concept for structural composites based on the hybridization of carbon fibres with high performance polyethylene (HP-PE) fibres is presented.

Journal ArticleDOI
TL;DR: In this article, the authors studied the properties of polypyrrole films as a function of the anion incorporated in the polymer during electrochemical polymerization, and found that the reduction and re-oxidation of thin supported polypyrrole films is a slow process for TOS- and BS-containing films.

Journal ArticleDOI
TL;DR: In this article, the amount of hydrogen peroxide in the reaction product is detected using the rotating ring-disc technique; the results can be explained in terms of the availability of the Pt surface for dioxygen.

Book ChapterDOI
11 Aug 1990
TL;DR: A polynomial time protocol is presented in which a participant can convince (with an exponentially small error probability) any other participant that his signature is valid, without interaction with the original signer.
Abstract: All known digital signature schemes can be forged by anyone having enough computing power. For a finite set of participants, we can overcome this weakness. We present a polynomial time protocol in which a participant can convince (with an exponentially small error probability) any other participant that his signature is valid. Moreover, such a convinced participant can convince any other participant of the signature's validity, without interaction with the original signer. An extension allows, in most cases, a participant who receives a signature from any source to convince each other participant of its validity. If a participant cannot use the signature to convince others, he knows so when he receives it.

Journal ArticleDOI
TL;DR: A finite element approximation of steady flow in a rigid three-dimensional model of the carotid artery bifurcation is presented, and the numerical results of axial and secondary flow compare favorably with those obtained from previously performed laser-Doppler velocity measurements.

Journal ArticleDOI
TL;DR: The Hermite polynomials give rise to orthonormal bases in Bargmann-like Hilbert spaces X_A, 0 < A < 1, consisting of entire functions, as mentioned in this paper.

Journal ArticleDOI
TL;DR: In this article, the problem of balancing inventory levels and service degrees in multi-echelon production/distribution chains has been studied, where material is delivered by outside suppliers, proceeds through a number of manufacturing stages, and is distributed finally among local warehouses.
Abstract: In this paper, we study inventory control problems arising in multi-echelon production/distribution chains. In these chains, material is delivered by outside suppliers, proceeds through a number of manufacturing stages, and is distributed finally among a number of local warehouses in order to meet market demand. Each stage requires a fixed leadtime; furthermore, we assume a stochastic, stationary end-item demand process. The problem to balance inventory levels and service degrees can be modelled and analyzed by defining appropriate cost functions. Under an average cost criterion, we study the three most important structures arising in multi-echelon systems: assembly systems, serial systems and distribution systems. For all three systems, it is possible to prove exact decomposition results which reduce complex multi-dimensional control problems to simple one-dimensional problems. In addition, we establish the optimality of base-stock control policies.
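The base-stock (order-up-to) policy whose optimality is established can be sketched for a single stage; S, the lead time, and the demand stream below are illustrative assumptions, and the multi-echelon decomposition is not shown.

```python
def simulate_base_stock(S, demands, lead_time=2):
    """Single-stage sketch of a base-stock (order-up-to-S) policy: each
    period, receive the order placed lead_time periods ago, serve demand
    (backlogging any shortfall), then order exactly enough to raise the
    inventory position (on hand + on order - backlog) back to S.
    Returns (on_hand, backlog, order) per period."""
    on_hand, backlog = S, 0
    pipeline = [0] * lead_time          # orders in transit, oldest first
    history = []
    for d in demands:
        on_hand += pipeline.pop(0)      # delivery of an old order arrives
        served = min(on_hand, d + backlog)
        backlog += d - served
        on_hand -= served
        position = on_hand + sum(pipeline) - backlog
        order = S - position            # the order-up-to rule
        pipeline.append(order)
        history.append((on_hand, backlog, order))
    return history

# Illustrative demand stream.
hist = simulate_base_stock(5, [3, 4, 1, 0, 2])
```

Note that each period's order equals the previous period's demand: the base-stock rule simply replaces what was consumed, which is what makes the multi-dimensional control problem decompose into one-dimensional ones.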

Journal ArticleDOI
TL;DR: The selective oxidation of lactose by molecular oxygen has been studied in a batch reactor containing an aqueous slurry of 0.5 kmol m-1 reactant and 1.0 kg m-3 catalyst without significant loss in initial activity or selectivity.

Journal ArticleDOI
01 Aug 1990 - Polymer
TL;DR: In this paper, the experimental timescale of memory effects related to restoring equilibrium viscoelastic properties in solution-cast linear and long-chain branched polyethylenes has been investigated.