
Showing papers by "Technion – Israel Institute of Technology" published in 1988


Journal ArticleDOI
TL;DR: In this paper, the Mellor-Yamada hierarchy of turbulent closure models is reexamined to show that the elimination of a slight inconsistency in their analysis leads to a quasi-equilibrium model that is somewhat simpler than their level 2½ model.
Abstract: The Mellor-Yamada hierarchy of turbulent closure models is reexamined to show that the elimination of a slight inconsistency in their analysis leads to a quasi-equilibrium model that is somewhat simpler than their level 2½ model. Also, the need to impose realizability conditions restricting the dependence of exchange coefficients on shearing rates is eliminated. The model is therefore more robust, while the principal advantage of the level 2½ model, namely the solution of a prognostic equation for turbulent kinetic energy, is retained. Its performance is shown to be not much different from that of level 2½.

933 citations


Journal ArticleDOI
TL;DR: The concept and measurement of commitment to goals, a key aspect of goal-setting theory, are discussed in this paper. The strength of the relationship between commitment and performance is asserted to depend on the amount of variance in commitment.
Abstract: The concept and measurement of commitment to goals, a key aspect of goal-setting theory, are discussed. The strength of the relationship between commitment and performance is asserted to depend on the amount of variance in commitment. Three major categories of determinants of commitment are discussed: external factors (authority, peer influence, external rewards), interactive factors (participation and competition), and internal factors (expectancy, internal rewards). Applications of these ideas are made and new research directions are suggested.

810 citations



Journal ArticleDOI
TL;DR: The controlled environment vitrification system (CEVS) is described and the value of the technique is illustrated with TEM micrographs of surfactant microstructures in which specimen preparation artifacts were avoided.
Abstract: The controlled environment vitrification system (CEVS) permits cryofixation of hydrated biological and colloidal dispersions and aggregates from a temperature- and saturation-controlled environment. Otherwise, specimens prepared in an uncontrolled laboratory atmosphere are subject to evaporation and heat transfer, which may introduce artifacts caused by concentration, pH, ionic strength, and temperature changes. Moreover, it is difficult to fix and examine the microstructure of systems at temperatures other than ambient (e.g., biological systems at in vivo conditions and colloidal systems above room temperature). A system has been developed that ensures that a liquid or partially liquid specimen is maintained in its original state while it is being prepared before vitrification and, once prepared, is vitrified with little alteration of its microstructure. A controlled environment is provided within a chamber where temperature and chemical activity of volatile components can be controlled while the specimen is being prepared. The specimen grid is mounted on a plunger, and a synchronous shutter is opened almost simultaneously with the release of the plunger, so that the specimen is propelled abruptly through the shutter opening into a cryogenic bath. We describe the system and its use and illustrate the value of the technique with TEM micrographs of surfactant microstructures in which specimen preparation artifacts were avoided. We also discuss applications to other instruments like SEM, to other techniques like freeze-fracture, and to novel "on the grid" experiments that make it possible to freeze successive instants of dynamic processes such as membrane fusion, chemical reactions, and phase transitions.

590 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that an infinite sequence of generalized measurements, recently proposed by Ivanovic, can be performed in a single step, and that the final result is the optimal one, regardless of the number of measurements.

520 citations


Journal ArticleDOI
TL;DR: It is shown that there exists a tradeoff between the number of frequency components used per position and thenumber of such clusters (sampling rate) utilized along the spatial coordinate.
Abstract: A scheme suitable for visual information representation in a combined frequency-position space is investigated through image decomposition into a finite set of two-dimensional Gabor elementary functions (GEFs). The scheme is generalized to account for the position-dependent Gabor-sampling rate, oversampling, logarithmic frequency scaling, and phase quantization characteristic of the visual system. Comparison of reconstructed signals highlights the advantages of the generalized Gabor scheme in coding typical bandlimited images. It is shown that there exists a tradeoff between the number of frequency components used per position and the number of such clusters (sampling rate) utilized along the spatial coordinate.

486 citations
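The Gabor elementary functions the scheme builds on can be sampled directly. Below is a minimal Python sketch of a single 2-D GEF: a Gaussian envelope multiplying a complex sinusoid. The particular size, width, and frequency values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_2d(size, sigma, u0, v0):
    """Sample a GEF: Gaussian envelope times a complex sinusoid,
    g(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) * exp(2j pi (u0 x + v0 y))."""
    x = np.arange(size) - size // 2
    X, Y = np.meshgrid(x, x)
    envelope = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * (u0 * X + v0 * Y))
    return envelope * carrier

# One GEF, localized both in position (Gaussian decay) and in frequency
# (its Fourier transform is a Gaussian centered at (u0, v0)).
g = gabor_2d(size=64, sigma=8.0, u0=0.1, v0=0.0)
```

A Gabor decomposition expands an image over a family of such functions at different positions and frequencies; the paper's tradeoff concerns how densely each of those two axes is sampled.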


Journal ArticleDOI
TL;DR: The objective of this Minireview is to summarize briefly what the authors have learned and to point out what they yet have to learn about the mode of action of the ubiquitin-mediated protein breakdown system.

424 citations


Book ChapterDOI
01 Jan 1988
TL;DR: In this paper, the authors find sufficient conditions which guarantee the existence, even with probability close to 1, of k-codimensional subspaces which miss a subset S of the Euclidean space ℝⁿ (1 ≤ k < n), derive a sharp form of Milman's inequality as a consequence, and discuss some applications to Banach spaces.
Abstract: Let S be a subset of the Euclidean space ℝⁿ and 1 ≤ k < n. We find sufficient conditions which guarantee the existence, even with probability close to 1, of k-codimensional subspaces which miss S. As a consequence we derive a sharp form of Milman's inequality and discuss some applications to Banach spaces.

404 citations


Journal ArticleDOI
TL;DR: For both the RSA and Rabin encryption functions, inverting the function and guessing the least-significant bit of the plaintext are computationally equivalent, which implies that an adversary, given the RSA/Rabin ciphertext, cannot have a non-negligible advantage in guessing the least-significant bit of the plaintext, unless he can invert RSA/factor N.
Abstract: The RSA and Rabin encryption functions $E_N ( \cdot )$ are respectively defined by raising $x \in Z_N $ to the power e (where e is relatively prime to $\varphi (N)$) and squaring modulo N (i.e., $E_N (x) = x^e (\bmod N)$, $E_N (x) = x^2 (\bmod N)$, respectively). We prove that for both functions, the following problems are computationally equivalent (each is probabilistic polynomial-time reducible to the other): (1) Given $E_N (x)$, find x. (2) Given $E_N (x)$, guess the least-significant bit of x with success probability $\tfrac{1}{2} + \frac{1}{\operatorname{poly}(n)}$ (where n is the length of the modulus N). This equivalence implies that an adversary, given the RSA/Rabin ciphertext, cannot have a non-negligible advantage (over a random coin flip) in guessing the least-significant bit of the plaintext, unless he can invert RSA/factor N. The proof techniques also yield the simultaneous security of the $\log n$ least-significant bits. Our results improve the efficiency of pseudorandom number generation and...

328 citations
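The two functions under study are easy to state in code. The following toy sketch uses the standard textbook parameters p = 61, q = 53, e = 17 (tiny and insecure by design, and not taken from the paper), just to make the definitions concrete:

```python
def rsa_encrypt(x, e, N):
    """RSA function: E_N(x) = x^e mod N (e relatively prime to phi(N))."""
    return pow(x, e, N)

def rabin_encrypt(x, N):
    """Rabin function: E_N(x) = x^2 mod N."""
    return pow(x, 2, N)

# Tiny textbook example: N = 61 * 53 = 3233, phi(N) = 3120, e = 17.
p, q, e = 61, 53, 17
N = p * q
x = 65
c = rsa_encrypt(x, e, N)
# Whoever knows the factorization can invert: d = e^(-1) mod phi(N).
d = pow(e, -1, (p - 1) * (q - 1))
assert pow(c, d, N) == x
# The paper's result: guessing even the least-significant bit x & 1 from
# E_N(x) with non-negligible advantage is as hard as inverting E_N.
```

The hardness claim is about what an adversary *without* the factorization can do: the result reduces inverting the function to merely guessing the plaintext's least-significant bit.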


Journal ArticleDOI
TL;DR: A new, interactive multi-objective linear-programming procedure to aid decision-makers in setting up goals for desired outputs that relies on empirical production functions generated by the use of data envelopment analysis.
Abstract: This paper presents a new, interactive multi-objective linear-programming procedure to aid decision-makers in setting up goals for desired outputs. The procedure relies on empirical production functions generated by the use of data envelopment analysis. It presents the decision-maker with a set of alternative efficient points in order either to compare different sets of inputs in terms of their effectiveness for goal achievement, or to set goals against which future management performance may be measured. With each iteration the new information provided by the decision-maker is used to adjust the procedure, leading to points which have greater effectiveness utility for the decision-maker. A numerical example is provided along with guidelines for future applications.

306 citations


Journal ArticleDOI
TL;DR: A communication protocol is described which uses k rooted spanning trees having the property that for every vertex ν the paths from ν to the root are edge-disjoint, and an algorithm to find two such trees in a 2-edge-connected graph that runs in time proportional to the number of edges in the graph.
Abstract: Consider a network of asynchronous processors communicating by sending messages over unreliable lines. There are many advantages to restricting all communications to a spanning tree. To overcome the possible failure of k′ < k edges, we describe a communication protocol which uses k rooted spanning trees having the property that for every vertex ν the paths from ν to the root are edge-disjoint. An algorithm to find two such trees in a 2-edge-connected graph is described that runs in time proportional to the number of edges in the graph. This algorithm has a distributed version which finds the two trees even when a single edge fails during their construction. The two trees then may be used to transform certain centralized algorithms to distributed, reliable, and efficient ones.

Journal ArticleDOI
TL;DR: In this article, the analysis of inertial navigation systems (INS) is approached from a control theory point of view, and the relationship between system observability and quality of estimation is presented.
Abstract: In this work, the analysis of inertial navigation systems (INS) is approached from a control theory point of view. Linear error models are presented and discussed and their eigenvalues are computed in several special cases. It is shown that the exact expressions derived for the eigenvalues differ slightly from the commonly used expressions. The observability of INS during initial alignment and calibration at rest is analyzed. A transformation that is based on physical insight is introduced that enables us to determine the unobservable subspace and states rather easily by inspection of the new dynamics matrix. Finally, the relationship between system observability and quality of estimation is presented. However, as will be shown in the sequel, the inclusion of the vertical channel does alter the eigenvalues slightly. In the examination of system observability, we use a straightforward transformation into observable and unobservable subsystems that, in turn, expose the states that hamper the estimation of INS errors during the initial alignment and calibration phase of operation. This approach was adopted successfully in the past by Kortum3 who considered the problem of INS platform alignment in which the measurements were the horizontal accelerometer outputs, whereas in the present case the measurements are the INS horizontal velocity components. In addition, the comparison of this approach to the classical one that is presented in the present paper, as well as the discussion of uniqueness and the relationship between observability and quality of estimation, provide additional insight into the observability issue. It is hoped that the examination of INS as a unified system from a control theory point of view will shed more light on the system and contribute additional insight into the analysis of INS. In the next section, we describe the INS linear error model that will be the investigated plant. In Sec.
III, we investigate the eigenvalues of INS in various phases of operation, and in Sec. IV, the issues of controllability and observability of the system are discussed. The relation between system observability and the ability to estimate its states during initial alignment is discussed in Sec. V. Finally, in Sec. VI, the conclusions are presented.

Journal ArticleDOI
TL;DR: Results indicate that the added risk of an injury accident in rainy conditions can be substantial: two to three times greater than in dry weather.

Journal ArticleDOI
TL;DR: An analytical performance evaluation of the errors of the direction-of-arrival estimates obtained by the MUSIC algorithm for uncorrelated sources confirms empirical evidence of the excellent performance of the MUSIC algorithm for narrowband signals.
Abstract: An analytical performance evaluation of the errors of the direction-of-arrival estimates obtained by the MUSIC algorithm for uncorrelated sources is provided. Explicit asymptotic formulas are derived for the means and the covariances of the estimates. The covariances are then compared to the Cramer-Rao lower bound. It is shown that for a single source, the MUSIC algorithm is asymptotically efficient. For multiple sources, the algorithm is not efficient in general. However, it approaches asymptotic efficiency when the SNRs (signal-to-noise ratios) of all sources tend to infinity. It is illustrated by several test cases that the relative efficiency of the MUSIC algorithm is nearly one under a wide range of parameter variations. The analytic performance evaluation thus confirms empirical evidence of the excellent performance of the MUSIC algorithm for narrowband signals.
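For readers unfamiliar with the algorithm whose errors are analyzed, here is a minimal numerical sketch of MUSIC for two uncorrelated narrowband sources on a half-wavelength uniform linear array. The geometry, SNR, and snapshot count are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 2000                    # sensors, snapshots (assumed)
true_deg = [-20.0, 30.0]          # two uncorrelated source directions

def steering(deg):
    # Half-wavelength ULA: inter-element phase shift of pi * sin(theta).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(deg)))

# Simulate array snapshots: X = A S + noise.
A = np.stack([steering(d) for d in true_deg], axis=1)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

# MUSIC: project steering vectors onto the noise subspace of the sample
# covariance; the pseudospectrum peaks at the source directions.
R = X @ X.conj().T / N
eigvals, V = np.linalg.eigh(R)     # eigenvalues in ascending order
En = V[:, : M - 2]                 # noise subspace (2 sources assumed known)

grid = np.arange(-90.0, 90.0, 0.1)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2
                 for g in grid])
peaks = [i for i in range(1, len(grid) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
peaks.sort(key=lambda i: spec[i], reverse=True)
est = sorted(grid[i] for i in peaks[:2])   # should land near -20 and 30
```

The paper's asymptotic formulas characterize how far `est` deviates from the true angles as the number of snapshots and the SNR vary.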

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the effect of parameter uncertainty on equilibrium asset prices and found that symmetric estimation risk affects equilibrium values of relative asset prices, expected returns, market weights, and betas.

Journal ArticleDOI
TL;DR: High-accuracy numerical quadrature methods for integrals of singular periodic functions are proposed based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations for singular and weakly singular Fredholm integral equations.
Abstract: High-accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are subsequently used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Throughout the development the periodic nature of the problem plays a crucial role. Such periodic equations are used in the solution of planar elliptic boundary value problems such as those that arise in elasticity, potential theory, conformal mapping, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
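The periodic nature of the problem is exactly what makes the trapezoidal rule such a strong starting point: for a smooth 2π-periodic integrand all the Euler-Maclaurin correction terms vanish, so the plain rule converges faster than any power of 1/n. A small sketch (the integrand is an illustrative choice, not from the paper):

```python
import numpy as np

def periodic_trapezoid(f, n):
    """Trapezoidal rule for the integral of f over [0, 2*pi], using n
    equally spaced nodes (the endpoint node is dropped by periodicity)."""
    t = 2.0 * np.pi * np.arange(n) / n
    return (2.0 * np.pi / n) * np.sum(f(t))

# Smooth periodic test integrand: the integral of exp(cos t) over
# [0, 2*pi] equals 2*pi*I_0(1) ~= 7.954926521, and n = 16 nodes
# already reach near machine precision.
f = lambda t: np.exp(np.cos(t))
approx = periodic_trapezoid(f, 16)
```

The paper's methods build on this by adding the appropriate Euler-Maclaurin corrections for integrands with (weak) singularities, where the plain rule alone no longer suffices.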

Journal ArticleDOI
TL;DR: The effect of hGH on marrow myeloid progenitors appears to be mediated by paracrine IGF-I/Sm-C; the effect is similarly demonstrable in serum-free cultures stimulated with human recombinant granulocyte/macrophage colony-stimulating factor.
Abstract: The effect of biosynthetic recombinant insulin-like growth factor I/somatomedin C (IGF-I/Sm-C) and human growth hormone (hGH) on the in vitro growth and maturation of human marrow myeloid progenitors was investigated. Myeloid colony formation was maximally enhanced by 60 ng/ml IGF-I/Sm-C and by 250 ng/ml hGH, resulting in an increase in colony numbers of 41 +/- 7 and 38 +/- 4%, respectively (P less than 0.001). Both peptides induced a 1.5-2.5-fold increase in the frequency of colonies composed of granulocytes alone, but did not alter the numbers of monocyte/macrophage or mixed granulocyte/macrophage colonies. IGF-I/Sm-C and hGH were also found to enhance myeloid maturation towards mature granulocytes in suspension cultures of human marrow cells. The effect of both peptides on human marrow granulopoiesis was similarly demonstrable in serum-free cultures stimulated with human recombinant granulocyte/macrophage colony-stimulating factor. Enhancement of human marrow granulopoiesis in vitro by hGH required the presence of marrow adherent cells and was abrogated by specific monoclonal antibodies directed against IGF-I/Sm-C receptors. The effect of hGH on marrow myeloid progenitors thus appears to be mediated by paracrine IGF-I/Sm-C.

Journal ArticleDOI
TL;DR: Protein contents of crude extracts from plant and animal tissues can be rapidly assayed using a Coomassie blue dye-binding procedure combined with scanning densitometry, providing measurements of protein concentration below 100 and 200 ng, respectively.

Journal ArticleDOI
TL;DR: It is demonstrated that the normalized least mean square (NLMS) algorithm can be viewed as a modification of the widely used LMS algorithm and is shown to have an important advantage over the LMS, which is that its convergence is independent of environmental changes.
Abstract: It is demonstrated that the normalized least mean square (NLMS) algorithm can be viewed as a modification of the widely used LMS algorithm. The NLMS is shown to have an important advantage over the LMS: its convergence is independent of environmental changes. In addition, the authors present a comprehensive study of the first- and second-order behavior of the NLMS algorithm. They show that the NLMS algorithm exhibits significant improvement over the LMS algorithm in convergence rate, while its steady-state performance is considerably worse.
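The normalization that distinguishes the NLMS update is a one-line change to LMS: the step is divided by the instantaneous input power, so convergence no longer depends on the input's scale. A minimal system-identification sketch (the channel taps and signal lengths are illustrative assumptions):

```python
import numpy as np

def nlms_identify(x, d, num_taps, mu=0.5, eps=1e-8):
    """Identify an FIR system from input x and desired output d.
    The only difference from LMS is the division by the input power."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1 : n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ u                        # a priori error
        w += mu * e * u / (u @ u + eps)         # normalized step
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])                  # unknown channel (assumed)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]                  # noiseless channel output
w = nlms_identify(x, d, num_taps=3)             # w converges toward h
# Scaling x (and hence d) by any constant leaves the normalized update's
# trajectory essentially unchanged -- the advantage over plain LMS, whose
# stable step-size range shrinks as the input power grows.
```

The `eps` term guards the division when the input vector is momentarily near zero, a standard practical safeguard rather than part of the paper's analysis.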

Journal ArticleDOI
01 Jan 1988-Biomass
TL;DR: The reduced effectiveness of cationic polymers to induce microalgae flocculation in sea and brackish water is primarily attributed to the effect of medium ionic strength on the configuration and dimension of the polymer, as indicated by changes in the intrinsic viscosity.

Journal ArticleDOI
TL;DR: Three factors are resolved from reticulocyte extracts that are required for the ATP-dependent breakdown of 125I-lysozyme-ubiquitin conjugates, and one role of ATP in conjugate breakdown is the formation of an active multienzyme complex.

Journal ArticleDOI
TL;DR: Discrete nonlinear systems are considered, and the quasi-continuum formalism is extended to include high-order discrete effects.
Abstract: Discrete nonlinear systems are considered and the quasi-continuum formalism is extended to include high-order discrete effects.

Journal ArticleDOI
TL;DR: It is shown that this new class properly contains the union of the two former classes, and that trapezoid graphs are equivalent to the incomparability graphs of partially ordered sets having interval order dimension at most two.

Journal ArticleDOI
TL;DR: In this article, the adiabatic quantum transport in multiply-connected systems is examined, where the main theme is the differential-geometric interpretation of Kubo's formulas as curvatures.
Abstract: The adiabatic quantum transport in multiply connected systems is examined. The systems considered have several holes, usually three or more, threaded by independent flux tubes, the transport properties of which are described by matrix-valued functions of the fluxes. The main theme is the differential-geometric interpretation of Kubo's formulas as curvatures. Because of this interpretation, and because flux space can be identified with the multitorus, the adiabatic conductances have topological significance, related to the first Chern character. In particular, they have quantized averages. The authors describe various classes of quantum Hamiltonians that describe multiply connected systems and investigate their basic properties. They concentrate on models that reduce to the study of finite-dimensional matrices. In particular, the reduction of the "free-electron" Schr\"odinger operator, on a network of thin wires, to a matrix problem is described in detail. The authors define "loop currents" and investigate their properties and their dependence on the choice of flux tubes. They introduce a method of topological classification of networks according to their transport. This leads to the analysis of level crossings and to the association of "charges" with crossing points. Networks made with three equilateral triangles are investigated and classified, both numerically and analytically. Many of these networks turn out to have nontrivial topological transport properties for both the free-electron and the tight-binding models. The authors conclude with some open problems and questions.

Journal ArticleDOI
TL;DR: It is demonstrated that if one of the probability measures of the two classes is not known, it is still possible to define a universal discrimination function which performs as well as the optimal (likelihood ratio) discriminant function (which can be evaluated only if the probability measures of the two classes are available).
Abstract: Classification with empirically observed statistics is studied for finite-alphabet sources. Efficient universal discriminant functions are described and shown to be related to universal data compression. It is demonstrated that if one of the probability measures of the two classes is not known, it is still possible to define a universal discrimination function which performs as well as the optimal (likelihood ratio) discriminant function (which can be evaluated only if the probability measures of the two classes are available). If both of the probability measures are unavailable but training vectors from at least one of the two classes are available, it is demonstrated that no discriminant function can perform efficiently if the length of the training vectors does not grow at least linearly with the length of the classified vector. A universal discriminant function is introduced and shown to perform efficiently when the length of the training vectors grows linearly with the length of the classified sequence, in the sense that it yields an error exponent that is arbitrarily close to that of the optimal discriminant function.
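The benchmark against which the universal schemes are measured is the likelihood-ratio discriminant, computable only when both measures are known. A toy sketch for i.i.d. binary-alphabet sources (the measures and sequence length are illustrative assumptions):

```python
import numpy as np

def log_likelihood(seq, probs):
    """Log-probability of an i.i.d. sequence under a finite-alphabet measure."""
    return float(sum(np.log(probs[s]) for s in seq))

def likelihood_ratio_classify(seq, p0, p1):
    """Optimal discriminant when both measures are known: pick the class
    whose measure assigns the sequence the higher likelihood."""
    return 0 if log_likelihood(seq, p0) >= log_likelihood(seq, p1) else 1

p0 = {'a': 0.9, 'b': 0.1}    # class-0 measure (assumed for illustration)
p1 = {'a': 0.4, 'b': 0.6}    # class-1 measure
rng = np.random.default_rng(2)
seq = rng.choice(['a', 'b'], size=200, p=[0.9, 0.1])    # drawn from class 0
label = likelihood_ratio_classify(seq, p0, p1)          # expected: class 0
```

The paper's universal discriminant functions approach this rule's error exponent without access to `p0` or `p1`, using only training sequences whose length grows linearly with the classified sequence.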

Journal ArticleDOI
TL;DR: In this article, a procedure for delineating directional and isotropic hardening from uniaxial hardening data has been developed for the Bodner-Partom model and applied to a nickel-base superalloy, B1900 + Hf.
Abstract: Modeling of hardening and thermal recovery in metals is considered within the context of unified elastic-viscoplastic theories. Specifically, the choices of internal variables and hardening measures, and the resulting hardening response obtained by incorporating saturation-type evolution equations into two general forms of the flow law are examined. Based on the analytical considerations, a procedure for delineating directional and isotropic hardening from uniaxial hardening data has been developed for the Bodner-Partom model and applied to a nickel-base superalloy, B1900 + Hf. Predictions based on the directional hardening properties deduced from the monotonic loading data are shown to be in good agreement with results of cyclic tests.

Journal ArticleDOI
TL;DR: In this paper, the relations among various languages and models for distributed computation and various possible definitions of fairness are considered, and natural semantic criteria are presented which an acceptable notion of fairness should satisfy.
Abstract: The relations among various languages and models for distributed computation and various possible definitions of fairness are considered. Natural semantic criteria are presented which an acceptable notion of fairness should satisfy. These are then used to demonstrate differences among the basic models, the added power of the fairness notion, and the sensitivity of the fairness notion to irrelevant semantic interleavings of independent operations. These results are used to show that from the considerable variety of commonly used possibilities, only strong process fairness is appropriate for CSP if these criteria are adopted. We also show that under these criteria, none of the commonly used notions of fairness are fully acceptable for a model with an n-way synchronization mechanism. The notion of fairness most often mentioned for Ada is shown to be fully acceptable. For a model with nonblocking send operations, some variants of common fairness definitions are appraised, and two are shown to satisfy the suggested criteria.

Journal ArticleDOI
01 Jan 1988-Biomass
TL;DR: In this article, high salinity was found to inhibit flocculation with polyelectrolytes, which are quite effective in freshwater algae flocculation, and this inhibition was diminished at reduced salinity levels.

Journal ArticleDOI
TL;DR: This study used simple derivatives of amino acids, such as methyl esters, hydroxamates, or dipeptides, to examine the question of whether the protein binding site of the ligase is able to distinguish between different NH2-terminal residues of proteins.

Journal ArticleDOI
TL;DR: The in vitro growth‐promoting effect of hGH upon human erythroid precursors thus appears to be mediated by paracrine IGF‐I.
Abstract: Insulin-like growth factor I (IGF-I) is the presumed paracrine or autocrine growth-promoting mediator of growth hormone in peripheral tissues. In order to evaluate the role of IGF-I as mediator of human growth hormone (hGH) in erythropoiesis, we compared the effects of both peptides upon in vitro colony formation by primitive (BFU-E) and relatively mature (CFU-E) human erythroid precursors. Biosynthetic IGF-I (2 ng/ml) and hGH (25 ng/ml) induced a significant increase in the growth of both BFU-E and CFU-E. BFU-E growth was maximally enhanced by 6 ng/ml IGF-I and by 50 ng/ml hGH, resulting in an increase in burst numbers of 62 +/- 12% and 52 +/- 12%, respectively. Maximal enhancement of CFU-E growth was detected at higher concentrations of IGF-I (20 ng/ml) and hGH (150 ng/ml), with respective increases of 121 +/- 35% and 137 +/- 18% in colony numbers. Enhancement of bone marrow and peripheral blood erythroid progenitor cell growth by hGH required the presence of monocytes and was abrogated by specific monoclonal antibodies directed against IGF-I membrane receptors. The in vitro growth-promoting effect of hGH upon human erythroid precursors thus appears to be mediated by paracrine IGF-I.