
Showing papers by "Technion – Israel Institute of Technology published in 1989"


Journal ArticleDOI
13 Jul 1989 - Nature
TL;DR: In this paper, it was pointed out that neutron-star collisions should synthesize neutron-rich heavy elements, thought to be formed by rapid neutron capture (the r-process), and these collisions should produce neutrino bursts and resultant bursts of gamma rays; the latter should comprise a subclass of observable gamma-ray bursts.
Abstract: It is pointed out here that neutron-star collisions should synthesize neutron-rich heavy elements, thought to be formed by rapid neutron capture (the r-process). Furthermore, these collisions should produce neutrino bursts and resultant bursts of gamma rays; the latter should comprise a subclass of observable gamma-ray bursts. It is argued that observed r-process abundances and gamma-ray burst rates predict rates for these collisions that are both significant and consistent with other estimates.

1,986 citations


Journal ArticleDOI
TL;DR: The unsupervised fuzzy partition-optimal number of classes algorithm performs well in situations of large variability of cluster shapes, densities, and number of data points in each cluster.
Abstract: This study reports on a method for carrying out fuzzy classification without a priori assumptions on the number of clusters in the data set. Assessment of cluster validity is based on performance measures using hypervolume and density criteria. An algorithm is derived from a combination of the fuzzy K-means algorithm and fuzzy maximum-likelihood estimation. The unsupervised fuzzy partition-optimal number of classes algorithm performs well in situations of large variability of cluster shapes, densities, and number of data points in each cluster. The algorithm was tested on different classes of simulated data, and on a real data set derived from a sleep EEG signal.
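The fuzzy K-means core that the algorithm builds on can be sketched compactly. The following is the standard fuzzy c-means update (membership exponent m, Euclidean distances) written as a minimal Python sketch; it is not the paper's full unsupervised hypervolume/density machinery, and the initialization and parameter values are illustrative assumptions.

    import numpy as np

    def fuzzy_kmeans(X, k, m=2.0, iters=100, seed=0):
        """Standard fuzzy c-means: returns (cluster centers, membership matrix U)."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.dirichlet(np.ones(k), size=n)            # n x k fuzzy memberships, rows sum to 1
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # membership update: u_ij proportional to d_ij^(-2/(m-1))
            U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        return centers, U

The unsupervised algorithm in the paper wraps a loop of this kind with validity criteria (hypervolume and density) to choose the number of clusters, which the sketch above deliberately leaves out.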

1,691 citations


Proceedings ArticleDOI
01 Feb 1989
TL;DR: This paper proves a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of Boolean vectors p, x is a hard-core of every one-way function ƒ(p, x) = (p, g(x)), and extends the result to multiple (up to the logarithm of the security) such bits and to any distribution on the x's.
Abstract: A central tool in constructing pseudorandom generators, secure encryption functions, and in other areas are “hard-core” predicates b of functions (permutations) ƒ, discovered in [Blum Micali 82]. Such b(x) cannot be efficiently guessed (substantially better than 50-50) given only ƒ(x). Both b and ƒ are computable in polynomial time. [Yao 82] transforms any one-way function ƒ into a more complicated one, ƒ*, which has a hard-core predicate. The construction applies the original ƒ to many small pieces of the input to ƒ* just to get one “hard-core” bit. The security of this bit may be smaller than any constant positive power of the security of ƒ. In fact, for inputs (to ƒ*) of practical size, the pieces affected by ƒ are so small that ƒ can be inverted (and the “hard-core” bit computed) by exhaustive search. In this paper we show that every one-way function, padded to the form ƒ(p, x) = (p, g(x)), |p| = |x|, has by itself a hard-core predicate of the same (within a polynomial) security. Namely, we prove a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of Boolean vectors p, x is a hard-core of every one-way function ƒ(p, x) = (p, g(x)). The result extends to multiple (up to the logarithm of the security) such bits and to any distribution on the x's for which ƒ is hard to invert.
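As a concrete illustration of the predicate itself (not of the security proof), the hard-core bit is simply the inner product of the pad p and the input x over GF(2). A minimal Python sketch, where the choice of byte strings and the random pad are illustrative assumptions and any real use would pair the bit with an actual one-way function g:

    import secrets

    def hardcore_bit(p: bytes, x: bytes) -> int:
        """Inner product of bit strings p and x over GF(2): <p, x> mod 2."""
        assert len(p) == len(x)
        acc = 0
        for pb, xb in zip(p, x):
            acc ^= pb & xb            # XOR-accumulate the byte-wise ANDs
        return bin(acc).count("1") % 2  # parity of the accumulated byte

    # toy usage: pad p chosen at random, same length as x
    x = b"example input"
    p = secrets.token_bytes(len(x))
    b = hardcore_bit(p, x)            # the bit that is hard to predict from (p, g(x))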

1,419 citations


Journal ArticleDOI
TL;DR: In this paper, a nonzero Berry phase is shown to exist in a one-dimensional parameter space, and a connection is established between this phase and the Wyckoff positions in the crystal within the framework of band representations of space groups.
Abstract: Berry's phase is defined for the dynamics of electrons in periodic solids and an explicit formula is derived for it. Because of the special torus topology of the Brillouin zone a nonzero Berry phase is shown to exist in a one-dimensional parameter space. Symmetry of the Bloch functions in the Brillouin zone leads to the quantization of Berry's phase. A connection is established between the latter and the Wyckoff positions in the crystal in the framework of band representations of space groups. Berry's phase can therefore be used for labeling energy bands in solids.
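For orientation, the quantity defined here is what is now usually called the Zak phase. In the standard textbook form (written as a generic LaTeX expression, not a quotation of the paper's equation), with u_{nk} the cell-periodic part of the Bloch function of band n:

    \gamma_n \;=\; i \oint_{\mathrm{BZ}} \langle u_{nk} \,|\, \partial_k u_{nk} \rangle \, dk ,

and under inversion (or an appropriate reflection) symmetry this phase is quantized, \gamma_n \in \{0, \pi\} \pmod{2\pi}, which is what allows it to serve as a label for energy bands.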

1,386 citations


Journal ArticleDOI
TL;DR: A systematic application procedure of the DEA methodology in its various stages is suggested, focused on the selection of 'decision making units' (DMUs) to enter the analysis as well as the choice and screening of factors.
Abstract: Data Envelopment Analysis (DEA) has become an accepted approach for assessing efficiency in a wide range of cases. The present paper suggests a systematic application procedure of the DEA methodology in its various stages. Attention is focused on the selection of ‘decision making units’ (DMUs) to enter the analysis as well as the choice and screening of factors. The application of several DEA models (in different versions and formulations) is demonstrated, in the process of determining relative efficiencies within the compared DMUs.
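As background for the models mentioned (a standard textbook formulation, not one quoted from this paper), the basic CCR ratio form of DEA evaluates DMU o, with inputs x_{io} and outputs y_{ro}, by choosing nonnegative weights that maximize its output/input ratio while keeping every DMU's ratio at most one:

    \max_{u,v}\ \frac{\sum_r u_r y_{ro}}{\sum_i v_i x_{io}}
    \quad \text{subject to} \quad
    \frac{\sum_r u_r y_{rj}}{\sum_i v_i x_{ij}} \le 1 \ \ (j = 1,\dots,J), \qquad u_r \ge 0,\ v_i \ge 0 .

The different DEA versions demonstrated in the paper (input- or output-oriented, different returns-to-scale assumptions) are variations on this basic construction.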

1,280 citations


Journal ArticleDOI
TL;DR: In this paper, an experimental study of the variation of hardness as a function of grain size for nanocrystalline materials is presented.

1,025 citations


Journal ArticleDOI
TL;DR: In this paper, the interaction between solid particles and the turbulence of the carrier fluid is considered; theoretical considerations suggest that particles with a low Reynolds number cause suppression of the turbulence, while particles with a higher Reynolds number cause enhancement of the turbulence due to wake shedding.

458 citations


Journal ArticleDOI
TL;DR: A review of the state of the art in interlaminar fracture toughness (IFT), its relation to structural performance, and the damage tolerance of polymeric composite materials is presented in this paper.

377 citations


Journal ArticleDOI
TL;DR: It seems that the 20S "multicatalytic" protease, a complex of low-molecular-weight subunits widely distributed in eukaryotic cells, is the "catalytic core" of the 26S complex of the ubiquitin proteolytic pathway.
Abstract: Previous studies have indicated that the ATP-dependent 26S protease complex that degrades proteins conjugated to ubiquitin is formed by the assembly of three factors in an ATP-requiring process. We now identify one of the factors as the 20S "multicatalytic" protease, a complex of low molecular weight subunits widely distributed in eukaryotic cells. Comparison of the subunit compositions of purified 20S and 26S complexes indicates that the former is an integral part of the latter. By the use of detergent treatment to activate latent protease activity, we show that the 20S protease becomes incorporated into the 26S complex in the ATP-dependent assembly process. It thus seems that the 20S protease is the "catalytic core" of the 26S complex of the ubiquitin proteolytic pathway.

362 citations


Journal ArticleDOI
TL;DR: A new method for image segmentation via adaptive thresholding is presented, where the threshold surface is determined by interpolating the image gray levels at points where the gradient is high, indicating probable object edges.
Abstract: In applications involving visual inspection, it is often required to separate objects from background in conditions of poor and nonuniform illumination. In such cases one has to rely on adaptive methods that learn the illumination from the given images and base the object/background decision on this information. We here present a new method for image segmentation via adaptive thresholding. The threshold surface is determined by interpolating the image gray levels at points where the gradient is high, indicating probable object edges. Several methods of data interpolation to levels given at scattered points in the image plane are discussed. One method is tested on several examples and the segmentation results are compared to previously proposed adaptive thresholding algorithms.
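A minimal sketch of this style of adaptive thresholding (a generic reconstruction of the idea, not the authors' specific interpolation scheme): sample gray levels where the gradient magnitude is large, interpolate them into a threshold surface, and compare the image against that surface. The function names, the gradient percentile, and the use of scipy's griddata interpolator are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import griddata
    from scipy.ndimage import gaussian_gradient_magnitude

    def adaptive_threshold(image: np.ndarray, grad_percentile: float = 95.0) -> np.ndarray:
        """Segment `image` by thresholding against an interpolated threshold surface."""
        img = image.astype(float)
        # 1. find points of high gradient magnitude (probable object edges)
        grad = gaussian_gradient_magnitude(img, sigma=2.0)
        mask = grad >= np.percentile(grad, grad_percentile)
        rows, cols = np.nonzero(mask)
        values = img[rows, cols]                      # gray levels at the edge points
        # 2. interpolate these gray levels into a full threshold surface
        rr, cc = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        surface = griddata((rows, cols), values, (rr, cc), method="linear")
        # outside the convex hull of the samples, fall back to the mean sampled level
        surface = np.where(np.isnan(surface), np.nanmean(values), surface)
        # 3. object/background decision against the local threshold
        return img > surface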

346 citations


Journal ArticleDOI
TL;DR: This pragmatic approach to all coding applications permits the use of a single basic coder and decoder to achieve respectable coding (power) gains for bandwidth efficiencies from 1 b/s/Hz to 6 b/S/Hz.
Abstract: Since the early 1970s, for power-limited applications, the convolutional code of constraint length K=7 and rate 1/2, optimum in the sense of maximum free distance and minimum number of bit errors caused by remerging paths at the free distance, has become the de facto standard for coded digital communication. This was reinforced when punctured versions of this code became the standard for rate 3/4 and 7/8 codes for moderately bandlimited channels. Methods are described for using the same K=7, rate 1/2 convolutional code with signal phase constellations of 8-PSK and 16-PSK and quadrature amplitude constellations of 16-QASK, 64-QASK, and 256-QASK to achieve, respectively, 2 and 3, and 2, 4, and 6 b/s/Hz bandwidth efficiencies, while providing power efficiency that in most cases is virtually equivalent to that of the best Ungerboeck codes for constraint length 7 or 64 states. This pragmatic approach to all coding applications permits the use of a single basic coder and decoder to achieve respectable coding (power) gains for bandwidth efficiencies from 1 b/s/Hz to 6 b/s/Hz.

Journal ArticleDOI
TL;DR: Theorems concerning the entropy of a stationary ergodic information source are derived and used to obtain insight into the workings of certain data-compression coding schemes, in particular the Lempel-Ziv data compression algorithm.
Abstract: Theorems concerning the entropy of a stationary ergodic information source are derived and used to obtain insight into the workings of certain data-compression coding schemes, in particular the Lempel-Ziv data compression algorithm.
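As a reminder of the algorithm the theorems are applied to (a textbook sketch, not code from this paper), LZ78-style incremental parsing splits the source into distinct phrases, and for a stationary ergodic source the per-symbol code length of about (c log c)/n bits, with c phrases in n symbols, approaches the entropy rate. A small Python sketch of the parsing and that rough estimate:

    import math

    def lz78_phrases(s: str):
        """Incremental (LZ78) parsing: each phrase = longest previously seen phrase + one new symbol."""
        dictionary = {""}
        phrases = []
        current = ""
        for ch in s:
            if current + ch in dictionary:
                current += ch
            else:
                phrases.append(current + ch)
                dictionary.add(current + ch)
                current = ""
        if current:
            phrases.append(current)    # possibly incomplete last phrase
        return phrases

    text = "abababababababbbabababab"
    c = len(lz78_phrases(text))
    estimate = c * math.log2(c) / len(text)   # rough per-symbol entropy estimate, in bits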

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the theoretical foundations and methodological rationale of a novel approach to the training of complex skills based on the introduction of multiple emphasis changes on subcomponents of a complex task.

Book ChapterDOI
01 Jul 1989
TL;DR: In this article, a general construction is presented which transforms any ordinary digital signature scheme into an on-line/off-line signature scheme at a small overhead; the off-line phase can be performed leisurely, before the message to be signed is even known.
Abstract: We introduce and exemplify the new concept of ON-LINE/OFF-LINE digital signature schemes. In these schemes the signing of a message is broken into two phases. The first phase is off-line. Though it requires a moderate amount of computation, it presents the advantage that it can be performed leisurely, before the message to be signed is even known. The second phase is on-line. It starts after the message becomes known, it utilizes the precomputation of the first phase and is much faster. A general construction which transforms any (ordinary) digital signature scheme to an on-line/off-line signature scheme is presented, entailing a small overhead. For each message to be signed, the time required for the off-line phase is essentially the same as in the underlying signature scheme; the time required for the on-line phase is essentially negligible. The time required for the verification is essentially the same as in the underlying signature scheme. In a practical implementation of our general construction, we use a variant of Rabin's signature scheme (based on factoring) and DES. In the on-line phase, all we use is a moderate amount of DES computation. This implementation is ideally suited for electronic wallets or smart cards. On-line/off-line digital signature schemes may also become useful in case substantial progress is made on, say, factoring. In this case, the length of the composite numbers used in signature schemes may need to be increased and signing may become impractical even for the legitimate user. In our scheme, all costly computations are performed in the off-line stage while the time for the on-line stage remains essentially unchanged. An additional advantage of our method is that in some cases the transformed signature scheme is invulnerable to chosen message attack even if the underlying (ordinary) digital signature scheme is not. In particular, it allows us to prove that the existence of signature schemes which are unforgeable by known message attack is a (necessary and) sufficient condition for the existence of signature schemes which are unforgeable by chosen message attack.
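The general construction can be pictured as pairing the ordinary scheme with a fast one-time signature scheme: the ordinary scheme certifies a fresh one-time verification key off-line, and only the one-time signature is computed on-line. The Python sketch below is a schematic rendering with hypothetical keygen/sign/verify interfaces; it is not the paper's concrete Rabin/DES instantiation, and all names are placeholders.

    from dataclasses import dataclass

    # Hypothetical primitive interfaces; any ordinary scheme and any one-time
    # scheme offering these three operations could be plugged in.
    @dataclass
    class Scheme:
        keygen: callable      # () -> (signing key, verification key)
        sign: callable        # (signing key, message) -> signature
        verify: callable      # (verification key, message, signature) -> bool

    def offline_phase(ordinary: Scheme, one_time: Scheme, long_term_sk):
        """Done before the message is known: certify a fresh one-time key."""
        ot_sk, ot_vk = one_time.keygen()
        cert = ordinary.sign(long_term_sk, ot_vk)     # the expensive work happens here
        return ot_sk, ot_vk, cert

    def online_phase(one_time: Scheme, ot_sk, message):
        """Done after the message is known: only a fast one-time signature."""
        return one_time.sign(ot_sk, message)

    def verify_signature(ordinary: Scheme, one_time: Scheme, long_term_vk,
                         message, ot_vk, cert, ot_sig) -> bool:
        """Accept iff the one-time key is certified and it signs the message."""
        return (ordinary.verify(long_term_vk, ot_vk, cert)
                and one_time.verify(ot_vk, message, ot_sig))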

Journal ArticleDOI
TL;DR: A detailed description of the instrumented drop-weight impact machine is presented in this article, where the instrumentation, the calibration, the inertial loading correction, and the dynamic analysis of a concrete beam specimen undergoing three-point impact flexural loading are described.
Abstract: A detailed description of the instrumented drop-weight impact machine is presented. The instrumentation, the calibration, the inertial loading correction, and the dynamic analysis of a concrete beam specimen undergoing three-point impact flexural loading are described. Some results obtained using such an impact testing machine, from tests done on plain concrete, fiber-reinforced concrete, and conventionally reinforced concrete, are presented. It is concluded that such a testing machine may be used successfully to test cementitious materials under impact.

Journal ArticleDOI
TL;DR: The decision problem of testing M hypotheses when the source is Kth-order Markov and there are M (or fewer) training sequences of length N and a single test sequence of length n is considered and it is shown that, for this configuration, only one training sequence is needed to achieve an asymptotically optimal test.
Abstract: The decision problem of testing M hypotheses when the source is Kth-order Markov and there are M (or fewer) training sequences of length N and a single test sequence of length n is considered. K, M, n, N are all given. It is shown what the requirements are on M, n, N to achieve vanishing (exponential) error probabilities and how to determine or bound the exponent. A likelihood ratio test that is allowed to produce a no-match decision is shown to provide asymptotically optimal error probabilities and minimum no-match decisions. As an important special case, the binary hypothesis problem without rejection is discussed. It is shown that, for this configuration, only one training sequence is needed to achieve an asymptotically optimal test.
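A toy sketch of the kind of empirical test involved, simplified to a first-order Markov source, two hypotheses and no rejection option (an illustration only, not the paper's exact decision rule or its exponent analysis): estimate transition statistics from each training sequence and assign the test sequence to the hypothesis giving it the higher log-likelihood. The Laplace-style smoothing constant is an assumption added so the toy code never takes log of zero.

    import math
    from collections import Counter

    def transition_counts(seq, alphabet):
        counts = Counter(zip(seq, seq[1:]))
        return {(a, b): counts[(a, b)] for a in alphabet for b in alphabet}

    def log_likelihood(test, train, alphabet, pseudo=1.0):
        """Log-likelihood of `test` under transition probabilities estimated from `train`."""
        counts = transition_counts(train, alphabet)
        totals = {a: sum(counts[(a, b)] for b in alphabet) for a in alphabet}
        ll = 0.0
        for a, b in zip(test, test[1:]):
            p = (counts[(a, b)] + pseudo) / (totals[a] + pseudo * len(alphabet))
            ll += math.log(p)
        return ll

    def classify(test, train0, train1, alphabet="01"):
        """Return 0 or 1: the hypothesis whose training sequence explains `test` better."""
        ll0 = log_likelihood(test, train0, alphabet)
        ll1 = log_likelihood(test, train1, alphabet)
        return 0 if ll0 >= ll1 else 1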

Journal ArticleDOI
TL;DR: This is the first implementation of a relaxation algorithm for edge detection in echocardiograms; the algorithm compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone.
Abstract: An automatic algorithm has been developed for high-speed detection of cavity boundaries in sequential 2-D echocardiograms using an optimization algorithm called simulated annealing (SA). The algorithm has three stages. (1) A predetermined window of size n*m is decimated to size n'*m' after low-pass filtering. (2) An iterative radial gradient algorithm is employed to determine the center of gravity (CG) of the cavity. (3) 64 radii which originate from the CG defined in stage 2 are bounded by the high-probability region. Each bounded radius is defined as a link in a 1-D, 64-member cyclic Markov random field. This algorithm is unique in that it compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone. This is the first implementation of a relaxation algorithm for edge detection in echocardiograms. Results attained using this algorithm on real data have been highly encouraging.

Journal ArticleDOI
TL;DR: An animal model of nutritional iron deficiency is described that demonstrates a reduction of brain nonheme iron resulting in a decreased number of dopamine D2 receptors in the caudate nucleus, nucleus accumbens, pituitary, and, in all probability, the frontal cortex.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of scheduling tasks on a single machine to minimize the flowtime and show that deciding whether there is a schedule with flowtime less than or equal to a given value is NP-complete.
Abstract: We consider the problem of scheduling tasks on a single machine to minimize the flowtime. The machine is subject to breakdowns during the processing of the tasks. The breakdowns occur at random times and the machine is unavailable until it is repaired. The times for repair are random and independent of each other and of the breakdown process. A task that is preempted due to a breakdown must be restarted; otherwise, preemptions are not allowed. We show in the case of a single breakdown that if the distribution function of the time to breakdown is concave then Shortest Processing Time (SPT) first scheduling stochastically minimizes the flowtime. For the case of multiple breakdowns we show that SPT minimizes the expected flowtime when the times to breakdown are exponentially distributed. If the time for a single breakdown is known before scheduling begins, and the processing times of the tasks are also known, then we show that the problem of deciding whether there is a schedule with flowtime less than or equal to a given value is NP-complete. Finally, we bound the performance of SPT scheduling in the deterministic case when there is a single breakdown.
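For reference, the SPT rule itself is straightforward; a minimal Python sketch (deliberately ignoring breakdowns, which are the paper's actual subject) showing the SPT order and the resulting flowtime, i.e. the sum of completion times:

    def spt_flowtime(processing_times):
        """Schedule jobs in Shortest-Processing-Time-first order.

        Returns (order of job indices, total flowtime = sum of completion times)."""
        order = sorted(range(len(processing_times)), key=lambda j: processing_times[j])
        t, flowtime = 0, 0
        for j in order:
            t += processing_times[j]     # completion time of job j
            flowtime += t
        return order, flowtime

    # e.g. spt_flowtime([5, 2, 8, 1]) -> ([3, 1, 0, 2], 28), since 1 + 3 + 8 + 16 = 28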

Journal ArticleDOI
TL;DR: A method for texture discrimination and image segmentation using local features based on the Gabor approach is introduced and the results show the insensitivity of the discrimination to relatively high noise levels, comparable to the performances of the human observer.
Abstract: Recent studies of cortical simple cell function suggest that the primitives of image representation in vision have a wavelet form similar to Gabor elementary functions (EFs). It is shown that textures and fully textured images can be practically decomposed into, and synthesized from, a finite set of EFs. Textured images can be synthesized from a set of EFs using an image coefficient library. Alternatively, texturing of contoured (cartoonlike) images is analogous to adding chromaticity information to contoured images. A method for texture discrimination and image segmentation using local features based on the Gabor approach is introduced. Features related to the EFs' parameters provide efficient means for texture discrimination and classification. This method is invariant under rotation and translation. The performance of the classification appears to be robust with respect to noisy conditions. The results show the insensitivity of the discrimination to relatively high noise levels, comparable to the performances of the human observer.
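A compact sketch of the generic feature pipeline described (Gabor filter bank, local energy, then per-pixel classification); the kernel parameters, filter-bank size, and smoothing below are illustrative assumptions, not values taken from the paper, and the clustering step is left to any standard routine such as k-means over the per-pixel feature vectors.

    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter

    def gabor_kernel(frequency, theta, sigma=3.0, size=21):
        """Real (cosine) Gabor elementary function with given frequency and orientation."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * frequency * xr)

    def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
        """Stack of smoothed local-energy responses, one per (frequency, orientation)."""
        img = image.astype(float)
        feats = []
        for f in frequencies:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                resp = convolve(img, gabor_kernel(f, theta), mode="reflect")
                feats.append(gaussian_filter(resp**2, sigma=4.0))   # local energy
        return np.stack(feats, axis=-1)   # H x W x (len(frequencies) * n_orient)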

Journal ArticleDOI
TL;DR: During ageing the fixed charge density was found to increase at all sites of the femoral head provided fibrillation was absent: osmotic pressure increased accordingly whilst loss of fluid under the effect of externally applied compression diminished; in cartilage from osteoarthritic joints the opposite changes were found.
Abstract: We have investigated the changes in some of the biochemical and biophysical properties of human femoral head cartilage on the one hand during ageing and on the other hand in osteoarthritis. Topographical variations were also investigated. The parameters studied were those relevant to cartilage function, viz., proteoglycan concentration (as expressed by the concentration of negatively charged groups), the rate of glycosaminoglycan synthesis, water content, osmotic pressure and fluid loss during compression. During ageing the fixed charge density was found to increase at all sites of the femoral head provided fibrillation was absent: osmotic pressure increased accordingly whilst loss of fluid under the effect of externally applied compression diminished. In cartilage from osteoarthritic joints the opposite changes were found. The rate of GAG synthesis varied considerably with site on the femoral head. It decreased somewhat with age on the superior surface, but increased on the inferior surface. When the same s...

Journal ArticleDOI
TL;DR: A maximum-a-posteriori approach for enhancing speech signals which have been degraded by statistically independent additive noise is proposed, based on statistical modeling of the clean speech signal and the noise process using long training sequences from the two processes.
Abstract: A maximum-a-posteriori approach for enhancing speech signals which have been degraded by statistically independent additive noise is proposed. The approach is based on statistical modeling of the clean speech signal and the noise process using long training sequences from the two processes. Hidden Markov models (HMMs) with mixtures of Gaussian autoregressive (AR) output probability distributions (PDs) are used to model the clean speech signal. The model for the noise process depends on its nature. The parameter set of the HMM model is estimated using the Baum or the EM (estimation-maximization) algorithm. The noisy speech is enhanced by reestimating the clean speech waveform using the EM algorithm. Efficient approximations of the training and enhancement procedures are examined. This results in the segmental k-means approach for hidden Markov modeling, in which the state sequence and the parameter set of the model are alternately estimated. Similarly, the enhancement is done by alternate estimation of the state and observation sequences. An approximate improvement of 4.0-6.0 dB in signal-to-noise ratio (SNR) is achieved at 10-dB input SNR.

Proceedings ArticleDOI
30 Oct 1989
TL;DR: A complete combinatorial characterization of privately computable functions is given, and this characterization is used to derive tight bounds on the round complexity of any privately computable function and to design optimal private protocols that compute these functions.
Abstract: Each of two parties P1 and P2 holds an n-bit input, x and y, respectively. They wish to compute privately the value of f(x,y). Two questions are considered: (1) Which functions can be privately computed? (2) What is the communication complexity of protocols that privately compute a function f (in the case in which such protocols exist)? A complete combinatorial characterization of privately computable functions is given. This characterization is used to derive tight bounds on the round complexity of any privately computable function and to design optimal private protocols that compute these functions. It is shown that for every 1 ...

Journal ArticleDOI
TL;DR: In this paper, a calculus of convex subdifferentials and generalized gradients of Clarke (henceforth sometimes abbreviated C.g.g.) is presented, and it is shown that approximate subdifferentials are minimal (as sets) among all possible subdifferentials satisfying one or another set of conditions.
Abstract: This is the final paper of the series of three papers under the same title. The finite dimensional theory developed in the first of them [7] contains first of all: (a) a calculus having among its consequences the calculi of convex subdifferentials and generalized gradients of Clarke (henceforth sometimes abbreviated C.g.g.) in the most general form, which is partly due to the fact that in a finite dimensional space ... for any convex function f and ... for any S ⊂ X (A means approximate, C means Clarke); and (b) a theorem stating that approximate subdifferentials are minimal (as sets) among all possible subdifferentials satisfying one or another set of conditions (usually very natural).
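For readers outside nonsmooth analysis, the two classical objects being compared are recalled below; these are the standard definitions, given here only for orientation, since the paper's contribution is the calculus of the approximate subdifferential rather than these definitions. For a convex function f and a locally Lipschitz function f, respectively:

    \partial f(\bar x) \;=\; \{\, x^\ast : f(x) \ge f(\bar x) + \langle x^\ast,\, x - \bar x \rangle \ \ \forall x \,\}
    \qquad \text{(convex subdifferential)},

    f^{\circ}(\bar x; v) \;=\; \limsup_{x \to \bar x,\ t \downarrow 0} \frac{f(x + t v) - f(x)}{t},
    \qquad
    \partial_C f(\bar x) \;=\; \{\, x^\ast : \langle x^\ast, v \rangle \le f^{\circ}(\bar x; v) \ \ \forall v \,\}
    \qquad \text{(Clarke's generalized gradient)}.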

Journal ArticleDOI
TL;DR: In this article, a block-iterative version of the Agmon-Motzkin-Schoenberg relaxation method for solving systems of linear inequalities is derived, and it is shown that any sequence of iterations generated by the algorithm converges if the intersection of the given family of convex sets is nonempty and that the limit point of the sequence belongs to this intersection under mild conditions on the sequence of weight functions.
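A minimal sketch of a block-iterative relaxation step for a linear system A x <= b, with equal weights within each block and a relaxation parameter lambda; this is a generic illustration of the method's flavor in Python, not the paper's general weight-function framework or its convergence conditions.

    import numpy as np

    def block_relaxation(A, b, blocks, x0, lam=1.0, sweeps=100):
        """Block-iterative Agmon-Motzkin-Schoenberg-type relaxation for A x <= b.

        Each step averages, with equal weights, the relaxed projections of the
        current point onto the violated half-spaces of one block of rows."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(sweeps):
            for block in blocks:                      # e.g. [[0, 1], [2, 3], ...]
                steps = []
                for i in block:
                    violation = A[i] @ x - b[i]
                    if violation > 0:                 # half-space i is violated
                        steps.append(lam * violation / (A[i] @ A[i]) * A[i])
                if steps:
                    x -= np.mean(steps, axis=0)       # weighted (here: equal) average of projections
        return x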

Journal ArticleDOI
TL;DR: A regularization procedure of the Chapman-Enskog expansion is introduced, which extends the hydrodynamical description into higher-gradient domains and reduces in the long-wavelength limit to the Burnett equations.
Abstract: A regularization procedure of the Chapman-Enskog expansion is introduced. It extends the hydrodynamical description into higher-gradient domains. This was intended, but never accomplished satisfactorily, by the Burnett equations. The resulting macroscopic system formally has the same form (and complexity) as the Navier-Stokes equations, with the new transport coefficients being wavelength dependent. It reduces in the long-wavelength limit to the Burnett equations. In agreement with experimental evidence, strong shock layers are narrower in comparison with the predictions of the Navier-Stokes theory.

Proceedings ArticleDOI
01 Feb 1989
TL;DR: The equivalence of search and decision problems in the context of average case complexity; an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial-time; definitions and basic theorems regarding other complexity classes such as average log-space.
Abstract: This paper takes the next step in developing the theory of average case complexity initiated by Leonid A. Levin. Previous works [Levin 84, Gurevich 87, Venkatesan and Levin 88] have focused on the existence of complete problems. We widen the scope to other basic questions in computational complexity. Our results include: the equivalence of search and decision problems in the context of average case complexity; an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial-time; a proof that if all distributional-NP is in average polynomial-time then non-deterministic exponential-time equals deterministic exponential time (i.e., a collapse in the worst case hierarchy); definitions and basic theorems regarding other complexity classes such as average log-space.

Proceedings ArticleDOI
01 Feb 1989
TL;DR: An interesting “zero-one” law for private distributed computation of Boolean functions is derived, which implies that for Boolean functions, the strong and the weak notions of privacy are equivalent.
Abstract: A Boolean function ƒ: A1 × A2 × … × An → {0,1} is t-private if there exists a protocol for computing ƒ so that no coalition of size ≤ t can infer any additional information from the execution, other than the value of the function. We show that ƒ is ⌈n/2⌉-private if and only if it can be represented as ƒ(x1, x2, …, xn) = ƒ1(x1) ⊕ ƒ2(x2) ⊕ … ⊕ ƒn(xn), where the ƒi are arbitrary Boolean functions. It follows that if ƒ is ⌈n/2⌉-private, then it is also n-private. Combining this with a result of Ben-Or, Goldwasser, and Wigderson, we derive an interesting “zero-one” law for private distributed computation of Boolean functions: every Boolean function defined over a finite domain is either n-private, or it is ⌊(n−1)/2⌋-private but not ⌈n/2⌉-private. We also investigate a weaker notion of privacy, where (a) coalitions are allowed to infer a limited amount of additional information, and (b) there is a probability of error in the final output of the protocol. We show that the same characterization of ⌈n/2⌉-private Boolean functions holds, even under these weaker requirements. In particular, this implies that for Boolean functions, the strong and the weak notions of privacy are equivalent.
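The structural side of the characterization can be checked directly on small domains: ƒ decomposes as an XOR of single-variable functions exactly when, for a fixed reference point a, ƒ(x) always equals the XOR of ƒ at the points agreeing with a except in one coordinate, corrected by (n−1)·ƒ(a) mod 2. The brute-force Python sketch below illustrates that criterion only; it says nothing about protocols or privacy and the criterion's phrasing is ours, not the paper's.

    from itertools import product

    def is_xor_decomposable(f, domains):
        """Check whether f(x1,...,xn) == f1(x1) ^ ... ^ fn(xn) for some Boolean fi.

        Uses the identity (valid iff f decomposes): with a fixed reference point a,
        f(x) = XOR_i f(a with coordinate i replaced by x_i) ^ ((n-1) % 2) * f(a)."""
        a = tuple(d[0] for d in domains)
        n = len(domains)
        base = f(*a)
        for x in product(*domains):
            predicted = ((n - 1) % 2) * base
            for i in range(n):
                xi_point = a[:i] + (x[i],) + a[i + 1:]
                predicted ^= f(*xi_point)
            if predicted != f(*x):
                return False
        return True

    # example: 3-argument parity is decomposable, majority is not
    parity = lambda x, y, z: x ^ y ^ z
    majority = lambda x, y, z: int(x + y + z >= 2)
    bits = [(0, 1)] * 3
    print(is_xor_decomposable(parity, bits), is_xor_decomposable(majority, bits))  # True False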

Journal ArticleDOI
TL;DR: It is shown that both the sensitivities (derivatives, gradients, Hessians, etc.) and the performance measure can be estimated simultaneously from the same simulation.
Abstract: We present a method for deriving sensitivities of performance measures for computer simulation models. We show that both the sensitivities (derivatives, gradients, Hessians, etc.) and the performance measure can be estimated simultaneously from the same simulation. Our method is based on probability measure transformations derived from the efficient score. We also present a rather general procedure from which perturbation analysis and our method can be viewed as particular cases. Applications to reliability models and stochastic shortest path networks are given.
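A toy sketch of the general idea (score-function, or likelihood-ratio, gradient estimation): the same simulated samples give both the performance estimate E[h(X)] and its derivative with respect to a distribution parameter, via the score d/dθ log f_θ(X). The exponential example below is chosen purely for illustration and is not a model from the paper.

    import numpy as np

    def performance_and_sensitivity(h, theta, n_samples=100_000, seed=0):
        """Estimate E_theta[h(X)] and d/dtheta E_theta[h(X)] from one simulation run,
        where X ~ Exponential(rate=theta); the score is d/dtheta log f_theta(X) = 1/theta - x."""
        rng = np.random.default_rng(seed)
        x = rng.exponential(scale=1.0 / theta, size=n_samples)
        hx = h(x)
        score = 1.0 / theta - x
        return hx.mean(), (hx * score).mean()

    # example: h(x) = x, so E[h] = 1/theta and dE[h]/dtheta = -1/theta**2
    est, sens = performance_and_sensitivity(lambda x: x, theta=2.0)
    # est is close to 0.5, sens is close to -0.25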

Journal ArticleDOI
M. Z. Akrawy, Gideon Alexander, J. Allison, Phillip Allport, and 274 more authors (23 institutions)
TL;DR: In this paper, the cross section for e+e− → hadrons was determined from a scan around the Z0 pole, based on 4350 hadronic events collected over seven energy points between 89.26 and 93.26 GeV, yielding a total decay width of 2.60±0.13 GeV.