
Showing papers by "Technion – Israel Institute of Technology published in 1990"


Journal ArticleDOI
TL;DR: Shadow arrays are introduced which keep track of the incremental changes to the synaptic weights during a single pass of back-propagating learning and are ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list.
Abstract: The sensitivity of the global error (cost) function to the inclusion/exclusion of each synapse in the artificial neural network is estimated. Introduced are shadow arrays which keep track of the incremental changes to the synaptic weights during a single pass of back-propagating learning. The synapses are then ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands a negligible computational overhead.
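The pruning step described in the abstract can be sketched in a few lines. The sensitivity numbers are taken as given here (the paper estimates them from shadow arrays accumulated during a back-propagation pass); the function name, example values, and the 50% pruning fraction are illustrative assumptions, not from the paper.

```python
# Sketch of sensitivity-ordered pruning: sort synapses by sensitivity
# and discard the least sensitive fraction. The sensitivity values are
# illustrative; the paper derives them from shadow arrays.

def prune_by_sensitivity(weights, sensitivities, fraction):
    """Return weights with the least-sensitive `fraction` zeroed out."""
    order = sorted(range(len(weights)), key=lambda i: sensitivities[i])
    n_prune = int(len(weights) * fraction)
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0          # "discard the last items of the sorted list"
    return pruned

weights = [0.5, -1.2, 0.05, 2.0]
sens    = [0.9,  0.4, 0.01, 1.5]   # estimated sensitivity of each synapse
print(prune_by_sensitivity(weights, sens, 0.5))  # -> [0.5, 0.0, 0.0, 2.0]
```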

684 citations


Journal ArticleDOI
TL;DR: An integrated strategy is described which utilizes the distinct advantages of each scheme and shows that, in hard problems, the average improvement realized by the integrated scheme is 20–25% higher than any of the individual schemes.

553 citations


Journal ArticleDOI
TL;DR: The shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions is considered and algorithms for finding the shortest path and minimum delay under various waiting constraints are presented.
Abstract: In this paper the shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions is considered. Algorithms for finding the shortest path and minimum delay under various waiting constraints are presented and the properties of the derived path are investigated. It is shown that if departure time from the source node is unrestricted, then a shortest path can be found that is simple and achieves a delay as short as the most unrestricted path. In the case of restricted transit, it is shown that there exist cases in which the minimum delay is finite, but the path that achieves it is infinite.
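A minimal sketch of the earliest-arrival computation, under the simplifying assumption of FIFO (non-overtaking) delay functions so that a Dijkstra-style label-setting search is valid; the paper treats arbitrary delay functions and waiting constraints, which need more machinery. All names are illustrative.

```python
import heapq

# Sketch: earliest-arrival search with time-dependent edge delays.
# Assumes FIFO delays (leaving later never lets you arrive earlier),
# under which label-setting search remains correct.

def earliest_arrival(graph, source, target, t0):
    """graph[u] = [(v, delay_fn)], delay_fn(t) = delay when leaving u at t."""
    dist = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > dist.get(u, float('inf')):
            continue
        for v, delay in graph[u]:
            arrive = t + delay(t)
            if arrive < dist.get(v, float('inf')):
                dist[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return float('inf')

g = {
    'a': [('b', lambda t: 1), ('c', lambda t: 5)],
    'b': [('c', lambda t: 1 if t < 2 else 10)],  # congestion after t = 2
    'c': [],
}
print(earliest_arrival(g, 'a', 'c', 0))  # leave a at time 0, via b: arrive 2
```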

550 citations


Book
23 Apr 1990
TL;DR: Approximate formulae are provided for conflict-free access protocols, ALOHA protocols, carrier-sensing protocols, collision-resolution protocols, and additional topics.
Abstract: Conflict-free access protocols; ALOHA protocols; carrier-sensing protocols; collision-resolution protocols; additional topics. Appendix: mathematical formulae and background.
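A classical result surveyed in texts on multiple-access protocols is the slotted-ALOHA throughput formula: with Poisson offered load G attempts per slot, a slot succeeds when exactly one station transmits, giving S = G * exp(-G), maximized at G = 1 with S = 1/e. A quick numeric check (illustrative, not the book's code):

```python
import math

# Slotted-ALOHA throughput: a slot carries a successful packet exactly
# when one station transmits, which for Poisson(G) attempts per slot
# happens with probability G * exp(-G).

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

peak = max(slotted_aloha_throughput(g / 100) for g in range(1, 300))
print(round(peak, 3))                            # 0.368, i.e. ~1/e
print(round(slotted_aloha_throughput(1.0), 3))   # 0.368
```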

549 citations


Journal ArticleDOI
TL;DR: The pathogenesis of shock and acute renal failure associated with traumatic rhabdomyolysis is reviewed, and guidelines are suggested for the early management of shock and the prophylaxis of acute renal failure due to the crush syndrome.
Abstract: SEISMIC catastrophes leave in their wake survivors trapped under the rubble who suffer from extensive muscle damage and its devastating sequelae of hemodynamic and metabolic disturbances and acute renal failure.1 We review here the pathogenesis of shock and acute renal failure associated with traumatic rhabdomyolysis and suggest guidelines for the early management of shock and the prophylaxis of acute renal failure due to the crush syndrome. Rhabdomyolysis, myoglobinuria, and renal failure have been known to follow massive crush injury.2 3 4 5 6 Indeed, it is now 50 years after the classic description of the crush syndrome in patients injured during the bombing of . . .

478 citations


Journal ArticleDOI
TL;DR: This work seeks to convert the Gabor representation into a discrete and finite format that is directly suitable for numerical implementation, facilitating the selection of arbitrary window functions as well as arbitrary oversampling rates.

474 citations


Journal ArticleDOI
TL;DR: Using isospin relations, it is shown that the theoretical uncertainty due to penguin diagrams in CP-violating asymmetries for neutral-B decays to CP eigenstates can be removed.
Abstract: There is some theoretical uncertainty in the predictions for CP-violating hadronic asymmetries in neutral-B decays to CP eigenstates due to the existence of penguin diagrams. Using isospin relations, we show that it is possible to remove this uncertainty for the decays B_d^0 → ππ.

431 citations


Journal ArticleDOI
01 Jan 1990
TL;DR: The results indicate a significant selective increase of Fe3+ and ferritin in substantia nigra zona compacta but not in zona reticulata of Parkinsonian brains, confirming the biochemical estimation of iron.
Abstract: Semiquantitative histological evaluation of brain iron and ferritin in Parkinson's (PD) and Alzheimer's disease (DAT) have been performed in paraffin sections of brain regions which included frontal cortex, hippocampus, basal ganglia and brain stem. The results indicate a significant selective increase of Fe3+ and ferritin in substantia nigra zona compacta but not in zona reticulata of Parkinsonian brains, confirming the biochemical estimation of iron. No such changes were observed in the same regions of DAT brains. The increase of iron is evident in astrocytes, macrophages, reactive microglia and non-pigmented neurons, and in damaged areas devoid of pigmented neurons. In substantia nigra of PD and PD/DAT, strong ferritin reactivity was also associated with proliferated microglia. A faint iron staining was seen occasionally in peripheral halo of Lewy bodies. By contrast, in DAT and PD/DAT, strong ferritin immunoreactivity was observed in and around senile plaques and neurofibrillary tangles. The interrelationship between selective increase of iron and ferritin in PD requires further investigation, because both changes could participate in the induction of oxidative stress and neuronal death, due to their ability to promote formation of oxygen radicals.

431 citations


Book
06 Apr 1990
TL;DR: This book covers concurrent programming, distributed programming (including the mutual exclusion problem), and implementation principles, with appendices on the Ada emulations.
Abstract: I CONCURRENT PROGRAMMING: 1. What is Concurrent Programming? 2. The Concurrent Programming Abstraction. 3. The Mutual Exclusion Problem. 4. Semaphores. 5. Monitors. 6. The Problem of Dining Philosophers. II DISTRIBUTED PROGRAMMING: 7. Distributed Programming Models. 8. Ada. 9. occam. 10. Linda. 11. Distributed Mutual Exclusion. 12. Distributed Termination. 13. The Byzantine Generals Problem. III IMPLEMENTATION PRINCIPLES: 14. Single Processor Implementation. 15. Multi-processor Implementation. 16. Real-Time Programming. Appendix A: Ada Overview. B: Concurrent Programs in Ada. C: Implementation of the Ada Emulations. D: Distributed Algorithms in Ada. Bibliography. Index.

400 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the result of the measurement of an operator depends solely on the system being measured and not on the operator itself, and that if operators A and B commute, the result is the product of the results of separate measurements of A and of B.

389 citations


Journal ArticleDOI
TL;DR: In this paper, a theory of nonexpansive iterations in more general infinite-dimensional manifolds has been developed, which includes all normed linear spaces and Hadamard manifolds, as well as the Hilbert ball and the Cartesian product of Hilbert balls.
Abstract: ONE OF THE most active research areas in nonlinear functional analysis is the asymptotics of nonexpansive mappings. Most of the results, however, have been obtained in normed linear spaces. It is natural, therefore, to try to develop a theory of nonexpansive iterations in more general infinite-dimensional manifolds. This is the purpose of the present paper. More specifically, we propose the class of hyperbolic spaces as an appropriate background for the study of operator theory in general, and of iterative processes for nonexpansive mappings in particular. This class of metric spaces, which is defined in Section 2, includes all normed linear spaces and Hadamard manifolds, as well as the Hilbert ball and the Cartesian product of Hilbert balls. In Section 3 we introduce co-accretive operators and their resolvents, and present some of their properties. In the fourth section we discuss the concept of uniform convexity for hyperbolic spaces. Section 5 is devoted to two new geometric properties of (infinite-dimensional) Banach spaces. Theorem 5.6 provides a characterization of Banach spaces having these properties in terms of nonlinear accretive operators. In Sections 6, 7 and 8 we study explicit, implicit and continuous iterations, respectively, using the same approach in all three sections. We illustrate this common approach with the following special case. Let C be a closed convex subset of a hyperbolic space (X, ρ), let T: C → C be a nonexpansive mapping, and let x be a point in C. In order to study the iteration (T^n x: n = 0, 1, 2, ...), we set z_n = (1 − (1/n))x ⊕ (1/n)T^n x, K = cl co{z_j : j ≥ 1}, and d = inf{ρ(y, Ty): y ∈ C}. The first step is to show that ρ(x, K) = lim_{n→∞} ρ(x, T^n x)/n = d. This leads to the convergence of {z_n} when X is uniformly convex, and to the weak convergence of {z_n} when X is a Banach space which is reflexive and strictly convex. When T is an averaged mapping we are also able to establish the following triple equality: for all k ≥ 1, ...
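In the normed-linear-space instance of the paper's hyperbolic spaces, ρ is the norm distance, so the limit ρ(x, T^n x)/n → d = inf ρ(y, Ty) can be checked numerically. The translation map below is an illustrative nonexpansive T with d = |c|; it is not an example taken from the paper.

```python
# Numeric sketch of the limit rho(x, T^n x)/n -> d = inf rho(y, Ty)
# in a normed linear space (one instance of the paper's hyperbolic
# spaces). T is translation by c, which is nonexpansive and satisfies
# inf_y |T(y) - y| = |c|.

c = 0.75
T = lambda y: y + c              # illustrative nonexpansive map

x = 10.0
y = x
N = 10000
for _ in range(N):
    y = T(y)                     # y = T^N x
rate = abs(y - x) / N            # rho(x, T^N x)/N
print(rate)                      # -> 0.75, i.e. d = |c|
```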

Journal ArticleDOI
TL;DR: In this paper, the authors introduced normalized helicity and helicity density for the graphical representation of three-dimensional flow fields that contain concentrated vortices, which can be used to identify and accentuate the concentrated vortex-core streamlines and mark their separation and reattachment lines.
Abstract: Helicity density and normalized helicity are introduced as important tools for the graphical representation of three-dimensional flowfields that contain concentrated vortices. The use of these two quantities filters out the flowfield regions of low vorticity, as well as regions of high vorticity but low speed where the angle between the velocity and vorticity vectors is large (such as in the boundary layer). Their use permits the researcher to identify and accentuate the concentrated vortices, differentiate between primary and secondary vortices, and mark their separation and reattachment lines. The method also allows locating singular points in the flowfield and tracing the vortex-core streamlines that emanate from them. Nomenclature: H = helicity; Hd = helicity density; Hn = normalized helicity; M∞ = freestream Mach number; ReD = Reynolds number; V = velocity; α = angle of attack; ω = vorticity.
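The two quantities in the abstract reduce to simple vector operations: helicity density Hd = V·ω and normalized helicity Hn = V·ω/(|V||ω|), the cosine of the angle between the velocity and vorticity vectors. A minimal sketch with illustrative values:

```python
# Helicity density H_d = V . omega and normalized helicity
# H_n = V . omega / (|V| |omega|); H_n near +/-1 flags vortex-core-like
# regions where velocity and vorticity are nearly aligned.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def helicity_density(V, omega):
    return dot(V, omega)

def normalized_helicity(V, omega):
    return dot(V, omega) / (norm(V) * norm(omega))

V     = (1.0, 0.0, 0.0)
omega = (1.0, 0.0, 0.0)          # aligned vectors: H_n = +1
print(helicity_density(V, omega), normalized_helicity(V, omega))  # 1.0 1.0
```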

Journal ArticleDOI
TL;DR: The expression of functional VEGF receptors was inhibited when the cells were preincubated with tunicamycin, indicating that glycosylation of the receptor is important for the expression of functional VEGF receptors.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: A novel modular method for constructing uniform self-stabilizing mutual exclusion (USSME) protocols is presented, and the viability of the method is demonstrated by constructing, for the first time, a randomized USSME protocol for any arbitrary dynamic graph and another one for dynamic rings.
Abstract: A self-stabilizing system is a system which reaches a legal configuration by itself, without any kind of outside intervention, when started from any arbitrary configuration. Hence a self-stabilizing system accommodates any possible initial configuration and tolerates transient bugs. This fact contributes most of the extra difficulty of devising self-stabilizing systems. On the other hand, the same fact makes self-stabilizing systems so appealing, as no initialization of the system is required. In this paper a novel modular method for constructing uniform self-stabilizing mutual exclusion (or, in short, USSME) protocols is presented. The viability of the method is demonstrated by constructing for the first time a randomized USSME protocol for any arbitrary dynamic graph and another one for dynamic rings. The correctness of both protocols is proved and their complexity is analyzed.
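The self-stabilization property itself can be illustrated with a classical protocol that predates this paper: Dijkstra's K-state token ring, which is non-uniform (machine 0 is distinguished) and deterministic, unlike the uniform randomized USSME protocols constructed here. From any arbitrary initial configuration it converges to exactly one circulating privilege:

```python
import random

# Dijkstra's K-state self-stabilizing token ring (a classical
# illustration, NOT the paper's protocol). Machine 0 is distinguished;
# with K >= n states, any initial configuration converges to a single
# privilege (token) under any central-daemon schedule.

def privileged(states, i, K, n):
    if i == 0:
        return states[0] == states[n - 1]
    return states[i] != states[i - 1]

def step(states, i, K, n):
    if i == 0:
        states[0] = (states[n - 1] + 1) % K   # machine 0 advances its state
    else:
        states[i] = states[i - 1]             # others copy their predecessor

n, K = 5, 6
random.seed(1)
states = [random.randrange(K) for _ in range(n)]   # arbitrary (corrupted) start
for _ in range(500):                               # schedule: any privileged node
    movers = [i for i in range(n) if privileged(states, i, K, n)]
    step(states, random.choice(movers), K, n)

tokens = sum(privileged(states, i, K, n) for i in range(n))
print(tokens)                                      # exactly 1 after stabilization
```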

Journal ArticleDOI
TL;DR: Changes in blood flow characteristics in the ascending uterine artery reflect the perpetual growth and development of the uteroplacental circulation, which provides the metabolic demands of the growing fetus throughout gestation.

Journal ArticleDOI
TL;DR: A formalism is developed to obtain two different types of nearest-neighbor probability density functions and closely related quantities, such as their associated cumulative distributions and conditional pair distributions, for many-body systems of D-dimensional spheres.
Abstract: The probability of finding a nearest neighbor at some given distance from a reference point in a many-body system of interacting particles is of importance in a host of problems in the physical as well as biological sciences. We develop a formalism to obtain two different types of nearest-neighbor probability density functions (void and particle probability densities) and closely related quantities, such as their associated cumulative distributions and conditional pair distributions, for many-body systems of D-dimensional spheres. For the special case of impenetrable (hard) spheres, we compute low-density expansions of each of these quantities and obtain analytical expressions for them that are accurate for a wide range of sphere concentrations. Using these results, we are able to calculate the mean nearest-neighbor distance for distributions of D-dimensional impenetrable spheres. Our theoretical results are found to be in excellent agreement with computer-simulation data.
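A Monte Carlo sanity check in the simplest limiting case of this framework: for fully penetrable spheres in D = 1 (equivalently, the zero-density limit of hard rods), nearest-neighbor distances in a Poisson process of density ρ are exponentially distributed with mean 1/(2ρ). The parameters below are illustrative.

```python
import random

# Monte Carlo check of the D = 1, fully penetrable (Poisson) case:
# for density rho on a periodic line, the nearest-neighbor distance is
# Exp(2*rho) with mean 1/(2*rho).

random.seed(0)
N, L = 20000, 1000.0                  # density rho = N/L = 20
pts = sorted(random.uniform(0, L) for _ in range(N))

nn = []
for i, x in enumerate(pts):
    left  = x - pts[i - 1] if i > 0 else x + L - pts[-1]       # periodic wrap
    right = pts[i + 1] - x if i < N - 1 else pts[0] + L - x
    nn.append(min(left, right))

mean_nn = sum(nn) / N
print(round(mean_nn, 4), 1 / (2 * (N / L)))   # simulated vs exact 0.025
```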

Book ChapterDOI
01 Feb 1990
TL;DR: It is shown that every language that admits an interactive proof admits a (computational) zero-knowledge interactive proof.
Abstract: Assuming the existence of a secure probabilistic encryption scheme, we show that every language that admits an interactive proof admits a (computational) zero-knowledge interactive proof. This result extends the result of Goldreich, Micali and Wigderson, that, under the same assumption, all of NP admits zero-knowledge interactive proofs. Assuming envelopes for bit commitment, we show that every language that admits an interactive proof admits a perfect zero-knowledge interactive proof.

Journal ArticleDOI
TL;DR: The results suggest that transcutaneous oxygen tension may be useful as a general indicator of the effectiveness of PDT and as an in situ predictor of the energy required to elicit tumor damage.
Abstract: Among the sequence of events which occur during photodynamic therapy (PDT) are depletion of oxygen and disruption of tumor blood flow. In order to more clearly understand these phenomena we have utilized transcutaneous oxygen electrodes to monitor tissue oxygen disappearance. These results provide, for the first time, non-invasive real-time information regarding the influence of light dose on tissue oxygenation during irradiation. Measurements were conducted on transplanted VX-2 skin carcinomas grown in the ears of New Zealand white rabbits. Rabbits were treated with Photofrin II and tumors were irradiated with up to 200 kJ/m2 (500 W/m2) of 630-nm light. Substantial reductions in tumor oxygen tension were observed upon administration of as little as 20 kJ/m2. For a series of brief irradiations, oxygen tension was modulated by the appearance of laser light. Tissue oxygen reversibility appeared to be dependent upon PDT dose. Long-term, irreversible tissue hypoxia was recorded in tumors for large (200 kJ/m2) fluences. These results suggest that transcutaneous oxygen tension may be useful as a general indicator of the effectiveness of PDT and as an in situ predictor of the energy required to elicit tumor damage.

Journal ArticleDOI
TL;DR: An algebraic framework is developed that exposes the structural kinship among the de Bruijn, shuffle-exchange, butterfly, and cube-connected cycles networks, and a family of "leveled" algorithms is exhibited which run as efficiently on the smaller associated graph H as they do on (the much larger) Cayley graph G.
Abstract: The authors develop an algebraic framework that exposes the structural kinship among the de Bruijn, shuffle-exchange, butterfly, and cube-connected cycles networks and illustrate algorithmic benefits that ensue from the exposed relationships. The framework builds on two algebraically specified genres of graphs: A group action graph (GAG, for short) is given by a set V of vertices and a set Π of permutations of V: for each v ∈ V and each π ∈ Π, there is an arc labeled π from vertex v to vertex πv. A Cayley graph is a GAG (V, Π), where V is the group Gr(Π) generated by Π and where each π ∈ Π acts on each g ∈ Gr(Π) by right multiplication. The graphs (Gr(Π), Π) and (V, Π) are called associated graphs. It is shown that every GAG is a quotient graph of its associated Cayley graph. By applying such general results, the authors determine the following: The butterfly network (a Cayley graph) and the de Bruijn network (a GAG) are associated graphs. The cube-connected cycles network (a Cayley graph) and the shuffle-exchange network (a GAG) are associated graphs. The order-n instances of both the butterfly and the cube-connected cycles share the same underlying group, but have slightly different generator sets Π. By analyzing these algebraic results, the authors delimit, for any Cayley graph G and associated GAG H, a family of "leveled" algorithms which run as efficiently on H as they do on (the much larger) G. Further analysis of the results yields new, surprisingly efficient simulations by the shuffle-oriented networks (the shuffle-exchange and de Bruijn networks) of like-sized butterfly-oriented networks (the butterfly and cube-connected cycles networks): An N-vertex butterfly-oriented network can be simulated by the smallest shuffle-oriented network that is big enough to hold it with slowdown O(log log N). This simulation is exponentially faster than the anticipated logarithmic slowdown.
The mappings that underlie the simulation can be computed in linear time; and they afford one an algorithmic technique for translating any program developed for a butterfly-oriented architecture into an equivalent program for a shuffle-oriented architecture, the latter program incurring only the indicated slowdown factor.
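The group-action view can be made concrete in a few lines: take the vertex set of k-bit strings acted on by a shuffle permutation σ (cyclic left rotation) and an exchange permutation ε (flip the low-order bit). The arcs s → σ(s) and s → ε(σ(s)) are exactly the usual de Bruijn arcs s → 2s and s → 2s+1 (mod 2^k). This is only a sketch of the idea, not the paper's full construction:

```python
# GAG sketch: vertices are k-bit strings; generators are the shuffle
# permutation sigma (cyclic left rotation) and the exchange permutation
# eps (flip the low bit). The two arcs out of s, sigma(s) and
# eps(sigma(s)), coincide with the standard de Bruijn arcs 2s and 2s+1.

k = 3
V = range(2 ** k)

def sigma(s):                     # cyclic left shift of the k-bit string
    return ((s << 1) | (s >> (k - 1))) & (2 ** k - 1)

def eps(s):                       # exchange: flip the low-order bit
    return s ^ 1

debruijn_via_gag = {s: {sigma(s), eps(sigma(s))} for s in V}
debruijn_direct  = {s: {(2 * s) % 2 ** k, (2 * s + 1) % 2 ** k} for s in V}
print(debruijn_via_gag == debruijn_direct)   # True
```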

Proceedings ArticleDOI
01 Apr 1990
TL;DR: It is proved the existence of an efficient “simulation” of randomized on-line algorithms by deterministic ones, which is best possible in general.
Abstract: Against an adaptive adversary, we show that the power of randomization in on-line algorithms is severely limited! We prove the existence of an efficient "simulation" of randomized on-line algorithms by deterministic ones, which is best possible in general. The proof of the upper bound is existential. We deal with the issue of computing the efficient deterministic algorithm, and show that this is possible in very general cases.

Journal ArticleDOI
TL;DR: Probabilistic algorithms are proposed to overcome the difficulty of designing a ring of n processors such that they will be able to choose a leader by sending messages along the ring, if the processors are indistinguishable.
Abstract: Given a ring of n processors, it is required to design the processors such that they will be able to choose a leader (a uniquely designated processor) by sending messages along the ring. If the processors are indistinguishable, then there exists no deterministic algorithm to solve the problem. To overcome this difficulty, probabilistic algorithms are proposed. The algorithms may run forever, but they terminate within finite time on the average. For the synchronous case several algorithms are presented: the simplest requires, on the average, the transmission of no more than 2.442n bits and O(n) time. More sophisticated algorithms trade time for communication complexity. If the processors work asynchronously, then on the average O(n log n) bits are transmitted. In the above cases the size of the ring is assumed to be known to all the processors. If the size is not known, then finding it may be done only with high probability: any algorithm may yield incorrect results (with nonzero probability) for some values of n. Another difficulty is that, if we insist on correctness, the processors may not explicitly terminate. Rather, the entire ring reaches an inactive state, in which no processor initiates communication.
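The symmetry-breaking idea behind such probabilistic algorithms can be sketched centrally, abstracting away the ring message passing that the paper charges for: candidates repeatedly draw random identifiers, and only those drawing the round's maximum survive, so the process terminates with probability 1. The draw range and round structure below are illustrative, not the paper's exact protocol.

```python
import random

# Centralized sketch of randomized symmetry breaking among n initially
# indistinguishable candidates: each round, survivors draw random IDs
# and only holders of the round maximum remain. A unique maximum elects
# a leader; terminates with probability 1.

def elect_leader(n, rng):
    candidates = list(range(n))
    rounds = 0
    while len(candidates) > 1:
        rounds += 1
        draws = {c: rng.randrange(n) for c in candidates}
        top = max(draws.values())
        candidates = [c for c in candidates if draws[c] == top]
    return candidates[0], rounds

leader, rounds = elect_leader(8, random.Random(7))
print(leader, rounds)
```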

Book ChapterDOI
01 Jul 1990
TL;DR: A basic question concerning zero-knowledge proof systems is whether their (sequential and/or parallel) composition is zero- knowledge too.
Abstract: A basic question concerning zero-knowledge proof systems is whether their (sequential and/or parallel) composition is zero-knowledge too. This question is not only of natural theoretical interest, but is also of great practical importance as it concerns the use of zero-knowledge proofs as subroutines in cryptographic protocols.


Journal ArticleDOI
TL;DR: It is shown that the option of allowing ties in the preference lists can significantly affect the computational complexity of stable matching, and it will be shown that, when ties are allowed, the roommate problem is NP-complete.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: This paper explores the possibility of extending an arbitrary program into a self-stabilizing one using an asynchronous distributed message-passing system whose communication topology is an arbitrary graph.
Abstract: A self-stabilizing program eventually resumes normal behavior even if execution begins in an abnormal initial state. In this paper, we explore the possibility of extending an arbitrary program into a self-stabilizing one. Our contributions are: (1) a formal definition of the concept of one program being a self-stabilizing extension of another; (2) a characterization of what properties may hold in such extensions; (3) a demonstration of the possibility of mechanically creating such extensions. The computational model used is that of an asynchronous distributed message-passing system whose communication topology is an arbitrary graph. We contrast the difficulties of self-stabilization in this model with those of the more common shared-memory models.

Journal ArticleDOI
TL;DR: In this article, the observed morphologies of planetary nebulae (PNs) that are known to have close-binary nuclei, in the light of theoretical studies of common-envelope ejection followed by wind shaping, are examined.
Abstract: This paper examines the observed morphologies of planetary nebulae (PNs) that are known to have close-binary nuclei, in the light of theoretical studies of common-envelope ejection followed by wind shaping. Some of the physical aspects of the spiraling-in process and PN ejection are described, and the subsequent shaping of the PN via a stellar wind is examined. A list of the PNs known to have close-binary nuclei is presented together with the imagery of these PNs. The observed morphologies are discussed in the context of the theoretical predictions. 97 refs.

Journal ArticleDOI
TL;DR: Two main electronic processes are shown: quasiparticle avalanche production during hot-carrier thermalization, which takes about 300 fsec; and recombination of quasiparticles to form Cooper pairs, which is completed within 5 psec.
Abstract: Femtosecond dynamics of photogenerated quasiparticles in YBa2Cu3O7−δ superconducting thin films shows, at T ≤ Tc, two main electronic processes: (i) quasiparticle avalanche production during hot-carrier thermalization, which takes about 300 fsec; (ii) recombination of quasiparticles to form Cooper pairs, which is completed within 5 psec. In contrast, nonsuperconducting epitaxial films such as PrBa2Cu2O7 and YBa2Cu3O6 show regular picosecond electronic response.

Journal ArticleDOI
TL;DR: A procedure that determines whether a relational schema is EER-convertible is developed, a normal form is proposed for relational schemas representing EER object structures, and the corresponding normalization procedure is presented.
Abstract: Relational schemas consisting of relation-schemes, key dependencies and key-based inclusion dependencies (referential integrity constraints) are considered. Schemas of this form are said to be entity-relationship (EER)-convertible if they can be associated with an EER schema. A procedure that determines whether a relational schema is EER-convertible is developed. A normal form is proposed for relational schemas representing EER object structures. For EER-convertible relational schemas, the corresponding normalization procedure is presented. The procedures can be used for analyzing the semantics of existing relational databases and for converting relational database schemas into object-oriented database schemas.

Journal ArticleDOI
01 May 1990-Infor
TL;DR: A pilot application of Data Envelopment Analysis (DEA) for the measurement of the efficiency of highway maintenance patrols is demonstrated and results are compared to those obtained from a conventional DEA model.
Abstract: A pilot application of Data Envelopment Analysis (DEA) for the measurement of the efficiency of highway maintenance patrols is demonstrated. Selection of pertinent factors is discussed and the potential benefits of the analysis listed. A bounded DEA model is constructed and results are compared to those obtained from a conventional DEA model. The effects of secondary factors on the relative efficiencies of patrols are examined by analyses of sub-groups of Decision Making Units (DMUs), differing in the intensities of the respective factors.

Journal ArticleDOI
01 Dec 1990
TL;DR: The classical direct detection optical channel is modelled by an observed Poisson process with intensity (rate) γ(t) + γo, where γ (t) is the information carrying input waveform andγo represents the ‘dark current’.
Abstract: The classical direct detection optical channel is modelled by an observed Poisson process with intensity (rate) γ(t) + γo, where γ(t) is the information-carrying input waveform and γo represents the 'dark current'. The capacity of this channel is considered within a restricted class of peak-power (γ(t) ≤ A) and average-power (E(γ(t)) ≤ σ) constrained, pulse-amplitude-modulated input waveforms. Within this class, where γ(t) = γi during the ith signalling interval iΔ ≤ t < (i + 1)Δ, the 'symbol duration' Δ affects the spectral properties ('bandwidth') of γ(t). The capacity-achieving distribution of the symbols {γi} is determined by setting {γi} to be an independent identically distributed sequence of discrete random variables taking on a finite number of values. The two-valued distribution of γ with mass points located at 0 and A is capacity achieving for σ = A (no average power constraint) and γo = 0, in the region 0 < AΔ < 3.3679. In the following region (3.3679 ≤ AΔ < e) the ternary distribution is capacity achieving, with the additional mass point at 0.339A.
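The binary-input case can be explored numerically: with γo = 0 and symbol duration Δ, the photon count in a slot is Poisson(AΔ) when A is sent and exactly 0 when 0 is sent, so the mutual information can be maximized over the input probability p by grid search. This sketch only evaluates the binary input law; it does not reproduce the paper's ternary threshold at AΔ = 3.3679.

```python
import math

# Mutual information of the discrete-time Poisson channel with zero
# dark current and binary input {0, A}: Y ~ Poisson(A*Delta) when A is
# sent, Y = 0 when 0 is sent. Grid search over P(gamma = A) = p.

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mutual_information(p, lam, kmax=60):
    I = 0.0
    for k in range(kmax):
        pk_given_A = poisson_pmf(k, lam)
        pk_given_0 = 1.0 if k == 0 else 0.0
        pk = p * pk_given_A + (1 - p) * pk_given_0
        if pk_given_A > 0 and pk > 0:
            I += p * pk_given_A * math.log(pk_given_A / pk)
        if pk_given_0 > 0 and pk > 0:
            I += (1 - p) * pk_given_0 * math.log(pk_given_0 / pk)
    return I                      # nats per symbol

lam = 1.0                         # A * Delta (illustrative value)
best_p = max((i / 200 for i in range(1, 200)),
             key=lambda p: mutual_information(p, lam))
print(round(best_p, 3), round(mutual_information(best_p, lam), 4))
```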