
Showing papers by "Technion – Israel Institute of Technology published in 1993"


Journal ArticleDOI
TL;DR: In this article, the authors show that most of the characterizations that were reported thus far in the literature are special cases of the following general result: a standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial.

1,581 citations
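The "if" direction of this result can be made concrete for ReLU, a standard non-polynomial activation: any piecewise-linear interpolant of a continuous function is exactly a one-hidden-layer ReLU network, so refining the grid drives the error to zero. A minimal pure-Python sketch (our own construction for illustration, not the paper's proof technique):

```python
import math

def relu(z):
    return max(0.0, z)

def relu_interpolant(f, a, b, n):
    """Piecewise-linear interpolant of f on [a, b] with n equal pieces,
    written as a one-hidden-layer ReLU network:
        g(x) = f(a) + sum_i c_i * relu(x - x_i),
    where c_i is the change of slope at knot x_i."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    slopes = [(f(xs[i + 1]) - f(xs[i])) / h for i in range(n)]
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    knots = xs[:n]
    def g(x):
        return f(a) + sum(c * relu(x - k) for c, k in zip(coeffs, knots))
    return g

g = relu_interpolant(math.sin, 0.0, math.pi, 50)
err = max(abs(g(x) - math.sin(x))
          for x in (i * math.pi / 1000 for i in range(1001)))
print(err)   # interpolation error is O(1/n^2), so well below 0.01 here
```

A polynomial activation, by contrast, can only ever produce polynomials of bounded degree, which is the intuition behind the "only if" direction.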


Book
01 Jan 1993
TL;DR: This book introduces a new cryptographic method, called differential cryptanalysis, which can be applied to analyze cryptosystems, and describes the cryptanalysis of DES, deals with the influence of its building blocks on security, and analyzes modified variants.
Abstract: DES, the Data Encryption Standard, is one of several cryptographic standards. The authors of this text detail their cryptanalytic "attack" upon DES and several other systems, using creative and novel tactics to demonstrate how they broke DES up into 16 rounds of coding. The methodology used offers valuable insights to cryptographers and cryptanalysts alike in creating new encryption standards, strengthening current ones, and exploring new ways to test important data protection schemes. This book introduces a new cryptographic method, called differential cryptanalysis, which can be applied to analyze cryptosystems. It describes the cryptanalysis of DES, deals with the influence of its building blocks on security, and analyzes modified variants. The differential cryptanalysis of "Feal" and several other cryptosystems is also described. This method can also be used to cryptanalyze hash functions, as is exemplified by the cryptanalysis of "Snefru".

1,009 citations
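The basic bookkeeping object of the method is the difference distribution table (DDT) of an S-box, which counts, for each input difference, how often each output difference occurs; high entries mark the high-probability differentials that the attack chains across rounds. A toy sketch with a made-up 4-bit S-box (not one of DES's actual S-boxes):

```python
# DDT[dx][dy] counts inputs x with SBOX[x ^ dx] ^ SBOX[x] == dy.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # a made-up permutation

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x ^ dx] ^ sbox[x]] += 1
    return table

T = ddt(SBOX)
print(T[0][0])   # 16: a zero input difference always gives a zero output difference
best = max(T[dx][dy] for dx in range(1, 16) for dy in range(16))
print(best)      # count of the most probable nontrivial differential
```

Entries come in pairs (x and x ^ dx contribute together), so every count is even; a differential with count c holds with probability c/16 per S-box evaluation.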


Journal ArticleDOI
TL;DR: The behavior and the stability of these compactons is very similar to that observed in completely integrable systems.
Abstract: To understand the role of nonlinear dispersion in pattern formation, we introduce and study Korteweg–de Vries-like equations with nonlinear dispersion: $u_t + (u^m)_x + (u^n)_{xxx} = 0$, $m, n \ge 1$. The solitary wave solutions of these equations have remarkable properties: They collide elastically, but unlike the Korteweg–de Vries (m=2, n=1) solitons, they have compact support. When two ``compactons'' collide, the interaction site is marked by the birth of low-amplitude compacton-anticompacton pairs. These equations seem to have only a finite number of local conservation laws. Nevertheless, the behavior and the stability of these compactons is very similar to that observed in completely integrable systems.

934 citations
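For reference, the K(2,2) member of this family (m = n = 2) admits the widely quoted compactly supported solitary wave (a sketch in the notation of the later compacton literature, not necessarily the paper's own):

$$
u_c(x,t) = \begin{cases} \dfrac{4c}{3}\cos^2\!\left(\dfrac{x-ct}{4}\right), & |x-ct| \le 2\pi, \\[4pt] 0, & \text{otherwise}, \end{cases}
$$

where c is the compacton speed; outside a finite interval the solution vanishes identically, which is what "compact support" means here.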


Journal ArticleDOI
TL;DR: It is shown how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with “almost” equal probability.
Abstract: It is shown how to efficiently construct a small probability space on n binary random variables such that for every subset, its parity is either zero or one with “almost” equal probability. These are called $\epsilon $-biased random variables. The number of random bits needed to generate the random variables is $O(\log n + \log \frac{1}{\epsilon })$. Thus, if $\epsilon $ is polynomially small, then the size of the sample space is also polynomial. Random variables that are $\epsilon $-biased can be used to construct “almost” k-wise independent random variables, where $\epsilon $ is a function of k. These probability spaces have various applications: 1. Derandomization of algorithms: many randomized algorithms that require only k-wise independence of their random bits (where k is bounded by $O(\log n)$) can be derandomized by using $\epsilon $-biased random variables. 2. Reducing the number of random bits required by certain randomized algorithms, e.g., verification of matrix multiplication. 3. Exhaustive testing.

690 citations
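One standard way to realize such a space is the "powering" construction of Alon, Goldreich, Håstad, and Peralta (a related later construction, sketched here for illustration rather than as the paper's own): index sample points by pairs (x, y) in GF(2^m)^2 and let the i-th bit be the GF(2) inner product of x^i and y, which gives bias at most (n-1)/2^m from only 2m random bits. An exhaustive check for m = 3, n = 5:

```python
from itertools import combinations, product

M = 3            # work in GF(2^3) with irreducible polynomial x^3 + x + 1
POLY = 0b1011
N_BITS = 5       # n: number of epsilon-biased bits per sample point

def gf_mul(a, b):
    """Carry-free multiplication in GF(2^M), reduced modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> M:
            a ^= POLY
    return r

def parity(v):
    return bin(v).count("1") & 1

def sample_bits(x, y):
    """Bits b_i = <x^i, y> (inner product over GF(2)) for i = 0..N_BITS-1."""
    bits, p = [], 1          # p runs through x^0, x^1, ...
    for _ in range(N_BITS):
        bits.append(parity(p & y))
        p = gf_mul(p, x)
    return bits

# The whole sample space: 2m = 6 random bits -> 64 points.
space = [sample_bits(x, y) for x, y in product(range(2 ** M), repeat=2)]

def bias(subset):
    s = sum((-1) ** sum(b[i] for i in subset) for b in space)
    return abs(s) / len(space)

worst = max(bias(sub) for r in range(1, N_BITS + 1)
            for sub in combinations(range(N_BITS), r))
print(worst)     # provably at most (N_BITS - 1) / 2**M = 0.5
```

The bound holds because the parity over a subset S equals the inner product of the nonzero polynomial p_S(x) = sum of x^i over S with the uniform y, which is unbiased except on the at most deg(p_S) ≤ n-1 roots of p_S.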


Journal ArticleDOI
TL;DR: For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions, and it is shown that this Nash equilibrium point possesses interesting monotonicity properties.
Abstract: The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions.

591 citations


Journal ArticleDOI
01 Mar 1993-Nature
TL;DR: In this paper, double-headed surfactants with a short spacer were used to control amphiphile self-aggregation in aqueous solution, and microstructures formed by these dimeric surfactants with m = 12 and s = 2, 3, or 4 were investigated with transmission electron microscopy.
Abstract: SURFACTANT molecules in water form organized assemblies of various shapes, such as micelles and bilayer lamellae, which are of interest as analogues of biological structures, as model systems for studying complex phase behaviour and because of their technological importance, for example to the food and paint industries. The polar head groups are usually arranged randomly at the surface of these assemblies. We have studied the effect on the microstructure of these assemblies of imposing constraints on the head-group spacing. We investigate the structures formed by 'double-headed' surfactants in which two quaternary ammonium species (CmH2m+1N+(CH3)2) are linked at the level of the head groups by a hydrocarbon spacer (CsH2s). Here we report the microstructures formed by these dimeric surfactants with m = 12 and s = 2,3 or 4 in aqueous solution, by rapidly cooling the micellar solutions and investigating the vitrified structures with transmission electron microscopy. The surfactants with a short spacer (s = 2,3) form long, thread-like and entangled micelles even at low concentrations, whereas the corresponding monomeric ammonium surfactants can form only spherical micelles. The dimeric surfactants with s = 4 form spheroidal micelles. Thus short spacers (which impose reduced head-group separation) appear to promote lower spontaneous curvature in the assemblies. This approach may afford a new way to control amphiphile self-aggregation.

492 citations


Journal ArticleDOI
17 Jan 1993
TL;DR: In contrast to the classical matched decoding case, here, under the mismatched decoding regime, the highest achievable rate depends on whether the performance criterion is the bit error rate or the message error probability and whether the coding strategy is deterministic or randomized.
Abstract: Reliable transmission over a discrete-time memoryless channel with a decoding metric that is not necessarily matched to the channel (mismatched decoding) is considered. It is assumed that the encoder knows both the true channel and the decoding metric. The lower bound on the highest achievable rate found by Csiszar and Korner (1981) and by Hui (1983) for DMC's, hereafter denoted $C_{LM}$, is shown to bear some interesting information-theoretic meanings. The bound $C_{LM}$ turns out to be the highest achievable rate in the random coding sense, namely, the random coding capacity for mismatched decoding. It is also demonstrated that the $\epsilon$-capacity associated with mismatched decoding cannot exceed $C_{LM}$. New bounds and some properties of $C_{LM}$ are established and used to find relations to the generalized mutual information and to the generalized cutoff rate. The expression for $C_{LM}$ is extended to a certain class of memoryless channels with continuous input and output alphabets, and is used to calculate $C_{LM}$ explicitly for several examples of theoretical and practical interest. Finally, it is demonstrated that in contrast to the classical matched decoding case, here, under the mismatched decoding regime, the highest achievable rate depends on whether the performance criterion is the bit error rate or the message error probability and whether the coding strategy is deterministic or randomized.

444 citations
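In common later notation (an assumed rendering of the bound, not a quotation from the paper), the Csiszar-Korner-Hui lower bound for channel W and metric d is

$$
C_{LM} = \max_{P_X}\; \min_{V \in \mathcal{V}(P_X)} I(P_X; V), \qquad
\mathcal{V}(P_X) = \left\{ V : (P_X V)_Y = (P_X W)_Y,\;
\mathbb{E}_{P_X V}\, d(X,Y) \le \mathbb{E}_{P_X W}\, d(X,Y) \right\},
$$

i.e., the worst-case mutual information over auxiliary channels V that preserve the output distribution and do not increase the expected decoding metric.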


Journal ArticleDOI
TL;DR: The nearest-level-spacing distribution function P(s) in a disordered system near the metal-insulator transition is studied and a scaling theory of critical behavior of P(s) in finite samples is proposed and verified numerically.
Abstract: We study the nearest-level-spacing distribution function P(s) in a disordered system near the metal-insulator transition. We claim that in the limit of an infinite system there are only three possible functions P(s): the Wigner surmise $P_W(s)$ in a metal, the Poisson law $P_P(s)$ in an insulator, and a third one, $P_T(s)$, exactly at the transition. The function $P_T(s)$ is an interesting hybrid of $P_W(s)$ and $P_P(s)$: it has the small-s behavior of the former and the large-s behavior of the latter. A scaling theory of the critical behavior of P(s) in finite samples is proposed and verified numerically.

426 citations
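For reference, with spacings measured in units of the mean spacing, the two limiting distributions named above take the standard forms

$$
P_W(s) = \frac{\pi s}{2}\, e^{-\pi s^2 / 4}, \qquad P_P(s) = e^{-s},
$$

so the critical $P_T(s)$ rises linearly at small s like $P_W$ (level repulsion) while decaying roughly exponentially at large s like $P_P$.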


Journal ArticleDOI
TL;DR: Three wait-free implementations of atomic snapshot memory are presented, one of which uses unbounded (integer) fields in these registers and is particularly easy to understand, while the second and third use bounded registers.
Abstract: This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. The first implementation in this paper uses unbounded (integer) fields in these registers, and is particularly easy to understand. The second implementation uses bounded registers. Its correctness proof follows the ideas of the unbounded implementation. Both constructions implement a single-writer snapshot memory, in which each word may be updated by only one process, from single-writer, n-reader registers. The third algorithm implements a multi-writer snapshot memory from atomic n-writer, n-reader registers, again echoing key ideas from the earlier constructions. All operations require Θ(n^2) reads and writes to the component shared registers in the worst case. —Authors' Abstract

426 citations


Journal ArticleDOI
TL;DR: The DEA efficiency ratings can be a useful tool for port managers and for researchers, providing a deeper insight into port performance, and the method does not require the development of ‘standards’ against which efficiency is measured, although such standards can be incorporated in the DEA analysis.
Abstract: The large variety of factors that influence port performance complicates the determination of port efficiency. This paper presents a new approach to the measurement of efficiency. Data envelopment analysis (DEA), as it is called, has particular applicability in the service sector. Applying mathematical programming techniques, DEA enables relative efficiency ratings to be derived within a set of analysed units. Thus it does not require the development of ‘standards’ against which efficiency is measured, although such standards can be incorporated in the DEA analysis. The efficiency of units is compared with an ‘efficiency envelope’ that contains the most efficient units in the group. The DEA approach is demonstrated by a hypothetical numerical example in which the performance of 20 ports is compared. The DEA efficiency ratings can be a useful tool for port managers and for researchers, providing a deeper insight into port performance. Weaknesses can be detected, leading the way to potential improvements.

420 citations
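In the special case of a single input and a single output, the DEA efficiency score reduces to each unit's output/input ratio normalized by the best ratio in the set, which makes the 'efficiency envelope' idea easy to see. A toy sketch with made-up port data (the paper's own example uses 20 ports and several factors):

```python
# One-input/one-output DEA: score = own ratio / best ratio in the group.
# Port names and figures below are invented for illustration only.
ports = {            # port: (berth-hours used, containers handled)
    "A": (120.0, 9000.0),
    "B": (150.0, 9500.0),
    "C": (100.0, 8200.0),
    "D": (180.0, 9100.0),
}

ratios = {p: y / x for p, (x, y) in ports.items()}
best = max(ratios.values())
scores = {p: r / best for p, r in ratios.items()}
for p, s in sorted(scores.items()):
    print(p, round(s, 3))   # ports on the efficiency envelope score 1.0
```

With several inputs and outputs the same idea becomes a linear program per unit (the CCR model), but the interpretation of a score of 1.0 as "on the envelope" is unchanged.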


Journal ArticleDOI
01 Feb 1993
TL;DR: In this paper, the authors show that the total world consumption of fertilizer N, P2O5, and K2O in 1990/1991 was 78, 37, and 26 million tons per annum, respectively, with a projected yearly increase of demand of about 2 to 3%.
Abstract: Total world consumption of fertilizer N, P2O5, and K2O in 1990/1991 was 78, 37, and 26 million tons per annum, respectively, with a projected yearly increase of demand of about 2 to 3%. Trends in crop production (maize and wheat) in the last four decades show that N application rates increased about 15 times whereas its accumulation in grain increased only 3 to 4 times. At the same time nutrient recovery by crops remained relatively low (e.g. about 50% for N). This represents a potentially alarming situation from environmental, economic and resource conservation points of view and indicates an urgent need for improving efficiency of fertilizer use.

Journal ArticleDOI
TL;DR: Three self-stabilizing protocols for distributed systems in the shared memory model are presented, one of which is a mutual-exclusion protocol for tree structured systems and another a spanning tree protocol for systems with any connected communication graph.
Abstract: Three self-stabilizing protocols for distributed systems in the shared memory model are presented. The first protocol is a mutual-exclusion protocol for tree structured systems. The second protocol is a spanning tree protocol for systems with any connected communication graph. The third protocol is obtained by use of fair protocol combination, a simple technique which enables the combination of two self-stabilizing dynamic protocols. The resulting protocol is a self-stabilizing mutual-exclusion protocol for dynamic systems with a general (connected) communication graph. The presented protocols improve upon previous protocols in two ways: first, it is assumed that the only atomic operations are either read or write to the shared memory; second, our protocols work for any connected network and even for dynamic networks, in which the topology of the network may change during the execution.

Journal ArticleDOI
TL;DR: Of special interest is the oxidative modification of LDL which was demonstrated to occur in vivo, which involves cellular lipid peroxidation and requires the binding of LDL to its receptor on macrophages.

Journal ArticleDOI
TL;DR: A new affine invariant scale-space for planar curves is presented and the affine-invariant progressive smoothing property of the evolution equation is demonstrated as well.
Abstract: A new affine invariant scale-space for planar curves is presented in this work. The scale-space is obtained from the solution of a novel nonlinear curve evolution equation which admits affine invariant solutions. This flow was proved to be the affine analogue of the well known Euclidean shortening flow. The evolution also satisfies properties such as causality, which makes it useful in defining a scale-space. Using an efficient numerical algorithm for curve evolution, this continuous affine flow is implemented, and examples are presented. The affine-invariant progressive smoothing property of the evolution equation is demonstrated as well.
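In standard notation, the Euclidean shortening flow moves each curve point with velocity equal to the curvature times the inward normal, and the affine analogue referred to above replaces the curvature by its cube root:

$$
\frac{\partial C}{\partial t} = \kappa\, \mathcal{N} \quad (\text{Euclidean}), \qquad
\frac{\partial C}{\partial t} = \kappa^{1/3}\, \mathcal{N} \quad (\text{affine invariant}),
$$

where $\kappa$ is the Euclidean curvature and $\mathcal{N}$ the inward unit normal; the exponent 1/3 is exactly what makes the flow commute with affine maps of the plane.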

Journal ArticleDOI
TL;DR: In this article, a wavelet expansion can adaptively fit itself to the various length scales associated with the scatterer by distributing the localized functions near the discontinuities and the more spatially diffused ones over the smooth expanses of the SCA.
Abstract: An approach which incorporates the theory of wavelet transforms in method-of-moments solutions for electromagnetic wave interaction problems is presented. The unknown field or response is expressed as a twofold summation of shifted and dilated forms of a properly chosen basis function, which is often referred to as the mother wavelet. The wavelet expansion can adaptively fit itself to the various length scales associated with the scatterer by distributing the localized functions near the discontinuities and the more spatially diffused ones over the smooth expanses of the scatterer. The approach is thus best suited for the analysis of scatterers which contain a broad spectrum of length scales ranging from a subwavelength to several wavelengths. Using a Galerkin method and subsequently applying a threshold procedure, the moment-method matrix is rendered sparsely populated. The structure of the matrix reveals the localized scale-fitting distribution long before the matrix equation is solved. The performance of the proposed discretization scheme is illustrated by a numerical study of electromagnetic coupling through a double-slot aperture.

Journal ArticleDOI
TL;DR: In this article, the effectiveness of intergroup competition as a solution to free-riding in a lifelike orange-picking task was investigated. But the results were limited to real-life settings.

Book ChapterDOI
01 Jun 1993
TL;DR: This paper describes in detail how the new implementation works and gives realistic examples to illustrate its power, and discusses a number of directions for future research.
Abstract: Temporal logic model checking is an automatic technique for verifying finite-state concurrent systems. Specifications are expressed in a propositional temporal logic, and the concurrent system is modeled as a state-transition graph. An efficient search procedure is used to determine whether or not the state-transition graph satisfies the specification. When the technique was first developed ten years ago, it was only possible to handle concurrent systems with a few thousand states. In the last few years, however, the size of the concurrent systems that can be handled has increased dramatically. By representing transition relations and sets of states implicitly using binary decision diagrams, it is now possible to check concurrent systems with more than 10^120 states. In this paper we describe in detail how the new implementation works and give realistic examples to illustrate its power. We also discuss a number of directions for future research. The necessary background information on binary decision diagrams, temporal logic, and model checking has been included in order to make the exposition as self-contained as possible.
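The search procedure at the heart of the technique is a least-fixpoint computation of the reachable states. A sketch with explicit Python sets standing in for the BDDs that make the real algorithm scale (the tiny transition system is made up):

```python
# Fixpoint reachability: the core loop of symbolic model checking,
# with explicit sets playing the role of BDD-encoded state sets.
TRANS = {                 # state -> set of successor states (invented example)
    "s0": {"s1"},
    "s1": {"s1", "s2"},
    "s2": {"s0"},
    "s3": {"s3"},         # unreachable from s0
}

def image(states):
    """Post-image: all one-step successors of a set of states."""
    return set().union(*(TRANS[s] for s in states))

def reachable(init):
    reached = set(init)
    while True:
        frontier = image(reached) - reached
        if not frontier:          # least fixpoint reached
            return reached
        reached |= frontier

print(sorted(reachable({"s0"})))  # ['s0', 's1', 's2']
```

In a BDD-based checker, `reached` and `TRANS` are Boolean functions rather than enumerated sets, so the same loop can converge over state spaces far too large to list.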

Journal ArticleDOI
TL;DR: A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed, based on properly bounding the probability distribution functions of the system input processes.
Abstract: A method for evaluating the performance of packet switching communication networks under a fixed, session-based routing strategy is proposed. The approach is based on properly bounding the probability distribution functions of the system input processes. The suggested bounds, which are decaying exponentials, possess three convenient properties. When the inputs to an isolated network element are all bounded, they result in bounded outputs and assure that the delays and queues in this element have exponentially decaying distributions. In some network settings, bounded inputs result in bounded outputs. Natural traffic processes can be shown to satisfy such bounds. Consequently, this method enables the analysis of various previously intractable setups. Sufficient conditions are provided for the stability of such networks, and upper bounds for the parameters of network performance are derived.
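The exponentially decaying bounds in question have the general shape (a sketch in later "exponentially bounded burstiness" notation; the symbols are our assumptions, not the paper's)

$$
\Pr\{ A(s,t) \ge \rho\,(t-s) + \sigma \} \le \Lambda\, e^{-\alpha \sigma} \quad \text{for all } \sigma \ge 0,
$$

where $A(s,t)$ is the traffic arriving in $(s,t]$, $\rho$ is an upper rate, and $(\Lambda, \alpha)$ are the bounding parameters; the point of the method is that bounds of this form propagate through network elements and yield exponentially decaying delay and queue-length tails.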

Journal ArticleDOI
TL;DR: The performance of a wideband multipath-fading terrestrial digital coded communication system is treated, with applications to a cellular system using direct-sequence spread-spectrum code-division multiaccess (CDMA) with M-ary orthogonal modulation on the many-to-one reverse link.
Abstract: Performance of a wideband multipath-fading terrestrial digital coded communication system is treated. The analysis has applications to a cellular system using direct-sequence spread-spectrum code-division multiaccess (CDMA) with M-ary orthogonal modulation on the many-to-one reverse (user-to-base station) link. For these links, power control of each multiple-access user by the cell base station is a critically important feature. This feature is implemented by measuring the power received at the base station for each user and sending a command to either raise or lower reverse link transmitter power by a fixed amount. Assuming perfect interleaving, the effect of the power control accuracy on the system performance is assessed.

Journal ArticleDOI
TL;DR: Xylan completely protected the protein from inactivation by N-bromosuccinimide, and its activity was strongly inhibited by Zn2+, Cd2+, and Hg2+.
Abstract: Bacillus stearothermophilus T-6 produces an extracellular xylanase that was shown to optimally bleach pulp at pH 9 and 65 degrees C. The enzyme was purified and concentrated in a single adsorption step onto a cation exchanger and is made of a single polypeptide with an apparent M(r) of 43,000 (determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis). Xylanase T-6 is an endoxylanase that completely degrades xylan to xylose and xylobiose. The pIs of the purified protein were 9 and 7 under native and denaturing conditions, respectively. The optimum activity was at pH 6.5; however, 60% of the activity was still retained at pH 10. At 65 degrees C and pH 7, the enzyme was stable for more than 10 h; at 65 degrees C and pH 9, the half-life of the enzyme was approximately 6 h. Kinetic experiments at 55 degrees C gave Vmax and Km values of 288 U/mg and 1.63 mg/ml, respectively. The enzyme had no apparent requirement for cofactors, and its activity was strongly inhibited by Zn2+, Cd2+, and Hg2+. Xylan completely protected the protein from inactivation by N-bromosuccinimide. The N-terminal sequence of the first 45 amino acids of the enzyme showed high homology with the N-terminal region of xylanase A from the alkalophilic Bacillus sp. strain C-125.

Journal ArticleDOI
TL;DR: The nonnegative rank of a nonnegative matrix is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively.
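The definition can be seen in action by exhibiting a factorization M = WH with entrywise nonnegative factors: the inner dimension then upper-bounds the nonnegative rank (which in turn is never less than the ordinary rank). A minimal sketch with made-up matrices:

```python
# M = W @ H with W, H entrywise nonnegative and inner dimension 2,
# so the nonnegative rank of M is at most 2.
W = [[1.0, 0.0],
     [1.0, 1.0],
     [0.0, 2.0]]
H = [[2.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = matmul(W, H)
print(M)   # a 3x3 nonnegative matrix, each row a nonnegative mix of H's rows
```

Here the ordinary rank of M is also 2, so rank and nonnegative rank coincide; the interesting phenomenon studied in this area is that the nonnegative rank can strictly exceed the rank.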


Journal ArticleDOI
TL;DR: It is concluded that, at the primary afferent level, dynamic hyperalgesia is mediated by myelinated fibers, whereas static hyperalGESia depends on unmyelinated afferents.
Abstract: Two behavioral kinds of mechanical hyperalgesia can be clearly discerned by clinical criteria in patients with neuropathic syndromes, i.e., a dynamic type, elicitable by lightly stroking the symptomatic skin, and a static type, elicitable by steadily applying gentle pressure on it. Of 28 patients studied, 19 had dynamic and 18 had static type mechanical hyperalgesia (9 expressed both types). Experimental compression-ischemia nerve block totally abolished the dynamic hyperalgesia in all patients except in 2, in whom it was markedly diminished. Disappearance of dynamic hyperalgesia occurred contemporaneously with block of myelinated A fibers, as expressed by abolition of the sensations of touch and cold, monitored quantitatively. Static hyperalgesia, however, outlasted A-fiber block in 15 of 18 patients; the phenomenon persisted during the stage when only unmyelinated fibers were available for impulse conduction. It is thus concluded that, at the primary afferent level, dynamic hyperalgesia is mediated by myelinated fibers, whereas static hyperalgesia depends on unmyelinated afferents. These two kinds of hyperalgesia represent discrete pathophysiological entities with distinct clinical connotations.

Journal ArticleDOI
TL;DR: In this paper, a conceptual framework for the treatment of factor weights in DEA is proposed, with general guidelines for setting bounds on factor weights and alternative methods to limit the range within which these weights are allowed to vary.
Abstract: Provisions for controlling factor weights constitute a significant extension of the data envelopment analysis (DEA) methodology, as an effective tool for measuring efficiency. This paper suggests a conceptual framework for the treatment of factor weights in DEA. First, the paper proposes general guidelines for setting bounds on factor weights. Then, it develops and presents alternative methods to limit the range within which these factor weights are allowed to vary. All of these methods involve additional information which is entered into the analysis in the form of constraints, bounds or different objective functions. Finally, the implications of the various approaches are discussed.

Journal ArticleDOI
TL;DR: A unified texture model that is applicable to a wide variety of texture types found in natural images is presented and results show that the deterministic components should be parameterized separately from the purely indeterministic component.
Abstract: A unified texture model that is applicable to a wide variety of texture types found in natural images is presented. This model leads to the derivation of texture analysis and synthesis algorithms designed to estimate the texture parameters and to reconstruct the original texture field from these parameters. The texture field is assumed to be a realization of a regular homogeneous random field, which is characterized in general by a mixed spectral distribution. The texture field is orthogonally decomposed into a purely indeterministic component and a deterministic component. The deterministic component is further orthogonally decomposed into a harmonic component, and a generalized-evanescent component. Both analytical and experimental results show that the deterministic components should be parameterized separately from the purely indeterministic component. The model is very efficient in terms of the number of parameters required to faithfully represent textures. Reconstructed textures are practically indistinguishable from the originals.

Journal ArticleDOI
TL;DR: In this article, the authors used an optical reflection technique to measure the thickness of water films existing on the surfaces of isolated single H2O ice crystals, at temperatures below 0°C.

Journal ArticleDOI
TL;DR: The condensation polymerization of linear ω-hydroxyesters using the commercially available crude porcine pancreatic lipase (PPL) suspended in organic solvents as the preferred enzyme for the reactions was investigated in this paper.
Abstract: The condensation polymerization of linear ω-hydroxyesters using the commercially available crude porcine pancreatic lipase (PPL), suspended in organic solvents as the preferred enzyme for the reactions, was investigated. The optimal conditions for the condensation polymerization were determined. Film-forming polyesters with a degree of polymerization up to about 100 were obtained from linear aliphatic hydroxyesters in refluxing n-hexane. For the ring-opening polymerization of lactones, the reaction of ε-caprolactone with methanol as the preferred nucleophile was studied. Polyesters with a DP of up to 35 were obtained in hexane at temperatures between 25 and 40°C.

Journal ArticleDOI
TL;DR: In this paper, carbon black was introduced as an alternative microfiller to silica fume, which was found to be effective in modifying the basic concrete-matrix strength relationship.

Journal ArticleDOI
TL;DR: The results show that forward error correction schemes become less efficient due to the bursty nature of the packet loss processes; real-time traffic might be more sensitive to network congestion than was previously assumed; and the retransmission probability of ATM messages has been overestimated by the use of the independence assumption.
Abstract: The packet loss process in a single-server queueing system with a finite buffer capacity is analyzed. The model used addresses the packet loss probabilities for packets within a block of a consecutive sequence of packets. An analytical approach is presented that yields efficient recursions for the computation of the distribution of the number of lost packets within a block of packets of fixed or variable size for several arrival models and several numbers of sessions. Numerical examples are provided to compare the distribution obtained with that obtained using the independence assumption to compute the loss probabilities of packets within a block. The results show that forward error correction schemes become less efficient due to the bursty nature of the packet loss processes; real-time traffic might be more sensitive to network congestion than was previously assumed; and the retransmission probability of ATM messages has been overestimated by the use of the independence assumption.
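The clustering effect described here is easy to reproduce in a crude discrete-time simulation (our own stand-in, not the paper's analytical model): feed a finite buffer from an on/off (bursty) source and compare the empirical fraction of loss-free blocks with the binomial value the independence assumption would predict.

```python
import random

random.seed(1)
K, B = 5, 10                 # buffer size, block length (made-up values)
queue, on = 0, False
losses_per_block, lost, seen = [], 0, 0

for _ in range(200_000):
    # On/off source: long bursts of back-to-back arrivals.
    on = (random.random() < 0.9) if on else (random.random() < 0.1)
    if on:                   # one packet arrives in this slot
        seen += 1
        if queue < K:
            queue += 1
        else:
            lost += 1        # buffer full: the packet is lost
        if seen % B == 0:    # close the current block of B packets
            losses_per_block.append(lost)
            lost = 0
    if queue and random.random() < 0.5:   # one service completion
        queue -= 1

p0 = losses_per_block.count(0) / len(losses_per_block)
mean_rate = sum(losses_per_block) / (B * len(losses_per_block))
p0_indep = (1 - mean_rate) ** B          # what independence would predict
print(p0, p0_indep)  # with bursty input, loss-free blocks are typically
                     # more common than the independent model suggests
```

Because losses cluster into a few bad blocks, per-block loss counts are far from binomial, which is exactly why FEC sized under the independence assumption underperforms.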