
Showing papers by Bell Labs published in 1993


Journal ArticleDOI
TL;DR: A comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented in this article, with emphasis on comparisons between theory and quantitative experiments, and a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability.
Abstract: A comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented, with emphasis on comparisons between theory and quantitative experiments. Examples include patterns in hydrodynamic systems such as thermal convection in pure fluids and binary mixtures, Taylor-Couette flow, parametric-wave instabilities, as well as patterns in solidification fronts, nonlinear optics, oscillatory chemical reactions and excitable biological media. The theoretical starting point is usually a set of deterministic equations of motion, typically in the form of nonlinear partial differential equations. These are sometimes supplemented by stochastic terms representing thermal or instrumental noise, but for macroscopic systems and carefully designed experiments the stochastic forces are often negligible. An aim of theory is to describe solutions of the deterministic equations that are likely to be reached starting from typical initial conditions and to persist at long times. A unified description is developed, based on the linear instabilities of a homogeneous state, which leads naturally to a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability. Type I_s systems (ω0=0, q0≠0) are stationary in time and periodic in space; type III_o systems (ω0≠0, q0=0) are periodic in time and uniform in space; and type I_o systems (ω0≠0, q0≠0) are periodic in both space and time. Near a continuous (or supercritical) instability, the dynamics may be accurately described via "amplitude equations," whose form is universal for each type of instability. The specifics of each system enter only through the nonuniversal coefficients. Far from the instability threshold a different universal description known as the "phase equation" may be derived, but it is restricted to slow distortions of an ideal pattern.
For many systems appropriate starting equations are either not known or too complicated to analyze conveniently. It is thus useful to introduce phenomenological order-parameter models, which lead to the correct amplitude equations near threshold, and which may be solved analytically or numerically in the nonlinear regime away from the instability. The above theoretical methods are useful in analyzing "real pattern effects" such as the influence of external boundaries, or the formation and dynamics of defects in ideal structures. An important element in nonequilibrium systems is the appearance of deterministic chaos. A great deal is known about systems with a small number of degrees of freedom displaying "temporal chaos," where the structure of the phase space can be analyzed in detail. For spatially extended systems with many degrees of freedom, on the other hand, one is dealing with spatiotemporal chaos and appropriate methods of analysis need to be developed. In addition to the general features of nonequilibrium pattern formation discussed above, detailed reviews of theoretical and experimental work on many specific systems are presented. These include Rayleigh-Bénard convection in a pure fluid, convection in binary-fluid mixtures, electrohydrodynamic convection in nematic liquid crystals, Taylor-Couette flow between rotating cylinders, parametric surface waves, patterns in certain open flow systems, oscillatory chemical reactions, static and dynamic patterns in biological media, crystallization fronts, and patterns in nonlinear optics. A concluding section summarizes what has and has not been accomplished, and attempts to assess the prospects for the future.
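The universal amplitude description referred to above can be made concrete. For a one-dimensional type I_s instability, the envelope A(x, t) of the pattern near threshold obeys the real Ginzburg-Landau equation, where ε is the reduced distance from threshold and τ0, ξ0, g0 are the nonuniversal, system-specific coefficients:

```latex
% Amplitude equation for a type I_s (stationary, finite-q0) instability:
% the slowly varying envelope A(x,t) of the pattern near threshold
\[
  \tau_0\, \partial_t A = \epsilon A + \xi_0^2\, \partial_x^2 A - g_0 |A|^2 A
\]
% \epsilon: reduced distance from threshold; \tau_0, \xi_0, g_0: nonuniversal coefficients
```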

6,145 citations


Book
01 Jan 1993
TL;DR: An efficient translator is implemented that takes as input a linear AMPL model and associated data, and produces output suitable for standard linear programming optimizers.
Abstract: Practical large-scale mathematical programming involves more than just the application of an algorithm to minimize or maximize an objective function. Before any optimizing routine can be invoked, considerable effort must be expended to formulate the underlying model and to generate the requisite computational data structures. AMPL is a new language designed to make these steps easier and less error-prone. AMPL closely resembles the symbolic algebraic notation that many modelers use to describe mathematical programs, yet it is regular and formal enough to be processed by a computer system; it is particularly notable for the generality of its syntax and for the variety of its indexing operations. We have implemented an efficient translator that takes as input a linear AMPL model and associated data, and produces output suitable for standard linear programming optimizers. Both the language and the translator admit straightforward extensions to more general mathematical programs that incorporate nonlinear expressions or discrete variables.

3,176 citations


Proceedings Article
Jane Bromley, Isabelle Guyon, Yann LeCun, E. Sackinger, Roopak Shah
29 Nov 1993
TL;DR: An algorithm for verification of signatures written on a pen-input tablet based on a novel, artificial neural network called a "Siamese" neural network, which consists of two identical sub-networks joined at their outputs.
Abstract: This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a "Siamese" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.
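The Siamese idea described above can be sketched in a few lines: two inputs pass through the same sub-network (identical weights), and a joining node measures the distance between the resulting feature vectors. The weights and inputs below are toy values, not trained parameters from the paper.

```python
import math

def features(x, weights):
    """One sub-network: a single tanh layer mapping an input vector to a
    feature vector. Both inputs go through the SAME shared weights."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]

def siamese_distance(x1, x2, weights):
    """Joining node: Euclidean distance between the two feature vectors."""
    f1, f2 = features(x1, weights), features(x2, weights)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

weights = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]   # toy shared weights
genuine = siamese_distance([1.0, 0.2, 0.1], [0.9, 0.25, 0.1], weights)
forgery = siamese_distance([1.0, 0.2, 0.1], [-0.8, 0.9, 0.4], weights)
print(genuine < forgery)  # similar inputs yield a smaller distance -> True
```

Verification then amounts to thresholding this distance against a stored feature vector for the signer.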

2,980 citations


Journal ArticleDOI
TL;DR: For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells, and the authors demonstrate exponentially fast convergence to these settings whenever power settings exist for which all users meet the rho requirement.
Abstract: For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells. By effecting the lowest interference environment, in meeting a required minimum signal-to-interference ratio of rho per user, channel reuse is maximized. Distributed procedures for doing this are of special interest, since the centrally administered alternative requires added infrastructure, latency, and network vulnerability. Successful distributed powering entails guiding the evolution of the transmitted power level of each of the signals, using only local measurements, so that eventually all users meet the rho requirement. The local per channel power measurements include that of the intended signal as well as the undesired interference from other users (plus receiver noise). For a certain simple distributed type of algorithm, whenever power settings exist for which all users meet the rho requirement, the authors demonstrate exponentially fast convergence to these settings.
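A minimal simulation in the spirit of this distributed update: each user scales its power by rho divided by its currently measured SIR, using only local quantities. The gain matrix, noise levels, and rho below are made-up toy values chosen so that a feasible power setting exists.

```python
G = [[1.0, 0.1],   # G[i][j]: path gain from transmitter j to receiver i
     [0.2, 1.0]]
noise = [0.1, 0.1]
rho = 2.0          # required signal-to-interference ratio
p = [1.0, 1.0]     # initial power levels

def sir(i, p):
    """Locally measurable SIR at receiver i."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i) + noise[i]
    return G[i][i] * p[i] / interference

for _ in range(50):  # each user applies the local update p_i <- p_i * rho / SIR_i
    p = [p[i] * rho / sir(i, p) for i in range(len(p))]

print([round(sir(i, p), 3) for i in range(2)])  # -> [2.0, 2.0]
```

When a feasible setting exists, the iteration converges geometrically to the minimal powers at which every user meets the rho requirement exactly.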

1,831 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the basic quantitative features of the observed BOLD-based signal changes, including the signal amplitude and its magnetic field dependence and dynamic effects such as a pronounced oscillatory pattern that is induced in the signal from primary visual cortex during photic stimulation experiments.

1,581 citations


Journal ArticleDOI
Andrew J. Millis
TL;DR: I reexamine the work of Hertz on quantum phase transitions in itinerant fermion systems and obtain the different regimes of behavior of the correlation length and free energy in the disordered phase of the effective bosonic theory.
Abstract: I reexamine the work of Hertz on quantum phase transitions in itinerant fermion systems. I determine when it is permissible to integrate out the fermions and analyze the critical phenomena via an effective bosonic theory in which only fluctuations of the ordering field are explicitly retained. By solving appropriate scaling equations I obtain the different regimes of behavior of the correlation length and free energy in the disordered phase of the effective bosonic theory. The results in many cases differ from those of Hertz, but make contact with more recent work on the dilute Bose gas. I briefly discuss the relevance of the results to heavy-fermion materials.

1,407 citations


Journal ArticleDOI
TL;DR: In this article, a Siamese time delay neural network is used to measure the similarity between pairs of signatures, and the output of this half network is the feature vector for the input signature.
Abstract: This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.

1,297 citations


Book ChapterDOI
01 Jan 1993
TL;DR: This work presents two semidecision procedures for verifying safety properties of piecewise-linear hybrid automata, in which all variables change at constant rates, and demonstrates that for many of the typical workshop examples, the procedures do terminate and thus provide an automatic way for verifying their properties.
Abstract: We introduce the framework of hybrid automata as a model and specification language for hybrid systems. Hybrid automata can be viewed as a generalization of timed automata, in which the behavior of variables is governed in each state by a set of differential equations. We show that many of the examples considered in the workshop can be defined by hybrid automata. While the reachability problem is undecidable even for very restricted classes of hybrid automata, we present two semidecision procedures for verifying safety properties of piecewise-linear hybrid automata, in which all variables change at constant rates. The two procedures are based, respectively, on minimizing and computing fixpoints on generally infinite state spaces. We show that if the procedures terminate, then they give correct answers. We then demonstrate that for many of the typical workshop examples, the procedures do terminate and thus provide an automatic way for verifying their properties.
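The fixpoint-based procedure has a familiar finite-state analogue, sketched below: iterate the successor relation from the initial states until the reachable set stops growing, then check that no unsafe state was reached. (The paper's contribution is doing this over infinite state spaces; the automaton here is an invented three-state example.)

```python
def reachable(init, succ):
    """Least fixpoint of the successor relation starting from init."""
    seen = set(init)
    frontier = list(init)
    while frontier:                      # stop when nothing new is added
        s = frontier.pop()
        for t in succ.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Made-up discrete abstraction of a thermostat-like system.
succ = {"idle": ["heating"],
        "heating": ["idle", "guarded"],
        "guarded": ["idle"]}
states = reachable({"idle"}, succ)
print("unsafe" not in states)  # safety holds: the unsafe state is unreachable
```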

1,260 citations


Journal ArticleDOI
26 Nov 1993-Science
TL;DR: Individual carbocyanine dye molecules in a sub-monolayer spread have been imaged with near-field scanning optical microscopy and the orientation of each molecular dipole can be determined to map the electric field distribution in the near-field aperture with molecular spatial resolution.
Abstract: Individual carbocyanine dye molecules in a sub-monolayer spread have been imaged with near-field scanning optical microscopy. Molecules can be repeatedly detected and spatially localized (to ∼λ/50 where λ is the wavelength of light) with a sensitivity of at least 0.005 molecules/(Hz)1/2 and the orientation of each molecular dipole can be determined. This information is exploited to map the electric field distribution in the near-field aperture with molecular spatial resolution.

1,201 citations


Journal ArticleDOI
TL;DR: It is shown how the ultrasoft pseudopotentials which have recently been proposed by Vanderbilt can be implemented efficiently in the context of Car-Parrinello molecular-dynamics simulations.
Abstract: We show how the ultrasoft pseudopotentials which have recently been proposed by Vanderbilt can be implemented efficiently in the context of Car-Parrinello molecular-dynamics simulations. We address the differences with respect to the conventional norm-conserving schemes, identify certain problems which arise, and indicate how these problems can be overcome. This scheme extends the possibility of performing first-principles molecular dynamics to systems including first-row elements and transition metals.

1,106 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss several constructions of orthonormal wavelet bases on the interval, and introduce a new construction that avoids some of the disadvantages of earlier constructions.

Proceedings ArticleDOI
22 Jun 1993
TL;DR: In this article, a method for clustering words according to their distribution in particular syntactic contexts is described and evaluated experimentally, where words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy between those distributions is used as the similarity measure for word clustering.
Abstract: We describe and evaluate experimentally a method for clustering words according to their distribution in particular syntactic contexts. Words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy between those distributions is used as the similarity measure for clustering. Clusters are represented by average context distributions derived from the given words according to their probabilities of cluster membership. In many cases, the clusters can be thought of as encoding coarse sense distinctions. Deterministic annealing is used to find lowest distortion sets of clusters: as the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical "soft" clustering of the data. Clusters are used as the basis for class models of word coocurrence, and the models evaluated with respect to held-out test data.
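The similarity measure at the heart of the method above is relative entropy (KL divergence) between context distributions. The sketch below computes it for invented context distributions; the distributions, words, and "beverage" centroid are illustrative only.

```python
import math

def rel_entropy(p, q):
    """D(p || q) = sum_c p(c) * log(p(c) / q(c)): dissimilarity between a
    word's context distribution p and a cluster centroid distribution q."""
    return sum(pc * math.log(pc / q[c]) for c, pc in p.items() if pc > 0)

# Made-up relative frequencies of three syntactic contexts.
wine = {"drink": 0.6, "pour": 0.3, "park": 0.1}
car  = {"drink": 0.1, "pour": 0.1, "park": 0.8}
centroid = {"drink": 0.55, "pour": 0.35, "park": 0.1}  # a "beverage" cluster

print(rel_entropy(wine, centroid) < rel_entropy(car, centroid))  # -> True
```

In the full method, centroids are probability-weighted averages of member distributions, and deterministic annealing gradually splits clusters as the annealing parameter grows.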


Journal ArticleDOI
TL;DR: In this article, a two-dimensional electron system in an external magnetic field, with Landau-level filling factor ν=1/2, can be transformed to a mathematically equivalent system of fermions interacting with a Chern-Simons gauge field such that the average effective magnetic field acting on the fermions is zero.
Abstract: A two-dimensional electron system in an external magnetic field, with Landau-level filling factor ν=1/2, can be transformed to a mathematically equivalent system of fermions interacting with a Chern-Simons gauge field such that the average effective magnetic field acting on the fermions is zero. If one ignores fluctuations in the gauge field, this implies that for a system with no impurity scattering, there should be a well-defined Fermi surface for the fermions. When gauge fluctuations are taken into account, we find that there can be infrared divergent corrections to the quasiparticle propagator, which we interpret as a divergence in the effective mass m*, whose form depends on the nature of the assumed electron-electron interaction v(r). For long-range interactions that fall off slower than 1/r at large separation r, we find no infrared divergences; for short-range repulsive interactions, we find power-law divergences; while for Coulomb interactions, we find logarithmic corrections to m*. Nevertheless, we argue that many features of the Fermi surface are likely to exist in all these cases. In the presence of a weak impurity-scattering potential, we predict a finite resistivity ρ_xx at low temperatures, whose value we can estimate. We compute an anomaly in surface acoustic wave propagation that agrees qualitatively with recent experiments. We also make predictions for the size of the energy gap in the fractional quantized Hall state at ν=p/(2p+1), where p is an integer.
Finally, we discuss the implications of our picture for the electronic specific heat and various other physical properties at ν=1/2, we discuss the generalization to other filling fractions with even denominators, and we discuss the overall phase diagram that results from combining our picture with previous theories that apply to the regime where impurity scattering is dominant.

Journal ArticleDOI
TL;DR: The authors align sentences based on a simple statistical model of character lengths and assign a probabilistic score to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences (in characters) and the variance of this difference.
Abstract: Researchers in both machine translation (e.g., Brown et al. 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann 1990) have recently become interested in studying bilingual corpora, bodies of text such as the Canadian Hansards (parliamentary proceedings), which are available in multiple languages (such as French and English). One useful step is to align the sentences, that is, to identify correspondences between sentences in one language and sentences in the other language.This paper will describe a method and a program (align) for aligning sentences based on a simple statistical model of character lengths. The program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences (in characters) and the variance of this difference. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences.It is remarkable that such a simple approach works as well as it does. An evaluation was performed based on a trilingual corpus of economic reports issued by the Union Bank of Switzerland (UBS) in English, French, and German. The method correctly aligned all but 4% of the sentences. Moreover, it is possible to extract a large subcorpus that has a much smaller error rate. By selecting the best-scoring 80% of the alignments, the error rate is reduced from 4% to 0.7%. 
There were more errors on the English-French subcorpus than on the English-German subcorpus, showing that error rates will depend on the corpus considered; however, both were small enough to hope that the method will be useful for many language pairs. To further research on bilingual corpora, a much larger sample of Canadian Hansards (approximately 90 million words, half in English and half in French) has been aligned with the align program and will be available through the Data Collection Initiative of the Association for Computational Linguistics (ACL/DCI). In addition, in order to facilitate replication of the align program, an appendix is provided with detailed C code of the more difficult core of the align program.
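The core dynamic program can be sketched compactly. This simplified version allows only 1-1, 1-0, and 0-1 matches and uses a squared length-difference cost instead of the paper's probabilistic score (which also handles 2-1 and 1-2 matches); the sentence lengths and skip penalty below are invented.

```python
def align_by_length(src_lens, tgt_lens, skip_cost=1000):
    """Align two sentence sequences by character length via dynamic programming."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = {}
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            moves = []
            if i < n and j < m:  # 1-1 match: penalize length mismatch
                moves.append((i + 1, j + 1, (src_lens[i] - tgt_lens[j]) ** 2))
            if i < n:            # 1-0: source sentence left unmatched
                moves.append((i + 1, j, skip_cost))
            if j < m:            # 0-1: target sentence left unmatched
                moves.append((i, j + 1, skip_cost))
            for ni, nj, c in moves:
                if cost[i][j] + c < cost[ni][nj]:
                    cost[ni][nj] = cost[i][j] + c
                    back[(ni, nj)] = (i, j)
    path, ij = [], (n, m)        # recover the matched sentence pairs
    while ij != (0, 0):
        prev = back[ij]
        if ij[0] > prev[0] and ij[1] > prev[1]:
            path.append(prev)
        ij = prev
    return path[::-1]

print(align_by_length([20, 51, 33], [22, 48, 30]))  # -> [(0, 0), (1, 1), (2, 2)]
```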

Journal ArticleDOI
01 Mar 1993-Nature
TL;DR: In this article, the authors report the synthesis of the related compound HgBa2CuO4+δ (Hg-1201), with only one CuO2 layer per unit cell, and show that it is superconducting below 94 K.
Abstract: FOLLOWING the discovery1 of high-transition-temperature (high-Tc) superconductivity in doped La2CuO4, several families of related compounds have been discovered which have layers of CuO2 as the essential requirement for superconductivity: the highest transition temperatures so far have been found for thallium-bearing compounds2. Recently the mercury-bearing compound HgBa2RCu2O6+δ (Hg-1212) was synthesized3 (where R is a rare-earth element), with a structure similar to the thallium-bearing superconductor TlBa2CaCu2O7 (Tl-1212), which has one TlO layer and two CuO2 layers per unit cell, and a Tc of 85 K (ref. 2). But in spite of its resemblance to Tl-1212, Hg-1212 was found not to be superconducting. Here we report the synthesis of the related compound HgBa2CuO4+δ (Hg-1201), with only one CuO2 layer per unit cell, and show that it is superconducting below 94 K. Its structure is similar to that of Tl-1201 (which has a Tc of < 10 K)4, but its transition temperature is considerably higher. The availability of a material with high Tc but only a single metal oxide (HgO) layer may be important for technological applications, as it seems that a smaller spacing between CuO2 planes leads to better superconducting properties in a magnetic field5.

Journal ArticleDOI
01 Oct 1993
TL;DR: It is proposed that fundamental limits in the science can be expressed by the semiquantitative concepts of perceptual entropy and the perceptual distortion-rate function, and current compression technology is examined in that framework.
Abstract: The notion of perceptual coding, which is based on the concept of distortion masking by the signal being compressed, is developed. Progress in this field as a result of advances in classical coding theory, modeling of human perception, and digital signal processing, is described. It is proposed that fundamental limits in the science can be expressed by the semiquantitative concepts of perceptual entropy and the perceptual distortion-rate function, and current compression technology is examined in that framework. Problems and future research directions are summarized.

Journal ArticleDOI
TL;DR: The purpose of this paper is to collect a number of useful results about Markov-modulated Poisson processes and queues with Markov-modulated input, and to summarize recent developments.

Journal ArticleDOI
Robert C. Haddon
17 Sep 1993-Science
TL;DR: Application of the π-orbital axis vector theory to the geometries of structurally characterized organometallic derivatives of C60 and C70 shows that the reactivity exhibited by the fullerenes may be attributed to the relief of a combination of local and global strain energy.
Abstract: Within the π-orbital axis vector theory, the total rehybridization required for closure of the fullerenes is approximately conserved. This result allows the development of a structure-based index of strain in the fullerenes, and it is estimated that about 80 percent of the heat of formation of the carbon atoms in C60 may be attributed to a combination of σ strain and steric inhibition of resonance. Application of this analysis to the geometries of structurally characterized organometallic derivatives of C60 and C70 shows that the reactivity exhibited by the fullerenes may be attributed to the relief of a combination of local and global strain energy. C60 is of ambiguous aromatic character with anomalous magnetic properties but with the reactivity of a continuous aromatic molecule, moderated only by the tremendous strain inherent in the spheroidal structure.

Journal ArticleDOI
J.D. Musa
TL;DR: Using an operational profile to guide testing ensures that if testing is terminated and the software is shipped because of schedule constraints, the most-used operations will have received the most testing and the reliability level will be the maximum that is practically achievable for the given test time.
Abstract: A systematic approach to organizing the process of determining the operational profile for guiding software development is presented. The operational profile is a quantitative characterization of how a system will be used that shows how to increase productivity and reliability and speed development by allocating development resources to function on the basis of use. Using an operational profile to guide testing ensures that if testing is terminated and the software is shipped because of schedule constraints, the most-used operations will have received the most testing and the reliability level will be the maximum that is practically achievable for the given test time. For guiding regression testing, it efficiently allocates test cases in accordance with use, so the faults most likely to be found, of those introduced by changes, are the ones that have the most effect on reliability.
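The allocation idea reduces to a one-liner: distribute a test budget in proportion to the operational profile, so the most-used operations get the most testing. The operation names and probabilities below are invented for illustration.

```python
# Made-up operational profile: probability that a user invokes each operation.
profile = {"check_balance": 0.5, "transfer": 0.3,
           "open_account": 0.15, "close_account": 0.05}
total_tests = 200

# Allocate test cases in proportion to use.
allocation = {op: round(p * total_tests) for op, p in profile.items()}
print(allocation)
# -> {'check_balance': 100, 'transfer': 60, 'open_account': 30, 'close_account': 10}
```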

Proceedings ArticleDOI
01 Jun 1993
TL;DR: A counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view, and shows that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple.
Abstract: We present incremental evaluation algorithms to compute changes to materialized views in relational and deductive database systems, in response to changes (insertions, deletions, and updates) to the relations. The view definitions can be in SQL or Datalog, and may use UNION, negation, aggregation (e.g. SUM, MIN), linear recursion, and general recursion.We first present a counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view. The algorithm works with both set and duplicate semantics. We present the algorithm for nonrecursive views (with negation and aggregation), and show that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple. The algorithm is optimal in that it computes exactly those view tuples that are inserted or deleted. Note that we store only the number of derivations, not the derivations themselves.We then present the Delete and Rederive algorithm, DRed, for incremental maintenance of recursive views (negation and aggregation are permitted). The algorithm works by first deleting a superset of the tuples that need to be deleted, and then rederiving some of them. The algorithm can also be used when the view definition is itself altered.
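The counting idea for a nonrecursive view can be shown in miniature: store, for each view tuple, its number of alternative derivations, and remove a tuple only when its count reaches zero. The base relation and view below are invented (a projection of teams with at least one member).

```python
from collections import Counter

base = [("ann", "dev"), ("bob", "dev"), ("ann", "qa")]  # (person, team)
view = Counter(team for _, team in base)                # derivation counts per team

def delete(view, row):
    """Incrementally maintain the view when a base tuple is deleted."""
    team = row[1]
    view[team] -= 1
    if view[team] == 0:      # no remaining derivations: tuple leaves the view
        del view[team]

delete(view, ("bob", "dev"))
print(sorted(view))          # -> ['dev', 'qa']  ('dev' still derivable via ann)
delete(view, ("ann", "qa"))
print(sorted(view))          # -> ['dev']
```

Only counts are stored, not the derivations themselves, matching the point made in the abstract.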

Journal ArticleDOI
Anwar Elwalid1, Debasis Mitra1
TL;DR: It is shown that for general Markovian traffic sources it is possible to assign a notional effective bandwidth to each source that is an explicitly identified, simply computed quantity with provably correct properties in the natural asymptotic regime of small loss probabilities.
Abstract: A prime instrument for controlling congestion in a high-speed network is admission control, which limits calls and guarantees a grade of service determined by delay and loss probability in the multiplexer. It is shown that for general Markovian traffic sources it is possible to assign a notional effective bandwidth to each source that is an explicitly identified, simply computed quantity with provably correct properties in the natural asymptotic regime of small loss probabilities. It is the maximal real eigenvalue of a matrix that is directly obtained from the source characteristics and the admission criterion, and for several sources it is simply additive. Both fluid and point process models are considered. Numerical results show that the acceptance set for heterogeneous classes of sources is closely approximated and conservatively bounded by the set obtained from the effective bandwidth approximation. The bandwidth-reducing properties of the leaky bucket regulator are exhibited numerically.
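Since the effective bandwidth is the maximal real eigenvalue of a matrix built from the source and the admission criterion, the only numerics needed at admission time is a dominant-eigenvalue computation. The sketch below does this by power iteration on a made-up nonnegative matrix (not one derived from an actual traffic source).

```python
def dominant_eigenvalue(M, iters=200):
    """Estimate the maximal real eigenvalue of a nonnegative matrix
    by power iteration with infinity-norm scaling."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(abs(x) for x in w)   # current eigenvalue estimate
        v = [x / lam for x in w]       # renormalize the iterate
    return lam

M = [[2.0, 1.0],
     [1.0, 2.0]]                       # toy matrix with eigenvalues 3 and 1
print(round(dominant_eigenvalue(M), 6))  # -> 3.0
```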

Journal ArticleDOI
TL;DR: A four-pass algorithm for drawing directed graphs is presented, which creates good drawings and is fast.
Abstract: A four-pass algorithm for drawing directed graphs is presented. The first pass finds an optimal rank assignment using a network simplex algorithm. The second pass sets the vertex order within ranks by an iterative heuristic, incorporating a novel weight function and local transpositions to reduce crossings. The third pass finds optimal coordinates for nodes by constructing and ranking an auxiliary graph. The fourth pass makes splines to draw edges. The algorithm creates good drawings and is fast.

Journal ArticleDOI
19 Nov 1993-Science
TL;DR: Size-selective precipitation and size-exclusion chromatography cleanly separate the silicon nanocrystals from larger crystallites and aggregates and provide direct evidence for quantum confinement in luminescence.
Abstract: The dynamics and spectroscopy of silicon nanocrystals that emit at visible wavelengths were analyzed. Size-selective precipitation and size-exclusion chromatography cleanly separate the silicon nanocrystals from larger crystallites and aggregates and provide direct evidence for quantum confinement in luminescence. Measured quantum yields are as high as 50 percent at low temperature, principally as a result of efficient oxide passivation. Despite a 0.9-electron-volt shift of the band gap to higher energy, the nanocrystals behave fundamentally as indirect gap materials with low oscillator strength.

Journal ArticleDOI
TL;DR: In this article, high pressure "hydrogen loading" has been used to sensitise standard singlemode fibres, resulting in the largest reported UV induced index changes for low GeO/sub 2/ fibres.
Abstract: High pressure 'hydrogen loading' has been used to sensitise standard singlemode fibres, resulting in the largest reported UV induced index changes for low GeO2 fibres. Grating bandwidths of 4 nm and peak Δn values of 5.9×10⁻³ have been reproducibly achieved. Substantial index changes have also been achieved by rapidly heating H2-loaded fibres of various compositions.

Journal ArticleDOI
Kuldip K. Paliwal, B. Atal
TL;DR: It is shown that the split vector quantizer can quantize LPC information in 24 bits/frame with an average spectral distortion of 1 dB and less than 2% of the frames having spectral distortion greater than 2 dB.
Abstract: For low bit rate speech coding applications, it is important to quantize the LPC parameters accurately using as few bits as possible. Though vector quantizers are more efficient than scalar quantizers, their use for accurate quantization of linear predictive coding (LPC) information (using 24-26 bits/frame) is impeded by their prohibitively high complexity. A split vector quantization approach is used here to overcome the complexity problem. An LPC vector consisting of 10 line spectral frequencies (LSFs) is divided into two parts, and each part is quantized separately using vector quantization. Using the localized spectral sensitivity property of the LSF parameters, a weighted LSF distance measure is proposed. With this distance measure, it is shown that the split vector quantizer can quantize LPC information in 24 bits/frame with an average spectral distortion of 1 dB and less than 2% of the frames having spectral distortion greater than 2 dB. The effect of channel errors on the performance of this quantizer is also investigated and results are reported.
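The splitting trick can be sketched directly: a 10-dimensional LSF vector is divided into two 5-dimensional halves, each quantized against its own small codebook, so the search cost grows with the sum rather than the product of codebook sizes. The codebooks and input vector below are toy values, and plain squared distance stands in for the paper's weighted LSF distance.

```python
def nearest(vec, codebook):
    """Index of the codeword closest to vec under squared distance."""
    return min(range(len(codebook)),
               key=lambda k: sum((v - c) ** 2 for v, c in zip(vec, codebook[k])))

lsf = [0.3, 0.5, 0.9, 1.3, 1.6, 1.9, 2.2, 2.5, 2.8, 3.0]  # toy LSF vector
codebook_lo = [[0.2, 0.5, 0.8, 1.2, 1.5], [0.4, 0.7, 1.1, 1.4, 1.8]]
codebook_hi = [[1.8, 2.1, 2.4, 2.6, 2.9], [2.0, 2.3, 2.6, 2.9, 3.1]]

i_lo = nearest(lsf[:5], codebook_lo)   # index transmitted for the lower half
i_hi = nearest(lsf[5:], codebook_hi)   # index transmitted for the upper half
decoded = codebook_lo[i_lo] + codebook_hi[i_hi]
print(i_lo, i_hi, len(decoded))        # two small indices instead of 10 floats
```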

Journal ArticleDOI
Lawrence O'Gorman
TL;DR: The document spectrum (or docstrum) as discussed by the authors is a method for structural page layout analysis based on bottom-up, nearest-neighbor clustering of page components, which yields an accurate measure of skew, within-line, and between-line spacings and locates text lines and text blocks.
Abstract: Page layout analysis is a document processing technique used to determine the format of a page. This paper describes the document spectrum (or docstrum), which is a method for structural page layout analysis based on bottom-up, nearest-neighbor clustering of page components. The method yields an accurate measure of skew, within-line, and between-line spacings and locates text lines and text blocks. It is advantageous over many other methods in three main ways: independence from skew angle, independence from different text spacings, and the ability to process local regions of different text orientations within the same image. Results of the method shown for several different page formats and for randomly oriented subpages on the same image illustrate the versatility of the method. We also discuss the differences, advantages, and disadvantages of the docstrum with respect to other layout methods.

Journal ArticleDOI
H S Seung1, Haim Sompolinsky
TL;DR: It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic, and although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle.
Abstract: In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at an optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models.
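The population vector computation itself is simple: the estimate is the angle of the rate-weighted vector sum of the neurons' preferred directions. The 64-neuron cosine-tuned population below is an illustrative, noise-free assumption (the paper's analysis concerns stochastic responses and general tuning widths).

```python
import numpy as np

def population_vector(rates, preferred):
    """Estimate the stimulus direction as the angle of the rate-weighted
    vector sum of the neurons' preferred directions."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2.0 * np.pi)

# Hypothetical noise-free population: 64 cosine-tuned neurons with
# uniformly spaced preferred directions (tuning shape is an assumption).
preferred = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
stimulus = 1.0
rates = 1.0 + 0.5 * np.cos(stimulus - preferred)

estimate = population_vector(rates, preferred)
```

With uniform preferred directions and cosine tuning, the baseline terms cancel and the estimate recovers the stimulus exactly; with noisy, narrowly tuned neurons it degrades relative to maximum likelihood, as the abstract states.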

Proceedings ArticleDOI
01 Dec 1993
TL;DR: Two ways to augment EKE so that hosts do not store cleartext passwords are shown, one using digital signatures and one that relies on a family of commutative one-way functions.
Abstract: The encrypted key exchange (EKE) protocol is augmented so that hosts do not store cleartext passwords. Consequently, adversaries who obtain the one-way encrypted password file may (i) successfully mimic (spoof) the host to the user, and (ii) mount dictionary attacks against the encrypted passwords, but cannot mimic the user to the host. Moreover, the important security properties of EKE are preserved—an active network attacker obtains insufficient information to mount dictionary attacks. Two ways to accomplish this are shown, one using digital signatures and one that relies on a family of commutative one-way functions.
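The storage asymmetry the abstract describes can be illustrated with a minimal Schnorr-style identification sketch: the host stores only a one-way image g^H(password), so a stolen file permits dictionary attacks and host spoofing but not direct impersonation of the user. This is not the paper's augmented-EKE protocol (which additionally encrypts the exchange under the password), and the toy parameters P, G and the hash choice are assumptions far too weak for real use.

```python
import hashlib
import secrets

# Toy group parameters -- illustrative only, far too small for real security.
P = 2**64 - 59          # prime modulus (hypothetical choice)
G = 5                   # base element
Q = P - 1               # exponent arithmetic is done modulo p-1

def secret_from_password(password: str) -> int:
    """One-way mapping from password to exponent (hash choice is an assumption)."""
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % Q

def stored_verifier(password: str) -> int:
    """What the host stores: g^H(password) mod p, never the cleartext password."""
    return pow(G, secret_from_password(password), P)

def prove(password: str, challenge: int):
    """User proves knowledge of H(password) without revealing it."""
    x = secret_from_password(password)
    r = secrets.randbelow(Q)
    t = pow(G, r, P)                  # commitment
    s = (r + challenge * x) % Q       # response
    return t, s

def verify(verifier: int, challenge: int, t: int, s: int) -> bool:
    """Host checks g^s == t * v^c (mod p) using only the stored verifier."""
    return pow(G, s, P) == (t * pow(verifier, challenge, P)) % P
```

An attacker holding `stored_verifier(pw)` can still test candidate passwords offline (the dictionary attack the abstract concedes), but cannot answer the challenge without the exponent.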

Book ChapterDOI
Doron Peled1
28 Jun 1993
TL;DR: An algorithm is presented for constructing a state graph that contains at least one representative sequence for each equivalence class, along with a formula rewriting technique that allows a coarser equivalence relation among sequences, so that fewer representatives are needed.
Abstract: Checking that a given finite state program satisfies a linear temporal logic property suffers in many cases from severe space and time explosion. One way to cope with this is to reduce the state graph used for model checking. We define an equivalence relation between infinite sequences, based on infinite traces, such that for each equivalence class, either all or none of the sequences satisfy the checked formula. We present an algorithm for constructing a state graph that contains at least one representative sequence for each equivalence class. This allows applying existing model checking algorithms to the reduced state graph rather than to the larger full state graph of the program. It also allows model checking under fairness assumptions, and exploits these assumptions to obtain smaller state graphs. A formula rewriting technique is presented to allow a coarser equivalence relation among sequences, so that fewer representatives are needed.
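The underlying trace-equivalence idea can be illustrated on finite words rather than state graphs: two interleavings are equivalent if one can be rewritten into the other by commuting adjacent independent events, and model checking only needs one representative per class. This toy quotient is an assumption-laden sketch; the paper's algorithm instead constructs the reduced state graph on the fly and handles fairness and infinite sequences.

```python
from itertools import permutations

def trace_class(word, independent):
    """All words reachable from `word` by commuting adjacent independent letters."""
    seen, frontier = {word}, [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independent or (w[i + 1], w[i]) in independent:
                v = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    frontier.append(v)
    return seen

def representatives(words, independent):
    """One (lexicographically least) representative per equivalence class."""
    return {min(trace_class(w, independent)) for w in words}

# Example: events a and b are independent; c is dependent on both.
interleavings = set(permutations(("a", "b", "c")))
reps = representatives(interleavings, {("a", "b")})  # fewer words to check
```

Here six interleavings collapse to four representatives; with many independent events the reduction is exponential, which is the point of checking the reduced graph.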