
Showing papers on "Operator (computer programming)" published in 1998


Journal ArticleDOI
TL;DR: This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones, and proposes the max-tree as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators.
Abstract: This paper deals with a class of morphological operators called connected operators. These operators filter the signal by merging its flat zones. As a result, they do not create any new contours and are very attractive for filtering tasks where the contour information has to be preserved. This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones. The max-tree is proposed as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators. A formal definition of the various processing steps involved in the operator is proposed and, as a result, several lines of generalization are developed. First, the notion of connectivity and its definition are analyzed. Several modifications of the traditional approach are presented. They lead to connected operators that are able to deal with texture. They also allow the definition of connected operators with less leakage than the classical ones. Second, a set of simplification criteria are proposed and discussed. They lead to simplicity-, entropy-, and motion-oriented operators. The problem of using a nonincreasing criterion is analyzed. Its solution is formulated as an optimization problem that can be very efficiently solved by a Viterbi (1979) algorithm. Finally, several implementation issues are discussed showing that these operators can be very efficiently implemented.
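The paper's max-tree construction itself is more involved; as a toy illustration of the flat-zone representation that connected operators act on (4-connectivity, made-up example data, not the paper's algorithm), one can label the flat zones of an image as follows:

```python
from collections import deque

def flat_zones(image):
    """Label maximal 4-connected regions of constant value (flat zones).

    `image` is a list of equal-length rows of integers; returns a label map
    of the same shape. Toy illustration only -- not the paper's max-tree.
    """
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            value = image[sy][sx]
            labels[sy][sx] = current
            queue = deque([(sy, sx)])
            while queue:                      # flood fill of one flat zone
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w \
                            and labels[ny][nx] == -1 and image[ny][nx] == value:
                        labels[ny][nx] = current
                        queue.append((ny, nx))
            current += 1
    return labels

# Example: three flat zones in a 3x3 image.
print(flat_zones([[0, 0, 1],
                  [0, 2, 1],
                  [0, 2, 2]]))
```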

656 citations


Journal ArticleDOI
TL;DR: An extensive survey and comparative assessment of different existing methods for constructing the ABCs are presented and a new ABCs technique proposed in recent work is described, which allows one to obtain highly accurate ABCs in the form of certain (nonlocal) boundary operator equations.

617 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived a factorization of the far field operator F, proved that the ranges of (F*F)^{1/4} and G coincide, and gave an explicit characterization of the scattering obstacle which uses only the spectral data of F. This result is used to prove a convergence result for a recent numerical method proposed by Colton, Kirsch, Monk, Piana and Potthast.
Abstract: This paper is concerned with the inverse obstacle scattering problem for time harmonic plane waves. We derive a factorization of the far field operator F and prove that the ranges of (F*F)^{1/4} and G coincide. Then we give an explicit characterization of the scattering obstacle which uses only the spectral data of the far field operator F. This result is used to prove a convergence result for a recent numerical method proposed by Colton, Kirsch, Monk, Piana and Potthast. We illustrate this method by some numerical examples.
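The displayed formulas of this abstract were lost in extraction. As a hedged reconstruction of the range identity and the resulting shape characterization usually associated with this factorization approach (the notation G for the data-to-pattern operator and φ_z for the far field of a point source is assumed here, not quoted from the paper):

```latex
% Hedged reconstruction; notation is assumed, not quoted from the source.
\[
  \mathcal{R}\!\left((F^{*}F)^{1/4}\right) \;=\; \mathcal{R}(G),
\]
\[
  z \in D
  \;\Longleftrightarrow\;
  \phi_z \in \mathcal{R}\!\left((F^{*}F)^{1/4}\right),
  \qquad
  \phi_z(\hat{x}) \;=\; e^{-\,ik\,\hat{x}\cdot z},
\]
% so membership of a sampling point z in the obstacle D can be tested using
% only the spectral data of the far field operator F, e.g. via Picard's
% criterion applied to its singular system.
```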

553 citations


Journal ArticleDOI
TL;DR: In this paper, the representation of the n-spin correlation functions in terms of expectation values (in ferromagnetic reference state) of the operator entries of the quantum monodromy matrix satisfying Yang-Baxter algebra was derived.
Abstract: Form factors for local spin operators of the XXZ Heisenberg spin-1/2 finite chain are computed. Representation theory of Drinfel'd twists for the sl2 quantum affine algebra in finite dimensional modules is used to calculate scalar products of Bethe states (leading to the Gaudin formula) and to solve the quantum inverse problem for local spin operators in the finite XXZ chain. Hence, we obtain the representation of the n-spin correlation functions in terms of expectation values (in the ferromagnetic reference state) of the operator entries of the quantum monodromy matrix satisfying the Yang-Baxter algebra. This leads to the direct calculation of the form factors of the XXZ Heisenberg spin-1/2 finite chain as determinants of usual functions of the parameters of the model. A two-point correlation function for adjacent sites is also derived using similar techniques.

331 citations


Journal ArticleDOI
TL;DR: In this article, weak and strong comparison theorems were proved for solutions of differential inequalities involving a class of elliptic operators that includes the p-Laplacian. These theorems, together with the method of moving planes and the sliding method, were then used to obtain symmetry and monotonicity properties of solutions to quasilinear elliptic equations in bounded domains.
Abstract: We prove some weak and strong comparison theorems for solutions of differential inequalities involving a class of elliptic operators that includes the p-Laplacian operator. We then use these theorems together with the method of moving planes and the sliding method to obtain symmetry and monotonicity properties of solutions to quasilinear elliptic equations in bounded domains.

328 citations


Journal ArticleDOI
TL;DR: The classical theory of Calderon-Zygmund operators started with the study of convolution operators on the real line having singular kernels and has developed into a large branch of analysis covering a quite wide class of singular integral operators on abstract measure spaces (so called spaces of homogeneous type) as discussed by the authors.
Abstract: The classical theory of Calderon–Zygmund operators started with the study of convolution operators on the real line having singular kernels. (A typical example of such an operator is the so-called Hilbert transform, defined by Hf(t) = ∫_R f(s)/(t − s) ds.) Later the theory developed into a large branch of analysis covering a quite wide class of singular integral operators on abstract measure spaces (so-called "spaces of homogeneous type"). To see how far the theory has evolved during the last 30 years, it is enough to compare the classical textbook [St1] by Stein published in 1970 (which remains an excellent introduction to the subject) to the modern outline of the theory in [DJ], [St2], [Ch2], and [CW]. The only thing that has remained unchallenged until very recently was the doubling property of the measure, i.e., the assumption that for some constant C > 0, μ(B(x, 2r)) ≤ C μ(B(x, r)) for every ball B(x, r).

317 citations


Journal ArticleDOI
TL;DR: It turns out that computing the exponential of strictly elliptic operators in the wavelet system of coordinates yields sparse matrices (for a finite but arbitrary accuracy) and this observation makes the approach practical in a number of applications.

276 citations


Journal ArticleDOI
TL;DR: In this article, the convergence problem of Ishikawa and Mann iterative sequences for strongly pseudo-contractive mappings without the Lipschitz condition has been studied and the results presented in this paper improve and extend the corresponding results in [4, 5, 7, 10, 12, 15, 16] in the more general setting.

250 citations


Journal ArticleDOI
TL;DR: Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
Abstract: Traditional investigations with evolutionary programming for continuous parameter optimization problems have used a single mutation operator with a parametrized probability density function (PDF), typically a Gaussian. Using a variety of mutation operators that can be combined during evolution to generate PDFs of varying shapes could hold the potential for producing better solutions with less computational effort. In view of this, a linear combination of Gaussian and Cauchy mutations is proposed. Simulations indicate that both the adaptive and nonadaptive versions of this operator are capable of producing solutions that are statistically as good as, or better than, those produced when using Gaussian or Cauchy mutations alone.
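As a rough illustration of the idea (not the paper's exact operator: the mixing coefficient, step size, and the lack of self-adaptation here are made-up simplifications), a mutation that perturbs each parameter with a convex combination of a Gaussian and a Cauchy sample might look like this:

```python
import math
import random

def mixed_mutation(x, sigma=0.1, alpha=0.5):
    """Mutate a real-valued vector x with a convex combination of a Gaussian
    and a Cauchy perturbation (toy sketch; in practice alpha and sigma could
    themselves be self-adapted during evolution)."""
    child = []
    for xi in x:
        gauss = random.gauss(0.0, 1.0)
        # Standard Cauchy deviate via the inverse CDF.
        cauchy = math.tan(math.pi * (random.random() - 0.5))
        child.append(xi + sigma * (alpha * gauss + (1.0 - alpha) * cauchy))
    return child

print(mixed_mutation([0.0, 1.0, -2.0]))
```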

245 citations


Proceedings Article
01 Jan 1998
TL;DR: This paper presents a new mesh reduction algorithm which clearly reflects this meta scheme and efficiently generates decimated, high-quality meshes while observing global error bounds; most of the algorithms suggested in the literature are viewed as generic templates that leave the freedom to plug in specific instances of predicates.
Abstract: The decimation of highly detailed meshes has emerged as an important issue in many computer graphics related fields. A whole library of different algorithms has been proposed in the literature. By carefully investigating such algorithms, we can derive a generic structure for mesh reduction schemes which is analogous to a class of greedy algorithms for heuristic optimization. Particular instances of this algorithmic template allow adaptation to specific target applications. We present a new mesh reduction algorithm which clearly reflects this meta scheme and efficiently generates decimated high-quality meshes while observing global error bounds.

Introduction. In several areas of computer graphics and geometric modeling, the representation of surface geometry by polygonal meshes is a well-established standard. However, the complexity of the object models has increased much faster than the throughput of today's graphics hardware. Hence, in order to be able to display and modify geometric objects within reasonable response times, it is necessary to reduce the amount of data by removing redundant information from triangle meshes. A precise definition of the term redundancy in this context obviously depends on the application for which the decimated mesh is to be used. Technically speaking, the most important aspect is the approximation error, i.e., the modified mesh has to stay within a prescribed tolerance to the original data. From an optical point of view, local flatness of the mesh might be a better indicator of redundancy. It is natural that applications as different as rendering and finite element analysis also put their emphasis on the preservation of different aspects of the simplified geometric shape. In recent years, a host of proposed algorithms for mesh reduction has been applied successfully to level-of-detail generation [14, 2], progressive transmission [6], and reverse engineering [1]. See [15] for an overview of some relevant literature. We consider most of the suggested algorithms as generic templates leaving the freedom to plug in specific instances of predicates. For example, each algorithm is based on a scalar-valued oracle which indicates the degree of redundancy of a particular vertex, edge, or triangle. Depending on the target application, different choices for this oracle are appropriate, but this does not affect the algorithmic structure of the scheme. On the most abstract level, there are two different basic approaches to finding a coarser approximation of a given polygonal mesh. The one is to build the new mesh without necessarily inheriting the topology of the original, and the other is to obtain the new mesh by (iteratively) modifying the original without changing the topology. Having a topologically simplified model of the original mesh is useful in applications where the topology itself does not carry crucial information. For example, when rendering remote objects, small holes can be removed without affecting the quality, but for a finite element simulation on the same object the holes might be important to obtain reliable results. In this paper we will analyze incremental mesh reduction, i.e., algorithms that reduce the mesh complexity by the iterative application of simple topological operations instead of completely reorganizing the mesh. We will identify the slots where custom-tailored predicates or operators can be inserted and will give recommendations when to use which. We then present an original mesh reduction algorithm based on these considerations. The algorithm is fast according to Schroeder's recent definition [17] yet allows global error control with respect to the geometric Hausdorff distance. The scheme is validated in the result section by showing and discussing some examples.

Relevant algorithmic aspects. The topology-preserving mesh reduction schemes typically use a simple operation which removes a small submesh and retriangulates the remaining hole. Some schemes use local optimization to find the best retriangulation. To control the decimation process, a scalar-valued predicate induces a priority ordering on the set of candidates for being removed. This predicate can be based purely on distance measures between the original and the reduced mesh, or it can additionally take local flatness into account. This macroscopic description matches most of the known incremental mesh reduction schemes. Due to the overwhelming variety of different algorithms that have been proposed in the literature, several authors have attempted to identify important features and classify the different approaches accordingly [16, 15, 3]. We do not want to add another survey; we just give an abridged overview. We will focus on three fundamental ingredients that are necessary (and sufficient) to build your own mesh reduction algorithm. The ingredients are a topological operator to modify the mesh locally, a distance measure to check whether the maximum tolerance is not violated, and a fairness criterion to evaluate the quality of the current mesh.

Topological operators. The classical scheme of [18] removes a single vertex v and retriangulates its crown. Thus, in every step, a patch of n triangles (n being the valence of v) is replaced by a new patch with n − 2 triangles. In general, a local edge-swapping optimization is necessary to guarantee a reasonable quality of the retriangulated patch. In [6], edges pq are collapsed into a new vertex r, which removes two triangles from the mesh. This operation can also be understood as submesh removal and retriangulation. In this case the local connectivity of the retriangulation is fixed, but the optimal location for r is determined by a local energy minimization heuristic. We could cut out larger submeshes from the original mesh, but this would require a more sophisticated treatment of special cases. A nice property of the basic vertex-removal and edge-collapse operators is that consistency preservation is easy to guarantee. We just have to check the injectivity of the crown of the vertex v or the edge pq, respectively. The rejection of all operations that would lead to complex vertices or edges is the reason why most incremental schemes do not change the global topology of a mesh. Our observation when testing different reduction schemes on a variety of meshed models is that the underlying topological operator on which an algorithm is based does not have a significant impact on the results. The quality of the resulting mesh turns out to be much more sensitive to the criteria which decide where to apply the next reduction operation. Hence, we recommend making the topological operator itself as simple as possible, i.e., eliminating all geometric degrees of freedom. Concluding from these considerations, we suggest the use of what we call the half-edge collapse. A common way to store orientable triangle meshes is the half-edge structure [13], where an undirected edge pq is represented by two directed halves p → q and q → p. Collapsing the half-edge p → q means to pull the vertex q into p and to remove the triangles that have become singular. This topological operator's major advantage is that it does not contain any unset degrees of freedom which would have to be determined by local optimization. If we treat the two half-edge mates as separate entities, then the only decision is whether a particular collapse is to be performed or not. Moreover, the reduction operation does not "invent" new geometry by letting some heuristic decide about the position of r. The vertices of the decimated mesh are always a proper subset of the original vertices. The half-edge collapse can be understood as a vertex removal without the freedom of choosing the triangulation, or as an edge collapse without the freedom of setting the position of the new vertex. Figure 1 shows the submeshes involved in the basic topological operations. (Figure 1: vertex removal, edge collapse, and half-edge collapse.)
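As a hedged sketch of the half-edge collapse described above (a minimal indexed-triangle-list version, not the paper's half-edge data structure, and with the link-condition/consistency test omitted), collapsing q into p amounts to rewriting q as p in every face and dropping the faces that become degenerate:

```python
def half_edge_collapse(faces, p, q):
    """Collapse vertex q into vertex p on a triangle mesh given as a list of
    (i, j, k) vertex-index triples. Triangles incident to the edge pq become
    degenerate (repeat a vertex) and are removed; all other references to q
    are rewritten to p. Toy sketch -- no consistency check is performed."""
    new_faces = []
    for tri in faces:
        tri = tuple(p if v == q else v for v in tri)
        if len(set(tri)) == 3:          # keep only non-degenerate triangles
            new_faces.append(tri)
    return new_faces

# A fan of four triangles around the edge (0, 1); collapsing 1 into 0
# removes the two triangles containing that edge.
faces = [(0, 1, 2), (1, 0, 3), (1, 2, 4), (0, 3, 5)]
print(half_edge_collapse(faces, p=0, q=1))
# -> [(0, 2, 4), (0, 3, 5)]
```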

235 citations


Book
30 Jun 1998
TL;DR: In this article, the authors present an overview of time-varying state space realization theory on non-uniform Hilbert spaces, including inner-outer factorization, J-unitary operators, and lossless cascade factorizations.
Abstract: Preface. 1. Introduction. Part I: Realization. 2. Notation and Properties of Non-Uniform Spaces. 3. Time-Varying State Space Realizations. 4. Diagonal Algebra. 5. Operator Realization Theory. 6. Isometric and Inner Operators. 7. Inner-Outer Factorization and Operator Inversion. Part II: Interpolation and Approximation. 8. J-Unitary Operators. 9. Algebraic Interpolation. 10. Hankel-Norm Model Reduction. 11. Low-Rank Matrix Approximation and Subspace Tracking. Part III: Factorization. 12. Orthogonal Embedding. 13. Spectral Factorization. 14. Lossless Cascade Factorizations. 15. Conclusions. Appendices: A. Hilbert Space Definitions and Properties. References. Glossary of Notation. Index.

Journal ArticleDOI
TL;DR: In this article, the complete and rigorous kernel of the Wheeler-DeWitt constraint operator for four-dimensional, Lorentzian, non-perturbative, canonical vacuum quantum gravity in the continuum was determined.
Abstract: We determine the complete and rigorous kernel of the Wheeler-DeWitt constraint operator for four-dimensional, Lorentzian, non-perturbative, canonical vacuum quantum gravity in the continuum. We do this for the non-symmetric version of the operator constructed previously in this series. We also construct a symmetric, regulated constraint operator. For the regulated Euclidean Wheeler-DeWitt operator as well as for the regulated generator of the Wick transform from the Euclidean to the Lorentzian regime we prove existence of self-adjoint extensions and based on these we propose a method of proof of self-adjoint extensions for the regulated Lorentzian operator. Both constraint operators evaluated at unit lapse as well as the generator of the Wick transform can be shown to have regulator-independent and symmetric duals on the diffeomorphism-invariant Hilbert space. Finally, we comment on the status of the Wick rotation transform in the light of the present results and give an intuitive description of the action of the Hamiltonian constraint.

Book ChapterDOI
27 Sep 1998
TL;DR: This paper investigates the usefulness of a new operator, inver-over, for an evolutionary algorithm for the TSP. The proposed operator is unary, since the inversion is applied to a segment of a single individual; however, the selection of the segment to be inverted is population driven, so the operator displays some characteristics of recombination.
Abstract: In this paper we investigate the usefulness of a new operator, inver-over, for an evolutionary algorithm for the TSP. Inver-over is based on simple inversion; however, knowledge taken from other individuals in the population influences its action. Thus, on the one hand, the proposed operator is unary, since the inversion is applied to a segment of a single individual; on the other hand, the selection of the segment to be inverted is population driven, so the operator displays some characteristics of recombination.
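A hedged, simplified sketch of one such population-driven inversion step (not the full operator from the paper, which iterates this step and includes termination rules; the probability value and helper names are made up): pick a city c in the tour, take a second city c2 either at random or as the successor of c in another individual, and invert the segment so that c2 ends up right after c.

```python
import random

def inver_over_step(tour, population, p_random=0.02):
    """One simplified inver-over-style step: choose c2 either randomly (with
    small probability) or as the successor of c in a randomly chosen donor
    tour, then invert a segment of `tour` so that c2 follows c."""
    child = tour[:]
    n = len(child)
    c = random.choice(child)
    if random.random() < p_random:
        c2 = random.choice([x for x in child if x != c])
    else:
        donor = random.choice(population)
        c2 = donor[(donor.index(c) + 1) % n]    # successor of c in the donor
    i, j = child.index(c), child.index(c2)
    if child[(i + 1) % n] == c2:                # already adjacent: nothing to do
        return child
    lo = (i + 1) % n
    if lo <= j:
        child[lo:j + 1] = reversed(child[lo:j + 1])
    else:
        # Wrap-around: rotating a cyclic tour does not change it, so rotate
        # first and then reverse the prefix ending at c2.
        child = child[lo:] + child[:lo]
        k = child.index(c2)
        child[:k + 1] = reversed(child[:k + 1])
    return child

pop = [[0, 1, 2, 3, 4, 5], [0, 2, 4, 1, 3, 5]]
print(inver_over_step(pop[0][:], pop))
```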

Journal ArticleDOI
17 Apr 1998-Science
TL;DR: A gradient-based systematic procedure for optimizing these transformations is described that finds the largest projection of a transformed initial operator onto the target operator and, thus, the maximum spectroscopic signal.
Abstract: Experiments in coherent magnetic resonance, microwave, and optical spectroscopy control quantum-mechanical ensembles by guiding them from initial states toward target states by unitary transformation. Often, the coherences detected as signals are represented by a non-Hermitian operator. Hence, spectroscopic experiments, such as those used in nuclear magnetic resonance, correspond to unitary transformations between operators that in general are not Hermitian. A gradient-based systematic procedure for optimizing these transformations is described that finds the largest projection of a transformed initial operator onto the target operator and, thus, the maximum spectroscopic signal. This method can also be used in applied mathematics and control theory.
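A rough numerical illustration of the quantity being optimized, the normalized projection of the propagated initial operator onto a (generally non-Hermitian) target operator; the spin-1/2 operators, the example "pulse" U, and the Frobenius normalization below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def transfer_efficiency(A, C, U):
    """Normalized projection <C, U A U^+> / (||A|| ||C||) of the transformed
    initial operator U A U^+ onto the target operator C, using the Frobenius
    inner product. Its magnitude is the quantity one would try to maximize."""
    num = np.trace(C.conj().T @ U @ A @ U.conj().T)
    den = np.linalg.norm(A) * np.linalg.norm(C)   # Frobenius norms
    return num / den

# Spin-1/2 example: transverse magnetization Ix driven toward the coherence I+ = Ix + i*Iy.
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]])
Iplus = Ix + 1j * Iy
U = np.diag(np.exp(-1j * np.array([0.3, -0.3])))  # an arbitrary z-rotation as "pulse"
print(abs(transfer_efficiency(Ix, Iplus, U)))     # magnitude of the normalized overlap
```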

Journal ArticleDOI
TL;DR: A local regularization operator on triangular or quadrilateral finite elements built on structured or unstructured meshes is developed and it is proved that it has the same optimal approximation properties as the standard interpolation operator.
Abstract: This paper develops a local regularization operator on triangular or quadrilateral finite elements built on structured or unstructured meshes. This operator is a variant of the regularization operator of Clement; however, ours is constructed via a local projection in a reference domain. We prove in this paper that it has the same optimal approximation properties as the standard interpolation operator, and we present some applications.

01 Jan 1998
TL;DR: This paper investigates the performance of simple tripartite GAs on a number of simple to complex test problems from a practical standpoint and recommends that when in doubt, the use of the crossover operator with an adequate population size is a reliable approach.
Abstract: Genetic algorithms (GAs) are multi-dimensional and stochastic search methods, involving complex interactions among their parameters. For the last two decades, researchers have been trying to understand the mechanics of GA parameter interactions by using various techniques. The methods include careful 'functional' decomposition of parameter interactions, empirical studies, and Markov chain analysis. Although the complex knot of these interactions is getting loose with such analyses, it still remains an open question for a newcomer to the field or for a GA practitioner as to what values of GA parameters (such as population size, choice of GA operators, operator probabilities, and others) to use in an arbitrary problem. In this paper, we investigate the performance of simple tripartite GAs on a number of simple to complex test problems from a practical standpoint. Since function evaluations are the most time-consuming part of a real-world problem, we compare different GAs for a fixed number of function evaluations. Based on probability calculations and simulation results, it is observed that for solving simple problems (unimodal or small-modality problems) the mutation operator plays an important role, although the crossover operator can also solve these problems. However, the two operators (when applied alone) have two different working zones for the population size. For complex problems involving massive multimodality and misleadingness (deception), the crossover operator is the key search operator and performs reliably with an adequate population size. Based on these studies, it is recommended that, when in doubt, the use of the crossover operator with an adequate population size is a reliable approach.
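For concreteness, here is a minimal sketch of the kind of simple GA being compared: binary tournament selection, one-point crossover, and bit-flip mutation, run for a fixed budget of fitness evaluations (the fitness function, rates, and budget are arbitrary assumptions, not the paper's test suite):

```python
import random

def simple_ga(fitness, n_bits=20, pop_size=30, pc=0.9, pm=0.05, budget=3000):
    """Selection + one-point crossover + bit-flip mutation on bit strings,
    run for a fixed number of fitness evaluations (sketch with made-up settings)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    evals = 0

    def evaluate(ind):
        nonlocal evals
        evals += 1
        return fitness(ind)

    fits = [evaluate(ind) for ind in pop]
    while evals < budget:
        def pick():                                    # binary tournament selection
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if fits[a] >= fits[b] else pop[b]
        p1, p2 = pick()[:], pick()[:]
        if random.random() < pc:                       # one-point crossover
            cut = random.randrange(1, n_bits)
            p1 = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < pm else g for g in p1]  # bit-flip mutation
        f = evaluate(child)
        worst = min(range(pop_size), key=lambda i: fits[i])
        if f > fits[worst]:                            # steady-state replacement
            pop[worst], fits[worst] = child, f
    return max(fits)

print(simple_ga(sum))   # onemax: best fitness found within the evaluation budget
```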

Journal ArticleDOI
TL;DR: This work proposes a unifying model that enables a uniform description of the problem of discovering association rules, and provides a SQL-like operator, named MINE RULE, which is capable of expressing all the problems presented so far in the literature concerning the mining of association rules.
Abstract: Data mining evolved as a collection of applicative problems and efficient solution algorithms relative to rather peculiar problems, all focused on the discovery of relevant information hidden in databases of huge dimensions. In particular, one of the most investigated topics is the discovery of association rules. This work proposes a unifying model that enables a uniform description of the problem of discovering association rules. The model provides a SQL-like operator, named MINE RULE, which is capable of expressing all the problems presented so far in the literature concerning the mining of association rules. We demonstrate the expressive power of the new operator by means of several examples, some of which are classical, while some others are fully original and correspond to novel and unusual applications. We also present the operational semantics of the operator by means of an extended relational algebra.
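To make the underlying task concrete, here is a generic support/confidence computation for a candidate association rule X⇒Y over transaction data; this is an illustration of the mining problem itself, not the MINE RULE syntax or semantics from the paper, and the toy data are made up:

```python
def rule_stats(transactions, body, head):
    """Support and confidence of the association rule body => head,
    where each transaction is a set of items. Generic illustration only."""
    body, head = set(body), set(head)
    n = len(transactions)
    n_body = sum(1 for t in transactions if body <= t)
    n_both = sum(1 for t in transactions if (body | head) <= t)
    support = n_both / n
    confidence = n_both / n_body if n_body else 0.0
    return support, confidence

baskets = [{"bread", "butter", "milk"},
           {"bread", "butter"},
           {"bread", "jam"},
           {"butter", "milk"}]
print(rule_stats(baskets, body={"bread"}, head={"butter"}))   # (0.5, 0.666...)
```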

Journal ArticleDOI
TL;DR: The results obtained indicate that the applicability of operator adaptation is dependent upon three basic assumptions being satisfied by the problem being tackled, including the ability of the operators to produce children of increased fitness.
Abstract: In the majority of genetic algorithm implementations, the operator settings are fixed throughout a given run. However, it has been argued that these settings should vary over the course of a genetic algorithm run, so as to account for changes in the ability of the operators to produce children of increased fitness. This paper describes an investigation into this question. The effect upon genetic algorithm performance of two adaptation methods, on both well-studied theoretical problems and a hard problem from operations research (the flowshop sequencing problem), is therefore examined. The results obtained indicate that the applicability of operator adaptation depends upon three basic assumptions being satisfied by the problem being tackled.
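One common way to realize such adaptation (a hedged sketch of a generic reward-based scheme, not necessarily either of the two methods examined in the paper) is to keep a selection probability per operator and shift probability mass toward operators whose recent children improved on their parents:

```python
def adapt_probabilities(probs, rewards, rate=0.1, floor=0.05):
    """Shift operator selection probabilities toward operators whose recent
    children improved on their parents. `rewards[i]` is e.g. the average
    fitness gain credited to operator i; generic sketch only."""
    total = sum(rewards)
    if total <= 0:
        return probs                           # no signal: leave the settings unchanged
    target = [r / total for r in rewards]
    new = [(1 - rate) * p + rate * t for p, t in zip(probs, target)]
    new = [max(p, floor) for p in new]         # keep every operator selectable
    s = sum(new)
    return [p / s for p in new]

probs = [0.5, 0.5]                             # e.g. crossover vs. mutation
print(adapt_probabilities(probs, rewards=[0.8, 0.2]))
```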

Patent
05 Jan 1998
TL;DR: In this article, a self-contained sleepiness monitor for steering wheel or dashboard mounting is used for individual driver/operator interrogation and response, combined with various objective sensory inputs, and translates these inputs into weighting factors to adjust a biological activity circadian rhythm reference model, in turn to provide an audio-visual sleepiness warning indication.
Abstract: A vehicle driver or machine operator sleepiness monitor (10), configured as a self-contained module (11), for steering wheel or dashboard mounting, provides for individual driver/operator interrogation and response, combined with various objective sensory inputs (13, 15, 27, 29) on vehicle condition and driver control action, and translates these inputs into weighting factors to adjust a biological activity circadian rhythm reference model, in turn to provide an audio-visual sleepiness warning indication (18).

Journal ArticleDOI
TL;DR: In this article, the relation between the vacuum eigenvalues of CFT Q-operators and spectral determinants of a one-dimensional Schroedinger operator with homogeneous potential was proven.
Abstract: The relation between the vacuum eigenvalues of CFT Q-operators and spectral determinants of a one-dimensional Schroedinger operator with homogeneous potential, recently conjectured by Dorey and Tateo for a special value of the Virasoro vacuum parameter p, is proven to hold, with a suitable modification of the Schroedinger operator, for all values of p.

Journal ArticleDOI
TL;DR: In this article, a factorization approximation of the T4 operator was proposed, which requires only an n7 procedure and provides results nearly identical to those obtained with the CCSDTQ-1 method.
Abstract: The general inclusion of the T4 operator into the coupled cluster equations requires an n10 computational procedure, and even n9 in the lowest order, as in the CCSDTQ-1 (coupled cluster singles, doubles, triples and lowest-order quadruples) method. That level of n-dependence makes it difficult to apply the method to larger systems. In this paper we circumvent this difficulty by a factorization approximation that requires only an n7 procedure, but that provides results nearly identical to those obtained with the CCSDTQ-1 method. This observation offers a practical and accurate method to go beyond the CCSDT (coupled cluster singles, doubles and triples) approach. We also consider noniterative CCSDT(Qf) (coupled cluster singles, doubles, triples and noniterative quadruples) and CCSD(TQf) (coupled cluster singles and doubles with noniterative triples and quadruples) methods.

Journal ArticleDOI
TL;DR: In this paper, the expectation values of the fields in the Bullough-Dodd model are derived by adopting the reflection relations which involve the reflection S-matrix of the Liouville theory, as well as a special analyticity assumption.

Journal ArticleDOI
TL;DR: In this paper, the authors define a hierarchy of clusters in Z2-Hilbert spaces and define a set of clusters associated with the Cartesian and tensor products associated with a cluster.
Abstract: (Table of contents.) 0. Introduction. 1. General Definitions: 1.1. Hilbert spaces. 1.2. Homomorphisms of commutative C*- and W*-algebras. 1.3. Permutations. 1.4. Clusters. 1.5. Cartesian and tensor products associated with a cluster. 1.6. Clusterings. 1.7. Cartesian and tensor products associated with a clustering. 2. Distinguishable Particles: 2.1. Elementary particles. 2.2. Clusters of elementary particles. 2.3. Asymptotic particles. 2.4. Clusters of asymptotic particles. 2.5. Identification operator. 2.6. Existence of the asymptotic velocity. 2.7. Short-range case. 2.8. Long-range case – free region. 2.9. Long-range case – asymptotic interacting Hamiltonian. 2.10. Long-range case – modified wave operators. 3. Second Quantization in the Category of Sets: 3.1. Second quantization of a set. 3.2. "Third quantization" of a set. 3.3. Permutations that preserve species. 3.4. Clusters with composition n ∈ Γ(E). 3.5. Permutations that preserve species and a clustering. 3.6. Clusterings associated with k ∈ Γ(E). 3.7. Identification operators. 4. Second Quantization in the Category of Z2-Hilbert Spaces: 4.1. Fock spaces.

Journal ArticleDOI
TL;DR: The transformations of all the Schrodinger operators with point interactions in dimension one under space reflection P, time reversal T and (Weyl) scaling Wλ are presented in this paper.
Abstract: The transformations of all the Schrodinger operators with point interactions in dimension one under space reflection P, time reversal T and (Weyl) scaling Wλ are presented. In particular, those operators which are invariant (possibly up to a scale) are selected. Some recent papers on related topics are commented upon.

Journal ArticleDOI
TL;DR: In this paper, the Mourre theory for an abstract class of fibered self-adjoint operators, called analytically fibered operators, is developed and a conjugate operator for which a Mourre estimate holds is constructed.

Patent
26 Feb 1998
TL;DR: In this article, an improved driver interface system provides prompts to an operator to adjust several vehicle parameters in order to adjust the necessary components upon start up of the vehicle, which is particularly important when the driver is unfamiliar with the car, as with a rental car.
Abstract: An improved driver interface system provides prompts to an operator to adjust several vehicle parameters. In this way, the operator is assured to adjust the necessary components upon start-up of the vehicle. This is particularly important when the driver is unfamiliar with the car, as with a rental car. In one embodiment, the driver may be provided with a transmitter that stores desired settings for at least some parameters. The transmitter can then transmit the operator's desired parameters to a control, and the vehicle components can then begin to be moved to the desired positions as the operator approaches the vehicle. In other aspects of this invention, improved switches are disclosed. A rotary switch rotates on a steering wheel rim. This switch is easily accessible to an operator, and does not require the operator to divert attention from the road. A second type of switch includes an element which is sensitive to touch by the operator. In this way, the operator can easily set a desired level by simply touching the switch. This type of switch has particular benefits in cruise control systems, or in systems for positioning vehicle components such as windows.

Journal ArticleDOI
TL;DR: In this paper, the relative modular operator is used to define a generalized relative entropy for any convex operator function g on the positive real line satisfying g(1) = 0, and these convex functions can be partitioned into convex subsets each of which defines a unique symmetrized relative entropy, a unique family of continuous monotone Riemannian metrics, and a unique geodesic distance on the space of density matrices.
Abstract: We use the relative modular operator to define a generalized relative entropy for any convex operator function g on the positive real line satisfying g(1) = 0. We show that these convex operator functions can be partitioned into convex subsets each of which defines a unique symmetrized relative entropy, a unique family (parameterized by density matrices) of continuous monotone Riemannian metrics, a unique geodesic distance on the space of density matrices, and a unique monotone operator function satisfying certain symmetry and normalization conditions. We describe these objects explicitly in several important special cases, including the familiar logarithmic relative entropy. The relative entropies, Riemannian metrics, and geodesic distances obtained by our procedure all contract under completely positive, trace-preserving maps. We then define and study the maximal contraction associated with these quantities.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the heat kernels of second-order elliptic operators with complex bounded measurable coefficients on R^n and obtained Gaussian bounds without further assumptions if n ⩽ 2, and under the assumption that the principal part has Hölder continuous coefficients if n ⩾ 3.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the spectrum of certain non-self-adjoint Schrodinger operators is unstable in the semi-classical limit; the JWKB method can be used to construct approximate semi-classical modes of the operator for energies far from the spectrum.
Abstract: We prove that the spectrum of certain non-self-adjoint Schrodinger operators is unstable in the semi-classical limit. Similar results hold for a fixed operator in the high energy limit. The method involves the construction of approximate semi-classical modes of the operator by the JWKB method for energies far from the spectrum.

Journal ArticleDOI
TL;DR: This paper investigated different readings of plural and reciprocal sentences and how they can be derived from syntactic surface structures in a systematic way, and showed that these readings result from different ways of inserting logical operators at the level of Logical Form.
Abstract: This paper investigates different readings of plural and reciprocal sentences and how they can be derived from syntactic surface structures in a systematic way. The main thesis is that these readings result from different ways of inserting logical operators at the level of Logical Form. The basic operator considered here is a cumulative mapping from predicates that apply to singularities onto the corresponding predicates that apply to pluralities. Given a theory which allows for free insertion of such operators, it can then be shown that the lexical semantics of the reciprocal expressions each other/one another consists of exactly two components, namely an anaphoric variable and a non-identity statement. This receives further support from the observation that it is exactly these two components that can be focused by only; all that remains to be done is to correctly manipulate these components at the level of LF.