
Showing papers on "Point (geometry)" published in 1990


Proceedings ArticleDOI
23 Oct 1990
TL;DR: The authors present a tool for the display and analysis of N-dimensional data based on a technique called dimensional stacking, which collapses an N-dimensional space down into a 2-D space and then renders the values contained therein.
Abstract: The authors present a tool for the display and analysis of N-dimensional data based on a technique called dimensional stacking. This technique is described. The primary goal is to create a tool that enables the user to project data of arbitrary dimensions onto a two-dimensional image. Of equal importance is the ability to control the viewing parameters, so that one can interactively adjust what ranges of values each dimension takes and the form in which the dimensions are displayed. This will allow an intuitive feel for the data to be developed as the database is explored. The system uses dimensional stacking to collapse an N-dimensional space down into a 2-D space and then render the values contained therein. Each value can then be represented as a pixel or rectangular region on a 2-D screen whose intensity corresponds to the data value at that point.
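
As a concrete illustration of the stacking step (not taken from the paper's implementation), here is a minimal Python sketch that assumes the data arrive as a 4-D NumPy array: two dimensions form a coarse outer grid and the remaining two are embedded inside each outer cell, so every value becomes one pixel intensity.

```python
import numpy as np

def dimensional_stack(data: np.ndarray) -> np.ndarray:
    """Collapse a 4-D array into a 2-D image by dimensional stacking.

    Two dimensions form a coarse outer grid; the remaining two are
    embedded inside each outer cell, so every N-D value maps to one
    pixel whose intensity is the data value at that point.
    """
    a1, a2, a3, a4 = data.shape
    # rows are indexed by (dim 1, dim 3), columns by (dim 2, dim 4)
    return data.transpose(0, 2, 1, 3).reshape(a1 * a3, a2 * a4)

# toy example: a 4-D scalar field sampled on a small grid
grid = np.random.rand(3, 4, 5, 6)      # hypothetical N-D data set
image = dimensional_stack(grid)        # 15 x 24 image of intensities
print(image.shape)
```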

279 citations


Journal ArticleDOI
TL;DR: In this article, the three-point Euclidean correlation functions were studied and a theorem relating their asymptotic behaviour in Euclidean time and for infinite space volume to the threshold time-like form factor and the scattering length was derived.

213 citations


Journal ArticleDOI
TL;DR: In this article, the authors generalize the MOSP to collections of approximately compact sets in metric spaces and define a sequence of successive projections (SOSP) in such a context and then proceed to establish conditions for the convergence of a SOSP to a solution point.
Abstract: Many problems in applied mathematics can be abstracted into finding a common point of a finite collection of sets. If all the sets are closed and convex in a Hilbert space, the method of successive projections (MOSP) has been shown to converge to a solution point, i.e., a point in the intersection of the sets. These assumptions are however not suitable for a broad class of problems. In this paper, we generalize the MOSP to collections of approximately compact sets in metric spaces. We first define a sequence of successive projections (SOSP) in such a context and then proceed to establish conditions for the convergence of a SOSP to a solution point. Finally, we demonstrate an application of the method to digital signal restoration.
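
For orientation, a minimal Python sketch of the classical MOSP in the Euclidean plane (a disc and a half-plane); the paper's generalization replaces these Euclidean projections with metric projections onto approximately compact sets in metric spaces, which this toy example does not attempt.

```python
import numpy as np

def project_disc(x, center, radius):
    """Metric projection of x onto a closed disc."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfplane(x, a, b):
    """Projection onto the half-plane {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def successive_projections(x0, n_iter=100):
    """Cycle through the projections; the limit lies in the intersection."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = project_disc(x, center=np.array([0.0, 0.0]), radius=1.0)
        x = project_halfplane(x, a=np.array([1.0, 1.0]), b=0.5)
    return x

print(successive_projections([3.0, 2.0]))   # a point in both sets
```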

125 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of finding the minimum number of viewpoints to see the entire surface, and of locating a fixed number of views to maximize the area visible, and possible extensions.
Abstract: The viewshed of a point on an irregular topographic surface is defined as the area visible from the point. The area visible from a set of points is the union of their viewsheds. We consider the problems of locating the minimum number of viewpoints to see the entire surface, and of locating a fixed number of viewpoints to maximize the area visible, and possible extensions. We discuss alternative methods of representing the surface in digital form, and adopt a TIN or triangulated irregular network as the most suitable data structure. The space is tessellated into a network of irregular triangles whose vertices have known elevations and whose edges join vertices which are Thiessen neighbours, and the surface is represented in each one by a plane. Visibility is approximated as a property of each triangle: a triangle is defined as visible from a point if all of its edges are fully visible. We present algorithms for determination of visibility, and thus reduce the problems to variants of the location set covering and maximal set covering problems. We examine the performance of a variety of heuristics.
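
Since the reduction to location set covering carries the algorithmic weight here, a minimal Python sketch of the standard greedy covering heuristic follows; the per-triangle visibility sets are assumed to be precomputed (that computation is what the paper's TIN algorithms supply), and the names are illustrative.

```python
def greedy_viewpoint_cover(visible_from: dict[str, set[int]]) -> list[str]:
    """Greedy heuristic for the location set covering problem:
    repeatedly pick the candidate viewpoint whose viewshed covers the
    most still-uncovered triangles, until every triangle is covered.
    """
    universe = set().union(*visible_from.values())
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(visible_from, key=lambda v: len(visible_from[v] & uncovered))
        gain = visible_from[best] & uncovered
        if not gain:                      # remaining triangles are seen by nobody
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# hypothetical viewsheds: triangle ids visible from each candidate point
viewsheds = {"p1": {1, 2, 3}, "p2": {3, 4}, "p3": {2, 4, 5, 6}}
print(greedy_viewpoint_cover(viewsheds))   # ['p3', 'p1']
```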

92 citations


Journal ArticleDOI
TL;DR: A new simple algorithm for the so-called largest empty rectangle problem, i.e., the problem of finding a maximum area rectangle contained in A and not containing any point of S in its interior, is presented.
Abstract: A rectangle A and a set S of n points in A are given. We present a new simple algorithm for the so-called largest empty rectangle problem, i.e., the problem of finding a maximum area rectangle contained in A and not containing any point of S in its interior. The computational complexity of the presented algorithm is O(n log n + s), where s is the number of possible restricted rectangles considered. Moreover, the expected performance is O(n · log n).
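
For intuition only, a brute-force Python baseline (not the paper's O(n log n + s) algorithm): every maximal empty rectangle has each side supported by a point of S or by the boundary of A, so enumerating those candidate sides and keeping the largest empty one finds the optimum on small inputs.

```python
from itertools import combinations

def largest_empty_rectangle(A, points):
    """Brute force over A = (x0, y0, x1, y1) and points S.

    Candidate sides come from the container edges and the point coordinates;
    a candidate rectangle is kept if no point of S lies strictly inside it.
    """
    x0, y0, x1, y1 = A
    xs = sorted({x0, x1, *(p[0] for p in points)})
    ys = sorted({y0, y1, *(p[1] for p in points)})
    best, best_area = None, 0.0
    for (lx, rx) in combinations(xs, 2):
        for (by, ty) in combinations(ys, 2):
            if any(lx < px < rx and by < py < ty for px, py in points):
                continue                      # not empty
            area = (rx - lx) * (ty - by)
            if area > best_area:
                best, best_area = (lx, by, rx, ty), area
    return best, best_area

pts = [(2, 3), (5, 1), (7, 6)]
print(largest_empty_rectangle((0, 0, 10, 8), pts))
# prints the best empty rectangle (x0, y0, x1, y1) and its area
```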

92 citations


01 Jan 1990
TL;DR: In this paper, a method of recursive model-space subdivision using binary space partitioning is presented to reduce the number of polygons processed during interactive building walkthroughs by an average factor of 30 and a worst case factor of at least 3.25.
Abstract: Pre-processing some building models can radically reduce the number of polygons processed during interactive building walkthroughs. New model-space subdivision and potentially visible set (PVS) calculation techniques, used in combination, reduce the number of polygons processed in a real building model by an average factor of 30, and a worst case factor of at least 3.25. A method of recursive model-space subdivision using binary space partitioning is presented. Heuristics are developed to guide the choice of splitting planes. The spatial subdivisions resulting from binary space partitioning are called cells. Cells correspond roughly to rooms. An observer placed in a cell may see features exterior to the cell through transparent portions of the cell boundary called portals. Computing the polygonal definitions of the portals is cast as a problem of computing a set difference operation on co-planar polygons. A plane-sweep algorithm to compute the set operations, union, intersection and difference, on co-planar sets of polygons is presented with an emphasis on handling real-world data. Two different approaches to computing the PVS for a cell are explored. The first uses point sampling and has the advantage that it is easy to trade time for results, but has the disadvantage of under-estimating the PVS. The second approach is to analytically compute a conservative over-estimation of the PVS using techniques similar to analytical shadow computation. An implementation of the Radiosity lighting model is described along with the issues involved in combining it with the algorithms described in this dissertation.
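
A minimal 2-D Python sketch of the recursive subdivision step is given below, with a crude splitting heuristic (choose the wall whose supporting line cuts the fewest other walls); the portal computation, PVS estimation and radiosity pass of the dissertation are not reproduced, and the data layout is assumed.

```python
def side(a, b, p, eps=1e-9):
    """+1 if p lies left of the directed line a->b, -1 if right, 0 if on it."""
    s = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return 0 if abs(s) < eps else (1 if s > 0 else -1)

def split(seg, a, b):
    """Split seg where it crosses the supporting line of the splitter a->b."""
    (px, py), (qx, qy) = seg
    nx, ny = -(b[1] - a[1]), b[0] - a[0]          # line normal
    t = ((a[0] - px) * nx + (a[1] - py) * ny) / ((qx - px) * nx + (qy - py) * ny)
    m = (px + t * (qx - px), py + t * (qy - py))
    return (seg[0], m), (m, seg[1])

def build_bsp(walls):
    """Recursive model-space subdivision: every node keeps its splitter and
    coplanar walls; the front/back children become the spatial cells."""
    if not walls:
        return None
    # crude heuristic: choose the splitter that cuts the fewest other walls
    splitter = min(walls, key=lambda s: sum(
        side(*s, w[0]) * side(*s, w[1]) < 0 for w in walls))
    a, b = splitter
    front, back, coplanar = [], [], []
    for w in walls:
        s0, s1 = side(a, b, w[0]), side(a, b, w[1])
        if s0 == 0 and s1 == 0:
            coplanar.append(w)
        elif s0 >= 0 and s1 >= 0:
            front.append(w)
        elif s0 <= 0 and s1 <= 0:
            back.append(w)
        else:                                      # wall straddles the splitter
            w1, w2 = split(w, a, b)
            (front if s0 > 0 else back).append(w1)
            (front if s1 > 0 else back).append(w2)
    return {"splitter": splitter, "walls": coplanar,
            "front": build_bsp(front), "back": build_bsp(back)}

# four outer walls of a two-room "building" plus an interior dividing wall
walls = [((0, 0), (6, 0)), ((6, 0), (6, 4)), ((6, 4), (0, 4)),
         ((0, 4), (0, 0)), ((3, 0), (3, 4))]
tree = build_bsp(walls)
```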

88 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: A definition and recognition algorithm for digital straight segments (DSS) is presented, together with a definition of digital circular arcs (DCA) that uses the notion of the centers of the pixels on both sides of a given curve.
Abstract: The definition and recognition algorithm for a digital straight segment (DSS) is presented. The points of a DSS must lie within a limited distance of the underlying straight edge. A recognition algorithm is given which uses only integer arithmetic and needs an average of about 10 such operations per point. The definition of a digital circular arc (DCA), which uses the notion of the centers of the pixels (a pixel is considered as an elementary rectangular area) on both sides of a given curve, is also given. The centers comprise two sets: the left and the right. The curve is a DCA if a Euclidean circle separating the two sets from each other exists. An efficient algorithm for finding all such circles is presented.
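
The distance criterion can be checked with integers only, since dist ≤ num/den is equivalent to cross² · den² ≤ num² · |b − a|². The following Python sketch shows that membership test for a candidate edge and tolerance; it illustrates the criterion, not the paper's recognition algorithm.

```python
def within_distance(points, a, b, num=1, den=2):
    """Check that every pixel centre lies within num/den of the line a->b,
    using only integer arithmetic: dist = |cross| / |b - a|, and
    dist <= num/den  <=>  cross^2 * den^2 <= num^2 * |b - a|^2.
    All inputs are integer pixel coordinates."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    len2 = dx * dx + dy * dy
    for (px, py) in points:
        cross = dx * (py - a[1]) - dy * (px - a[0])
        if cross * cross * den * den > num * num * len2:
            return False
    return True

# pixels of a rough digital segment from (0, 0) to (8, 3)
pixels = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 1), (5, 2), (6, 2), (7, 3), (8, 3)]
print(within_distance(pixels, (0, 0), (8, 3)))   # True for tolerance 1/2
```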

81 citations


Proceedings ArticleDOI
13 May 1990
TL;DR: The proposed algorithm achieves the smallest worst-case path length possible in its category, and its performance is compared with that of existing algorithms.
Abstract: Nonheuristic path planning for moving a point object, or mobile automaton (MA), in a two-dimensional plane amidst unknown obstacles is considered. A path is to be generated, point by point, using only local information, such as the MA's current position and whether it is in contact with an obstacle. A path-planning algorithm to solve this problem is proposed. The algorithm achieves the smallest worst-case path length possible in its category. The procedure for the algorithm is presented with explanations. Its various characteristics, such as local cycle creation, worst-case path length, and target reachability conditions, are dealt with. Its performance is compared with that of existing algorithms. Examples showing the operation of the algorithm are presented.

72 citations


Book ChapterDOI
01 Jun 1990
TL;DR: The paper provides a set of rules for the stepwise synthesis of all and only live and bounded Free Choice nets.
Abstract: The paper provides a set of rules for the stepwise synthesis of all and only live and bounded Free Choice nets. The starting point is the net composed of a circuit containing one place and one transition.

56 citations


Journal ArticleDOI
Ron Holzman1
TL;DR: It is shown that three independent axioms determine together a unique solution, located at the point that minimizes the sum of the squares of the distances to the users.
Abstract: The problem under consideration is that of locating a facility on a tree-network, given data specifying the locations of the users on the network. The approach taken is to formulate axioms that require consistent response of the solution to variations in the users' location data. It is shown that three independent axioms determine together a unique solution, located at the point that minimizes the sum of the squares of the distances to the users.
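
To make the characterized solution concrete, here is a brute-force numeric Python sketch (not the axiomatic derivation): evaluate the sum of squared distances to the users at the vertices and at sampled interior points of every edge of the tree, and keep the minimizer. The tree encoding and names are assumptions.

```python
import heapq

def vertex_distances(adj, src):
    """Shortest path lengths from src to every vertex (Dijkstra; on a tree
    the paths are unique, but Dijkstra keeps the sketch general)."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def squared_distance_minimiser(adj, users, samples=200):
    """Scan the vertices and sampled interior points of every edge for the
    point minimising the sum of squared distances to the users."""
    dist = {v: vertex_distances(adj, v) for v in adj}
    best = None
    for u in adj:
        for v, length in adj[u]:
            for k in range(samples + 1):
                t = length * k / samples            # offset from u along (u, v)
                cost = sum(min(dist[u][i] + t, dist[v][i] + length - t) ** 2
                           for i in users)
                if best is None or cost < best[0]:
                    best = (cost, (u, v), t)
    return best

# hypothetical tree: adjacency list with edge lengths, users at some vertices
adj = {"a": [("b", 2.0)], "b": [("a", 2.0), ("c", 1.0), ("d", 3.0)],
       "c": [("b", 1.0)], "d": [("b", 3.0)]}
users = ["a", "c", "d", "d"]        # a vertex may host several users
print(squared_distance_minimiser(adj, users))   # (20.75, ('b', 'd'), 0.75) here
```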

55 citations


Journal ArticleDOI
TL;DR: In this article, the joint probability for a closed Brownian curve to wind n times around a prescribed point and enclose a given algebraic area is computed, and an estimate from below of the arithmetic area is obtained.
Abstract: The authors compute the joint probability for a closed Brownian curve to wind n times around a prescribed point and to enclose a given algebraic area. An estimate from below of the arithmetic area is obtained.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: In this article, it was shown that a convex set in d dimensions can be solved in strongly polynomial time if, by deleting a constant number of rows and columns, it can be converted to a problem which is already known to be solvable in strongly polylogarithmic time.
Abstract: Consider a convex set in d dimensions. Assume that we are given a separation subroutine which, given a point, tells us whether this point is in the set. Moreover, if the point is not in the set, the subroutine separates the point from the set by a hyperplane. We show that, if d is fixed and the separation subroutine is linear in the input vector (by “linear” we mean that each comparison made by the subroutine is between two expressions that can be written as linear functions of the input vector), this implies that one can optimize a linear objective function over the convex set in time polynomial in the number of arithmetic operations used by the separation subroutine. We apply this result to extend the class of linear programms solvable in strongly polynomial time. We show that a problem can be solved in strongly polynomial time if, by deleting a constant number of rows and columns, it can be converted to a problem which is already known to be solvable in strongly polynomial time. For example, this yields a strongly polynomial algorithm for the concurrent multi-commodity flow problem.
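
A small Python sketch of the access model the result builds on: optimizing a linear objective over a convex body known only through a separation subroutine. Kelley-style cutting planes with SciPy's LP solver stand in for the paper's strongly polynomial machinery, and the unit-ball oracle is a toy stand-in.

```python
import numpy as np
from scipy.optimize import linprog

def separation_oracle(x):
    """Toy convex set: the unit ball. Return (True, None) if x is inside,
    otherwise (False, (a, b)) with a separating hyperplane a.y <= b."""
    n = np.linalg.norm(x)
    if n <= 1.0:
        return True, None
    a = x / n
    return False, (a, 1.0)          # supporting hyperplane of the ball

def maximize_linear(c, dim, box=10.0, max_iters=500, tol=1e-6):
    """Kelley-style cutting planes: solve an LP over the cuts collected so far,
    ask the oracle about the LP optimum, add its separating hyperplane as a
    new cut, and repeat until the optimum (nearly) lies in the set."""
    A, b = [], []
    x = np.zeros(dim)
    for _ in range(max_iters):
        res = linprog(-np.asarray(c),                 # linprog minimizes
                      A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if A else None,
                      bounds=[(-box, box)] * dim, method="highs")
        x = res.x
        inside, cut = separation_oracle(x)
        if inside:
            return x
        a, rhs = cut
        if a @ x - rhs < tol:                          # violation negligible
            return x
        A.append(a)
        b.append(rhs)
    return x

print(maximize_linear(c=[1.0, 2.0], dim=2))   # should approach (1, 2)/sqrt(5)
```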

Proceedings Article
01 Sep 1990
TL;DR: This paper considers the problem of indexing straight line segments to enable efficient retrieval of all line segments that go through a specified point, or intersect a specified line segment, and proposes a data organization, based on the Hough transform, that can be used to solve both retrieval problems efficiently.
Abstract: In several image applications, it is necessary to retrieve specific line segments from a potentially very large set. In this paper, we consider the problem of indexing straight line segments to enable efficient retrieval of all line segments that (i) go through a specified point, or (ii) intersect a specified line segment. We propose a data organization, based on the Hough transform, that can be used to solve both retrieval problems efficiently. In addition, the proposed structure can be used for approximate retrievals, finding all line segments that pass close to a specified point. We show, through analysis and experiment, that the proposed technique always does as well as or better than retrieval based on minimum bounding rectangles or line segment end-points.
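
A minimal Python sketch of the indexing idea under assumed bucket sizes: the supporting line of each segment is hashed by its Hough parameters (theta, rho); lines through a query point (x, y) satisfy rho = x·cos(theta) + y·sin(theta), so only buckets along that sinusoid are examined and each candidate is verified exactly. The paper's actual organization differs in its details.

```python
import math
from collections import defaultdict

T_BINS, R_RES = 180, 0.5          # angular bins and rho resolution (assumed)

def hough_params(seg):
    """(theta, rho) of the supporting line of a segment, theta in [0, pi)."""
    (x1, y1), (x2, y2) = seg
    theta = (math.atan2(y2 - y1, x2 - x1) + math.pi / 2) % math.pi
    rho = x1 * math.cos(theta) + y1 * math.sin(theta)
    return theta, rho

def build_index(segments):
    index = defaultdict(list)
    for seg in segments:
        theta, rho = hough_params(seg)
        index[(int(theta / math.pi * T_BINS), round(rho / R_RES))].append(seg)
    return index

def on_segment(seg, q, eps=1e-6):
    (x1, y1), (x2, y2) = seg
    cross = (x2 - x1) * (q[1] - y1) - (y2 - y1) * (q[0] - x1)
    inside = (min(x1, x2) - eps <= q[0] <= max(x1, x2) + eps and
              min(y1, y2) - eps <= q[1] <= max(y1, y2) + eps)
    return abs(cross) <= eps * math.hypot(x2 - x1, y2 - y1) and inside

def query_point(index, q):
    """Segments through q: walk the sinusoid rho = x cos t + y sin t in Hough
    space, collect candidate buckets, verify each candidate exactly."""
    hits = []
    for tb in range(T_BINS):
        theta = (tb + 0.5) * math.pi / T_BINS
        rho = q[0] * math.cos(theta) + q[1] * math.sin(theta)
        for dr in (-1, 0, 1):                       # neighbouring rho buckets
            for seg in index.get((tb, round(rho / R_RES) + dr), []):
                if seg not in hits and on_segment(seg, q):
                    hits.append(seg)
    return hits

segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((1, 3), (5, 3))]
idx = build_index(segs)
print(query_point(idx, (2, 2)))    # the two diagonals pass through (2, 2)
```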

Journal ArticleDOI
TL;DR: In this paper, an explicit formula for the time-dependent propagator of the Schrodinger equation with one point interaction in three dimensions is given, based on the inverse Laplace transformation applied to the corresponding resolvent.
Abstract: An explicit formula for the time-dependent propagator of the Schrodinger equation with one point interaction in three dimensions is given. The derivation is based on the inverse Laplace transformation applied to the corresponding resolvent.

Journal ArticleDOI
TL;DR: It is shown that Collins' classical quantifier elimination procedure contains most of the ingredients for an efficient point location algorithm in higher-dimensional space, which leads to a polynomial-size data structure that allows us to locate a point among a collection of real algebraic varieties of constant maximum degree.


Proceedings ArticleDOI
01 Jan 1990
TL;DR: Techniques for solving geometric closest-point and farthest-point query problems, in the presence of deletions, are presented, including efficient implementations of classical greedy heuristics for minimum-weight matching and maximum-weight matching.
Abstract: We present techniques for solving geometric closest-point and farthest-point query problems, in the presence of deletions. Applications include efficient implementations of classical greedy heuristics for minimum-weight matching (where our result improves on that of Bentley and Saxe) and maximum-weight matching.
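
For context, a Python sketch of the classical greedy heuristic being accelerated (quadratic, not the authors' data structures): a heap over all pairwise distances with lazy deletion plays the role of a closest-pair query structure that supports deleting both endpoints of each matched pair.

```python
import heapq
import math

def greedy_min_weight_matching(points):
    """Classical greedy heuristic: repeatedly match the closest unmatched
    pair and delete both points."""
    n = len(points)
    heap = [(math.dist(points[i], points[j]), i, j)
            for i in range(n) for j in range(i + 1, n)]
    heapq.heapify(heap)
    alive, matching = set(range(n)), []
    while heap and len(alive) > 1:
        d, i, j = heapq.heappop(heap)
        if i in alive and j in alive:          # skip pairs with deleted endpoints
            matching.append((i, j, d))
            alive.discard(i)
            alive.discard(j)
    return matching

pts = [(0, 0), (1, 0), (5, 5), (5, 6), (9, 1), (9, 2)]
print(greedy_min_weight_matching(pts))
# [(0, 1, 1.0), (2, 3, 1.0), (4, 5, 1.0)]
```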

Journal ArticleDOI
TL;DR: In this article, a field test of line transect sampling theory is presented, where a set of Beer cans are used to simulate point clusters of objects with cluster sizes 1, 2, 4, and 8.
Abstract: An important problem in line transect sampling is that objects or point clusters of objects of different sizes have different sighting probabilities. In a recent paper Drummer and McDonald (1987, Biometrics 43, 13-21) develop a bivariate sighting function. Their function is dependent on perpendicular distance and object size. One important special case is an extension of the exponential power series sighting function first proposed by Pollock (1978, Biometrics 34, 475-478). In this note empirical evidence is given for this model based on a field test of line transect sampling theory. Beer cans were used to simulate point clusters of objects with cluster sizes 1, 2, 4, and 8. To achieve approximately equal precision of parameter estimates, equal numbers of each cluster size were taken.

Journal ArticleDOI
TL;DR: The work that has led to practical algorithms for the static version of the problem is surveyed, and current research on the corresponding dynamic algorithms are discussed.
Abstract: Point location is a fundamental primitive in Computational Geometry. In the plane it is stated as follows: Given a subdivision ℛ of the plane and a query point q, determine the region of ℛ containing q. We survey the work that has led to practical algorithms for the static version of the problem, and discuss current research on the corresponding dynamic algorithms.
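
A naive Python baseline that fixes the problem statement (a linear scan with a ray-crossing point-in-polygon test), in contrast to the logarithmic-query structures the survey covers:

```python
def point_in_polygon(q, poly):
    """Ray-crossing test: count crossings of a rightward horizontal ray from q
    with the polygon's edges; an odd count means q is inside."""
    x, y = q
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def locate(q, subdivision):
    """Return the name of the region of the subdivision containing q
    (linear scan; the surveyed data structures answer this in O(log n))."""
    for name, poly in subdivision.items():
        if point_in_polygon(q, poly):
            return name
    return None

regions = {"left": [(0, 0), (2, 0), (2, 2), (0, 2)],
           "right": [(2, 0), (4, 0), (4, 2), (2, 2)]}
print(locate((3, 1), regions))     # 'right'
```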

Patent
18 Dec 1990
TL;DR: In this paper, an apparatus for measuring the temperature of a piston in an internal combustion engine is described, which includes a thermistor to sense the temperature and generate an electrical signal representative of that temperature, a transmission unit connected to the thermistor for receiving the electrical signal and for converting the signal to an infrared beam for transmission to a point remote from the piston.
Abstract: An apparatus for measuring the temperature of a piston in an internal combustion engine. The apparatus includes a thermistor to sense the temperature of the piston and generate an electrical signal representative of that temperature, a transmission unit connected to the thermistor for receiving the electrical signal and for converting the signal to an infrared beam for transmission to a point remote from the piston, and a receiver to receive the beam and convert the beam to an electrical signal corresponding to the electrical signal generated by the thermistor. The first mentioned electrical signal is converted to a rectangular wave form prior to transmission of the infrared beam.

Journal ArticleDOI
TL;DR: This problem is prototypical of a class of problems in computer vision, pattern recognition, and data fitting, and it is shown that given a tolerance one can determine the number of lines that should be fitted to a given point configuration.
Abstract: It is a simple problem to fit one line to a collection of points in the plane. But when the problem is generalized to two or more lines then the problem complexity becomes exponential in the number of points because we must decide on a partitioning of the points among the lines they are to fit. The same is true for fitting lines to points in three-dimensional space or hyperplanes to data points of high dimensions. We show that this problem despite its exponential complexity can be formulated as an optimization problem for which very good, but not necessarily optimal, solutions can be found by using an artificial neural network. Furthermore, we show that given a tolerance one can determine the number of lines (or planes) that should be fitted to a given point configuration. This problem is prototypical of a class of problems in computer vision, pattern recognition, and data fitting. For example, the method we propose can be used in reconstructing a planar world from range data or in recognizing point patterns in an image.
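
Below is a hedged Python sketch of an alternating heuristic for the same optimization, a k-means analogue for lines rather than the paper's neural-network formulation: assign each point to its nearest line, refit every line by total least squares, and repeat. Raising k until the worst residual falls below a tolerance mirrors the model-selection idea described above.

```python
import numpy as np

def fit_line(pts):
    """Total least squares: line through the centroid along the principal direction."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]                         # (point on line, unit direction)

def residuals(pts, line):
    c, d = line
    diff = pts - c
    return np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])   # perpendicular distances

def fit_k_lines(pts, k, iters=50, seed=0):
    """Alternate between assigning points to their nearest line and refitting
    each line to its assigned points."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(pts))
    lines = [None] * k
    for _ in range(iters):
        for j in range(k):
            members = pts[labels == j]
            if len(members) >= 2:
                lines[j] = fit_line(members)
        dist = np.column_stack(
            [residuals(pts, l) if l is not None else np.full(len(pts), np.inf)
             for l in lines])
        labels = dist.argmin(axis=1)
    return lines, labels

# synthetic data: noisy samples of y = x and y = -x + 4
rng = np.random.default_rng(1)
t = rng.uniform(0, 4, 60)
pts = np.vstack([np.c_[t[:30], t[:30]], np.c_[t[30:], 4 - t[30:]]])
pts += rng.normal(scale=0.05, size=pts.shape)
lines, labels = fit_k_lines(pts, k=2)
for line in lines:
    if line is not None:
        c, d = line
        print(c.round(2), d.round(2))   # directions near (0.71, ±0.71), up to sign
```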

Journal ArticleDOI
TL;DR: The proposed approach to camera parameter determination using line features will yield better results than other approaches using point landmarks, because image point positions are not involved in the solutions and space lines can be extracted more accurately than space points.

Journal ArticleDOI
TL;DR: In this article, a detailed analytical study of the design geometry, specification, point sharpening, and predictive mechanics of cutting models for thrust, torque, and power is presented; the preferred method of specification for the “general” Facet Point geometry requires seven drill point features.

Proceedings Article
29 Jul 1990
TL;DR: In this article, the authors argue that the assumption that the goal nodes for a given problem are distributed randomly along the fringe of the search tree is often invalid in practice, suggest that a more reasonable assumption is that decisions made at each point in the search carry equal weight, and show that a new search technique that is called iterative broadening leads to orders of magnitude savings in the time needed to search a space satisfying this assumption.
Abstract: Conventional blind search techniques generally assume that the goal nodes for a given problem are distributed randomly along the fringe of the search tree. We argue that this is often invalid in practice, suggest that a more reasonable assumption is that decisions made at each point in the search carry equal weight, and show that a new search technique that we call iterative broadening leads to orders-of-magnitude savings in the time needed to search a space satisfying this assumption. Both theoretical and experimental results are presented.
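
A compact Python sketch of iterative broadening on a generic tree: depth-first search is run with a breadth cutoff b on the number of children expanded per node, for b = 1, 2, ..., until the goal is found; the tree and goal below are toy placeholders.

```python
def depth_first(node, children, is_goal, breadth, depth_limit):
    """DFS that expands at most `breadth` children of every node."""
    if is_goal(node):
        return [node]
    if depth_limit == 0:
        return None
    for child in children(node)[:breadth]:         # breadth cutoff
        path = depth_first(child, children, is_goal, breadth, depth_limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_broadening(root, children, is_goal, max_breadth, depth_limit):
    """Search with breadth cutoff b = 1, 2, ... so that cheap, narrow searches
    are tried before the full-width (conventional) depth-first search."""
    for b in range(1, max_breadth + 1):
        path = depth_first(root, children, is_goal, b, depth_limit)
        if path is not None:
            return path, b
    return None, max_breadth

# toy binary tree encoded by strings: children of node s are s+'0' and s+'1'
children = lambda s: [s + "0", s + "1"]
goal = "0100"
path, b = iterative_broadening("", children, lambda s: s == goal,
                               max_breadth=2, depth_limit=4)
print(path, "found at breadth", b)
```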

Journal ArticleDOI
TL;DR: An automated vessel-tracking method based on the double-square-box region-of-search technique is presented for efficient tracking of the connected vascular tree in a digital subtraction angiography (DSA) image; computer reproductions of the tracked vessel trees appear to correspond well to the vessels in the DSA images.
Abstract: We are developing an automated vessel-tracking method based on the double-square-box region-of-search technique, for efficient tracking of the connected vascular tree in a digital subtraction angiography (DSA) image. Tracking points and branch vessels are located by searching of the perimeter of boxes, which are centered on previously determined tracking points. The most accurate results (90% true-positive rate with six false-positives per image) are obtained by tracking using the double-square-box method. In relatively straight regions of vessels, a large box is employed for efficient tracking; in curved regions of vessels, a small box is employed to ensure accurate tracking. When tracking is completed, accurate vessel information, ie, the vessel position, size, and contrast determined at each tracking point, is available for further quantitative analysis. Computer reproductions of tracked vessel trees appear to correspond well to those in DSA images.

Journal ArticleDOI
Lucia Lo1
TL;DR: In this paper, the authors present a spatial translog demand model that accounts for interdependence among travel alternatives and handles varying elasticities of substitution for various destination pairs, which is primarily relevant to the demand for shopping trips.
Abstract: With the view that travel behavior stems from the principle of utility maximization, in this paper I present a spatial translog demand model that accounts for interdependence among travel alternatives and that handles varying elasticities of substitution for various destination pairs. Using simulation as the mode of inquiry, this model describes the effect of spatial size, spatial configuration, and spatial substitution on spatial interaction. In addition to indicating how varying spatial sizes and configurations affect the average trip length and the trip making pattern, the simulation results also point out the possible effect of having spatially dependent locations in the system. Competing destinations increase the attractiveness of nearby locations, and complementary destinations reduce the impeding effect of space. The model is primarily relevant to the demand for shopping trips.


Journal ArticleDOI
TL;DR: In this article, new ways are suggested for calculating the point of smallest Euclidean norm in the convex hull of a given set of points in R^n; the resulting least-squares problem with nonnegative variables is solved efficiently by the active set method.
Abstract: This note suggests new ways for calculating the point of smallest Euclidean norm in the convex hull of a given set of points in R^n. It is shown that the problem can be formulated as a linear least-square problem with nonnegative variables or as a least-distance problem. Numerical experiments illustrate that the least-square problem is solved efficiently by the active set method. The advantage of the new approach lies in the solution of large sparse problems. In this case, the new formulation permits the use of row relaxation methods. In particular, the least-distance problem can be solved by Hildreth's method.
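
A small Python sketch of the underlying optimization (not of the active-set or row-relaxation schemes discussed in the note): the point of smallest norm in the convex hull of p_1, ..., p_m is P^T w for weights w ≥ 0 summing to one, which a general constrained solver handles directly on small instances.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_point(P):
    """Point of smallest Euclidean norm in conv{rows of P}:
    minimize ||P^T w||^2 over the simplex {w >= 0, sum w = 1}."""
    P = np.asarray(P, float)
    m = len(P)

    def obj(w):
        x = P.T @ w
        return x @ x

    def grad(w):
        return 2.0 * P @ (P.T @ w)

    res = minimize(obj, np.full(m, 1.0 / m), jac=grad, method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return P.T @ res.x, res.x

# three points whose convex hull does not contain the origin
pts = [(3.0, -1.0), (-1.0, 3.0), (4.0, 4.0)]
x, w = min_norm_point(pts)
print(x.round(3), w.round(3))   # ~ [1. 1.] with weights ~ [0.5 0.5 0.]
```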

Patent
13 Mar 1990
TL;DR: In this paper, a road-finishing machine with a laying beam and a control member for adjusting the height and inclination of the laying beam as well as measuring sensors for this purpose is described.
Abstract: The invention relates to a road-finishing machine with a laying beam (2) which is provided with control members (8) for adjusting the height and inclination of the laying beam (2) as well as measuring sensors for this purpose, the output signals of the measuring sensors serving as actual values for adjusters (10, 11) controlling the control members (8) in accordance with desired values which can be predetermined. In order to be able to automatically maintain transverse inclination values according to a predetermined profile plan, it is provided that a path measurement device (12) is provided, the output signals of which can be fed to an onboard computer (14), the onboard computer (14) being provided along the laying path with a data store (17) for storing the length of a transitional section and the differential value of the transverse inclination, which differential value is to be maintained between the starting and finishing point of the transitional section, and it being possible for the desired values, which can be predetermined and are calculated by the onboard computer (14) for the inclination adjuster (11), to be altered continually by the onboard computer (14) depending on the road surface from the starting to the finishing point.
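
A tiny Python sketch of the set-point computation the claim describes, under the assumption of a linear transition (variable names are illustrative, not from the patent): given the measured distance along the laying path, the onboard computer would interpolate the desired transverse inclination across the stored transitional section and hold the end values outside it.

```python
def desired_inclination(distance, section_start, section_length,
                        start_inclination, inclination_difference):
    """Continually updated set-point for the inclination adjuster:
    before the transitional section the stored start value is held,
    within it the value changes linearly by the stored differential,
    after it the finishing value is held."""
    if distance <= section_start:
        return start_inclination
    if distance >= section_start + section_length:
        return start_inclination + inclination_difference
    fraction = (distance - section_start) / section_length
    return start_inclination + fraction * inclination_difference

# e.g. ramp the cross-slope from 2.5 % to 4.0 % over a 40 m transition at 120 m
for d in (100, 120, 140, 160, 170):
    print(d, round(desired_inclination(d, 120.0, 40.0, 2.5, 1.5), 2))
```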