# Kokichi Sugihara

Other affiliations: University of Tokyo, Nagoya University, Purdue University

Bio: Kokichi Sugihara is an academic researcher at Meiji University whose work centers on the Voronoi diagram and the weighted Voronoi diagram. He has an h-index of 36 and has co-authored 215 publications receiving 8,601 citations. His previous affiliations include the University of Tokyo and Nagoya University.

##### Papers

01 Jan 1992

TL;DR: Defines the Voronoi diagram and its basic properties, surveys its generalizations and the algorithms for computing it, and develops applications including Poisson Voronoi diagrams, spatial interpolation, models of spatial processes, point pattern analysis, and locational optimization.

Abstract: Definitions and basic properties of the Voronoi diagram; generalizations of the Voronoi diagram; algorithms for computing Voronoi diagrams; Poisson Voronoi diagrams; spatial interpolation; models of spatial processes; point pattern analysis; locational optimization through Voronoi diagrams.

4,018 citations
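
As a quick illustration of what such an algorithm produces, here is a minimal sketch using `scipy.spatial.Voronoi` (a Qhull-backed implementation chosen for convenience, not one of the book's own algorithms):

```python
# Minimal sketch: computing an ordinary Voronoi diagram of a point set.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.random((10, 2))   # 10 generator points in the unit square
vor = Voronoi(points)

print(vor.vertices)       # coordinates of the Voronoi vertices
print(vor.regions)        # vertex indices bounding each cell (-1 marks unbounded)
print(vor.ridge_points)   # pairs of generators that share a Voronoi edge
```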

TL;DR: A ‘natural’ extension of the univariate kernel method to density estimation on a network is formulated, and it is proved that its estimator is biased; in particular, it overestimates the densities around nodes.

Abstract: We develop a kernel density estimation method for estimating the density of points on a network and implement the method in the GIS environment. This method could be applied to, for instance, finding 'hot spots' of traffic accidents, street crimes or leakages in gas and oil pipe lines. We first show that the application of the ordinary two-dimensional kernel method to density estimation on a network produces biased estimates. Second, we formulate a 'natural' extension of the univariate kernel method to density estimation on a network, and prove that its estimator is biased; in particular, it overestimates the densities around nodes. Third, we formulate an unbiased discontinuous kernel function on a network. Fourth, we formulate an unbiased continuous kernel function on a network. Fifth, we develop computational methods for these kernels and derive their computational complexity; and we also develop a plug-in tool for operating these methods in the GIS environment. Sixth, an application of the proposed methods to the density estimation of traffic accidents on streets is illustrated. Lastly, we summarize the major results and describe some suggestions for the practical use of the proposed methods.

330 citations
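
To make the node bias concrete, here is a minimal sketch of the naive network estimator the paper analyzes, assuming `networkx` and events snapped to nodes for simplicity; the paper's unbiased equal-split kernels instead divide kernel mass among the branches meeting at each node:

```python
# Naive network KDE: density at a query point is the sum of kernels of
# shortest-path distance to each event. Around a crossroads, each event's
# kernel spreads at full weight down every branch, so total kernel mass
# exceeds one per event -- the bias the paper proves and then removes.
import networkx as nx

G = nx.Graph()
for u, v, w in [("a", "c", 1.0), ("b", "c", 1.0), ("c", "d", 1.0), ("c", "e", 1.0)]:
    G.add_edge(u, v, length=w)   # a toy street network with a crossroads at "c"

events = ["a", "b", "d"]         # event locations (snapped to nodes)
h = 1.5                          # kernel bandwidth

def triangular_kernel(d, h):
    return max(0.0, 1.0 - d / h) / h

def naive_density(G, query, events, h):
    dist = nx.single_source_dijkstra_path_length(G, query, weight="length")
    return sum(triangular_kernel(dist[e], h) for e in events if e in dist)

for q in ["a", "c", "e"]:
    print(q, naive_density(G, q, events, h))
```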

27 May 2008

TL;DR: Surveys the basic toolbox of point pattern analysis: measures of dispersion based on quadrat analysis, measures of dispersion based on distance methods, and measures of arrangement.

Abstract: Introduction; measures of dispersion: quadrat analysis; measures of dispersion: distance methods; measures of arrangements; summary.

211 citations
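
As one concrete distance method, the sketch below computes the classic Clark-Evans nearest-neighbor ratio with `scipy`; this is a standard measure of the kind the chapter covers, not necessarily its exact formulation:

```python
# Clark-Evans ratio R = mean observed nearest-neighbor distance divided by
# the expectation 1 / (2 * sqrt(density)) under complete spatial randomness.
# R < 1 suggests clustering, R ~ 1 randomness, R > 1 regularity.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pts = rng.random((200, 2))           # 200 points in the unit square (area = 1)

d, _ = cKDTree(pts).query(pts, k=2)  # k=2: nearest neighbor other than self
observed = d[:, 1].mean()
expected = 1.0 / (2.0 * np.sqrt(len(pts) / 1.0))
print("Clark-Evans R =", observed / expected)
```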

13 Aug 2012

TL;DR: Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation, from Stochastic Point Processes on a Network and Network Voronoi Diagrams, to Network K-function and Point Density Estimation Methods, and the Network Huff Model.

Abstract: In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Processes on a Network and Network Voronoi Diagrams, to Network K-function and Point Density Estimation Methods, and the Network Huff Model. The authors also discuss and illustrate the undertaking of the statistical tests described in a Geographical Information System (GIS) environment as well as demonstrating the user-friendly free software package SANET.

196 citations
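
To hint at how one of these structures is built, the sketch below computes a node-level network Voronoi diagram on a toy street grid, assigning every node to its nearest generator by shortest-path distance (it assumes `networkx`; the SANET package mentioned above implements the full set of techniques):

```python
# Network Voronoi diagram: each node joins the cell of the generator
# (e.g., a store) that is closest along the network, not as the crow flies.
import networkx as nx

G = nx.grid_2d_graph(5, 5)                 # toy street grid
nx.set_edge_attributes(G, 1.0, "length")
generators = [(0, 0), (4, 4), (0, 4)]

# Multi-source Dijkstra: every reachable node gets a shortest path that
# starts at whichever generator is nearest to it.
dist, paths = nx.multi_source_dijkstra(G, generators, weight="length")
cell = {v: paths[v][0] for v in G.nodes}

for g in generators:
    print(g, "->", sum(1 for src in cell.values() if src == g), "nodes")
```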

##### Cited by

TL;DR: When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol.

Abstract: When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that causes the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers: since the throughput furnished to each user diminishes to zero as the number of users is increased, networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance.

9,008 citations
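
To get a feel for the Θ(W/√(n log n)) bound, the back-of-envelope sketch below evaluates the scaling for a few network sizes; the constants hidden by Θ are ignored, so only the downward trend is meaningful:

```python
# Per-node throughput scaling Theta(W / sqrt(n log n)): with the channel
# bandwidth W fixed, each node's share shrinks as the network grows.
import math

W = 1e6  # a 1 Mbit/s shared channel, purely illustrative
for n in [10, 100, 1_000, 10_000]:
    print(f"n = {n:>6}: ~{W / math.sqrt(n * math.log(n)):,.0f} bit/s per node")
```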


TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.

Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations

TL;DR: This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm, and provides empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory.

Abstract: The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algorithms for convex hull and Delaunay triangulation. We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. Computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of “thick” facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions.

5,050 citations
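
The algorithm described in this paper ships as the Qhull library, which `scipy.spatial.ConvexHull` wraps, so it can be exercised directly:

```python
# Convex hull of random 3-D points via Qhull's Quickhull implementation.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
pts = rng.random((30, 3))      # 30 points in the unit cube
hull = ConvexHull(pts)

print(hull.vertices)           # indices of the extreme points
print(hull.simplices.shape)    # triangular facets bounding the hull
print(hull.volume, hull.area)  # enclosed volume and surface area
```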

07 Aug 2002

TL;DR: In this paper, the authors describe decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks, which are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies.

Abstract: This paper describes decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks. The control laws are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies. These utility functions are studied in geographical optimization problems and they arise naturally in vector quantization and in sensor allocation tasks. The approach exploits the computational geometry of spatial structures such as Voronoi diagrams.

2,445 citations
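
The gradient flow on these utility functions drives each vehicle toward the centroid of its Voronoi cell; the sketch below approximates that behavior with a plain discrete Lloyd iteration over Monte Carlo samples of the region (an illustration of the centroidal-Voronoi idea, not the paper's continuous-time control law):

```python
# Lloyd-style coverage: repeatedly move each agent to the centroid of the
# sample points for which it is the nearest agent (its sampled Voronoi cell).
import numpy as np

rng = np.random.default_rng(3)
agents = rng.random((5, 2))          # 5 vehicle positions in the unit square
samples = rng.random((20_000, 2))    # uniform density over the region

for _ in range(30):
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    owner = d.argmin(axis=1)         # nearest-agent labels = sampled Voronoi cells
    for i in range(len(agents)):
        cell = samples[owner == i]
        if len(cell):
            agents[i] = cell.mean(axis=0)

print(agents)                        # settles near a centroidal (even-coverage) layout
```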