scispace - formally typeset
Author

Potter

Bio: Potter is an academic researcher from Kent State University. The author has contributed to research in topics: Concurrent computing & SIMD. The author has an h-index of 1 and has co-authored 1 publication receiving 66 citations.

Papers
Journal ArticleDOI
Potter
TL;DR: The massively parallel processor's computing power and extreme flexibility will allow the development of new techniques for scene analysis - real-time scene analysis, for example, in which the sensor can interact with the scene as needed.

Abstract: A review of the massively parallel processor (MPP) is provided. The MPP, a single instruction, multiple data parallel computer with 16K processors being built for NASA by Goodyear Aerospace, can perform over six billion eight-bit adds and 1.8 billion eight-bit multiplies per second. Its SIMD architecture and immense computing power promise to make the MPP an extremely useful and exciting new tool for all types of pattern recognition and image processing applications. The SIMD parallelism can be used to directly calculate 16K statistical pattern recognition results simultaneously. Moreover, the 16K processors can be configured into a two-dimensional array to efficiently extract features from a 128x128 subimage in parallel. The parallel search capability of SIMD architectures can be used to search recognition and production rules in parallel, thus eliminating the need to sort them. This feature is particularly important if the rules are dynamically changing. Finally, the MPP's computing power and extreme flexibility will allow the development of new techniques for scene analysis - real-time scene analysis, for example, in which the sensor can interact with the scene as needed.
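The SIMD model the abstract describes can be sketched in a few lines: one instruction is broadcast to all processing elements, each of which applies it to its own local operands in lockstep. This is a toy model of ours, not the MPP's actual instruction set; the function names are hypothetical.

```python
# Toy model of SIMD lockstep execution: one instruction (an 8-bit
# wraparound add, as an 8-bit PE ALU would compute) applied to every
# processing element's local operand pair simultaneously.
def simd_apply(instruction, plane_a, plane_b):
    """Broadcast one instruction across all PE-local operand pairs."""
    return [instruction(a, b) for a, b in zip(plane_a, plane_b)]

def add8(a, b):
    """8-bit add with wraparound (result masked to one byte)."""
    return (a + b) & 0xFF

# 16K processing elements viewed as a flat array (128 x 128 = 16384).
plane_a = [i % 256 for i in range(16384)]
plane_b = [7] * 16384
result = simd_apply(add8, plane_a, plane_b)
```

A real machine would execute the 16K adds in a single cycle rather than a loop; the point is only that one instruction stream drives all data streams.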

66 citations


Cited by
Journal ArticleDOI
TL;DR: The tutorial provided in this paper reviews both binary morphology and gray scale morphology, covering the operations of dilation, erosion, opening, and closing and their relations.
Abstract: For the purposes of object or defect identification required in industrial vision applications, the operations of mathematical morphology are more useful than the convolution operations employed in signal processing because the morphological operators relate directly to shape. The tutorial provided in this paper reviews both binary morphology and gray scale morphology, covering the operations of dilation, erosion, opening, and closing and their relations. Examples are given for each morphological concept and explanations are given for many of their interrelationships.
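The four operations the tutorial covers compose in a fixed way: opening is erosion followed by dilation, closing is dilation followed by erosion. A minimal pure-Python sketch for binary images, assuming a 3x3 cross structuring element (the function names and layout are ours, not the paper's):

```python
# Binary morphology with a 3x3 cross structuring element.
# Pixels outside the image are treated as 0.
OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 3x3 cross

def dilate(img):
    """A pixel is set if ANY pixel under the structuring element is set."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= r+dr < h and 0 <= c+dc < w and img[r+dr][c+dc]
                     for dr, dc in OFFSETS))
             for c in range(w)] for r in range(h)]

def erode(img):
    """A pixel is set only if ALL pixels under the element are set."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= r+dr < h and 0 <= c+dc < w and img[r+dr][c+dc]
                     for dr, dc in OFFSETS))
             for c in range(w)] for r in range(h)]

def opening(img):   # erosion then dilation: removes features smaller
    return dilate(erode(img))   # than the structuring element

def closing(img):   # dilation then erosion: fills holes smaller
    return erode(dilate(img))   # than the structuring element
```

Opening an image containing a single isolated pixel yields an empty image, which is exactly the small-defect-removal behavior that makes these operators useful for inspection tasks.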

2,676 citations

Proceedings ArticleDOI
12 Oct 2019
TL;DR: This is the first work to demonstrate in-memory computation with off-the-shelf, unmodified, commercial DRAM, by violating the nominal timing specification and activating multiple rows in rapid succession, which happens to leave multiple rows open simultaneously, thereby enabling bit-line charge sharing.
Abstract: In-memory computing has long been promised as a solution to the "Memory Wall" problem. Recent work has proposed using charge sharing on the bit-lines of a memory in order to compute in-place and with massive parallelism, all without having to move data across the memory bus. Unfortunately, prior work has required modification to RAM designs (e.g. adding multiple row decoders) in order to open multiple rows simultaneously. So far, the competitive and low-margin nature of the DRAM industry has made commercial DRAM manufacturers resist adding any additional logic into DRAM. This paper addresses the need for in-memory computation with little to no change to DRAM designs. It is the first work to demonstrate in-memory computation with off-the-shelf, unmodified, commercial DRAM. This is accomplished by violating the nominal timing specification and activating multiple rows in rapid succession, which happens to leave multiple rows open simultaneously, thereby enabling bit-line charge sharing. We use a constraint-violating command sequence to implement and demonstrate row copy, logical OR, and logical AND in unmodified, commodity DRAM. Subsequently, we employ these primitives to develop an architecture for arbitrary, massively-parallel computation. Utilizing a customized DRAM controller in an FPGA and commodity DRAM modules, we characterize this opportunity in hardware for all major DRAM vendors. This work stands as a proof of concept that in-memory computation is possible with unmodified DRAM modules and that there exists a financially feasible way for DRAM manufacturers to support in-memory compute.
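The logical primitives follow from a simple electrical intuition: when three rows share a bit-line, each bit-line settles toward the majority of the three stored bits. Under that (simplified) model of ours, pinning the third row to all-zeros yields AND and pinning it to all-ones yields OR:

```python
# Simplified software model of three-row charge sharing: each bit-line
# reads the majority of the three bits driving it. This is an
# illustration of the idea, not a model of the actual analog behavior.
def charge_share(row_a, row_b, row_c):
    """Each bit-line settles to the majority bit of the three open rows."""
    return [int(a + b + c >= 2) for a, b, c in zip(row_a, row_b, row_c)]

def bitwise_and(row_a, row_b):
    # MAJ(a, b, 0) == a AND b
    return charge_share(row_a, row_b, [0] * len(row_a))

def bitwise_or(row_a, row_b):
    # MAJ(a, b, 1) == a OR b
    return charge_share(row_a, row_b, [1] * len(row_a))
```

Because every column of the row computes at once, a single operation produces a result as wide as a DRAM row, which is the source of the massive parallelism the abstract refers to.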

112 citations

Journal ArticleDOI
01 Jul 1984
TL;DR: Comparison of the operation of the proposed optical logic gate with that of array logic in digital electronics leads to a design concept for an optical parallel array logic system suitable for optical parallel digital computing.
Abstract: A new, simple method of optically implementing optical parallel logic gates has been described. Optical parallel logic gates can be implemented by using a lensless shadow-casting system with a light-emitting diode (LED) array as a light source. Pattern logic, i.e., parallel logic for two binary patterns (variables), is simply obtained with these gates; this logic describes a complete set of logical operations on a large array of binary variables in parallel. Coding methods for input images are considered. Applications of the method for a parallel shift operation and optical digital image processing, processing of gray-level images, and parallel operations of addition and subtraction for two binary variables are presented. Comparison of the operation of the proposed optical logic gate with that of array logic in digital electronics leads to a design concept for an optical parallel array logic system suitable for optical parallel digital computing.
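"Pattern logic" means applying any of the sixteen two-input Boolean functions to every pixel pair of two binary patterns at once. A toy software model of ours (the shadow-casting system does this optically, with no per-pixel computation):

```python
# Toy model of pattern logic: a chosen two-input Boolean function,
# given as a truth table, is applied to every pixel pair of two binary
# patterns in parallel, mirroring what the shadow-casting gate does
# optically for the whole image at once.
def pattern_logic(truth_table, pattern_a, pattern_b):
    """truth_table maps (a, b) -> output bit; applied pixel-wise."""
    return [[truth_table[(a, b)] for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(pattern_a, pattern_b)]

# One of the 16 possible two-input functions: exclusive OR.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

A = [[1, 0], [1, 1]]
B = [[1, 1], [0, 1]]
```

Swapping the truth table (in the optical system, the LED illumination pattern) selects a different logical operation without touching the input patterns, which is what makes the gate set complete.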

109 citations

Patent
Hungwen Li, Ching-Chy Wang
10 Mar 1987
TL;DR: In this article, an array processor made up of adaptive processing elements can adapt dynamically to changes in its input data stream, and thus can be dynamically optimized, resulting in greatly enhanced performance at very low incremental cost.
Abstract: Equipping individual processing elements with an instruction adapter provides an array processor with adaptive spatial-dependent and data-dependent processing capability. The instruction becomes variable, at the processing element level, in response to spatial and data parameters of the data stream. An array processor can be optimized, for example, to carry out very different instructions on spatial-dependent data such as blank margin surrounding the black lines of a sketch. Similarly, the array processor can be optimized for data-dependent values, for example to execute different instructions for positive data values than for negative data values. Providing each processing element with a processor identification register permits an easy setup by flowing the setup values to the individual processing elements, together with setup of condition control values. Each individual adaptive processing element responds to the composite values of original setup and of the data stream to derive the instruction for execution during the cycle. In the usual operation, each adaptive processing element is individually addressed to set up a base instruction; it also is conditionally set up to execute a derived instruction instead of the base instruction. An array processor made up of adaptive processing elements can adapt dynamically to changes in its input data stream, and thus can be dynamically optimized, resulting in greatly enhanced performance at very low incremental cost.
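The mechanism the patent describes can be sketched as a processing element that holds a base instruction plus a condition, and substitutes a derived instruction whenever the condition matches the incoming datum. The class and field names below are our illustration, not the patent's terminology:

```python
# Hypothetical sketch of an adaptive processing element: the executed
# instruction is derived per cycle from the setup values and the data.
class AdaptivePE:
    def __init__(self, pe_id, base_op, condition, derived_op):
        self.pe_id = pe_id          # processor identification register
        self.base_op = base_op      # instruction set up for this PE
        self.condition = condition  # data-dependent predicate
        self.derived_op = derived_op  # executed when condition holds

    def step(self, datum):
        """Select and execute the instruction for this cycle."""
        op = self.derived_op if self.condition(datum) else self.base_op
        return op(datum)

# Example: a PE that normally doubles its input (base instruction) but
# negates it instead when the datum is negative (derived instruction).
pe = AdaptivePE(pe_id=0,
                base_op=lambda x: 2 * x,
                condition=lambda x: x < 0,
                derived_op=lambda x: -x)
```

An array of such elements, each addressed by its identification register during setup, behaves differently on blank margins than on sketch lines without any change to the broadcast instruction stream.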

93 citations

Journal ArticleDOI
R. Cypher, J. Sanz
TL;DR: A critical survey of parallel architectures and algorithms for image processing on single-instruction-stream, multiple-data-stream (SIMD) computers with nonshared memory and the effects of architectural decisions on algorithms design are examined.
Abstract: The authors present a critical survey of parallel architectures and algorithms for image processing. The emphasis is on single-instruction-stream, multiple-data-stream (SIMD) computers with nonshared memory. A number of parallel architectures are discussed and compared. They fall into three categories: mesh-connected computers, pyramid computers, and hypercube and related computers. A set of seven image processing and computer vision tasks is selected, and algorithms for performing these seven tasks on each of the three categories of computers are given. All of the algorithms are evaluated using asymptotic (big-Oh) analysis. An analysis of parallel algorithms for several pixel-level, intermediate-level, and high-level image processing tasks is presented. The effects of architectural decisions on algorithm design are examined in detail.
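One reason the architecture categories yield different asymptotic bounds is network diameter, the worst-case number of hops between two processing elements. A quick illustration of ours (not taken from the survey) for a 2-D mesh versus a hypercube:

```python
import math

# Worst-case hop count between two PEs, for n processing elements.

def mesh_diameter(n):
    """2-D mesh of side sqrt(n): corner to opposite corner."""
    side = int(math.isqrt(n))
    return 2 * (side - 1)        # O(sqrt(n))

def hypercube_diameter(n):
    """Hypercube with n = 2^d nodes: flip all d address bits."""
    return int(math.log2(n))     # O(log n)
```

For the 16K-processor scale discussed above, a mesh needs up to 254 hops while a hypercube needs at most 14, which is why global operations such as sorting or histogramming favor hypercube-class machines while local neighborhood operations suit the mesh.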

81 citations