
Showing papers on "Sorting" published in 1990


Journal ArticleDOI
27 Jul 1990-Cell

524 citations


Journal ArticleDOI
TL;DR: Resistance to distraction depends on the rate of reinforcers obtained in the presence of component stimuli but is independent of baseline response rates and response-reinforcer contingencies, demonstrating that the determination of resistance to change by stimulus-reinforcer relations is not confined to controlled laboratory settings or unique to the pigeon.
Abstract: Adults with mental retardation in a group home received popcorn or coffee reinforcers for sorting plastic dinnerware. In Part 1 of the experiment, reinforcers were dispensed according to a variable-interval 60-s schedule for sorting dinnerware of one color and according to a variable-interval 240-s schedule for sorting dinnerware of a different color in successive components of a multiple schedule. Sorting rates were similar in baseline, but when a video program was shown concurrently, sorting of dinnerware was more resistant to distraction when correlated with a higher rate of reinforcement. In Part 2 of the experiment, popcorn or coffee reinforcers were contingent upon sorting both colors of dinnerware according to variable-interval 60-s schedules, but additional reinforcers were given independently of sorting according to a variable-time 30-s schedule during one dinnerware-color component. Baseline sorting rate was lower but resistance to distraction by the video program was greater in the component with additional variable-time reinforcers. These results demonstrate that resistance to distraction depends on the rate of reinforcers obtained in the presence of component stimuli but is independent of baseline response rates and response-reinforcer contingencies. Moreover, these results are similar to those obtained in laboratory studies with pigeons, demonstrating that the determination of resistance to change by stimulus-reinforcer relations is not confined to controlled laboratory settings or unique to the pigeon.

205 citations


Proceedings ArticleDOI
01 Apr 1990
TL;DR: A deterministic sorting algorithm, called Sharesort, is presented that sorts n records on an n-processor hypercube, shuffle-exchange, or cube-connected cycles in O(log n (log log n)^2) time in the worst case.
Abstract: This paper presents a deterministic sorting algorithm, called Sharesort, that sorts n records on an n-processor hypercube, shuffle-exchange, or cube-connected cycles in O(log n (log log n)^2) time in the worst case. The algorithm requires only a constant amount of storage at each processor. The fastest previous deterministic algorithm for this problem was Batcher's bitonic sort, which runs in O(log^2 n) time.

153 citations
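As a point of reference for the O(log^2 n) baseline cited above, here is a minimal sequential sketch of Batcher's bitonic sort. On an n-processor hypercube every compare-exchange inside a merge stage is independent, which is where the O(log^2 n) parallel stages come from; the function names and the power-of-two length restriction are conveniences of this sketch, not details taken from the paper.

```python
def bitonic_sort(a, ascending=True):
    """Batcher's bitonic sort (sequential sketch of the parallel baseline)."""
    n = len(a)
    if n <= 1:
        return list(a)
    half = n // 2
    first = bitonic_sort(a[:half], True)      # ascending half
    second = bitonic_sort(a[half:], False)    # descending half -> whole is bitonic
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    """One merge stage: the compare-exchanges in the loop are all independent,
    so a parallel machine can perform them in a single step."""
    n = len(a)
    if n <= 1:
        return a
    half = n // 2
    for i in range(half):
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return (_bitonic_merge(a[:half], ascending) +
            _bitonic_merge(a[half:], ascending))

print(bitonic_sort([7, 3, 6, 0, 5, 1, 4, 2]))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```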


Book
01 Jan 1990
TL;DR: This book covers the predicate calculus, the guarded command language, and general programming techniques, as well as techniques for designing efficient programs.
Abstract: * Predicate Calculus * The Guarded Command Language * Quantifications * General Programming Techniques * Designing Efficient Programs * Searching * Segment Problems * Slope Search * Mixed Problems * Array Manipulations * Sorting * Auxiliary Arrays

147 citations



Patent
27 Jul 1990
TL;DR: In this paper, checks and deposit slips are processed by capturing a video image of each document and sorting each document into a predetermined rehandle pocket during a prime pass, prior to proof of deposit processing.
Abstract: Financial documents, such as checks and deposit slips, are processed by capturing a video image of each document and sorting each document into a predetermined rehandle pocket during a prime pass, prior to proof of deposit processing. Particular ones of the documents are selected for priority processing. Selected data is read from the captured images of the selected documents and any errors are corrected to create verified data, including balanced deposit information. The selected documents are then encoded with machine-readable data indicative of the balanced deposit information and are further sorted into predetermined kill pockets during a rehandle pass.

120 citations


Journal ArticleDOI
TL;DR: A constant-time sorting algorithm is derived for a three-dimensional processor array equipped with a reconfigurable bus system, a model that is far more feasible than the CRCW PRAM.

117 citations


Journal ArticleDOI

96 citations


Patent
29 Mar 1990
TL;DR: A mail document sorting device as discussed by the authors includes a document input feeder, and at least one singulation device for orienting and singulating the documents so that indicia on their faces can be disposed at a predetermined level about a data reference plane.
Abstract: A mail document sorting device includes a document input feeder, and at least one singulation device for orienting and singulating the documents so that indicia on their faces can be disposed at a predetermined level about a data reference plane. Single documents pass to an indicia reader, which generates indicia-indicating signals. An electronic/computer mechanism processes the indicating signals, and provides for sorting the read documents into bins. A plurality of the bins is located in side-by-side horizontal array, with an elongated belt disposed along the array of bins for moving documents received therefrom. The elongated belt has an inboard edge adjacent the array of bins and an outboard edge remote from the array of bins. The sorting device further includes a mechanism for moving documents from the bins onto the elongated belt, a locating device associated with the elongated belt for positioning an edge of documents on the elongated belt, a shingler for shingling documents received from the elongated belt, a transport for the shingled documents as they are discharged from the shingler, and a loader for placing the shingled documents sequentially into mail trays.

87 citations


Book
02 Jan 1990
TL;DR: This book introduces data structures and their applications, covering stacks, recursion, queues and lists, trees, sorting, searching, graphs and their applications, and storage management.
Abstract: 1. Introduction to Data Structures. 2. The Stack. 3. Recursion. 4. Queues and Lists. 5. Trees. 6. Sorting. 7. Searching. 8. Graphs and their Applications. 9. Storage Management.

78 citations


Book ChapterDOI
17 Dec 1990
TL;DR: This research addresses the problem of sorting n integers, each in the range {0, ..., m - 1}, in parallel on the PRAM model of computation, and also gives randomized chaining algorithms that run in O(1) time using n processors whenever m is not too close to n.
Abstract: We address the problem of sorting n integers, each in the range {0, ..., m - 1}, in parallel on the PRAM model of computation. We present a randomized algorithm that runs with very high probability in time O(lg n/lg lg n + lg lg m) with a processor-time product of O(n lg lg m) and O(n) space on the CRCW (Collision) PRAM [7]. The main feature of this algorithm is that it matches the run-time and processor requirements of the algorithms in the existing literature [2, 10], while assuming a weaker model of computation and using a linear amount of space. The techniques used extend to improved randomized algorithms for the problem of chaining [11, 15], which is the following: given an array x1, ..., xn, such that m of the locations contain non-zero elements, to chain together all the non-zero elements into a linked list. We give randomized algorithms that run in O(1) time using n processors, whenever m is not too close to n. A byproduct of our research is the weakening of the model of computation required by some other sorting algorithms.
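For readers who want a sequential point of comparison for this integer-sorting problem, the sketch below is plain counting sort over the same key range {0, ..., m-1}. It is not the paper's parallel algorithm, just the O(n + m) serial reference against which such PRAM algorithms are usually measured.

```python
def counting_sort(a, m):
    """Stable counting sort for integer keys in {0, ..., m-1}.
    Sequential reference point only; the paper's contribution is a
    randomized CRCW PRAM algorithm, not this routine."""
    count = [0] * m
    for x in a:                      # histogram of key values
        count[x] += 1
    pos = [0] * m                    # first output slot for each key value
    for k in range(1, m):
        pos[k] = pos[k - 1] + count[k - 1]
    out = [None] * len(a)
    for x in a:                      # place keys stably
        out[pos[x]] = x
        pos[x] += 1
    return out

print(counting_sort([3, 0, 2, 3, 1, 0], m=4))  # -> [0, 0, 1, 2, 3, 3]
```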

Journal ArticleDOI
TL;DR: This work has mapped the first sorting signal in a vacuolar membrane protein, repressible alkaline phosphatase, and has shown it to be both necessary and sufficient for vacuolar delivery of this enzyme.

Book
01 Jan 1990
TL;DR: Introduction to data structures, the stack, recursion, queues and lists, trees, sorting, searching, graphs and their applications, storage management.
Abstract: Introduction to data structures, the stack, recursion, queues and lists, trees, sorting, searching, graphs and their applications, storage management.

Proceedings ArticleDOI
Yoshihiro Shima, Takuhiro Murakami, Masashi Koga, H. Yashiro, Hiromichi Fujisawa
16 Jun 1990
TL;DR: A fast algorithm for component labeling of binary images is proposed; experiments show that it is 16.7 times faster than the conventional pixel-based labeling method.
Abstract: A fast algorithm for component labeling of binary images is proposed. Component labeling is an important method for separation of objects in document image understanding, especially in character and picture extraction. The proposed algorithm is based on sorting and tracking of runs and label propagation to the connected runs. Each scan line is partitioned into small segments, i.e. blocks, and the runs on the scan line are sorted according to the block sequence to speed up tracking of runs. Experiments showed the processing to be 16.7 times faster than that of the conventional pixel-based labeling method.
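A rough idea of run-based labeling, for readers unfamiliar with it, is sketched below: extract runs of foreground pixels per scan line and merge vertically overlapping runs with union-find. This is a simplified 4-connected version with assumed names and data structures; the paper's speedup comes from additionally sorting runs into per-block sequences, which this sketch does not reproduce.

```python
def label_components(image):
    """Run-based connected-component labeling (simplified 4-connectivity)."""
    # 1. Extract maximal runs of 1-pixels: (row, first_col, last_col).
    runs = []
    for r, line in enumerate(image):
        c = 0
        while c < len(line):
            if line[c]:
                start = c
                while c < len(line) and line[c]:
                    c += 1
                runs.append((r, start, c - 1))
            else:
                c += 1

    # 2. Union-find over run indices.
    parent = list(range(len(runs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # 3. Merge runs on adjacent rows whose column ranges overlap.
    #    (The paper sorts runs into blocks to avoid this quadratic scan.)
    for i, (r1, s1, e1) in enumerate(runs):
        for j, (r2, s2, e2) in enumerate(runs):
            if r2 == r1 + 1 and s2 <= e1 and s1 <= e2:
                parent[find(i)] = find(j)

    # 4. Paint component labels back into an output image (0 = background).
    labels, out = {}, [[0] * len(row) for row in image]
    for i, (r, s, e) in enumerate(runs):
        label = labels.setdefault(find(i), len(labels) + 1)
        for c in range(s, e + 1):
            out[r][c] = label
    return out

img = [[1, 1, 0, 0, 1],
       [0, 1, 0, 1, 1],
       [0, 0, 0, 0, 0],
       [1, 0, 1, 1, 0]]
for row in label_components(img):
    print(row)
```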

Journal ArticleDOI
01 Oct 1990
TL;DR: It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network and can be extended to more complicated functions, such as multiple products, division, rational functions, and approximation of analytic functions.
Abstract: A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.

Patent
13 Dec 1990
TL;DR: In this article, a document sorting and stacking device provides automated processing of randomly assembled batches of sheet items or objects such as negotiable instruments, currency, data cards, envelopes and the like which are fed into the unit in a continuing sequential progression for their separation and stacking, each in one of the designated multiple pockets.
Abstract: A document sorting and stacking device provides for automated processing of randomly assembled batches of sheet items (11) or objects such as negotiable instruments, currency, data cards, envelopes and the like which are fed into the unit (10) in a continuing sequential progression for their separation and stacking, each in one of the designated multiple pockets. The motor driven transport conveyor (29) of the unit moves the objects through the device at a known speed. If an object (35) is 'tagged' to enter a particular pocket (71), a computing microprocessor (19) controls a step-motor (49) to advance a sorting belt (38) to extend one of its capture fingers (39) into the transport path (23) for rendezvous with the object. As the belt is advanced further the capture finger closes upon the object and the belt pulls it forward for deposit in the designated stacking pocket. If the object is not 'tagged' for the pocket, the microprocessor holds the sorting belt in a position that hides the tip (45) of a capture finger behind a parking paw (42) to preclude rendezvous with the advancing object which then passes on to the next sorting and stacking stage of the device.

Journal ArticleDOI
TL;DR: A novel neural network parallel algorithm for sorting problems is presented that requires only two steps, and does not depend on the size of the problem, while the conventional parallel sorting algorithm using O(n) processors by F.T. Leighton (1984) needs computation time O(log n^2).
Abstract: A novel neural network parallel algorithm for sorting problems is presented. The proposed algorithm using O(n^2) processors requires only two steps, and does not depend on the size of the problem, while the conventional parallel sorting algorithm using O(n) processors by F.T. Leighton (1984) needs computation time O(log n^2). A set of simulation results substantiates the proposed algorithm. The hardware system based on the proposed parallel algorithm is also presented.
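The two-step idea maps naturally onto enumeration (rank) sort: step one performs all pairwise comparisons (conceptually one parallel step on n^2 processors), and step two writes each element to its rank. The sequential sketch below illustrates that idea only; it is not a reconstruction of the paper's neural network or hardware.

```python
def rank_sort(a):
    """Enumeration (rank) sort: each element's rank is the number of pairwise
    comparisons it wins, with ties broken by index so equal keys get
    distinct ranks."""
    n = len(a)
    out = [None] * n
    for i in range(n):
        rank = sum(1 for j in range(n)
                   if a[j] < a[i] or (a[j] == a[i] and j < i))
        out[rank] = a[i]          # "step two": place the element at its rank
    return out

print(rank_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```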


Journal ArticleDOI
TL;DR: A new topological ordering is defined which significantly reduces the time requirements for the fast box counting method proposed in a recent paper by Liebovitch and Toth.


Journal ArticleDOI
TL;DR: A review and feasibility study was undertaken to investigate methods of preprocessing envelope images to extract the address block from the image in the presence of other data, and to presort the addresses into sub-classes suitable for recognition by an OCR system with separate recognition channels for machine-printed and handwritten address classes.

Journal ArticleDOI
TL;DR: A parallelization of the Quicksort algorithm that is suitable for execution on a shared memory multiprocessor with an efficient implementation of the fetch-and-add operation is presented.
Abstract: A parallelization of the Quicksort algorithm that is suitable for execution on a shared memory multiprocessor with an efficient implementation of the fetch-and-add operation is presented. The partitioning phase of Quicksort, which has been considered a serial bottleneck, is cooperatively executed in parallel by many processors through the use of fetch-and-add. The parallel algorithm maintains the in-place nature of Quicksort, thereby allowing internal sorting of large arrays. A class of fetch-and-add-based algorithms for dynamically scheduling processors to subproblems is presented. Adaptive scheduling algorithms in this class have low overhead and achieve effective processor load balancing. The basic algorithm is shown to execute in an average of O(log(N)) time on an N-processor PRAM (parallel random-access machine) assuming a constant-time fetch-and-add. Estimated speedups, based on simulations, are also presented for cases when the number of items to be sorted is much greater than the number of processors.
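The cooperative-partitioning idea can be illustrated with a small threaded sketch: every worker claims an input slot and an output slot with fetch-and-add, so no element or output position is touched twice. This is only an out-of-place illustration under assumed names (FetchAndAdd, parallel_partition); the paper's algorithm partitions in place and also covers scheduling of the recursive subproblems.

```python
import threading

class FetchAndAdd:
    """Software stand-in for a hardware fetch-and-add register."""
    def __init__(self, value=0):
        self._value, self._lock = value, threading.Lock()
    def fetch_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old
    def value(self):
        return self._value

def parallel_partition(data, pivot, num_workers=4):
    """Cooperatively partition `data` around `pivot` (out of place)."""
    n = len(data)
    out = [None] * n
    next_in = FetchAndAdd(0)    # next unclaimed input index
    low = FetchAndAdd(0)        # next free slot from the left  (keys < pivot)
    high = FetchAndAdd(0)       # next free slot from the right (keys >= pivot)

    def worker():
        while True:
            i = next_in.fetch_add()
            if i >= n:
                return
            x = data[i]
            if x < pivot:
                out[low.fetch_add()] = x
            else:
                out[n - 1 - high.fetch_add()] = x

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return out, low.value()     # partition point

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
out, split = parallel_partition(data, pivot=5)
print(out[:split], out[split:])  # keys < 5, then keys >= 5 (order within halves may vary)
```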

Patent
20 Jun 1990
TL;DR: In this paper, a sorting system in which uniquely coded garments are hung on trolleys, also having unique identification bar codes, is described, where the garment identification is initially correlated in the computer data base before sorting with the trolley identification code.
Abstract: A sorting system in which uniquely coded garments are hung on trolleys, also having unique identification bar codes. The trolley bar code is read as it progresses through the sorting system and is compared with a computer data base of garment information to determine the sorting route. The garment identification is initially correlated in the computer data base before sorting with the trolley identification code. Once all garment trolleys and associated garments of a lot have been identified and correlated by the computer, an algorithm is carried out for assigning a sort value to each trolley. A master distributor switch then distributes the garment trolleys a first time to a plurality of sort paths based upon a least significant digit of the sort value. Second and subsequent digits of the sort value of each trolley are employed to recirculate and resort the garment trolleys to the plural sort paths to complete the sorting operation with the parallel paths. The garment trolleys are then sequentially released from the parallel sort paths to an exit sort path in a final serial order representative of the order of the garments desired by the customers.
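The pass structure described here, distribute by the least significant digit of the sort value, recombine in path order, then repeat for each higher digit, is least-significant-digit radix sort. The sketch below shows that strategy on plain integers; the number of paths and the digit base are assumptions of the sketch, not values taken from the patent.

```python
def lsd_radix_sort(sort_values, num_paths=10):
    """LSD radix sort: route values to `num_paths` buckets by the current
    digit, release the buckets in order, and repeat for higher digits.
    The stability of each pass is what makes the final order correct."""
    digit = 1
    while any(v // digit > 0 for v in sort_values):
        paths = [[] for _ in range(num_paths)]              # parallel sort paths
        for v in sort_values:
            paths[(v // digit) % num_paths].append(v)       # route by this digit
        sort_values = [v for path in paths for v in path]   # serial release
        digit *= num_paths
    return sort_values

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```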

Proceedings ArticleDOI
22 Oct 1990
TL;DR: A natural k-round tournament over n = 2^k players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property.
Abstract: A natural k-round tournament over n = 2^k players is analyzed, and it is demonstrated that the tournament possesses a surprisingly strong ranking property. The ranking property of this tournament is exploited by being used as a building block for efficient parallel sorting algorithms under a variety of different models of computation. Three important applications are provided. First, a sorting circuit of depth 7.44 log n, which sorts all but a superpolynomially small fraction of the n-factorial possible input permutations, is defined. Secondly, a randomized sorting algorithm that runs in O(log n) word steps with very high probability is given for the hypercube and related parallel computers (the butterfly, cube-connected cycles, and shuffle-exchange). Thirdly, a randomized algorithm that runs in O(m+log n)-bit steps with very high probability is given for sorting n O(m)-bit records on an n log n-node butterfly.

Patent
19 Jan 1990
TL;DR: In this article, a method for sorting data in a computer data storage system that has particular advantages in implementing a key index tree structure was proposed, using buffer-size substrings to sort strings of key records into a linked list structure that can be directly transformed into an index tree.
Abstract: A method for sorting data in a computer data storage system that has particular advantages in implementing a key index tree structure. The sorting method uses buffer-size substrings to sort strings of key records into a linked list structure that can be directly transformed into an index tree. The sorting method also may be used for sorting large sets of data records in place on a computer storage system.


Proceedings ArticleDOI
23 Oct 1990
TL;DR: The authors discuss visual information processing issues relevant to the research, methodology and data analyses used to develop the classification system, results of the empirical study, and possible directions for future research.
Abstract: An exploratory effort to classify visual representations into homogeneous clusters is discussed. The authors collected hierarchical sorting data from twelve subjects. Five principal groups of visual representations emerged from a cluster analysis of sorting data: graphs and tables, maps, diagrams, networks, and icons. Two dimensions appear to distinguish these clusters: the amount of spatial information and cognitive processing effort. The authors discuss visual information processing issues relevant to the research, methodology and data analyses used to develop the classification system, results of the empirical study, and possible directions for future research.

Proceedings ArticleDOI
08 Oct 1990
TL;DR: It is concluded that the most reasonable large-array sort for this machine will combine hypercube virtualization with the processor axes transposed dynamically within an xnet embedding.
Abstract: The problem of sorting a collection of values on a mesh-connected, distributed-memory, SIMD (single-instruction-stream, multiple-data-stream) computer using variants of Batcher's bitonic sort algorithm is considered for the case in which the number of values exceeds the number of processors in the machine. In this setting the number of comparisons can be reduced asymptotically if the processors have addressing autonomy (locally indirect addressing), and communication costs can be reduced by judicious domain decomposition. The implementation of several related adaptations of bitonic sort on a MasPar MP-1 is reported. Performance is analyzed in relation to the virtualization ratio VPR. It is concluded that the most reasonable large-array sort for this machine will combine hypercube virtualization with the processor axes transposed dynamically within an xnet embedding.
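When the number of keys exceeds the number of processors, the usual adaptation is to keep a sorted block of n/p keys per processor and to replace each compare-exchange of the bitonic network with a compare-split between two blocks, as in the general block-wise technique sketched below (this is the standard construction, not necessarily the exact MasPar mapping the authors settled on).

```python
def compare_split(low_block, high_block):
    """One virtualized compare-exchange: both inputs are sorted blocks held by
    two processors; the 'low' processor keeps the smaller half of the merged
    keys and the 'high' processor keeps the larger half."""
    merged = sorted(low_block + high_block)   # a linear merge in a real kernel
    half = len(low_block)
    return merged[:half], merged[half:]

lo, hi = compare_split([1, 4, 9, 12], [2, 3, 10, 11])
print(lo, hi)  # -> [1, 2, 3, 4] [9, 10, 11, 12]
```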


Proceedings ArticleDOI
22 Oct 1990
TL;DR: The technique converts any network that uses unreliable comparators to a fault-tolerant network that produces the correct output with overwhelming probability, even if each comparator is faulty with some probability smaller than 1/2, independently of other comparators.
Abstract: A general technique for enhancing the reliability of sorting networks and other comparator-based networks is presented. The technique converts any network that uses unreliable comparators to a fault-tolerant network that produces the correct output with overwhelming probability, even if each comparator is faulty with some probability smaller than 1/2, independently of other comparators. The depth of the fault-tolerant network is only a constant times the depth of the original network, and the width of the network is increased by a logarithmic factor.