scispace - formally typeset

Showing papers on "Encoding (memory) published in 1993"


Patent
14 May 1993
TL;DR: In this article, a plurality of compute modules (in a preferred embodiment, a total of four coupled in parallel) are used for processing video data for compression/decompression in real time.
Abstract: An apparatus and method for processing video data for compression/decompression in real-time. The apparatus comprises a plurality of compute modules, in a preferred embodiment, for a total of four compute modules coupled in parallel. Each of the compute modules has a processor, dual port memory, scratch-pad memory, and an arbitration mechanism. A first bus couples the compute modules and a host processor. Lastly, the device comprises a shared memory which is coupled to the host processor and to the compute modules with a second bus. The method handles assigning portions of the image for each of the processors to operate upon.
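The final step, assigning portions of the image to each processor, might look like the following sketch; the even row-strip policy and the names are illustrative assumptions of mine, not details from the patent.

```python
# Hypothetical sketch of the partitioning step: an image's rows are
# divided as evenly as possible among four compute modules so each
# module can compress its own strip in parallel.

def assign_strips(num_rows, num_modules=4):
    """Return (start, end) row ranges, one per compute module."""
    base, extra = divmod(num_rows, num_modules)
    ranges, start = [], 0
    for m in range(num_modules):
        end = start + base + (1 if m < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges

print(assign_strips(480))  # four strips of 120 rows each
```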

286 citations


Journal ArticleDOI
TL;DR: Fuzzy-trace theory explains this memory-independence effect on the grounds that reasoning operations do not directly access verbatim traces of critical background information but, rather, process gist that was retrieved and edited in parallel with the encoding of such information.
Abstract: Recent experiments have established the surprising fact that age improvements in reasoning are often dissociated from improvements in memory for determinative informational inputs. Fuzzy-trace theory explains this memory-independence effect on the grounds that reasoning operations do not directly access verbatim traces of critical background information but, rather, process gist that was retrieved and edited in parallel with the encoding of such information. This explanation also envisions 2 ways in which children's memory and reasoning might be mutually interfering: (a) memory-to-reasoning interference, a tendency to process verbatim traces of background inputs on both memory probes and reasoning problems that simultaneously improves memory performance and impairs reasoning, and (b) reasoning-to-memory interference, a tendency for reasoning activities that produce problem solutions to erase or reduce the distinctiveness of verbatim traces of background inputs. Both forms of interference were detected in studies of children's story inferences.

175 citations


Patent
28 Dec 1993
TL;DR: In this paper, an associative memory utilizes a location addressable memory and lookup table to generate from a key the address in memory storing an associated record, such that the sum of valid index values for symbols of a particular key is a unique value that is used as an address to the memory storing the record associated with that key.
Abstract: To provide fast access times with very large key fields, an associative memory utilizes a location addressable memory and lookup table to generate from a key the address in memory storing an associated record. The lookup tables, stored in memory, are constructed with the aid of arithmetic data compression methods to create a near perfect hashing of the keys. For encoding into the lookup table, keys are divided into a string of symbols. Each valid and invalid symbol is assigned an index value, such that the sum of valid index values for symbols of a particular key is a unique value that is used as an address to the memory storing the record associated with that key, and the sums for keys containing invalid index values point to a location in memory containing similar data. Utilizing the lookup tables, set and relational operations may be carried out that provide a user with the maximum number of key records resulting from a sequence of intersection, union, and mask operations.
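A toy sketch of the index-sum idea (not the patent's arithmetic-data-compression construction): per-position tables assign each symbol an index value so that the sum over a key's symbols is a unique address, and keys containing invalid symbols fall into a shared location. The mixed-radix weighting below is my own illustrative way of guaranteeing distinct sums.

```python
# Per-position lookup tables: each symbol gets an index value whose
# mixed-radix weight makes the SUM over a key's symbols unique.
# Invalid symbols map to a shared "not found" location.

NOT_FOUND = -1

def build_tables(keys):
    positions = len(keys[0])
    tables, stride = [], 1
    for i in reversed(range(positions)):
        symbols = sorted({k[i] for k in keys})
        tables.append({s: rank * stride for rank, s in enumerate(symbols)})
        stride *= len(symbols)
    tables.reverse()
    return tables

def address(tables, key):
    total = 0
    for table, sym in zip(tables, key):
        if sym not in table:
            return NOT_FOUND          # invalid symbol: shared location
        total += table[sym]
    return total

tables = build_tables(["cat", "cot", "dog"])
print(address(tables, "cat"), address(tables, "zzz"))
```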

113 citations


Patent
13 Oct 1993
TL;DR: In this paper, a two-layered video encoding technique that adapts the method for encoding information transmitted in the low-priority bit-stream to the rate of cell loss on the network is presented.
Abstract: The quality of video images received at the remote end of an ATM network capable of transmitting data at high and low priorities is greatly improved at high cell loss levels by employing a two-layered video encoding technique that adapts the method for encoding information transmitted in the low-priority bit-stream to the rate of cell loss on the network so that compression efficiency and image quality are high when the network load is low and resiliency to cell loss is high when the network load is high. The encoder adapts its encoding method in response to a cell loss information signal generated by the remote decoder by selecting the prediction mode used to encode the low-priority bit-stream, and by changing the frequency at which slice-start synchronization codes are placed within the low-priority bit-stream.

87 citations


Patent
13 May 1993
TL;DR: In this article, the authors propose an adaptive technique for encoding and decoding which facilitates the transmission, reception, storage, or retrieval of a scalable video signal, with the scaling performed entirely in the spatial domain.
Abstract: An adaptive technique for encoding and decoding which facilitates the transmission, reception, storage, or retrieval of a scalable video signal. The invention allows this scaling to be performed entirely in the spatial domain. In a specific embodiment of the invention this scaling is realized by adaptively encoding a video signal based upon a selection taken from among a multiplicity of predictions from previously decoded images, and a selection of compatible predictions obtained from up-sampling lower resolution decoded images of the current temporal reference. A technical advantage of the invention is that both the syntax and signal multiplexing structure of at least one encoded lower-resolution scale of video is compatible with the MPEG-1 standards.

71 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the work of G.D. Forney, Jr., on the algebraic structure of convolutional encoders, upon which some new results regarding minimal convolutional encoders rest.
Abstract: The authors review the work of G.D. Forney, Jr., on the algebraic structure of convolutional encoders upon which some new results regarding minimal convolutional encoders rest. An example is given of a basic convolutional encoding matrix whose number of abstract states is minimal over all equivalent encoding matrices. However, this encoding matrix can be realized with a minimal number of memory elements neither in controller canonical form nor in observer canonical form. Thus, this encoding matrix is not minimal according to Forney's definition of a minimal encoder. To resolve this difficulty, the following three minimality criteria are introduced: minimal-basic encoding matrix, minimal encoding matrix, and minimal encoder. It is shown that all minimal-basic encoding matrices are minimal and that there exist minimal encoding matrices that are not minimal-basic. Several equivalent conditions are given for an encoding matrix to be minimal. It is proven that the constraint lengths of two equivalent minimal-basic encoding matrices are equal one by one up to a rearrangement. All results are proven using only elementary linear algebra.
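To make the shift-register memory that these criteria count concrete, here is a minimal rate-1/2 convolutional encoder in controller canonical form; the (7, 5) octal generator pair is a textbook example of my choosing, not one taken from this paper, so treat it as an illustrative sketch.

```python
# Rate-1/2 convolutional encoder, controller canonical form.
# Generators g1 = 111, g2 = 101 (octal 7, 5) act on the input bit
# plus a 2-bit shift register, i.e. two memory elements.

def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0                      # contents of the memory elements
    out = []
    for b in bits:
        reg = (b << 2) | state     # current input + 2 stored bits
        out.append(bin(reg & g1).count("1") % 2)  # parity tap 1
        out.append(bin(reg & g2).count("1") % 2)  # parity tap 2
        state = reg >> 1           # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))
```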

66 citations


Patent
16 Jun 1993
TL;DR: In this paper, the authors proposed a method to derive a pair of analysis/synthesis windows from a known window function which satisfy various filter selectivity and window overlap-add constraints.
Abstract: The invention relates to the design of analysis and synthesis windows for use in high-quality transform encoding and decoding of audio signals, especially encoding and decoding having a short signal-propagation delay. The design method derives a pair of analysis/synthesis windows from a known window function which satisfy various filter selectivity and window overlap-add constraints.

57 citations


Journal ArticleDOI
TL;DR: This paper found that seeing the intact map prior to the text led to better recall of both map information and facts from the text, and that information from the map can be used to cue retrieval of associated verbal facts without exceeding the processing constraints of the memorial system.
Abstract: In order to test how associated verbal and spatial stimuli are processed in memory, undergraduates studied a reference map as either an intact unit or as a series of individual features, and read a text containing facts related to map features. In addition, the map was presented either before or after reading the text. Seeing the intact map prior to the text led to better recall of both map information and facts from the text. These results support a dual coding model, where stimuli such as maps possess a retrieval advantage because they allow simultaneous representation in working memory. This advantage occurs because information from the map can be used to cue retrieval of associated verbal facts, without exceeding the processing constraints of the memorial system.

49 citations


Book ChapterDOI
01 Jan 1993
TL;DR: The levels-of-processing framework for memory, as discussed by the authors, has been widely accepted in the literature; it ties storage to the original operations used in encoding events, and ties storage durability to the depth of those operations.
Abstract: Perception-based knowledge representations store memories of the perceptual structure of events and appear to be processed in neural regions close to where the original perceptions were processed. A comparable duality guides the literature on auditory sensory memory. Many experiments on pure tones show evidence of persistence and masking with a time constraint of about a quarter of a second. The simplest proceduralist attitude towards memory of all kinds would then be that those units active on the original occasion will be changed, thereafter, and will be the “locus” of a memory, whether that activity was top-down or bottom-up. Modern memory theory has more or less embraced proceduralism during the last 20 years. One measure of this has been the wide acceptance of the levels of processing framework for memory, which ties storage to the original operations used in encoding events, and ties storage durability to the “depth” of those operations.

44 citations


Patent
Michael Keith1
13 May 1993
TL;DR: In this article, a system and method for encoding and decoding data in a video processor system where image data values represent a plurality of successive images is presented, where the data values representing the current image are simultaneously read from the memory block while the companded values are being stored into the same memory block under the control of a number of pointers and synchronization flags.
Abstract: A system and method are provided for encoding and decoding data in a video processor system wherein image data values represent a plurality of successive images. A current image is stored into a block of memory locations and later read from the block of memory locations, encoded and transmitted. In addition to being transmitted, the encoded data values are also decoded in order to provide companded image values. The companded image values are then stored into the same block of memory locations as the current image. The data values representing the current image are simultaneously read from the memory block while the companded values are being stored into the same memory block under the control of a number of pointers and a number of synchronization flags. Additionally, the companded data values are simultaneously read from the same memory block for the purpose of performing motion estimation. The system of the present invention therefore permits the same block of memory locations to be used for dynamically storing and reading a plurality of images.

35 citations


Patent
Thomas A. Horvath1, Inching Chen1
05 Mar 1993
TL;DR: In this article, an apparatus and method for displaying nonobscured pixels in a multiple-media motion video environment (dynamic image management) possessing overlaid windows is presented, where boundary values and identification values corresponding to each window on a screen are saved in memory of a hardware device.
Abstract: An apparatus and method for displaying non-obscured pixels in a multiple-media motion video environment (dynamic image management) possessing overlaid windows. In an encoding process, only boundary values and identification values corresponding to each window on a screen are saved in memory of a hardware device. In a decoding process, the hardware device utilizes these initial boundary values saved in memory in such a way that when incoming video data enters the hardware device, the hardware device need only compare the incoming video data's identification with the identification saved in memory. The hardware device includes: compare logic devices, counters, minimal memory devices, a control logic block, and a driver.

Patent
08 Nov 1993
TL;DR: In this article, a motion vector detected by a motion detector and a quantizing parameter and a frame structure determined by a controller are stored in a memory and the data thus stored is supplied to an encoder which carries out encode processing corresponding to the stored data.
Abstract: A motion vector detected by a motion detector (2) and a quantizing parameter and a frame structure determined by a controller (4) are stored in a memory (5). The data thus stored is supplied to an encoder (3) which carries out encode processing corresponding to the stored data. Thus, the data are coded via multiple paths which can reduce restrictions from the standpoint of time and also reduce the scale of hardware needed for encoding.

Proceedings ArticleDOI
17 Jan 1993
TL;DR: The set of information words is divided into two subsets: 1) the subset of words that are close to balanced and 2) the subset of words that are not close to balanced; then words in each subset are encoded with different methods.
Abstract: A binary word of length n ∈ ℕ is called balanced when it has ⌈n/2⌉ 1's and ⌊n/2⌋ 0's. A code C is a balanced code with r check bits and k information bits iff: 1) C has fixed length n = k + r; 2) each word X ∈ C is balanced; 3) |C| = 2^k. In [4], Knuth showed that if a balanced code with r check bits and k information bits exists, then r > (1/2)log2 k + 0.326; he has designed serial encoding and both serial and parallel decoding schemes with k = 2^r and k = 2^r - r - 1, respectively. In both methods, for each given information word, some appropriate number of bits, starting from the first bit, are complemented; then a check is assigned to this modified information word to make the entire word balanced. In the sequential decoding the check represents the weight of the original information word, whereas in the parallel decoding the check directly indicates the number of information bits complemented. In [1], [2] and [3] improved design methods are given. In this paper, we divide the set of information words into two subsets: 1) the subset of words that are close to balanced and 2) the subset of words that are not close to balanced; then we encode words in each subset with different methods. More precisely, given t ∈ ℕ, and writing w(X) for the weight of X, let U_t = {X ∈ {0,1}^k : 0 ≤ w(X) ≤ t or k - t ≤ w(X) ≤ k}.
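Knuth's complementation step mentioned in the abstract can be sketched as follows, assuming an even word length; the function name and the omission of the check-word encoding are simplifications of mine.

```python
# Knuth's serial balancing step, sketched: complement bits from the
# left, one at a time, until the word is balanced. The number of
# complemented bits is what the check word would record.

def balance(word):
    """word: list of 0/1 of even length. Returns (balanced word, prefix length)."""
    w = list(word)
    for p in range(len(w) + 1):
        if sum(w) * 2 == len(w):   # balanced: as many 1's as 0's
            return w, p
        if p < len(w):
            w[p] ^= 1              # complement the next bit
    raise ValueError("no balancing point found")

print(balance([1, 1, 1, 1, 0, 0]))
```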

Patent
01 Jun 1993
TL;DR: In this article, a decoder/encoder apparatus is provided which can be programmed to decode data and to encode data, and a memory within the apparatus is preloaded with a first memory map which is descriptive of a selected tree-based binary code.
Abstract: A decoder/encoder apparatus is provided which can be programmed to decode data and to encode data. To encode data, a memory within the apparatus is preloaded with a first memory map which is descriptive of a selected tree-based binary code. The first memory map is a reverse tree representation of the selected tree-based binary code. Data is then provided to the apparatus and is processed as specified by the first memory map thus generating encoded data. To decode data, the same memory is preloaded with a second memory map which is descriptive of the same selected tree-based binary code. The second memory map is a tree representation of the same selected tree-based binary code. Encoded data is then provided to the apparatus and is processed as specified by the second memory map thus generating decoded data.

Proceedings ArticleDOI
28 Mar 1993
TL;DR: The authors examine the performance of four network data representation standards and show that the areas crucial to efficient encoder and decoder implementations are memory management, buffer management, and the overall simplicity of the encoding rules.
Abstract: The task of encoding complex data structures for network transmission is more expensive in terms of processor time and memory usage than most other components of the protocol stack. This problem can be partially addressed by simplifying the network data encoding rules and streamlining their implementation. The authors examine the performance of four network data representation standards: ASN.1 Basic Encoding Rules (BER) and Packed Encoding Rules (PER), Sun Microsystems' External Data Representation (XDR), and Apollo Computer's Network Data Representation (NDR). It is found that the areas crucial to efficient encoder and decoder implementations are memory management, buffer management, and the overall simplicity of the encoding rules. It is shown that it is possible to implement ASN.1 BER and PER encoders and decoders that are as fast as their corresponding XDR versions.
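As a reference point for the kind of encoding rules being compared, here is a minimal sketch of BER's tag-length-value pattern for non-negative INTEGERs (real BER also covers negative values, long-form lengths, and many more types):

```python
# Minimal BER-style TLV encoding of a non-negative INTEGER:
# tag 0x02, one-byte length, big-endian content octets.

def ber_encode_uint(n):
    # (n.bit_length() + 8) // 8 adds a leading 0x00 pad byte when the
    # top bit is set, keeping the two's-complement value non-negative.
    content = n.to_bytes(max(1, (n.bit_length() + 8) // 8), "big")
    return bytes([0x02, len(content)]) + content

print(ber_encode_uint(300).hex())  # tag 02, length 02, value 01 2c
```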

Patent
21 Sep 1993
TL;DR: In this article, self-correcting video compression procedures and devices are presented for encoding and decoding digitized video images and reducing artifacts, thereby increasing the compression ratio and the efficiency of image storage and transmission by using a best-fit surface encoding technique.
Abstract: Self-correcting video compression procedures and devices for encoding and decoding digitized video images and for reducing artifacts, thereby increasing the compression ratio and the efficiency of image storage and transmission by using a best-fit surface encoding technique. The encoder accurately processes differential image encoding by computing a copy of the decoder frame and employing feedback methods.

Patent
29 Sep 1993
TL;DR: In this article, the authors propose to suppress the degradation of quality due to re-encoding by storing an encoding parameter obtained at decoding and referring to this encoding parameter to control the encoding at the time of encoding the video signal again.
Abstract: PURPOSE: To suppress the degradation of quality due to re-encoding by storing an encoding parameter obtained at the time of decoding and referring to this encoding parameter to control the encoding at the time of encoding the video signal again. CONSTITUTION: When video communication is started, a line control part 11 performs the reception processing of the information signal sent from a terminal, and a decoding part 12 separates an encoded video signal and performs the error-correction-code decoding processing to take out the encoded video signal. This video signal is subjected to correction-code encoding and multiplexing processing by a transmission code encoding part 18 and is transmitted to a communication terminal through a communication line by a line control part 19. Meanwhile, if the reception terminal cannot receive the video signal from the transmission terminal, the video signal from the decoding part 12 is subjected to decoding processing by a decoding part 13, and the encoding parameter is stored in a management part 15. A conversion part 14 refers to the stored parameter to set a re-encoding parameter, and an encoding part 17 encodes the video signal in accordance with this parameter and transmits it to the communication terminal through the encoding part 18 and the control part 19. COPYRIGHT: (C)1995,JPO

Journal ArticleDOI
TL;DR: Results indicate that a sparse encoding of binary data words that supports minimal hologram area usage is an effective scheme for memories based on Fourier-transform computer-generated holography.
Abstract: We discuss the capacity of parallel-access optical memories based on Fourier-transform computer-generated holography. Emphasis is placed on the fundamental capacity cost associated with Fourier-transform computer-generated holography encoding. Capacity cost is discussed in terms of encoder complexity, memory overhead, and media defect tolerance. Results indicate that a sparse encoding of binary data words that supports minimal hologram area usage is an effective scheme for memories based on Fourier-transform computer-generated holography. These results are independent of computer-generated-holography algorithm and media type.

Journal ArticleDOI
TL;DR: This work shows that extensive use of memory can reduce information processing to a simple and flexible procedure, without the need of complicated and specific preprocessing.
Abstract: A memory-based system for autonomous indoor navigation is presented. The system was implemented as a follow-midline reflex on a robot that moves along the corridors of our institute. The robot estimates its position in the environment by comparing the visual input with images contained in its memory. Spatial positions are represented by classes. Memories are formed during a learning phase by encoding labeled images. The output of the system is the a posteriori probability distribution of the classes, given an input image. During performance, an image is assigned to the class that maximizes the probability. This work shows that extensive use of memory can reduce information processing to a simple and flexible procedure, without the need of complicated and specific preprocessing. The system is shown to be reliable, with good generalization capability. With learning limited to a small part of a corridor, the robot navigates along the entire corridor. Furthermore, it is able to move in other corridors of different shape, with different illumination conditions.

Proceedings ArticleDOI
24 Nov 1993
TL;DR: It was found that encoding approaches affect a neural network's ability to extract features from the raw data, and an encoding approach that uses more input nodes to represent a single parameter generally can result in relatively lower training errors for the same training cycles.
Abstract: The authors report the results of an empirical study about the effect of input encoding on the performance of a neural network in the classification of numerical data. Two types of encoding schemes were studied, namely numerical encoding and bit pattern encoding. Fisher Iris data were used to evaluate the performance of various encoding approaches. It was found that encoding approaches affect a neural network's ability to extract features from the raw data. Input encoding also affects the training errors, such as maximum error, root square error, the training times and cycles needed to attain these error thresholds. It was also noted that an encoding approach that uses more input nodes to represent a single parameter generally can result in relatively lower training errors for the same training cycles.
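The two encoding styles compared in the study might be sketched as follows; the value range, node count, and the thermometer variant of bit-pattern encoding are assumptions of mine, not the paper's exact schemes.

```python
# One numerical input node versus a multi-node "thermometer" bit
# pattern that spends several input nodes on a single parameter
# (the style the study found tends to lower training error).

def numerical(x, lo, hi):
    return [(x - lo) / (hi - lo)]          # one input node in [0, 1]

def thermometer(x, lo, hi, nodes=8):
    level = round((x - lo) / (hi - lo) * nodes)
    return [1.0 if i < level else 0.0 for i in range(nodes)]

# e.g. an iris sepal width of 5.1 in an assumed 4.0..8.0 range
print(numerical(5.1, 4.0, 8.0))
print(thermometer(5.1, 4.0, 8.0))
```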

Patent
07 Oct 1993
TL;DR: In this article, the encoding units are controlled by an encoding controller which selects the encoding memory table for use in accordance with at least one kind of component data corresponding to the input picture element and to the picture elements surrounding it.
Abstract: A device for encoding a component color image data having plural, diverse kinds of component data has plural kinds of encoding units including plural kinds of encoding memory tables which correspond to the respective component data and have diverse encoding characteristics. Each encoding unit has component data input thereto in a picture element amount which is encoded by use of one kind of encoding memory table among the plurality of kinds of memory tables and is output. The encoding units are controlled by an encoding controller which selects the encoding memory table for use in accordance with at least one kind of component data corresponding to the input picture element and component data corresponding to picture elements surrounding the input picture element.

Patent
31 Aug 1993
TL;DR: An encoding method of the GBTC type, in which encoded data and decoded data have a fixed length, can provide an encoding/decoding method which, in encoding or decoding, is capable of performing editing processing to rotate an original image by 90° and to mirror it vertically and horizontally.
Abstract: An encoding method of the GBTC type in which encoded data and decoded data have a fixed length, providing an encoding/decoding method which, in encoding or decoding, is capable of performing editing processing to rotate an original image by 90° and to mirror it vertically and horizontally. When the encoded data in each of the blocks of an original image and the level specification signals φij of the respective pixels in each block are written into a memory or are read out therefrom, the encoded data and the pixel level specification signals are arranged in such a manner that they can be rotated by an integral multiple of 90° or mirrored.

Patent
23 Jul 1993
TL;DR: In this paper, the movement of digital data is compensated in independent channels in parallel and adaptive quantization of the movement is performed; the movement compensation adds image data S13 in the memory 5, shifted by means of a motion vector, to residual difference block data S14 to which an IDCT has been applied.
Abstract: PURPOSE: To highly efficiently encode a moving picture signal even when the moving picture signal screen is divided at the time of encoding, by making the border part of a divided channel and the data at the border part of the adjacent channel overlap each other. CONSTITUTION: An inverse quantization circuit 13 performs the inverse quantization of the quantized data output by a quantization circuit 8; the inverse quantization data S11 obtained are subjected to an inverse discrete orthogonal transform in an inverse discrete orthogonal transform circuit 14 and stored in an overlap/frame memory 5 as local composite data. In the memory 5, the data is overlapped between the screen-divided adjacent channels as to a luminance signal Y. A movement compensation circuit 15 performs the movement compensation by adding image data S13 in the memory 5, shifted by means of a motion vector, to residual difference block data S14 to which an IDCT has been applied, and updates the contents of the memory 5. Thus, by compensating the movement of digital data in independent channels N in parallel and performing adaptive quantization of the movement, highly efficient encoding is enabled.

Patent
30 Mar 1993
TL;DR: In this paper, an image encoding apparatus for encoding image information in a hierarchical form includes encoders provided for respective hierarchies and code buffers provided after the respective encoder-decoder pairs.
Abstract: An image processing apparatus is provided which can shape codes and omit buffers at a decoding side as well as omit an image memory at an encoding side by providing code buffers for respective encoders. An image encoding apparatus for encoding image information in a hierarchical form includes encoders provided for respective hierarchies and code buffers provided after the respective encoders. Furthermore, it is possible to use a common code buffer memory for respective hierarchies at the encoding side. In such a case, by arranging so that a hierarchical tag can identify to which hierarchy a code belongs, a simplified encoding apparatus is provided.

Proceedings ArticleDOI
TL;DR: Results suggest that, when used to encode input patterns, ensemble encoding can accelerate learning and improve classification accuracy in MLP networks.
Abstract: Ensemble encoding employs multiple, overlapping receptive fields to yield a distributed representation of analog signals. The effect of ensemble encoding on learning in multi-layer perceptron (MLP) networks is examined by applying it to a neural learning benchmark, sonar signal classification. Results suggest that, when used to encode input patterns, ensemble encoding can accelerate learning and improve classification accuracy in MLP networks. © (1993) COPYRIGHT SPIE--The International Society for Optical Engineering.
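Ensemble encoding's overlapping receptive fields can be sketched with Gaussian fields spread over a unit range; the field count and width below are illustrative choices of mine, not taken from the paper.

```python
# Several overlapping Gaussian receptive fields, centres spread over
# the signal range, each yield an activation: one analog value becomes
# a distributed pattern across multiple input nodes.

import math

def ensemble_encode(x, lo=0.0, hi=1.0, fields=5, width=0.15):
    centres = [lo + i * (hi - lo) / (fields - 1) for i in range(fields)]
    return [math.exp(-((x - c) / width) ** 2) for c in centres]

acts = ensemble_encode(0.3)
print([round(a, 3) for a in acts])  # strongest response near centre 0.25
```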

Patent
04 Mar 1993
TL;DR: In this article, the authors propose an audio encoding device capable of securing relatively good tone quality even at a large compression rate: auxiliary information (pitch information, a pitch gain, a linear predictive coefficient, an average amplitude of a pseudo-residual signal, etc.) is stored so that, when the waveform is regarded as the same between adjacent analysis frames, a signal indicating that fact and a signal quantizing only the residual signal are sent to the encoding part, and no auxiliary information is quantized and sent.
Abstract: PURPOSE: To provide an audio encoding device capable of securing relatively good tone quality even when the compression rate is large in a device compressing digital audio data. CONSTITUTION: This device is provided with a means 50 judging whether the input audio waveform to an audio encoding part is regarded as the same between adjacent analysis frames or not, and an encoding auxiliary information storage part 52 storing the auxiliary information of pitch information, a pitch gain, a linear predictive coefficient, an average amplitude of a pseudo-residual signal, etc. It is constituted so that, when the waveform is regarded as the same, the signal indicating that fact and the signal quantizing only the residual signal are sent to an encoding part 45, and no auxiliary information is quantized and sent to the encoding part. The device is further constituted so that the quantization bits that would be allocated to the auxiliary information are added to the quantization bits of the residual signal. At this time, the auxiliary information stored in the encoding auxiliary information storage part 52 is used for the auxiliary information used in the adaptive prediction parts 34, 37 and the quantization bit adaptation part 51 of the audio encoding part. COPYRIGHT: (C)1994,JPO&Japio

Patent
02 Apr 1993
TL;DR: In this paper, the authors provide an encoder that handles video processing with various input processing patterns by performing common encoding processing on video signals with different patterns in the first encoding means.
Abstract: PURPOSE: To provide an encoder that handles video processing with various input processing patterns by performing common encoding processing on video signals with different patterns in the first encoding means. CONSTITUTION: A thinning filter 12 performs the band restriction in the horizontal and vertical directions of a picture B and produces a picture C having the same number of picture elements as a thinned picture A. A first encoding device 14 outputs an encoding signal X with the same processing, compression rate, and data amount regardless of the mode of a switch 13. An interpolation filter 16 outputs a picture D in which the picture C is interpolated to the number of picture elements of the reproduction signal of picture A, and a second encoding means 18 encodes an addition picture E from a subtracter 17, outputting an encoding signal Y. Thus, as the encoding means 14 performs the common encoding processing on pictures A and B, encoding can be performed even for video signals with different input patterns, and the encoded data can be decoded by any decoder.

Journal ArticleDOI
TL;DR: This article proposes a new approach to the encoding and use of world knowledge that supports an architecture that can scale up, and encodes the non-systematic knowledge associated with concepts, or what Fillmore (1982) calls a concept's background frame knowledge.
Abstract: Traditionally, semantic memory is considered to be composed of a single layer of knowledge. This layer can be thought of as encoding the systematic relations that underlie the regularities in our cognitive world. In this article, this notion is extended to include a second layer, so that semantic memory now consists of two tiers. The second tier asserts that each of our concepts has attached to it an associational cloud of knowledge that encodes the non-systematic knowledge associated with these concepts, or what Fillmore (1982) calls a concept's background frame knowledge. Semantic memory is constructed from co-occurrence statistics gathered from the Wall Street Journal text corpus. The associational knowledge is encoded from a set of semantic features extracted from the categories of Roget's Thesaurus. This approach to the encoding and use of world knowledge is significant in that it supports an architecture that can scale up.
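The co-occurrence gathering step described above can be sketched in miniature; the window size and the toy sentences are stand-ins of mine for the Wall Street Journal corpus and the paper's actual parameters.

```python
# Count how often word pairs appear within a sliding window over a
# corpus; such counts are the raw material for a co-occurrence-based
# semantic memory.

from collections import Counter

def cooccurrence(sentences, window=3):
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + window]:       # neighbours within the window
                counts[tuple(sorted((w, v)))] += 1  # unordered pair
    return counts

corpus = ["the market fell sharply", "the bond market rallied"]
counts = cooccurrence(corpus)
print(counts[("market", "the")])
```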