
Showing papers on "Encoding (memory)" published in 1999


Journal ArticleDOI
TL;DR: Experimental results from normal subjects and patients with various brain lesions converge on the conclusion that there is a specialization in the verbal working memory system for assigning the syntactic structure of a sentence and using that structure in determining sentence meaning that is separate from the working memory system underlying the use of sentence meaning to accomplish other functions.
Abstract: This target article discusses the verbal working memory system used in sentence comprehension. We review the concept of working memory as a short-duration system in which small amounts of information are simultaneously stored and manipulated in the service of accomplishing a task. We summarize the argument that syntactic processing in sentence comprehension requires such a storage and computational system. We then ask whether the working memory system used in syntactic processing is the same as that used in verbally mediated tasks that involve conscious controlled processing. Evidence is brought to bear from various sources: the relationship between individual differences in working memory and individual differences in the efficiency of syntactic processing; the effect of concurrent verbal memory load on syntactic processing; and syntactic processing in patients with poor short-term memory, patients with poor working memory, and patients with aphasia. Experimental results from these normal subjects and patients with various brain lesions converge on the conclusion that there is a specialization in the verbal working memory system for assigning the syntactic structure of a sentence and using that structure in determining sentence meaning that is separate from the working memory system underlying the use of sentence meaning to accomplish other functions. We present a theory of the divisions of the verbal working memory system and suggestions regarding its neural basis.

974 citations


Book ChapterDOI
01 Apr 1999
TL;DR: This model defines working memory as controlled processing involving active maintenance and/or rapid learning, where controlled processing is an emergent property of the dynamic interactions of multiple brain systems, but the prefrontal cortex and hippocampus are especially influential owing to their specialized processing abilities.
Abstract: FIVE CENTRAL FEATURES OF THE MODEL: We define working memory as controlled processing involving active maintenance and/or rapid learning, where controlled processing is an emergent property of the dynamic interactions of multiple brain systems, but the prefrontal cortex (PFC) and hippocampus (HCMP) are especially influential owing to their specialized processing abilities and their privileged locations within the processing hierarchy (both the PFC and HCMP are well connected with a wide range of brain areas, allowing them to influence behavior at a global level). The specific features of our model include: (1) A PFC specialized for active maintenance of internal contextual information that is dynamically updated and self-regulated, allowing it to bias (control) ongoing processing according to maintained information (e.g., goals, instructions, partial products). (2) An HCMP specialized for rapid learning of arbitrary information, which can be recalled in the service of controlled processing, whereas the posterior perceptual and motor cortex (PMC) exhibits slow, long-term learning that can efficiently represent accumulated knowledge and skills. (3) Control that emerges from interacting systems (PFC, HCMP, and PMC). (4) Dimensions that define continua of specialization in different brain systems: for example, robust active maintenance, fast versus slow learning. (5) Integration of biological and computational principles. Working memory is an intuitively appealing theoretical construct – perhaps deceptively so.

334 citations


Book ChapterDOI
01 Apr 1999
TL;DR: This final chapter starts where the previous chapter left off (Kintsch, Healy, Hegarty, Pennington, & Salthouse, Chapter 12) and offers some thoughts about the future directions of working memory research.
Abstract: This final chapter starts where the previous chapter left off (Kintsch, Healy, Hegarty, Pennington, & Salthouse, Chapter 12). The main goal of the current chapter is to offer some thoughts we have about the future directions of working memory research. In particular, we present our own view of where the field stands and where it may be going in the belief that such reflection on the “big picture” is something this field needs. The organization of the chapter is as follows. We will first present six points of general theoretical consensus that appear to be emerging among the models of working memory included in this volume. Despite this global-level agreement about the nature of working memory, there are some important disagreements among different models. Thus, we will next point out some unresolved theoretical issues for each of the eight designated questions. In the last section, we will outline several issues that have not yet received much attention in the current models of working memory, but we believe will become increasingly important for future empirical and theoretical investigations. General Theoretical Consensus About the Nature of Working Memory: At the beginning of Chapter 1, we quoted H. J. Eysenck's (1986) rather pessimistic remark about psychometric theories of intelligence and pointed out that some people would probably feel the same way about working memory: There are many different models of working memory out there, but they all seem so different that it is difficult to see how they relate to one another.

272 citations


Journal ArticleDOI
TL;DR: Three experiments investigated the relationship between memory for input and inductive learning of morphological rules relating to functional categories in a semiartificial form of Italian to suggest that knowledge of distributional rules does not simply emerge out of memory encodings of the relevant forms but depends upon the appropriate allocation of attention over relationships between input elements at the time of encoding.
Abstract: Three experiments investigated the relationship between memory for input and inductive learning of morphological rules relating to functional categories in a semiartificial form of Italian. A verbatim memory task was used as both the vehicle for presenting sentences and as a continuous measure of memory performance. Experiments 2 and 3 introduced increasingly explicit manipulations of attention to form compared to Experiment 1. In all experiments there were strong relationships between individual differences in memory for input as measured early in the experiment and eventual learning outcomes, and in Experiments 2 and 3 learning form-form (but not form-function) rules was related to vocabulary learning efficiency (taken as a measure of phonological long-term memory ability). These relationships along with the lack of an effect of feedback in Experiment 3 suggest that subjects tended to adopt a data-driven, as opposed to conceptually driven, mode of learning. However, the fact that the introduction of highlighting and vocabulary pretraining in Experiment 2 had a large impact on learning without improving early memory is taken to suggest that knowledge of distributional rules does not simply emerge out of memory encodings of the relevant forms but depends upon the appropriate allocation of attention over relationships between input elements at the time of encoding.

189 citations


Book ChapterDOI
01 Apr 1999
TL;DR: LT-WM reflects a complex skill acquired to meet the particular demands for future accessibility of information within tasks in a particular domain of expertise, so that the traditional assumption of a strict separation between memory, knowledge, and procedures for the task is not valid for skilled performance.

185 citations


Book ChapterDOI
01 Apr 1999
TL;DR: It is shown that working memory is a central construct in cognitive psychology and, more recently, cognitive neuroscience, and that in behavioral neuroscience the term is associated with the radial arm maze paradigm.
Abstract: Working memory plays an essential role in complex cognition. Everyday cognitive tasks – such as reading a newspaper article, calculating the appropriate amount to tip in a restaurant, mentally rearranging furniture in one's living room to create space for a new sofa, and comparing and contrasting various attributes of different apartments to decide which to rent – often involve multiple steps with intermediate results that need to be kept in mind temporarily to accomplish the task at hand successfully. “Working memory” is the theoretical construct that has come to be used in cognitive psychology to refer to the system or mechanism underlying the maintenance of task-relevant information during the performance of a cognitive task (Baddeley & Hitch, 1974; Daneman & Carpenter, 1980). As reflected by the fact that it has been labeled “the hub of cognition” (Haberlandt, 1997, p. 212) and proclaimed as “perhaps the most significant achievement of human mental evolution” (Goldman-Rakic, 1992, p. 111), it is a central construct in cognitive psychology and, more recently, cognitive neuroscience. Despite the familiarity of the term, however, it is not easy to figure out what working memory really is. To begin with, the term working memory is used in quite different senses by different communities of researchers. In the behavioral neuroscience and animal behavior fields, for example, the term is associated with the radial arm maze paradigm.

175 citations


Patent
05 Apr 1999
TL;DR: In this article, a method and apparatus are presented for encoding, illustratively, a video information stream to produce an encoded information stream according to a group of frames (GOF) information structure, where the GOF structure and, optionally, a bit budget are modified in response to, respectively, information discontinuities and the presence of redundant information in the video information stream.
Abstract: A method and apparatus for encoding, illustratively, a video information stream to produce an encoded information stream according to a group of frames (GOF) information structure where the GOF structure and, optionally, a bit budget are modified in response to, respectively, information discontinuities and the presence of redundant information in the video information stream (due to, e.g., 3:2 pull-down processing).
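
As a rough illustration of the idea only (not the patent's actual algorithm), the sketch below closes a group of frames early when a discontinuity such as a scene cut is detected and trims the bit budget for frames flagged as 3:2 pull-down repeats; all names, thresholds, and the frame representation are assumptions.

```python
# Hypothetical sketch of the GOF-adjustment idea: start a new group of frames
# at a detected discontinuity and shrink the bit budget for repeated frames.

def plan_gof(frames, nominal_gof_len=15, bits_per_frame=400_000):
    """frames: list of dicts with 'scene_cut' and 'repeat_field' booleans."""
    gofs, current, budget = [], [], 0
    for frame in frames:
        if frame["scene_cut"] and current:
            gofs.append((current, budget))      # close the GOF early at a discontinuity
            current, budget = [], 0
        current.append(frame)
        # redundant (repeated) frames get a reduced share of the bit budget
        budget += bits_per_frame // 4 if frame["repeat_field"] else bits_per_frame
        if len(current) == nominal_gof_len:
            gofs.append((current, budget))
            current, budget = [], 0
    if current:
        gofs.append((current, budget))
    return gofs
```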

151 citations


Journal ArticleDOI
TL;DR: The zero-error capacity region and the maximum total number of information bits stored in the memory for T consecutive cycles are determined for the situation where the encoder knows and the decoder does not know the previous state of the memory.
Abstract: The generalized write-once memory introduced by Fiat and Shamir (1984) is a q-ary information storage medium. Each storage cell is expected to store one of q symbols, and the legal state transitions are described by an arbitrary directed acyclic graph. This memory model can be understood as a generalization of the binary write-once memory which was introduced by Rivest and Shamir (1982). During the process of updating information, the contents of a cell can be changed from a 0-state to a 1-state but not vice versa. We study the problem of reusing a generalized write-once memory for T successive cycles (generations). We determine the zero-error capacity region and the maximum total number of information bits stored in the memory for T consecutive cycles for the situation where the encoder knows and the decoder does not know the previous state of the memory. These results extend the results of Wolf, Wyner, Ziv, and Korner (1984) for the binary write-once memory.
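
A minimal sketch of the write-once constraint itself (not of the coding scheme or the capacity results): each cell may only move along directed edges of the acyclic state-transition graph, which for the binary write-once memory reduces to 0 -> 1.

```python
# Each cell may only move along directed edges of an acyclic transition graph.
# For the binary WOM of Rivest and Shamir this graph is simply 0 -> 1.

BINARY_WOM = {0: {1}, 1: set()}          # legal transitions per state

def reachable(graph, a, b):
    """True if state b can be reached from state a (a == b counts)."""
    if a == b:
        return True
    return any(reachable(graph, nxt, b) for nxt in graph[a])

def legal_update(graph, old_cells, new_cells):
    """A whole-memory update is legal iff every cell's change is reachable."""
    return all(reachable(graph, o, n) for o, n in zip(old_cells, new_cells))

# Example: 0 -> 1 is allowed, 1 -> 0 is not.
assert legal_update(BINARY_WOM, [0, 1, 0], [1, 1, 0])
assert not legal_update(BINARY_WOM, [1, 0, 0], [0, 0, 0])
```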

129 citations


BookDOI
14 Jan 1999
TL;DR: This chapter discusses the architecture of human memory, functional dissociation of brain regions in learning and memory, evidence for multiple systems, and component processes versus systems.
Abstract: J.K. Foster and M. Jelicic, memory chapters, procedures, and processes. E. Tulving, study of memory - processes and systems. H.L. Roediger, R. Buckner, K.B. McDermott, components of processing. R.M. McDonald, A-M. Ergis, and G. Winocur, functional dissociation of brain regions in learning and memory, evidence for multiple systems. T.A. Blaxton, combining disruption and activation techniques to map conceptual and perceptual memory processes in the human brain. A.R. Mayes, how does the brain mediate our ability to remember? M.S. Weldon, the memory chop shop - issues in the search for memory systems. J.D.E. Gabrieli, the architecture of human memory. J.P. Toth and R.R. Hunt, not one versus many, but zero versus any, structure and function in the context of the multiple memory systems debate. A.J. Parkin, component processes versus systems, is there really an important difference?

111 citations


Journal ArticleDOI
TL;DR: The mechanisms that encode information into memory belong to a family of mechanisms that are involved in dual-task slowing phenomena and that have been studied under the rubric of the PRP effect (psychological refractory period).
Abstract: We examined the mechanisms that mediate the transfer of information from visual input to storage in memory. Observers performed two concurrent tasks, one of which required input into memory. We discovered that the processes involved in the transfer of information from sensory input into memory cause slowing in concurrent cognitive tasks (dual-task slowing). We used the dual-task slowing effect to demonstrate that memory encoding requires more time when more information is to be encoded and to show that dual-task slowing occurs long after the initial perceptual encoding of visual information (Exp. 1). These results suggest a late and central locus of interaction between the two tasks. Experiment 2 also used two concurrent tasks. However, we reversed the direction of interaction between them and produced a memory deficit from the execution of a concurrent task. Together the results suggest that the mechanisms that encode information into memory belong to a family of mechanisms that are involved in dual-task slowing phenomena and that have been studied under the rubric of the PRP effect (psychological refractory period). We were able to locate the most probable locus of the dual-task interactions to a process that appears necessary for memory encoding. We call this process short-term consolidation.

99 citations


Journal ArticleDOI
TL;DR: Findings support the notion that item and source memory are mediated, at least in part, by different processes during encoding.
Abstract: Item memory and source memory were assessed in a task that simulated a social conversation. Participants generated answers to questions or read statements presented by one of three sources (faces on a computer screen). Positive generation effects were observed for item memory. That is, participants remembered topics of conversation better if they were asked questions about the topics than if they simply read statements about topics. However, a negative generation effect occurred for source memory. That is, remembering the source of some information was disrupted if participants were required to answer questions pertaining to that information. These findings support the notion that item and source memory are mediated, at least in part, by different processes during encoding.

Patent
25 Jun 1999
TL;DR: In this paper, an approach and method are presented for classifying regions of an image based on the relative importance of the various areas and for adaptively using the importance information to allocate processing resources, e.g., bit allocation in an encoding environment.
Abstract: Apparatus and method for classifying regions of an image, based on the relative 'importance' of the various areas and to adaptively use the importance information to allocate processing resources, e.g., bit allocation in an encoding environment.
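
A hedged sketch of the allocation step only: each classified region receives a share of the bit budget proportional to its importance weight. The region names and weights are invented for the example; the patent's classification method is not shown.

```python
# Allocate a bit budget across image regions in proportion to importance weights.

def allocate_bits(region_importance, total_bits):
    total = sum(region_importance.values())
    return {name: int(total_bits * w / total) for name, w in region_importance.items()}

regions = {"face": 0.6, "background": 0.1, "text_overlay": 0.3}
print(allocate_bits(regions, 1_000_000))
# -> {'face': 600000, 'background': 100000, 'text_overlay': 300000}
```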

Journal ArticleDOI
TL;DR: The tensor product of trellises is used to build a trellis which is applicable to multiple description coding and provides remarkable performance with little encoding complexity.
Abstract: We present a construction of multiple description trellis-coded quantizers. We use the tensor product of trellises to build a trellis which is applicable to multiple description coding. The problems of index assignment and set partitioning for the resulting trellis are considered. The Viterbi algorithm provides the best path for encoding and the design procedure utilizes a generalized Lloyd algorithm. The encoding process simultaneously generates all the transmitted sequences. Furthermore, the complexity of the scheme is almost independent of the rate. The quantizer provides remarkable performance with little encoding complexity.
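
As a toy illustration of one ingredient, the sketch below runs the Viterbi algorithm over a small invented trellis whose branches carry reproduction levels, returning the minimum-distortion encoding path for an input sequence; the tensor-product construction, index assignment, and generalized Lloyd design are not reproduced here.

```python
# Toy Viterbi search for the best encoding path through a trellis whose
# branches carry reproduction levels. The 4-state trellis and levels are invented.

import math

# TRELLIS[state] = list of (next_state, branch_label, reproduction_level)
TRELLIS = {
    0: [(0, 0, -3.0), (1, 1, +1.0)],
    1: [(2, 0, -1.0), (3, 1, +3.0)],
    2: [(0, 0, -2.0), (1, 1, +2.0)],
    3: [(2, 0, -0.5), (3, 1, +0.5)],
}

def viterbi_encode(samples):
    """Return the branch-label sequence of the minimum-distortion path."""
    cost = {0: 0.0}                      # start in state 0
    paths = {0: []}
    for x in samples:
        new_cost, new_paths = {}, {}
        for state, c in cost.items():
            for nxt, label, level in TRELLIS[state]:
                d = c + (x - level) ** 2
                if d < new_cost.get(nxt, math.inf):
                    new_cost[nxt] = d
                    new_paths[nxt] = paths[state] + [label]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)
    return paths[best], cost[best]

labels, distortion = viterbi_encode([0.7, -1.2, 2.4, -0.3])
print(labels, distortion)
```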

Patent
Yang Gao1
24 Aug 1999
TL;DR: In this article, a multi-rate speech codec supports a number of encoding bit rate modes by adaptively selecting encoding bits rate modes to match communication channel restrictions, and a variety of techniques are applied, many of which involve the classification of the input signal.
Abstract: A multi-rate speech codec supports a number of encoding bit rate modes by adaptively selecting encoding bit rate modes to match communication channel restrictions. In higher bit rate encoding modes, an accurate representation of speech through CELP (code-excited linear prediction) and other associated modeling parameters are generated for higher quality decoding and reproduction. To achieve high quality in high lower bit rate encoding modes, the speech encoder departs from the strict waveform matching criteria of regular CELP coders and strives to identify significant perceptual features of the input signal. To support lower bit rate encoding modes, a variety of techniques are applied, many of which involve the classification of the input signal. For each of the bit rate modes selected, a number of fixed or innovation sub-codebooks are selected in use in generating innovation vectors.
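
A minimal sketch of the mode-selection idea only: pick the highest encoding bit rate mode the channel can currently sustain. The mode set, headroom factor, and decision rule are assumptions, not the codec's actual logic.

```python
# Pick the highest bit-rate mode that fits the current channel capacity.

MODES_KBPS = [11.0, 8.0, 6.65, 4.55]        # hypothetical rates, high to low

def select_mode(channel_capacity_kbps, headroom=0.9):
    for rate in MODES_KBPS:
        if rate <= channel_capacity_kbps * headroom:
            return rate
    return MODES_KBPS[-1]                   # fall back to the lowest-rate mode

print(select_mode(9.2))   # -> 8.0
```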


Book ChapterDOI
01 Jan 1999
TL;DR: Soar does not currently include any capacity limits on its dynamic memory (SDM), but is compatible with certain such limitations, and a constraint that SDM can hold at most two items of the same “type” yields a coherent explanation for many psycholinguistic phenomena in the comprehension of sentences.
Abstract: (3) Soar does not currently include any capacity limits on its dynamic memory (SDM), but is compatible with certain such limitations. In particular, a constraint that SDM can hold at most two items of the same “type” (suitably defined) yields a coherent explanation for many psycholinguistic phenomena in the comprehension of sentences. This constraint is motivated by computational efficiency concerns, and embodies the general principle of similarity-based interference (Baddeley & Logie; Cowan; Schneider; and O’Reilly, Braver & Cohen — all in this volume).

Patent
Peter K. Naji1
08 Dec 1999
TL;DR: In this paper, an apparatus and method are described for reading the state of each cell in a stacked memory comprising stacks of cells in an addressable array, with each stack including MTJ memory cells stacked together with current terminals connected in series and including first and second current terminals coupled through an electronic switch to a current source.
Abstract: Apparatus and method of reading the state of each cell in a stacked memory comprising stacks of cells in an addressable array with each stack including MTJ memory cells stacked together with current terminals connected in series, and including first and second current terminals coupled through an electronic switch to a current source. Each stack includes 2^n levels of memory. A voltage drop across an addressed stack is sensed. Reference voltages equal to the 2^n memory levels are provided and the sensed voltage drop is compared to the reference voltages to determine the memory level in the addressed stack. Encoding apparatus is used to convert the voltage drop to a digital output signal.
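
An illustrative sketch (not the patent's circuit) of the read-out step: compare the sensed voltage drop against 2^n reference levels and emit the closest level as an n-bit digital value; the voltage range and cell count are invented.

```python
# Map a sensed voltage drop to one of 2^n memory levels via reference comparison.

def decode_stack_voltage(v_sensed, v_min=0.2, v_max=1.0, n_cells=2):
    levels = 2 ** n_cells
    refs = [v_min + (i + 0.5) * (v_max - v_min) / levels for i in range(levels)]
    # pick the reference level closest to the sensed drop
    level = min(range(levels), key=lambda i: abs(refs[i] - v_sensed))
    return format(level, f"0{n_cells}b")    # n-bit digital output

print(decode_stack_voltage(0.55))  # -> '01', the second of four levels
```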

Journal ArticleDOI
TL;DR: How synchronization routines can be verified and how finite-state programs can be analyzed are described, and some interesting findings from the verification and the analysis are presented.
Abstract: The Murphi description language and verification system for finite-state concurrent systems is applied to the problem of specifying a family of multiprocessor memory models described in the SPARC Version 9 architecture manual. The description language allows for a straightforward operational description of the memory model which can be used as a specification for programmers and machine architects. The automatic verifier can be used to generate all possible outcomes of small assembly language multiprocessor programs in a given memory model, which is very helpful for understanding the subtleties of the model. The verifier can also check the correctness of assembly language programs including synchronization routines. This paper describes the memory models and their encoding in the Murphi description language. We describe how synchronization routines can be verified and how finite-state programs can be analyzed. We also present some interesting findings from the verification and the analysis.
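
A much-simplified sketch of the underlying idea of exhaustively generating the outcomes of a small multiprocessor program: here the memory model is plain sequential consistency and the programs are two-instruction store/load lists, whereas the paper's Murphi descriptions cover the SPARC Version 9 memory models.

```python
# Enumerate every interleaving of two tiny programs and collect the outcomes.

# Each instruction: ("st", addr, value) or ("ld", addr, register_name)
P0 = [("st", "x", 1), ("ld", "y", "r0")]
P1 = [("st", "y", 1), ("ld", "x", "r1")]

def interleavings(a, b):
    """All merges of a and b that preserve each program's order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, addr, arg in schedule:
        if op == "st":
            mem[addr] = arg
        else:
            regs[arg] = mem[addr]
    return (regs["r0"], regs["r1"])

outcomes = {run(s) for s in interleavings(P0, P1)}
print(outcomes)   # under sequential consistency, (0, 0) never appears
```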

MonographDOI
01 Apr 1999
TL;DR: In this article, Izawa et al. present a 30-year retrospective of the models of human memory and their relationship with the Theory of Distributed Associative Memory (TODAM).
Abstract: Contents: R.C. Atkinson, Foreword. Preface. C. Izawa, On Human Memory: A Brief Introduction. R.M. Shiffrin, 30 Years of Memory. B.B. Murdock, The Buffer 30 Years Later: Working Memory in a Theory of Distributed Associative Memory (TODAM). W.K. Estes, Models of Human Memory: A 30-Year Retrospective. J.G.W. Raaijmakers, R.H. Phaf, Part-List Cuing Revisited: Testing the Sampling-Bias Hypothesis. D.D. Ohrt, S.D. Gronlund, List-Length Effect and Continuous Memory: Confounds and Solutions. M.S. Humphreys, G. Tehan, Cues and Codes in Working Memory Tasks. A.F. Healy, T.F. Cunningham, Recall of Order Information: Evidence Requiring a Dual-Storage Memory Model. C. Izawa, Efficiency in Acquisition and Short-Term Memory: Study-Test-Rest Presentation Programs and Learning Difficulty. S.E. Clark, Recalling to Recognize and Recognizing Recall. T.D. Wickens, Measuring the Time Course of Retention.

Patent
21 Jan 1999
TL;DR: In this article, a flash memory for 16-value (4-bit) recording is presented, where the encoder converts input data Din into an abbreviated Reed-Solomon code to provide write data WD and the converter converts the write data WD into four-bit parallel data.
Abstract: This invention relates to a memory apparatus or the like adaptable to a multi-value recording flash memory and others. A flash memory 10 is designed for 16-value (4-bit) recording. For a write operation, the encoder (12) converts input data Din into an abbreviated Reed-Solomon code to provide write data WD. The converter (13) converts the write data WD into four-bit parallel data. The converted data are fed and written to each memory cell constituting the cell arrays (11) successively. For a read operation, the converter (14) converts read data RD from the cell arrays (11) into one-byte (8-bit) parallel data and supplies the converted data to the decoder (15) for error correction in units of bytes, whereby output data Dout is obtained. Since the Reed-Solomon code is used, sufficient performance can be obtained with a limited number of errors to be corrected.
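
A sketch of the four-bit conversion step only, with the Reed-Solomon encoding and the cell-array interface omitted: each byte of write data is split into two 4-bit symbols, one per 16-level cell, and reassembled on read. Function names are illustrative.

```python
# Split bytes into 4-bit symbols for 16-level (4-bit) cells and reassemble them.

def bytes_to_nibbles(write_data: bytes):
    """Split each byte into two 4-bit values, one per 16-level cell."""
    cells = []
    for b in write_data:
        cells.append(b >> 4)        # high nibble
        cells.append(b & 0x0F)      # low nibble
    return cells

def nibbles_to_bytes(cells):
    return bytes((cells[i] << 4) | cells[i + 1] for i in range(0, len(cells), 2))

data = b"\xA5\x3C"
assert nibbles_to_bytes(bytes_to_nibbles(data)) == data
```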

Patent
Philippe Ferriere1
26 Aug 1999
TL;DR: An improved teleconferencing data capture, encoding, and decoding architecture incorporates the audio encoding and video encoding functions in capture encoder hardware devices, and incorporates the video decoding function in a video decoder hardware device as discussed by the authors.
Abstract: An improved teleconferencing data capture, encoding, and decoding architecture incorporates the audio encoding and video encoding functions in capture encoder hardware devices, and incorporates the video decoding function in a video decoder hardware device. The video decoder and an audio decoder are able to analyze incoming data packets and are communicably linked to their respective capture encoder devices, or to a single capture encoder device if both audio and video capture and encoding functions are incorporated in a single device, so that the capture and/or encoding functions may be modified during the course of a teleconference.


Proceedings ArticleDOI
04 Mar 1999
TL;DR: An adaptive code-book encoding is proposed, which is applicable to low power chip interfaces, especially for deep sub-micron VLSIs, and results show that this encoding method is effective for low power chip interfaces.
Abstract: An adaptive code-book encoding is proposed, which is applicable to low power chip interfaces. In this method, data transition activity on bus signals is lowered by data encoding similar to vector quantization (VQ). Transferred data on the bus are the quantized vector numbers along with the Hamming difference between the original data and the quantized vector. Computer simulation and measurement results show that this encoding method is effective for low power chip interfaces, especially for deep sub-micron VLSIs.
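
A rough sketch of the encoding idea under stated assumptions (codebook size, update rule, and word width are invented): the transmitter sends the index of the Hamming-nearest codebook vector together with the bit-wise difference from it, and both ends apply the same update so their codebooks stay in sync.

```python
# Send (codebook index, XOR difference); bus activity is roughly the weight of the difference.

def hamming_weight(x):
    return bin(x).count("1")

class AdaptiveCodebook:
    def __init__(self, size=4):
        self.book = [0] * size

    def encode(self, word):
        # choose the codebook entry nearest to the word in Hamming distance
        idx = min(range(len(self.book)),
                  key=lambda i: hamming_weight(self.book[i] ^ word))
        diff = self.book[idx] ^ word     # low-weight difference goes on the bus
        self.book[idx] = word            # adaptive update (mirrored by the receiver)
        return idx, diff

    def decode(self, idx, diff):
        word = self.book[idx] ^ diff
        self.book[idx] = word            # keep the receiver codebook in sync
        return word

tx, rx = AdaptiveCodebook(), AdaptiveCodebook()
for w in [0x00FF, 0x00FE, 0x00FC, 0x80FC]:
    idx, diff = tx.encode(w)
    assert rx.decode(idx, diff) == w
    # transition activity on the bus is roughly hamming_weight(diff)
```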

Patent
08 Apr 1999
TL;DR: In this paper, an embedded DRAM architecture specially adapted for graphics processing includes multiple processor engines and memory blocks arranged to form a ring topology, including a standard execution unit and specialized execution units coupled to reconfigurable memory blocks to support MIMD style multiprocessing.
Abstract: An embedded DRAM architecture specially adapted for graphics processing includes multiple processor engines and memory blocks arranged to form a ring topology. The processor engines include a standard execution unit and specialized execution units coupled to reconfigurable memory blocks to support MIMD-style multiprocessing. Additionally, extensions to the instruction set architecture (ISA) are defined to dramatically improve performance in MPEG-2 encoding and other DSP applications.

Patent
09 Mar 1999
TL;DR: In this article, an integrated multimedia encoding system is described, where a stream processor is coupled to the unified memory module and the multimedia encoder for multiplexing the elementary streams into a single stream, and monitoring the actual bit rate of the combined multimedia stream.
Abstract: An integrated multimedia encoding system is disclosed. Multimedia encoders which are capable of adjusting bit rates receive multimedia data to compress the data. After compressing the data, the multimedia encoders adjust the bit rates of the elementary streams responsive to a control input. Bit rates are increased or decreased using delays or, for video data, by allocating more or fewer bits to each macroblock, frame, or group of frames. A unified memory module is coupled to the multimedia encoders to store the multimedia elementary stream data, the Program or Transport stream data, and data from other sources as needed. The unified memory is capable of adjusting storage allocations responsive to the real-time requirements of the incoming multimedia streams and the outgoing Program or Transport stream data. A stream processor is coupled to the unified memory module and the multimedia encoders for multiplexing the elementary streams into a single stream and monitoring the actual bit rate of the combined multimedia stream. Monitoring the actual bit rate as a function of the number of bits passed over a period of time provides accurate feedback as to the system throughput. A multimedia processor then determines the bit rates of the elementary streams and generates a control signal to adjust the bit rates of the encoders to ensure that an optimal bit rate is continuously achieved by the system. The stream processor also operates using dedicated instructions which allow it to efficiently multiplex the incoming streams together.
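
A hedged sketch of the monitoring/feedback loop described above: measure the actual bit rate as bits passed over a time window and nudge each encoder's target rate toward the channel budget. The gain, interfaces, and numbers are assumptions.

```python
# Simple feedback step: compare measured throughput to the budget and adjust targets.

def adjust_rates(encoder_rates, bits_passed, window_s, channel_budget_bps, gain=0.5):
    actual_bps = bits_passed / window_s
    error = channel_budget_bps - actual_bps          # positive means headroom is left
    correction = gain * error / len(encoder_rates)   # spread across the encoders
    return [max(0.0, r + correction) for r in encoder_rates]

rates = [2_000_000.0, 192_000.0]                     # video + audio targets (bps)
rates = adjust_rates(rates, bits_passed=2.4e6, window_s=1.0,
                     channel_budget_bps=2.2e6)
print(rates)   # both targets are reduced because the stream ran over budget
```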

Journal ArticleDOI
TL;DR: The view that lexical codes actively contribute to the retention of words in working memory is supported by event-related brain potentials recorded while participants performed a serial recall task.

Patent
08 Jan 1999
TL;DR: In this article, an associative memory utilizes a location addressable memory and lookup table to generate from a key the address in memory storing an associated record, such that the sum of valid index values for symbols of a particular key is a unique value that is used as an address to the memory storing the record associated with that key.
Abstract: To provide fast access times with very large key fields, an associative memory utilizes a location addressable memory and lookup table to generate from a key the address in memory storing an associated record. The lookup tables, stored in memory, are constructed with the aid of arithmetic data compression methods to create a near-perfect hashing of the keys. For encoding into the lookup table, keys are divided into a string of symbols. Each valid and invalid symbol is assigned an index value, such that the sum of valid index values for the symbols of a particular key is a unique value that is used as an address to the memory storing the record associated with that key, and the sums for keys containing invalid index values point to a location in memory containing similar data. Utilizing the lookup tables, set and relational operations may be carried out that provide a user with a maximum number of key records resulting from a sequence of intersection, union, and mask operations.
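
A simplified illustration of the lookup idea: every (position, symbol) pair gets an index value so that the sum over a key's symbols is a unique address into the record table, with invalid symbols routed to a shared miss location. Here the index values come from a plain mixed-radix assignment rather than the patent's arithmetic-compression construction.

```python
# Per-position symbol index tables whose sums form unique addresses for valid keys.

def build_tables(keys):
    alphabets = [sorted({k[i] for k in keys}) for i in range(len(keys[0]))]
    tables, stride = [], 1
    for pos in reversed(range(len(alphabets))):
        tables.insert(0, {sym: i * stride for i, sym in enumerate(alphabets[pos])})
        stride *= len(alphabets[pos])
    return tables

def address(tables, key, miss=-1):
    total = 0
    for pos, sym in enumerate(key):
        if sym not in tables[pos]:
            return miss                 # invalid symbol -> shared "miss" slot
        total += tables[pos][sym]
    return total

keys = ["AB", "AC", "BB", "CC"]
tables = build_tables(keys)
addrs = [address(tables, k) for k in keys]
assert len(set(addrs)) == len(keys)     # each valid key hashes to a unique address
print(dict(zip(keys, addrs)), address(tables, "ZZ"))
```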

01 Jan 1999
TL;DR: The model and other considerations suggest that cognitive architectures should enforce a two-element limit on the depth of the stack to deter its use for storing task goals while preserving its use for attention and learning.
Abstract: The notion that memory for goals is organized as a stack is central in cognitive theory in that stacks are core constructs in leading cognitive architectures. However, the stack over-predicts the strength of goal memory and the precision of goal selection order, while under-predicting the maintenance cost of both. A better way to study memory for goals is to treat them like any other kind of memory element. This approach makes accurate and well-constrained predictions and reveals the nature of goal encoding and retrieval processes. The approach is demonstrated in an ACT-R model of human performance on a canonical goal-based task, the Tower of Hanoi. The model and other considerations suggest that cognitive architectures should enforce a two-element limit on the depth of the stack to deter its use for storing task goals while preserving its use for attention and learning.

Patent
30 Nov 1999
TL;DR: In this article, a method and system for uniformly encoding arrays of values in a video stream is presented, using a data type of an element to be encoded, a predictive value is quantized and encoded as part of the video stream.
Abstract: A method and system for uniformly encoding arrays of values in a video stream. Using a data type of an element to be encoded, a predictive value is quantized and encoded as part of a video stream. The resulting video stream encodes rotations, normals, and vectors.
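
A minimal sketch of the predict-quantize-encode step, assuming per-type step sizes that are purely illustrative: the value written to the stream is the quantized difference between the element and its prediction.

```python
# Quantize the residual against a prediction, with a step size chosen per data type.

STEP = {"rotation": 1 / 512, "normal": 1 / 256, "vector": 1 / 64}

def encode_element(value, predicted, data_type):
    step = STEP[data_type]
    return round((value - predicted) / step)        # integer symbol for the stream

def decode_element(symbol, predicted, data_type):
    return predicted + symbol * STEP[data_type]

sym = encode_element(0.731, predicted=0.720, data_type="normal")
print(sym, decode_element(sym, 0.720, "normal"))    # ~0.7317 after quantization
```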