
Showing papers on "Encoding (memory) published in 2009"


Book ChapterDOI
TL;DR: The architecture affords two modes of information processing, an analytical and an associative mode, which provides a theoretically founded formulation of a dual-process theory of reasoning.
Abstract: Starting from the premise that working memory is a system for providing access to representations for complex cognition, six requirements for a working memory system are delineated: (1) maintaining structural representations by dynamic bindings, (2) manipulating structural representations, (3) flexible reconfiguration, (4) partial decoupling from long-term memory, (5) controlled retrieval from long-term memory, and (6) encoding of new structures into long-term memory. The chapter proposes an architecture for a system that meets these requirements. The working memory system consists of a declarative and a procedural part, each of which has three embedded components: the activated part of long-term memory, a component for creating new structural representations by dynamic bindings (the "region of direct access" for declarative working memory, and the "bridge" for procedural working memory), and a mechanism for selecting a single element (the "focus of attention" for declarative working memory, and the "response focus" for procedural working memory). The architecture affords two modes of information processing, an analytical and an associative mode. This distinction provides a theoretically founded formulation of a dual-process theory of reasoning.

452 citations


Journal ArticleDOI
TL;DR: Long after playing squash, your brain continues to process the events that occurred during the game, thereby improving your game, and more generally, enhancing adaptive behavior.
Abstract: Long after playing a game of squash or reading this essay, your memory for playing and reading continues to be processed by your brain. These "offline" processes improve your game and your understanding of this essay, and more generally, enhance adaptive behavior. Yet progress in understanding how the brain regulates the offline processing of memories has been hampered by the absence of robust models for interpreting diverse, and often contradictory, experimental results. In the last 20 years, highly fertile quantitative models across the biological spectrum from the molecular to the behavioral have proved critical in advancing our understanding of memory encoding (e.g., [1–5]). But these models have focused upon the exact moment a memory is formed, while our ability to recall an event is dictated, at least in part, by events that precede and follow the encoding of a new memory. The critical role that events following memory encoding play in determining subsequent recall has been recognized for at least the past 100 years [6]. Yet few, if any, models have been formulated for these "offline" processes that produce qualitative and quantitative changes in a memory during consolidation (Box 1). Our attempts to understand these mysterious processes have generated a purely descriptive set of observations. Although these observations have provided critical glimpses into offline memory processing, they have also produced unresolved contradictions between some of the most fundamental and critical sets of observations (for reviews, see [7–11]). For example, one set of observations suggests that consolidation may occur over any time interval, whereas another body of data suggests that these processes require sleep [6,8]. Clearly, both cannot be true. Resolving the inherent conflict between these perspectives strikes at the very heart of how biological mechanisms process memories after their initial encoding. Making sense of what threatens to become an avalanche of disconnected and incoherent empirical findings may require novel theories that can simultaneously reconcile apparently inconsistent observations and provide a fertile, hypothesis-driven framework for future work. Here, drawing upon examples mainly from the processing of motor skill memories, I take the first tentative steps toward assembling such a framework.

Box 1. Memory Consolidation
A memory passes through at least three key milestones in its development: initially it is encoded, then it is consolidated, and finally it is retrieved. During consolidation a memory can undergo both quantitative and qualitative changes. A memory may be enhanced, demonstrated by a quantitative increase in performance, or it may be stabilized, demonstrated by becoming quantitatively less susceptible to interference [10,46,47]. A memory can also undergo qualitative changes: there can be a shift in the strategy used to solve a problem or the emergence of awareness for what had earlier been learned [49,50]. Although there is a rich diversity in the behavioral expression of consolidation, each of these examples may rely upon the same underlying computation (see main text). Consolidation is measured as a change in performance between testing and retesting [46,47]. Contrasting final performance at retesting against an initial baseline provides a direct measure of "offline" performance changes that occur during consolidation.

204 citations


Patent
03 Feb 2009
TL;DR: In this article, a method for operating a memory, which includes analog memory cells, includes encoding data with an Error Correction Code (ECC) that is representable by a plurality of equations.
Abstract: A method for operating a memory, which includes analog memory cells, includes encoding data with an Error Correction Code (ECC) that is representable by a plurality of equations. The encoded data is stored in a group of the analog memory cells by writing respective input storage values to the memory cells in the group. Multiple sets of output storage values are read from the memory cells in the group using one or more different, respective read parameters for each set. Numbers of the equations, which are satisfied by the respective sets of the output storage values, are determined. A preferred setting of the read parameters is identified responsively to the respective numbers of the satisfied equations. The memory is operated on using the preferred setting of the read parameters.
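The selection step described above can be sketched as a toy (the code, thresholds, and parity checks here are hypothetical illustrations, not the patented scheme): quantize the analog cell values under each candidate read setting, count how many parity-check equations the resulting bits satisfy, and keep the setting that satisfies the most.

```python
# Toy sketch (hypothetical code and thresholds, not the patented method):
# each candidate read threshold quantizes the analog cell values to bits;
# the threshold whose bits satisfy the most parity checks is preferred.

def read_bits(cells, threshold):
    return [1 if v >= threshold else 0 for v in cells]

def satisfied_checks(bits, checks):
    # each check lists bit positions whose XOR must equal 0
    return sum(1 for c in checks if sum(bits[i] for i in c) % 2 == 0)

def best_threshold(cells, checks, candidates):
    return max(candidates,
               key=lambda t: satisfied_checks(read_bits(cells, t), checks))

# stored bits [1, 0, 1, 1] satisfy both checks below; drift moved the levels
cells = [0.9, 0.2, 0.8, 0.95]
checks = [[0, 2], [1, 2, 3]]
```

With candidate thresholds 0.5 and 0.85, only the lower one recovers bits satisfying both checks, so it would be chosen as the read parameter.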

196 citations


Journal ArticleDOI
TL;DR: A layered neural architecture is described that implements encoding and maintenance, and links these processes to a plausible comparison process that makes the novel prediction that change detection will be enhanced when metrically similar features are remembered.
Abstract: Efficient visually guided behavior depends on the ability to form, retain, and compare visual representations for objects that may be separated in space and time. This ability relies on a short-term form of memory known as visual working memory. Although a considerable body of research has begun to shed light on the neurocognitive systems subserving this form of memory, few theories have addressed these processes in an integrated, neurally plausible framework. We describe a layered neural architecture that implements encoding and maintenance, and links these processes to a plausible comparison process. In addition, the model makes the novel prediction that change detection will be enhanced when metrically similar features are remembered. Results from experiments probing memory for color and for orientation were consistent with this novel prediction. These findings place strong constraints on models addressing the nature of visual working memory and its underlying mechanisms.

156 citations


Proceedings ArticleDOI
15 Jun 2009
TL;DR: This work formulates the problem of instantly decodable network coding as an integer linear program, proposes algorithms to solve it heuristically, and investigates channels with memory, proposing algorithms that exploit channel erasure dependence to increase throughput and decrease delay.
Abstract: We consider the throughput-delay tradeoff in network coded transmission over erasure broadcast channels. Interested in minimizing decoding delay, we formulate the problem of instantly decodable network coding as an integer linear program and propose algorithms to solve it heuristically. In particular, we investigate channels with memory and propose algorithms that can exploit channel erasure dependence to increase throughput and decrease delay.
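The notion of instant decodability admits a simple greedy stand-in (a hedged sketch, not the paper's ILP formulation): a coded packet, formed as an XOR of source packets, is instantly decodable at a receiver exactly when one of its constituents is missing there, so the heuristic grows the XOR set while the number of such receivers keeps improving.

```python
# Hedged greedy sketch (not the paper's ILP): grow an XOR combination of
# packets as long as the number of receivers for which it is instantly
# decodable (exactly one constituent packet missing) strictly increases.

def instantly_decodable(combo, has):
    missing = [p for p in combo if p not in has]
    return len(missing) == 1

def id_count(combo, receivers):
    return sum(instantly_decodable(combo, has) for has in receivers)

def greedy_combo(packets, receivers):
    combo = []
    for p in packets:
        if id_count(combo + [p], receivers) > id_count(combo, receivers):
            combo.append(p)
    return combo

# two receivers, each already holding the packet the other is missing:
# XORing packets 1 and 2 serves both in one transmission
receivers = [{1}, {2}]
```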

130 citations


Patent
01 Jul 2009
TL;DR: In this paper, an intra-TP motion prediction/compensation unit 75 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an intrapredicted motion vector generating unit 76 as the center of search, on the basis of an image to be intra-predicted from a screen rearrangement buffer 62 and reference images from a frame memory 72.
Abstract: The present invention relates to an image processing apparatus and method which make it possible to prevent a decrease in compression efficiency without increasing computational complexity. An intra-TP motion prediction/compensation unit 75 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an intra-predicted motion vector generating unit 76 as the center of search, on the basis of an image to be intra-predicted from a screen rearrangement buffer 62, and reference images from a frame memory 72. An inter-TP motion prediction/compensation unit 78 performs motion prediction within a predetermined search range by taking predicted motion vector information generated by an inter-predicted motion vector generating unit 79 as the center of search, on the basis of an image to be inter-encoded from the screen rearrangement buffer 62, and reference images from the frame memory 72. The present invention can be applied to, for example, an image encoding apparatus that performs encoding in H.264/AVC format.

119 citations


Journal ArticleDOI
TL;DR: These findings support the view that in the hippocampus attention selects the reference frame for task‐relevant information and propose that synchronous activity leads to enhancements in synaptic strength that mediate the stabilization of hippocampal representations.
Abstract: The hippocampus is critically involved in storing explicit memory such as memory for space. A defining feature of explicit memory storage is that it requires attention both for encoding and retrieval. Whereas a great deal is now known about the mechanisms of storage, the mechanisms whereby attention modulates the encoding and retrieval of space and other hippocampus-dependent memory representations are not known. In this review we discuss recent studies, including our own, which show on the cellular level that attention is critical for the stabilization of spatial and reward-associated odour representations. Our findings support the view that in the hippocampus attention selects the reference frame for task-relevant information. This mechanism is in part mediated by dopamine acting through D1/D5 receptors and involves an increase in neuronal synchronization in the gamma-band frequency. We propose that synchronous activity leads to enhancements in synaptic strength that mediate the stabilization of hippocampal representations.

116 citations


Patent
22 Dec 2009
TL;DR: In this article, methods and systems for encoding and decoding signals using a Multi-input Multi-output Time Encoding Machine (TEM) and Time Decoding Machine are described.
Abstract: Methods and systems for encoding and decoding signals using a Multi-input Multi-output Time Encoding Machine (TEM) and Time Decoding Machine are disclosed herein.

76 citations


Proceedings ArticleDOI
03 May 2009
TL;DR: This work proposes an adaptive-rate ECC scheme with BCH codes, implemented on the flash memory controller, with which flash memory can trade storage space for higher error correction capability so that it remains usable even at high noise levels.
Abstract: ECC has been widely used to enhance flash memory endurance and reliability. In this work, we propose an adaptive-rate ECC scheme with BCH codes that is implemented on the flash memory controller. With this scheme, flash memory can trade storage space for higher error correction capability to keep it usable even when there is a high noise level.
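The trade-off can be illustrated with a small calculation (parameters hypothetical, not the paper's controller): as a worn block's raw bit-error rate rises, pick the smallest BCH correction capability t that keeps the codeword failure probability under a target, paying roughly m·t parity bits per codeword for a BCH code over GF(2^m).

```python
import math

# Illustrative sketch (all numbers hypothetical): adaptively choose the
# BCH correction capability t from the observed raw bit-error rate,
# trading code rate for reliability as the flash wears.

def failure_prob(n, t, ber):
    # probability of more than t bit errors among n bits (binomial tail)
    ok = sum(math.comb(n, k) * ber**k * (1 - ber)**(n - k)
             for k in range(t + 1))
    return 1 - ok

def choose_t(n, m, ber, target=1e-6, t_max=64):
    for t in range(1, t_max + 1):
        if failure_prob(n, t, ber) < target:
            return t, (n - m * t) / n      # (capability, approximate rate)
    return t_max, (n - m * t_max) / n
```

For an 8192-bit codeword over GF(2^13), a block whose raw BER degrades from 1e-4 to 1e-3 forces a larger t, i.e. storage space is spent to keep the block usable.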

75 citations


Patent
27 Feb 2009
TL;DR: In this paper, a memory device is described that includes an internal decoder configured to apply, to a first codeword read from the memory cell array, a first decoding scheme selected based on a characteristic of the channel in which the first codeword is read, and to apply an analogously selected second decoding scheme to a second codeword read from the memory cell array, in order to perform error control code (ECC) decoding of both codewords.
Abstract: Memory devices and/or encoding/decoding methods are provided. A memory device may include: a memory cell array; an internal decoder configured to apply, to a first codeword read from the memory cell array, a first decoding scheme selected based on a characteristic of a first channel in which the first codeword is read to perform error control codes (ECC) decoding of the first codeword, and apply, to a second codeword read from the memory cell array, a second decoding scheme selected based on a characteristic of a second channel in which the second codeword is read to perform the ECC decoding of the second codeword; and an external decoder configured to apply an external decoding scheme to the ECC-decoded first codeword and the ECC-decoded second codeword to perform the ECC decoding of the first codeword and the second codeword.

71 citations


Journal ArticleDOI
Li Su1, Yan Lu2, Feng Wu2, Shipeng Li2, Wen Gao3 
TL;DR: A joint complexity-distortion optimization approach is proposed for real-time H.264 video encoding under the power-constrained environment and the adaptive allocation of computational resources and the fine scalability of complexity control can be achieved.
Abstract: In this paper, a joint complexity-distortion optimization approach is proposed for real-time H.264 video encoding under the power-constrained environment. The power consumption is first translated to the encoding computation costs measured by the number of scaled computation units consumed by basic operations. The solved problem is then specified to be the allocation and utilization of the computational resources. A computation allocation model (CAM) with virtual computation buffers is proposed to optimally allocate the computational resources to each video frame. In particular, the proposed CAM and the traditional hypothetical reference decoder model have the same temporal phase in operations. Further, to fully utilize the allocated computational resources, complexity-configurable motion estimation (CAME) and complexity-configurable mode decision (CAMD) algorithms are proposed for H.264 video encoding. In particular, the CAME is performed to select the path of motion search at the frame level, and the CAMD is performed to select the order of mode search at the macroblock level. Based on the hierarchical adjusting approach, the adaptive allocation of computational resources and the fine scalability of complexity control can be achieved.
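The buffer idea behind the CAM can be sketched in a few lines (a hedged stand-in with hypothetical numbers, not the paper's model): a fixed refill of computation units arrives per frame, each frame spends at most what is buffered, and the buffer is clamped so it never under- or overflows.

```python
# Hedged sketch of a virtual computation buffer in the spirit of the
# paper's CAM (refill, capacity, and demands are hypothetical): allocate
# computation units to frames under a leaky-bucket-style constraint.

def allocate(demands, refill, cap):
    buf, alloc = cap / 2, []            # start half full
    for demand in demands:
        spend = min(demand, buf)        # cannot exceed buffered units
        alloc.append(spend)
        buf = min(buf - spend + refill, cap)   # refill, clamp to capacity
    return alloc
```

A demanding frame that arrives after cheap frames can draw on the accumulated surplus, which is the analogy to rate control with a hypothetical reference decoder.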

Patent
18 Dec 2009
TL;DR: In this article, the authors describe a system for providing a media stream transmitted from an encoding system to a remotely-located media player, where the media stream is encoded according to an encoding parameter.
Abstract: Systems and methods are described for providing a media stream transmitted from an encoding system to a remotely-located media player. The media stream is encoded according to an encoding parameter. Data is gathered about a transmit buffer within the encoding system, and the gathered data is processed to arrive at an estimate of network capacity and a calculated encoder rate. The encoding parameter is adjusted during subsequent encoding in response to a change in at least one of the estimate of network capacity and the calculated encoder rate.
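A minimal sketch of the adjustment loop (thresholds and factors hypothetical, not the patented method): transmit-buffer fullness serves as the congestion signal, and the encoding bitrate is nudged toward the estimated network drain rate.

```python
# Hedged sketch (all thresholds hypothetical): adapt the encoder bitrate
# from transmit-buffer occupancy and the estimated drain rate.

def adjust_bitrate(bitrate, occupancy, capacity, drain_rate):
    fullness = occupancy / capacity
    if fullness > 0.8:                        # backlog: network is the bottleneck
        return min(bitrate, drain_rate) * 0.9
    if fullness < 0.2:                        # near-empty buffer: headroom to spend
        return bitrate * 1.05
    return bitrate
```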

Journal ArticleDOI
16 Dec 2009-PLOS ONE
TL;DR: Real-time neural ensemble transient dynamics in the mouse hippocampal CA1 region are described and it is demonstrated that real-time memory traces can be decoded on a moment-to-moment basis over any single trial.
Abstract: One of the fundamental goals in the neurosciences is to elucidate the formation and retrieval of the brain's associative memory traces in real time. Here, we describe real-time neural ensemble transient dynamics in the mouse hippocampal CA1 region and demonstrate their relationships with behavioral performance during both learning and recall. We employed the classic trace fear conditioning paradigm involving a neutral tone followed by a mild foot-shock 20 seconds later. Our large-scale recording and decoding methods revealed that conditioned tone responses and tone-shock association patterns were not present in CA1 during the first pairing, but emerged quickly after multiple pairings. These encoding patterns showed increased immediate-replay, correlating tightly with increased immediate-freezing during learning. Moreover, during contextual recall, these patterns reappeared in tandem six-to-fourteen times per minute, again correlating tightly with behavioral recall. Upon traced tone recall, while various fear memories were retrieved, the shock traces exhibited a unique recall-peak around the 20-second trace interval, further signifying the memory of time for the expected shock. Therefore, our study has revealed various real-time associative memory traces during learning and recall in CA1, and demonstrates that real-time memory traces can be decoded on a moment-to-moment basis over any single trial.

Journal ArticleDOI
TL;DR: The results suggest that feature switch errors reflect failures to maintain bound objects in working memory, perhaps due to the automatic rewriting and rebinding of information in the face of new perceptual input.
Abstract: In these two experiments, we explored the ability to store bound representations of colour and location information in visual working memory using three different tasks. In the location-cue task, we probed how well colour information could be recalled when observers are given a location cue. In the feature-cue task, we probed how well location information could be recalled when observers are given a colour cue. Finally, in the feature-switch detection task, we tested how well observers could detect a recombination of features (e.g., switching the locations of the red and green items). We hypothesized that these tasks might reveal differences in binding capacity limits between switching and nonswitching tests of visual working memory. We also hoped the tasks could provide an explanation for those differences in terms of the component processes of working memory—do failures occur in the encoding, maintenance, or retrieval stages of the task? Experiment 1 showed that performance in the two cued-recall tasks ...

Journal ArticleDOI
TL;DR: A solution methodology for obtaining a sequential decomposition of the global optimization problem is developed and is extended to the case when the sensor makes an imperfect observation of the state of the plant.
Abstract: A discrete time stochastic feedback control system consisting of a nonlinear plant, a sensor, a controller, and a noisy communication channel between the sensor and the controller is considered. The sensor has limited memory and, at each time, it transmits an encoded symbol over the channel and updates its memory. The controller receives a noise-corrupted copy of the transmitted symbol. It generates a control action based on all its past observations and all its past actions. This control action is fed back to the plant. At each time instant the system incurs an instantaneous cost depending on the state of the plant and the control action. The objective is to choose encoding, memory update, and control strategies to minimize an expected total cost over a finite horizon, or an expected discounted cost over an infinite horizon, or an average cost per unit time over an infinite horizon. A solution methodology for obtaining a sequential decomposition of the global optimization problem is developed. This solution methodology is extended to the case when the sensor makes an imperfect observation of the state of the plant.

Proceedings ArticleDOI
03 May 2009
TL;DR: This paper presents a synthesis technique for a mixed-mode BIST scheme which is able to exploit the regularities of a deterministic test pattern set for minimizing the hardware overhead and memory requirements.
Abstract: Programmable mixed-mode BIST schemes combine pseudo-random pattern testing and deterministic test. This paper presents a synthesis technique for a mixed-mode BIST scheme which is able to exploit the regularities of a deterministic test pattern set to minimize hardware overhead and memory requirements. The scheme saves more than 50% of the hardware costs compared with the best schemes known so far, while complete programmability is still preserved.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A constrained memory is a storage device whose elements change their states under some constraints, in which cell levels are easy to increase but hard to decrease.
Abstract: A constrained memory is a storage device whose elements change their states under some constraints. A typical example is flash memories, in which cell levels are easy to increase but hard to decrease. In a general rewriting model, the stored data changes with some pattern determined by the application. In a constrained memory, an appropriate representation is needed for the stored data to enable efficient rewriting.
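A textbook example of such a representation (the classic Rivest-Shamir write-once-memory code, included here for illustration, not taken from this paper) stores 2 bits twice in 3 raise-only cells: the first write uses a codeword of weight at most one, and a rewrite switches to its complement, so bits only ever flip from 0 to 1.

```python
# Classic Rivest-Shamir WOM code (illustrative): two data bits written
# twice into three cells whose levels can only be raised, never erased.

FIRST  = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
SECOND = {v: tuple(1 - b for b in cw) for v, cw in FIRST.items()}

def decode(cells):
    # weight <= 1 means a first-generation codeword, else second generation
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, cw in table.items() if cw == cells)

def rewrite(cells, value):
    if decode(cells) == value:
        return cells                       # value unchanged: write nothing
    new = SECOND[value]
    # raise-only check: never turns a 1 back into a 0
    assert all(n >= c for n, c in zip(new, cells)), "would require an erase"
    return new
```

Any second value can replace any first value without lowering a cell, which is exactly the kind of efficient rewriting an appropriate representation enables.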

Journal ArticleDOI
TL;DR: A new multi-level-set technique is introduced, which is able to incorporate multiple material regions, and which can also handle material specific surface speeds accurately.

Patent
08 Sep 2009
TL;DR: In this paper, a bit length reduction converter converts a picture into a reproduction picture having a smaller bit length than the original picture, and this reproduction picture is stored in a frame memory as a reference picture.
Abstract: In order to improve encoding efficiency while avoiding an increase in the size or memory bandwidth of a frame memory, and while retaining adaptability in the encoding/decoding of a moving picture, a bit length extension converter converts a target picture having a bit length N into an extended target picture having a bit length M, a compressor encodes the converted picture, and an expander restores the encoded picture. Then, a bit length reduction converter converts the picture into a reproduction picture having a bit length L smaller than the bit length M, and this reproduction picture is stored in a frame memory as a reference picture.

Journal ArticleDOI
06 Jul 2009-Memory
TL;DR: A bimodal superiority effect on memory span was found with non-verbal material, and a larger span with auditory (or bimodal) versus visual presentation with verbal material, with a significant effect of articulatory suppression in both conditions.
Abstract: In spite of a large body of empirical research demonstrating the importance of multisensory integration in cognition, there is still little research about multimodal encoding and maintenance effects in working memory. In this study we investigated multimodal encoding in working memory by means of an immediate serial recall task with different modality and format conditions. In a first non-verbal condition participants were presented with sequences of non-verbal inputs representing familiar (concrete) objects, either in visual, auditory or audio-visual formats. In a second verbal condition participants were presented with written, spoken, or bimodally presented words denoting the same objects represented by pictures or sounds in the non-verbal condition. The effects of articulatory suppression were assessed in both conditions. We found a bimodal superiority effect on memory span with non-verbal material, and a larger span with auditory (or bimodal) versus visual presentation with verbal material, with a significant effect of articulatory suppression in the two conditions.

Patent
10 Jul 2009
TL;DR: In this article, the authors present methods, apparatuses, systems, and architectures for providing fast, independent, and reliable retrieval of system data (e.g., metadata) from a storage system, which enables minimal degradation in the reliability of user data.
Abstract: Methods, apparatuses, systems, and architectures for providing fast, independent, and reliable retrieval of system data (e.g., metadata) from a storage system, which enables minimal degradation in the reliability of user data. Methods generally include encoding the system data at least twice, at least once independently and at least once jointly along with user data. Methods can also include decoding the system data first, and upon a decoding failure, jointly decoding the system data and the user data.

Posted Content
TL;DR: This work implements Rule 30 automata with a majority memory and shows that using the memory function it can transform quasi-chaotic dynamics of classical Rule 30 into domains of travelling structures with predictable behaviour.
Abstract: In cellular automata with memory, the unchanged maps of the conventional cellular automata are applied to cells endowed with memory of their past states in some specified interval. We implement Rule 30 automata with a majority memory and show that using the memory function we can transform quasi-chaotic dynamics of classical Rule 30 into domains of travelling structures with predictable behaviour. We analyse morphological complexity of the automata and classify dynamics of gliders (particles, self-localizations) in memory-enriched Rule 30. We provide formal ways of encoding and classifying glider dynamics using de Bruijn diagrams, soliton reactions and quasi-chemical representations.

Journal ArticleDOI
TL;DR: In this paper, the information capacities of a lossy bosonic channel with correlated noise were evaluated and a global encoding/decoding scheme, which involves input-entangled states among different channel uses, is always preferable with respect to a local one in the presence of memory.
Abstract: We evaluate the information capacities of a lossy bosonic channel with correlated noise. The model generalizes the one recently discussed by Pilyavets et al (2008 Phys. Rev. A 77 052324), where memory effects come from the interaction with correlated environments. Environmental correlations are quantified by a multimode squeezing parameter, which vanishes in the memoryless limit. We show that a global encoding/decoding scheme, which involves input-entangled states among different channel uses, is always preferable with respect to a local one in the presence of memory. Moreover, in a certain range of the parameters, we provide an analytical expression for the classical capacity of the channel showing that a global encoding/decoding scheme allows it to be attained. All the results can be applied to a broad class of bosonic Gaussian channels.

Book ChapterDOI
25 Aug 2009
TL;DR: This work presents a novel data parallel algorithm for variable length encoding using atomic operations, which achieves performance speedups of up to 35-50x using a CUDA-enabled GPGPU.
Abstract: Variable-Length Encoding (VLE) is a process of reducing input data size by replacing fixed-length data words with codewords of shorter length. As VLE is one of the main building blocks in systems for multimedia compression, its efficient implementation is essential. The massively parallel architecture of modern general purpose graphics processing units (GPGPUs) has been successfully used for acceleration of inherently parallel compression blocks, such as image transforms and motion estimation. On the other hand, VLE is an inherently serial process due to the requirement of writing a variable number of bits for each codeword to the compressed data stream. The introduction of atomic operations on the latest GPGPUs enables writing to the output memory locations by many threads in parallel. We present a novel data parallel algorithm for variable length encoding using atomic operations, which achieves performance speedups of up to 35-50x using a CUDA-enabled GPGPU.
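The core idea can be shown in a serial Python stand-in for the CUDA kernel (the codebook below is a hypothetical toy): an exclusive prefix sum over the code lengths gives every symbol its bit offset, after which all codewords can be written independently, in any order; on a GPU that independent-write step is what parallel threads perform with atomic OR on the output words.

```python
# Serial stand-in for the data-parallel VLE idea: prefix-sum the code
# lengths to get per-symbol bit offsets, then write codewords with
# order-independent OR operations (the GPU analogue is atomic OR).

def vle_encode(symbols, codebook):
    codes = [codebook[s] for s in symbols]      # (bits, length) pairs
    offsets, total = [], 0
    for _, length in codes:                     # exclusive prefix sum
        offsets.append(total)
        total += length
    out = 0
    for (bits, length), off in zip(codes, offsets):
        out |= bits << (total - off - length)   # independent writes
    return out, total                           # bitstream as int, bit count

# toy prefix-free codebook: a -> 0, b -> 10, c -> 11
BOOK = {"a": (0b0, 1), "b": (0b10, 2), "c": (0b11, 2)}
```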

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A Toeplitz random encoding method is proposed that is universal and spreads out the image energy more evenly; its MR physical feasibility is verified by Bloch simulation, and the superior performance of the proposed method is demonstrated in simulation results.
Abstract: Compressed Sensing (CS), as a new framework for data acquisition and signal recovery, has been applied to accelerate conventional magnetic resonance imaging (MRI) with Fourier encoding. However, Fourier encoding is not universal and weakly spreads out the energy of most natural images. This limits the achievable reduction factors. In this paper, we propose a Toeplitz random encoding method that is universal and spreads out the image energy more evenly. The MR physical feasibility of the proposed encoding method is verified by Bloch simulation, and the superior performance of the proposed method is demonstrated in simulation results.
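The sensing operator itself is easy to sketch (a hedged toy with hypothetical dimensions, ignoring the MR physics): a single random ±1 sequence generates an m×n Toeplitz matrix whose entry (i, j) depends only on j − i, and the compressed measurements are y = Tx.

```python
import random

# Hedged sketch: build an m x n Toeplitz sensing matrix from one random
# +/-1 sequence (constant diagonals: row i is seq[m-1-i : m-1-i+n]) and
# take compressed measurements y = T x.

def toeplitz_measure(x, m, seed=0):
    n = len(x)
    rng = random.Random(seed)
    seq = [rng.choice((-1, 1)) for _ in range(n + m - 1)]
    rows = [seq[m - 1 - i: m - 1 - i + n] for i in range(m)]
    y = [sum(a * b for a, b in zip(row, x)) for row in rows]
    return rows, y
```

Because each row is a one-sample shift of its neighbour, the whole operator is specified by n + m − 1 random values rather than m·n, and matrix-vector products reduce to convolutions.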

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A model is introduced that is relevant for Phase Change Memory, a promising emerging nonvolatile memory technology in which only a limited number of particular write actions may be applied to a cell before it becomes unusable.
Abstract: We study memories capable of storing multiple bits per memory cell, with the property that certain state transitions “wear” the cell. We introduce a model that is relevant for Phase Change Memory, a promising emerging nonvolatile memory technology that exhibits limitations in the number of particular write actions that one may apply to a cell before rendering it unusable. We exploit the theory of Write Efficient Memories to derive a closed form expression for the storage capacity/lifetime fundamental tradeoff for this model. We then present families of codes specialized to distinct ranges for the target lifetimes, covering the full range from moderate redundancy to an arbitrarily large lifetime increase. These codes have low implementation complexity and remarkably good performance; for example in an 8 level cell we can increase the lifetime of a memory by a factor of ten while sacrificing only 2/3 of the uncoded storage capacity of the memory.

Journal ArticleDOI
TL;DR: IEEE Std 1599 allows interaction with music content such as notes and sounds in video applications and in any interactive device.
Abstract: IEEE Std 1599 allows interaction with music content such as notes and sounds in video applications and in any interactive device.

Journal ArticleDOI
TL;DR: It is concluded that no single encoding type is clearly dominant over the others in all respects, such as convergence, finding the optimum solution, and iteration count.

Journal ArticleDOI
TL;DR: In this article, the influence of implicit motivation on learning and memory performance was investigated in a functional framework and the authors found that implicit motives interact with arousal states that facilitate not only selective encoding, but also effort and speed in memory performance.

Journal ArticleDOI
TL;DR: In this article, the unchanged maps of the conventional cellular automata are applied to cells endowed with memory of their past states in some specified interval, and a majority memory function is used to transform the quasi-chaotic dynamics of classical Rule 30 into domains of travelling structures with predictable behaviour.
Abstract: In cellular automata with memory, the unchanged maps of the conventional cellular automata are applied to cells endowed with memory of their past states in some specified interval. We implement Rule 30 automata with a majority memory and show that using the memory function we can transform quasi-chaotic dynamics of classical Rule 30 into domains of travelling structures with predictable behaviour. We analyse morphological complexity of the automata and classify dynamics of gliders (particle, self-localizations) in memory-enriched Rule 30. We provide formal ways of encoding and classifying glider dynamics using de Bruijn diagrams, soliton reactions and quasi-chemical representations.