Topic
Encoding (memory)
About: Encoding (memory) is a research topic. Over its lifetime, 7547 publications have been published within this topic, receiving 120214 citations. The topic is also known as: memory encoding & encoding of memories.
Papers published on a yearly basis
Papers
TL;DR: Images that were more consistent with learned spatial distributions were more precisely retrieved, and this schematic influence increased over time, suggesting that schemas form rapidly but that their influence on episodic retrieval is dictated by the need to bolster fading memory representations.
Abstract: Episodic memory retrieval is increasingly influenced by schematic information as memories mature, but it is unclear whether this is due to the slow formation of schemas over time, or the slow forgetting of the episodes. To address this, we separately probed memory for newly learned schemas as well as their influence on episodic memory decisions. In this experiment, participants encoded images from two categories, with the location of images in each category drawn from a different spatial distribution. They could thus learn schemas of category locations by encoding specific episodes. We found that images that were more consistent with these distributions were more precisely retrieved, and this schematic influence increased over time. However, memory for the schema distribution, measured using generalization to novel images, also became less precise over time. This incongruity suggests that schemas form rapidly, but their influence on episodic retrieval is dictated by the need to bolster fading memory representations.
36 citations
16 Jun 2004
TL;DR: In this paper, a real-time variable bit-rate controller for a video encoder and video data transmission system is proposed: the number of bits allocated to the current picture is based on previous encoding results, without defining a relation between encoding rate and distortion, and the bit limit is not enforced when the features of the current picture differ from those of previous pictures.
Abstract: A method of controlling an encoding (bit) rate and a method of transmitting video data, and an encoding (bit) rate controller for a video encoder and a video data transmission system employing the methods. The number of bits is allocated to the current picture on the basis of previous encoding results, without defining a relation between encoding rate and distortion; the limited number of bits is not enforced when features of the current picture differ from those of previous pictures; and the quantizer scale is set adaptively to various features of the current picture without using an additional number of bits for variation of the quantizer scale. It is thus possible to improve the picture quality of video data displayed on a monitor by using such real-time variable bit-rate control.
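The general mechanism the abstract describes can be illustrated with a small sketch. This is a minimal, hypothetical model of history-based bit allocation and quantizer adaptation, not the patented controller; the function names, the 0.5 damping factor, and the 10% tolerance band are illustrative assumptions.

```python
# Illustrative sketch (NOT the patented algorithm): allocate a bit budget to
# the current picture from previous encoding results, then adapt the
# quantizer scale from how well the previous picture met its budget.

def allocate_bits(prev_bits, target_bits_per_picture):
    """Predict a bit budget for the current picture from recent history."""
    avg_prev = sum(prev_bits) / len(prev_bits)
    correction = target_bits_per_picture - avg_prev
    # Damp the correction (factor 0.5 is an assumption) so the budget
    # reacts smoothly rather than oscillating.
    return max(1, int(target_bits_per_picture + 0.5 * correction))

def adapt_qscale(prev_qscale, actual_bits, budget_bits, qmin=1, qmax=31):
    """Coarsen quantization after an overshoot, refine it after an undershoot."""
    ratio = actual_bits / budget_bits
    if ratio > 1.1:        # spent >10% over budget: quantize more coarsely
        q = prev_qscale + 1
    elif ratio < 0.9:      # spent >10% under budget: quantize more finely
        q = prev_qscale - 1
    else:                  # close enough: keep the current quantizer scale
        q = prev_qscale
    return min(qmax, max(qmin, q))
```

For example, after two pictures that consumed 1200 and 1000 bits against a 1000-bit target, `allocate_bits([1200, 1000], 1000)` trims the next budget below the target to compensate for the earlier overshoot.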
36 citations
TL;DR: A unifying framework for developing brain co-processors, based on artificial neural networks and deep learning, is introduced to address the challenge of multi-channel decoding and encoding in brain-computer interfaces.
35 citations
15 Jul 2019
TL;DR: Dartagnan is presented, a bounded model checker (BMC) for concurrent programs under weak memory models, which matches or even exceeds the performance of the model-specific verification tools Nidhugg and CBMC, as well as the performance of Herd, a CAT-compatible litmus testing tool.
Abstract: We present Dartagnan, a bounded model checker (BMC) for concurrent programs under weak memory models. Its distinguishing feature is that the memory model is not implemented inside the tool but taken as part of the input. Dartagnan reads CAT, the standard language for memory models, which can define x86/TSO, ARMv7, ARMv8, Power, C/C++, and Linux kernel concurrency primitives. BMC with memory models as inputs is challenging: one has to encode into SMT not only the program but also its semantics as defined by the memory model. What makes Dartagnan scale is its relation analysis, a novel static analysis that significantly reduces the size of the encoding. Dartagnan matches or even exceeds the performance of the model-specific verification tools Nidhugg and CBMC, as well as the performance of Herd, a CAT-compatible litmus testing tool. Compared to the unoptimized encoding, the speed-up is often more than two orders of magnitude.
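To see what a memory-model semantics decides for a small program, consider the classic "store buffering" litmus test. The sketch below is a toy enumerator under sequential consistency only; it is emphatically not how Dartagnan works (Dartagnan encodes the program together with a CAT-defined model into SMT), and the event encoding is an assumption made for illustration.

```python
# Toy litmus-test explorer, NOT Dartagnan: enumerate every sequentially
# consistent (SC) interleaving of the store-buffering test and collect the
# final register values. Under SC, r0 = r1 = 0 is forbidden; under x86/TSO
# it is allowed, which is the kind of distinction a CAT model makes precise.

# Thread 0: x := 1; r0 := y        Thread 1: y := 1; r1 := x
T0 = [(0, "w", "x"), (0, "r", "y")]
T1 = [(1, "w", "y"), (1, "r", "x")]

def interleavings(a, b):
    """Yield every merge of a and b that preserves each thread's program order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Execute one interleaving against a single shared memory (the SC view)."""
    mem = {"x": 0, "y": 0}
    regs = {}
    for tid, kind, var in schedule:
        if kind == "w":
            mem[var] = 1
        else:
            regs[tid] = mem[var]
    return (regs[0], regs[1])

outcomes = {run(s) for s in interleavings(T0, T1)}
```

Enumerating all six interleavings yields the outcomes (0,1), (1,0), and (1,1), but never (0,0); a BMC tool reaches the same verdict symbolically, by asking an SMT solver whether any execution consistent with the memory model satisfies the forbidden condition.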
35 citations
TL;DR: An adaptive scheme employing a pyramid structure is proposed for multiresolution encoding of still pictures: a low-entropy pyramid decomposition is designed by means of different reduction/expansion filters, and encoding priority is given to important features through a content-driven decision rule.
Abstract: An adaptive scheme employing a pyramid structure is proposed for multiresolution encoding of still pictures. Efficiency is increased by designing a low-entropy pyramid decomposition by means of different reduction/expansion filters, and also by giving encoding priority to important features through a content-driven decision rule. Quantization error feedback performed along the pyramid levels ensures lossless reconstruction capability and improves the robustness of the algorithm.
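The pyramid idea can be sketched in one dimension. This is a minimal sketch assuming pairwise-average reduction and sample-repetition expansion; the paper's scheme uses different reduction/expansion filters, a content-driven decision rule, and quantization error feedback, none of which are reproduced here.

```python
# Minimal 1-D pyramid (multiresolution) encoding sketch. Each level stores
# a coarse signal plus the residual needed to recover the finer level.
# Assumptions: integer samples, even-length signals, pairwise-average
# reduction, sample-repetition expansion.

def downsample(signal):
    """Halve resolution: integer average of adjacent sample pairs."""
    return [(signal[i] + signal[i + 1]) // 2 for i in range(0, len(signal), 2)]

def upsample(signal):
    """Double resolution by repeating each sample."""
    return [s for s in signal for _ in (0, 1)]

def encode(signal, levels):
    """Return (coarsest level, residuals ordered coarse to fine)."""
    residuals = []
    cur = signal
    for _ in range(levels):
        low = downsample(cur)
        pred = upsample(low)
        residuals.append([c - p for c, p in zip(cur, pred)])
        cur = low
    return cur, residuals[::-1]

def decode(top, residuals):
    """Rebuild the signal by expanding each level and adding its residual."""
    cur = top
    for res in residuals:
        pred = upsample(cur)
        cur = [p + r for p, r in zip(pred, res)]
    return cur
```

Because the residuals are stored exactly, `decode(*encode(x, levels))` returns `x` unchanged; the paper preserves this lossless property even with quantized residuals by feeding the quantization error back along the pyramid levels.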
35 citations