
Showing papers on "Encoding (memory) published in 2000"


Journal ArticleDOI
TL;DR: A computational model of human memory for serial order is described (OSCillator-based Associative Recall [OSCAR]); in the model, successive list items become associated to successive states of a dynamic learning-context signal.
Abstract: A computational model of human memory for serial order is described (OSCillator-based Associative Recall [OSCAR]). In the model, successive list items become associated to successive states of a dynamic learning-context signal. Retrieval involves reinstatement of the learning context, successive states of which cue successive recalls. The model provides an integrated account of both item memory and order memory and allows the hierarchical representation of temporal order information. The model accounts for a wide range of serial order memory data, including differential item and order memory, transposition gradients, item similarity effects, the effects of item lag and separation in judgments of relative and absolute recency, probed serial recall data, distinctiveness effects, grouping effects at various temporal resolutions, longer term memory for serial order, list length effects, and the effects of vocabulary size on serial recall.
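The oscillator-driven context mechanism the abstract describes can be illustrated with a toy simulation. The frequencies, random item codes, and Hebbian outer-product learning rule below are illustrative assumptions, not the published OSCAR parameterization.

```python
import numpy as np

# Toy sketch of the idea behind OSCAR: a bank of oscillators provides a
# slowly evolving context signal; each list item is bound to the context
# state at its serial position, and reinstating those states at retrieval
# cues the items back in order. Frequencies, item codes, and the Hebbian
# learning rule are illustrative assumptions, not the published model.

rng = np.random.default_rng(0)
freqs = np.array([0.11, 0.23, 0.37, 0.53])     # oscillator frequencies
items = rng.choice([-1.0, 1.0], size=(5, 64))  # five random item vectors

def context(t):
    """State of the oscillator bank at serial position t."""
    phases = 2 * np.pi * freqs * t
    return np.concatenate([np.sin(phases), np.cos(phases)])

# Encoding: Hebbian outer-product binding of item i to context state i.
W = sum(np.outer(items[i], context(i)) for i in range(len(items)))

# Retrieval: reinstate each context state and report the best-matching item.
recalled = [int(np.argmax(items @ (W @ context(t)))) for t in range(len(items))]
print(recalled)  # each reinstated context state cues the item bound to it
```

Reinstating the learning context for each serial position retrieves the item bound to it, which is the core of the model's account of serial order.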

677 citations


Journal ArticleDOI
TL;DR: The authors survey writing research and attempt to sketch a principled account of how multiple sources of knowledge, stored in long-term memory, are coordinated during writing within the constraints of working memory.
Abstract: This article surveys writing research and attempts to sketch a principled account of how multiple sources of knowledge, stored in long-term memory, are coordinated during writing within the constraints of working memory. The concept of long-term working memory is applied to the development of writing expertise. Based on research reviewed, it is speculated that lack of fluent language generation processes constrains novice writers within short-term working memory capacity, whereas fluent encoding and extensive knowledge allow skilled writers to take advantage of long-term memory resources via long-term working memory.

399 citations


Journal ArticleDOI
TL;DR: This paper found that when attention is divided at retrieval, interference is created only when the memory and concurrent task compete for access to word-specific representational systems; no such specificity is necessary to create interference at encoding.
Abstract: In 5 divided attention (DA) experiments, students (24 in each experiment) performed visual distracting tasks (e.g., recognition of words, word and digit monitoring) while either simultaneously encoding an auditory word list or engaging in oral free recall of the target word list. DA during retrieval, using either of the word-based distracting tasks, produced relatively larger interference effects than the digit-monitoring task. DA during encoding produced uniformly large interference effects, regardless of the type of distracting task. Results suggest that when attention is divided at retrieval, interference is created only when the memory and concurrent task compete for access to word-specific representational systems; no such specificity is necessary to create interference at encoding. During encoding, memory and concurrent tasks compete primarily for general resources, whereas during retrieval, they compete primarily for representational systems.

258 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that software-implemented EDAC is a low-cost solution that provides protection for code segments and can appreciably enhance the system availability in aLow-radiation space environment.
Abstract: In many computer systems, the contents of memory are protected by an error detection and correction (EDAC) code. Bit-flips caused by single event upsets (SEUs) are a well-known problem in memory chips, and EDAC codes have been an effective solution to this problem. These codes are usually implemented in hardware using extra memory bits and encoding/decoding circuitry. In systems where EDAC hardware is not available, the reliability of the system can be improved by providing protection through software. Codes and techniques that can be used for software implementation of EDAC are discussed and compared. The implementation requirements and issues are discussed, and some solutions are presented. The paper discusses in detail how system-level and chip-level structures relate to multiple error correction. A simple solution is presented to make the EDAC scheme independent of these structures. The technique in this paper was implemented and used effectively in an actual space experiment. We have observed that SEUs corrupt the operating system or programs of a computer system that does not have any EDAC for memory, forcing the system to be reset frequently. Protecting the entire memory (code and data) might not be practical in software. However, this paper demonstrates that software-implemented EDAC is a low-cost solution that provides protection for code segments and can appreciably enhance the system availability in a low-radiation space environment.
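The kind of software-implemented EDAC the abstract describes can be sketched with a textbook Hamming(7,4) code extended by an overall parity bit (SEC-DED: single-error correction, double-error detection). This layout is a generic illustration, not the specific scheme evaluated in the paper.

```python
# Minimal software EDAC sketch: Hamming(7,4) plus an overall parity bit
# (SEC-DED). The bit layout is a textbook construction, not the specific
# scheme evaluated in the paper.

def encode(nibble):
    """Encode 4 data bits (as a list of 0/1) into an 8-bit SEC-DED codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]       # codeword positions 1..7
    word.append(sum(word) % 2)                # overall parity (position 8)
    return word

def decode(word):
    """Return (data_bits, status); corrects single errors, flags doubles."""
    w = word[:7]
    syndrome = ((w[0] ^ w[2] ^ w[4] ^ w[6])         # check 1: positions 1,3,5,7
                | (w[1] ^ w[2] ^ w[5] ^ w[6]) << 1  # check 2: positions 2,3,6,7
                | (w[3] ^ w[4] ^ w[5] ^ w[6]) << 2) # check 4: positions 4,5,6,7
    parity_ok = sum(word) % 2 == 0
    if syndrome and parity_ok:                # two flips cancel in parity
        return None, "double error detected"
    if syndrome:                              # single error: syndrome = position
        w[syndrome - 1] ^= 1
    return [w[2], w[4], w[5], w[6]], "ok"

data = [1, 0, 1, 1]
cw = encode(data)
cw[5] ^= 1                                    # simulate a single bit-flip (SEU)
recovered, status = decode(cw)
print(recovered, status)                      # -> [1, 0, 1, 1] ok
```

In a real deployment a scrubber task would periodically decode and rewrite protected memory regions, so that single-bit upsets are corrected before a second upset accumulates in the same word.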

171 citations



Journal ArticleDOI
TL;DR: It is argued that retrieval processes are obligatory or protected, but that they require attentional resources for their execution, and that divided attention at retrieval affected memory performance only minimally.
Abstract: We have recently cast doubt (Craik, Govoni, Naveh-Benjamin, & Anderson, 1996; Naveh-Benjamin, Craik, Guez, & Dori, 1998) on the view that encoding and retrieval processes in human memory are similar. Divided attention at encoding was shown to reduce memory performance significantly, whereas divided attention at retrieval affected memory performance only minimally. In this article we examined this asymmetry further by using more difficult retrieval tasks, which require substantial effort. In one experiment, subjects had to encode and retrieve lists of unfamiliar name-noun combinations attached to people's photographs, and in the other, subjects had to encode words that were either strong or weak associates of the cues presented with them and then to retrieve those words with either intra- or extra-list cues. The results of both experiments showed that unlike division of attention at encoding, which reduces memory performance markedly, division of attention at retrieval has almost no effect on memory performance.

136 citations


Journal ArticleDOI
TL;DR: In this paper, Vicente and Wang's critique of the generalizability of the LTWM framework is rejected, and the process-based framework is shown to be superior to their product theory because it can explain interactions of the expertise effect in "contrived" recall under several testing conditions differing in presentation rate, instructions, and memory procedures.
Abstract: K. A. Ericsson and W. Kintsch's (1995) theoretical framework of long-term working memory (LTWM) accounts for how experts acquire encoding and retrieval mechanisms to adapt to real-time demands of working memory during representative interactions with their natural environments. The transfer of the same LTWM mechanisms is shown to account for the expertise effect in unrepresentative "contrived" memory tests. Therefore, K. J. Vicente and J. H. Wang's (1998) critique of the generalizability of the LTWM framework is rejected. Their proposed refutation of LTWM accounts is found to be based on misrepresented facts. The process-based framework of LTWM is shown to be superior to their product theory because it can explain interactions of the expertise effect in "contrived" recall under several testing conditions differing in presentation rate, instructions, and memory procedures.

135 citations


01 Jan 2000
TL;DR: In this article, a theoretical framework for how individuals acquire skills to maintain access to relevant information in long-term working memory (LTWM) during comprehension of text and a wide range of different types of expert performance is proposed.
Abstract: K. A. Ericsson and W. Kintsch's (1995) theoretical framework of long-term working memory (LTWM) accounts for how experts acquire encoding and retrieval mechanisms to adapt to real-time demands of working memory during representative interactions with their natural environments. The transfer of the same LTWM mechanisms is shown to account for the expertise effect in unrepresentative "contrived" memory tests. Therefore, K. J. Vicente and J. H. Wang's (1998) critique of the generalizability of the LTWM framework is rejected. Their proposed refutation of LTWM accounts is found to be based on misrepresented facts. The process-based framework of LTWM is shown to be superior to their product theory because it can explain interactions of the expertise effect in "contrived" recall under several testing conditions differing in presentation rate, instructions, and memory procedures. A few years ago, two of us (Ericsson & Kintsch, 1995) proposed in this journal a theoretical framework for how individuals could acquire skills to maintain access to relevant information in long-term working memory (LTWM) during comprehension of text and a wide range of different types of expert performance. We considered our research to be consistent with pioneering efforts in ecological psychology, as Vicente and Wang's (1998) quote eloquently put it: Skill acquisition consists of changing what one attends to, the goal being to identify diagnostic high-order information that can be used to satisfy task goals. Training of attention is accomplished by abstraction, filtering, and optimization of perceptual search (see E. J. Gibson, 1969, 1991, for more details). (p. 36)

110 citations


Journal ArticleDOI
TL;DR: Contrary to previous theorizations, these data demonstrate that stereotype-inconsistent information is encoded more thoroughly and represented more accurately in memory than stereotype-consistent information when resources are depleted.
Abstract: This research compared free recall and recognition memory for stereotype-consistent and stereotype-inconsistent information as a function of attentional capacity during encoding. Whereas recall was better for consistent information under conditions of limited capacity, recognition accuracy favored inconsistent information in the same conditions. Contrary to previous theoretical accounts, these data demonstrate that stereotype-inconsistent information is encoded more thoroughly and represented more accurately in memory than stereotype-consistent information when resources are depleted. The recall advantage for consistent information appears to be due to retrieval advantages rather than more thorough encoding or representation. Implications of these findings for models of stereotype efficiency are discussed.

110 citations


Patent
30 Nov 2000
TL;DR: A technique is presented in which the superposing of information on data can be changed in accordance with the importance of the information, so that required information can be fetched with ease; a video-encoding system extracts the auxiliary packets superposed on the V-blanking and H-blanking areas back from an input base-band video signal.
Abstract: Encoding parameters of picture and higher layers, which are of importance to a number of applications, and encoding parameters of slice and lower layers, which are not of importance to all applications, are converted into auxiliary packets inserted respectively into a V-blanking area and an H-blanking area of a video-data signal output by a history-information-multiplexing apparatus employed in a video-decoding system. Conversely, a video-encoding system extracts the auxiliary packets superposed on the V-blanking area and the H-blanking area back from an input base-band video signal. As a result, the technique of superposing information on data can be changed in accordance with the importance of the information, and required information can be fetched with ease.

107 citations


Journal ArticleDOI
TL;DR: Targeted remediation of memory appears to yield task-specific improvement, but the gains do not generalize to other memory tasks; subjects receiving memory remediation failed to independently apply mnemonic encoding strategies, learned and used successfully within training tasks, to other general measures of verbal learning and memory.
Abstract: Background. Memory deficits are commonly experienced by patients with schizophrenia, often persist even after effective psychotropic treatment of psychotic symptoms, and have been demonstrated to interfere with many aspects of successful psychiatric rehabilitation. Because of their significant impact on functional outcome, effective remediation of cognitive deficits has been increasingly cited as an essential component of comprehensive treatment. Efforts to remediate memory deficits have met with circumscribed success, leaving it uncertain whether schizophrenia patients can be taught, without experimental induction, to independently employ semantic encoding or a range of other mnemonic techniques. Method. We examined the feasibility of using memory and problem-solving teaching techniques developed within educational psychology – techniques which promote intrinsic motivation and task engagement through contextualization and personalization of learning activities – to remediate memory deficits in a group of in-patients with chronic schizophrenia spectrum disorders. Results. Although our memory remediation group significantly improved on the memory remediation task, they did not make greater gains on measures of immediate paragraph recall or list learning than the control groups. Conclusions. Targeted remediation of memory appears to yield task-specific improvement, but the gains do not generalize to other memory tasks. Subjects receiving memory remediation failed to independently apply mnemonic encoding strategies, learned and used successfully within training tasks, to other general measures of verbal learning and memory.

Journal ArticleDOI
TL;DR: This experiment addressed two questions about negative background television effects on reading comprehension and memory: are these effects due to interference with processes of initial comprehension andMemory encoding, processes of memory retrieval, or both?
Abstract: Previous research has shown negative background television effects on reading comprehension and memory. This experiment addressed two questions about such negative effects: (a) Are these effects due to interference with processes of initial comprehension and memory encoding, processes of memory retrieval, or both? and (b) Are the effects of background TV stronger for recall or recognition memory? Possible compensating positive effects of background TV were also addressed: Can viewing similar background television content during recall as that viewed during reading improve memory through facilitative context effects? Participants read newspaper science articles with background TV or in silence and completed recall and recognition tests after a filled delay either with TV or in silence. Deleterious effects were obtained for recall memory only and resulted solely from the presence of background TV at the time of comprehension/encoding. No facilitative context effects were obtained by reinstating the same programs during recall.

Journal ArticleDOI
TL;DR: The amount of semantic clustering performed by the elders showed a decline with age and was positively related to source performance, and results suggest that subtle age-related changes in semantic knowledge may be related to declines in semantic clustering and memory performance.
Abstract: Of the memory deficits associated with aging, elders are most impaired at attributing the source to remembered information. Additionally, aging is marked by a decrease in the use of encoding strategies that are thought to enhance the acquisition and retention of information. We examined how manipulating the encoding strategy during acquisition affected item and source memory in 32 young and 68 elderly participants. Elderly participants were dichotomized into young-old and old-old based upon the median age (74 years). Memory was assessed using Word List A from the California Verbal Learning Test (CVLT) and its alternate form. Encoding strategy was manipulated by semantic clustering. For the Blocked List, words were presented grouped into their semantic categories, whereas for the Unblocked List categories were intermixed within the list. Item and source memory judgments were made 20 minutes after the final CVLT recall trial and again one week later. Results revealed a disproportionate decline in source, compared to item memory in the two older groups. Semantic blocking enhanced item memory for the elders, but not for the young. The amount of semantic clustering performed by the elders showed a decline with age and was positively related to source performance. Results also suggest that subtle age-related changes in semantic knowledge may be related to declines in semantic clustering and memory performance.

Patent
08 Feb 2000
TL;DR: In this paper, a method and system for encoding digital information is described, in which media program information is captured and used to produce a media program file, which is then sent to the selected set of encoding engines to encode the program information in the one or more encoding formats.
Abstract: A method and system for encoding digital information is disclosed. According to the method, media program information is captured and used to produce a media program file. An encoding request is received from a client which requests that the media program information be encoded in one or more encoding formats. A set of encoding engines are selected that can encode the media program information in each of the one or more encoding formats. The media program file is then sent to the selected set of encoding engines to encode the media program information in the one or more encoding formats.

Patent
09 Jun 2000
TL;DR: In this article, a system and method for simultaneously and synchronously encoding a data signal in multiple formats and at multiple bit rates was proposed, where a plurality of encoding stations are controlled by one central controller, which controls the encoding format carried out by each encoding station, and also controls the commencement of the encoding process.
Abstract: A system and method for simultaneously and synchronously encoding a data signal in multiple formats and at multiple bit rates. According to one aspect of the invention, plural encoding stations are controlled by one central controller, which controls the encoding format carried out by each encoding station, and also controls the commencement of the encoding process to ensure that each encoding station simultaneously commences the encoding process, thereby synchronizing the encoded streams.

Journal ArticleDOI
TL;DR: This article shows that priority encoding transmission is intimately related to the broadcast erasure channel with a degraded message set, and that the PET approach, which consists in time-sharing and interleaving classical erasure-resilient codes, achieves the capacity region of this channel.
Abstract: Albanese et al. (see ibid. vol.42, p.1737-44, 1996) introduced priority encoding transmission (PET) for sending hierarchically organized messages over lossy packet-based computer networks. In a PET system, each symbol in the message is assigned a priority which determines the minimal number of codeword symbols that is required to recover that symbol. This article revisits the PET approach using tools from network information theory. We first show that priority encoding transmission is intimately related to the broadcast erasure channel with a degraded message set. Using the information spectrum approach, we provide an informational characterization of the capacity region of general broadcast channels with degraded message sets. We show that the PET inequality has an information-theoretic counterpart: it defines the capacity region of the broadcast erasure channel with degraded message sets. Hence the PET approach, which consists in time-sharing and interleaving classical erasure-resilient codes, achieves the capacity region of this channel. Moreover, we show that the PET approach may achieve the sphere-packing exponents. Finally, we observe that on some simple nonstationary broadcast channels, time-sharing may be outperformed. The impact of memory on the optimality of the PET approach remains elusive.
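The priority mechanism in a PET system can be illustrated with a deliberately simplified sketch: each symbol's priority determines how many transmitted packets carry information about it, so higher-priority symbols survive more erasures. Real PET systems use erasure-resilient codes (e.g. Reed-Solomon) rather than the plain repetition used here, and the symbol names are invented for the example.

```python
# Toy illustration of priority encoding transmission (PET): each symbol's
# priority sets how many packets it is copied into, so high-priority
# symbols survive more packet losses. Real PET uses erasure-resilient
# codes; repetition is used here only for clarity.

import random

def pet_encode(symbols, priorities, n_packets):
    """Spread each symbol over `priorities[i]` of the n_packets."""
    packets = [{} for _ in range(n_packets)]
    for i, (sym, copies) in enumerate(zip(symbols, priorities)):
        for p in random.sample(range(n_packets), copies):
            packets[p][i] = sym
    return packets

def pet_decode(received, n_symbols):
    """Recover whatever symbols appear in the surviving packets."""
    out = [None] * n_symbols
    for packet in received:
        for i, sym in packet.items():
            out[i] = sym
    return out

random.seed(1)
symbols = ["header", "base-layer", "enhancement"]
priorities = [6, 3, 1]                 # copies per symbol, out of 6 packets
packets = pet_encode(symbols, priorities, n_packets=6)

survivors = packets[:2]                # 4 of 6 packets lost
print(pet_decode(survivors, len(symbols)))
```

The header, carried in every packet, is recoverable from any single survivor; the low-priority enhancement symbol is the first casualty of packet loss, mirroring PET's recovery guarantee.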

Patent
Chin-Long Chen1
11 May 2000
TL;DR: In this paper, the parity check bits are incorporated into a generalized and generalizable error correction system which produces a significantly simple decoding and error correction structure, and the system provides for SEC-DED code capabilities while at the same time providing capabilities for correcting multiple odd numbers of errors occurring in distinct groups.
Abstract: Advantage is taken of the presence of ordinary parity check bits occurring in the data flow in a computer or other information-handling system to improve error correction capability while at the same time providing simpler decoding. More particularly, the encoding and decoding system, methods, and devices herein include the capability of separating error correction in data bits and in parity check bits. In this regard, it is noted that the present invention therefore provides an improved memory system in which the parity check bits do not have to be stripped off prior to storage of data into a memory system with error correction coding redundancy built in. Instead of these parity check bits being stripped off, they are incorporated into a generalized and generalizable error correction system which produces a significantly simple decoding and error correction structure. The system provides for SEC-DED code capabilities while at the same time providing capabilities for correcting multiple odd numbers of errors occurring in distinct groups. Accordingly, the present invention provides encoding and decoding systems and methods, and a correspondingly improved memory system.

Journal ArticleDOI
TL;DR: It is concluded that normal aging is associated with a qualitatively different pattern of N100 responses during memory retrieval, and a static N100 response during encoding.

Journal ArticleDOI
TL;DR: Distinct electrical responses have been associated with recollective processing of words and with priming of visual word-form; this source of evidence can enrich our understanding of both the cognitive structure and neural substrates of human memory.
Abstract: Neuropsychological studies of memory disorders have played a prominent role in the development of theories of memory. To test and refine such theories in future, it will be advantageous to include research that utilizes physiological measures of the neural events responsible for memory. Measures of the electrical activity of the brain in the form of event-related potentials (ERPs) provide one source of such information. Recent results suggest that these real-time measures reflect relevant encoding and retrieval operations. In particular, distinct electrical responses have been associated with recollective processing of words and with priming of visual word-form. This source of evidence can thus enrich our understanding of both the cognitive structure and neural substrates of human memory.

Journal ArticleDOI
TL;DR: A model of sparse distributed memory is developed that is based on phase relations between the incoming signals and an oscillatory mechanism for information processing that includes phase-frequency encoding of input information, natural frequency adaptation among the network oscillators for storage of input signals, and a resonance amplification mechanism that responds to familiar stimuli.
Abstract: A model of sparse distributed memory is developed that is based on phase relations between the incoming signals and an oscillatory mechanism for information processing. This includes phase-frequency encoding of input information, natural frequency adaptation among the network oscillators for storage of input signals, and a resonance amplification mechanism that responds to familiar stimuli. Simulations of this model show different types of dynamics in response to new and familiar stimuli. The application of the model to hippocampal working memory is discussed.

Patent
26 Apr 2000
TL;DR: In this article, a method of encoding a sequence of pictures is presented that defines a strategy for choosing a prediction mode among the three possible ones when encoding a B-macroblock.
Abstract: In the Improved PB-frames mode, one of the options of the H.263+ Recommendation, a macroblock of a B-frame may be encoded according to a forward, a backward or a bidirectional prediction mode. The invention relates to a method of encoding a sequence of pictures that defines a strategy for choosing a prediction mode among the three possible ones when encoding a B-macroblock. This strategy is based upon SAD (Sum of Absolute Differences) calculations and motion-vector coherence, and makes it possible to use backward prediction when scene cuts occur. The calculations are performed here on original pictures, requiring less calculation and reducing the CPU burden. The invention also relates to an encoding system for carrying out said method, including a computer-readable medium storing instructions that allow the implementation of this method.
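The mode decision the abstract outlines can be sketched as a minimal SAD comparison. The pixel values and the averaging used for the bidirectional predictor are illustrative assumptions, not the patented method (which also takes motion-vector coherence into account).

```python
# Hedged sketch of a SAD-based prediction-mode decision: compute the Sum
# of Absolute Differences between a B-frame macroblock and each candidate
# prediction, then pick the cheapest mode. Block contents are illustrative.

def sad(block, prediction):
    """Sum of Absolute Differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block, prediction))

def choose_mode(block, fwd_pred, bwd_pred):
    bidir_pred = [(f + b) // 2 for f, b in zip(fwd_pred, bwd_pred)]
    costs = {
        "forward": sad(block, fwd_pred),
        "backward": sad(block, bwd_pred),      # wins across scene cuts
        "bidirectional": sad(block, bidir_pred),
    }
    return min(costs, key=costs.get), costs

# A scene cut: the block resembles the *next* reference, not the previous one.
block    = [200, 201, 199, 202]
fwd_pred = [50, 52, 49, 51]       # previous frame (old scene)
bwd_pred = [200, 200, 200, 203]   # next frame (new scene)

mode, costs = choose_mode(block, fwd_pred, bwd_pred)
print(mode)  # -> backward
```

Across a scene cut the forward reference comes from the old scene, so its SAD is large and backward prediction wins, matching the behavior described in the abstract.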

Patent
15 Sep 2000
TL;DR: In this article, a pitch pre-processing procedure was proposed for processing the input speech signal to form a revised speech signal biased toward an ideal voiced and stationary characteristic, which allowed the encoder to fully capture the benefits of a bandwidth-efficient, long-term predictive procedure for a greater amount of speech components of an input speech signal than would otherwise be possible.
Abstract: In accordance with one aspect of the invention, a selector supports the selection of a first encoding scheme or a second encoding scheme based upon the detection or absence of the triggering characteristic in the interval of the input speech signal. The first encoding scheme has a pitch pre-processing procedure for processing the input speech signal to form a revised speech signal biased toward an ideal voiced and stationary characteristic. The pre-processing procedure allows the encoder to fully capture the benefits of a bandwidth-efficient, long-term predictive procedure for a greater amount of speech components of an input speech signal than would otherwise be possible. In accordance with another aspect of the invention, the second encoding scheme entails a long-term prediction mode for encoding the pitch on a sub-frame-by-sub-frame basis. The long-term prediction mode is tailored to cases in which the generally periodic component of the speech is not stationary, or is less than completely periodic, and requires a greater frequency of updates from the adaptive codebook to achieve a desired perceptual quality of the reproduced speech under a long-term predictive procedure.


Journal ArticleDOI
TL;DR: The recent covariance structural equation model for word-pair associate encoding and retrieval is analysed, and the new concept of 'brain traffic' is introduced as an aid to assessing the relative importance of various brain modules.

Proceedings ArticleDOI
Neal Glew1
01 Oct 2000
TL;DR: This paper describes a language with a primitive notion of classes and objects and presents an encoding of this language into one with records and functions that uses a new formulation of self quantifiers that is more powerful than previous approaches.
Abstract: An object encoding translates a language with object primitives to one without. Similarly, a class encoding translates classes into other primitives. Both are important theoretically for comparing the expressive power of languages and for transferring results from traditional languages to those with objects and classes. Both are also important foundations for the implementation of object-oriented languages, as compilers typically include a phase that performs these translations. This paper describes a language with a primitive notion of classes and objects and presents an encoding of this language into one with records and functions. The encoding uses two techniques often used in compilers for single-inheritance class-based object-oriented languages: the self-application semantics and the method-table technique. To type the output of the encoding, the encoding uses a new formulation of self quantifiers that is more powerful than previous approaches.
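The two compiler techniques named in the abstract, the method-table technique and the self-application semantics, can be sketched in a dynamically typed setting (the static typing via self quantifiers is the paper's actual contribution and is not modeled here); the names are illustrative.

```python
# Sketch of a class/object encoding into records and functions: a class
# becomes a shared method table (a record of plain functions), an object
# becomes a record of fields plus a pointer to that table, and a method
# call applies the looked-up function to the object itself
# (self-application semantics). Names are illustrative.

# "Class": a method table, i.e. a record mapping names to functions.
point_class = {
    "get_x": lambda self: self["x"],
    "move":  lambda self, dx: {**self, "x": self["x"] + dx},
}

# "Object": a record of fields plus a reference to its method table.
def new_point(x):
    return {"methods": point_class, "x": x}

def send(obj, name, *args):
    """Method invocation: look up in the table, apply to the object."""
    return obj["methods"][name](obj, *args)

p = new_point(3)
q = send(p, "move", 4)
print(send(q, "get_x"))  # -> 7
```

Sharing one method table per class (rather than copying methods into every object) is exactly the space-saving trick single-inheritance compilers exploit, and it is what makes typing the encoding non-trivial.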

Journal ArticleDOI
TL;DR: Investigation of the incidence of several factors contributing to age-related memory decrement shows a differential impairment of conceptual processing between the middle-old and the old-age groups, and the environmental support hypothesis is discussed in terms of the involvement of encoding and retrieval operations required by the memory task.
Abstract: The present study was conducted to investigate the incidence of several factors contributing to age-related memory decrement. Variables manipulated include quality (level-of-processing encoding conditions), the degree of effort and encoding quantitative elaboration (active/passive encoding conditions), and the influence of retrieval support (free-/cued-recall conditions). In support of the environmental support hypothesis, middle-old and old subjects benefited more than young ones from cued recall in all the memory tests. Moreover, the results showed a differential (qualitative vs. quantitative) impairment of conceptual processing between the middle-old and the old-age groups. In the middle-old group, age differences were abolished by deep processing; in the old group, age differences were attenuated only when deep and active processing was associated with retrieval support. These gradual memory impairments are evaluated according to Mandler's model of memory (1979, In L. G. Nilsson [Ed.], Perspectives on memory research. Hillsdale: Lawrence Erlbaum), and the environmental support hypothesis is discussed in terms of the involvement of encoding and retrieval operations required by the memory task.

Patent
18 Apr 2000
TL;DR: In this article, a method of encoding a sequence of pictures is presented that defines a strategy for choosing a prediction mode among the three possible ones when encoding a B-macroblock.
Abstract: In the Improved PB-frames mode, one of the options of the H.263+ Recommendation, a macroblock of a B-frame may be encoded according to a forward, a backward or a bidirectional prediction mode. The invention relates to a method of encoding a sequence of pictures that defines a strategy for choosing a prediction mode among the three possible ones when encoding a B-macroblock. This strategy is based upon SAD (Sum of Absolute Differences) calculations and motion-vector coherence, and makes it possible to use backward prediction when scene cuts occur. The calculations are performed here on original pictures, requiring less calculation and reducing the CPU burden. The invention also relates to an encoding system for carrying out said method, including a computer-readable medium storing instructions that allow the implementation of this method.

Journal ArticleDOI
TL;DR: The effects of concreteness and relatedness of adjective-noun pairs on free recall, cued recall, and memory integration were studied to focus on dual coding and relational-distinctiveness processing theories as well as task variables that affect integration measures.
Abstract: Extending previous research on the problem, we studied the effects of concreteness and relatedness of adjective-noun pairs on free recall, cued recall, and memory integration. Two experiments varied the attributes in paired associates lists or sentences. Consistent with predictions from dual coding theory and prior results with noun-noun pairs, both experiments showed that the effects of concreteness were strong and independent of relatedness in free recall and cued recall. The generally positive effects of relatedness were absent in the case of free recall of sentences. The two attributes also had independent (additive) effects on integrative memory as measured by conditionalized free recall of pairs. Integration as measured by the increment from free to cued recall occurred consistently only when pairs were high in both concreteness and relatedness. Explanations focused on dual coding and relational-distinctiveness processing theories as well as task variables that affect integration measures. This study further tested alternative hypotheses concerning the effects of concreteness and relational variables on free recall, cued recall, and measures of integrative recall. The theoretical and empirical issues were reviewed in detail by Paivio, Walsh, and Bons (1994). We summarize the pertinent aspects of that background and then present the rationale for the present research. The alternative hypotheses and predictions were based on dual coding theory (e.g., Paivio, 1971, 1991) and Marschark and Hunt's (1989) relational/distinctiveness processing theory. 
Dual coding theory explains positive effects of word concreteness in target tasks primarily in terms of the following empirically supported assumptions: (a) nonverbal images are more likely to be aroused by concrete than abstract words; (b) the memory traces of the activated images are "stronger" than the verbal traces of the words themselves; (c) the image and verbal traces are mnemonically independent and additive; (d) concrete word pairs promote activation of compound images that function as integrated memory traces; and (e) the integrated image can be redintegrated by presentation of one pair member as a retrieval cue, thereby mediating response recall. The independence-additivity assumption accounts for most of the concreteness effect in free recall and some of the effect in cued recall. The imagery integration-redintegration hypothesis accounts for the findings that the concreteness effect is larger in cued recall than in free recall, and that concrete items are especially effective as retrieval cues. The integrative and retrieval functions of compound images define the conceptual peg hypothesis of imagery effects in paired associate learning, which we discuss further after describing Marschark and Hunt's alternative approach.

Marschark and Hunt (1989) proposed that the effects of concreteness on memory arise from relational and distinctive processing of items rather than imagery or dual coding mechanisms. Relational processing entails responding to word pairs or sentences on the basis of inter-item relational information inherent in the items (associative or semantic relations) or activated by experimental procedures (e.g., instructions to relate the items in some way). Distinctive processing entails responding to items on the basis of any information that distinguishes items from each other. Marschark and Hunt reasoned that memory for response words from a list of pairs depends on the activation of both relational and distinctive information at encoding.
Relational information that is reactivated at retrieval delineates a search set of word pairs, and distinctive information then permits discrimination of each target pair and response word from the set. Concreteness-induced imagery, though a possible source of both kinds of information, especially enhances distinctive processing. Therefore, given activation of the appropriate relational information, concrete items should be recalled better than abstract items because the former are distinctive. …

Patent
09 Mar 2000
TL;DR: In this paper, the authors present methods and apparatuses for inserting closed-caption and/or other control data into the vertical blanking interval of video image data stream without the use of special encoding hardware.
Abstract: Methods and apparatuses for inserting closed-caption and/or other control data into the vertical blanking interval of a video image data stream without the use of special encoding hardware.
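The patent abstract above does not spell out the data format, but closed captions carried in the vertical blanking interval conventionally follow the CEA-608 scheme: two bytes per field on line 21, each byte holding 7 data bits with the high bit set for odd parity. The sketch below illustrates that parity convention only; it is an assumption for illustration, not the patented insertion method.

```python
def odd_parity(byte7):
    """Return the 8-bit byte formed from 7 data bits plus an odd-parity MSB.

    CEA-608 line-21 captions transmit each character this way: bit 7 is set
    only when the low 7 bits contain an even number of ones.
    """
    b = byte7 & 0x7F
    if bin(b).count("1") % 2 == 0:
        b |= 0x80
    return b

def caption_pair(c1, c2):
    """Encode a two-character caption payload for one video field."""
    return bytes([odd_parity(ord(c1)), odd_parity(ord(c2))])
```

For example, the ASCII character 'A' (0x41) has two one-bits, so the parity bit is set and it is transmitted as 0xC1, while a space (0x20) already has odd parity and is sent unchanged.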

Journal ArticleDOI
TL;DR: The authors further delineate the relationship between perceptual interference and order memory and show that the positive effects of perceptual interference on item memory can be dissociated from its negative impact on order memory.