Topic

Representation (systemics)

About: Representation (systemics) is a research topic. Over the lifetime, 33,821 publications have been published within this topic, receiving 475,461 citations.


Papers
Patent
23 May 2008
TL;DR: A method and apparatus for creating links between a representation and a realization (e.g., text data and corresponding audio data) is provided by combining a time-stamped version of the representation, generated from the realization, with structural information from the representation.
Abstract: A method and apparatus for creating links between a representation (e.g., text data) and a realization (e.g., corresponding audio data) is provided. According to the invention, the realization is structured by combining a time-stamped version of the representation, generated from the realization, with structural information from the representation. In this way, so-called hyperlinks between the representation and the realization are created. These hyperlinks support search operations on the realization data equivalent to those possible on the representation data, enabling improved access to the realization (e.g., via audio databases).

164 citations
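
The patent above links a text representation to its audio realization through time-stamped words. As a rough illustration of that idea (not the patented method itself), the following Python sketch pairs hypothetical recognizer output with word positions in the text and then answers a text-style query with time spans in the audio; every name and data structure here is an assumption made for exposition.

```python
# Illustrative sketch: align time-stamped words (e.g. from a speech recognizer
# run over the audio "realization") with the word positions of the text
# "representation", producing links that map text positions to time spans.

from dataclasses import dataclass

@dataclass
class TimedWord:
    word: str
    start: float  # seconds into the audio
    end: float

def build_links(text_words, timed_words):
    """Pair each word of the representation with its time span in the realization.

    Assumes recognizer output and text share the same word order; a real system
    would need robust alignment (e.g. edit-distance matching).
    """
    links = []
    for position, (text_word, timed) in enumerate(zip(text_words, timed_words)):
        links.append({"position": position, "word": text_word,
                      "start": timed.start, "end": timed.end})
    return links

def search_audio(links, query):
    """Text-style search that returns matching time spans in the audio."""
    query = query.lower()
    return [(link["start"], link["end"]) for link in links
            if link["word"].lower() == query]

if __name__ == "__main__":
    text = "the quick brown fox".split()
    timed = [TimedWord("the", 0.0, 0.2), TimedWord("quick", 0.2, 0.6),
             TimedWord("brown", 0.6, 0.9), TimedWord("fox", 0.9, 1.3)]
    links = build_links(text, timed)
    print(search_audio(links, "brown"))  # -> [(0.6, 0.9)]
```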

Journal ArticleDOI
Wilma Bucci
TL;DR: This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique.
Abstract: Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model: perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code formulation, and applying an investigative methodology derived from experimental cognitive psychology, a new approach to the verification of interpretations is possible. Some constructions of a patient's story may be seen as more accurate than others, by virtue of their linkage to stored perceptual representations in long-term memory. We can demonstrate that such linking has occurred in functional or operational terms, through evaluating the representation of imagistic content in the patient's speech.

163 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A new framework is presented - task-oriented modeling, learning and recognition - which aims at understanding the underlying functions, physics and causality in using objects as “tools”; from this perspective, any object can be viewed as a hammer or a shovel.
Abstract: In this paper, we present a new framework - task-oriented modeling, learning and recognition - which aims at understanding the underlying functions, physics and causality in using objects as “tools”. Given a task, such as cracking a nut or painting a wall, we represent each object, e.g. a hammer or brush, in a generative spatio-temporal representation consisting of four components: i) an affordance basis to be grasped by hand; ii) a functional basis to act on a target object (the nut); iii) the imagined actions with typical motion trajectories; and iv) the underlying physical concepts, e.g. force, pressure, etc. In a learning phase, our algorithm observes only one RGB-D video, in which a rational human picks up one object (i.e. tool) among a number of candidates to accomplish the task. From this example, our algorithm learns the essential physical concepts in the task (e.g. forces in cracking nuts). In an inference phase, our algorithm is given a new set of objects (daily objects or stones) and picks the best choice available, together with the inferred affordance basis, functional basis, imagined human actions (sequence of poses), and the expected physical quantity that it will produce. From this new perspective, any object can be viewed as a hammer or a shovel, and object recognition is not merely memorizing typical appearance examples for each category but reasoning about the physical mechanisms in various tasks to achieve generalization.

163 citations
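
The four-component object representation described above (affordance basis, functional basis, imagined actions, physical concepts) lends itself to a simple data structure. The sketch below is illustrative only and is not the authors' implementation; the field names and the toy task-driven selection rule are assumptions made for exposition.

```python
# Toy container for the four components named in the abstract, plus a toy
# inference step that picks the tool whose expected physical quantities best
# match what a task requires. Nothing here reproduces the paper's generative
# model; it only illustrates the shape of a task-oriented representation.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class TaskOrientedTool:
    name: str
    affordance_basis: Point3D           # where the hand grasps the object
    functional_basis: Point3D           # where the object acts on the target
    imagined_trajectory: List[Point3D]  # imagined tool poses over time
    physical_concepts: Dict[str, float] = field(default_factory=dict)  # e.g. {"force": 35.0}

def pick_best_tool(candidates: List[TaskOrientedTool],
                   required: Dict[str, float]) -> TaskOrientedTool:
    """Choose the candidate whose expected physical quantities are closest to
    the task's requirements (a deliberately simple stand-in for inference)."""
    def mismatch(tool: TaskOrientedTool) -> float:
        return sum(abs(tool.physical_concepts.get(k, 0.0) - v)
                   for k, v in required.items())
    return min(candidates, key=mismatch)

if __name__ == "__main__":
    hammer = TaskOrientedTool("hammer", (0.0, 0.0, 0.3), (0.0, 0.0, 0.0),
                              [(0.0, 0.0, 0.5), (0.0, 0.0, 0.0)], {"force": 35.0})
    brush = TaskOrientedTool("brush", (0.0, 0.0, 0.2), (0.0, 0.0, 0.0),
                             [(0.0, 0.1, 0.1)], {"force": 2.0})
    # Cracking a nut needs a large impact force (illustrative number only).
    print(pick_best_tool([hammer, brush], {"force": 30.0}).name)  # -> hammer
```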

Journal ArticleDOI
TL;DR: A description-based approach is presented that enables a user to encode the structure of a high-level human activity as a formal representation, together with a system that reliably recognizes sequences of complex human activities with a high recognition rate.
Abstract: This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities in terms of simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. At the low level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm are introduced to cope with imperfect lower layers. As a result, the system recognizes human activities including 'fighting' and 'assault', which are high-level activities that previous systems had difficulty recognizing. The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.

163 citations
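
To make the CFG-based representation concrete, here is a toy grammar, not taken from the paper, that encodes a composite "shake hands" activity in terms of simpler gestures and checks whether an observed gesture sequence can be parsed as that activity. It uses NLTK's chart parser for convenience; every production and gesture label is an assumption made for illustration.

```python
# Toy example of representing a composite activity as a context-free grammar
# over recognized gestures, then recognizing the activity by parsing an
# observed gesture sequence.
import nltk

activity_grammar = nltk.CFG.fromstring("""
  SHAKE_HANDS -> APPROACH EXTEND_ARM GRASP SHAKE WITHDRAW
  APPROACH    -> 'walk_toward'
  EXTEND_ARM  -> 'stretch_arm'
  GRASP       -> 'close_hand'
  SHAKE       -> 'move_arm_up_down' SHAKE | 'move_arm_up_down'
  WITHDRAW    -> 'step_back'
""")

parser = nltk.ChartParser(activity_grammar)

# Hypothetical output of the lower, gesture-level layer of a recognizer.
observed = ['walk_toward', 'stretch_arm', 'close_hand',
            'move_arm_up_down', 'move_arm_up_down', 'step_back']

# If at least one parse tree exists, the gesture sequence realizes the activity.
trees = list(parser.parse(observed))
print("recognized as SHAKE_HANDS" if trees else "not recognized")
```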


Performance metrics
No. of papers in the topic in previous years
Year    Papers
2022    25
2021    1,580
2020    1,876
2019    1,935
2018    1,792
2017    1,391