Topic
Representation (systemics)
About: Representation (systemics) is a research topic. Over its lifetime, 33,821 publications have been published within this topic, receiving 475,461 citations.
Papers published on a yearly basis
Papers
01 Jan 1990
TL;DR: A powerful conception of representation and computation — drawn from recent work in the neurosciences — is here outlined and its virtues are explained and explored in three important areas: sensory representation, sensorimotor coordination, and microphysical implementation.
Abstract: A powerful conception of representation and computation — drawn from recent work in the neurosciences — is here outlined. Its virtues are explained and explored in three important areas: sensory representation, sensorimotor coordination, and microphysical implementation. It constitutes a highly general conception of cognitive activity that has significant reductive potential.
143 citations
25 Jul 2007
TL;DR: In this paper, the authors presented a method to provide a user with an interactive virtual representation of a geographic location, expressed to the user through a three-dimensional representation, a two-dimensional representation, or a combination thereof, generated by a system controlled by an operator.
Abstract: A method of the present invention provides a user with an interactive virtual representation of a geographic location, expressed to the user through a three-dimensional representation, a two-dimensional representation, or a combination thereof, generated by a system controlled by an operator. The method creates an interactive virtual tour of the geographic location by correlating a two-dimensional map with a three-dimensional representation of an interactive model, allowing the user to synchronously navigate through the two-dimensional map and the interactive model in different directions.
142 citations
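The map-model correlation the patent describes can be illustrated with a minimal sketch. The coordinate convention, function names, and parameters below are assumptions for illustration; the patent does not specify an implementation:

```python
from dataclasses import dataclass


@dataclass
class MapPose:
    x: float        # map easting (metres)
    y: float        # map northing (metres)
    heading: float  # degrees clockwise from north


def map_to_model(pose: MapPose, origin=(0.0, 0.0), metres_per_unit=1.0):
    """Correlate a 2-D map position with a camera pose in the 3-D model,
    so the two views can be navigated synchronously."""
    u = (pose.x - origin[0]) / metres_per_unit
    v = (pose.y - origin[1]) / metres_per_unit
    # Ground-plane position in the model (y-up convention assumed here).
    return {"position": (u, 0.0, v), "yaw_deg": pose.heading}
```

Moving the marker on the 2-D map would call this mapping to reposition the 3-D camera; the inverse mapping would keep the map marker in sync when the user navigates the model.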
14 Jun 2020
TL;DR: A deep network based on PointNet++ is developed that predicts ANCSH from a single depth point cloud, including part segmentation, normalized coordinates, and joint parameters in the canonical object space; the benefits of leveraging the canonicalized joints are demonstrated.
Abstract: This paper addresses the task of category-level pose estimation for articulated objects from a single depth image. We present a novel category-level approach that correctly accommodates object instances previously unseen during training. We introduce Articulation-aware Normalized Coordinate Space Hierarchy (ANCSH) – a canonical representation for different articulated objects in a given category. As the key to achieve intra-category generalization, the representation constructs a canonical object space as well as a set of canonical part spaces. The canonical object space normalizes the object orientation, scales and articulations (e.g. joint parameters and states) while each canonical part space further normalizes its part pose and scale. We develop a deep network based on PointNet++ that predicts ANCSH from a single depth point cloud, including part segmentation, normalized coordinates, and joint parameters in the canonical object space. By leveraging the canonicalized joints, we demonstrate: 1) improved performance in part pose and scale estimations using the induced kinematic constraints from joints; 2) high accuracy for joint parameter estimation in camera space.
142 citations
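The canonical-space normalization at the core of ANCSH can be sketched minimally: map a point cloud to a zero-centered frame scaled by its bounding-box diagonal. The unit-diagonal convention and function name below are assumptions, not the paper's exact formulation:

```python
import numpy as np


def normalize_to_canonical(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud into a canonical space: zero-centered
    and scaled so its tight bounding-box diagonal has unit length."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    scale = np.linalg.norm(hi - lo)  # bounding-box diagonal length
    return (points - center) / scale


# An ANCSH-style pipeline applies such a normalization once per object
# (canonical object space) and again per segmented part (canonical part
# spaces), which is what lets one network generalize across instances.
```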
TL;DR: A formal theory of robot perception as a form of abduction pins down the process whereby low-level sensor data is transformed into a symbolic representation of the external world, drawing together aspects such as incompleteness, top-down information flow, active perception, attention, and sensor fusion in a unifying framework.
142 citations
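The abductive view of perception — choosing the hypothesis that best explains the sensor data given background knowledge — can be caricatured in a few lines. This toy uses set membership for entailment and hypothesis size as the preference criterion, both illustrative assumptions rather than the paper's formal theory:

```python
def abduce(observations, hypotheses, background, entails):
    """Return the smallest hypothesis that, together with background
    knowledge, explains every observation; None if nothing does."""
    explaining = [h for h in hypotheses
                  if all(entails(background | h, obs) for obs in observations)]
    return min(explaining, key=len) if explaining else None
```

In the paper's framework the entailment relation would come from a logic accommodating incompleteness, attention, and sensor fusion, rather than this toy set check.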
26 Apr 2018
TL;DR: Results show that the proposed reinforcement learning method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance.
Abstract: Representation learning is a fundamental problem in natural language processing. This paper studies how to learn a structured representation for text classification. Unlike most existing representation models that either use no structure or rely on pre-specified structures, we propose a reinforcement learning (RL) method to learn sentence representation by discovering optimized structures automatically. We demonstrate two attempts to build structured representation: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). ID-LSTM selects only important, task-relevant words, and HS-LSTM discovers phrase structures in a sentence. Structure discovery in the two representation models is formulated as a sequential decision problem: current decision of structure discovery affects following decisions, which can be addressed by policy gradient RL. Results show that our method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance.
142 citations
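The ID-LSTM idea — treating word selection as a sequence of keep/drop decisions trained with policy gradient — can be sketched in simplified form. The independent Bernoulli draws and the crude REINFORCE update below are illustrative assumptions; the paper conditions each decision on an LSTM state and uses proper log-probability gradients:

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_keep_drop(probs: np.ndarray) -> np.ndarray:
    """Sample a binary keep/drop action per token (ID-LSTM-style
    structure decisions, simplified to independent draws)."""
    return (rng.random(len(probs)) < probs).astype(int)


def reinforce_step(probs, actions, reward, lr=0.1):
    """One simplified REINFORCE update: nudge each keep-probability
    toward its sampled action, scaled by the downstream task reward."""
    return np.clip(probs + lr * reward * (actions - probs), 1e-3, 1 - 1e-3)
```

A positive classification reward reinforces the sampled keep/drop pattern, so words whose retention helps the classifier become more likely to be kept, without any structure annotations.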