Topic

Representation (systemics)

About: Representation (systemics) is a research topic. Over the lifetime, 33,821 publications have been published within this topic, receiving 475,461 citations.


Papers
Journal ArticleDOI
TL;DR: The problem is to represent this network so as to organize it and describe its major outlines; using an unfolding variant of smallest space analysis, it is solved by a spherical map: the “Sphere of Influence.”
Abstract: This is a study of network representation: The data represent a set of interlocked directorates—specifically, the network in which the boards of major banks are interlocked with the boards of major industrials. The problem is to represent this network so as to organize it and describe its major outlines. Using an unfolding variant of smallest space analysis, the problem is solved by a spherical map: the “Sphere of Influence.” The sectors of the sphere represent similarly-linked corporations, and the relations among the sectors represent the relations among bank-industrial communities.

184 citations
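As an illustration of the general technique named above, the following is a minimal sketch that approximates a spherical map of a toy interlock network using nonmetric multidimensional scaling, a close relative of smallest space analysis. The corporation names, the co-interlock counts, and the use of scikit-learn's MDS are assumptions for illustration only, not the paper's data or its exact unfolding variant.

```python
# Minimal sketch: a "spherical map" of a toy interlock network via
# nonmetric MDS (closely related to smallest space analysis).
# Toy data; NOT the paper's data or its unfolding variant.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical co-interlock counts among 6 corporations
# (entry [i, j] = number of shared board members).
labels = ["BankA", "BankB", "IndX", "IndY", "IndZ", "IndW"]
co = np.array([
    [0, 3, 2, 2, 0, 0],
    [3, 0, 0, 0, 2, 2],
    [2, 0, 0, 1, 0, 0],
    [2, 0, 1, 0, 0, 0],
    [0, 2, 0, 0, 0, 1],
    [0, 2, 0, 0, 1, 0],
], dtype=float)

# Convert shared-director counts to dissimilarities (more overlap -> closer).
dissim = co.max() - co
np.fill_diagonal(dissim, 0.0)

# Nonmetric MDS into 3 dimensions, then project onto the unit sphere
# so each corporation gets a position on a spherical surface.
mds = MDS(n_components=3, metric=False, dissimilarity="precomputed",
          random_state=0)
pts = mds.fit_transform(dissim)
sphere = pts / np.linalg.norm(pts, axis=1, keepdims=True)

for name, p in zip(labels, sphere):
    print(f"{name}: {np.round(p, 2)}")
```

Corporations with many shared directors land in nearby sectors of the sphere, which is the kind of grouping the paper's sectors of the "Sphere of Influence" capture.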

Journal ArticleDOI
TL;DR: This is, to the authors' knowledge, the first implemented system to explore the use of a purely function-based definition of an object category (that is, no explicit geometric or structural model) to recognize 3D objects.
Abstract: An attempt is made to demonstrate the feasibility of defining an object category in terms of the functional properties shared by all objects in the category. This form of representation should allow much greater generality. A complete system has been implemented that takes the boundary surface description of a 3D object as its input and attempts to recognize whether the object belongs to the category 'chair' and, if so, into which subcategory it falls. This is, to the authors' knowledge, the first implemented system to explore the use of a purely function-based definition of an object category (that is, no explicit geometric or structural model) to recognize 3D objects. System competence has been evaluated on a database of over 100 objects, and the results largely agree with human interpretation of the objects.

184 citations
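To make the idea of a purely function-based category definition concrete, here is a minimal sketch: a candidate object passes the 'chair' test if some surface affords sitting, with no geometric chair template involved. The Surface structure, the thresholds, and the single sittability test are illustrative assumptions; the actual system reasons over a richer set of functional requirements derived from the boundary surface description.

```python
# Minimal sketch of function-based category checking: instead of matching a
# geometric chair model, test whether candidate surfaces afford sitting.
# The Surface fields and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Surface:
    height: float    # height above ground, in metres
    area: float      # usable area, in square metres
    tilt_deg: float  # deviation from horizontal, in degrees

def provides_sittable_surface(surfaces):
    """Does some surface afford sitting: roughly horizontal, at seat
    height, and large enough to support a person?"""
    return any(0.35 <= s.height <= 0.55 and
               s.area >= 0.12 and
               s.tilt_deg <= 10.0
               for s in surfaces)

def looks_like_chair(surfaces):
    # A fuller system would also test stability, clearance, and back
    # support to pick the subcategory (stool, straight chair, armchair...).
    return provides_sittable_surface(surfaces)

stool = [Surface(height=0.45, area=0.15, tilt_deg=2.0)]
table = [Surface(height=0.75, area=0.80, tilt_deg=0.5)]
print(looks_like_chair(stool))  # True  (functionally sittable)
print(looks_like_chair(table))  # False (surface too high for sitting)
```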

Journal Article
TL;DR: In this review, a description is offered of the way actions are represented, how these representations are built, and how their content can be accessed by the agent and by other agents.
Abstract: In this review, a description is offered of the way actions are represented, how these representations are built, and how their content can be accessed by the agent and by other agents. Such a description will appear critical for understanding how an action is attributed to its proper origin, or, in other words, how a subject can make a conscious judgement about who the agent of that action is (an agency judgement). This question is central to the problem of self-consciousness: Action is one of the main channels used for communication between individuals, so that determining the agent of an action contributes to differentiating the self from others.

184 citations

23 Oct 2018
TL;DR: In this article, a self-supervised dense object representation for visual understanding and manipulation is proposed, which can be trained in approximately 20 minutes for a wide variety of previously unseen and potentially non-rigid objects.
Abstract: What is the right object representation for manipulation? We would like robots to visually perceive scenes and learn an understanding of the objects in them that (i) is task-agnostic and can be used as a building block for a variety of manipulation tasks, (ii) is generally applicable to both rigid and non-rigid objects, (iii) takes advantage of the strong priors provided by 3D vision, and (iv) is entirely learned from self-supervision. This is hard to achieve with previous methods: much recent work in grasping does not extend to grasping specific objects or other tasks, whereas task-specific learning may require many trials to generalize well across object configurations or other tasks. In this paper we present Dense Object Nets, which build on recent developments in self-supervised dense descriptor learning, as a consistent object representation for visual understanding and manipulation. We demonstrate they can be trained quickly (approximately 20 minutes) for a wide variety of previously unseen and potentially non-rigid objects. We additionally present novel contributions to enable multi-object descriptor learning, and show that by modifying our training procedure, we can either acquire descriptors which generalize across classes of objects, or descriptors that are distinct for each object instance. Finally, we demonstrate the novel application of learned dense descriptors to robotic manipulation. We demonstrate grasping of specific points on an object across potentially deformed object configurations, and demonstrate using class general descriptors to transfer specific grasps across objects in a class.

184 citations
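A minimal sketch of the core mechanism that makes dense descriptors useful for grasp transfer: a pixel chosen on an object in one image is re-located in another image by nearest-neighbour search in descriptor space. The random arrays below stand in for the output of a trained descriptor network, and the pixel coordinates are arbitrary; this is not the Dense Object Nets training or evaluation code.

```python
# Minimal sketch of grasp-point transfer with dense per-pixel descriptors:
# the pixel selected in image A is re-found in image B by nearest-neighbour
# search in descriptor space. Random arrays stand in for a trained network.
import numpy as np

H, W, D = 60, 80, 3                       # descriptor image size and dimension
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(H, W, D))       # stand-in for f(image_a)
desc_b = rng.normal(size=(H, W, D))       # stand-in for f(image_b)

def find_correspondence(desc_src, uv, desc_dst):
    """Return the (u, v) pixel in desc_dst whose descriptor is closest
    to the descriptor at pixel uv of desc_src."""
    query = desc_src[uv[1], uv[0]]                    # (D,)
    dists = np.linalg.norm(desc_dst - query, axis=2)  # (H, W)
    v, u = np.unravel_index(np.argmin(dists), dists.shape)
    return int(u), int(v)

grasp_pixel_a = (40, 30)                  # point picked on the object in image A
grasp_pixel_b = find_correspondence(desc_a, grasp_pixel_a, desc_b)
print("grasp point transferred to image B:", grasp_pixel_b)
```

Because the descriptors are trained to be consistent across viewpoints and deformations, the same lookup works when image B shows the object in a new configuration, which is what enables grasping a specific point across deformed configurations.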


Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    25
2021    1,580
2020    1,876
2019    1,935
2018    1,792
2017    1,391