Topic
Representation (systemics)
About: Representation (systemics) is a research topic. Over its lifetime, 33,821 publications have been published on this topic, receiving 475,461 citations.
Papers
01 Dec 2006
TL;DR: This article surveys that part of KR that is concerned with the representation of space and time, with particular reference to the use of such representations in geographical information science.
Abstract: Knowledge Representation (KR) originated as a discipline within Artificial Intelligence, and is concerned with the representation of knowledge in symbolic form so that it can be stored and manipulated on a computer. This article surveys that part of KR that is concerned with the representation of space and time, with particular reference to the use of such representations in geographical information science.
89 citations
TL;DR: A neural-network-based representation in which instances from the same class are close to each other while instances from different classes are farther apart, yielding statistically significant improvements over other approaches on three datasets from two different domains.
Abstract: Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classification systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from different classes are further apart, resulting in statistically significant improvement when compared to other approaches on three datasets from two different domains.
89 citations
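The abstract above describes a distance-based embedding for open-set recognition: known classes form tight clusters, and a query far from every cluster is flagged as unknown. A minimal sketch of this idea using nearest-centroid classification with a rejection threshold (the function name, threshold, and toy data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def predict_open_set(embeddings, labels, query, threshold):
    """Nearest-centroid open-set prediction: return the closest known
    class, or -1 ("unknown") if the query is farther than `threshold`
    from every class centroid in the embedding space."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(centroids - query, axis=1)
    best = int(np.argmin(dists))
    return int(classes[best]) if dists[best] <= threshold else -1

# Toy 2-D embedding space: class 0 clusters near the origin,
# class 1 clusters near (5, 5).
emb = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
print(predict_open_set(emb, lab, np.array([0.05, 0.05]), 1.0))   # 0
print(predict_open_set(emb, lab, np.array([10.0, -10.0]), 1.0))  # -1 (unknown)
```

In the paper the embedding itself is learned by a neural network so that this distance structure holds; the sketch only shows the decision rule applied on top of such an embedding.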
TL;DR: A convolutional network is presented that generates images of a previously unseen object from arbitrary viewpoints given a single image of that object, learning an implicit 3D representation of the object class.
Abstract: We present a convolutional network capable of generating images of a previously unseen object from arbitrary viewpoints given a single image of this object. The input to the network is a single image and the desired new viewpoint; the output is a view of the object from this desired viewpoint. The network is trained on renderings of synthetic 3D models. It learns an implicit 3D representation of the object class, which allows it to transfer shape knowledge from training instances to a new object instance. Beside the color image, the network can also generate the depth map of an object from arbitrary viewpoints. This allows us to predict 3D point clouds from a single image, which can be fused into a surface mesh. We experimented with cars and chairs. Even though the network is trained on artificial data, it generalizes well to objects in natural images without any modifications.
89 citations
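The abstract above notes that the predicted per-view depth maps can be lifted to 3D point clouds and fused into a mesh. The lifting step is standard pinhole-camera unprojection; a minimal sketch (the function name and the unit intrinsics are illustrative assumptions, not from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a depth map into a 3D point cloud with a pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth(u, v)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# 2x2 depth map of constant depth 1, unit focal lengths,
# principal point at the image center (0.5, 0.5).
pts = depth_to_point_cloud(np.ones((2, 2)), 1.0, 1.0, 0.5, 0.5)
print(pts.shape)  # (4, 3)
```

Point clouds produced this way from several predicted viewpoints can then be merged and meshed, as the paper describes.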