scispace - formally typeset

Object (computer science)

About: Object (computer science) is a research topic. Over its lifetime, 106,024 publications have been published within this topic, receiving 1,360,115 citations. The topic is also known as: obj & Rq.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors used multivoxel pattern analysis to test whether activity patterns in ATLs carry information about conceptual object properties, such as where and how an object is used.
Abstract: Interaction with everyday objects requires the representation of conceptual object properties, such as where and how an object is used. What are the neural mechanisms that support this knowledge? While research on semantic dementia has provided evidence for a critical role of the anterior temporal lobes (ATLs) in object knowledge, fMRI studies using univariate analysis have primarily implicated regions outside the ATL. In the present human fMRI study we used multivoxel pattern analysis to test whether activity patterns in ATLs carry information about conceptual object properties. Participants viewed objects that differed on two dimensions: where the object is typically found (in the kitchen or the garage) and how the object is commonly used (with a rotate or a squeeze movement). Anatomical region-of-interest analyses covering the ventral visual stream revealed that information about the location and action dimensions increased from posterior to anterior ventral temporal cortex, peaking in the temporal pole. Whole-brain multivoxel searchlight analysis confirmed these results, revealing highly significant and regionally specific information about the location and action dimensions in the anterior temporal lobes bilaterally. In contrast to conceptual object properties, perceptual and low-level visual properties of the objects were reflected in activity patterns in posterior lateral occipitotemporal cortex and occipital cortex, respectively. These results provide fMRI evidence that object representations in the anterior temporal lobes are abstracted away from perceptual properties, categorizing objects in semantically meaningful groups to support conceptual object knowledge.
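The core of the multivoxel pattern analysis used above is training a classifier on distributed activity patterns and testing whether a stimulus dimension can be decoded above chance. A minimal sketch on purely synthetic data (invented trial counts, voxel counts, and condition names; not the authors' actual pipeline), using leave-one-out nearest-centroid decoding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel patterns": 20 trials x 50 voxels per condition.
# The two hypothetical conditions stand in for e.g. kitchen vs. garage objects.
n_trials, n_voxels = 20, 50
kitchen = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + rng.normal(0, 0.5, n_voxels)
garage = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + rng.normal(0, 0.5, n_voxels)

X = np.vstack([kitchen, garage])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)   # class centroids from the rest
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(f"decoding accuracy: {acc:.2f}")  # above chance (0.5) when patterns differ
```

In the study itself this kind of decoding is run per anatomical region (or per searchlight sphere), so that above-chance accuracy localizes where the conceptual dimension is represented.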

218 citations

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This work provides a way to incorporate structural information in the popular random forest framework for performing low-level, unary classification and provides two possibilities for integrating the structured output predictions into concise, semantic labellings.
Abstract: In this paper we propose a simple and effective way to integrate structural information in random forests for semantic image labelling. By structural information we refer to the inherently available, topological distribution of object classes in a given image. Different object class labels will not be randomly distributed over an image but usually form coherently labelled regions. In this work we provide a way to incorporate this topological information in the popular random forest framework for performing low-level, unary classification. Our paper has several contributions: First, we show how random forests can be augmented with structured label information. In the second part, we introduce a novel data splitting function that exploits the joint distributions observed in the structured label space for learning typical label transitions between object classes. Finally, we provide two possibilities for integrating the structured output predictions into concise, semantic labellings. In our experiments on the challenging MSRC and CamVid databases, we compare our method to standard random forest and conditional random field classification results.
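The structured-splitting idea can be sketched as an information-gain split computed over joint label distributions (here, label transitions between neighbouring pixels) rather than over single pixel labels. A toy illustration with invented data and class names; this is not the paper's actual split function:

```python
from collections import Counter
import math

# Toy "structured labels": each sample is a feature value plus a pair of
# neighbouring pixel labels (a label transition), standing in for the
# structured label patches used in the paper. All values are invented.
samples = [
    (0.1, ("sky", "sky")), (0.2, ("sky", "tree")),
    (0.3, ("sky", "sky")), (0.6, ("road", "car")),
    (0.7, ("road", "road")), (0.9, ("car", "car")),
]

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(samples):
    """Pick the feature threshold with the highest information gain
    measured over the joint (transition) label distribution."""
    labels = [lab for _, lab in samples]
    base = entropy(labels)
    best = None
    for thr in sorted({f for f, _ in samples}):
        left = [lab for f, lab in samples if f <= thr]
        right = [lab for f, lab in samples if f > thr]
        if not left or not right:
            continue
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(samples)
        if best is None or gain > best[0]:
            best = (gain, thr)
    return best

gain, thr = best_split(samples)  # separates sky-ish from road/car transitions
```

Because the labels are pairs rather than single classes, the split is rewarded for grouping typical label transitions together, which is the structural signal the paper exploits.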

218 citations

Proceedings Article
24 Aug 1981
TL;DR: A viewpoint-independent description of the shape of an object can be generated by imposing a canonical frame of reference on the object and describing the spatial dispositions of the parts relative to this object-based frame.
Abstract: A viewpoint-independent description of the shape of an object can be generated by imposing a canonical frame of reference on the object and describing the spatial dispositions of the parts relative to this object-based frame. When a familiar object is in an unusual orientation, the deciding factor in the choice of the canonical object-based frame may be the fact that relative to this frame the object has a familiar shape description. This may suggest that we first hypothesise an object-based frame and then test the resultant shape description for familiarity. However, it is possible to organise the interactions between units in a parallel network so that the pattern of activity in the network simultaneously converges on a representation of the shape and a representation of the object-based frame of reference. The connections in the network are determined by the constraints inherent in the image formation process.
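The settling behaviour described above can be sketched with a tiny relaxation network in which mutually consistent frame and shape units excite each other while rival units within a group inhibit each other. All unit names, weights, and inputs here are invented for illustration and are far simpler than the networks the paper discusses:

```python
import numpy as np

# Consistent pairs (frame 0, shape 0) and (frame 1, shape 1) excite each
# other; units within a group inhibit each other (winner-take-all pattern).
W = np.array([
    #  f0    f1    s0    s1
    [ 0.0, -1.0,  1.0,  0.0],  # frame 0
    [-1.0,  0.0,  0.0,  1.0],  # frame 1
    [ 1.0,  0.0,  0.0, -1.0],  # shape 0
    [ 0.0,  1.0, -1.0,  0.0],  # shape 1
])

def settle(external, steps=50, rate=0.2):
    """Iteratively update unit activities until the network settles."""
    a = np.zeros(4)
    for _ in range(steps):
        a = (1 - rate) * a + rate * np.clip(external + W @ a, 0.0, 1.0)
    return a

# Weak evidence for frame 0 pulls in the matching shape interpretation,
# so the network converges on a consistent frame + shape pair at once.
a = settle(np.array([0.3, 0.0, 0.1, 0.1]))
```

The point of the sketch is the simultaneity: neither interpretation is computed first; the frame and shape hypotheses reinforce each other until one consistent pair dominates.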

218 citations

Proceedings ArticleDOI
01 Jun 2021
TL;DR: Pointformer, as proposed in this paper, is a Transformer backbone for 3D point clouds that learns features effectively: a Local Transformer module models interactions among points in a local region, learning context-dependent region features at an object level.
Abstract: Feature learning for 3D object detection from point clouds is very challenging due to the irregularity of 3D point cloud data. In this paper, we propose Pointformer, a Transformer backbone designed for 3D point clouds to learn features effectively. Specifically, a Local Transformer module is employed to model interactions among points in a local region, which learns context-dependent region features at an object level. A Global Transformer is designed to learn context-aware representations at the scene level. To further capture the dependencies among multi-scale representations, we propose Local-Global Transformer to integrate local features with global features from higher resolution. In addition, we introduce an efficient coordinate refinement module to shift down-sampled points closer to object centroids, which improves object proposal generation. We use Pointformer as the backbone for state-of-the-art object detection models and demonstrate significant improvements over original models on both indoor and outdoor datasets.
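The Local Transformer idea, attention restricted to each point's spatial neighbourhood, can be sketched as follows. The random projection weights stand in for learned parameters, and everything here (sizes, single head, no positional encoding) is a simplification rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_self_attention(points, feats, k=4):
    """Toy single-head attention within each point's k-nearest
    neighbourhood: an illustrative stand-in for a Local Transformer."""
    n, d = feats.shape
    # Random projections; a real model would learn these weights.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    out = np.empty_like(feats)
    for i in range(n):
        dist = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(dist)[:k]         # local region around point i
        q = feats[i] @ Wq                   # query from the centre point
        K = feats[nbrs] @ Wk                # keys/values from neighbours
        V = feats[nbrs] @ Wv
        attn = softmax(K @ q / np.sqrt(d))  # attention over the region
        out[i] = attn @ V                   # context-dependent feature
    return out

points = rng.normal(size=(32, 3))  # toy 3D point cloud
feats = rng.normal(size=(32, 8))   # per-point input features
new_feats = local_self_attention(points, feats)
```

Stacking such local modules with scene-level (global) attention, as the abstract describes, lets each point's feature reflect both its neighbourhood and the wider scene context.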

218 citations

Journal ArticleDOI
TL;DR: The process of finding the correspondence is formalized by defining a general relational distance measure that computes a numeric distance between any two relational descriptions: a model and an image description, two models, or two image descriptions.
Abstract: Relational models are frequently used in high-level computer vision. Finding a correspondence between a relational model and an image description is an important operation in the analysis of scenes. In this paper the process of finding the correspondence is formalized by defining a general relational distance measure that computes a numeric distance between any two relational descriptions: a model and an image description, two models, or two image descriptions. The distance measure is proved to be a metric, and is illustrated with examples of distance between object models. A variant measure used in our past studies is shown not to be a metric.
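In the spirit of the paper's measure, a brute-force toy version counts the fewest relation mismatches over all one-to-one part mappings between two relational descriptions. This sketch only handles equal-sized descriptions with a single binary relation, and the part names are invented; the paper's measure is far more general:

```python
from itertools import permutations

def relational_distance(parts_a, rel_a, parts_b, rel_b):
    """Fewest relation mismatches over all one-to-one mappings between
    parts (brute force; equal-sized toy descriptions only)."""
    assert len(parts_a) == len(parts_b)
    best = None
    for perm in permutations(parts_b):
        m = dict(zip(parts_a, perm))
        mapped = {(m[x], m[y]) for x, y in rel_a}
        errors = len(mapped ^ rel_b)  # relations in one description, not both
        if best is None or errors < best:
            best = errors
    return best

# Two toy object models: parts connected by an "adjacent" relation.
cup, cup_rel = ["body", "handle"], {("body", "handle")}
mug, mug_rel = ["cyl", "loop"], {("cyl", "loop")}
box, box_rel = ["top", "bottom"], {("top", "bottom"), ("bottom", "top")}

d_same = relational_distance(cup, cup_rel, mug, mug_rel)  # 0: same structure
d_diff = relational_distance(cup, cup_rel, box, box_rel)  # 1: one extra relation
```

Minimizing over mappings is what makes the measure symmetric and (as the paper proves for its general form) a metric; the brute-force search over permutations is exponential, so it is only practical for tiny descriptions like these.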

218 citations


Network Information
Related Topics (5)
Query optimization: 17.6K papers, 474.4K citations (84% related)
Programming paradigm: 18.7K papers, 467.9K citations (84% related)
Software development: 73.8K papers, 1.4M citations (83% related)
Compiler: 26.3K papers, 578.5K citations (83% related)
Software system: 50.7K papers, 935K citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year  Papers
2022      38
2021   3,087
2020   5,900
2019   6,540
2018   5,940
2017   5,046