Journal ArticleDOI

Logics based on qualitative descriptors for scene understanding

05 Aug 2015-Neurocomputing (Elsevier)-Vol. 161, pp 3-16
TL;DR: An approach for scene understanding based on qualitative descriptors, domain knowledge and logics is proposed, and promising results were obtained.
Abstract: An approach for scene understanding based on qualitative descriptors, domain knowledge and logics is proposed in this paper. Qualitative descriptors, qualitative models of shape, colour, topology and location, are used for describing any object in the scene. Two kinds of domain knowledge are provided: (i) categorizations of objects according to their qualitative descriptors, and (ii) semantics describing the affordances, mobility and other functional properties of target objects. First-order logics are obtained for reasoning and scene understanding. Tests were carried out in the Interact@Cartesium scenario and promising results were obtained.
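
As a rough illustration of the approach, the sketch below pairs qualitative descriptors with categorization rules and affordance semantics. All object names, descriptor values, rules and affordances are invented for the example; the paper's actual reasoning runs over first-order logics and a richer ontology, not Python.

```python
# Minimal sketch (invented names/rules): qualitative descriptors plus
# categorization and affordance knowledge, in the spirit of the paper.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    shape: str      # qualitative shape, e.g. "rectangular", "round"
    colour: str     # qualitative colour name, e.g. "brown", "white"
    topology: str   # qualitative relation to a reference object, e.g. "disjoint"
    location: str   # qualitative location, e.g. "on_floor", "on_wall"

# (i) categorization of objects by their qualitative descriptors
def categorize(obj: SceneObject) -> str:
    if obj.shape == "rectangular" and obj.location == "on_floor":
        return "furniture"
    if obj.shape == "round" and obj.location == "on_wall":
        return "clock"
    return "unknown"

# (ii) functional semantics: affordances attached to each category
AFFORDANCES = {
    "furniture": {"sit_on", "place_items_on"},
    "clock": {"read_time"},
}

table = SceneObject("table1", "rectangular", "brown", "disjoint", "on_floor")
category = categorize(table)
print(category, sorted(AFFORDANCES.get(category, set())))
# -> furniture ['place_items_on', 'sit_on']
```
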
Citations
01 Jan 2016
What Is Mathematics? An Elementary Approach to Ideas and Methods

80 citations

Journal ArticleDOI
TL;DR: Multi-feature correspondence is used to define similarity between objects in an everyday object domain, enabling the cognitive system OROC to perform creative replacement of objects and creative object composition inside a Creative Cognitive framework (CreaCogs).
Abstract: In creative problem solving, humans perform object replacement and object composition to improvise tools in order to carry out tasks in everyday situations. In this paper, an approach to perform Object Replacement and Object Composition (OROC) inside a Creative Cognitive framework (CreaCogs) is proposed. Multi-feature correspondence is used to define similarity between objects in an everyday object domain. This enables the cognitive system OROC to perform creative replacement of objects and creative object composition. The generative properties of OROC are analysed and proof-of-concept experiments with OROC are reported. An evaluation of the results is carried out by human judges and compared to human performance in the Alternative Uses Test.
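
A minimal sketch of the multi-feature correspondence idea, assuming a weighted agreement score over qualitative feature spaces; the feature names, values and weights below are invented and do not reproduce OROC's actual similarity measure.

```python
# Illustrative sketch (invented features/weights): similarity of two
# objects as a weighted match over qualitative feature spaces.

def similarity(a: dict, b: dict, weights: dict) -> float:
    """Weighted fraction of feature spaces on which a and b agree."""
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if a.get(f) == b.get(f))
    return score / total

mug   = {"material": "ceramic", "colour": "white", "size": "small", "shape": "cylindrical"}
glass = {"material": "glass",   "colour": "clear", "size": "small", "shape": "cylindrical"}

w = {"material": 1.0, "colour": 0.5, "size": 2.0, "shape": 2.0}
print(similarity(mug, glass, w))  # ~0.73: size and shape match -> plausible replacement
```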

56 citations


Cites background from "Logics based on qualitative descrip..."

  • ...base (KB), fs_material refers to the feature space of material in the KB, fs_colour refers to the feature space of colours in the KB (i.e. qualitative colour descriptor by Falomir et al. (2015a)), fs_size refers to the feature space of height, depth and width of the objects in the KB, f...

  • ...Learning may also be applied to customize the knowledge of the system to adapt to a group of users (i.e. like it was done for colours by Sanz et al. (2015)) or to automatically learn categories (i.e. such as style of paintings Falomir et al. (2015b))....

  • ...A gradation of colour names could be obtained based on (i) interval distances (Falomir et al., 2013b) or (ii) in a conceptual neighbourhood across the hue, saturation and lightness property (Falomir et al., 2015a)....

01 Jan 2019
TL;DR: The paper addresses the problem of analysing structural plot aspects of both fiction and non-fiction works by proposing an approach based on ontological modeling, which can be used to create new tools facilitating detailed and multi-faceted analysis of literary works.
Abstract: The paper addresses the problem of analysing structural plot aspects of both fiction and non-fiction works. For this, an approach based on ontological modeling is proposed. The requirements and the ontology for plot analysis and construction are developed. The structure of a plot is represented by contextual graphs containing scene and action nodes. The meta-information describing a specific scene is formalized using a contextual ontology, a subset of the common ontology of discourse. The proposed approach can be used to create new tools facilitating detailed and multi-faceted analysis of literary works.
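
A minimal sketch of a contextual plot graph with scene and action nodes, using networkx; the node names, attributes and context keys are invented, and the paper's ontology is far richer than this toy structure.

```python
# Illustrative sketch (invented nodes/attributes): a plot as a contextual
# graph whose nodes are scenes and actions.

import networkx as nx

plot = nx.DiGraph()
plot.add_node("scene1", kind="scene", context={"place": "village", "time": "dawn"})
plot.add_node("act1", kind="action", actor="hero", verb="departs")
plot.add_node("scene2", kind="scene", context={"place": "forest", "time": "noon"})

plot.add_edge("scene1", "act1")   # the action occurs within scene1
plot.add_edge("act1", "scene2")   # and leads to scene2

# List the scenes in plot order
print([n for n in nx.topological_sort(plot) if plot.nodes[n]["kind"] == "scene"])
# -> ['scene1', 'scene2']
```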

44 citations


Cites background from "Logics based on qualitative descrip..."

  • ...Thus [11] proposes to build a scene model based on simple qualitative descriptors (shape, color, location, topology) and use it for reasoning in home automation....

Journal ArticleDOI
TL;DR: The problem is tackled by developing a rule-guided classification approach that exploits data mining techniques of Association Classification to extract descriptive (qualitative) rules of specific geographic features and develops a recommendation system able to guide participants to the most appropriate classification.
Abstract: During the last decade, web technologies and location-sensing devices have evolved, generating a form of crowdsourcing known as Volunteered Geographic Information (VGI). VGI acts as a platform for spatial data collection, in particular when a group of public participants is involved in collaborative mapping activities: they work together to collect, share, and use information about geographic features. VGI exploits participants' local knowledge to produce rich data sources. However, the resulting data inherits problematic classification. In VGI projects, the challenges of data classification are due to the following: (i) data is prone to subjective classification, (ii) most projects rely on remote contributions and flexible contribution mechanisms, and (iii) spatial data is uncertain and geographic features lack strict definitions. These factors lead to various forms of problematic classification: inconsistent, incomplete, and imprecise. This research addresses classification appropriateness. Whether the classification of an entity is appropriate or inappropriate is related to quantitative and/or qualitative observations, and small differences between observations may not be recognizable, particularly for non-expert participants. Hence, in this paper, the problem is tackled by developing a rule-guided classification approach. This approach exploits the data mining technique of Association Classification (AC) to extract descriptive (qualitative) rules for specific geographic features. The rules are extracted by investigating qualitative topological relations between target features and their context. The extracted rules are then used to develop a recommendation system able to guide participants to the most appropriate classification. The approach proposes two scenarios for guiding participants towards enhancing the quality of data classification. An empirical study is conducted to investigate the classification of grass-related features such as forest, garden, park, and meadow. The findings of this study indicate the feasibility of the proposed approach.
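
A minimal sketch of the Association Classification step, assuming the mlxtend implementations of apriori and association_rules; the topological relations, classes and toy data below are invented, not the paper's actual rule set.

```python
# Illustrative sketch (invented data): association rules over qualitative
# topological relations, keeping only rules that predict a feature class.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per mapped feature; boolean columns for relations and classes.
data = pd.DataFrame([
    {"touches_trees": True,  "inside_urban": False, "class=forest": True,  "class=park": False},
    {"touches_trees": True,  "inside_urban": False, "class=forest": True,  "class=park": False},
    {"touches_trees": True,  "inside_urban": True,  "class=forest": False, "class=park": True},
    {"touches_trees": False, "inside_urban": True,  "class=forest": False, "class=park": True},
])

frequent = apriori(data, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.9)

# Rules with a class consequent drive the classification recommendations.
class_rules = rules[rules["consequents"].apply(
    lambda c: any(str(i).startswith("class=") for i in c))]
print(class_rules[["antecedents", "consequents", "confidence"]])
# e.g. (inside_urban) -> (class=park), confidence 1.0
```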

27 citations

Journal ArticleDOI
TL;DR: A novel hybrid model that combines data preprocessing technology, individual forecasting algorithms and weight determination theory is presented to obtain higher accuracy and forecasting ability; the established model not only has obvious advantages over individual models, but can also be applied as an available technology for electrical system programming.
Abstract: Power load forecasting has a significant influence on improving the operational efficiency and economic benefits of the power grid system. Aiming to improve forecast performance, a substantial number of load forecasting models have been proposed. However, these models disregard the limits of individual prediction models and the necessity of data preprocessing, resulting in poor prediction accuracy. In this article, a novel hybrid model that combines data preprocessing technology, individual forecasting algorithms and weight determination theory is presented to obtain higher accuracy and forecasting ability. In this model, an effective data preprocessing method named SSA is adopted to extract the load data characteristics and further improve prediction performance. In addition, a combined forecasting mechanism composed of BP, SVM, GRNN and ARIMA is established using weight determination theory, which exceeds the limits of individual prediction models and comparatively improves prediction accuracy. Combining linear and nonlinear models in this way further exploits the advantages of both kinds of model for forecasting power load more effectively. To assess the validity of the combined model, four datasets of 30-minute power load from Australia are selected for the study. The experimental results show that the established model not only has obvious advantages over individual models, but can also be applied as an available technology for electrical system programming.
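
A minimal sketch of the combination step, assuming inverse-MSE weighting of the individual forecasts; the paper's actual weight determination theory may differ, and the arrays below are random stand-ins for the preprocessed load series and the BP/SVM/GRNN/ARIMA outputs.

```python
# Illustrative sketch (stand-in data): combine four model forecasts with
# weights inversely proportional to each model's in-sample MSE.

import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(1000.0, 50.0, size=48)           # one day of 30-min load
forecasts = actual + rng.normal(0.0, [[20], [35], [25], [40]], size=(4, 48))

mse = ((forecasts - actual) ** 2).mean(axis=1)       # per-model error
weights = (1.0 / mse) / (1.0 / mse).sum()            # inverse-MSE weights

combined = weights @ forecasts                       # weighted combination
print("per-model MSE:", mse.round(1))
print("combined MSE: ", ((combined - actual) ** 2).mean().round(1))
```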

24 citations

References
Journal ArticleDOI
TL;DR: A novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features), which approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Abstract: This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision.
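
A minimal sketch of SURF-based matching with OpenCV; it assumes an opencv-contrib-python build with the nonfree xfeatures2d module enabled, and the file names are placeholders.

```python
# Illustrative sketch: SURF interest points and descriptor matching with
# OpenCV (requires a contrib build with nonfree modules; placeholder files).

import cv2

img1 = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)   # interest points + descriptors
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.3f}")
```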

12,449 citations


"Logics based on qualitative descrip..." refers methods in this paper

  • ...In order to detect a ‘target’ object in an image, the QIDL approach uses the Speeded-Up Robust Features (SURF) invariant descriptor and detector [60] which was demonstrated to be the fastest detector in the literature and consists of: (i) selecting ‘interest points’ at distinctive locations in the images, which can be found at different viewing conditions; (ii) representing...

  • ...The object recognition method applied by QIDL which combines qualitative shape description and SURF invariant descriptor and detector [60] is not perfect, as the tests showed, because it is dependent on the point of view and correct segmentation....

Journal ArticleDOI
TL;DR: A new hypothesis about the role of focused attention is proposed, which offers a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.
Abstract: A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.

11,452 citations


"Logics based on qualitative descrip..." refers result in this paper

  • ...This is in concordance with psychological studies of cognitive attention [52, 53, 54, 55]....

Journal ArticleDOI
TL;DR: The two basic phenomena that define the problem of visual attention, limited capacity for processing information and selectivity, the ability to filter out unwanted information, are illustrated in a simple example.
Abstract: The two basic phenomena that define the problem of visual attention can be illustrated in a simple example. Consider the arrays shown in each panel of Figure 1. In a typical experiment, before the arrays were presented, subjects would be asked to report letters appearing in one color (targets, here black letters), and to disregard letters in the other color (nontargets, here white letters). The array would then be briefly flashed, and the subjects, without any opportunity for eye movements, would give their report. The display mimics our usual cluttered visual environment: it contains one or more objects that are relevant to current behavior, along with others that are irrelevant. The first basic phenomenon is limited capacity for processing information. At any given time, only a small amount of the information available on the retina can be processed and used in the control of behavior. Subjectively, giving attention to any one target leaves less available for others. In Figure 1, the probability of reporting the target letter N is much lower with two accompanying targets (Figure 1a) than with none (Figure 1b). The second basic phenomenon is selectivity, the ability to filter out unwanted information. Subjectively, one is aware of attended stimuli and largely unaware of unattended ones. Correspondingly, accuracy in identifying an attended stimulus may be independent of the number of nontargets in a display (Figure 1a vs 1c) (see Bundesen 1990, Duncan 1980).

7,642 citations


"Logics based on qualitative descrip..." refers result in this paper

  • ...This is in concordance with psychological studies of cognitive attention [52, 53, 54, 55]....

Journal ArticleDOI
TL;DR: The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.
Abstract: In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, which we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected close together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs us about its probable semantic category.
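
A minimal GIST-like sketch: Gabor filter energies pooled over a coarse spatial grid, a common simplification of holistic spatial-envelope features; the scales, orientations and grid size below are arbitrary choices, not the paper's parameters.

```python
# Illustrative sketch (arbitrary parameters): a holistic scene descriptor
# from Gabor filter energies averaged over a 4x4 spatial grid.

import numpy as np
from skimage import data
from skimage.filters import gabor

image = data.camera().astype(float)          # any grayscale scene image
grid = 4                                      # 4x4 spatial pooling grid
h, w = image.shape
features = []
for frequency in (0.1, 0.25):                 # coarse and fine scales
    for theta in np.arange(4) * np.pi / 4:    # 4 orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        energy = np.hypot(real, imag)
        for i in range(grid):                 # average energy per grid cell
            for j in range(grid):
                cell = energy[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
                features.append(cell.mean())

print(len(features))  # 2 scales x 4 orientations x 16 cells = 128 dims
```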

6,882 citations

Journal ArticleDOI
TL;DR: An efficient segmentation algorithm is developed based on a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image and it is shown that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties.
Abstract: This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions.
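
For reference, scikit-image ships an implementation of this graph-based segmentation (felzenszwalb); the parameter values below are arbitrary, not tuned for any particular scene.

```python
# Illustrative usage of Felzenszwalb-Huttenlocher graph-based segmentation
# via scikit-image; returns a label image with one id per region.

from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()                      # any RGB image
segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(f"{segments.max() + 1} regions")
```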

5,791 citations


"Logics based on qualitative descrip..." refers methods in this paper

  • ...For extracting qualitative descriptors from the input image, a graph-based region segmentation method [51] is applied and then the closed boundary of the relevant regions detected is extracted....