Journal ISSN: 1793-351X

International Journal of Semantic Computing 

About: International Journal of Semantic Computing is an academic journal. It publishes mainly in the areas of semantic computing and ontology (information science). Its ISSN is 1793-351X. Over its lifetime, the journal has published 341 papers, which have received 2,760 citations.


Papers
Journal Article
TL;DR: Defines C-SPARQL, an extension of SPARQL whose distinguishing feature is support for continuous queries, i.e., queries registered over RDF data streams and then continuously executed.
Abstract: This article defines C-SPARQL, an extension of SPARQL whose distinguishing feature is the support of continuous queries, i.e. queries registered over RDF data streams and then continuously executed. Queries consider windows, i.e. the most recent triples of such streams, observed while data is continuously flowing. Supporting streams in RDF format guarantees interoperability and opens up important applications, in which reasoners can deal with evolving knowledge over time. C-SPARQL is presented by means of a full specification of the syntax, a formal semantics, and a comprehensive set of examples, drawn from urban computing applications, that systematically cover the SPARQL extensions. The expression of meaningful queries over streaming data is strictly connected to the availability of aggregation primitives; C-SPARQL therefore also includes extensions in this respect.
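The window semantics described in the abstract can be illustrated with a small sketch (plain Python, not actual C-SPARQL syntax): each arriving triple re-evaluates an aggregate over a sliding time window, analogous to a registered continuous query counting triples in a time-based window. The stream data below is hypothetical.

```python
from collections import deque

def window_count(stream, window_seconds):
    """Count triples whose timestamp falls within the last `window_seconds`,
    re-evaluated as each new triple arrives (a sliding time window)."""
    window = deque()
    results = []
    for ts, triple in stream:
        window.append((ts, triple))
        # Evict triples that have fallen out of the window.
        while window and window[0][0] <= ts - window_seconds:
            window.popleft()
        results.append((ts, len(window)))
    return results

# Hypothetical stream of (timestamp, triple) pairs, e.g. city sensor readings.
stream = [
    (0, ("sensor1", "reports", "42")),
    (5, ("sensor2", "reports", "17")),
    (12, ("sensor1", "reports", "40")),
    (31, ("sensor3", "reports", "9")),
]
print(window_count(stream, window_seconds=30))
```

At timestamp 31 the first triple (timestamp 0) has left the 30-second window, so the count stays at 3 rather than growing — the defining behavior of window-based continuous evaluation.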

269 citations

Journal Article
TL;DR: The affective characterization results demonstrate that multimedia features and physiological responses can be used to predict the affect a user is expected to feel in response to emotional video content.
Abstract: In this paper, we propose an approach for affective characterization of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used to characterize the emotional content of video clips in application areas such as affective video indexing and retrieval, and neuromarketing studies. A dataset of 64 different scenes from eight movies was shown to eight participants. While watching these scenes, their physiological responses were recorded. The participants were asked to self-assess their felt emotional arousal and valence for each scene. In addition, content-based audio and video features were extracted from the movie scenes in order to characterize each scene. Degrees of arousal and valence were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based features. We showed that a significant correlation exists between the valence-arousal provided by the spectators' self-assessments and the affective grades obtained automatically from either physiological responses or audio-video features. By means of an analysis of variance (ANOVA), the variation across participants' self-assessments, and across gender groups' self-assessments, for both valence and arousal was shown to be significant (p-values lower than 0.005). These affective characterization results demonstrate that multimedia features and physiological responses can be used to predict the expected affect of the user in response to emotional video content.
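The correlation analysis the abstract reports (agreement between self-assessed affect and automatically estimated affect) boils down to a Pearson correlation; a minimal sketch follows, with made-up scores that are illustrative only, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Illustrative (made-up) arousal scores per scene: self-assessed vs. estimated
# from a linear combination of physiological/content features.
self_assessed = [2.0, 3.5, 1.0, 4.0, 2.5]
estimated = [2.2, 3.1, 1.4, 3.8, 2.6]
print(pearson_r(self_assessed, estimated))
```

A value near 1 would indicate, as in the study, that the automatic estimates track the spectators' own ratings closely.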

90 citations

Journal Article
TL;DR: Describes a relational database representation that captures both the inter- and intra-layer dependencies, and details an object-oriented API for efficient, multi-tiered access to this data.
Abstract: The OntoNotes project is creating a corpus of large-scale, accurate, and integrated annotation of multiple levels of the shallow semantic structure in text. Such rich, integrated annotation covering many levels will allow for richer, cross-level models enabling significantly better automatic semantic analysis. At the same time, it demands a robust, efficient, scalable mechanism for storing and accessing these complex inter-dependent annotations. We describe a relational database representation that captures both the inter- and intra-layer dependencies and provide details of an object-oriented API for efficient, multi-tiered access to this data.
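The idea of storing inter-dependent annotation layers relationally can be sketched as follows; this is a hypothetical toy schema for illustration, not the actual OntoNotes representation:

```python
import sqlite3

# Toy schema: tokens plus annotations from multiple layers, linked by
# foreign key so cross-layer queries become simple joins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE token (
    id INTEGER PRIMARY KEY,
    sentence_id INTEGER,
    text TEXT
);
CREATE TABLE annotation (
    id INTEGER PRIMARY KEY,
    layer TEXT,              -- e.g. 'word_sense', 'syntax', 'coreference'
    token_id INTEGER REFERENCES token(id),
    label TEXT
);
""")
conn.execute("INSERT INTO token VALUES (1, 1, 'banks')")
conn.execute("INSERT INTO annotation VALUES (1, 'word_sense', 1, 'bank.01')")
conn.execute("INSERT INTO annotation VALUES (2, 'syntax', 1, 'NNS')")

# Cross-layer query: all annotations attached to one token.
rows = conn.execute(
    "SELECT a.layer, a.label FROM annotation a "
    "JOIN token t ON a.token_id = t.id "
    "WHERE t.text = 'banks' ORDER BY a.id"
).fetchall()
print(rows)
```

The point of the relational design is exactly this kind of query: annotations from different layers over the same span can be retrieved together without each layer needing to know about the others.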

63 citations

Journal Article
TL;DR: A set of novel formal semantics, such as deductive semantics, concept-algebra-based semantics, and visual semantics, is introduced that forms a theoretical and cognitive foundation for semantic computing.
Abstract: Semantics is the meaning of symbols, notations, concepts, functions, and behaviors, as well as their relations that can be deduced onto a set of predefined entities and/or known concepts. Semantic computing is an emerging computational methodology that models and implements computational structures and behaviors at semantic or knowledge level beyond that of symbolic data. In semantic computing, formal semantics can be classified into the categories of to be, to have, and to do semantics. This paper presents a comprehensive survey of formal and cognitive semantics for semantic computing in the fields of computational linguistics, software science, computational intelligence, cognitive computing, and denotational mathematics. A set of novel formal semantics, such as deductive semantics, concept-algebra-based semantics, and visual semantics, is introduced that forms a theoretical and cognitive foundation for semantic computing. Applications of formal semantics in semantic computing are presented in case studies on semantic cognition of natural languages, semantic analyses of computing behaviors, behavioral semantics of human cognitive processes, and visual semantic algebra for image and visual object manipulations.

61 citations

Journal Article
TL;DR: A computational model for the automatic production of combined speech and iconic gesture is presented, and an integrated architecture for this is described, in which the planning of content and the planning of form across both modalities proceed in an interactive manner.
Abstract: A computational model for the automatic production of combined speech and iconic gesture is presented. The generation of multimodal behavior is grounded in processes of multimodal thinking, in which a propositional representation interacts and interfaces with an imagistic representation of visuo-spatial imagery. An integrated architecture for this is described, in which the planning of content and the planning of form across both modalities proceed in an interactive manner. Results from an empirical study are reported that inform the on-the-spot formation of gestures.

61 citations

Performance Metrics

No. of papers from the Journal in previous years:

Year    Papers
2023    1
2022    2
2021    16
2020    25
2019    25
2018    28