
Showing papers by "INESC-ID" published in 2013


Proceedings Article
03 Aug 2013
TL;DR: A number of novel techniques for improving the performance of existing MCS computation algorithms are developed, and a novel algorithm for computing MCSes is proposed; both are shown to yield the most efficient and robust solutions for MCS computation.
Abstract: A set of constraints that cannot be simultaneously satisfied is over-constrained. Minimal relaxations and minimal explanations for over-constrained problems find many practical uses. For Boolean formulas, minimal relaxations of over-constrained problems are referred to as Minimal Correction Subsets (MCSes). MCSes find many applications, including the enumeration of MUSes. Existing approaches for computing MCSes either use a Maximum Satisfiability (MaxSAT) solver or iterative calls to a Boolean Satisfiability (SAT) solver. This paper shows that existing algorithms for MCS computation can be inefficient, and so inadequate, in certain practical settings. To address this problem, this paper develops a number of novel techniques for improving the performance of existing MCS computation algorithms. More importantly, the paper proposes a novel algorithm for computing MCSes. Both the techniques and the algorithm are evaluated empirically on representative problem instances, and are shown to yield the most efficient and robust solutions for MCS computation.
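
For orientation, the simplest of the iterative-SAT approaches mentioned above is a linear scan over the soft clauses; the sketch below is a minimal, self-contained Python rendering of that baseline (with a toy brute-force SAT oracle standing in for a real solver), not the improved algorithm the paper proposes.

```python
from itertools import product

def sat(clauses):
    """Toy brute-force SAT oracle over CNF given as lists of signed ints."""
    variables = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def basic_linear_search_mcs(hard, soft):
    """One MCS of `soft` w.r.t. a satisfiable `hard`, one SAT call per clause."""
    kept, mcs = [], []
    for clause in soft:
        if sat(hard + kept + [clause]):
            kept.append(clause)      # clause fits with what we kept so far
        else:
            mcs.append(clause)       # clause must be corrected (dropped)
    return mcs

# x1 is forced true by the kept soft clause [1], so (not x1) is corrected:
print(basic_linear_search_mcs(hard=[[1, 2]], soft=[[1], [-1], [-2]]))  # [[-1]]
```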

150 citations


Journal ArticleDOI
TL;DR: The Systems Biology Markup Language (SBML) Qualitative Models Package (qual) as discussed by the authors is an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks.
Abstract: Background: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. Results: We present the Systems Biology Markup Language (SBML) Qualitative Models Package (“qual”), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. Conclusions: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
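
To make the notion of a qualitative (logical) model concrete, here is a tiny Boolean regulatory network in plain Python with a synchronous update scheme; the network and its rules are invented for illustration and are unrelated to SBML qual's actual XML syntax.

```python
# Toy logical network of the kind SBML qual is designed to exchange.
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # A and B jointly activate C
}

def step(state):
    """Synchronous update: every component switches simultaneously."""
    return {node: bool(f(state)) for node, f in rules.items()}

state = {"A": True, "B": False, "C": False}
for _ in range(6):                      # iterate to look for an attractor
    print(state)
    state = step(state)
```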

129 citations


Journal ArticleDOI
TL;DR: In this article, a 64-channel ASIC for Time-of-Flight Positron Emission Tomography (TOF PET) imaging has been designed and simulated, which performs timing, digitization and data transmission for 511 keV and lower-energy events due to Compton scattering.
Abstract: A 64-channel ASIC for Time-of-Flight Positron Emission Tomography (TOF PET) imaging has been designed and simulated. The circuit is optimized for the readout of signals produced by the scintillation of an L(Y)SO crystal optically coupled to a silicon photomultiplier (SiPM). Developed in the framework of the EndoTOFPET-US collaboration (1), the ASIC is integrated in the external PET plate and performs timing, digitization and data transmission for 511 keV and lower-energy events due to Compton scattering. Multi-event buffering capability allows event rates up to 100 kHz per channel. The channel cell includes a low-input-impedance, low-noise current conveyor and two trans-impedance amplifier branches separately optimized for energy and time resolution. Two voltage-mode discriminators generate, respectively, a fast trigger for accurate timing and a signal for time-over-threshold calculation, used for charge measurement. The digitization of these signals is done by two low-power TDCs, providing coarse and fine time stamps that are saved into a local register and later managed by a global controller, which builds up the 40-bit event data and runs the interface with the data acquisition back-end. Running at 160 MHz, the chip yields a 50 ps time binning and dissipates about 7 mW per channel (simulated for a 40 kHz event rate per channel) for high-capacitance photodetectors (9 mm² active-area silicon photomultiplier with 320 pF terminal capacitance). The minimum SNR of 23.5 dB expected with this capacitance should allow triggering on the first photoelectron to achieve the envisaged timing performance for a TOF-PET system.
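
The time-over-threshold principle used for charge measurement is easy to illustrate in software: measure how long a pulse stays above the discriminator threshold. The pulse shape, threshold and time constants below are illustrative assumptions, not the ASIC's real parameters (only the 50 ps binning is taken from the abstract).

```python
import math

def pulse(t, amplitude, tau_rise=2e-9, tau_fall=40e-9):
    """Simple two-exponential scintillation-like current pulse (arbitrary units)."""
    return amplitude * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

dt, threshold = 50e-12, 0.05            # 50 ps time binning; assumed threshold
samples = [pulse(i * dt, amplitude=1.0) for i in range(4000)]
above = [i for i, v in enumerate(samples) if v > threshold]
tot = (above[-1] - above[0] + 1) * dt if above else 0.0
print(f"time over threshold: {tot * 1e9:.1f} ns")  # proxy for deposited charge
```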

123 citations


Posted Content
TL;DR: The collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models.
Abstract: Background: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. Results: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the cooperative development of the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyze qualitative models. Conclusion: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.

108 citations


Journal ArticleDOI
TL;DR: There is an urgent need to re-engineer health systems to improve public health through behavior change, and technology-supported behavioral change interventions will be a part of 21st-century health care.
Abstract: It is now known that nearly half of the toll that illness takes in developed countries is linked to four unhealthy behaviors: smoking, excess alcohol intake, poor diet, and physical inactivity. These common risk behaviors cause preventable, delayed illness that then manifests as chronic disease, requiring extended medical care with associated financial costs. Chronic disease already accounts for 75% of U.S. health-care costs, foreshadowing an unsustainable financial burden for the aging population [1]. We are facing an urgent need to re-engineer health systems to improve public health through behavior change, and technology-supported behavioral change interventions will be a part of 21st-century health care. As new technical capabilities to observe behavior continuously in context make it possible to tailor interventions in real time, the way we understand and try to influence behavior will change fundamentally.

100 citations


Proceedings ArticleDOI
01 Jan 2013
TL;DR: Three grand goals of PCG are defined, namely multi-level multi-content PCG, PCG-based game design, and generating complete games; nine challenges for PCG research are identified, work towards which is likely to take us closer to realising the three grand goals.
Abstract: This chapter discusses the challenges and opportunities of procedural content generation (PCG) in games. It starts with defining three grand goals of PCG, namely multi-level multi-content PCG, PCG-based game design and generating complete games. As currently defined, these goals are not feasible with current technology. Therefore, we identify nine challenges for PCG research; work towards meeting these challenges is likely to take us closer to realising the three grand goals. To help researchers get started, we also identify five actionable steps that PCG researchers could begin working on immediately.

86 citations


Proceedings ArticleDOI
04 Jun 2013
TL;DR: Results in terms of equal error rate, half total error rate (HTER) and detection error trade-off (DET) show that the best performing systems either fuse several image representations and feature types or learn optimal features with a convolutional neural network.
Abstract: Automatic face recognition in unconstrained environments is a challenging task. To test current trends in face recognition algorithms, we organized an evaluation on face recognition in a mobile environment. This paper presents the results of 8 different participants using two verification metrics. Most submitted algorithms rely on one or more of three types of features: local binary patterns, Gabor wavelet responses including Gabor phases, and color information. The best results are obtained from UNILJ-ALP, which fused several image representations and feature types, and UC-HU, which learns optimal features with a convolutional neural network. Additionally, we assess the usability of the algorithms in mobile devices with limited resources.
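
For readers unfamiliar with the reported verification metrics, the sketch below computes an equal error rate and a half total error rate from toy genuine/impostor score lists; the scores and the fixed operating threshold are invented.

```python
import numpy as np

genuine  = np.array([0.9, 0.8, 0.6, 0.55, 0.45])   # same-identity scores
impostor = np.array([0.62, 0.5, 0.4, 0.3, 0.1])    # different-identity scores

def rates(threshold):
    far = np.mean(impostor >= threshold)   # false acceptance rate
    frr = np.mean(genuine < threshold)     # false rejection rate
    return far, frr

# EER: the operating point where FAR and FRR are (approximately) equal.
ts = np.linspace(0, 1, 1001)
far_frr = np.array([rates(t) for t in ts])
i = np.argmin(np.abs(far_frr[:, 0] - far_frr[:, 1]))
eer = far_frr[i].mean()

# HTER: average of FAR and FRR at a threshold fixed beforehand (e.g., on dev data).
far, frr = rates(0.52)
hter = (far + frr) / 2
print(f"EER ~ {eer:.2f}, HTER ~ {hter:.2f}")
```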

70 citations


Proceedings Article
01 Aug 2013
TL;DR: Over 1M Chinese-English parallel segments were extracted from Sina Weibo (the Chinese counterpart of Twitter) using only its public APIs; the automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary.
Abstract: In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users post multilingual messages targeting international audiences while others "retweet" translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only its public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. The resources described in this paper are available at http://www.cs.cmu.edu/~lingwang/utopia.
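
As a toy illustration of the mining problem (not the authors' detector), a bilingual post can be crudely separated into its Chinese and English sides by Unicode script:

```python
import re

def split_bilingual(post):
    """Crudely separate the Chinese and English sides of a bilingual post."""
    zh = "".join(re.findall(r"[\u4e00-\u9fff，。！？、]+", post))
    en = " ".join(re.findall(r"[A-Za-z][A-Za-z'\-]*", post))
    return zh, en

post = "今天天气真好！The weather is really nice today"
print(split_bilingual(post))   # candidate parallel segment pair
```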

70 citations


Journal ArticleDOI
TL;DR: This work presents an on-line system designed to behave as a virtual therapist, incorporating automatic speech recognition technology that permits aphasia patients to perform word naming training exercises; the study focuses on the automatic word naming detector module.

60 citations


Proceedings Article
01 Oct 2013
TL;DR: It is shown that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.
Abstract: Compared to the edited genres that have played a central role in NLP research, microblog texts use a more informal register with nonstandard lexical items, abbreviations, and free orthographic variation. When confronted with such input, conventional text analysis tools often perform poorly. Normalization — replacing orthographically or lexically idiosyncratic forms with more standard variants — can improve performance. We propose a method for learning normalization rules from machine translations of a parallel corpus of microblog messages. To validate the utility of our approach, we evaluate extrinsically, showing that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.
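
A highly simplified version of rule harvesting: given pairs of raw and normalized messages (here hand-written; the paper derives them from machine translations of parallel microblog data), count word-level substitutions under a naive one-to-one alignment:

```python
from collections import Counter

pairs = [
    ("u r gr8", "you are great"),
    ("c u 2nite", "see you tonight"),
]

counts = Counter()
for raw, norm in pairs:
    rw, nw = raw.split(), norm.split()
    if len(rw) == len(nw):                       # naive 1:1 alignment
        counts.update((r, n) for r, n in zip(rw, nw) if r != n)

rules = {}
for (r, n), _ in counts.most_common():
    rules.setdefault(r, n)                       # keep most frequent mapping

def normalize(text):
    return " ".join(rules.get(w, w) for w in text.split())

print(normalize("r u gr8"))                      # "are you great"
```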

54 citations


Journal ArticleDOI
TL;DR: Methods are proposed to design memoryless reverse converters for the proposed moduli sets with large dynamic ranges, up to (8n+1)-bit; the extensions improve RNS arithmetic computation at the cost of lower reverse conversion performance.
Abstract: In recent years, investigation on residue number systems (RNS) has targeted parallelism and larger dynamic ranges. In this paper, we start from the moduli set {2^n, 2^n-1, 2^n+1, 2^n-2^((n+1)/2)+1, 2^n+2^((n+1)/2)+1}, with an equivalent 5n-bit dynamic range, and propose horizontal and vertical extensions in order to improve the parallelism and increase the dynamic range. The vertical extensions increase the value of the power-of-2 modulus in the five-moduli set. With the horizontal extensions, new six-channel sets are allowed by introducing the 2^(n+1)+1 or 2^(n-1)+1 moduli. This paper proposes methods to design memoryless reverse converters for the proposed moduli sets with large dynamic ranges, up to (8n+1)-bit. Due to the complexity of the reverse conversion, both the Chinese Remainder Theorem and the Mixed Radix Conversion are applied in the proposed methods to derive efficient reverse converters. Experimental results suggest that the proposed vertical extensions reduce the area-delay product by a factor of up to 1.34 in comparison with the related state of the art. The horizontal extensions allow larger and more balanced moduli sets, improving RNS arithmetic computation at the cost of lower reverse conversion performance.
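
For intuition about what a reverse converter computes, here is a software rendering of CRT-based reverse conversion for the base five-moduli set with n = 5; the paper's converters are memoryless hardware structures, not this generic formula, and pow(Mi, -1, m) (Python 3.8+) stands in for the precomputed multiplicative inverses.

```python
from math import prod

n = 5                                  # n odd, so (n+1)/2 is an integer
h = 2 ** ((n + 1) // 2)
moduli = [2**n, 2**n - 1, 2**n + 1, 2**n - h + 1, 2**n + h + 1]

def to_rns(x):
    """Forward conversion: integer -> residues."""
    return [x % m for m in moduli]

def crt(residues):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi modulo m
    return x % M

x = 12345678                           # fits in the ~5n-bit dynamic range
assert crt(to_rns(x)) == x
```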

Proceedings Article
01 Aug 2013
TL;DR: Empirical results reveal that the proposed graph-based semi-supervised joint model of Chinese word segmentation and part-of-speech tagging can yield better results than the supervised baselines and other competitive semi-supervised CRFs on this task.
Abstract: This paper introduces a graph-based semi-supervised joint model of Chinese word segmentation and part-of-speech tagging. The proposed approach is based on a graph-based label propagation technique. A nearest-neighbor similarity graph is constructed over all trigrams of labeled and unlabeled data for propagating syntactic information, i.e., label distributions. The derived label distributions are regarded as virtual evidence to regularize the learning of linear conditional random fields (CRFs) on unlabeled data. An inductive character-based joint model is obtained eventually. Empirical results on the Chinese Treebank (CTB-7) and Microsoft Research (MSR) corpora reveal that the proposed model can yield better results than the supervised baselines and other competitive semi-supervised CRFs on this task.
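
The label-propagation step at the heart of the approach can be sketched on a toy graph: labeled nodes are clamped while unlabeled nodes repeatedly average their neighbors' label distributions. The graph and weights are invented; the paper's graph is built over trigrams carrying POS label distributions.

```python
import numpy as np

W = np.array([[0, 1, 1, 0],      # symmetric edge weights between 4 nodes
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)

Y = np.array([[1, 0],            # node 0: labeled with class 0
              [0, 0],            # nodes 1, 2: unlabeled
              [0, 0],
              [0, 1]], float)    # node 3: labeled with class 1
labeled = np.array([True, False, False, True])

F = Y.copy()
for _ in range(50):
    F = W @ F / W.sum(axis=1, keepdims=True)   # average neighbor distributions
    F[labeled] = Y[labeled]                    # clamp the labeled nodes
print(F.round(3))                              # rows ~ propagated distributions
```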

Book ChapterDOI
02 Sep 2013
TL;DR: This paper proposes a new approach to WIP, called Speed-Amplitude-Supported Walking-in-Place (SAS-WIP), which allows people, when walking along linear paths, to control their virtual speed based on footstep amplitude and speed metrics.
Abstract: Walking in Place (WIP) is an important locomotion technique used in virtual environments. This paper proposes a new approach to WIP, called Speed-Amplitude-Supported Walking-in-Place (SAS-WIP), which allows people, when walking along linear paths, to control their virtual speed based on footstep amplitude and speed metrics. We argue that our approach allows users to better control the virtual distance covered by the footsteps, achieve higher average speeds and experience less fatigue than when using the state-of-the-art method based on footstep frequency, GUD-WIP.

Journal ArticleDOI
Levent Aksoy, Cristiano Lazzari, E. Costa, Paulo Flores, José Monteiro
TL;DR: This paper addresses the problem of optimizing the gate-level area in digit-serial MCM designs and introduces high-level synthesis algorithms, design architectures, and a computer-aided design tool.
Abstract: In the last two decades, many efficient algorithms and architectures have been introduced for the design of low-complexity bit-parallel multiple constant multiplications (MCM) operation which dominates the complexity of many digital signal processing systems. On the other hand, little attention has been given to the digit-serial MCM design that offers alternative low-complexity MCM operations albeit at the cost of an increased delay. In this paper, we address the problem of optimizing the gate-level area in digit-serial MCM designs and introduce high-level synthesis algorithms, design architectures, and a computer-aided design tool. Experimental results show the efficiency of the proposed optimization algorithms and of the digit-serial MCM architectures in the design of digit-serial MCM operations and finite impulse response filters.
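
The underlying MCM idea, realizing several constant multiplications of the same input with shared shifts and adds instead of multipliers, can be shown in a few lines; the constants and the particular sharing are hand-picked for clarity, whereas the paper's algorithms search for area-minimal sharings at the gate level.

```python
def mcm(x):
    """Multiply x by the constants {3, 5, 11} using only shifts and adds."""
    t3  = (x << 1) + x       # 3x  = 2x + x
    t5  = (x << 2) + x       # 5x  = 4x + x
    t11 = (t3 << 2) - x      # 11x = 4*(3x) - x, reusing the shared term 3x
    return t3, t5, t11

print(mcm(7))   # (21, 35, 77)
```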

Proceedings ArticleDOI
25 Aug 2013
TL;DR: The proposed method not only offers an elegant solution for the problem of fusion and calibration of multiple detectors, but also provides consistent improvements over a baseline approach based on majority voting, according to experiments on the MediaEval 2012 Spoken Web Search task.
Abstract: The combination of several heterogeneous systems is known to provide remarkable performance improvements in verification and detection tasks. In Spoken Term Detection (STD), two important issues arise: (1) how to define a common set of detected candidates, and (2) how to combine system scores to produce a single score per candidate. In this paper, a discriminative calibration/fusion approach commonly applied in speaker and language recognition is adopted for STD. Under this approach, we first propose several heuristics to hypothesize scores for systems that do not detect a given candidate. In this way, the original problem of several unaligned detection candidates is converted into a verification task. As for other verification tasks, system weights and offsets are then estimated through linear logistic regression. As a result, the combined scores are well calibrated, and the detection threshold is automatically given by application parameters (priors and costs). The proposed method not only offers an elegant solution for the problem of fusion and calibration of multiple detectors, but also provides consistent improvements over a baseline approach based on majority voting, according to experiments on the MediaEval 2012 Spoken Web Search (SWS) task involving 8 heterogeneous systems developed at two different laboratories.
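
The calibration/fusion step can be sketched as plain linear logistic regression on per-system scores: learn one weight per system plus an offset so the fused score behaves like a calibrated log-likelihood ratio. The data below is synthetic; in the paper, candidates missed by a system first receive hypothesized scores so that every candidate is scored by every system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, n)                     # 1 = true term occurrence
# Two noisy detectors whose scores correlate with the label:
S = np.stack([labels + 0.8 * rng.normal(size=n),
              labels + 1.2 * rng.normal(size=n)], axis=1)

w, b = np.zeros(2), 0.0
for _ in range(2000):                              # plain gradient descent
    p = 1 / (1 + np.exp(-(S @ w + b)))
    g = p - labels
    w -= 0.1 * (S.T @ g) / n
    b -= 0.1 * g.mean()

fused = S @ w + b                                  # calibrated fused scores
print("weights:", w.round(2), "offset:", round(b, 2))
```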

Journal ArticleDOI
01 Nov 2013 - Fuel
TL;DR: In this article, the authors demonstrate that cork pellets have a higher calorific value than other biomass pellets, typically approximately 20 MJ/kg with 3% ash by volume, equivalent to that obtained from the combustion of pellets produced from combined forest and agricultural waste; with a bulk density of 750 kg/m³, they offer real advantages in terms of logistics.

Proceedings ArticleDOI
09 Sep 2013
TL;DR: A proposal is made comprising an extensible architecture that consists of a core domain-independent ontology that can be extended through the integration of domain-specific ontologies focusing on specific concerns.
Abstract: Enterprise architecture (EA) aligns business and information technology through the management of different elements and domains. An architecture description encompasses a wide and heterogeneous spectrum of areas, such as business processes, metrics, application components, people and technological infrastructure. Views express the elements and relationships of one or more domains from the perspective of specific system concerns relevant to one or more of its stakeholders. As a result, each view needs to be expressed in the description language that best suits its concerns. However, enterprise architecture languages tend to advocate a rigid "one-model fits all" approach where an all-encompassing description language describes several architectural domains. This approach hinders extensibility and adds complexity to the overall description language. On the other hand, integrating multiple models raises several challenges at the level of model coherence, consistency and traceability. Moreover, EA models should be computable so that the effort involved in their analysis is manageable. This work advocates the employment of ontologies and associated techniques in EA to help solve the aforementioned issues. Thus, a proposal is made comprising an extensible architecture that consists of a core domain-independent ontology that can be extended through the integration of domain-specific ontologies focusing on specific concerns. The proposal is demonstrated through a real-world evaluation scenario involving the analysis of the models according to the requirements of the scenario stakeholders.
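
A minimal sketch of the proposed structure using rdflib: a core, domain-independent concept is extended by a domain-specific ontology, and a simple computable analysis walks the subclass hierarchy. All names and namespaces are invented; this is not the paper's ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS

CORE = Namespace("http://example.org/ea-core#")   # hypothetical namespaces
BPM  = Namespace("http://example.org/ea-bpm#")

g = Graph()
# Core, domain-independent concept:
g.add((CORE.BusinessProcess, RDF.type, RDFS.Class))
# Domain extension: a BPMN-specific concern specializes a core concept.
g.add((BPM.BPMNProcess, RDFS.subClassOf, CORE.BusinessProcess))
g.add((BPM.HandleClaim, RDF.type, BPM.BPMNProcess))

# Computable analysis: enumerate BusinessProcess and every class that
# (transitively) specializes it.
for cls in g.transitive_subjects(RDFS.subClassOf, CORE.BusinessProcess):
    print(cls)
```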

Journal ArticleDOI
TL;DR: In this paper, the authors present a technique that uses the estimated input impedance to achieve better performance and to follow regulatory changes in the band above 30 MHz, together with a study of the viability of using impulsive noise reduction techniques to further increase performance.
Abstract: Power-line communication (PLC) allows establishing digital communications without adding any new wires. It can turn one's house or neighborhood grid into a smart grid. PLC has some issues, namely high noise at low frequencies and varying characteristic impedance. This paper addresses these issues to improve the signal-to-noise ratio by increasing the signal or reducing the noise. PLC MODEMs are subject to regulations that limit the signals on the line. The radiated signal is proportional to the current, but not to the input current, since the current forms a standing wave along the line. However, better performance can be achieved if the input current is measured. The receiver circuit of the transmitting MODEM can be used to estimate the input impedance. This paper presents a technique that uses this information to achieve better performance and to follow regulatory changes in the band above 30 MHz. A study of the viability of using impulsive noise reduction techniques to further increase performance is also presented. The short noise pulses result in high correlation between the noise in different carriers, so impulse position detection should yield an increase in capacity.
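
A back-of-the-envelope rendering of the input-impedance idea: estimate Z from the voltage and current phasors seen at the MODEM port, then scale the transmit level against a current limit. All values, and the simple V/I estimate, are illustrative assumptions.

```python
v_measured = 2.0 + 0.5j      # injected voltage phasor at the MODEM port [V]
i_measured = 0.04 - 0.01j    # resulting input-current phasor [A]

z_in = v_measured / i_measured   # input impedance seen by the transmitter
i_limit = 0.05                   # assumed input-current limit [A]

# Maximum transmit voltage that keeps the input current within the limit:
v_max = i_limit * abs(z_in)
print(f"Z_in ~ {abs(z_in):.1f} ohm, transmit up to ~{v_max:.2f} V")
```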

Book ChapterDOI
08 Jul 2013
TL;DR: In this article, a two-phase proof system for quantified Boolean formulas is presented, which expands the formula in the first phase and applies propositional resolution in the second; fragments of this proof system are defined and compared to Q-resolution.
Abstract: Over the years, proof systems for propositional satisfiability (SAT) have been extensively studied. Recently, proof systems for quantified Boolean formulas (QBFs) have also been gaining attention. Q-resolution is a calculus for producing proofs from DPLL-based QBF solvers. While DPLL has become a dominating technique for SAT, QBF has been tackled by other complementary and competitive approaches. One of these approaches is based on expanding variables until the formula contains only one type of quantifier, upon which a SAT solver is invoked. This approach motivates the theoretical analysis carried out in this paper. We focus on a two-phase proof system, which expands the formula in the first phase and applies propositional resolution in the second. Fragments of this proof system are defined and compared to Q-resolution.
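
The expansion step that motivates the analysis can be sketched for a prenex CNF of the form exists X forall u exists Y . phi: substitute both values of u and rename the inner existentials in one copy. The integer clause encoding and helper names are assumptions for illustration.

```python
import itertools

def assign(cnf, lit):
    """Simplify `cnf` under the assignment that makes `lit` true."""
    simplified = []
    for clause in cnf:
        if lit in clause:
            continue                      # clause already satisfied
        simplified.append([l for l in clause if l != -lit])
    return simplified

def expand_universal(cnf, u, inner_Y, fresh):
    """phi[u:=0] AND phi[u:=1], renaming inner existentials Y in the u=1 copy."""
    low = assign(cnf, -u)                 # u = 0 branch
    high = assign(cnf, u)                 # u = 1 branch
    rename = {y: next(fresh) for y in inner_Y}
    high = [[(1 if l > 0 else -1) * rename.get(abs(l), abs(l)) for l in clause]
            for clause in high]
    return low + high

# exists x forall u exists y . (x or u or y) and (not u or not y)
cnf = [[1, 2, 3], [-2, -3]]               # x=1, u=2, y=3
print(expand_universal(cnf, u=2, inner_Y={3}, fresh=itertools.count(4)))
# -> [[1, 3], [-4]]  i.e.  (x or y) and (not y')
```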

Journal Article
TL;DR: This work focuses on a two-phase proof system, which expands the formula in the first phase and applies propositional resolution in the second; fragments of this proof system are defined and compared to Q-resolution.
Abstract: Over the years, proof systems for propositional satisfiability (SAT) have been extensively studied. Recently, proof systems for quantified Boolean formulas (QBFs) have also been gaining attention. Q-resolution is a calculus for producing proofs from DPLL-based QBF solvers. While DPLL has become a dominating technique for SAT, QBF has been tackled by other complementary and competitive approaches. One of these approaches is based on expanding variables until the formula contains only one type of quantifier, upon which a SAT solver is invoked. This approach motivates the theoretical analysis carried out in this paper. We focus on a two-phase proof system, which expands the formula in the first phase and applies propositional resolution in the second. Fragments of this proof system are defined and compared to Q-resolution.

Proceedings ArticleDOI
27 May 2013
TL;DR: In this paper, the authors present the key features of a model for software agents that handles two-party and multi-issue negotiation, and describe two novel negotiation strategies for promoting demand response, a "volume management" strategy for end-use consumers and a "price management" strategy for producers/retailers, illustrated by a case study involving a retailer agent and a commercial customer.
Abstract: Two major goals of electricity markets are ensuring a secure and efficient operation and decreasing the cost of energy. To achieve these goals, three major market models have been considered: pools, bilateral contracts and hybrid markets. Pool prices tend to change quickly and variations are usually highly unpredictable. In this way, market participants can enter into bilateral contracts to hedge against pool price volatility. Multi-agent electricity markets, that is, energy management tools based on software agents, have received some attention lately and a number of prominent simulators have been proposed in the literature. However, despite the power and elegance of existing tools, they often lack generality and flexibility, mainly because they are limited to particular features of market players. This paper describes on-going work that uses the potential of agent-based technology to develop a computational tool to support bilateral contracting in electricity markets. Specifically, the purpose of the paper is threefold: (i) to present the key features of a model for software agents that handles two-party and multi-issue negotiation, (ii) to describe two novel negotiation strategies for promoting demand response, a "volume management" strategy for end-use consumers, and a "price management" strategy for producers/retailers, and (iii) to describe a case study on forward bilateral contracts, involving a retailer agent and a commercial customer.
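
As a hedged sketch of what a price-oriented tactic might look like (the functional form and constants below are assumptions, not the paper's strategies), here is a classic time-dependent concession curve for a seller:

```python
def offer(t, deadline, p_init, p_reserve, beta=2.0):
    """Seller's price offer at round t: concede from p_init toward p_reserve."""
    frac = (t / deadline) ** (1.0 / beta)   # beta > 1: concede slowly early on
    return p_init - frac * (p_init - p_reserve)

for t in range(0, 11, 2):                   # offers over a 10-round negotiation
    print(t, round(offer(t, 10, 100.0, 70.0), 1))
```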

Posted Content
TL;DR: In this article, a suite of commonly-used preprocessing techniques for quantified Boolean formulas (QBFs) is targeted, showing how to reconstruct certificates for them; the techniques were implemented and evaluated in the state-of-the-art QBF preprocessor bloqqer.
Abstract: QBFs (quantified Boolean formulas), which are a superset of propositional formulas, provide a canonical representation for PSPACE problems. To overcome the inherent complexity of QBF, significant effort has been invested in developing QBF solvers as well as the underlying proof systems. At the same time, formula preprocessing is crucial for the application of QBF solvers. This paper focuses on a missing link in currently available technology: how to obtain a certificate (e.g., a proof) for a formula that had been preprocessed before it was given to a solver? The paper targets a suite of commonly-used preprocessing techniques and shows how to reconstruct certificates for them. On the negative side, the paper discusses certain limitations of the currently-used proof systems in the light of preprocessing. The presented techniques were implemented and evaluated in the state-of-the-art QBF preprocessor bloqqer.

Journal ArticleDOI
TL;DR: It is demonstrated how uncertainty propagation allows the computation of minimum mean square error (MMSE) estimates in the feature domain for various feature extraction methods using short-time Fourier transform (STFT) domain distortion models.
Abstract: In this paper we demonstrate how uncertainty propagation allows the computation of minimum mean square error (MMSE) estimates in the feature domain for various feature extraction methods using short-time Fourier transform (STFT) domain distortion models. In addition to this, a measure of estimate reliability is also attained which allows either feature re-estimation or the dynamic compensation of automatic speech recognition (ASR) models. The proposed method transforms the posterior distribution associated to a Wiener filter through the feature extraction using the STFT Uncertainty Propagation formulas. It is also shown that non-linear estimators in the STFT domain like the Ephraim-Malah filters can be seen as special cases of a propagation of the Wiener posterior. The method is illustrated by developing two MMSE-Mel-frequency Cepstral Coefficient (MFCC) estimators and combining them with observation uncertainty techniques. We discuss similarities with other MMSE-MFCC estimators and show how the proposed approach outperforms conventional MMSE estimators in the STFT domain on the AURORA4 robust ASR task.
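
The key object being propagated is the Wiener posterior; in generic notation (not necessarily the paper's), for one STFT bin it is the complex Gaussian below, and the feature-domain MMSE estimate is its expectation pushed through the nonlinear feature extraction.

```latex
% Standard Wiener-posterior result assumed by the discussion above;
% the notation is generic rather than the paper's. For one STFT bin,
% with speech PSD \sigma_s^2 and noise PSD \sigma_n^2:
\[
  p(S \mid Y) = \mathcal{N}_{\mathbb{C}}\big(W Y,\; W \sigma_n^2\big),
  \qquad W = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2},
\]
% and the feature-domain MMSE estimate is the posterior expectation taken
% through the (nonlinear) feature extraction f, e.g. MFCC computation:
\[
  \hat{c}^{\mathrm{MMSE}} = \mathbb{E}\!\left[ f(S) \mid Y \right].
\]
```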

01 Jan 2013
TL;DR: The INESC-ID Spoken Language Systems Laboratory (L2F) primary system developed for the Spoken Web Search task of the MediaEval 2013 evaluation campaign consists of the fusion of six individual sub-systems exploiting three language-dependent phonetic classifiers, as mentioned in this paper.
Abstract: The INESC-ID Spoken Language Systems Laboratory (L2F) primary system developed for the Spoken Web Search task of the MediaEval 2013 evaluation campaign consists of the fusion of six individual sub-systems exploiting three different language-dependent phonetic classifiers. For each phonetic classifier, an acoustic keyword spotting (AKWS) sub-system based on connectionist speech recognition and a dynamic time warping (DTW) based sub-system have been developed. The diversity in terms of phonetic classifiers and methods, together with the efficient fusion and calibration approach applied to heterogeneous sub-systems, are the key elements of the L2F submission. Besides the primary submission, two additional systems based on the fusion of only the AKWS and the DTW sub-systems have been developed for comparison purposes. A final multi-site system formed by the fusion of the L2F and the GTTS primary submissions has also been submitted to explore the potential of the fusion approach for very heterogeneous systems.

Proceedings ArticleDOI
15 Jul 2013
TL;DR: RSL-IL, a Requirements Specification Language, is presented; it tackles the requirements formalization problem by providing a minimal set of constructs that represent requirements formally enough to be tractable by a computer.
Abstract: Despite being the most suitable language to communicate requirements, the intrinsic ambiguity of natural language often undermines requirements quality criteria, especially clearness and consistency. Several proposals have been made to increase the rigor of requirements representations through conceptual models, which encompass different perspectives to completely describe the system. However, this multi-representation strategy warrants significant human effort to produce and reuse such models, as well as to enforce their consistency. This paper presents RSL-IL, a Requirements Specification Language that tackles the requirements formalization problem by providing a minimal set of constructs. To cope with the most typical Requirements Engineering concerns, RSL-IL constructs are internally organized into viewpoints. Since these constructs are tightly integrated, RSL-IL enables the representation of requirements in a way that makes them formal enough to be tractable by a computer. Given that RSL-IL provides a stable intermediate representation that can improve quality and enable requirements reuse, it can be regarded as a requirements interlingua. Also, RSL-IL can be used as a source language within the context of model-to-model transformations to produce specific conceptual models. To illustrate how RSL-IL can be applied in a real project, this paper provides a running example based on a case study.

Proceedings ArticleDOI
13 May 2013
TL;DR: This survey provides an overview of web archive search architectures designed for time-travel search, i.e. full-text search on the web within a user-specified time interval, and discusses which search architecture is more suitable for a large-scale web archive.
Abstract: Web archives already hold more than 282 billion documents and users demand full-text search to explore this historical information. This survey provides an overview of web archive search architectures designed for time-travel search, i.e. full-text search on the web within a user-specified time interval. Performance, scalability and ease of management are important aspects to take into consideration when choosing a system architecture. We compare these aspects and initiate the discussion of which search architecture is more suitable for a large-scale web archive.
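
The core of time-travel search can be sketched as an inverted index whose postings carry capture timestamps, filtered by the user's interval; the tiny corpus is invented.

```python
from collections import defaultdict

docs = [  # (doc_id, capture_year, text)
    ("d1", 1999, "olympic games sydney"),
    ("d2", 2004, "olympic games athens"),
    ("d3", 2012, "olympic games london"),
]

index = defaultdict(list)
for doc_id, year, text in docs:
    for term in set(text.split()):
        index[term].append((doc_id, year))     # posting carries the timestamp

def time_travel_search(term, start, end):
    """Full-text lookup restricted to a user-specified time interval."""
    return [d for d, y in index.get(term, []) if start <= y <= end]

print(time_travel_search("olympic", 2000, 2010))   # ['d2']
```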

Proceedings ArticleDOI
15 Jul 2013
TL;DR: RSL-PL can improve the quality of requirements specifications, as well as the productivity of requirements engineers, by mitigating the continuous effort that is often required to ensure requirements quality criteria, such as clearness, consistency, and completeness.
Abstract: Software requirements are traditionally documented in natural language (NL). However, despite being easy to understand and having high expressivity, this approach often leads to well-known requirements quality problems. In turn, dealing with these problems warrants a significant amount of human effort, causing requirements development activities to be error-prone and time-consuming. This paper introduces RSL-PL, a language that enables the definition of linguistic patterns typically found in well-formed individual NL requirements, according to the field's best practices. The linguistic features encoded within RSL-PL patterns enable the usage of information extraction techniques to automatically perform the linguistic analysis of NL requirements. Thus, in this paper we argue that RSL-PL can improve the quality of requirements specifications, as well as the productivity of requirements engineers, by mitigating the continuous effort that is often required to ensure requirements quality criteria, such as clearness, consistency, and completeness.
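
As a stand-in for RSL-PL (whose actual pattern language is richer than shown here), a single regex can encode one classic well-formed-requirement shape and make NL requirements mechanically checkable; the pattern and examples are invented.

```python
import re

# One linguistic pattern: "The <system> shall <action> ..."
PATTERN = re.compile(
    r"^The (?P<system>[A-Za-z ]+?) shall (?P<action>\w+)(?P<rest>.*)\.$"
)

requirements = [
    "The billing system shall generate an invoice within 5 seconds.",
    "Invoices should probably be generated quickly.",   # vague: no match
]

for req in requirements:
    m = PATTERN.match(req)
    print("OK  " if m else "FAIL", req)
    if m:
        print("     system:", m.group("system"), "| action:", m.group("action"))
```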

Journal Article
TL;DR: This work was partially supported by national funds through FCT – Fundação para a Ciência e a Tecnologia, under project PEst-OE/EEI/LA0021/2011, and by DCTI – ISCTE-IUL – Lisbon University Institute.
Abstract: This work was partially supported by national funds through FCT – Fundação para a Ciência e a Tecnologia, under project PEst-OE/EEI/LA0021/2011, and by DCTI – ISCTE-IUL – Lisbon University Institute.

Journal ArticleDOI
TL;DR: This paper describes a new approach to implementing open-source and interoperable intelligent tutors through standardization; it has the advantage of yielding tutors that are fully conformant to e-learning standards and free of external resource dependencies.
Abstract: Because of interoperability issues, intelligent tutoring systems are difficult to deploy in current educational platforms without additional work. This limitation is significant because tutoring systems require considerable time and resources for their implementation. In addition, because these tutors have a high educational value, it is desirable that they could be shared, used by many stakeholders, and easily loaded onto different platforms. This paper describes a new approach to implementing open-source and interoperable intelligent tutors through standardization. In contrast to other methods, our technique does not require using nonstandardized peripheral systems or databases, which would restrict the interoperability of learning objects. Thus, our approach has the advantage of yielding tutors that are fully conformant to e-learning standards and that are free of external resource dependencies. According to our method, "atomic" tutoring systems are grouped to create "molecular" tree structures that cover course modules. In addition, given the interoperability of our technique, tutors can also be combined to create courses that have distinct granularities, topics, and target students. The key to our method is the focus on assuring what defines a tutor in terms of behavior and functionalities (inner loops and outer loops). Our proof of concept was developed using SCORM standards. This paper presents the implementation details of our technique, including the theoretical concepts, technical specifications, and practical examples.
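
A toy rendering of the "atomic tutors grouped into molecular trees" idea: atomic tutors are leaves and a simple outer loop traverses the tree to select tasks. The class, names and traversal policy are invented for illustration.

```python
class Tutor:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def outer_loop(self):
        """Task selection: here, simply visit sub-tutors in order."""
        if not self.children:
            yield self.name            # an atomic tutor delivers its own task
        for child in self.children:
            yield from child.outer_loop()

course = Tutor("Algebra", [            # a "molecular" tutor covering a module
    Tutor("Linear equations"),
    Tutor("Quadratics", [Tutor("Factoring"), Tutor("Completing the square")]),
])
print(list(course.outer_loop()))
```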

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A framework for multicast subgroup formation in satellite networks splits the multicast satellite users into subgroups according to the experienced channel conditions, allowing multicast content to be conveyed without additional data coding while guaranteeing high throughput and user fairness.
Abstract: This paper deals with the management of multicast transmissions in Satellite-Long Term Evolution (S-LTE) networks. With the purpose of offering high session quality by exploiting multi-user diversity, we propose a framework for multicast subgroup formation in satellite networks. In particular, the idea at the basis of the proposed solution is to split the multicast satellite users into subgroups according to the experienced channel conditions. Such an approach allows multicast content to be conveyed without requiring additional data coding and guarantees high throughput and user fairness. The comparison between the results achieved through the designed framework and those in the literature demonstrates the effectiveness of the proposed approach for providing efficient multicast transmissions via satellite.
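
A toy version of channel-aware subgrouping: each user's channel supports some rate, a subgroup must transmit at its worst member's rate, and a single split point is chosen to maximize aggregate throughput. The rates are invented and the paper's S-LTE framework is considerably richer.

```python
rates = sorted([0.3, 0.4, 0.9, 1.5, 1.8, 2.0])    # per-user supported rates (Mbps)

def throughput(group):
    """A multicast subgroup is limited by its worst member's rate."""
    return min(group) * len(group) if group else 0.0

best = (throughput(rates), 0)                     # baseline: one subgroup
for k in range(1, len(rates)):                    # try every two-subgroup split
    total = throughput(rates[:k]) + throughput(rates[k:])
    best = max(best, (total, k))

total, k = best
print(f"split at {k}: aggregate throughput {total:.1f} Mbps")
```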