
Showing papers by "INESC-ID" published in 2014


Journal ArticleDOI
TL;DR: All regulatory associations stored in the YEASTRACT information system were revisited, and new information was added on the experimental conditions in which those associations take place and on whether each TF acts on its target genes as activator or repressor, allowing queries to select specific environmental conditions, experimental evidence or positive/negative regulatory effects.
Abstract: The YEASTRACT (http://www.yeastract.com) information system is a tool for the analysis and prediction of transcription regulatory associations in Saccharomyces cerevisiae. Last updated in June 2013, this database contains over 200,000 regulatory associations between transcription factors (TFs) and target genes, including 326 DNA binding sites for 113 TFs. All regulatory associations stored in YEASTRACT were revisited and new information was added on the experimental conditions in which those associations take place and on whether the TF is acting on its target genes as activator or repressor. Based on this information, new queries were developed allowing the selection of specific environmental conditions, experimental evidence or positive/negative regulatory effect. This release further offers tools to rank the TFs controlling a gene or genome-wide response by their relative importance, based on (i) the percentage of target genes in the data set; (ii) the enrichment of the TF regulon in the data set when compared with the genome; or (iii) the score computed using the TFRank system, which selects and prioritizes the relevant TFs by walking through the yeast regulatory network. We expect that with the new data and services made available, the system will continue to be instrumental for yeast biologists and systems biology researchers.
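
As an illustration of the ranking criteria listed above, the sketch below ranks hypothetical TFs for a gene set by the fraction of the set they target and by hypergeometric enrichment of their regulon against the genome. This is a minimal sketch of the general idea only, not the TFRank algorithm or the YEASTRACT implementation; the regulon contents and genome size are illustrative placeholders.

```python
# Hedged sketch (not the YEASTRACT implementation): rank TFs for a gene set by
# (i) the fraction of the set they target and (ii) hypergeometric enrichment of
# the TF regulon in the set versus the genome. Regulon data are placeholders.
from scipy.stats import hypergeom

GENOME_SIZE = 6600                      # approximate number of S. cerevisiae genes
regulons = {                            # hypothetical TF -> target-gene sets
    "Msn2": {"CTT1", "HSP12", "TPS1", "GRE1"},
    "Yap1": {"TRX2", "GSH1", "CTT1"},
}

def rank_tfs(gene_set, regulons, genome_size=GENOME_SIZE):
    """Return TFs sorted by the enrichment p-value of their regulon in gene_set."""
    ranking = []
    for tf, targets in regulons.items():
        hits = len(gene_set & targets)
        if hits == 0:
            continue
        coverage = hits / len(gene_set)   # criterion (i): share of the data set targeted
        # criterion (ii): P(X >= hits) when drawing len(gene_set) genes from the
        # genome, of which len(targets) are regulated by this TF
        p_value = hypergeom.sf(hits - 1, genome_size, len(targets), len(gene_set))
        ranking.append((tf, coverage, p_value))
    return sorted(ranking, key=lambda r: r[2])

print(rank_tfs({"CTT1", "HSP12", "TRX2", "PGK1"}, regulons))
```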

248 citations


Proceedings ArticleDOI
01 Jun 2014
TL;DR: It is shown that it is possible to reliably discriminate whether a syntactic construction is meant literally or metaphorically using lexical semantic features of the words that participate in the construction.
Abstract: We show that it is possible to reliably discriminate whether a syntactic construction is meant literally or metaphorically using lexical semantic features of the words that participate in the construction. Our model is constructed using English resources, and we obtain state-of-the-art performance relative to previous work in this language. Using a model transfer approach by pivoting through a bilingual dictionary, we show our model can identify metaphoric expressions in other languages. We provide results on three new test sets in Spanish, Farsi, and Russian. The results support the hypothesis that metaphors are conceptual, rather than lexical, in nature.

217 citations


Journal ArticleDOI
TL;DR: In this paper, a review of the literature covering the various types of interfaces developed for electrochemical energy storage systems is presented, including standard, multilevel and multiport technology.

203 citations


Journal ArticleDOI
01 Oct 2014-Energy
TL;DR: In this paper, a comprehensive study and analysis of energy storage technologies' main assets, research issues, global market figures, economic benefits and technical applications is provided for islanded systems and micro-grids, addressing their particular requirements, the most appropriate technologies and existing operating projects throughout the world.

201 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze the current situation of pellet production, mainly from mixed biomass types, and the possible uses of these pellets, with the main emphasis on a review of the different combustion processes.

126 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an Energy Management Maturity Model that can be used to guide organizations in their energy management implementation efforts to incrementally achieve compliance with energy management standards such as ISO 50001.

105 citations



Journal ArticleDOI
TL;DR: In this article, a load frequency control (LFC) problem is studied using different optimization algorithms for two types of power system configurations: (i) a hybrid configuration of a thermal power system (TPS) integrated with DG, comprising wind turbine generators (WTGs), diesel engine generators (DEGs), fuel cells (FCs), an aqua-electrolyzer (AE) and a battery energy storage system (BESS); and (ii) a two-area interconnected power system with DG connected in area-1.

87 citations


Proceedings Article
01 May 2014
TL;DR: The corpus is composed of several sequences obtained by convolution of dry acoustic events with more than 9000 impulse responses measured in a real apartment equipped with 40 microphones, making it suitable for various multi-microphone signal processing and distant speech recognition tasks.
Abstract: This paper describes a multi-microphone multi-language acoustic corpus being developed under the EC project Distant-speech Interaction for Robust Home Applications (DIRHA). The corpus is composed of several sequences obtained by convolution of dry acoustic events with more than 9000 impulse responses measured in a real apartment equipped with 40 microphones. The acoustic events include in-domain sentences of different typologies uttered by native speakers in four different languages and non-speech events representing typical domestic noises. To increase the realism of the resulting corpus, background noises were recorded in the real home environment and then added to the generated sequences. The purpose of this work is to describe the simulation procedure and the data sets that were created and used to derive the corpus. The corpus contains signals of different characteristics making it suitable for various multi-microphone signal processing and distant speech recognition tasks.
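
The core of the simulation procedure described above is the contamination of dry recordings with measured impulse responses and recorded background noise. The sketch below shows that step in a minimal form, assuming mono WAV files at a common sampling rate; the file names and target SNR are illustrative and not part of the DIRHA tools.

```python
# Hedged sketch of the basic contamination step described in the abstract:
# convolve a dry acoustic event with a measured room impulse response and add
# recorded background noise. File names and scaling are illustrative only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, dry = wavfile.read("dry_event.wav")          # dry (close-talk) recording
_, rir = wavfile.read("impulse_response.wav")    # impulse response for one microphone
_, noise = wavfile.read("background_noise.wav")  # noise recorded in the apartment

dry = dry.astype(np.float64)
rir = rir.astype(np.float64)
noise = noise.astype(np.float64)

reverberant = fftconvolve(dry, rir)              # simulate the distant-microphone signal
n = min(len(reverberant), len(noise))
snr_db = 10                                      # illustrative signal-to-noise ratio
gain = np.sqrt(np.mean(reverberant[:n] ** 2) /
               (np.mean(noise[:n] ** 2) * 10 ** (snr_db / 10)))
simulated = reverberant[:n] + gain * noise[:n]   # noisy, reverberated sequence

wavfile.write("simulated_mic.wav", fs,
              (simulated / np.max(np.abs(simulated))).astype(np.float32))
```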

79 citations


Proceedings ArticleDOI
20 Oct 2014
TL;DR: In this article, the authors compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains, showing that FPGAs are increasing the gap to GPUs and many-core CPUs, moving them away from high-performance computing with intensive floating-point calculations.
Abstract: Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGA). General-Purpose Graphics Processing Units (GPGPU) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that FPGAs are increasing the gap to GPUs and many-core CPUs, moving them away from high-performance computing with intensive floating-point calculations. FPGAs become competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems.

77 citations


Journal ArticleDOI
TL;DR: In this article, the performance of linear, decoupled and direct power controllers (DPC) for three-phase matrix converters operating as unified power flow controllers (UPFC) is compared.
Abstract: This paper presents the design and compares the performance of linear, decoupled and direct power controllers (DPC) for three-phase matrix converters operating as unified power flow controllers (UPFC). A simplified steady-state model of the matrix converter-based UPFC fitted with a modified Venturini high-frequency pulse width modulator is first used to design the linear controllers for the transmission line active (P) and reactive (Q) powers. In order to minimize the resulting cross coupling between P and Q power controllers, decoupled linear controllers (DLC) are synthesized using inverse dynamics linearization. DPC are then developed using sliding-mode control techniques, in order to guarantee both robustness and decoupled control. The designed P and Q power controllers are compared using simulations and experimental results. Linear controllers show acceptable steady-state behavior but still exhibit coupling between P and Q powers in transient operation. DLC are free from cross coupling but are parameter sensitive. Results obtained by DPC show decoupled power control with zero error tracking and faster responses with no overshoot and no steady-state error. All the designed controllers were implemented using the same digital signal processing hardware.

Journal Article
TL;DR: A new and elegant proof technique for showing lower bounds in QBF proof systems based on strategy extraction is exhibited, providing a direct transfer of circuit lower bounds to proof-length lower bounds.
Abstract: Proof systems for quantified Boolean formulas (QBFs) provide a theoretical underpinning for the performance of important QBF solvers. However, the proof complexity of these proof systems is currently not well understood and, in particular, lower bound techniques are missing. In this paper we exhibit a new and elegant proof technique for showing lower bounds in QBF proof systems based on strategy extraction. This technique provides a direct transfer of circuit lower bounds to proof-length lower bounds. We use our method to show the hardness of a natural class of parity formulas for Q-resolution and universal Q-resolution. Variants of the formulas are hard for even stronger systems such as long-distance Q-resolution and its extensions. With a completely different lower bound argument we show the hardness of the prominent formulas of Kleine Büning et al. [34] for the strong expansion-based calculus IR-calc. Our lower bounds imply new exponential separations between two different types of resolution-based QBF calculi: proof systems for CDCL-based solvers (Q-resolution, long-distance Q-resolution) and proof systems for expansion-based solvers (∀Exp+Res and its generalizations IR-calc and IRM-calc). The relations between proof systems from the two different classes were not known before.

Proceedings ArticleDOI
20 Oct 2014
TL;DR: Teachers saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment.
Abstract: In this paper, we describe the results of an interview study conducted across several European countries on teachers' views on the use of empathic robotic tutors in the classroom. The main goals of the study were to elicit teachers' thoughts on the integration of the robotic tutors in the daily school practice, understanding the main roles that these robots could play and gather teachers' main concerns about this type of technology. Teachers' concerns were much related to the fairness of access to the technology, robustness of the robot in students' hands and disruption of other classroom activities. They saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment. The implications of these results are discussed in relation to teacher acceptance of ubiquitous technologies in general and robots in particular.

Journal ArticleDOI
TL;DR: In this paper, a Stochastic Network Constrained Unit Commitment associated with Demand Response Programs (SNCUCDR) is presented to schedule both generation units and responsive loads in power systems with high penetration of wind power.

Journal ArticleDOI
TL;DR: In this article, the singular value decomposition (SVD) was used to derive a model-order reduction of the electromagnetic scattering problem, where the inputs are current distributions operating in the presence of a scatterer, and the outputs are their corresponding scattered fields.
Abstract: We consider model-order reduction of systems occurring in electromagnetic scattering problems, where the inputs are current distributions operating in the presence of a scatterer, and the outputs are their corresponding scattered fields. Using the singular-value decomposition (SVD), we formally derive minimal-order models for such systems. We then use a discrete empirical interpolation method (DEIM) to render the minimal-order models more suitable to numerical computation. These models consist of a set of elementary sources and a set of observation points, both interior to the scatterer, and located automatically by the DEIM. A single matrix then maps the values of any incident field at the observation points to the amplitudes of the sources needed to approximate the corresponding scattered field. Similar to a Green's function, these models can be used to quickly analyze the interaction of the scatterer with other nearby scatterers or antennas.
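
The sketch below illustrates only the SVD truncation step behind such minimal-order models, using a random placeholder matrix in place of the discretized scattering operator; the DEIM-based selection of sources and observation points from the paper is not reproduced.

```python
# Hedged sketch of SVD-based model-order reduction (placeholder data, not the
# paper's DEIM construction). 'transfer' stands in for the dense matrix mapping
# incident-field samples to scattered-field samples; here it is random.
import numpy as np

rng = np.random.default_rng(0)
transfer = rng.standard_normal((400, 300))        # placeholder scattering operator

U, s, Vt = np.linalg.svd(transfer, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1      # keep 99.99% of the energy

# Rank-r reduced model: incident field samples -> scattered field samples
reduced_map = (U[:, :r] * s[:r]) @ Vt[:r, :]

incident = rng.standard_normal(300)               # samples of an incident field
scattered_full = transfer @ incident
scattered_reduced = reduced_map @ incident
print(r, np.linalg.norm(scattered_full - scattered_reduced) /
         np.linalg.norm(scattered_full))
```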

Proceedings ArticleDOI
08 Dec 2014
TL;DR: This work proposes a cloud-native industrial EMS solution with cloud computing capabilities that is expected to generate useful knowledge in a shorter time period, enabling organizations to react quicker to changes of events and detect hidden patterns that compromise efficiency.
Abstract: Industrial organizations use Energy Management Systems (EMS) to monitor, control, and optimize their energy consumption. Industrial EMS are complex and expensive systems due to the unique requirements of performance, reliability, and interoperability. Moreover, industry is facing challenges with current EMS implementations such as cross-site monitoring of energy consumption and CO2 emissions, integration between energy and production data, and meaningful energy efficiency benchmarking. Additionally, big data has emerged because of recent advances in field instrumentation that led to the generation of large quantities of machine data, with much more detail and higher sampling rates. This created a challenge for real-time analytics. In order to address all these needs and challenges, we propose a cloud-native industrial EMS solution with cloud computing capabilities. Through this innovative approach we expect to generate useful knowledge in a shorter time period, enabling organizations to react quicker to changes of events and detect hidden patterns that compromise efficiency.

Journal ArticleDOI
M.M. Silva, Luis B. Oliveira
TL;DR: Simulation and experimental results confirm that the RC-G circuit is suitable for implementation of the TIAs in the front-end of a PET scanner using SiPMs at the input.
Abstract: A transimpedance amplifier (TIA) in the front-end of a radiation detector is required to convert the current pulse produced by a light-detector to a voltage pulse with amplitude and shape suitable for the subsequent processing. We consider in this paper the specifications of a positron emission tomography (PET) scanner for medical imaging. The conventional approach is to use an avalanche photo-diode (APD) as the light-detector and a feedback TIA. We point out here that, when the APD is replaced by the more recent silicon photomultiplier (SiPM), a feedback TIA is not suitable, and we propose the use of a regulated common-gate (RC-G) TIA. We derive the transimpedance function of the RC-G TIA considering the parasitic capacitances that have a dominant effect on the pulse shaping. We use the result obtained to establish TIA design guidelines, and we show that these should be different with an APD and with a SiPM at the input. We identify the dominant noise source in the RC-G TIA, and we derive a closed form equation for the output noise rms voltage. A prototype TIA was designed for UMC 130 nm CMOS technology. We present simulation and experimental results that confirm that the RC-G circuit is suitable for implementation of the TIAs in the front-end of a PET scanner using SiPMs at the input.

Proceedings ArticleDOI
29 Mar 2014
TL;DR: Results suggest that mid-air interactions, combining direct manipulation with six degrees of freedom for the dominant hand, are both more satisfying and efficient than the alternatives tested.
Abstract: Stereoscopic tabletops offer unique visualization capabilities, enabling users to perceive virtual objects as if they were lying above the surface. While allowing virtual objects to coexist with user actions in the physical world, interaction with these virtual objects above the surface presents interesting challenges. In this paper, we aim to understand which approaches to 3D virtual object manipulations are suited to this scenario. To this end, we implemented five different techniques based on the literature. Four are mid-air techniques, while the remainder relies on multi-touch gestures, which act as a baseline. Our setup combines affordable non-intrusive tracking technologies with a multi-touch stereo tabletop, providing head and hands tracking, to improve both depth perception and seamless interactions above the table. We conducted a user evaluation to find out which technique appealed most to participants. Results suggest that mid-air interactions, combining direct manipulation with six degrees of freedom for the dominant hand, are both more satisfying and efficient than the alternatives tested.

Journal ArticleDOI
01 May 2014-Energy
TL;DR: In this article, the optimal weekly scheduling of a pumped storage hydro (PSH) unit in a liberalized electricity market was investigated under both price-taker and price-maker contexts.

Journal ArticleDOI
TL;DR: This work aims at investigating conflicts in HBAS and creating a solution to detect and resolve them, proposing a formal framework based on constraint solving that enables detecting and resolving conflict situations automatically.

Journal ArticleDOI
TL;DR: A new hybrid evolutionary-adaptive methodology for short-term electricity price forecasting, i.e., between 24 and 168 h ahead, is proposed, successfully combining mutual information, the wavelet transform, evolutionary particle swarm optimization, and the adaptive neuro-fuzzy inference system.

Proceedings ArticleDOI
19 Oct 2014
TL;DR: This paper presents a study in which data on student performance and gaming preferences from a gamified engineering course were collected and analyzed; cluster analysis was performed to understand what different kinds of students could be observed in the authors' gamified experience and how their behavior correlates with their gaming characteristics.
Abstract: Gamified education is a novel concept, and early trials show its potential to engage students and improve their performance. However, little is known about how different students learn with gamification, and how their gaming habits influence their experience. In this paper we present a study where data regarding student performance and gaming preferences, from a gamified engineering course, was collected and analyzed. We performed cluster analysis to understand what different kinds of students could be observed in our gamified experience, and how their behavior could be correlated to their gaming characteristics. We identified four main student types: the Achievers, the Regular students, the Halfhearted students, and the Underachievers, all representing different strategies towards the course and with different gaming preferences. Here we will thoroughly describe each student type and address how different gaming preferences might have impacted the students' learning experience.
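
As a rough illustration of the kind of cluster analysis described, the sketch below runs k-means with four clusters on standardized student features; the feature set, data and library choice are assumptions, not the authors' pipeline.

```python
# Hedged sketch of the kind of cluster analysis described in the abstract
# (not the authors' actual pipeline). Feature names and data are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# columns: course grade, experience points, weekly gaming hours, achievement preference
students = rng.random((80, 4))

features = StandardScaler().fit_transform(students)        # put features on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)   # four student types, as in the study
labels = kmeans.fit_predict(features)

for cluster in range(4):
    print(f"cluster {cluster}: {np.sum(labels == cluster)} students, "
          f"centroid = {np.round(kmeans.cluster_centers_[cluster], 2)}")
```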

Journal ArticleDOI
TL;DR: The proposed dynamic load balancing algorithm allows efficient and concurrent video encoding across several heterogeneous devices by relying on realistic run-time performance modeling and module-device execution affinities when distributing the computations.
Abstract: The high computational demands and overall encoding complexity make the processing of high definition video sequences hard to be achieved in real-time. In this manuscript, we target an efficient parallelization and RD performance analysis of H.264/AVC inter-loop modules and their collaborative execution in hybrid multi-core CPU and multi-GPU systems. The proposed dynamic load balancing algorithm allows efficient and concurrent video encoding across several heterogeneous devices by relying on realistic run-time performance modeling and module-device execution affinities when distributing the computations. Due to an online adjustment of load balancing decisions, this approach is also self-adaptable to different execution scenarios. Experimental results show the proposed algorithm's ability to achieve real-time encoding for different resolutions of high-definition sequences in various heterogeneous platforms. Speed-up values of up to 2.6 were obtained when compared to the video inter-loop encoding on a single GPU device, and up to 8.5 when compared to a highly optimized multi-core CPU execution. Moreover, the proposed algorithm also provides an automatic tuning of the encoding parameters, in order to meet strict encoding constraints.
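
A minimal sketch of throughput-proportional load distribution is shown below; it assumes per-device throughput is re-measured at run time and is not the authors' scheduler, which additionally models module-device execution affinities.

```python
# Hedged sketch of proportional load distribution driven by measured throughput,
# in the spirit of the run-time performance modelling described in the abstract
# (not the authors' actual scheduler). Device names and rates are illustrative.
def split_workload(total_macroblocks, measured_rates):
    """Split a frame's macroblocks across devices in proportion to their
    most recently measured throughput (macroblocks per second)."""
    total_rate = sum(measured_rates.values())
    shares = {dev: int(total_macroblocks * rate / total_rate)
              for dev, rate in measured_rates.items()}
    # assign any rounding remainder to the fastest device
    fastest = max(measured_rates, key=measured_rates.get)
    shares[fastest] += total_macroblocks - sum(shares.values())
    return shares

# Rates would be re-measured every frame, making the split self-adaptive.
print(split_workload(8160, {"cpu": 1200.0, "gpu0": 5400.0, "gpu1": 5100.0}))
```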

Book ChapterDOI
23 Apr 2014
TL;DR: A novel algorithm for tree-based GP is presented that incorporates some ideas on the representation of the solution space in higher dimensions, laying foundations for addressing multi-class classification problems with GP that may lead to further research in this direction.
Abstract: Classification problems are of profound interest for the machine learning community as well as to an array of application fields. However, multi-class classification problems can be very complex, in particular when the number of classes is high. Although very successful in so many applications, GP was never regarded as a good method to perform multi-class classification. In this work, we present a novel algorithm for tree based GP, that incorporates some ideas on the representation of the solution space in higher dimensions. This idea lays some foundations on addressing multi-class classification problems using GP, which may lead to further research in this direction. We test the new approach on a large set of benchmark problems from several different sources, and observe its competitiveness against the most successful state-of-the-art classifiers.

Journal ArticleDOI
TL;DR: This article reviews how constant multiplications can be designed using shifts and adders/subtractors that are maximally shared through a high-level synthesis algorithm based on given optimization criteria, and shows how the constant multiplications in each filter form can be realized under a multiplierless shift-adds architecture.
Abstract: Finite impulse response (FIR) filtering is a ubiquitous operation in digital signal processing systems and is generally implemented in full custom circuits due to high-speed and low-power design requirements. The complexity of an FIR filter is dominated by the multiplication of a large number of filter coefficients by the filter input or its time-shifted versions. Over the years, many high-level synthesis algorithms and filter architectures have been introduced in order to design FIR filters efficiently. This article reviews how constant multiplications can be designed using shifts and adders/subtractors that are maximally shared through a high-level synthesis algorithm based on some optimization criteria. It also presents different forms of FIR filters, namely, direct, transposed, and hybrid and shows how constant multiplications in each filter form can be realized under a shift-adds architecture. More importantly, it explores the impact of the multiplierless realization of each filter form on area, delay, and power dissipation of both custom (ASIC) and reconfigurable (FPGA) circuits by carrying out experiments with different bitwidths of filter input, design libraries, reconfigurable target devices, and optimization criteria in high-level synthesis algorithms.
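
To make the shift-adds idea concrete, the sketch below replaces two constant multiplications with shifts and additions/subtractions and shares an intermediate term; the decompositions are hand-picked examples rather than the output of a high-level synthesis algorithm.

```python
# Hedged illustration of the shift-adds idea: a constant multiplication is
# replaced by shifts and additions/subtractions, with intermediate terms shared.
def mul7(x):    # 7x  = 8x - x                  -> one shift, one subtraction
    return (x << 3) - x

def mul45(x):   # 45x = 9x + 36x = 9x + (9x << 2), sharing the 9x term
    t = (x << 3) + x          # 9x, the shared intermediate term
    return t + (t << 2)       # 9x + 36x = 45x

for x in range(-16, 17):
    assert mul7(x) == 7 * x and mul45(x) == 45 * x
print("shift-add decompositions verified")
```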

Journal ArticleDOI
TL;DR: The results show that Geometric Selective Harmony Search outperforms the other studied methods with statistical significance in almost all the considered benchmark problems.

Journal ArticleDOI
TL;DR: In this article, a magneto-resistive (MR) sensor-based EC probe was designed for the detection and characterization of surface breaking defects and two compatible techniques targeting the reduction of the inductive coupling on the measured signals and thus enabling higher frequency operation were evaluated.
Abstract: Magneto-resistive (MR) sensors have been applied for eddy current testing (ECT), usually in the inspection of buried defects (at low frequency operation), but they can also be used for the detection of surface breaking defects using higher frequencies. Although the MR sensors' bandwidth is high (up to hundreds of MHz), the operating frequency of eddy current probes using these MR sensors is usually much more limited. The presence of inductive coupling in the sensors' interconnections results in an additional voltage contribution to the measured signal whose frequency is equal to that of the primary magnetic field and whose amplitude therefore increases with frequency. Depending on the probe design, this undesired voltage contribution can surpass the sensor response when the selected operating frequency is moderately high. In this paper, an MR sensor-based EC probe designed for the detection and characterization of surface breaking defects is presented. The probe and the measurement setup were designed to evaluate two compatible techniques targeting the reduction of the inductive coupling on the measured signals, thus enabling higher frequency operation. One measurement technique relies on using two sensors in a differential measurement, while the other employs heterodyning principles to generate a low frequency component that recovers the magnetic field detected by the sensors. The employed techniques improved the signal-to-noise ratio at which defects can be detected by more than 20 dB and the relative variation of the measured signal by around 70 times when operating at 100 kHz. An application result in friction stir welding samples demonstrates the ability to detect crack defects with depth around 400 μm.
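
The heterodyning principle mentioned above can be sketched as mixing the sensor signal with a reference offset in frequency and low-pass filtering the product, which shifts the information of interest away from the frequency at which the inductive pick-up appears; all frequencies and amplitudes below are illustrative, not the paper's measurement setup.

```python
# Hedged sketch of the heterodyning principle: mixing the high-frequency sensor
# signal with a slightly offset reference moves the quantity of interest down to
# a low frequency where inductive pick-up is negligible. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10e6                         # sampling rate (Hz)
f_exc = 100e3                     # excitation / eddy-current frequency (Hz)
f_ref = 99e3                      # reference offset so the mixed product sits at 1 kHz
t = np.arange(0, 0.01, 1 / fs)

sensor = 0.2 * np.cos(2 * np.pi * f_exc * t + 0.3)   # field seen by the MR sensor
reference = np.cos(2 * np.pi * f_ref * t)

mixed = sensor * reference        # products at f_exc - f_ref (1 kHz) and f_exc + f_ref
b, a = butter(4, 5e3 / (fs / 2))  # low-pass keeps only the 1 kHz difference component
low_freq = filtfilt(b, a, mixed)

# the 1 kHz component has half the sensor amplitude (mixing loss); undo that factor
amplitude = 2 * np.sqrt(2) * np.std(low_freq[len(t) // 4:])
print("recovered sensor amplitude ~", round(float(amplitude), 3))
```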

Book ChapterDOI
25 Aug 2014
TL;DR: A novel calculus is defined, which is resolution-based and enables unification of the principal existing resolution-based QBF calculi, namely Q-resolution, long-distance Q-resolution and the expansion-based calculus ∀Exp+Res.
Abstract: Several calculi for quantified Boolean formulas (QBFs) exist, but relations between them are not yet fully understood. This paper defines a novel calculus, which is resolution-based and enables unification of the principal existing resolution-based QBF calculi, namely Q-resolution, long-distance Q-resolution and the expansion-based calculus ∀Exp+Res. All these calculi play an important role in QBF solving. This paper shows simulation results for the new calculus and some of its variants. Further, we demonstrate how to obtain winning strategies for the universal player from proofs in the calculus. We believe that this new proof system provides an underpinning necessary for formal analysis of modern QBF solvers.

Journal ArticleDOI
TL;DR: The annotation of epidemiology resources with EPO will help researchers to gain a better understanding of global epidemiological events by enhancing data integration and sharing.
Abstract: Epidemiology is a data-intensive and multi-disciplinary subject, where data integration, curation and sharing are becoming increasingly relevant, given its global context and time constraints. The semantic annotation of epidemiology resources is a cornerstone to effectively support such activities. Although several ontologies cover some of the subdomains of epidemiology, we identified a lack of semantic resources for epidemiology-specific terms. This paper addresses this need by proposing the Epidemiology Ontology (EPO) and by describing its integration with other related ontologies into a semantic enabled platform for sharing epidemiology resources. The EPO follows the OBO Foundry guidelines and uses the Basic Formal Ontology (BFO) as an upper ontology. The first version of EPO models several epidemiology and demography parameters as well as transmission of infection processes, participants and related procedures. It currently has nearly 200 classes and is designed to support the semantic annotation of epidemiology resources and data integration, as well as information retrieval and knowledge discovery activities. EPO is under active development and is freely available at https://code.google.com/p/epidemiology-ontology/ . We believe that the annotation of epidemiology resources with EPO will help researchers to gain a better understanding of global epidemiological events by enhancing data integration and sharing.

Journal ArticleDOI
TL;DR: In this article, the authors describe the development of a wireless sensor network (WSN) capable of collecting geophysical measurements on remote active volcanoes; the WSN can be used by recording data locally for later analysis or by continuously transmitting the data in real time to a remote laboratory.
Abstract: . Monitoring of volcanic activity is important for learning about the properties of each volcano and for providing early warning systems to the population. Monitoring equipment can be expensive, and thus the degree of monitoring varies from volcano to volcano and from country to country, with many volcanoes not being monitored at all. This paper describes the development of a wireless sensor network (WSN) capable of collecting geophysical measurements on remote active volcanoes. Our main goals were to create a flexible, easy-to-deploy and easy-to-maintain, adaptable, low-cost WSN for temporary or permanent monitoring of seismic tremor. The WSN enables the easy installation of a sensor array in an area of tens of thousands of m2, allowing the location of the magma movements causing the seismic tremor to be calculated. This WSN can be used by recording data locally for later analysis or by continuously transmitting it in real time to a remote laboratory for real-time analyses. We present a set of tests that validate different aspects of our WSN, including a deployment on a suspended bridge for measuring its vibration.