Institution
INESC-ID
Nonprofit • Lisbon, Portugal
About: INESC-ID is a nonprofit research organization based in Lisbon, Portugal. It is known for research contributions on the topics of Computer science and Context (language use). The organization has 932 authors who have published 2618 publications receiving 37658 citations.
Topics: Computer science, Context (language use), Field-programmable gate array, Control theory, Adaptive control
Papers published on a yearly basis
Papers
08 Mar 2010
TL;DR: The purpose of this paper is to present a novel programmable nanometer aging sensor that allows several levels of circuit failure prediction and exhibits low sensitivity to PVT (Process, power supply Voltage and Temperature) variations.
Abstract: Electronic systems for safety-critical automotive applications must operate for many years in harsh environments. Reliability issues are worsening with device scaling down, while performance and quality requirements are increasing. One of the key reliability issues is long-term performance degradation due to aging. For safe operation, aging monitoring should be performed on chip, namely using built-in aging sensors (activated from time to time). The purpose of this paper is to present a novel programmable nanometer aging sensor. The proposed aging sensor allows several levels of circuit failure prediction and exhibits low sensitivity to PVT (Process, power supply Voltage and Temperature) variations. Simulation results with a 65 nm sensor design are presented, that ascertain the usefulness of the proposed solution.
27 citations
TL;DR: A novel approach to static index pruning that takes into account the locality of word occurrences in the text is proposed; even an extremely simple locality-based pruning method can be competitive with complex methods that do not rely on locality information.
Abstract: This article discusses a novel approach developed for static index pruning that takes into account the locality of occurrences of words in the text. We use this new approach to propose and experiment on simple and effective pruning methods that allow a fast construction of the pruned index. The methods proposed here are especially useful for pruning in environments where the document database changes continuously, such as large-scale web search engines. Extensive experiments are presented showing that the proposed methods can achieve high compression rates while maintaining the quality of results for the most common query types present in modern search engines, namely, conjunctive and phrase queries. In the experiments, our locality-based pruning approach allowed reducing search engine indices to 30% of their original size, with almost no reduction in precision at the top answers. Furthermore, we conclude that even an extremely simple locality-based pruning method can be competitive when compared to complex methods that do not rely on locality information.
27 citations
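As a rough illustration of the locality idea above (not the authors' actual algorithm), a posting can be kept only when the term occurs near an occurrence of another term in the same document, so that co-located words useful for phrase queries survive pruning. The index shape and the window parameter below are hypothetical:

```python
# Hypothetical sketch of locality-based static index pruning:
# a (doc_id, positions) posting survives only if some occurrence of the
# term lies within `window` positions of a *different* term occurrence.

def prune_index(index, window=3):
    """index: {term: {doc_id: [positions]}} -> pruned copy."""
    # Collect, per document, the positions of all term occurrences.
    doc_positions = {}
    for postings in index.values():
        for doc_id, positions in postings.items():
            doc_positions.setdefault(doc_id, []).extend(positions)

    pruned = {}
    for term, postings in index.items():
        kept = {}
        for doc_id, positions in postings.items():
            others = doc_positions[doc_id]
            # Keep occurrences that have another occurrence nearby.
            near = [p for p in positions
                    if any(0 < abs(p - q) <= window for q in others)]
            if near:
                kept[doc_id] = near
        if kept:
            pruned[term] = kept
    return pruned
```

An isolated term (no neighbors within the window) loses its posting, while adjacent word pairs are retained, which matches the intuition that phrase-query evidence is concentrated where words cluster.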
26 Sep 2010
TL;DR: The most common triphone state clustering procedures for Gaussian models are compared and applied to a connectionist speech recognizer; the developed systems with clustered context-dependent triphones show above 20% relative word error rate reduction compared to a baseline hybrid system.
Abstract: Speech recognition based on connectionist approaches is one of the most successful alternatives to widespread Gaussian systems. One of the main claims against hybrid recognizers is the increased complexity for context-dependent phone modeling, which is a key aspect in medium to large size vocabulary tasks. In this paper, we investigate the use of context-dependent triphone models in a connectionist speech recognizer. Thus, most common triphone state clustering procedures for Gaussian models are compared and applied to our hybrid recognizer. The developed systems with clustered context-dependent triphones show above 20% relative word error rate reduction compared to a baseline hybrid system in two selected WSJ evaluation test sets. Additionally, the recent porting efforts of the proposed context modelling approaches to a LVCSR system for English Broadcast News transcription are reported. Index Terms: speech recognition, context modeling, connectionist system
27 citations
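Triphone state tying, which the paper above compares across clustering procedures, can be caricatured in a few lines: triphones whose left/right contexts answer the same phonetic questions share one clustered state, shrinking the model inventory. The question set and the `left-center+right` notation below are illustrative placeholders, not the procedures evaluated in the paper:

```python
# Toy phonetic-question clustering (hypothetical question set):
# triphones with the same center phone and the same question answers
# are tied to a single shared state.

QUESTIONS = [
    ("L-vowel", lambda left, right: left in {"a", "e", "i", "o", "u"}),
    ("R-nasal", lambda left, right: right in {"m", "n"}),
]

def cluster_key(triphone):
    """'a-b+n' -> (center phone, tuple of question answers)."""
    left, rest = triphone.split("-")
    center, right = rest.split("+")
    answers = tuple(q(left, right) for _, q in QUESTIONS)
    return (center, answers)

def tie_states(triphones):
    """Group triphones into tied-state clusters."""
    clusters = {}
    for t in triphones:
        clusters.setdefault(cluster_key(t), []).append(t)
    return clusters
```

In a real system the questions are chosen greedily by likelihood gain on a decision tree; here they are fixed purely to show the tying mechanics.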
31 Oct 2005
TL;DR: This paper shows how citation-based information and structural content can be combined to improve classification of text documents into predefined categories and indicates that GP can discover similarity functions superior to those based solely on a single type of evidence.
Abstract: This paper shows how citation-based information and structural content (e.g., title, abstract) can be combined to improve classification of text documents into predefined categories. We evaluate different measures of similarity -- five derived from the citation information of the collection, and three derived from the structural content -- and determine how they can be fused to improve classification effectiveness. To discover the best fusion framework, we apply Genetic Programming (GP) techniques. Our experiments with the ACM Computing Classification Scheme, using documents from the ACM Digital Library, indicate that GP can discover similarity functions superior to those based solely on a single type of evidence. Effectiveness of the similarity functions discovered through simple majority voting is better than that of content-based as well as combination-based Support Vector Machine classifiers. Experiments also were conducted to compare the performance between GP techniques and other fusion techniques such as Genetic Algorithms (GA) and linear fusion. Empirical results show that GP was able to discover better similarity functions than GA or other fusion techniques.
27 citations
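The fusion search described above can be sketched with a toy stand-in: instead of evolving expression trees with Genetic Programming, a fixed pool of candidate fusion functions is scored by how well the fused similarity separates same-class from cross-class document pairs. All names and data below are invented for illustration:

```python
# Toy similarity fusion (stand-in for GP search over fusion functions):
# each candidate combines a citation-based score c and a content-based
# score t; fitness is the margin between same-class and cross-class pairs.

CANDIDATES = {
    "citation only": lambda c, t: c,
    "content only": lambda c, t: t,
    "linear": lambda c, t: 0.5 * c + 0.5 * t,
    "product": lambda c, t: c * t,
}

def fitness(fuse, same_pairs, diff_pairs):
    same = sum(fuse(c, t) for c, t in same_pairs) / len(same_pairs)
    diff = sum(fuse(c, t) for c, t in diff_pairs) / len(diff_pairs)
    return same - diff  # larger margin = better separation

def best_fusion(same_pairs, diff_pairs):
    return max(CANDIDATES,
               key=lambda name: fitness(CANDIDATES[name],
                                        same_pairs, diff_pairs))
```

A GP system would search a much larger space of expression trees rather than this fixed pool, but the selection-by-fitness loop is the same in spirit.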
TL;DR: The main goal of this work is to study the impact of errors in earlier modules on the last modules of the pipeline system, which includes audio preprocessing, speech recognition, and topic segmentation and indexation.
Abstract: This paper describes ongoing work on selective dissemination of broadcast news. Our pipeline system includes several modules: audio preprocessing, speech recognition, and topic segmentation and indexation. The main goal of this work is to study the impact of errors in earlier modules on the later ones. The impact of audio preprocessing errors on the speech recognition module is quite small, but quite significant in terms of topic segmentation. On the other hand, the impact of speech recognition errors on the topic segmentation and indexation modules is almost negligible. Diagnosing the errors in these modules is a very important step toward improving the prototype of the media watch system described in this paper.
27 citations
Authors
Showing all 967 results
Name | H-index | Papers | Citations |
---|---|---|---|
João Carvalho | 126 | 1278 | 77017 |
Jaime G. Carbonell | 72 | 496 | 31267 |
Chris Dyer | 71 | 240 | 32739 |
Joao P. S. Catalao | 68 | 1039 | 19348 |
Muhammad Bilal | 63 | 720 | 14720 |
Alan W. Black | 61 | 413 | 19215 |
João Paulo Teixeira | 60 | 636 | 19663 |
Bhiksha Raj | 51 | 359 | 13064 |
Joao Marques-Silva | 48 | 289 | 9374 |
Paulo Flores | 48 | 321 | 7617 |
Ana Paiva | 47 | 472 | 9626 |
Miadreza Shafie-khah | 47 | 450 | 8086 |
Susana Cardoso | 44 | 400 | 7068 |
Mark J. Bentum | 42 | 226 | 8347 |
Joaquim Jorge | 41 | 290 | 6366 |