Institution

INESC-ID

Nonprofit, Lisbon, Portugal
About: INESC-ID is a nonprofit organization based in Lisbon, Portugal. It is known for its research contributions in the topics of Computer science & Context (language use). The organization has 932 authors who have published 2618 publications receiving 37658 citations.


Papers
Proceedings ArticleDOI
04 Dec 2007
TL;DR: This paper proposes MH-MAC, a new MAC protocol for wireless sensor networks that handles applications generating infrequent, large traffic peaks, and presents simulation results on the energy consumption, latency and throughput of the MH-MAC operation modes.
Abstract: This paper proposes MH-MAC, a new MAC protocol for wireless sensor networks capable of handling applications that generate infrequent, large traffic peaks. Existing protocols are not suited to this kind of application. Asynchronous protocols are energy efficient during the long inactive periods, but fail to meet the bandwidth and latency requirements of the traffic peaks when more than two nodes send data to a common sink. Synchronous protocols that support contention-free slots provide good throughput for handling the load peaks, but consume unnecessary energy keeping clocks synchronized over very long idle periods. MH-MAC is a multimode hybrid protocol that the application can configure to run in asynchronous mode or in synchronous mode, with or without contention, providing the best possible trade-off. MH-MAC is a single-hop MAC that supports multi-hop applications through a cross-layering API. The paper includes simulation results for the energy consumption, latency and throughput of the MH-MAC operation modes, showing the asynchronous-synchronous trade-offs and the state-transition overhead.
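
As a rough illustration of the multimode idea only (the mode names, thresholds and function below are hypothetical, not MH-MAC's actual cross-layer API), a mode-selection hint from the application might look like this:

```python
# Hypothetical sketch of the mode-selection idea behind a multimode hybrid MAC:
# names, thresholds and the function signature are illustrative assumptions,
# not MH-MAC's real interface.
from enum import Enum, auto

class MacMode(Enum):
    ASYNC = auto()                 # low duty cycle, cheapest during long idle periods
    SYNC_CONTENTION = auto()       # synchronized slots with contention-based access
    SYNC_CONTENTION_FREE = auto()  # synchronized, reserved slots for traffic peaks

def choose_mode(expected_senders: int, expected_load_pkts_per_s: float) -> MacMode:
    """Pick an operating mode from an application-provided traffic hint."""
    if expected_load_pkts_per_s < 1.0:
        return MacMode.ASYNC                 # mostly idle: save energy
    if expected_senders <= 2:
        return MacMode.SYNC_CONTENTION       # light peak: contention suffices
    return MacMode.SYNC_CONTENTION_FREE      # many senders to one sink: reserve slots

# Example: an application expecting a burst from 8 sensors towards a common sink
print(choose_mode(expected_senders=8, expected_load_pkts_per_s=50.0))
```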

57 citations

Journal ArticleDOI
TL;DR: A fuzzy load allocation is generated and then corrected by a fuzzy state estimator procedure to produce a crisp, power-flow-compatible set of load allocations coherent with the real-time measurements recorded in the SCADA.
Abstract: This paper describes a load allocation model to be used in a distribution management system (DMS) environment. A process of rough allocation is initiated, based on information from actual measurements and on data about installed capacity and power and energy consumption at LV substations. This process generates a fuzzy load allocation, which is then corrected by a fuzzy state estimator procedure in order to produce a crisp, power-flow-compatible set of load allocations coherent with the available real-time measurements recorded in the SCADA.
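
A minimal Python sketch of the allocation-then-correction idea, assuming triangular fuzzy loads and a simple proportional correction in place of the paper's fuzzy state estimator (all numbers are invented):

```python
# Each LV substation load is a triangular fuzzy number (min, central, max) built
# from billing and installed-capacity data; the central values are then rescaled
# so that their sum matches a real-time SCADA measurement at the feeder head.
# This proportional correction is a simplification used only to illustrate how a
# fuzzy allocation becomes a crisp, measurement-consistent one.

fuzzy_loads_kw = {            # (min, central, max) per LV substation -- invented data
    "LV-01": (80.0, 120.0, 160.0),
    "LV-02": (40.0, 70.0, 110.0),
    "LV-03": (150.0, 210.0, 260.0),
}
scada_feeder_kw = 430.0       # real-time measurement at the feeder head

central_sum = sum(c for _, c, _ in fuzzy_loads_kw.values())
crisp_loads_kw = {name: c * scada_feeder_kw / central_sum
                  for name, (_, c, _) in fuzzy_loads_kw.items()}

print(crisp_loads_kw)                  # crisp allocations coherent with the measurement
print(sum(crisp_loads_kw.values()))    # 430.0, matching the SCADA value
```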

57 citations

Journal ArticleDOI
TL;DR: In this paper, a Stochastic Network Constrained Unit Commitment associated with Demand Response Programs (SNCUCDR) is presented to schedule both generation units and responsive loads in power systems with high penetration of wind power.
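
For orientation only, a generic two-stage stochastic unit-commitment objective with demand response is sketched below; the symbols and structure are standard textbook choices and are not claimed to be the paper's exact SNCUCDR formulation.

```latex
% Generic stochastic unit commitment with demand response (illustrative only):
% commitment/start-up costs plus scenario-weighted dispatch and DR costs,
% subject to a per-scenario power balance with wind realisations.
\[
\begin{aligned}
\min_{u,y,p,d}\;
  & \sum_{t}\sum_{i}\bigl(c^{\mathrm{SU}}_{i}\,y_{i,t} + c^{\mathrm{NL}}_{i}\,u_{i,t}\bigr)
    + \sum_{s}\pi_{s}\sum_{t}\Bigl(\sum_{i} c_{i}\,p_{i,t,s}
    + \sum_{j} c^{\mathrm{DR}}_{j}\,d_{j,t,s}\Bigr) \\
\text{s.t.}\;
  & \sum_{i} p_{i,t,s} + W_{t,s} = D_{t} - \sum_{j} d_{j,t,s}
    \qquad \forall t, s ,
\end{aligned}
\]
```

together with the usual generator limits, ramping constraints and network (line-flow) limits per scenario. Here u_{i,t} and y_{i,t} are commitment and start-up binaries, p_{i,t,s} is the dispatch of unit i, d_{j,t,s} the reduction offered by responsive load j, W_{t,s} the wind realisation in scenario s with probability \pi_s, and D_t the nominal demand.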

57 citations

Proceedings ArticleDOI
31 May 2005
TL;DR: An exact algorithm is proposed that maximizes the sharing of partial terms in multiple constant multiplication (MCM) operations; the problem is cast as a 0-1 integer linear programming (ILP) instance that requires the output to be asserted while minimizing the total number of AND gates that evaluate to one.
Abstract: In this paper, we propose an exact algorithm that maximizes the sharing of partial terms in multiple constant multiplication (MCM) operations. We model this problem as a Boolean network that covers all possible partial terms which may be used to generate the set of coefficients in the MCM instance. The primary inputs (PIs) to this network are shifted versions of the MCM input. An AND gate represents an adder or a subtracter, i.e., an AND gate generates a new partial term. All partial terms that have the same numerical value are ORed together. There is a single output, which is an AND over all the coefficients in the MCM. We cast this problem into a 0-1 integer linear programming (ILP) problem by requiring that the output is asserted while minimizing the total number of AND gates that evaluate to one. A SAT-based solver is used to obtain the exact solution. We argue that for many real problems the size of the problem is within the capabilities of current SAT solvers. We present results using binary, CSD and MSD representations. Two main conclusions can be drawn from the results. One is that, in many cases, existing heuristics perform well, computing the best solution or one close to it. The other is that the flexibility of the MSD representation does not have a significant impact on the solution obtained.
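
As a rough, didactic stand-in for the 0-1 ILP/SAT encoding described above, the partial-term sharing problem can be shown with a tiny brute-force search; the target coefficients, shift bound and cost limit below are arbitrary assumptions.

```python
# Brute-force illustration of partial-term sharing in MCM: find the fewest
# add/subtract operations (each new partial term costs one adder/subtracter,
# shifts are free) needed to realise all target coefficients from the input x,
# represented here by the value 1. This is not the paper's exact formulation.

TARGETS = {5, 13}     # e.g. 5x = (x << 2) + x, and 13x = 5x + (x << 3) reuses 5x
MAX_SHIFT = 6

def successors(terms):
    """All new (odd, normalised) partial terms reachable with one extra adder."""
    out = set()
    for a in terms:
        for b in terms:
            for s in range(MAX_SHIFT + 1):
                for v in (a + (b << s), abs(a - (b << s))):
                    while v and v % 2 == 0:   # drop trailing zeros: shifts are free
                        v //= 2
                    if 0 < v < (1 << 14):
                        out.add(v)
    return out - set(terms)

def min_adders(targets, max_cost=8):
    """Breadth-first search over sets of partial terms, one adder per level."""
    frontier = [frozenset({1})]
    for cost in range(max_cost + 1):
        next_frontier = []
        for terms in frontier:
            if targets <= terms:
                return cost
            for t in successors(terms):
                next_frontier.append(terms | {t})
        frontier = next_frontier
    return None

print(min_adders(TARGETS))    # -> 2 adders with sharing, versus 3 without it
```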

56 citations

Proceedings Article
01 May 2012
TL;DR: This article investigates the use of additional semantic features and pre-processing steps to improve automatic key phrase extraction, including signal words and Freebase categories, some of which lead to significant improvements in the accuracy of the results.
Abstract: Fast and effective automated indexing is critical for search and personalized services. Key phrases that consist of one or more words and represent the main concepts of the document are often used for the purpose of indexing. In this paper, we investigate the use of additional semantic features and pre-processing steps to improve automatic key phrase extraction. These features include the use of signal words and Freebase categories. Some of these features lead to significant improvements in the accuracy of the results. We also experimented with two forms of document pre-processing that we call light filtering and co-reference normalization. Light filtering removes sentences that are judged peripheral to the document's main content. Co-reference normalization unifies several written forms of the same named entity into a unique form. We also needed a "Gold Standard", a set of labeled documents for training and evaluation. While the subjective nature of key phrase selection precludes a true "Gold Standard", we used Amazon's Mechanical Turk service to obtain a useful approximation. Our data indicate that the biggest improvements in performance were due to shallow semantic features, news categories, and rhetorical signals (nDCG 78.47% vs. 68.93%). The inclusion of deeper semantic features such as Freebase sub-categories was not beneficial by itself, but in combination with pre-processing it did cause slight improvements in the nDCG scores.
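
Since nDCG is the metric quoted above (78.47% vs. 68.93%), a short Python sketch of its computation follows; the ranked relevance grades in the example are invented purely for illustration.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevances."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG normalised by the DCG of the ideal (descending) ordering."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance of the key phrases a hypothetical extractor returned, in rank order
# (e.g. how many annotators selected each phrase as a key phrase).
print(ndcg([3, 2, 3, 0, 1, 2]))   # ~0.96 for this toy ranking
```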

56 citations


Authors

Showing all 967 results

Name                    H-index   Papers   Citations
João Carvalho           126       1278     77017
Jaime G. Carbonell      72        496      31267
Chris Dyer              71        240      32739
Joao P. S. Catalao      68        1039     19348
Muhammad Bilal          63        720      14720
Alan W. Black           61        413      19215
João Paulo Teixeira     60        636      19663
Bhiksha Raj             51        359      13064
Joao Marques-Silva      48        289      9374
Paulo Flores            48        321      7617
Ana Paiva               47        472      9626
Miadreza Shafie-khah    47        450      8086
Susana Cardoso          44        400      7068
Mark J. Bentum          42        226      8347
Joaquim Jorge           41        290      6366
Network Information
Related Institutions (5)
Carnegie Mellon University: 104.3K papers, 5.9M citations (88% related)
Eindhoven University of Technology: 52.9K papers, 1.5M citations (88% related)
Microsoft: 86.9K papers, 4.1M citations (88% related)
Vienna University of Technology: 49.3K papers, 1.3M citations (86% related)

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   11
2022   52
2021   96
2020   131
2019   133
2018   126