
Showing papers by "INESC-ID" published in 2006


Journal ArticleDOI
TL;DR: In this article, the authors report on the emergent narrative concept aiming at the definition of a narrative theory adapted to the VR medium (whether a game or VR application), and discuss designing unscripted dramas with affectively driven intelligent autonomous characters based on the development of the FearNot! system for education against bullying.
Abstract: In this article, we report on the emergent narrative concept aiming at the definition of a narrative theory adapted to the VR medium (whether a game or VR application). The inherent freedom of movement proper to VR - an indisputable element of immersion - collides with the Aristotelian vision of articulated plot events with respect to the given timeline associated with the story on display. This narrative paradox can only be observed in interactive VR applications and it doesn't seem possible to resolve it through the use of existing narrative theories. Interactivity is the novel element that storytellers must address. The authors discuss designing unscripted dramas with affectively driven intelligent autonomous characters based on the development of the FearNot! system for education against bullying.

133 citations


Journal ArticleDOI
TL;DR: The Clear-PEM imaging system for positron emission mammography, under development by the PEM Consortium within the framework of the Crystal Clear Collaboration at CERN, is presented in this paper.
Abstract: The design and evaluation of the imaging system Clear-PEM for positron emission mammography, under development by the PEM Consortium within the framework of the Crystal Clear Collaboration at CERN, is presented. The proposed apparatus is based on fast, segmented, high atomic number radiation sensors with depth-of-interaction measurement capabilities, and state-of-the-art data acquisition techniques. The camera consists of two compact and planar detector heads with dimensions 16.5 × 14.5 cm² for breast and axilla imaging. Low-noise integrated electronics provide signal amplification and analog multiplexing based on a new data-driven architecture. The coincidence trigger and data acquisition architecture makes extensive use of pipeline processing structures and multi-event memories for high efficiency up to a data acquisition rate of one million events/s. Experimental validation of the detection techniques, namely the basic properties of the radiation sensors and the ability to measure the depth-of-interaction of the incoming photons, are presented. System performance in terms of detection sensitivity, count-rates and reconstructed image spatial resolution were also evaluated by means of a detailed Monte Carlo simulation and an iterative image reconstruction algorithm.

131 citations


01 Jan 2006
TL;DR: This paper addresses the problem of encoding Sudoku puzzles into conjunctive normal form (CNF), and subsequently solving them using polynomial-time propositional satisfiability (SAT) inference techniques, and introduces two straightforward SAT encodings for Sudoku: the minimal encoding and the extended encoding.
Abstract: Sudoku is a very simple and well-known puzzle that has achieved international popularity in the recent past. This paper addresses the problem of encoding Sudoku puzzles into conjunctive normal form (CNF), and subsequently solving them using polynomial-time propositional satisfiability (SAT) inference techniques. We introduce two straightforward SAT encodings for Sudoku: the minimal encoding and the extended encoding. The minimal encoding suffices to characterize Sudoku puzzles, whereas the extended encoding adds redundant clauses to the minimal encoding. Experimental results demonstrate that, for thousands of very hard puzzles, inference techniques struggle to solve these puzzles when using the minimal encoding. However, using the extended encoding, unit propagation is able to solve about half of our set of puzzles. Nonetheless, for some puzzles more sophisticated inference techniques are required.

119 citations
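The minimal encoding described in this abstract is straightforward to reproduce. The sketch below (Python, not taken from the paper) generates its clauses: one propositional variable per (row, column, value) triple, an at-least-one-value clause per cell, and pairwise at-most-one clauses per value for every row, column and 3×3 box. The extended encoding would add the symmetric at-most-one-value-per-cell and at-least-once-per-unit clauses as redundant constraints.

```python
from itertools import combinations

def var(r, c, v):
    """Propositional variable for 'cell (r, c) holds value v' (1-based DIMACS numbering)."""
    return 81 * (r - 1) + 9 * (c - 1) + v

def minimal_encoding():
    """Clauses of the minimal Sudoku encoding: every cell holds at least one value,
    and every value appears at most once in each row, column and 3x3 box."""
    clauses = []
    # At least one value per cell.
    for r in range(1, 10):
        for c in range(1, 10):
            clauses.append([var(r, c, v) for v in range(1, 10)])
    # At most one occurrence of each value per unit (row, column, box).
    units = []
    units += [[(r, c) for c in range(1, 10)] for r in range(1, 10)]            # rows
    units += [[(r, c) for r in range(1, 10)] for c in range(1, 10)]            # columns
    units += [[(3 * br + i + 1, 3 * bc + j + 1) for i in range(3) for j in range(3)]
              for br in range(3) for bc in range(3)]                           # boxes
    for unit in units:
        for v in range(1, 10):
            for (r1, c1), (r2, c2) in combinations(unit, 2):
                clauses.append([-var(r1, c1, v), -var(r2, c2, v)])
    return clauses

def add_clues(clauses, grid):
    """Add unit clauses for the pre-filled cells of a puzzle (0 means empty)."""
    for r in range(1, 10):
        for c in range(1, 10):
            v = grid[r - 1][c - 1]
            if v:
                clauses.append([var(r, c, v)])
    return clauses
```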


Proceedings Article
06 Jun 2006
TL;DR: An implementation in the synthetic characters of the FearNot! anti-bullying education demonstrator is discussed, along with how far this provides an adequate mechanism for believable behaviour.
Abstract: This paper discusses the requirements of planning for believable synthetic characters and examines the relationship between appraisal and planning as components of an affective agent architecture. It discusses an implementation in the synthetic characters of the FearNot! anti-bullying education demonstrator and how far this provides an adequate mechanism for believable behaviour.

104 citations


Book ChapterDOI
20 Mar 2006
TL;DR: In this paper, an exact algorithm for motif extraction based on suffix trees is presented, which is shown to be more than two times faster than the best known exact algorithm in terms of average case complexity.
Abstract: We present in this paper an exact algorithm for motif extraction. Efficiency is achieved by means of an improvement in the algorithm and data structures that applies to the whole class of motif inference algorithms based on suffix trees. An average case complexity analysis shows a gain over the best known exact algorithm for motif extraction. A full implementation was developed and made available online. Experimental results show that the proposed algorithm is more than two times faster than the best known exact algorithm for motif extraction.

102 citations


Proceedings ArticleDOI
23 May 2006
TL;DR: This paper investigates and annihilates site-level mutual reinforcement relationships, abnormal support coming from one site towards another, as well as complex link alliances between web sites, and shows a very strong increase in the quality of the output rankings after applying the techniques.
Abstract: The currently booming search engine industry has determined many online organizations to attempt to artificially increase their ranking in order to attract more visitors to their web sites. At the same time, the growth of the web has also inherently generated several navigational hyperlink structures that have a negative impact on the importance measures employed by current search engines. In this paper we propose and evaluate algorithms for identifying all these noisy links on the web graph, be they spam, simple relationships between real-world entities represented by sites, replication of content, etc. Unlike prior work, we target a different type of noisy link structures, residing at the site level, instead of the page level. We thus investigate and annihilate site level mutual reinforcement relationships, abnormal support coming from one site towards another, as well as complex link alliances between web sites. Our experiments with the link database of the TodoBR search engine show a very strong increase in the quality of the output rankings after having applied our techniques.

54 citations
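The abstract does not give the detection heuristics, but the general idea of moving from the page level to the site level can be illustrated with a hedged sketch: aggregate page-level links into a weighted site graph, then flag site pairs that heavily reinforce each other. The `min_links` threshold and the test itself are purely illustrative, not the paper's algorithm.

```python
from collections import Counter
from urllib.parse import urlparse

def site(url):
    """Reduce a page URL to its site (host) identifier."""
    return urlparse(url).netloc.lower()

def site_graph(page_links):
    """Aggregate page-level links (src_url, dst_url) into a weighted site-level graph."""
    weights = Counter()
    for src, dst in page_links:
        s, d = site(src), site(dst)
        if s != d:                      # ignore intra-site navigation links
            weights[(s, d)] += 1
    return weights

def mutual_reinforcement_pairs(weights, min_links=50):
    """Flag site pairs that heavily link to each other in both directions."""
    suspicious = set()
    for (s, d), w in weights.items():
        if w >= min_links and weights.get((d, s), 0) >= min_links:
            suspicious.add(frozenset((s, d)))
    return suspicious
```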


Proceedings ArticleDOI
11 Jun 2006
TL;DR: A comparative study of digital library citations and Web links, in the context of automatic text classification, shows that there are in fact differences between citations and links in this context and proposes a simple and effective way of combining a traditional text based classifier with a citation-link based classifier.
Abstract: It is well known that links are an important source of information when dealing with Web collections. However, the question remains on whether the same techniques that are used on the Web can be applied to collections of documents containing citations between scientific papers. In this work we present a comparative study of digital library citations and Web links, in the context of automatic text classification. We show that there are in fact differences between citations and links in this context. For the comparison, we run a series of experiments using a digital library of computer science papers and a Web directory. In our reference collections, measures based on co-citation tend to perform better for pages in the Web directory, with gains up to 37% over text based classifiers, while measures based on bibliographic coupling perform better in a digital library. We also propose a simple and effective way of combining a traditional text based classifier with a citation-link based classifier. This combination is based on the notion of classifier reliability and presented gains of up to 14% in micro-averaged F1 in the Web collection. However, no significant gain was obtained in the digital library. Finally, a user study was performed to further investigate the causes for these results. We discovered that misclassifications by the citation-link based classifiers are in fact difficult cases, hard to classify even for humans.

49 citations
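As a rough illustration of combining a text based classifier with a citation-link based classifier, the sketch below weights each classifier's per-class scores by a reliability estimate. The paper's actual reliability measure and combination rule are not specified in the abstract, so the function, class labels and numbers here are hypothetical.

```python
def combine(text_scores, link_scores, text_reliability, link_reliability):
    """Combine two per-class score dictionaries, weighting each classifier by a
    reliability estimate in [0, 1] (e.g. obtained on held-out data)."""
    classes = set(text_scores) | set(link_scores)
    combined = {c: text_reliability * text_scores.get(c, 0.0)
                   + link_reliability * link_scores.get(c, 0.0)
                for c in classes}
    return max(combined, key=combined.get)

# Hypothetical usage: the link-based classifier is trusted more for this document.
label = combine({"AI": 0.7, "DB": 0.3}, {"AI": 0.2, "DB": 0.8},
                text_reliability=0.4, link_reliability=0.9)
print(label)
```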


Book ChapterDOI
Marco Vala1, João Dias1, Ana Paiva1
21 Aug 2006
TL;DR: This work proposes smart bodies, which extend standard bodies (a model and a collection of animations provided by a graphics engine) with semantic information so that minds operate at a higher level and do not have to deal with low-level body geometry or physics; smart bodies were used in FearNot!, an anti-bullying application.
Abstract: Interactive virtual environments (IVEs) are inhabited by synthetic characters that guide and engage children in a wide variety of activities, like playing games or learning new things. To build those environments, we need believable autonomous synthetic characters that are able to think and act in very dynamic environments. These characters often have able minds that are limited by the actions that the body can do. On the one hand, we have minds capable of creating interesting non-linear behaviour; on the other hand, we have bodies that are limited by the number of animations they can perform. This usually leads to a large planning effort to anticipate possible situations and define which animations are necessary. When we aim at non-linear narrative and non-deterministic plots, there is an obvious gap between what minds can think and what bodies can do. We propose smart bodies as a way to fill this gap between minds and bodies. A smart body extends the notion of a standard body since it is enriched with semantic information and can do things on its own. The mind still decides what the character should do, but the body chooses how it is done. Smart bodies, like standard bodies, have a model and a collection of animations which are provided by a graphics engine. But they also have access to knowledge about other elements in the world, like locations, interaction information and particular attributes. At this point, the notions of interaction spot and action trigger come into play. Interaction spots are specific positions around smart bodies or items where other smart bodies can do particular interactions. Action triggers define automatic reactions which are triggered by smart bodies when certain actions or interactions occur. We use both these constructs to create abstract references for physical elements, to act as resource and pre-condition mechanisms, and to simulate physics using rule-based reactions. Smart bodies use all this information to create high-level actions which are used by the minds. Thus, minds operate at a higher level and do not have to deal with low-level body geometry or physics. Smart bodies were used in FearNot!, an anti-bullying application. In FearNot! children experience virtual stories generated in real-time where they can witness (from a third-person perspective) a series of bullying situations towards a character. Clearly, in such an emergent narrative scenario, minds need to work at a higher level of abstraction without worrying about bodies and how a particular action is carried out at a low level. Smart bodies provided this abstraction layer. We performed a small study to validate our work in FearNot! with positive results. We believe there may be other applications where smart bodies have much to offer, particularly when using unscripted and non-linear narrative approaches.

47 citations
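A minimal sketch of the data structures suggested by the abstract — a body carrying interaction spots and action triggers so that the mind only issues high-level actions — might look like the following. The class, spot and action names are invented for illustration and are not taken from FearNot!.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionSpot:
    """A position around a smart body or item where another smart body can interact."""
    name: str          # e.g. "seat"
    position: tuple    # world coordinates
    interaction: str   # high-level action allowed at this spot, e.g. "SitDown"

@dataclass
class SmartBody:
    """A body enriched with semantic information: it exposes interaction spots and
    reacts automatically to certain actions through rule-based triggers."""
    name: str
    spots: list = field(default_factory=list)
    triggers: dict = field(default_factory=dict)   # action name -> reaction callable

    def perform(self, action, actor):
        """Execute a high-level action chosen by the mind; the body decides how."""
        print(f"{self.name}: executing '{action}' requested by {actor}")
        reaction = self.triggers.get(action)
        if reaction:
            reaction(actor)   # automatic reaction, e.g. falling over when pushed

# Hypothetical example: a chair offering a sitting spot, a character reacting to a push.
chair = SmartBody("chair", spots=[InteractionSpot("seat", (1.0, 0.0, 2.0), "SitDown")])
victim = SmartBody("john", triggers={"Push": lambda actor: print(f"john falls, pushed by {actor}")})
victim.perform("Push", actor="luke")
```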


Proceedings Article
16 Jul 2006
TL;DR: This paper describes the emotivector, an anticipatory mechanism coupled with a sensor that uses the history of the sensor to anticipate the next sensor state and interprets the mismatch between the prediction and the sensed value.
Abstract: Although anticipation is an important part of creating believable behaviour, it has had but a secondary role in the field of life-like characters. In this paper, we show how a simple anticipatory mechanism can be used to control the behaviour of a synthetic character implemented as a software agent, without disrupting the user's suspension of disbelief. We describe the emotivector, an anticipatory mechanism coupled with a sensor, that: (1) uses the history of the sensor to anticipate the next sensor state; (2) interprets the mismatch between the prediction and the sensed value, by computing its attention grabbing potential and associating a basic qualitative sensation with the signal; (3) sends its interpretation along with the signal. When a signal from the sensor reaches the processing module of the agent, it carries recommendations such as: "you should seriously take this signal into consideration, as it is much better than we had expected" or "Just forget about this one, it is as bad as we predicted". We delineate several strategies to manage several emotivectors at once and show how one of these strategies (meta-anticipation) transparently introduces the concept of uncertainty. Finally, we describe an experiment in which an emotivector-controlled synthetic character interacts with the user in the context of a word-puzzle game and present the evaluation supporting the adequacy of our approach.

43 citations
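The abstract outlines the emotivector cycle: predict the next sensor state from the sensor's history, then interpret the mismatch between prediction and observation as an attention-grabbing potential plus a basic sensation. A hedged sketch of that cycle is shown below; the exponential predictor and the sensation labels are illustrative choices, not the paper's exact scheme.

```python
class Emotivector:
    """Anticipatory mechanism attached to a sensor: predicts the next sensed value from
    its history and interprets the prediction error (salience plus a sensation label)."""

    def __init__(self, alpha=0.5, higher_is_better=True):
        self.alpha = alpha                      # weight of the most recent observation
        self.higher_is_better = higher_is_better
        self.prediction = None

    def update(self, value):
        if self.prediction is None:             # first observation: nothing to compare against
            self.prediction = value
            return {"salience": 0.0, "sensation": "neutral", "value": value}
        error = value - self.prediction
        salience = abs(error)                    # attention-grabbing potential of the mismatch
        better = (error > 0) == self.higher_is_better
        if salience < 1e-6:
            sensation = "as expected"
        else:
            sensation = "better than expected" if better else "worse than expected"
        # Exponentially weighted prediction of the next sensor state.
        self.prediction = self.alpha * value + (1 - self.alpha) * self.prediction
        return {"salience": salience, "sensation": sensation, "value": value}
```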


Book ChapterDOI
12 Sep 2006
TL;DR: This paper presents the development of an ontology for the cooking domain, to be integrated in a dialog system.
Abstract: An effective solution to the problem of extending a dialogue system to new knowledge domains requires a clear separation between the knowledge and the system: as ontologies are used to conceptualize information, they can be used as a means to improve the separation between the dialogue system and the domain information. This paper presents the development of an ontology for the cooking domain, to be integrated in a dialogue system. The ontology comprises four main modules covering the key concepts of the cooking domain – actions, food, recipes, and utensils – and three auxiliary modules – units and measures, equivalencies and plate types.

42 citations


Proceedings Article
01 Jan 2006
TL;DR: The different domain adaptation steps that lowered the error rate to 45%, with very little transcribed adaptation material, are described, along with an exploratory study of spontaneous speech phenomena in European Portuguese, namely concerning filled pauses.
Abstract: Classroom lectures may be very challenging for automatic speech recognizers, because the vocabulary may be very specific and the speaking style very spontaneous. Our first experiments using a recognizer trained for Broadcast News resulted in word error rates near 60%, clearly confirming the need for adaptation to the specific topic of the lectures, on one hand, and for better strategies for handling spontaneous speech. This paper describes our efforts in these two directions: the different domain adaptation steps that lowered the error rate to 45%, with very little transcribed adaptation material, and the exploratory study of spontaneous speech phenomena in European Portuguese, namely concerning filled pauses.

Journal ArticleDOI
TL;DR: Monte Carlo simulation results evaluating the trigger performance, as well as results of hardware simulations are presented, showing the correctness of the design and the implementation approach.
Abstract: The Clear-PEM detector system is a compact positron emission mammography scanner with about 12000 channels aiming at high sensitivity and good spatial resolution. Front-end, Trigger, and Data Acquisition electronics are crucial components of this system. The on-detector front-end is implemented as a data-driven synchronous system that identifies and selects the analog signals whose energy is above a predefined threshold. The off-detector trigger logic uses digitized front-end data streams to compute pulse amplitudes and timing. Based on this information it generates a coincidence trigger signal that is used to initiate the conditioning and transfer of the relevant data to the data acquisition computer. To minimize dead-time, the data acquisition electronics makes extensive use of pipeline processing structures and derandomizer memories with multievent capacity. The system operates at 100-MHz clock frequency, and is capable of sustaining a data acquisition rate of 1 million events per second with an efficiency above 95%, at a total single photon background rate of 10 MHz. The basic component of the front-end system is a low-noise amplifier-multiplexer chip presently under development. The off-detector system is designed around a dual-bus crate backplane for fast intercommunication between the system boards. The trigger and data acquisition logic is implemented in large FPGAs with 4 million gates. Monte Carlo simulation results evaluating the trigger performance, as well as results of hardware simulations, are presented, showing the correctness of the design and the implementation approach.

Journal ArticleDOI
TL;DR: The authors present a method for sample point selection in multipoint projection-based model-order reduction, which is based on resampling schemes to estimate error and can be coupled with recently proposed order reduction schemes to efficiently produce accurate models.
Abstract: Multipoint projection methods have gained much notoriety in model-order reduction of linear, nonlinear, and parameter-varying systems. A well-known difficulty with such methods lies in the need for clever point selection to attain model compactness and accuracy. In this paper, the authors present a method for sample point selection in multipoint projection-based model-order reduction. The proposed technique, which is borrowed from the statistical modeling area, is based on resampling schemes to estimate error and can be coupled with recently proposed order reduction schemes to efficiently produce accurate models. Two alternative implementations are presented: 1) a rigorous linear-matrix-inequality-based technique and 2) a simpler, more efficient, heuristic search. The goal of this paper is to answer two questions. First, can this alternative metric be effective in selecting sample points in the sense of placing points in regions of high error without recourse to evaluation of the larger system? Second, if the metric is effective in this sense, under what conditions are substantial improvements in the model reduction efficiency achieved? Results are shown that indicate that the metric is indeed effective in a variety of settings, therefore opening the possibility for performing adaptive error control.

Journal ArticleDOI
TL;DR: The main aspects of the design and test (D&T) of a reconfigurable architecture for the Data Acquisition Electronics (DAE) system of the Clear-PEM detector are presented in this paper.
Abstract: The main aspects of the design and test (D&T) of a reconfigurable architecture for the Data Acquisition Electronics (DAE) system of the Clear-PEM detector are presented in this paper. The application focuses on medical imaging using a compact PEM (Positron Emission Mammography) detector with 12288 channels, targeting high sensitivity and spatial resolution. The DAE system processes data frames that come from the front-end (FE) electronics, identifies the relevant data and transfers it to a PC for image processing. The design is supported by a novel D&T methodology, in which hierarchy, modularity and parallelism are extensively exploited to improve design and testability features. Parameterization has also been used to improve design flexibility. Nominal frequency is 100 MHz. The DAE must respond to a data acquisition rate of 1 million relevant events (coincidences) per second, under a total single photon background rate in the detector of 10 MHz. Trigger and data acquisition logic is implemented in eight 4-million, one 2-million and one 1-million gate FPGAs (Xilinx Virtex II). Functional Built-In Self Test (BIST) and Debug features are incorporated in the design to allow on-board FPGA testing and self-testing during product lifetime.

Book ChapterDOI
21 Aug 2006
TL;DR: In this paper, the authors consider the issues involved in taking educational role-play into a virtual environment with intelligent graphical characters, who implement a cognitive appraisal system and autonomous action selection, and discuss issues in organizing emergent narratives with respect to a Story Facilitator as well as the impact on the authoring process.
Abstract: We consider the issues involved in taking educational role-play into a virtual environment with intelligent graphical characters, who implement a cognitive appraisal system and autonomous action selection. Issues in organizing emergent narratives are discussed with respect to a Story Facilitator as well as the impact on the authoring process.

Book ChapterDOI
21 Aug 2006
TL;DR: The interpretation of qualitative data collected using the Classroom Discussion Forum technique identifies that the use of fairly naive synthetic characters for achieving empathic engagement appears to be an appropriate approach for FearNot.
Abstract: This paper discusses FearNot, a virtual learning environment populated by synthetic characters aimed at the 8-12 year old age group for the exploration of bullying and coping strategies. Currently, FearNot is being redesigned from a lab-based prototype into a classroom tool. In this paper we focus on informing the design of the characters and of the virtual learning environment through our interpretation of qualitative data gathered about interaction with FearNot by 345 children. The paper focuses on qualitative data collected using the Classroom Discussion Forum technique and discusses its implications for the redesign of the media used for FearNot. The interpretation of the data identifies that the use of fairly naive synthetic characters for achieving empathic engagement appears to be an appropriate approach. Results do indicate a focus for redesign, with a clear need for improved transitions for animations; identification and repair of inconsistent graphical elements; and for a greater cast of characters and range of sets to achieve optimal engagement levels.

Proceedings ArticleDOI
28 Aug 2006
TL;DR: The optimization of a generic NoC is considered to improve the area and performance of NoC-based architectures for dedicated applications; the optimization algorithm determines the appropriate NoC and router configurations to support a set of applications.
Abstract: Complex Systems-on-Chip (SoC) with multiple interconnected stand-alone designs require high scalability and bandwidth. Network-on-Chip (NoC) is a scalable communication infrastructure able to tackle the communication needs of these SoCs. In this paper, we consider the optimization of a generic NoC to improve area and performance of NoC based architectures for dedicated applications. The generic NoC can be tailored to an application by changing the number of routers, by configuring each router to specific traffic requirements, and by choosing the set of links between routers and cores. The optimization algorithm determines the appropriate NoC and router configurations to support a set of applications, considering the optimization of area and performance. The final solution will consist of a heterogeneous NoC with improved quality. The approach has been tested under different operating conditions assuming implementations on an FPGA.

Journal ArticleDOI
TL;DR: A family of kernel density functions is described that accommodates the fractal nature of iterative function representations of symbolic sequences and, consequently, enables the exact investigation of sequence motifs of arbitrary lengths in that scale-independent representation.
Abstract: The use of Chaos Game Representation (CGR) or its generalization, Universal Sequence Maps (USM), to describe the distribution of biological sequences has been found objectionable because of the fractal structure of that coordinate system. Consequently, the investigation of distribution of symbolic motifs at multiple scales is hampered by an inexact association between distance and sequence dissimilarity. A solution to this problem could unleash the use of iterative maps as phase-state representation of sequences where its statistical properties can be conveniently investigated. In this study a family of kernel density functions is described that accommodates the fractal nature of iterative function representations of symbolic sequences and, consequently, enables the exact investigation of sequence motifs of arbitrary lengths in that scale-independent representation. Furthermore, the proposed kernel density includes both Markovian succession and currently used alignment-free sequence dissimilarity metrics as special solutions. Therefore, the fractal kernel described is in fact a generalization that provides a common framework for a diverse suite of sequence analysis techniques.
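For readers unfamiliar with the underlying representation, the sketch below computes standard Chaos Game Representation coordinates for a DNA sequence (the corner assignment is one common convention, not necessarily the paper's). It is the fractal layout of these points, where all sequences sharing a length-k suffix fall inside the same sub-square of side 2^-k, that the proposed kernel density functions are designed to accommodate.

```python
# Chaos Game Representation of a DNA sequence: each successive position is the midpoint
# between the current position and the corner assigned to the next nucleotide, so every
# prefix of the sequence maps to a unique point in the unit square.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_coordinates(sequence, start=(0.5, 0.5)):
    x, y = start
    points = []
    for symbol in sequence.upper():
        cx, cy = CORNERS[symbol]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))
    return points

# Sequences ending in the same k symbols land in the same square of side 2**-k.
print(cgr_coordinates("GATTACA")[-1])
```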

Journal ArticleDOI
João M. Lemos1
TL;DR: A confrontation between data-driven and model-driven adaptive control algorithms for distributed collector solar fields is made using experimental results and algorithm structure, and a trade-off is shown to exist; i.e., by incorporating more information about plant dynamics, the control algorithms yield increased performance, but they become plant dependent.
Abstract: Distributed collector solar fields are spatially distributed technical systems which aim at collecting and storing energy from solar radiation. They are formed by mirrors which concentrate direct incident sunlight in a pipe where an oil is able to accumulate thermal energy flows. From the control point of view, the objective consists of making the outlet oil temperature track a reference signal by manipulating the oil flow, in the possible presence of fast disturbances caused by passing clouds. Significant levels of uncertainty motivate the use of adaptive methods. A confrontation between data-driven and model-driven adaptive control algorithms for distributed collector solar fields is made using experimental results and algorithm structure. A trade-off is shown to exist; i.e., by incorporating more information about plant dynamics, the control algorithms yield increased performance, but they become plant dependent.

Journal ArticleDOI
TL;DR: In this article, a high-level C++ simulation tool was developed for data acquisition performance analysis and validated at bit-level against FPGA VHDL testbenches.
Abstract: The Clear-PEM detector is a positron emission mammography scanner based on a high-granularity avalanche photodiode readout with 12 288 channels. The front-end sub-system is instrumented with low-noise 192:2 channel amplifier-multiplexer ASICs and free-running sampling ADCs. The off-detector trigger, implemented in an FPGA-based architecture, computes the pulse amplitudes and timing required for coincidence validation from the front-end data streams. A high-level C++ simulation tool was developed for data acquisition performance analysis and validated at bit-level against FPGA VHDL testbenches. In this work, simulation studies concerning the performance of the on-line/off-line energy and time extraction algorithms and the foreseen detector energy and time resolution are presented. Time calibration and trigger efficiency are also discussed.

Journal ArticleDOI
TL;DR: The results from this study of 345 children highlight that children are able to recognize and interpret affect in synthetic characters and are empathically engaged with the characters in the scenarios.
Abstract: This paper is concerned with the simulation of human-like capabilities in synthetic characters within the domain of Personal and Social Education. Our aim was to achieve socially meaningful and engaging interactions with children in the 8–12 age group to enable an exploration of bullying and coping strategies. We consider the engagement between the interacting partners, focusing particularly on the affective and empathic aspects of this relationship. We have used Theory of Mind methods to enable us to evaluate children's understanding of social scenarios and the thinking of others. The results from this study of 345 children highlight that children are able to recognize and interpret affect in synthetic characters and are empathically engaged with the characters in the scenarios.

Book ChapterDOI
11 Oct 2006
TL;DR: A new compressed self-index is presented that is able to locate the occurrences of P in O((m+occ) log n) time, where occ is the number of occurrences and σ the size of the alphabet of T, and is shown to be very competitive in practice against the LZ-Index, the FM-index and a compressed suffix array.
Abstract: A compressed full-text self-index for a text T, of size u, is a data structure used to search patterns P, of size m, in T that requires reduced space, i.e., that depends on the empirical entropy (Hk, H0) of T, and is, furthermore, able to reproduce any substring of T. In this paper we present a new compressed self-index able to locate the occurrences of P in O((m+occ) log n) time, where occ is the number of occurrences and σ the size of the alphabet of T. The fundamental improvement over previous LZ78 based indexes is the reduction of the search time dependency on m from O(m²) to O(m). To achieve this result we point out the main obstacle to linear time algorithms based on LZ78 data compression and expose and explore the nature of a recurrent structure in LZ-indexes, the $\mathcal{T}_{78}$ suffix tree. We show that our method is very competitive in practice by comparing it against the LZ-Index, the FM-index and a compressed suffix array.

01 Jan 2006
TL;DR: An architecture to realize an RF quadrature oscillator, in which a frequency generated by a Direct Digital Synthesis (DDS) system is added to (or subtracted from) the frequencygenerated by a Phase-Locked Loop (PLL).
Abstract: We propose an architecture to realize an RF quadrature oscillator, in which a frequency generated by a Direct Digital Synthesis (DDS) system is added to (or subtracted from) the frequency generated by a Phase-Locked Loop (PLL). The DDS system is easily reconfigurable to change the channel spacing and bandwidth, and allows the implementation of several digital modulation schemes. A computer program was developed to calculate the parameters of the DDS system, based on the specifications supplied by the user, and to generate the VHDL code of the digital part of the system. The DDS is designed to obtain outputs in quadrature with a minimum ROM area. The DDS is implemented in an FPGA and has excellent quadrature relation throughout the frequency band of the system.
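A behavioural sketch of a quadrature DDS, with a phase accumulator addressing sine and cosine look-up tables, is given below. It is only a software model under assumed parameters (accumulator width, table size, clock and output frequencies); the paper's implementation minimizes ROM area and generates VHDL for an FPGA, which this sketch does not attempt.

```python
import math

class QuadratureDDS:
    """Behavioural model of a direct digital synthesizer with quadrature (I/Q) outputs:
    a phase accumulator is incremented by a tuning word every clock cycle, and its top
    bits address sine and cosine look-up tables."""

    ACC_BITS = 32     # assumed accumulator width
    LUT_BITS = 10     # assumed table of 1024 entries

    def __init__(self, f_out, f_clk):
        self.acc = 0
        self.tuning_word = round(f_out * 2 ** self.ACC_BITS / f_clk)
        n = 2 ** self.LUT_BITS
        self.sin_lut = [math.sin(2 * math.pi * k / n) for k in range(n)]
        self.cos_lut = [math.cos(2 * math.pi * k / n) for k in range(n)]

    def tick(self):
        """Advance one clock cycle and return the (I, Q) pair, 90 degrees apart."""
        self.acc = (self.acc + self.tuning_word) & (2 ** self.ACC_BITS - 1)
        addr = self.acc >> (self.ACC_BITS - self.LUT_BITS)
        return self.cos_lut[addr], self.sin_lut[addr]

# Hypothetical numbers: a 1 MHz quadrature tone from a 100 MHz clock.
dds = QuadratureDDS(f_out=1e6, f_clk=100e6)
i, q = dds.tick()
```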

Book ChapterDOI
11 Oct 2006
TL;DR: This work addresses text indexing for approximate matching: given a text which undergoes some preprocessing to generate an index, we can later query this index to identify the places where a string occurs up to a certain number of errors k (edit distance).
Abstract: In this work, the problem we address is text indexing for approximate matching. Given a text $\mathcal{T}$ which undergoes some preprocessing to generate an index, we can later query this index to identify the places where a string occurs up to a certain number of errors k (edit distance). The indexing structure occupies space $\mathcal{O}(n\log^kn)$ in the average case, independent of alphabet size. This structure can be used to report the existence of a match with k errors in $\mathcal{O}(3^k m^{k+1})$ and to report the occurrences in $\mathcal{O}(3^k m^{k+1} + \mbox{\it ed})$ time, where m is the length of the pattern and ed the number of matching edit scripts. The construction of the structure has time bounded by $\mathcal{O}(kN|\Sigma|)$, where N is the number of nodes in the index and |Σ| the alphabet size.

Proceedings ArticleDOI
24 Jan 2006
TL;DR: This paper proposes an approach to the design space exploration of a configurable SoC (CSoC) platform based on a network on chip (NoC) architecture for the execution of dataflow dominated embedded systems.
Abstract: The constant increase of gate capacity and performance of configurable hardware chips made it possible to implement systems-on-chip (SoC) able to tackle the demanding requirements of many embedded systems. In this paper, we propose an approach to the design space exploration of a configurable SoC (CSoC) platform based on a network on chip (NoC) architecture for the execution of dataflow dominated embedded systems. The approach has been validated with the design of a color image compression algorithm in an FPGA.

Proceedings ArticleDOI
30 Aug 2006
TL;DR: Experimental results show that very-low power adaptive motion estimators have been achieved to encode QCIF video sequences.
Abstract: Motion estimation is the most demanding operation of a video encoder, corresponding to at least 80% of the overall computational cost. With the proliferation of portable handheld devices that support digital video coding, data-adaptive motion estimation algorithms have been required to dynamically configure the search pattern not only to avoid unnecessary computations and memory accesses but also to save energy. This paper proposes an application specific instruction set processor (ASIP) to implement data-adaptive motion estimation algorithms, that is characterized by a specialized data-path and minimum and optimized instruction set. Due to its low-power nature, this architecture is specially adequate to develop motion estimators for portable, mobile and battery supplied devices. A cycle-based accurate simulator was also developed for the proposed ASIP and fast and data-adaptive search algorithms have been implemented, namely, the four-step search and the motion vector field adaptive search algorithms. Based on the proposed ASIP and the considered adaptive algorithms, several motion estimators were synthesized in 0.13 μm CMOS technology. Experimental results show that very-low power adaptive motion estimators have been achieved to encode QCIF video sequences.
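One of the implemented algorithms, the four-step search, can be sketched in a few lines of software. The version below is a simplified model: it re-evaluates the full 9-point pattern at each stage and omits frame-boundary handling, whereas the ASIP implementation only evaluates the new candidate points at each step; it is meant only to illustrate the search pattern, not the hardware design.

```python
def sad(cur, ref, x, y, dx, dy, block=16):
    """Sum of absolute differences between the block of the current frame at (x, y)
    and the reference-frame block displaced by (dx, dy). Boundary clipping omitted."""
    total = 0
    for i in range(block):
        for j in range(block):
            total += abs(cur[y + i][x + j] - ref[y + dy + i][x + dx + j])
    return total

def four_step_search(cur, ref, x, y, block=16, search=7):
    """Simplified four-step search: up to three searches on a 5x5 (step-2) pattern,
    stopping early when the best match stays at the centre, followed by a final
    3x3 (step-1) refinement."""
    def pattern_search(centre, step, best, best_cost):
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                cand = (centre[0] + dx, centre[1] + dy)
                if max(abs(cand[0]), abs(cand[1])) > search:
                    continue
                cost = sad(cur, ref, x, y, cand[0], cand[1], block)
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best, best_cost

    best, best_cost = (0, 0), sad(cur, ref, x, y, 0, 0, block)
    for _ in range(3):                            # large-step stages
        centre = best
        best, best_cost = pattern_search(centre, 2, best, best_cost)
        if best == centre:                        # minimum at the centre: go to the final stage
            break
    return pattern_search(best, 1, best, best_cost)
```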

Proceedings ArticleDOI
21 May 2006
TL;DR: It is shown that the system of two oscillators is described by two differential equations where the coefficients in one equation have the perturbations defined by the second oscillator, and vice versa.
Abstract: A new coupling mechanism is used to synchronize two Van der Pol oscillators. This coupling uses the second harmonic appearing in common mode current of each oscillator. The common mode current is measured by a current mirror, and is amplified by a current amplifier. The amplifier introduces negative feedback, so that the current in the current mirror measuring diode of the first oscillator is nearly equal to the common mode current of the second oscillator, hence the coupling is established. It is shown that the system of two oscillators is described by two differential equations where the coefficients in one equation have the perturbations defined by the second oscillator, and vice versa. The coupling amplifier gain is defined. The developed concepts are demonstrated on a 5 GHz CMOS LC oscillator with quadrature outputs. The oscillator phase noise is lower than -116 dBc/Hz at 1-MHz offset.
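For reference, the standard Van der Pol equation and a schematic form of the coupled pair described above are written out below. The perturbation terms ε and δ are placeholders for the coupling introduced through the common-mode current; the abstract does not give their exact form, so this is only a notational sketch, not the paper's derivation.

```latex
% Uncoupled Van der Pol oscillator (standard form):
%   \ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + \omega_{0}^{2}\,x = 0
%
% Schematic coupled pair: the coefficients of each equation are perturbed by the
% state of the other oscillator (\epsilon and \delta are placeholder perturbations).
\begin{align}
  \ddot{x}_{1} - \mu\bigl(1 - x_{1}^{2}\bigr)\bigl(1 + \epsilon_{12}(x_{2})\bigr)\dot{x}_{1}
      + \omega_{0}^{2}\bigl(1 + \delta_{12}(x_{2})\bigr)x_{1} &= 0, \\
  \ddot{x}_{2} - \mu\bigl(1 - x_{2}^{2}\bigr)\bigl(1 + \epsilon_{21}(x_{1})\bigr)\dot{x}_{2}
      + \omega_{0}^{2}\bigl(1 + \delta_{21}(x_{1})\bigr)x_{2} &= 0.
\end{align}
```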

Proceedings ArticleDOI
29 Jul 2006
TL;DR: In this paper, the authors proposed an efficient terminal independent mobility architecture (eTIMIP - enhanced TIMIP) which is compliant with the macro-mobility standard and which uses an overlay network to provide transparent micromobility support in all existing networks.
Abstract: All the proposed IP mobility protocols assume that the mobile nodes always have a mobility-aware IP stack. On the other hand, efficient micro-mobility solutions entail specific topologies and mobile-aware routers, requiring major changes in the existing infrastructures. Major advantages are foreseen if mobility can be supported using the existing legacy infrastructure, on both client and network sides, allowing a smooth upgrade process. This paper describes such a solution, by proposing an efficient terminal independent mobility architecture (eTIMIP - enhanced TIMIP) which is compliant with the macro-mobility standard and which uses an overlay network to provide transparent micro-mobility support in all existing networks, using an enhanced version of the previously proposed TIMIP protocol. Simulation results have revealed the efficiency, the transparency and the reliability of the proposed architecture through comparison to other proposals.

Rui Amaral1, Hugo Meinedo1, Diamantino Caseiro1, Isabel Trancoso1, João Paulo Neto1 
01 Jan 2006
TL;DR: The main focus of the paper is on the impact of the errors made by the earlier modules on the later ones, which is, in the authors' opinion, an essential diagnostic tool for the improvement of the overall pipeline system.
Abstract: This paper describes the latest progress in our work on Broadcast News for European Portuguese. The central modules of our media watch system that matches the topic of each news story against the user preferences registered in the system are: audio pre-processing, speech recognition and topic segmentation and indexation. The main focus of the paper is on the impact of the errors made by the earlier modules on the later ones. This impact is in our opinion an essential diagnostic tool for the improvement of the overall pipeline system.

Book ChapterDOI
13 May 2006
TL;DR: An Input and Output Manager block that combines speech, a synthetic talking face, text and graphical interfaces is presented and analyzed in the framework of the Interactive Home of the Future project.
Abstract: In this paper we describe our initial work on the development of an embodied conversational agent platform. At the present stage our main focus is on the development of a multimodal input interface to the system. In this paper we present an Input and Output Manager block that combines speech, a synthetic talking face, text and graphical interfaces. The system supports speech input through an ASR and speech output through a TTS, synchronized with an animated face. The graphical and text input are fed through a Text Manager that is a constituent component of the Input and Output Manager block. All the blocks are tailored for the European Portuguese language. The system is analyzed in the framework of the Interactive Home of the Future project.