
Showing papers by "INESC-ID" published in 2009


Book ChapterDOI
29 Jun 2009
TL;DR: Weighted Boolean Optimization (WBO) is proposed, a new unified framework that aggregates and extends PBO and MaxSAT, together with a new unsatisfiability-based algorithm for WBO, based on recent unsatisfiability-based algorithms for MaxSAT.
Abstract: The Pseudo-Boolean Optimization (PBO) and Maximum Satisfiability (MaxSAT) problems are natural optimization extensions of Boolean Satisfiability (SAT). In the recent past, different algorithms have been proposed for PBO and for MaxSAT, despite the existence of straightforward mappings from PBO to MaxSAT, and vice-versa. This paper proposes Weighted Boolean Optimization (WBO), a new unified framework that aggregates and extends PBO and MaxSAT. In addition, the paper proposes a new unsatisfiability-based algorithm for WBO, based on recent unsatisfiability-based algorithms for MaxSAT. Besides standard MaxSAT, the new algorithm can also be used to solve weighted MaxSAT and PBO, handling pseudo-Boolean constraints either natively or by translation to clausal form. Experimental results illustrate that unsatisfiability-based algorithms for MaxSAT can be orders of magnitude more efficient than existing dedicated algorithms. Finally, the paper illustrates how other algorithms for either PBO or MaxSAT can be extended to WBO.
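
To make the WBO formulation concrete, the sketch below solves a toy instance by exhaustive search: every hard clause must be satisfied while the total weight of falsified soft clauses is minimised. It only illustrates the problem the paper addresses, not its unsatisfiability-based algorithm; the clauses and weights are invented.

```python
from itertools import product

# DIMACS-style clauses: a positive integer is a variable, a negative one its negation.
HARD = [[1, 2], [-1, 3]]                      # constraints that must hold
SOFT = [([-2], 4), ([-3], 1), ([2, 3], 2)]    # (clause, weight): minimise falsified weight

def satisfied(clause, assignment):
    """True if at least one literal of the clause holds under the assignment."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def solve_wbo(n_vars):
    """Exhaustive WBO: return (cost, assignment), or None if the hard part is unsatisfiable."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        if not all(satisfied(c, a) for c in HARD):
            continue
        cost = sum(w for c, w in SOFT if not satisfied(c, a))
        if best is None or cost < best[0]:
            best = (cost, a)
    return best

print(solve_wbo(3))   # (1, {1: True, 2: False, 3: True}) for this toy instance
```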

161 citations


Proceedings ArticleDOI
16 Nov 2009
TL;DR: D2STM is presented, a replicated STM whose consistency is ensured in a transparent manner, even in the presence of failures, and which achieves remarkable performance gains even for negligible increases in the transaction abort rate.
Abstract: To date, the problem of how to build distributed and replicated Software Transactional Memory (STM) systems that enhance both dependability and performance remains largely unexplored. This paper fills this gap by presenting D2STM, a replicated STM whose consistency is ensured in a transparent manner, even in the presence of failures. Strong consistency is enforced at transaction commit time by a non-blocking distributed certification scheme, which we name BFC (Bloom Filter Certification). BFC exploits a novel Bloom filter-based encoding mechanism that significantly reduces the overheads of replica coordination at the cost of a user-tunable increase in the probability of transaction abort. Through an extensive experimental study based on standard STM benchmarks we show that the BFC scheme achieves remarkable performance gains for negligible (e.g., 1%) increases in the transaction abort rate.
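
The following sketch illustrates the general idea behind Bloom filter certification: the committing transaction ships a compact Bloom filter of its read-set, and a conflict is declared whenever an item written by a concurrent, already committed transaction tests positive. Filter sizing, hashing and item names are illustrative assumptions, not D2STM's actual parameters.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)          # one byte per bit, for simplicity

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# Certification sketch: a committing transaction ships the Bloom filter of its
# read-set; it is aborted if any item written by a concurrent, already committed
# transaction tests positive (false positives only cause spurious aborts).
read_set = BloomFilter()
for key in ("account:42", "account:99"):
    read_set.add(key)

concurrent_writes = ["account:7", "account:42"]
conflict = any(w in read_set for w in concurrent_writes)
print("abort" if conflict else "commit")       # -> abort (true conflict on account:42)
```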

144 citations


Journal ArticleDOI
TL;DR: The biological limit of detection of a spin-valve-based magnetoresistive biochip applied to the detection of 20 mer ssDNA hybridization events is presented and two reactional variables and their impact on the biomolecular recognition efficiency are discussed.

114 citations


Proceedings Article
10 May 2009
TL;DR: This paper discusses the development of a believable agent-based educational application designed to develop inter-cultural empathy in 13–14-year-old students and considers the role of interaction modalities in supporting an empathic engagement with culturally-specific characters.
Abstract: This paper discusses the development of a believable agent-based educational application designed to develop inter-cultural empathy in 13–14-year-old students. It considers relevant work in cultural taxonomy and adaptation to other cultures, as well as work showing that users are sensitive to the perceived culture of believable interactive characters. It discusses how an existing affective agent architecture was developed to model culturally-specific agent behaviour. Finally, it considers the role of interaction modalities in supporting an empathic engagement with culturally-specific characters.

107 citations


Journal ArticleDOI
TL;DR: The results showed that there were significant differences in the performance of the algorithms being evaluated, and the newly proposed measure for jitter, LocJitt, performed in general equal to or better than the commonly used tools MDVP and Praat.
Abstract: This work is focused on the evaluation of different methods to estimate the amount of jitter present in speech signals. The jitter value is a measure of the irregularity of a quasiperiodic signal and is a good indicator of the presence of pathologies in the larynx, such as vocal fold nodules or a vocal fold polyp. Given the irregular nature of the speech signal, each jitter estimation algorithm relies on its own model, making a direct comparison of the results very difficult. For this reason, the evaluation of the different jitter estimation methods was targeted at their ability to detect pathological voices. Two databases were used for this evaluation: a subset of the MEEI database and a smaller database acquired in the scope of this work. The results showed that there were significant differences in the performance of the algorithms being evaluated. Surprisingly, in the largest database the best results were not achieved with the commonly used relative jitter, measured as a percentage of the glottal cycle, but with absolute jitter values measured in microseconds. Also, the newly proposed measure for jitter, LocJitt, performed in general equal to or better than the commonly used tools MDVP and Praat.
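
For reference, the sketch below computes the two families of measures compared in the paper, absolute jitter (in microseconds) and relative jitter (as a percentage of the mean glottal cycle), using the common MDVP-style definitions; it does not reproduce the LocJitt measure itself, and the cycle lengths are invented.

```python
def absolute_jitter(periods_s):
    """Mean absolute difference between consecutive cycle lengths, in seconds."""
    diffs = [abs(b - a) for a, b in zip(periods_s, periods_s[1:])]
    return sum(diffs) / len(diffs)

def relative_jitter(periods_s):
    """Absolute jitter expressed as a percentage of the mean cycle length."""
    return 100.0 * absolute_jitter(periods_s) / (sum(periods_s) / len(periods_s))

# Toy glottal cycle lengths (seconds) around a 100 Hz voice (10 ms periods).
periods = [0.0101, 0.0099, 0.0102, 0.0098, 0.0100, 0.0103]
print(f"absolute jitter: {absolute_jitter(periods) * 1e6:.1f} us")   # microseconds
print(f"relative jitter: {relative_jitter(periods):.2f} %")
```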

84 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper describes experiments with SVM and HMM-based classifiers, using a 290-hour corpus of sound effects, and reports promising results, despite the difficulties posed by the mixtures of audio events that characterize real sounds.
Abstract: Audio event detection is one of the tasks of the European project VIDIVIDEO. This paper focuses on the detection of non-speech events, and as such only searches for events in audio segments that have been previously classified as non-speech. Preliminary experiments with a small corpus of sound effects have shown the potential of this type of corpus for training purposes. This paper describes our experiments with SVM and HMM-based classifiers, using a 290-hour corpus of sound effects. Although we have only built detectors for 15 semantic concepts so far, the method seems easily portable to other concepts. The paper reports experiments with multiple features, different kernels and several analysis windows. Preliminary experiments on documentaries and films yielded promising results, despite the difficulties posed by the mixtures of audio events that characterize real sounds.
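
A minimal sketch of one such binary detector is given below, training an SVM on fixed-length per-segment feature vectors. Since the sound-effects corpus and the exact features and kernels are not available here, the features are synthetic stand-ins (e.g., for frame-averaged MFCCs) and the numbers are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-segment audio features: class 1 ("event present")
# is shifted slightly away from class 0 ("background").
X0 = rng.normal(0.0, 1.0, size=(200, 13))
X1 = rng.normal(0.8, 1.0, size=(200, 13))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One binary detector per semantic concept; an RBF kernel is a common default.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("F-measure:", round(f1_score(y_te, clf.predict(X_te)), 3))
```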

84 citations


Journal ArticleDOI
TL;DR: BiGGEsTS is a free open source graphical software tool for revealing local coexpression of genes in specific intervals of time, while integrating meaningful information on gene annotations.
Abstract: The ability to monitor changes in expression patterns over time, and to observe the emergence of coherent temporal responses using expression time series, is critical to advance our understanding of complex biological processes. Biclustering has been recognized as an effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms. The general biclustering problem is NP-hard. In the case of time series this problem is tractable, and efficient algorithms can be used. However, there is still a need for specialized applications able to take advantage of the temporal properties inherent to expression time series, both from a computational and a biological perspective. BiGGEsTS makes available state-of-the-art biclustering algorithms for analyzing expression time series. Gene Ontology (GO) annotations are used to assess the biological relevance of the biclusters. Methods for preprocessing expression time series and post-processing results are also included. The analysis is additionally supported by a visualization module capable of displaying informative representations of the data, including heatmaps, dendrograms, expression charts and graphs of enriched GO terms. BiGGEsTS is a free open source graphical software tool for revealing local coexpression of genes in specific intervals of time, while integrating meaningful information on gene annotations. It is freely available at: http://kdbio.inesc-id.pt/software/biggests . We present a case study on the discovery of transcriptional regulatory modules in the response of Saccharomyces cerevisiae to heat stress.

78 citations


Journal ArticleDOI
01 Mar 2009
TL;DR: In this paper, the authors discuss alternative conditional distributional models for the daily returns of the US, German and Portuguese main stock market indexes, considering ARMA-GARCH models driven by Normal, Student's t and stable Paretian distributed innovations.
Abstract: As GARCH models and stable Paretian distributions have been revisited in the recent past with the papers of Hansen and Lunde (J Appl Econom 20: 873–889, 2005) and Bidarkota and McCulloch (Quant Finance 4: 256–265, 2004), respectively, in this paper we discuss alternative conditional distributional models for the daily returns of the US, German and Portuguese main stock market indexes, considering ARMA-GARCH models driven by Normal, Student’s t and stable Paretian distributed innovations. We find that a GARCH model with stable Paretian innovations fits returns clearly better than the more popular Normal distribution and slightly better than the Student’s t distribution. However, the Student’s t outperforms the Normal and stable Paretian distributions when the out-of-sample density forecasts are considered.
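
For reference, a standard ARMA(1,1)–GARCH(1,1) specification of the kind compared in the paper is (orders and notation are illustrative):

```latex
\[
\begin{aligned}
r_t &= \mu + \phi\, r_{t-1} + \theta\, \varepsilon_{t-1} + \varepsilon_t,
      \qquad \varepsilon_t = \sigma_t z_t,\\
\sigma_t^2 &= \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2,
\end{aligned}
\]
```

where the innovations z_t are i.i.d. standard Normal, Student's t, or standardized stable Paretian, the three distributions whose in-sample fit and out-of-sample density forecasts are compared.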

77 citations


Journal ArticleDOI
27 May 2009-Sensors
TL;DR: The developed platform is portable and capable of operating autonomously for nearly eight hours and the noise level of the described platform is one order of magnitude lower than the one presented by the previously used measurement set-up.
Abstract: This paper presents a prototype of a platform for biomolecular recognition detection. The system is based on a magnetoresistive biochip that performs biorecognition assays by detecting magnetically tagged targets. All the electronic circuitry for addressing, driving and reading out signals from spin-valve or magnetic tunnel junctions sensors is implemented using off-the-shelf components. Taking advantage of digital signal processing techniques, the acquired signals are processed in real time and transmitted to a digital analyzer that enables the user to control and follow the experiment through a graphical user interface. The developed platform is portable and capable of operating autonomously for nearly eight hours. Experimental results show that the noise level of the described platform is one order of magnitude lower than the one presented by the previously used measurement set-up. Experimental results also show that this device is able to detect magnetic nanoparticles with a diameter of 250 nm at a concentration of about 40 fM. Finally, the biomolecular recognition detection capabilities of the platform are demonstrated by performing a hybridization assay using complementary and non-complementary probes and a magnetically tagged 20mer single stranded DNA target.

76 citations


Proceedings ArticleDOI
15 Jun 2009
TL;DR: This work proposes a solution to evaluate redundancy strategies in the context of heterogeneous environments such as data grids based on a simulation engine that can be used not only to support the process of designing the preservation environment and related policies, but also later on to observe and control the deployed system.
Abstract: Digital preservation aims at maintaining digital objects accessible over a long period of time, regardless of the challenges of organizational or technological changes or failures. In particular, data produced in e-Science domains could be reliably stored in today's data grids, taking advantage of the natural properties of this kind of infrastructure to support redundancy. However, to achieve reliability we must take into account failure interdependency. Taking into account the fact that correlated failures can affect multiple components and potentially cause complete loss of data, we propose a solution to evaluate redundancy strategies in the context of heterogeneous environments such as data grids. This solution is based on a simulation engine that can be used not only to support the process of designing the preservation environment and related policies, but also later on to observe and control the deployed system.
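
The toy simulation below shows the kind of question such a simulation engine answers: how the probability of losing every replica of an object changes when failures are correlated (replicas sharing a site fail together) rather than independent. The failure rates and placements are invented for illustration.

```python
import random

def loss_probability(n_replicas, site_of, p_node=0.01, p_site=0.001, trials=100_000):
    """Estimate P(all replicas of an object are lost in one period).

    site_of[i] gives the site hosting replica i; a site failure takes down
    every replica placed on it, which is what makes failures correlated.
    """
    sites = set(site_of)
    losses = 0
    for _ in range(trials):
        failed_sites = {s for s in sites if random.random() < p_site}
        lost = all(site_of[i] in failed_sites or random.random() < p_node
                   for i in range(n_replicas))
        losses += lost
    return losses / trials

random.seed(1)
# Three replicas all on one site vs. spread over three sites.
print("co-located :", loss_probability(3, site_of=[0, 0, 0]))
print("spread out :", loss_probability(3, site_of=[0, 1, 2]))
```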

53 citations


Journal ArticleDOI
TL;DR: This paper focuses on indexed approximate string matching (ASM), which is of great interest, say, in bioinformatics, and study ASM algorithms for Lempel-Ziv compressed indexes and for compressed suffix trees/arrays, which are competitive and provide useful space-time tradeoffs compared to classical indexes.
Abstract: A compressed full-text self-index for a text T is a data structure requiring reduced space and able to search for patterns P in T. It can also reproduce any substring of T, thus actually replacing T. Despite the recent explosion of interest on compressed indexes, there has not been much progress on functionalities beyond the basic exact search. In this paper we focus on indexed approximate string matching (ASM), which is of great interest, say, in bioinformatics. We study ASM algorithms for Lempel-Ziv compressed indexes and for compressed suffix trees/arrays. Most compressed self-indexes belong to one of these classes. We start by adapting the classical method of partitioning into exact search to self-indexes, and optimize it over a representative of either class of self-index. Then, we show that a Lempel-Ziv index can be seen as an extension of the classical q-samples index. We give new insights on this type of index, which can be of independent interest, and then apply them to a Lempel-Ziv index. Finally, we improve hierarchical verification, a successful technique for sequential searching, so as to extend the matches of pattern pieces to the left or right. Most compressed suffix trees/arrays support the required bidirectionality, thus enabling the implementation of the improved technique. In turn, the improved verification largely reduces the accesses to the text, which are expensive in self-indexes. We show experimentally that our algorithms are competitive and provide useful space-time tradeoffs compared to classical indexes.
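
The sketch below illustrates the classical "partitioning into exact search" idea that the paper adapts to self-indexes: in any occurrence with at most k errors, a pattern split into k+1 pieces must contain one piece verbatim, so exact hits of the pieces seed verification. A plain Python substring search stands in for the compressed index, and the verification is the textbook dynamic-programming one rather than the paper's hierarchical variant.

```python
import re

def best_match_distance(pattern, window):
    """Minimum edit distance between `pattern` and any substring of `window`."""
    prev = [0] * (len(window) + 1)           # free starting position inside the window
    for i, pc in enumerate(pattern, 1):
        cur = [i]
        for j, wc in enumerate(window, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (pc != wc)))
        prev = cur
    return min(prev)                          # free ending position inside the window

def approx_search(text, pattern, k):
    """Start positions of candidate regions matching `pattern` with at most k edits.

    Pigeonhole filter: split the pattern into k+1 pieces; any occurrence with
    at most k errors contains one piece verbatim, so exact hits of the pieces
    seed the (more expensive) verification step.
    """
    m = len(pattern)
    step = m // (k + 1)
    pieces = [(i, pattern[i:i + step]) for i in range(0, step * (k + 1), step)]
    regions = set()
    for offset, piece in pieces:
        for match in re.finditer(re.escape(piece), text):
            start = max(0, match.start() - offset - k)
            window = text[start:start + m + 2 * k]
            if best_match_distance(pattern, window) <= k:
                regions.add(start)
    return sorted(regions)

print(approx_search("the quick brown fax jumped over it", "brown fox", k=1))   # [9]
```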

Proceedings ArticleDOI
Gomes Goncalo, Sarmento Helena
18 Jun 2009
TL;DR: A location system for persons and objects in an indoor environment is described, in which wireless nodes using ZigBee technology can include sensors and provide unique identifiers; the system is shown to be effective, flexible and easily adaptable to various locations.
Abstract: This paper describes a location system for persons and objects in an indoor environment, where wireless nodes can include sensors and provide unique identifiers. The system nodes, based on ZigBee technology, can function as RFID tags, each with a unique EPC identification number. Sensors can be associated with the ZigBee wireless nodes to create applications for home, health and traffic control. Location systems are analysed with emphasis on indoor location systems. The implemented location algorithm combines a propagation model based on the wall attenuation factor model with triangulation. A variety of tests were carried out in an indoor environment. Results demonstrate that the location system is viable, showing itself to be effective, flexible and easily adaptable to various locations.
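
For illustration, the wall attenuation factor (WAF) path-loss model commonly used for indoor RSSI ranging is sketched below, together with its inversion to a distance estimate; the exponent, WAF value and reference loss are made-up defaults, not the values calibrated in the paper.

```python
import math

# Illustrative model constants (not the values calibrated in the paper).
PL_D0 = 40.0     # path loss at the reference distance D0, in dB
D0 = 1.0         # reference distance, metres
N_EXP = 3.0      # path-loss exponent for the indoor environment
WAF_DB = 3.5     # attenuation added per wall, in dB
MAX_WALLS = 4    # beyond this the wall term saturates

def waf_path_loss(d, n_walls):
    """Predicted path loss (dB) at distance d metres through n_walls walls."""
    walls = min(n_walls, MAX_WALLS)
    return PL_D0 + 10 * N_EXP * math.log10(d / D0) + walls * WAF_DB

def distance_from_path_loss(pl_db, n_walls):
    """Invert the model: estimate the distance that explains a measured path loss."""
    walls = min(n_walls, MAX_WALLS)
    return D0 * 10 ** ((pl_db - PL_D0 - walls * WAF_DB) / (10 * N_EXP))

pl = waf_path_loss(8.0, n_walls=2)                   # simulate a reading 8 m away, 2 walls
print(round(pl, 1), "dB ->", round(distance_from_path_loss(pl, n_walls=2), 1), "m")
# Distances to three or more fixed nodes are then combined by triangulation.
```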

Book ChapterDOI
07 Sep 2009
TL;DR: This paper proposes the embedding of social software features, such as collaboration and wiki-like features, in the modeling and execution tools of business processes, which will foster people empowerment in the bottom-up design and execution of business processes.
Abstract: In today’s changing environments, organizational design must take into account the fact that business processes are incomplete by nature and that they should be managed in such a way that they do not restrain human intervention. In this paper we propose the embedding of social software features, such as collaboration and wiki-like features, in the modeling and execution tools of business processes. These features will foster people empowerment in the bottom-up design and execution of business processes. We conclude this paper by identifying some research issues about the implementation of the tool and its methodological impact on Business Process Management.

Proceedings Article
10 May 2009
TL;DR: This article defines the concept of ritual, integrates it into an existing agent architecture for synthetic characters, and shows that users do indeed identify the differences between the two cultures and, most importantly, ascribe the differences to cultural factors.
Abstract: There is currently an ongoing demand for richer Intelligent Virtual Environments (IVEs) populated with social intelligent agents. As a result, many agent architectures are taking into account a plenitude of social factors to drive their agents' behaviour. However, cultural aspects have been largely neglected so far, even though they are a crucial aspect of human societies. This is largely due to the fact that culture is a very complex term that has no consensual definition among scholars. However, there are studies that point out some common and relevant components that distinguish cultures such as rituals and values. In this article, we focused on the use of rituals in synthetic characters to generate cultural specific behaviour. To this end, we defined the concept of ritual and integrated it into an existing agent architecture for synthetic characters. A ritual is seen as a symbolic social activity that is carried out in a predetermined fashion. This concept is modelled in the architecture as a special type of goal with a pre-defined plan. Using the architecture described, and in order to assess if it is possible to express different cultural behaviour in synthetic characters, we created two groups of agents that only differed in their rituals. An experiment was then conducted using these two scenarios in order to evaluate if users could identify different cultural behaviour in the two groups of characters. The results show that users do indeed identify the differences in the two cultures and most importantly that they ascribe the differences to cultural factors.

Proceedings Article
Miguel Bugalho, Jose Portelo, Isabel Trancoso, Thomas Pellegrini, Alberto Abad
01 Jan 2009
TL;DR: Experiments with SVM classifiers and different features, using a 290-hour corpus of sound effects, allowed detectors for almost 50 semantic concepts to be built, but showed that the task is much harder in real-life videos, which often include overlapping audio events.
Abstract: This paper describes our work on audio event detection, one of our tasks in the European project VIDIVIDEO. Preliminary experiments with a small corpus of sound effects have shown the potential of this type of corpus for training purposes. This paper describes our experiments with SVM classifiers and different features, using a 290-hour corpus of sound effects, which allowed us to build detectors for almost 50 semantic concepts. Although the performance of these detectors on the development set is quite good (achieving an average F-measure of 0.87), preliminary experiments on documentaries and films showed that the task is much harder in real-life videos, which often include overlapping audio events. Index Terms: event detection, audio segmentation

Proceedings ArticleDOI
25 Oct 2009
TL;DR: Studies that go beyond a laboratory setting are presented, exploring the method's effectiveness and learnability as well as its influence on the users' daily lives; the method proved easy both to learn and to improve with.
Abstract: NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. In this paper, we present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on the users' daily lives. Eight blind users participated in designing the prototype (3 weeks) while five took part in the studies along 16 more weeks. Results gathered in controlled weekly sessions and real-life usage logs enabled us to better understand NavTap's advantages and limitations. The method revealed itself to be both easy to learn and to improve with. Indeed, from day one and in real-life settings, users were able to better control their mobile devices to send SMS messages and perform other tasks that require text input, such as managing a phonebook. While individual user profiles play an important role in determining their evolution, even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the assigned tasks (SMS, directory) both in the laboratory and in everyday use, showing continuous improvement of their skills. According to interviews, none were able to input text before. NavTap dramatically changed their relation with mobile devices and noticeably improved their social interaction capabilities.

Proceedings ArticleDOI
08 Dec 2009
TL;DR: This article focuses on empathy between synthetic characters and proposes an analytical approach that consists in a generic computational model of empathy, supported by recent neuropsychological studies, implemented into an affective agent architecture.
Abstract: Empathy is often seen as the capacity to perceive, understand and experience others' emotions. This concept has been incorporated in virtual agents to achieve better believability, social interaction and user engagement. However, this has been mostly done to achieve empathic relations with the users. Instead, in this article we focus on empathy between synthetic characters and propose an analytical approach that consists in a generic computational model of empathy, supported by recent neuropsychological studies. The proposed model of empathy was implemented into an affective agent architecture. To evaluate the implementation a small scenario was defined and we asked a group of users to visualize it with the empathy model and another group to visualize it without the model. The results obtained confirmed that our model was capable of producing significant effects in the perception of the emergent empathic responses.

Journal ArticleDOI
TL;DR: The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies, suggesting that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.
Abstract: The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments.
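
For reference, the canonical S-system form whose parameters are being estimated is (standard notation, with n dependent variables):

```latex
\[
\frac{dX_i}{dt} \;=\; \alpha_i \prod_{j=1}^{n} X_j^{\,g_{ij}} \;-\; \beta_i \prod_{j=1}^{n} X_j^{\,h_{ij}},
\qquad i = 1,\dots,n,
\]
```

where the rate constants α_i, β_i ≥ 0 and the kinetic orders g_ij, h_ij are the parameters to estimate; a non-zero g_ij or h_ij means X_j influences the production or degradation of X_i, which is how a parameter set maps to a network topology.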

Journal ArticleDOI
TL;DR: Results show that under the target scenario conditions, LEMMA presents lower interference between assigned time-slots and lower end-to-end latency, while matching its best contender in terms of energy-efficiency.


Journal ArticleDOI
TL;DR: An efficient algorithm is proposed that identifies the most interesting region to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms, and leads to more realistic phylogenetic comparisons between species.
Abstract: The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot handle directly circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools.
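
As a simplified illustration of why the cut point matters, the sketch below rotates circular sequences to a canonical (lexicographically smallest) linearisation via the doubled string, so that two linearisations of the same molecule become directly comparable; the paper's algorithm instead selects the cut region that best preserves alignment quality.

```python
def canonical_rotation(seq):
    """Return the lexicographically smallest rotation of a circular sequence.

    Scanning the doubled string finds the best cut point in O(n^2) worst case,
    which is fine for illustration (Booth's algorithm would make it linear).
    """
    doubled = seq + seq
    n = len(seq)
    best = min(range(n), key=lambda i: doubled[i:i + n])
    return doubled[best:best + n]

# Two "circular" sequences that only differ by where they were linearised:
a = "GGTACCAT"
b = "CCATGGTA"
print(canonical_rotation(a))                             # ACCATGGT
print(canonical_rotation(a) == canonical_rotation(b))    # True: same circular molecule
```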

Posted Content
TL;DR: In this article, the package upgradeability problem is related to multilevel optimization, and new algorithms for BMO are proposed to solve optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.
Abstract: Many combinatorial optimization problems entail a number of hierarchically dependent optimization problems. An often used solution is to associate a suitably large cost with each individual optimization problem, such that the solution of the resulting aggregated optimization problem solves the original set of hierarchically dependent optimization problems. This paper starts by studying the package upgradeability problem in software distributions. Straightforward solutions based on Maximum Satisfiability (MaxSAT) and pseudo-Boolean (PB) optimization are shown to be ineffective, and unlikely to scale for large problem instances. Afterwards, the package upgradeability problem is related to multilevel optimization. The paper then develops new algorithms for Boolean Multilevel Optimization (BMO) and highlights a large number of potential applications. The experimental results indicate that the proposed algorithms for BMO allow solving optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.
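
The "suitably large cost" aggregation that the paper shows does not scale can be stated as follows: for criteria f_1 ≻ f_2 ≻ … ≻ f_m ordered from most to least important, with the cost at level l bounded by C_l, choose weights

```latex
\[
w_m = 1, \qquad w_j \;>\; \sum_{l=j+1}^{m} w_l\, C_l \quad (j = m-1,\dots,1),
\]
```

so that improving a higher-level criterion always outweighs any combination of lower-level costs, and minimise the single objective Σ_j w_j f_j. The weights can grow very large with the number of levels, which is part of why the straightforward MaxSAT and PB encodings struggle and what the dedicated BMO algorithms avoid.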

Book ChapterDOI
TL;DR: A voter-verifiable code voting solution which, without revealing the voter's vote, allows the voter to verify, at the end of the election, that her vote was cast and counted as intended, simply by matching a few small strings.
Abstract: Code voting is a technique used to address the secure platform problem of remote voting. A code voting system consists of secretly sending, e.g. by mail, code sheets to voters that map their choices to entry codes in their ballot. While voting, the voter uses the code sheet to know what code to enter in order to vote for a particular candidate. In effect, the voter does the vote encryption and, since no malicious software on the PC has access to the code sheet, it is not able to change the voter's intention. However, without compromising the voter's privacy, the vote codes are not enough to prove that the vote is recorded and counted as cast by the election server. We present a voter-verifiable code voting solution which, without revealing the voter's vote, allows the voter to verify, at the end of the election, that her vote was cast and counted as intended, simply by matching a few small strings. Moreover, compared with a general code voting system, our solution requires only a minor change in the voting interaction.

Journal ArticleDOI
TL;DR: In this article, the performance of the Clear-PEM front-end ASIC for readout of S8550 Hamamatsu APDs coupled to LYSO:Ce crystal matrices is evaluated.
Abstract: In the framework of the Clear-PEM project for the construction of a high-resolution scanner for breast cancer imaging, a very compact and dense front-end electronics system has been developed for readout of multi-pixel S8550 Hamamatsu APDs. The front-end electronics are instrumented with a mixed-signal Application-Specific Integrated Circuit (ASIC), which incorporates 192 low-noise charge pre-amplifiers, shapers, analog memory cells and digital control blocks. Pulses are continuously stored in memory cells at the clock frequency. Channels above a common threshold voltage are read out for digitization by off-chip free-sampling ADCs. The ASIC has a size of 7.3 × 9.8 mm² and was implemented in an AMS 0.35 μm CMOS technology. In this paper the experimental characterization of the Clear-PEM front-end ASIC, reading out multi-pixel APDs coupled to LYSO:Ce crystal matrices, is presented. The chips were mounted on a custom test board connected to six APD arrays and to the data acquisition system. Six 32-pixel LYSO:Ce crystal matrices coupled on both sides to APD arrays were read out by two test boards. All 384 channels were operational. The chip power consumption is 660 mW (3.4 mW per channel). A very stable behavior of the chip was observed, with an estimated ENC of 1200–1300 e⁻ at APD gain 100. The inter-channel noise dispersion and mean baseline variation are less than 8% and 0.5%, respectively. The spread in the gain between different channels is found to be 1.5%. Energy resolutions of 16.5% at 511 keV and 12.8% at 662 keV have been measured. The timing difference between the two APDs that read out the same crystal was extracted and compared with detailed Monte Carlo simulations. At 511 keV the measured single-photon time RMS resolution is 1.30 ns, in very good agreement with the expected value of 1.34 ns.

Proceedings ArticleDOI
24 Jun 2009
TL;DR: The purpose of this paper is to present a novel methodology for electronic systems aging monitoring, and to introduce a new architecture for an aging sensor, that takes into account power supply voltage and temperature variations and allows several levels of failure prediction.
Abstract: Complex electronic systems for safety- or mission-critical applications (automotive, space) must operate for many years in harsh environments. Reliability issues are worsening as devices scale down, while performance and quality requirements are increasing. One of the key reliability issues is to monitor long-term performance degradation due to aging in such harsh environments. For safe operation, or for preventive maintenance, it is desirable that such monitoring be performed on chip. On-line built-in aging sensors (activated from time to time) can be an adequate solution for this problem. The purpose of this paper is to present a novel methodology for electronic systems aging monitoring, and to introduce a new architecture for an aging sensor. Aging monitoring is carried out by observing the degrading timing response of the digital system. The proposed solution takes into account power supply voltage and temperature variations and allows several levels of failure prediction. Simulation results are presented that demonstrate the usefulness of the proposed methodology.

Proceedings ArticleDOI
04 Apr 2009
TL;DR: A comparative study of interaction metaphors for large-scale displays is presented, finding that the point metaphor achieves better results on all tests, and there is evidence that grab and mouse remain valid for specific tasks.
Abstract: Large-scale displays require new interaction techniques because of their physical size. There are technologies that tackle the problem of interaction with such devices by providing natural interaction to larger surfaces. We argue, however, that large-scale displays offer physical freedom that is not yet being applied to interaction. To better understand how distance affects user interaction, we present a comparative study of interaction metaphors for large-scale displays. We present three metaphors: Grab, Point and Mouse. The metaphors were included in our tests as we felt that each would be more suited to a specific distance: this is the focus of our tests. We then asked the users to solve a puzzle using those metaphors from different distances. We discovered that the point metaphor achieves better results on all tests. However, there is evidence that grab and mouse remain valid for specific tasks.

Proceedings ArticleDOI
04 Apr 2009
TL;DR: This work builds a prototype that generates familiar and adequate instructions, behaving like a blind companion, one with similar capabilities that understands his "friend" and speaks the same language, while gathering overall user satisfaction.
Abstract: For the majority of blind people, walking in unknown places without help is a very difficult, or even impossible, task. The white cane is the main aid to a blind user's mobility. However, the major difficulties arise in the orientation task, mainly caused by the lack of reference points and the inability to access visual cues. We aim to overcome this issue by allowing users to walk through unknown places while receiving familiar and easily understandable feedback. Our preliminary contributions are in understanding, through user studies, how blind users explore an unknown place, their difficulties, capabilities and needs. We also analyzed how these users create their own mental maps, verbalize a route and communicate with each other. By structuring and generalizing this information, we were able to create a prototype that generates familiar and adequate instructions, behaving like a blind companion: one with similar capabilities that understands his "friend" and speaks the same language. We evaluated the system with the target population, validating our approach and orientation guidelines, while gathering overall user satisfaction.

Journal ArticleDOI
TL;DR: In this article, the authors describe the development of an adaptive control law based on the exact feedback linearization and Lyapunov adaptation of the process dynamics applied to a solar furnace.

Proceedings Article
11 Jul 2009
TL;DR: New algorithms for Boolean Multilevel Optimization (BMO) are developed and experimental results indicate that algorithms for BMO allow solving optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.
Abstract: Many combinatorial optimization problems entail a number of hierarchically dependent optimization problems. An often used solution is to associate a suitably large cost with each individual optimization problem, such that the solution of the resulting aggregated optimization problem solves the original set of optimization problems. This paper starts by studying the package upgradeability problem in software distributions. Straightforward solutions based on Maximum Satisfiability (MaxSAT) and pseudo-Boolean (PB) optimization are shown to be ineffective, and unlikely to scale for large problem instances. Afterwards, the package upgradeability problem is related to multilevel optimization. The paper then develops new algorithms for Boolean Multilevel Optimization (BMO) and highlights a number of potential applications. The experimental results indicate that algorithms for BMO allow solving optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.

Journal ArticleDOI
Rui Neves Madeira, J. Luis Sousa, V. Fernao Pires, Luís Esteves, O. P. Dias
TL;DR: This system will provide the students with a tutorial, a set of exercises with several oriented questions, interactive animations, assessment tools, a chat room and a game for evaluation purposes, and it includes a module to be used with a mobile device.