
Showing papers by "Fondazione Bruno Kessler" published in 2008


Proceedings ArticleDOI
22 Sep 2008
TL;DR: The IRSTLM toolkit supports distribution of n-gram collection and smoothing over a computer cluster, language model compression through probability quantization, and lazy-loading of huge language models from disk.
Abstract: Research in speech recognition and machine translation is boosting the use of large-scale n-gram language models. We present an open source toolkit that makes it possible to efficiently handle language models with billions of n-grams on conventional machines. The IRSTLM toolkit supports distribution of n-gram collection and smoothing over a computer cluster, language model compression through probability quantization, and lazy-loading of huge language models from disk. IRSTLM has so far been successfully deployed with the Moses toolkit for statistical machine translation and with the FBK-irst speech recognition system. Efficiency of the tool is reported on a speech transcription task of Italian political speeches using a language model of 1.1 billion four-grams.
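The probability-quantization idea mentioned in this abstract can be sketched in a few lines: each n-gram log-probability is replaced by an 8-bit index into a small codebook, cutting storage per entry. This is an illustrative sketch only — the binning strategy below (uniform over the observed range) is an assumption for demonstration, not IRSTLM's actual algorithm.

```python
# Hedged sketch of language model compression via probability quantization:
# map each log-probability to one of 256 codebook centroids.
# The uniform-binning codebook here is an assumed simplification.

def build_codebook(logprobs, levels=256):
    """Uniformly partition [min, max] and use bin centers as the codebook."""
    lo, hi = min(logprobs), max(logprobs)
    step = (hi - lo) / levels
    return [lo + (i + 0.5) * step for i in range(levels)]

def quantize(logprob, codebook):
    """Return the 8-bit index of the nearest codebook centroid."""
    return min(range(len(codebook)), key=lambda i: abs(codebook[i] - logprob))

def dequantize(index, codebook):
    """Recover the approximate log-probability from its index."""
    return codebook[index]
```

Each stored value shrinks from a float to one byte, at the cost of a bounded rounding error of half a bin width.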

354 citations


Proceedings ArticleDOI
09 Apr 2008
TL;DR: A novel state-based testing approach specifically designed to exercise Ajax Web applications; the approach is evaluated on a case study in terms of fault-revealing capability and the amount of manual intervention involved in constructing and refining the required model.
Abstract: Ajax supports the development of rich-client Web applications, by providing primitives for the execution of asynchronous requests and for the dynamic update of the page structure and content. Often, Ajax Web applications consist of a single page whose elements are updated in response to callbacks activated asynchronously by the user or by a server message. These features give rise to new kinds of faults that are hardly revealed by existing Web testing approaches. In this paper, we propose a novel state-based testing approach, specifically designed to exercise Ajax Web applications. The document object model (DOM) of the page manipulated by the Ajax code is abstracted into a state model. Callback executions triggered by asynchronous messages received from the Web server are associated with state transitions. Test cases are derived from the state model based on the notion of semantically interacting events. We evaluate the approach on a case study in terms of fault revealing capability. We also measure the amount of manual interventions involved in constructing and refining the model required by this approach.
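The state model described in this abstract can be pictured as a labeled transition system: abstract DOM states are nodes and callbacks/events are edges, from which event sequences are derived as test cases. The tiny model and event names below are invented for illustration; the paper's DOM abstraction and its "semantically interacting events" criterion are richer than this sketch.

```python
# Hedged sketch of a state model for an Ajax page: abstract DOM states
# are nodes, user/server callbacks are labeled transitions.  States and
# events are hypothetical examples, not taken from the paper's case study.

model = {
    "empty":   {"search": "results", "clear": "empty"},
    "results": {"select": "detail", "clear": "empty"},
    "detail":  {"back": "results", "clear": "empty"},
}

def event_sequences(model, start, length):
    """Enumerate all event sequences of a given length from the start state."""
    if length == 0:
        return [[]]
    seqs = []
    for event, target in model.get(start, {}).items():
        for rest in event_sequences(model, target, length - 1):
            seqs.append([event] + rest)
    return seqs
```

Filtering such sequences down to pairs of semantically interacting events is the step that keeps the derived test suite small.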

258 citations


Journal ArticleDOI
Néstor Armesto, Nicolas Borghini, Sangyong Jeon, Urs Achim Wiedemann, and 191 more authors (63 institutions)
TL;DR: A compilation of predictions for the forthcoming Heavy Ion Program at the Large Hadron Collider, as presented at the CERN Theory Institute 'Heavy Ion Collisions at the LHC - Last Call for Predictions', held from 14th May to 10th June 2007, can be found in this article.
Abstract: This writeup is a compilation of the predictions for the forthcoming Heavy Ion Program at the Large Hadron Collider, as presented at the CERN Theory Institute 'Heavy Ion Collisions at the LHC - Last Call for Predictions', held from 14th May to 10th June 2007.

234 citations


Journal ArticleDOI
12 Mar 2008-PLOS ONE
TL;DR: This IBM, which is based on country-specific demographic data, could be suitable for the real-time evaluation of measures to be undertaken in the event of the emergence of a new pandemic influenza virus.
Abstract: Background Individual-based models can provide the most reliable estimates of the spread of infectious diseases. In the present study, we evaluated the diffusion of pandemic influenza in Italy and the impact of various control measures, coupling a global SEIR model for importation of cases with an individual based model (IBM) describing the Italian epidemic. Methodology/Principal Findings We co-located the Italian population (57 million inhabitants) to households, schools and workplaces and we assigned travel destinations to match the 2001 census data. We considered different R0 values (1.4; 1.7; 2), evaluating the impact of control measures (vaccination, antiviral prophylaxis -AVP-, international air travel restrictions and increased social distancing). The administration of two vaccine doses was considered, assuming that first dose would be administered 1-6 months after the first world case, and different values for vaccine effectiveness (VE). With no interventions, importation would occur 37–77 days after the first world case. Air travel restrictions would delay the importation of the pandemic by 7–37 days. With an R0 of 1.4 or 1.7, the use of combined measures would reduce clinical attack rates (AR) from 21–31% to 0.3–4%. Assuming an R0 of 2, the AR would decrease from 38% to 8%, yet only if vaccination were started within 2 months of the first world case, in combination with a 90% reduction in international air traffic, closure of schools/workplaces for 4 weeks and AVP of household and school/work close contacts of clinical cases. Varying VE would not substantially affect the results. Conclusions This IBM, which is based on country-specific demographic data, could be suitable for the real-time evaluation of measures to be undertaken in the event of the emergence of a new pandemic influenza virus. All preventive measures considered should be implemented to mitigate the pandemic.
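The link between the R0 values considered here and the resulting attack rates can be illustrated with the classic homogeneous-mixing final-size relation A = 1 − exp(−R0·A), solved by fixed-point iteration. This toy relation ignores the household/school/workplace structure and the clinical-versus-subclinical split that the paper's individual-based model captures, so it yields overall infection attack rates, not the clinical attack rates quoted above.

```python
import math

# Hedged illustration: the standard final-size relation A = 1 - exp(-R0*A)
# links R0 to the overall infection attack rate A in a homogeneous
# population.  It will not reproduce the paper's clinical attack rates,
# which depend on contact structure and the symptomatic fraction.

def final_size(r0, iters=200):
    a = 0.5  # initial guess; the fixed-point iteration converges for R0 > 1
    for _ in range(iters):
        a = 1.0 - math.exp(-r0 * a)
    return a
```

For the three R0 values studied (1.4, 1.7, 2), this gives infection attack rates of roughly 51%, 69%, and 80%, of which only a fraction would be clinical cases.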

188 citations


Journal ArticleDOI
TL;DR: The atomic structure around the metal-binding site in samples where amyloid-β (Aβ) peptides are complexed with either Cu(II) or Zn(II), and the histidine residues coordinated to the metal in the various peptides studied are determined.

183 citations


Journal ArticleDOI
TL;DR: Several kernel functions are proposed to model parse-tree properties in kernel-based machines, for example perceptrons or support vector machines; tree kernels allow for a general and easily portable feature engineering method applicable to a large family of natural language processing tasks.
Abstract: The availability of large scale data sets of manually annotated predicate-argument structures has recently favored the use of machine learning approaches to the design of automated semantic role labeling (SRL) systems. The main research in this area relates to the design choices for feature representation and for effective decompositions of the task in different learning models. Regarding the former choice, structural properties of full syntactic parses are largely employed as they represent ways to encode different principles suggested by the linking theory between syntax and semantics. The latter choice relates to several learning schemes over global views of the parses. For example, re-ranking stages operating over alternative predicate-argument sequences of the same sentence have shown to be very effective. In this article, we propose several kernel functions to model parse tree properties in kernel-based machines, for example, perceptrons or support vector machines. In particular, we define different kinds of tree kernels as general approaches to feature engineering in SRL. Moreover, we extensively experiment with such kernels to investigate their contribution to individual stages of an SRL architecture both in isolation and in combination with other traditional manually coded features. The results for boundary recognition, classification, and re-ranking stages provide systematic evidence about the significant impact of tree kernels on the overall accuracy, especially when the amount of training data is small. As a conclusive result, tree kernels allow for a general and easily portable feature engineering method which is applicable to a large family of natural language processing tasks.
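One well-known member of the tree-kernel family discussed in this abstract is the Collins-Duffy subset-tree kernel, which counts common tree fragments recursively. The sketch below uses a toy nested-tuple tree representation and a decay factor; both are illustrative assumptions, and the article's kernels include further variants beyond this one.

```python
# Hedged sketch of the Collins-Duffy subset-tree kernel.  Trees are nested
# tuples, e.g. ("S", ("NP", ("N", "kids")), ("VP", ("V", "play"))).
# The decay factor lam down-weights larger fragments.

def nodes(tree):
    """All internal nodes (tuples) of a tree, leaves excluded."""
    if isinstance(tree, str):
        return []
    result = [tree]
    for child in tree[1:]:
        result.extend(nodes(child))
    return result

def production(node):
    """Node label plus the sequence of child labels."""
    return (node[0],) + tuple(c if isinstance(c, str) else c[0] for c in node[1:])

def c_delta(n1, n2, lam):
    """Number of common fragments rooted at n1 and n2 (weighted by lam)."""
    if production(n1) != production(n2):
        return 0.0
    if all(isinstance(c, str) for c in n1[1:]):  # preterminal node
        return lam
    prod = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        prod *= 1.0 + c_delta(c1, c2, lam)
    return prod

def tree_kernel(t1, t2, lam=0.4):
    """Sum the fragment counts over all node pairs."""
    return sum(c_delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))
```

With lam=1 the kernel simply counts common subset trees; smaller values keep the kernel from being dominated by large shared structures.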

171 citations


Journal ArticleDOI
TL;DR: The vapor-based procedure was found to yield more uniform layers characterized by fewer and smaller aggregates as compared with liquid-treated substrates, suggesting a similar reactivity and accessibility of the functional groups on the surface.

128 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the most fundamental non-Abelian semisuperfluid strings in high-density color superconductivity have normalizable orientational zero modes in the internal space associated with the color-flavor locking symmetry broken in the presence of the strings.
Abstract: The most fundamental strings in high-density color superconductivity are the non-Abelian semisuperfluid strings, which have color-gauge flux tubes but behave as superfluid vortices from an energetic point of view. We show that in addition to the usual translational zero modes, these vortices have normalizable orientational zero modes in the internal space, associated with the color-flavor locking symmetry broken in the presence of the strings. The interaction between two parallel non-Abelian semisuperfluid strings is derived for general relative orientational zero modes, showing a universal repulsion. This implies that the previously known superfluid vortices, formed by spontaneously broken $U(1{)}_{\mathrm{B}}$, are unstable to decay. Moreover, our result proves the stability of color superconductors in the presence of external color-gauge fields.

119 citations


Journal ArticleDOI
TL;DR: It was shown that FTIR microspectroscopy is a rapid and accurate tool to simultaneously probe the major biochemical events associated with the autolytic process and the intrinsically higher sensitivity of ATR with respect to transmission spectra in analyzing autolysis was also demonstrated.

96 citations


Book ChapterDOI
07 Jan 2008
TL;DR: This paper formally defines the notion of (minimal) explanation of (un)realizability, proposes algorithms to compute such explanations, and provides a preliminary experimental evaluation.
Abstract: Realizability - checking whether a specification can be implemented by an open system - is a fundamental step in the design flow. However, if the specification turns out not to be realizable, there is no method to pinpoint the causes for unrealizability. In this paper, we address the open problem of providing diagnostic information for realizability: we formally define the notion of (minimal) explanation of (un)realizability, we propose algorithms to compute such explanations, and provide a preliminary experimental evaluation.

86 citations


Journal ArticleDOI
TL;DR: In this article, Tauc-Lorentz (TL), Forouhi-Bloomer (FB) and modified FB models were applied to the interband absorption of amorphous carbon films.
Abstract: Parametrization models of optical constants, namely Tauc-Lorentz (TL), Forouhi-Bloomer (FB) and modified FB models, were applied to the interband absorption of amorphous carbon films. The optical constants were determined by means of transmittance and reflectance measurements in the visible range. The studied films were prepared by rf sputtering and characterized for their chemical properties. The analytical models were also applied to other optical data published in the literature pertaining to films produced by various deposition techniques. The different approaches used to determine important physical parameters of the interband transition yielded different results. A figure-of-merit was introduced to check the applicability of the models, and the results showed that FB modified for an energy dependence of the dipole matrix element adequately represents the interband transition in the amorphous carbons. Further, the modified FB model shows a relative superiority over the TL one concerning the determination of the band gap energy, as it is the only one to be validated by an independent, though indirect, gap measurement by X-ray photoelectron spectroscopy. Finally, the application of the modified FB model allowed us to establish some important correlations between film structure and optical absorption properties.
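For reference, the two principal parametrizations compared in this work are commonly written as follows in the optical-constants literature. These are the standard published forms (A, E_0, C, B are model parameters and E_g the gap energy), not expressions reproduced from this paper, and the modified FB variant adds an energy dependence to the dipole matrix element on top of the FB form.

```latex
% Tauc-Lorentz imaginary part of the dielectric function:
\varepsilon_2^{\mathrm{TL}}(E) =
\begin{cases}
  \dfrac{A\,E_0\,C\,(E-E_g)^2}{\left[(E^2-E_0^2)^2 + C^2E^2\right]E}, & E > E_g,\\[4pt]
  0, & E \le E_g.
\end{cases}
\qquad
% Forouhi-Bloomer extinction coefficient:
k^{\mathrm{FB}}(E) = \frac{A\,(E-E_g)^2}{E^2 - BE + C}
```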

Journal ArticleDOI
TL;DR: A feedback cooling technique is applied to simultaneously cool the three electromechanical normal modes of the ton-scale resonant-bar gravitational wave detector AURIGA; the same technique could make it possible to approach the quantum ground state of a kilogram-scale mechanical resonator.
Abstract: We apply a feedback cooling technique to simultaneously cool the three electromechanical normal modes of the ton-scale resonant-bar gravitational wave detector AURIGA. The measuring system is based on a dc superconducting quantum interference device (SQUID) amplifier, and the feedback cooling is applied electronically to the input circuit of the SQUID. Starting from a bath temperature of 4.2 K, we achieve a minimum temperature of 0.17 mK for the coolest normal mode. The same technique, implemented in a dedicated experiment at subkelvin bath temperature and with a quantum-limited SQUID, could make it possible to approach the quantum ground state of a kilogram-scale mechanical resonator.

Proceedings ArticleDOI
01 Apr 2008
TL;DR: A concern-oriented framework that supports the instantiation and comparison of concern measures; it subsumes the definition of a core terminology and criteria in order to lay down a rigorous process fostering the definition of meaningful and well-founded concern measures.
Abstract: Aspect-oriented design needs to be systematically assessed with respect to modularity flaws caused by the realization of driving system concerns, such as tangling, scattering, and excessive concern dependencies. As a result, innovative concern metrics have been defined to support quantitative analyses of concern properties. However, the vast majority of these measures have not yet been theoretically validated, nor have they gained acceptance in academic or industrial settings. The core reason for this problem is that they have not been built using a clearly-defined terminology and criteria. This paper defines a concern-oriented framework that supports the instantiation and comparison of concern measures. The framework subsumes the definition of a core terminology and criteria in order to lay down a rigorous process to foster the definition of meaningful and well-founded concern measures. To evaluate the framework's generality, we demonstrate its instantiation and extension to a number of concern measure suites previously used in empirical studies of aspect-oriented software maintenance.

Journal ArticleDOI
02 Dec 2008
TL;DR: The simulated impedance and phase plots of polymer, working in thickness mode, have been compared with measured data and the equivalent circuit parameters are derived from analogies between lossy electrical transmission line and acoustic wave propagation.
Abstract: This work presents the transmission line equivalent model for lossy piezoelectric polymers and its SPICE implementation. The model includes the mechanical/viscoelastic, dielectric/electrical, and piezoelectric/electromechanical losses in a novel way by using complex elastic, dielectric, and piezoelectric constants obtained from the measured impedances of PVDF and PVDF-TrFE samples by nonlinear regression technique. The equivalent circuit parameters are derived from analogies between a lossy electrical transmission line and acoustic wave propagation. The simulated impedance and phase plots of various samples, working in thickness mode, have been shown to agree well with the measured data.

Journal ArticleDOI
TL;DR: An approach for computing a consensus translation from the outputs of multiple machine translation (MT) systems by weighted majority voting on a confusion network, similar to the well-established ROVER approach of Fiscus for combining speech recognition hypotheses.
Abstract: This paper describes an approach for computing a consensus translation from the outputs of multiple machine translation (MT) systems. The consensus translation is computed by weighted majority voting on a confusion network, similarly to the well-established ROVER approach of Fiscus for combining speech recognition hypotheses. To create the confusion network, pairwise word alignments of the original MT hypotheses are learned using an enhanced statistical alignment algorithm that explicitly models word reordering. The context of a whole corpus of automatic translations rather than a single sentence is taken into account in order to achieve high alignment quality. The confusion network is rescored with a special language model, and the consensus translation is extracted as the best path. The proposed system combination approach was evaluated in the framework of the TC-STAR speech translation project. Up to six state-of-the-art statistical phrase-based translation systems from different project partners were combined in the experiments. Significant improvements in translation quality from Spanish to English and from English to Spanish in comparison with the best of the individual MT systems were achieved under official evaluation conditions.
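Once the hypotheses have been aligned into a confusion network, the voting step itself is simple: each slot holds one candidate word per system (with an empty string for a deletion), and the consensus keeps the highest-weighted word per slot. The sketch below assumes the alignment — the hard part this paper addresses with its enhanced statistical alignment algorithm — has already been done, and the example words and weights are invented.

```python
# Hedged sketch of weighted majority voting on a confusion network.
# network[i] lists the i-th slot's candidates, one per MT system
# ("" marks a deletion); weights holds the per-system vote weights.
# Building the network via word alignment is assumed done upstream.

def consensus(network, weights):
    out = []
    for slot in network:
        votes = {}
        for word, weight in zip(slot, weights):
            votes[word] = votes.get(word, 0.0) + weight
        best = max(votes, key=votes.get)
        if best:  # an empty string winning means the slot is dropped
            out.append(best)
    return out
```

In the paper the network is additionally rescored with a language model before the best path is extracted; plain per-slot voting is the baseline form of the idea.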

Proceedings Article
01 May 2008
TL;DR: The development of the QALL-ME ontology for the tourism domain and its alignment with the upper ontologies WordNet and SUMO are described; the aligned ontology was used to semantically annotate original data obtained from tourism web sites as well as natural language questions.
Abstract: With the appearance of Semantic Web technologies, it becomes possible to develop novel, sophisticated question answering systems, where ontologies are usually used as the core knowledge component. In the EU-funded project, QALL-ME, a domain-specific ontology was developed and applied for question answering in the domain of tourism, along with the assistance of two upper ontologies for concept expansion and reasoning. This paper focuses on the development of the QALL-ME ontology in the tourism domain and its alignment with the upper ontologies - WordNet and SUMO. The design of the ontology is presented in the paper, and a semi-automatic alignment procedure is described with some alignment results given as well. Furthermore, the aligned ontology was used to semantically annotate original data obtained from the tourism web sites and natural language questions. The storage schema of the annotated data and the data access method for retrieving answers from the annotated data are also reported in the paper.

Proceedings ArticleDOI
06 May 2008
TL;DR: In this paper, the authors compare the performance of different sound source localization techniques in a real-time implementation: GCF and OGCF are compared to a suboptimal LS search method, and adaptive eigenvalue decomposition is evaluated as an alternative to GCC-PHAT for TDOA estimation.
Abstract: Comparing the different sound source localization techniques proposed in the literature during the last decade represents a relevant topic in order to establish advantages and disadvantages of a given approach in a real-time implementation. Traditionally, algorithms for sound source localization rely on an estimation of time difference of arrival (TDOA) at microphone pairs through GCC-PHAT. When several microphone pairs are available, the source position can be estimated as the point in space that best fits the set of TDOA measurements by applying global coherence field (GCF), also known as SRP-PHAT, or oriented global coherence field (OGCF). A first analysis compares the performance of GCF and OGCF to a suboptimal LS search method. In a second step, adaptive eigenvalue decomposition is implemented as an alternative to GCC-PHAT in TDOA estimation. Comparative experiments are conducted on signals acquired by a linear array during WOZ experiments in an interactive-TV scenario. Changes in performance according to different SNR levels are reported.
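The GCC-PHAT step underlying these methods can be sketched compactly: the cross-spectrum of two microphone signals is whitened by its magnitude (the PHAT weighting), so the peak of its inverse transform marks the inter-channel delay. This is a minimal sketch; real systems add windowing, sub-sample interpolation, and noise handling.

```python
import numpy as np

# Hedged sketch of TDOA estimation via GCC-PHAT between two microphone
# signals.  PHAT weighting flattens the cross-spectrum magnitude so the
# inverse transform concentrates at the true lag.

def gcc_phat_delay(sig, ref, fs=1.0):
    n = len(sig) + len(ref)                  # zero-pad: linear correlation
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12           # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                        # delay of sig relative to ref
```

With several microphone pairs, the per-pair delays (or the coherence values behind them) feed the GCF/OGCF maps described above.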

Journal ArticleDOI
TL;DR: In this article, the authors report on the latest results from the development of 3D silicon radiation detectors at Fondazione Bruno Kessler of Trento (FBK), Italy (formerly ITC-IRST), which involves columnar electrodes of both doping types, etched from alternate wafer sides, stopping a short distance (d) from the opposite surface.
Abstract: We report on the latest results from the development of 3-D silicon radiation detectors at Fondazione Bruno Kessler of Trento (FBK), Italy (formerly ITC-IRST). Building on the results obtained from previous devices (3-D Single-Type-Column), a new detector concept has been defined, namely 3-D-DDTC (Double-sided Double-Type Column), which involves columnar electrodes of both doping types, etched from alternate wafer sides, stopping a short distance (d) from the opposite surface. Simulations prove that, if d is kept small with respect to the wafer thickness, this approach can yield charge collection properties comparable to those of standard 3-D detectors, with the advantage of a simpler fabrication process. Two wafer layouts have been designed with reference to this technology, and two fabrication runs have been performed. Technological and design aspects are reported in this paper, along with simulation results and initial results from the characterization of detectors and test structures belonging to the first 3-D-DDTC batch.

Journal ArticleDOI
TL;DR: The results show that state-based testing is complementary to the existing Web testing techniques and can reveal faults otherwise unnoticed or hard to reveal with the other techniques.
Abstract: Asynchronous Javascript And XML (AJAX) is a recent technology used to develop rich and dynamic Web applications. Different from traditional Web applications, AJAX applications consist of a single page whose elements are updated dynamically in response to callbacks activated asynchronously by the user or by a server message. On the one hand, AJAX improves the responsiveness and usability of a Web application, but on the other hand, it makes the testing phase more difficult. In this paper, our state-based testing technique, developed to test AJAX-based applications, is compared to existing Web testing techniques, such as white-box and black-box ones. To this aim, an experiment based on two case studies has been conducted to evaluate effectiveness and test effort involved in the compared Web testing techniques. In particular, the capability of each technique to reveal injected faults of different fault categories is analyzed in detail. The associated effort was also measured. The results show that state-based testing is complementary to the existing Web testing techniques and can reveal faults otherwise unnoticed or hard to reveal with the other techniques.

Journal ArticleDOI
TL;DR: The room temperature photoluminescence from single microdisks shows the characteristic modal structure of whispering-gallery modes, and a modification of mode linewidth by a factor 13 as a function of pump power is observed.
Abstract: We report on visible light emission from Si-nanocrystal based optically active microdisk resonators. The room temperature photoluminescence (PL) from single microdisks shows the characteristic modal structure of whispering-gallery modes. The emission is both TE- and TM-polarized in 300 nm thick microdisks, while thinner ones (135 nm) support only TE-like modes. Thinner disks have the advantage of filtering out higher-order radial mode families, allowing measurement of only the most intense first-order modal structure. We reveal subnanometer linewidths and corresponding quality factors as high as 2800, limited by the spectral resolution of the experimental setup. Moreover, we observe a modification of mode linewidth by a factor 13 as a function of pump power. The origin of this effect is attributed to an excited carrier absorption loss mechanism.
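The reported quality factors follow directly from the measured linewidths via Q = λ/Δλ. As a worked check — where the 700 nm peak wavelength is an assumed, typical value for Si-nanocrystal emission, not a figure from the paper:

```python
# Worked check of the quality factor Q = lambda / delta_lambda.
# Both numbers below are assumed illustrative values; the paper reports
# Q up to 2800 with subnanometer linewidths.

wavelength_nm = 700.0      # assumed Si-nanocrystal emission peak
linewidth_nm = 0.25        # an example subnanometer linewidth
q_factor = wavelength_nm / linewidth_nm
```

A 0.25 nm linewidth at 700 nm thus corresponds to Q = 2800, consistent with the setup-resolution-limited values quoted.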

Journal ArticleDOI
TL;DR: In this paper, the authors reported the parallel fabrication of miniaturized chemical sensors by the direct integration of nanostructured transition metal oxide films onto micro-hotplate platforms based on micromachined suspended membranes.
Abstract: We report the parallel fabrication of miniaturized chemical sensors by the direct integration of nanostructured transition metal oxide films onto micro-hotplate platforms based on micromachined suspended membranes. This has been achieved by local deposition on a 10 × 10 membrane wafer of a supersonic cluster beam through a microfabricated auto-aligning silicon shadow mask. The sensing properties of the obtained devices were tested with respect to various gaseous species. For reducing and oxidizing species such as ethanol and NO2, very good performance in terms of linearity and sensitivity was observed. These results demonstrate the feasibility of coupling a bottom-up nanofabrication technique such as supersonic cluster beam deposition to a top-down microfabricated platform for a direct and parallel integration methodology of nanomaterials in MEMS.

Book ChapterDOI
20 Oct 2008
TL;DR: A new algorithm is proposed to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph with their abstract transformer; the technique is able to generate diagnostic information in case of property violations.
Abstract: Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.

Proceedings ArticleDOI
18 Nov 2008
TL;DR: A CMOS interface for a piston-type MEMS capacitive microphone performs a capacitance-to-voltage conversion by bootstrapping the sensor through a voltage pre-amplifier, feeding a third-order sigma-delta modulator.
Abstract: A CMOS interface for a piston-type MEMS capacitive microphone is presented. It performs a capacitance-to-voltage conversion by bootstrapping the sensor through a voltage pre-amplifier, feeding a third-order sigma-delta modulator. The bootstrapping performs active parasitic compensation, improving the readout sensitivity by ~12 dB. The total current consumption is 460 µA from a 1.8 V supply. The digital output achieves 80 dBA dynamic range, with 63 dBA peak SNR, in a 0.35 µm 2P/4M CMOS technology. The paper includes electrical and acoustic measurement results for the interface.

Journal ArticleDOI
TL;DR: A model in which multiple pandemic waves are triggered by coinfection with ARI is proposed and studied; it agrees well with excess-mortality data from the 1918 influenza pandemic, thereby providing indications for potential pandemic mitigation.

Proceedings ArticleDOI
12 May 2008
TL;DR: A new method using acoustic maps deals with the case of two simultaneous speakers; based on a two-step analysis of a coherence map, it allows both speakers to be localized.
Abstract: An interface for distant-talking control of home devices requires the possibility of identifying the positions of multiple users. Acoustic maps, based either on global coherence field (GCF) or oriented global coherence field (OGCF), have already been exploited successfully to determine position and head orientation of a single speaker. This paper proposes a new method using acoustic maps to deal with the case of two simultaneous speakers. The method is based on a two-step analysis of a coherence map: first the dominant speaker is localized; then the map is modified by compensating for the effects due to the first speaker, and the position of the second speaker is detected. Simulations were carried out to show how an appropriate analysis of OGCF and GCF maps allows one to localize both speakers. Experiments proved the effectiveness of the proposed solution in a linear microphone array set up.

Proceedings ArticleDOI
16 Dec 2008
TL;DR: In this article, the authors present wafer level deposition of thin polyvinylidene fluoride-trifluoroethylene P(VDF-TrFE) films by spin coating and their further patterning by dry etching.
Abstract: This work presents wafer-level deposition of thin polyvinylidene fluoride-trifluoroethylene P(VDF-TrFE) films by spin coating and their further patterning by dry etching. Uniform and controlled thicknesses were obtained over a large area (4 inch Si wafer) by varying the concentration of the solution and the spinner's speed. The absence of any standard method makes it difficult to etch the polymer films from places like pads. A new dry etch recipe, developed for this purpose, was used for the selective etching of the polymer films. In situ polarization of the polymer film has also been addressed.

Proceedings ArticleDOI
18 Aug 2008
TL;DR: An approach to ontology population based on a lexical substitution technique, which estimates the plausibility of sentences in which the named entity to be classified is substituted with the entities contained in the training data, in this case a partially populated ontology.
Abstract: We present an approach to ontology population based on a lexical substitution technique. It consists in estimating the plausibility of sentences where the named entity to be classified is substituted with the ones contained in the training data, in our case, a partially populated ontology. Plausibility is estimated by using Web data, while the classification algorithm is instance-based. We evaluated our method on two different ontology population tasks. Experiments show that our solution is effective, outperforming existing methods, and it can be applied to practical ontology population problems.

Proceedings ArticleDOI
10 May 2008
TL;DR: This paper tests the claimed benefits of Fit through a series of three controlled experiments in which Fit tables and related fixtures are used to clarify a set of change requirements in a software evolution scenario; results indicate improved correctness with no significant impact on time.
Abstract: Test-driven software development tackles the problem of operationally defining the features to be implemented by means of test cases. This approach was recently ported to the early development phase, when requirements are gathered and clarified. Among the existing proposals, Fit (Framework for Integrated Testing) supports the precise specification of requirements by means of so-called Fit tables, which express relevant usage scenarios in a tabular format easily understood also by the customer. Fit tables can be turned into executable test cases through the creation of pieces of glue code, called fixtures. In this paper, we test the claimed benefits of Fit through a series of three controlled experiments in which Fit tables and related fixtures are used to clarify a set of change requirements, in a software evolution scenario. Results indicate improved correctness achieved with no significant impact on time; however, the benefits of Fit vary substantially depending on the developers' experience. Preliminary results on the usage of Fit in combination with pair programming revealed another relevant source of variation.
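The Fit table/fixture pairing described above can be sketched in miniature: a requirement is written as a table of inputs and expected outputs, and a small fixture runs each row against the code under test. The discount rule and table below are invented examples for illustration, not material from the paper's experiments (and Fit itself is a Java framework; this is a language-agnostic sketch of the idea).

```python
# Hedged sketch of the Fit idea: requirements as tables, checked by a
# fixture.  The discount rule and rows are hypothetical examples.

def discount(amount):              # code under test
    return 0.1 * amount if amount >= 100 else 0.0

fit_table = [                      # columns: amount | expected discount
    (100, 10.0),
    (99,  0.0),
    (250, 25.0),
]

def run_fixture(table, func):
    """Return per-row pass/fail results, as a Fit table run would."""
    return [func(amount) == expected for amount, expected in table]
```

Because the table is readable by the customer and executable by the fixture, it serves both as a requirement clarification and as a regression test.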

Journal ArticleDOI
TL;DR: The use of synchrotron radiation (SR) as an excitation source for total reflection X-ray fluorescence analysis (TXRF) offers several advantages over X-ray tube excitation, as discussed in this paper.
Abstract: The use of synchrotron radiation (SR) as an excitation source for total reflection X-ray fluorescence analysis (TXRF) offers several advantages over X-ray tube excitation. Detection limits in the fg range can be achieved with efficient excitation for low-Z as well as high-Z elements due to the features of synchrotron radiation, in particular the high brilliance over a wide spectral range and the linear polarization in the orbital plane. SR-TXRF is especially interesting for samples where only small sample masses are available. The lowest detection limits are typically achieved using multilayer monochromators, since they exhibit a bandwidth of about 0.01 ΔE/E. Monochromators with smaller bandwidth, like perfect crystals, reduce the intensity but allow X-ray absorption spectroscopy (XAS) measurements in fluorescence mode for speciation and chemical characterisation at trace levels. SR-TXRF is performed at various synchrotron radiation facilities. A historical overview is presented, and recent setups and applications as well as some critical aspects are reviewed.

Proceedings Article
01 May 2008
TL;DR: A first implementation of a tool for valence shifting of natural language texts, named Valentino (VALENced Text INOculator), is presented; it can modify existing textual expressions towards more positively or negatively valenced versions.
Abstract: In this paper a first implementation of a tool for valence shifting of natural language texts, named Valentino (VALENced Text INOculator), is presented. Valentino can modify existing textual expressions towards more positively or negatively valenced versions. To this end, we built specific resources gathering various valenced terms that are semantically or contextually connected, and implemented strategies that use these resources for substituting input terms.