
Showing papers by "Fondazione Bruno Kessler" published in 2011


Journal ArticleDOI
TL;DR: A completely automatic algorithm (ADJUST) is proposed that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features, providing a fast, efficient, and automatic way to use ICA for artifact removal.
Abstract: A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) removal of the artifacted components detected by ADJUST leads to neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal. Descriptors: Electroencephalography, Independent component analysis, EEG artifacts, Event-related potentials, Ongoing brain activity, Automatic classification, Thresholding
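
As a rough illustration of the approach, the following sketch scores each independent component on one temporal and one spatial feature and flags joint outliers. It is a toy stand-in, not the published ADJUST algorithm: the feature definitions, the thresholding rule, and all names are invented for this example.

```python
import numpy as np

def reject_artifact_components(ics, topographies, threshold_sigma=2.0):
    """Toy ADJUST-style screening: flag ICs whose artifact features are
    outliers relative to the component population.
    ics:          (n_components, n_samples) IC activations
    topographies: (n_components, n_channels) scalp maps
    """
    # Temporal feature: kurtosis of the activation (blink ICs are spiky)
    act = ics - ics.mean(axis=1, keepdims=True)
    m2 = (act ** 2).mean(axis=1)
    kurt = (act ** 4).mean(axis=1) / m2 ** 2 - 3.0
    # Spatial feature: concentration of the scalp map (toy proxy only)
    spatial_peak = np.abs(topographies).max(axis=1) / np.abs(topographies).mean(axis=1)
    flagged = []
    for feat in (kurt, spatial_peak):
        thr = feat.mean() + threshold_sigma * feat.std()
        flagged.append(feat > thr)
    # Require both spatial AND temporal evidence, mirroring the idea of
    # combining the two feature types before rejecting a component
    return np.where(flagged[0] & flagged[1])[0]
```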

1,060 citations


Journal ArticleDOI
Stefan Hild1, M. R. Abernathy1, Fausto Acernese2, Pau Amaro-Seoane3, Nils Andersson4, K. G. Arun5, Fabrizio Barone2, B. Barr1, M. Barsuglia, Mark Beker, N. Beveridge1, S. Birindelli6, Suvadeep Bose7, L. Bosi, S. Braccini8, C. Bradaschia8, Tomasz Bulik9, Enrico Calloni10, Giancarlo Cella8, E. Chassande Mottin, S. Chelkowski11, Andrea Chincarini, James S. Clark12, E. Coccia13, C. Colacino8, J. Colas, A. Cumming1, L. Cunningham1, E. Cuoco, S. L. Danilishin14, Karsten Danzmann3, R. De Salvo15, T. Dent12, R. De Rosa10, L. Di Fiore10, A. Di Virgilio8, M. Doets16, V. Fafone13, Paolo Falferi17, R. Flaminio, J. Franc, F. Frasconi8, Andreas Freise11, D. Friedrich18, Paul Fulda11, Jonathan R. Gair19, Gianluca Gemme, E. Genin, A. Gennai11, A. Giazotto8, Kostas Glampedakis20, Christian Gräf3, M. Granata, Hartmut Grote3, G. M. Guidi21, A. Gurkovsky14, G. D. Hammond1, Mark Hannam12, Jan Harms15, D. Heinert22, Martin Hendry1, Ik Siong Heng1, E. Hennes, J. H. Hough, Sascha Husa23, S. H. Huttner1, G. T. Jones12, F. Y. Khalili14, Keiko Kokeyama11, Kostas D. Kokkotas20, Badri Krishnan3, Tjonnie G. F. Li, M. Lorenzini, H. Lück3, Ettore Majorana, Ilya Mandel24, Vuk Mandic25, M. Mantovani8, I. W. Martin1, Christine Michel, Y. Minenkov13, N. Morgado, S. Mosca10, B. Mours26, Helge Müller-Ebhardt18, P. G. Murray1, Ronny Nawrodt22, Ronny Nawrodt1, John Nelson1, Richard O'Shaughnessy27, Christian D. Ott15, C. Palomba, Angela Delli Paoli, G. Parguez, A. Pasqualetti, R. Passaquieti8, R. Passaquieti28, D. Passuello8, Laurent Pinard, Wolfango Plastino29, Rosa Poggiani8, Rosa Poggiani28, P. Popolizio, Mirko Prato, M. Punturo, P. Puppo, D. S. Rabeling16, P. Rapagnani30, Jocelyn Read31, Tania Regimbau6, H. Rehbein3, S. Reid1, F. Ricci30, F. Richard, A. Rocchi, Sheila Rowan1, A. Rüdiger3, Lucía Santamaría15, Benoit Sassolas, Bangalore Suryanarayana Sathyaprakash12, Roman Schnabel3, C. Schwarz22, Paul Seidel22, Alicia M. Sintes23, Kentaro Somiya15, Fiona C. Speirits1, Kenneth A. Strain1, S. E. Strigin14, P. J. Sutton12, S. P. Tarabrin18, Andre Thüring3, J. F. J. van den Brand16, M. van Veggel1, C. Van Den Broeck, Alberto Vecchio11, John Veitch12, F. Vetrano21, A. Viceré21, S. P. Vyatchanin14, Benno Willke3, Graham Woan1, Kazuhiro Yamamoto 
TL;DR: In this article, a special focus is set on evaluating the frequency band below 10 Hz, where a complex mixture of seismic, gravity gradient, suspension thermal and radiation pressure noise dominates, and a sensitivity model including the most relevant fundamental noise contributions is developed.
Abstract: Advanced gravitational wave detectors, currently under construction, are expected to directly observe gravitational wave signals of astrophysical origin. The Einstein Telescope (ET), a third-generation gravitational wave detector, has been proposed in order to fully open up the emerging field of gravitational wave astronomy. In this paper we describe sensitivity models for ET and investigate potential limits imposed by fundamental noise sources. A special focus is set on evaluating the frequency band below 10 Hz where a complex mixture of seismic, gravity gradient, suspension thermal and radiation pressure noise dominates. We develop the most accurate sensitivity model, referred to as ET-D, for a third-generation detector so far, including the most relevant fundamental noise contributions.
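
Sensitivity models of this kind are assembled by adding the power spectral densities of independent noise sources, i.e. summing amplitude spectral densities in quadrature. A minimal sketch of that bookkeeping (NumPy assumed), with placeholder power-law noise shapes rather than the actual ET-D noise models:

```python
import numpy as np

f = np.logspace(0, 4, 500)   # frequency band, 1 Hz to 10 kHz

# Illustrative strain amplitude spectral densities (1/sqrt(Hz));
# invented power laws, NOT the published ET-D curves
seismic      = 1e-18 * f ** -8                # steep low-frequency wall
gravity_grad = 1e-20 * f ** -2
suspension   = 5e-22 * (f / 10.0) ** -1
quantum      = 3e-24 * np.sqrt(10.0 / f + (f / 200.0) ** 2)  # rad. pressure + shot

# Independent noises add in power, i.e. in quadrature in amplitude
total = np.sqrt(seismic ** 2 + gravity_grad ** 2 + suspension ** 2 + quantum ** 2)
```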

682 citations


Journal ArticleDOI
TL;DR: The extracted E1 polarizability leads to a neutron skin thickness in Pb-208, thereby constraining the symmetry energy and its density dependence relevant to the description of neutron stars.
Abstract: A benchmark experiment on Pb-208 shows that polarized proton inelastic scattering at very forward angles including 0 degrees is a powerful tool for high-resolution studies of electric dipole (E1) and spin magnetic dipole (M1) modes in nuclei over a broad excitation energy range to test up-to-date nuclear models. The extracted E1 polarizability leads to a neutron skin thickness r_skin = 0.156(+0.025/−0.021) fm in Pb-208 derived within a mean-field model [Phys. Rev. C 81, 051303 (2010)], thereby constraining the symmetry energy and its density dependence relevant to the description of neutron stars.

362 citations


Proceedings ArticleDOI
07 Apr 2011
TL;DR: This work states that the introduction of SPAD devices in deep-submicron CMOS has enabled the design of massively parallel arrays where the entire photon detection and ToA circuitry is integrated on-pixel.
Abstract: Image sensors capable of resolving the time-of-arrival (ToA) of individual photons with high resolution are needed in several applications, such as fluorescence lifetime imaging microscopy (FLIM), Förster resonance energy transfer (FRET), optical rangefinding, and positron emission tomography. In FRET, for example, the typical fluorescence lifetime is of the order of 100 to 300 ps, thus deep-subnanosecond resolutions are needed in the instrument response function (IRF). This in turn requires new time-resolved image sensors with better time resolution, increased throughput, and lower costs. Solid-state avalanche photodiodes operated in Geiger mode, or single-photon avalanche diodes (SPADs), have existed for decades [1], but only recently have SPADs been integrated in CMOS. However, as array sizes have grown, the readout bottleneck has also become evident, leading to hybrid designs or more integration and more parallelism on-chip [2,3]. This trend has accelerated with the introduction of SPAD devices in deep-submicron CMOS, which have enabled the design of massively parallel arrays where the entire photon detection and ToA circuitry is integrated on-pixel [4,5].

232 citations


Journal ArticleDOI
TL;DR: A new X-ray imaging set-up is proposed, combining full-field transmission X-ray microscopy (TXM) with X-ray absorption near-edge structure (XANES) spectroscopy to follow two-dimensional and three-dimensional morphological and chemical changes in large volumes at high resolution (tens of nanometers).
Abstract: The ability to probe morphology and phase distribution in complex systems at multiple length scales unravels the interplay of nano- and micrometer-scale factors at the origin of macroscopic behavior. While different electron- and X-ray-based imaging techniques can be combined with spectroscopy at high resolutions, owing to experimental time limitations the resulting fields of view are too small to be representative of a composite sample. Here a new X-ray imaging set-up is proposed, combining full-field transmission X-ray microscopy (TXM) with X-ray absorption near-edge structure (XANES) spectroscopy to follow two-dimensional and three-dimensional morphological and chemical changes in large volumes at high resolution (tens of nanometers). TXM XANES imaging offers chemical speciation at the nanoscale in thick samples (>20 µm) with minimal preparation requirements. Further, its high throughput allows the analysis of large areas (up to millimeters) in minutes to a few hours. Proof of concept is provided using battery electrodes, although its versatility will lead to impact in a number of diverse research fields.
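
Per-pixel chemical speciation from a TXM image stack acquired across an absorption edge is commonly obtained by linear-combination fitting against reference spectra. A minimal sketch of that step, with array shapes and the non-negativity handling assumed for illustration:

```python
import numpy as np

def speciation_map(stack, refs):
    """stack: (n_energies, ny, nx) absorbance images across the edge
    refs:  (n_energies, n_phases) reference XANES spectra
    Returns per-pixel phase fractions of shape (n_phases, ny, nx)."""
    n_e, ny, nx = stack.shape
    pixels = stack.reshape(n_e, -1)                  # one spectrum per column
    # Least-squares fit of each pixel spectrum as a mix of references
    coeffs, *_ = np.linalg.lstsq(refs, pixels, rcond=None)
    coeffs = np.clip(coeffs, 0, None)                # crude non-negativity
    coeffs /= coeffs.sum(axis=0, keepdims=True) + 1e-12
    return coeffs.reshape(-1, ny, nx)
```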

225 citations


Journal ArticleDOI
TL;DR: A component-based modelling approach to system-software co-engineering of real-time embedded systems, in particular aerospace systems, centred around the standardized Architecture Analysis and Design Language (AADL) modelling framework is presented.
Abstract: This paper presents a component-based modelling approach to system-software co-engineering of real-time embedded systems, in particular aerospace systems. Our method is centred around the standardized Architecture Analysis and Design Language (AADL) modelling framework. We formalize a significant subset of AADL, incorporating its recent Error Model Annex for modelling faults and repairs. The major distinguishing aspects of this component-based approach are the possibility to describe nominal hardware and software operations, hybrid (and timing) aspects, as well as probabilistic faults and their propagation and recovery. Moreover, it supports dynamic (i.e. on-the-fly) reconfiguration of components and inter-component connections. The operational semantics gives a precise interpretation of specifications by providing a mapping onto networks of event-data automata. These networks are then subject to different kinds of formal analysis such as model checking, safety and dependability analysis and performance evaluation. Mature tool support realizes these analyses. The activities reported in this paper are carried out in the context of the Correctness, Modelling and Performance of Aerospace Systems (COMPASS) project, which is funded by the European Space Agency.
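
The analyses run on such automata networks include, at their simplest, reachability of error states. A toy explicit-state sketch over a single component with nominal and fault modes; the transition relation is invented for illustration and is far simpler than an AADL error model:

```python
from collections import deque

# Toy event automaton: nominal/degraded/failed modes with fault/repair events
transitions = {
    ("nominal",  "fault"):  "degraded",
    ("degraded", "fault"):  "failed",
    ("degraded", "repair"): "nominal",
}

def reachable(initial, bad):
    """Breadth-first search over the explicit state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state == bad:
            return True
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False

print(reachable("nominal", "failed"))   # True: two consecutive faults
```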

216 citations


Journal ArticleDOI
TL;DR: The Néel IRAM KIDs Array (NIKA), as mentioned in this paper, is a fully integrated measurement system based on kinetic inductance detectors (KIDs) currently being developed for millimeter wave astronomy.
Abstract: The Néel IRAM KIDs Array (NIKA) is a fully integrated measurement system based on kinetic inductance detectors (KIDs) currently being developed for millimeter wave astronomy. The instrument includes dual-band optics allowing simultaneous imaging at 150 GHz and 220 GHz. The imaging sensors consist of two spatially separated arrays of KIDs. The first array, mounted on the 150 GHz branch, is composed of 144 lumped-element KIDs. The second array (220 GHz) consists of 256 antenna-coupled KIDs. Each of the arrays is sensitive to a single polarization; the band splitting is achieved by using a grid polarizer. The optics and sensors are mounted in a custom dilution cryostat, with an operating temperature of ~70 mK. Electronic readout is realized using frequency multiplexing and a transmission line geometry consisting of a coaxial cable connected in series with the sensor array and a low-noise 4 K amplifier. The dual-band NIKA was successfully tested in October 2010 at the Institute for Millimetric Radio Astronomy (IRAM) 30 m telescope at Pico Veleta, Spain, performing in line with laboratory predictions. An optical NEP was then calculated to be around 2 × 10^−16 W Hz^−1/2 (at 1 Hz) while under a background loading of approximately 4 pW pixel^−1. This improvement in comparison with a preliminary run (2009) verifies that NIKA is approaching the target sensitivity for photon-noise limited ground-based detectors. Taking advantage of the larger arrays and increased sensitivity, a number of scientifically relevant faint and extended objects were then imaged, including the Galactic Center SgrB2 (FIR1), the radio galaxy Cygnus A, and the Seyfert galaxy NGC 1068. These targets were all observed simultaneously in the 150 GHz and 220 GHz atmospheric windows.

194 citations


Journal ArticleDOI
TL;DR: A set of acoustic and linguistic features that characterise emotional/emotion-related user states is described and interpreted, confined to the one database processed: four classes in a German corpus of children interacting with a pet robot.

153 citations


Journal ArticleDOI
TL;DR: A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications and is implemented on a field programmable gate array to provide fast lifetime estimates.
Abstract: A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm complementary metal-oxide-semiconductor single-photon avalanche diode (SPAD) plus time-to-digital converter array. A simple look-up table is included to enhance the lifetime resolvability range and photon economics, making it comparable to the commonly used least-square method and maximum likelihood estimation based software. To demonstrate its performance, a widefield microscope was adapted to accommodate the SPAD array and image different test samples. Fluorescence lifetime imaging microscopy on fluorescent beads in Rhodamine 6G at a frame rate of 50 fps is also shown.
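
The center-of-mass method estimates a mono-exponential lifetime from the centroid of the photon timestamps; for a decay truncated to a finite window the centroid must be corrected, which is the kind of adjustment a look-up table can provide in hardware. A software sketch under stated assumptions (timestamps measured from the excitation pulse, ideal IRF, lifetime shorter than the window):

```python
import numpy as np

def cmm_lifetime(timestamps, window):
    """Center-of-mass lifetime estimate from photon timestamps.
    For an exponential decay truncated to [0, T] the centroid is
    c = tau - T/(exp(T/tau) - 1); invert numerically for tau."""
    c = np.mean(timestamps)
    taus = np.linspace(0.05 * window, window, 10_000)   # search grid
    centroids = taus - window / np.expm1(window / taus)
    return taus[np.argmin(np.abs(centroids - c))]

# Example: photons from a 2 ns decay observed in a 10 ns window
rng = np.random.default_rng(0)
t = rng.exponential(2.0, 100_000)
t = t[t < 10.0]
print(cmm_lifetime(t, 10.0))   # ~2.0 ns
```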

111 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the CI tools are effective in dealing with the challenging problem of material classification, and the comparative analysis shows that SVM provides the best tradeoff between classification accuracy and computational complexity of the classification algorithm.
Abstract: The two major components of a robotic tactile-sensing system are the tactile-sensing hardware at the lower level and the computational/software tools at the higher level. Focusing on the latter, this research assesses the suitability of computational-intelligence (CI) tools for tactile-data processing. In this context, this paper addresses the classification of sensed object material from the recorded raw tactile data. For this purpose, three CI paradigms, namely, the support-vector machine (SVM), regularized least square (RLS), and regularized extreme learning machine (RELM), have been employed, and their performance is compared for the said task. The comparative analysis shows that SVM provides the best tradeoff between classification accuracy and computational complexity of the classification algorithm. Experimental results indicate that the CI tools are effective in dealing with the challenging problem of material classification.
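
A minimal sketch of the SVM route using scikit-learn (assumed available); the synthetic feature vectors below merely stand in for statistics extracted from raw tactile-array data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one feature vector per touch, one material label per sample
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))      # e.g. flattened tactile-array statistics
y = rng.integers(0, 3, size=300)    # three material classes (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # ~chance here: labels are random
```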

95 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: The algorithm combines symbolic information with dynamic analysis and has two key advantages: It does not require any change in the underlying test data generation technique and it avoids many problems traditionally associated with symbolic execution, in particular the presence of loops.
Abstract: We present an algorithm for constructing fitness functions that improve the efficiency of search-based testing when trying to generate branch adequate test data. The algorithm combines symbolic information with dynamic analysis and has two key advantages: It does not require any change in the underlying test data generation technique and it avoids many problems traditionally associated with symbolic execution, in particular the presence of loops. We have evaluated the algorithm on industrial closed source and open source systems using both local and global search-based testing techniques, demonstrating that both are statistically significantly more efficient using our approach. The test for significance was done using a one-sided, paired Wilcoxon signed rank test. On average, the local search requires 23.41% and the global search 7.78% fewer fitness evaluations when using a symbolic execution based fitness function generated by the algorithm.
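
The significance test itself is standard. Given paired counts of fitness evaluations for the same test subjects under both fitness functions, a one-sided Wilcoxon signed rank test can be run with SciPy as follows (the counts are invented for illustration):

```python
from scipy.stats import wilcoxon

# Paired fitness-evaluation counts per test subject (invented numbers)
baseline = [5400, 7210, 3100, 8920, 4660, 6030, 7775, 5120]
symbolic = [4100, 5490, 2980, 6870, 3520, 4610, 6900, 3900]

# One-sided test: does the symbolic fitness function need FEWER evaluations?
# alternative="greater" tests that baseline - symbolic is positive.
stat, p = wilcoxon(baseline, symbolic, alternative="greater")
print(f"W={stat}, p={p:.4f}")
```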

Journal ArticleDOI
01 Oct 2011
TL;DR: In this paper, a new baseline compensation scheme is proposed to reduce the time jitter due to the dark counts of the detector; it is based on a simple analog filter, which is well suited for an ASIC implementation.
Abstract: The most common method for time pick-off from signals coming from SiPMs coupled to scintillator crystals in PET applications is the Leading Edge Triggering. In this work we propose a new filtering scheme to be applied before the discriminator. It implements a type of baseline compensation aimed at the reduction of the time jitter due to the dark counts of the detector and is based on a simple analog filter, which is well suited for an ASIC implementation. We describe a circuit, built with discrete components, that implements the filter. Finally we report on the Coincidence Resolving Time measurements performed coupling the circuit to the real detector, composed of a SiPM built at FBK and a LYSO crystal, for the detection of 511 keV photons. The results obtained are promising, showing that the method is quite effective in reducing the impact of the detector noise on the timing performance of the system.
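
The principle, subtracting a slowly varying estimate of the dark-count-induced baseline before a fixed-threshold leading-edge discriminator, can be emulated digitally. A toy sketch of that idea on a sampled waveform; this is not the proposed analog circuit, and the filter constant is arbitrary:

```python
def leading_edge_with_baseline_comp(v, threshold, alpha=0.01):
    """Toy digital analogue of baseline compensation: track the baseline
    with a slow single-pole low-pass and discriminate on the compensated
    signal. v: sampled SiPM waveform. Returns the index of the first
    threshold crossing (leading-edge time pick-off), or None."""
    baseline = v[0]
    for i, sample in enumerate(v):
        if sample - baseline > threshold:
            return i
        baseline += alpha * (sample - baseline)   # slow baseline tracker
    return None
```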

Journal ArticleDOI
TL;DR: This paper proposes a new method of frequency-domain blind source separation (FD-BSS), able to separate acoustic sources in challenging conditions: a recursively regularized implementation of ICA (RR-ICA) that overcomes statistical bias and permutation problems by exploiting two types of deterministic knowledge.
Abstract: This paper proposes a new method of frequency-domain blind source separation (FD-BSS), able to separate acoustic sources in challenging conditions. In frequency-domain BSS, the time-domain signals are transformed into time-frequency series and the separation is generally performed by applying independent component analysis (ICA) at each frequency envelope. When short signals are observed and long demixing filters are required, the number of time observations for each frequency is limited and the variance of the ICA estimator increases due to the intrinsic statistical bias. Furthermore, common methods used to solve the permutation problem fail, especially with sources recorded under highly reverberant conditions. We propose a recursively regularized implementation of the ICA (RR-ICA) that overcomes the mentioned problem by exploiting two types of deterministic knowledge: 1) continuity of the demixing matrix across frequencies; 2) continuity of the time-activity of the sources. The recursive regularization propagates the statistics of the sources across frequencies, reducing the effect of statistical bias and the occurrence of permutations. Experimental results on real data show that the algorithm can successfully perform a fast separation of short signals (e.g., 0.5-1 s), by estimating long demixing filters to deal with highly reverberant environments (e.g., ms).
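
One of the two continuity priors, the smoothness of each source's time activity across frequency, is also the classical cue for repairing ICA's per-bin permutation ambiguity. A sketch of that alignment step alone, assuming the per-bin ICA outputs are already given (brute-force over permutations, feasible for the small source counts typical of BSS):

```python
import numpy as np
from itertools import permutations

def align_permutations(S):
    """S: (n_freqs, n_sources, n_frames) magnitudes of per-bin ICA outputs.
    Reorder each bin's sources to maximise envelope correlation with the
    previous (already aligned) bin: the time-activity continuity cue."""
    n_f, n_s, _ = S.shape
    for f in range(1, n_f):
        prev = S[f - 1] - S[f - 1].mean(axis=1, keepdims=True)
        cur = S[f] - S[f].mean(axis=1, keepdims=True)
        best, best_score = None, -np.inf
        for perm in permutations(range(n_s)):
            score = sum(np.dot(prev[i], cur[p]) for i, p in enumerate(perm))
            if score > best_score:
                best, best_score = perm, score
        S[f] = S[f][list(best)]
    return S
```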

Journal ArticleDOI
TL;DR: In this paper, a lock-in pixel array based on a buried channel photo-detector aimed at time-of-flight range imaging is presented, which provides a stream of three-dimensional images at 5-20 fps on a 3-6 m range, with a linearity error lower than 0.7% and a repeatability of 5-16 cm.
Abstract: This paper presents the design and characterization of a lock-in pixel array based on a buried channel photo-detector aimed at time-of-flight range imaging. The proposed photo-demodulator has been integrated in a 10-μm pixel pitch with a fill factor of 24%, and is capable of a maximum demodulation frequency of 50 MHz with a contrast of 29.5%. The sensor has been fabricated in a 0.18-μm CMOS imaging technology and assembled in a range camera system setup. The system provides a stream of three-dimensional images at 5-20 fps on a 3-6 m range, with a linearity error lower than 0.7% and a repeatability of 5-16 cm, while the best achievable precision is 2.7 cm at a 50-MHz modulation frequency.
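
For context, in continuous-wave indirect time-of-flight the distance follows from the phase of the demodulated return, conventionally computed from four correlation samples taken 90° apart. A sketch of the standard formulas (illustrative only, not the sensor's on-chip processing):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 50e6        # modulation frequency, Hz (the pixel's quoted maximum)

def itof_distance(a0, a1, a2, a3):
    """Distance from four correlation samples taken 90 degrees apart."""
    phase = np.arctan2(a3 - a1, a0 - a2)     # demodulated phase, [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)         # fold into [0, 2*pi)
    # Unambiguous range is c/(2*F_MOD), i.e. ~3 m at 50 MHz
    return C * phase / (4 * np.pi * F_MOD)

def demodulation_contrast(a0, a1, a2, a3):
    """Amplitude over offset: the figure of merit quoted for the pixel."""
    return 2 * np.hypot(a3 - a1, a0 - a2) / (a0 + a1 + a2 + a3)
```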

Journal ArticleDOI
TL;DR: In this paper, a large-scale annotated biomedical corpus covering four semantic groups, produced through the harmonisation of annotations from automatic text mining solutions (the first version of the Silver Standard Corpus, SSC-I), was used for the CALBC Challenge.
Abstract: Background: Competitions in text mining have been used to measure the performance of automatic text processing solutions against a manually annotated gold standard corpus (GSC). The preparation of the GSC is time-consuming and costly, and the final corpus consists of at most a few thousand documents annotated with a limited set of semantic groups. To overcome these shortcomings, the CALBC project partners (PPs) have produced a large-scale annotated biomedical corpus with four different semantic groups through the harmonisation of annotations from automatic text mining solutions, the first version of the Silver Standard Corpus (SSC-I). The four semantic groups are chemical entities and drugs (CHED), genes and proteins (PRGE), diseases and disorders (DISO) and species (SPE). This corpus has been used for the First CALBC Challenge, asking the participants to annotate the corpus with their text processing solutions. Results: All four PPs from the CALBC project and, in addition, 12 challenge participants (CPs) contributed annotated data sets for an evaluation against the SSC-I. CPs could ignore the training data and deliver the annotations from their genuine annotation system, or could train a machine-learning approach on the provided preannotated data. In general, the performances of the annotation solutions were lower for entities from the categories CHED and PRGE in comparison to the identification of entities categorized as DISO and SPE. The best performance over all semantic groups was achieved by two annotation solutions that had been trained on the SSC-I. The data sets from participants were used to generate the harmonised Silver Standard Corpus II (SSC-II), provided the participant did not make use of the annotated data set from the SSC-I for training purposes. The performances of the participants' solutions were again measured against the SSC-II and again showed better results for DISO and SPE in comparison to CHED and PRGE.
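
Harmonising annotations from several systems into a silver standard is, at its core, a voting problem. A minimal sketch with exact-span voting only; the real harmonisation also reconciles overlapping spans and boundary disagreements:

```python
from collections import Counter

def silver_standard(annotation_sets, min_votes):
    """annotation_sets: one set of (start, end, semantic_group) triples per
    contributing system. Keep annotations proposed by >= min_votes systems."""
    votes = Counter()
    for annotations in annotation_sets:
        votes.update(set(annotations))   # each system votes once per span
    return {ann for ann, n in votes.items() if n >= min_votes}

# Example with three systems agreeing on one DISO span
s1 = {(10, 18, "DISO"), (30, 35, "PRGE")}
s2 = {(10, 18, "DISO")}
s3 = {(10, 18, "DISO"), (50, 57, "CHED")}
print(silver_standard([s1, s2, s3], min_votes=2))   # {(10, 18, 'DISO')}
```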

Proceedings ArticleDOI
09 Jun 2011
TL;DR: The article reports the geometric investigation of the Kinect active sensor, considering its measurement performance, the accuracy of the retrieved range data and the possibility of using it for 3D modeling applications.
Abstract: 3D imaging systems are widely available and used for surveying, modeling and entertainment applications, but clear statements regarding their characteristics, performances and limitations are still missing. The VDI/VDE and the ASTM E57 committees are trying to set some standards but the commercial market is not reacting properly. Since many new users are approaching these 3D recording methodologies, clear statements and information clarifying whether a package or system satisfies certain requirements before investing are fundamental for those users who are not really familiar with these technologies. Recently, small and portable consumer-grade active sensors came on the market, like TOF range-imaging cameras or low-cost triangulation-based range sensors. A quite interesting active system was produced by PrimeSense and launched on the market, thanks to the Microsoft Xbox project, under the name Kinect. The article reports the geometric investigation of the Kinect active sensor, considering its measurement performance, the accuracy of the retrieved range data and the possibility of using it for 3D modeling applications.

Journal ArticleDOI
TL;DR: An SIR transmission model with dynamic vaccine demand is studied, based on an imitation mechanism where the perceived risk of vaccination is modelled as a function of the incidence of vaccine side effects.
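
A minimal sketch of an SIR model coupled to imitation-driven vaccine uptake, in the spirit described by the TL;DR; the functional forms and parameter values below are invented for illustration and are not the paper's model:

```python
import numpy as np
from scipy.integrate import odeint

beta, gamma, mu = 0.4, 0.1, 1 / (75 * 365)   # transmission, recovery, birth/death
k, alpha = 0.002, 10.0                       # imitation speed, risk sensitivity

def model(y, t):
    S, I, p = y   # susceptibles, infectives, fraction demanding vaccination
    dS = mu * (1 - p) - beta * S * I - mu * S   # newborns vaccinated w.p. p
    dI = beta * S * I - gamma * I - mu * I
    # Imitation dynamics: the payoff gain of vaccinating grows with infection
    # prevalence and falls with perceived vaccine risk (here tied to uptake p
    # as a crude stand-in for the incidence of side effects)
    payoff = beta * I - alpha * mu * p
    dp = k * p * (1 - p) * payoff
    return [dS, dI, dp]

t = np.linspace(0, 365 * 50, 20_000)
S, I, p = odeint(model, [0.9, 1e-4, 0.5], t).T
```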

Journal ArticleDOI
TL;DR: In this article, the behavior of silicon photomultipliers (SiPMs) at low temperatures is investigated: I-V characteristics, breakdown voltage, dark noise, afterpulsing, crosstalk, pulse shape, gain and photon detection efficiency are studied as a function of temperature in the range 50 K ≤ T ≤ 320 K.
Abstract: We investigate the behavior of silicon photomultipliers (SiPMs) at low temperatures: I–V characteristics, breakdown voltage, dark noise, afterpulsing, crosstalk, pulse shape, gain and photon detection efficiency are studied as a function of temperature in the range 50 K ≤ T ≤ 320 K. We discuss our measurements on the basis of the temperature-dependent properties of silicon and of the models related to carrier generation, transport and multiplication in high electric field. We conclude that SiPMs provide an excellent alternative to vacuum tube photomultipliers (PMTs) in low temperature environments, even better than in room temperature ones: in particular they excel in the interval 100 K ≤ T ≤ 200 K.

Book ChapterDOI
14 Jul 2011
TL;DR: KRATOS verifies safety properties, in the form of program assertions, by allowing users to explore two directions in the verification: relying on the translation from SystemC designs to sequential C programs, it is capable of model checking the resulting C programs using the symbolic lazy predicate abstraction technique.
Abstract: The growing popularity of SystemC has attracted research aimed at the formal verification of SystemC designs. In this paper we present KRATOS, a software model checker for SystemC. KRATOS verifies safety properties, in the form of program assertions, by allowing users to explore two directions in the verification. First, by relying on the translation from SystemC designs to sequential C programs, KRATOS is capable of model checking the resulting C programs using the symbolic lazy predicate abstraction technique. Second, KRATOS implements a novel algorithm, called ESST, that combines Explicit state techniques to deal with the SystemC Scheduler, with Symbolic techniques to deal with the Threads. KRATOS is built on top of NuSMV and MathSAT, and uses state-of-the-art SMT-based techniques for program abstractions and refinements.

Proceedings Article
05 Jul 2011
TL;DR: In this article, the authors argue that virality is a phenomenon strictly connected to the nature of the content being spread, rather than to the influencers who spread it, and they provide initial experiments in a machine learning framework to show how various aspects of virality can be independently predicted according to content features.
Abstract: This paper aims to shed some light on the concept of virality, especially in social networks, and to provide new insights on its structure. We argue that: (a) virality is a phenomenon strictly connected to the nature of the content being spread, rather than to the influencers who spread it; (b) virality is a phenomenon with many facets, i.e. this generic term comprises several different effects of persuasive communication which only partially overlap. To give ground to our claims, we provide initial experiments in a machine learning framework to show how various aspects of virality can be independently predicted according to content features.

Book ChapterDOI
05 Dec 2011
TL;DR: A framework that integrates layer-specific monitoring and adaptation techniques and enables multi-layered control loops in service-based systems is proposed and evaluated on a medical imaging procedure for Computed Tomography Scans, an e-Health scenario characterized by strong dependencies between the software layer and infrastructural resources.
Abstract: Service-based applications have become more and more multi-layered in nature, as we tend to build software as a service on top of infrastructure as a service. Most existing SOA monitoring and adaptation techniques address layer-specific issues. These techniques, if used in isolation, cannot deal with real-world domains, where changes in one layer often affect other layers, and information from multiple layers is essential in truly understanding problems and in developing comprehensive solutions. In this paper we propose a framework that integrates layer-specific monitoring and adaptation techniques, and enables multi-layered control loops in service-based systems. The proposed approach is evaluated on a medical imaging procedure for Computed Tomography (CT) Scans, an e-Health scenario characterized by strong dependencies between the software layer and infrastructural resources.

Book ChapterDOI
01 Jan 2011
TL;DR: The subject area of this chapter is not emotions in some narrow sense but in a wider sense encompassing emotion-related states such as moods, attitudes, or interpersonal stances as well.
Abstract: In this chapter, we focus on the automatic recognition of emotional states using acoustic and linguistic parameters as features and classifiers as tools to predict the ‘correct’ emotional states. We first sketch history and state of the art in this field; then we describe the process of ‘corpus engineering’, i.e. the design and the recording of databases, the annotation of emotional states, and further processing such as manual or automatic segmentation. Next, we present an overview of acoustic and linguistic features that are extracted automatically or manually. In the section on classifiers, we deal with topics such as the curse of dimensionality and the sparse data problem, classifiers, and evaluation. At the end of each section, we point out important aspects that should be taken into account for the planning or the assessment of studies. The subject area of this chapter is not emotions in some narrow sense but in a wider sense encompassing emotion-related states such as moods, attitudes, or interpersonal stances as well. We do not aim at an in-depth treatise of some specific aspects or algorithms but at an overview of approaches and strategies that have been used or should be used.

Proceedings Article
01 Nov 2011
TL;DR: This paper takes a data-driven approach to identify arguments of explicit discourse connectives and designs the argument segmentation task as a cascade of decisions based on conditional random fields (CRFs).
Abstract: Parsing discourse is a challenging natural language processing task. In this paper we take a data-driven approach to identify arguments of explicit discourse connectives. In contrast to previous work we do not make any assumptions on the span of arguments and consider parsing as a token-level sequence labeling task. We design the argument segmentation task as a cascade of decisions based on conditional random fields (CRFs). We train the CRFs on lexical, syntactic and semantic features extracted from the Penn Discourse Treebank and evaluate feature combinations on the commonly used test split. We show that the best combination of features includes syntactic and semantic features. The comparative error analysis investigates the performance variability over connective types and argument positions.
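
Token-level argument segmentation with CRFs amounts to BIO tagging over per-token features. A minimal sketch using the sklearn-crfsuite package (assumed installed); the features and the single training sentence are placeholders, not the paper's feature set:

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """Toy lexical features; the paper also uses syntactic/semantic ones."""
    return {
        "word": tokens[i].lower(),
        "is_connective": tokens[i].lower() in {"because", "although", "but"},
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# One training sentence with BIO labels for Arg1/Arg2 spans (invented example)
tokens = "He stayed home because it was raining".split()
labels = ["B-Arg1", "I-Arg1", "I-Arg1", "O", "B-Arg2", "I-Arg2", "I-Arg2"]

X = [[token_features(tokens, i) for i in range(len(tokens))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```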

Journal ArticleDOI
TL;DR: The design and electro-optical test of a 160 × 120-pixel CMOS sensor specifically conceived for Time-Of-Flight 3D imaging is presented, which allows the implementation of the Indirect Time-Of-Flight technique for distance measurement with reset noise removal through Correlated Double Sampling and embedded fixed-pattern noise reduction.
Abstract: This paper presents the design and electro-optical test of a 160 × 120-pixel CMOS sensor specifically conceived for Time-Of-Flight 3D imaging. The in-pixel processing allows the implementation of the Indirect Time-Of-Flight technique for distance measurement with reset noise removal through Correlated Double Sampling and embedded fixed-pattern noise reduction, whereas a fast readout operation allows the pixel values to be streamed out at a maximum rate of 10 MSample/s. The imager can operate as a fast 2D camera up to 458 fps, as a 3D camera up to 80 fps, or even coupling both operation modes. The chip has been fabricated using a standard 0.18 μm 1P4M 1.8 V CMOS technology with MIM capacitors. The resulting pixel has a pitch of 29.1 μm with a fill-factor of 34% and includes 66 transistors. Distance measurements up to 4.5 m have been performed with pulsed laser light, achieving a best precision of 10 cm at 1 m in real-time at 55 fps and 175 mA current consumption.

Journal ArticleDOI
TL;DR: This paper presents the QALL-ME Framework, a reusable architecture for building multi- and cross-lingual Question Answering (QA) systems working on structured data modelled by an ontology, and presents a running example to clarify how the framework processes questions.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the Gross-Pitaevskii (GP) energy functional for a fast rotating Bose-Einstein condensate on the unit disc in two dimensions.
Abstract: We study the Gross-Pitaevskii (GP) energy functional for a fast rotating Bose-Einstein condensate on the unit disc in two dimensions. Writing the coupling parameter as 1/ε² we consider the asymptotic regime ε → 0 with the angular velocity Ω proportional to (ε²|log ε|)⁻¹. We prove that if Ω = Ω₀(ε²|log ε|)⁻¹ and Ω₀ > 2(3π)⁻¹ then a minimizer of the GP energy functional has no zeros in an annulus at the boundary of the disc that contains the bulk of the mass. The vorticity resides in a complementary 'hole' around the center where the density is vanishingly small. Moreover, we prove a lower bound to the ground state energy that matches, up to small errors, the upper bound obtained from an optimal giant vortex trial function, and also that the winding number of a GP minimizer around the disc is in accord with the phase of this trial function.
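
For reference, the GP energy functional studied in this literature has the following form (a sketch in standard notation; normalisations and sign conventions may differ from the paper):

```latex
\[
  \mathcal{E}^{\mathrm{GP}}[\psi]
  = \int_{\mathcal{B}} \Big( \lvert \nabla \psi \rvert^{2}
    - 2\Omega\, \overline{\psi}\, L_{3}\, \psi
    + \varepsilon^{-2} \lvert \psi \rvert^{4} \Big)\, \mathrm{d}\vec{r},
  \qquad
  L_{3} = -i \left( x_{1}\partial_{2} - x_{2}\partial_{1} \right),
\]
% minimised over psi in H^1(B) with unit L^2 norm, B the unit disc,
% in the regime eps -> 0 with Omega = Omega_0 (eps^2 |log eps|)^{-1}.
```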

Proceedings ArticleDOI
13 Oct 2011
TL;DR: In this article, two different Single-Photon Avalanche Diode (SPAD) structures in a standard 0.15-μm CMOS technology are presented, together with a characterization of the two detectors, having a 10-μm active-area diameter and monolithically integrated with a passive quenching circuit and a fast comparator.
Abstract: Two different Single-Photon Avalanche Diode (SPAD) structures in a standard 0.15-μm CMOS technology are presented. A characterization of the two detectors, having a 10-μm active-area diameter and monolithically integrated with a passive quenching circuit and a fast comparator, is presented. The two devices exhibit respectively a typical dark count rate of 230 cps and 160 cps, an afterpulsing probability of 2.1% and 1.3% at 30 ns dead time, a Photon Detection Probability of 31% and 26% at λ = 470 nm, and a timing resolution of 170 ps and 60 ps. The adopted technology is therefore promising for the realization of SPAD-based image sensors with good overall performance.

Journal ArticleDOI
TL;DR: In this paper, the setup and experimental techniques for measurements of zero-degree inelastic scattering and reactions involving light ions with the K=600 magnetic spectrometer at iThemba LABS are described.
Abstract: The setup and experimental techniques for measurements of zero-degree inelastic scattering and reactions involving light ions with the K=600 magnetic spectrometer at iThemba LABS are described. Measurements were performed for inelastic proton scattering at an incident energy of 200 MeV for targets ranging from 27Al to 208Pb. An energy resolution of 45 keV (FWHM) was achieved by utilizing the faint-beam dispersion-matching technique. A background subtraction procedure was applied and allowed for the extraction of excitation energy spectra with low background. Measurements of the (p,t) reaction at zero degrees for Ep = 100 and 200 MeV benefited from the difference in magnetic rigidity between the reaction products and the beam particles, resulting in background-free spectra with an excitation energy resolution of 32 and 48 keV (FWHM), respectively, and a scattering angle resolution of 0.55° (FWHM). The addition of Double Sided Silicon Strip Detectors (DSSSD) at backward scattering angles allowed for coincident measurements of particle decay of states excited in the (p,t) reaction at Ep = 200 MeV.

Journal ArticleDOI
TL;DR: The results suggest that the socio-demographic IBM is an optimal choice for evaluating current control strategies, including contact network investigation of index cases, and the simulation of alternative scenarios, particularly for TB eradication targets.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This work formulates the task of discovering high-level activity patterns as a prototype learning problem where the correlation among atomic activities is explicitly taken into account when grouping clip histograms.
Abstract: We present a novel approach for automatically discovering spatio-temporal patterns in complex dynamic scenes. Similarly to recent non-object-centric methods, we use low-level visual cues to detect atomic activities and then construct clip histograms. Differently from previous works, we formulate the task of discovering high-level activity patterns as a prototype learning problem where the correlation among atomic activities is explicitly taken into account when grouping clip histograms. Interestingly, at the core of our approach there is a convex optimization problem which allows us to efficiently extract patterns at multiple levels of detail. The effectiveness of our method is demonstrated on publicly available datasets.
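
The paper's objective is not reproduced here, but a well-known convex formulation of prototype (exemplar) learning over histograms has a similar flavour: reconstruct all clip histograms from a few shared representatives, selected by a row-sparsity penalty. A generic sketch with cvxpy (assumed available), not the authors' formulation:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 50))   # 20-bin activity histograms for 50 video clips
n = X.shape[1]

C = cp.Variable((n, n))
lam = 0.5
# Reconstruct every clip as a combination of a few shared exemplars:
# the sum of row norms of C (a group penalty) drives most rows to zero,
# and the surviving rows index the clips that act as prototypes.
objective = cp.Minimize(
    cp.sum_squares(X - X @ C) + lam * cp.sum(cp.norm(C, 2, axis=1))
)
problem = cp.Problem(objective, [C >= 0])
problem.solve()

prototypes = np.where(np.linalg.norm(C.value, axis=1) > 1e-3)[0]
print("selected prototype clips:", prototypes)
```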