
Showing papers by "Fondazione Bruno Kessler" published in 2015


Journal ArticleDOI
TL;DR: An overview of the key aspects of graphene and related materials is provided, ranging from fundamental research challenges to a variety of applications in a large number of sectors, and highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries.
Abstract: We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.

2,560 citations


Proceedings ArticleDOI
06 Oct 2015
TL;DR: This work proposes Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations, and introduces a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies.
Abstract: We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state of the art approaches.
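To make the fusion step concrete, here is a minimal sketch of the late-fusion stage described in the abstract, assuming feature matrices have already been produced by the appearance, motion and joint autoencoders; the helper names, the nu value and the equal-weight fusion are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: one one-class SVM per feature stream, scores fused late.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_one_class_svms(train_feats):
    """train_feats: {stream_name: (n_samples, n_dims) array} from normal videos only."""
    return {name: OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(X)
            for name, X in train_feats.items()}

def fused_anomaly_scores(models, test_feats, weights=None):
    """Late fusion: weighted sum of per-stream anomaly scores (higher = more anomalous)."""
    streams = list(models)
    weights = weights or {s: 1.0 / len(streams) for s in streams}
    # decision_function is large for inliers, so negate it to obtain an anomaly score
    return sum(weights[s] * (-models[s].decision_function(test_feats[s])) for s in streams)
```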

520 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: Deep Neural Decision Forests, as discussed by the authors, introduce a stochastic and differentiable decision tree model which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network.
Abstract: We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).
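As a toy illustration of the "stochastic and differentiable decision tree" idea mentioned above, the sketch below routes a sample softly to the leaves of a small complete binary tree; the depth, the sigmoid split functions and all names are illustrative assumptions, not the published architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaf_probabilities(split_activations, depth=3):
    """split_activations: one value per internal node (2**depth - 1 of them),
    e.g. produced by the last layer of a CNN. Returns P(sample reaches leaf)
    for all 2**depth leaves; every step is differentiable, so the routing can
    be trained end-to-end together with the network."""
    d = sigmoid(np.asarray(split_activations))  # probability of branching right at each node
    n_leaves = 2 ** depth
    probs = np.ones(n_leaves)
    for leaf in range(n_leaves):
        node = 0
        for level in range(depth):
            go_right = (leaf >> (depth - 1 - level)) & 1
            probs[leaf] *= d[node] if go_right else 1.0 - d[node]
            node = 2 * node + 1 + go_right  # child index in the implicit binary tree
    return probs

# The final prediction would be a mixture of learned leaf class distributions, e.g.
# prediction = leaf_probabilities(cnn_outputs) @ leaf_class_distributions
```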

490 citations


Journal ArticleDOI
TL;DR: It is demonstrated that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction.
Abstract: Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

305 citations


Journal ArticleDOI
TL;DR: DECAF is presented, together with a detailed analysis of the correlations between participants' self-assessments and their physiological responses and single-trial classification results for valence, arousal and dominance, with performance evaluation against existing data sets.
Abstract: In this work, we present DECAF — a multimodal data set for decoding user physiological responses to affective multimedia content. Different from data sets such as DEAP [15] and MAHNOB-HCI [31], DECAF contains (1) brain signals acquired using the Magnetoencephalogram (MEG) sensor, which requires little physical contact with the user’s scalp and consequently facilitates naturalistic affective response, and (2) explicit and implicit emotional responses of 30 participants to 40 one-minute music video segments used in [15] and 36 movie clips, thereby enabling comparisons between the EEG versus MEG modalities as well as movie versus music stimuli for affect recognition. In addition to MEG data, DECAF comprises synchronously recorded near-infra-red (NIR) facial videos, horizontal Electrooculogram (hEOG), Electrocardiogram (ECG), and trapezius-Electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF’s utility, we present (i) a detailed analysis of the correlations between participants’ self-assessments and their physiological responses and (ii) single-trial classification results for valence, arousal and dominance, with performance evaluation against existing data sets. DECAF also contains time-continuous emotion annotations for movie clips from seven users, which we use to demonstrate dynamic emotion prediction.

257 citations


Proceedings ArticleDOI
01 Sep 2015
TL;DR: The WMT15 shared task, as discussed by the authors, included a standard news translation task, a metrics task, a tuning task, a task for run-time estimation of machine translation quality, and an automatic post-editing task.
Abstract: This paper presents the results of the WMT15 shared tasks, which included a standard news translation task, a metrics task, a tuning task, a task for run-time estimation of machine translation quality, and an automatic post-editing task. This year, 68 machine translation systems from 24 institutions were submitted to the ten translation directions in the standard translation task. An additional 7 anonymized systems were included, and were then evaluated both automatically and manually. The quality estimation task had three subtasks, with a total of 10 teams, submitting 34 entries. The pilot automatic postediting task had a total of 4 teams, submitting 7 entries.

253 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the renormalisation-group-invariant running-interaction predicted by contemporary analyses of QCD's gauge sector coincides with that required in order to describe ground-state hadron observables using a nonperturbative truncation of Dyson-Schwinger equations in the matter sector.

198 citations


Journal ArticleDOI
TL;DR: The higher-order perturbative corrections to BFKL evolution in QCD at high energy, beyond leading logarithmic accuracy, are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms.

185 citations


Journal ArticleDOI
TL;DR: In this article, a collinearly-improved version of the BK equation is presented, which resums to all orders the radiative corrections enhanced by large double transverse logarithms.
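For context, the leading-order BK equation whose kernel the collinear improvement modifies can be written in the standard dipole notation (a textbook form, not the resummed kernel of the paper):

```latex
\frac{\partial S(\mathbf{x},\mathbf{y};Y)}{\partial Y}
  = \frac{\bar{\alpha}_s}{2\pi}\int \mathrm{d}^2\mathbf{z}\,
    \frac{(\mathbf{x}-\mathbf{y})^2}{(\mathbf{x}-\mathbf{z})^2(\mathbf{z}-\mathbf{y})^2}
    \Big[\, S(\mathbf{x},\mathbf{z};Y)\,S(\mathbf{z},\mathbf{y};Y) - S(\mathbf{x},\mathbf{y};Y) \,\Big],
\qquad \bar{\alpha}_s \equiv \frac{\alpha_s N_c}{\pi}.
```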

164 citations


Journal ArticleDOI
TL;DR: This paper presents and analyses the most relevant literature contributions on the image fusion concept in the context of multitemporal remote sensing image processing by considering images acquired by optical and SAR systems at medium, high and very high spatial resolution.
Abstract: This paper presents an overview on the image fusion concept in the context of multitemporal remote sensing image processing. In the remote sensing literature, multitemporal image analysis mainly deals with the detection of changes and land-cover transitions. Thus the paper presents and analyses the most relevant literature contributions on these topics. From the perspective of change detection and detection of land-cover transitions, multitemporal image analysis techniques can be divided into two main groups: i) those based on the fusion of the multitemporal information at feature level, and ii) those based on the fusion of the multitemporal information at decision level. The former mainly exploit multitemporal image comparison techniques, which aim at highlighting the presence/absence of changes by generating change indices. These indices are then analyzed by unsupervised algorithms for extracting the change information. The latter rely mainly on classification and include both supervised and semi/partially-supervised/unsupervised methods. The paper focuses the attention on both standard (and largely used) methods and techniques proposed in the recent literature. The analysis is conducted by considering images acquired by optical and SAR systems at medium, high and very high spatial resolution.
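As a minimal sketch of the feature-level fusion route described above (comparison of the two acquisitions, generation of a change index, and an unsupervised decision), assuming co-registered multispectral images and using Otsu's threshold as one possible unsupervised rule; all names are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def cva_change_map(img_t1, img_t2):
    """img_t1, img_t2: co-registered multitemporal images of shape (H, W, bands).
    Returns the change-vector magnitude (the change index) and a binary change mask."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)   # spectral change vectors
    magnitude = np.sqrt((diff ** 2).sum(axis=-1))                  # per-pixel change index
    threshold = threshold_otsu(magnitude)                          # unsupervised decision
    return magnitude, magnitude > threshold
```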

150 citations


Proceedings ArticleDOI
13 Apr 2015
TL;DR: This paper introduces a novel highly scalable many-objective genetic algorithm, called MOSA (Many-Objective Sorting Algorithm), suitably defined for the many-objective branch coverage problem, and indicates that the proposed many-objective algorithm is significantly more effective and more efficient than the whole suite approach.
Abstract: Test data generation has been extensively investigated as a search problem, where the search goal is to maximize the number of covered program elements (e.g., branches). Recently, the whole suite approach, which combines the fitness functions of single branches into an aggregate, test suite-level fitness, has been demonstrated to be superior to the traditional single-branch at a time approach. In this paper, we propose to consider branch coverage directly as a many-objective optimization problem, instead of aggregating multiple objectives into a single value, as in the whole suite approach. Since programs may have hundreds of branches (objectives), traditional many-objective algorithms that are designed for numerical optimization problems with less than 15 objectives are not applicable. Hence, we introduce a novel highly scalable many-objective genetic algorithm, called MOSA (Many-Objective Sorting Algorithm), suitably defined for the many-objective branch coverage problem. Results achieved on 64 Java classes indicate that the proposed many-objective algorithm is significantly more effective and more efficient than the whole suite approach. In particular, effectiveness (coverage) was significantly improved in 66% of the subjects and efficiency (search budget consumed) was improved in 62% of the subjects on which effectiveness remains the same.
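To make the problem encoding concrete, the sketch below shows one way to treat every branch as a separate objective (a normalized branch distance) and to keep an archive of the first tests covering each branch; the selection and preference-sorting machinery of MOSA itself is omitted, and all names are illustrative assumptions.

```python
def objective_vector(test_case, branches, branch_distances):
    """One objective per branch: the normalized branch distance d/(d+1),
    which is 0.0 exactly when the branch is covered by this test case."""
    distances = branch_distances(test_case)  # raw distances, assumed provided by instrumentation
    return [distances[b] / (distances[b] + 1.0) for b in branches]

def update_archive(archive, population, branches, branch_distances):
    """Keep, for every branch, the first test case found that covers it."""
    for test in population:
        for branch, obj in zip(branches, objective_vector(test, branches, branch_distances)):
            if obj == 0.0 and branch not in archive:
                archive[branch] = test
    return archive
```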

Journal ArticleDOI
TL;DR: This review provides insight into various materials that have been used in the development of flexible electronics primarily for e-skin applications.
Abstract: Flexible electronics has huge potential to bring revolution in robotics and prosthetics as well as to bring about the next big evolution in electronics industry. In robotics and related applications, it is expected to revolutionise the way with which machines interact with humans, real-world objects and the environment. For example, the conformable electronic or tactile skin on robot’s body, enabled by advances in flexible electronics, will allow safe robotic interaction during physical contact of robot with various objects. Developing a conformable, bendable and stretchable electronic system requires distributing electronics over large non-planar surfaces and movable components. The current research focus in this direction is marked by the use of novel materials or by the smart engineering of the traditional materials to develop new sensors, electronics on substrates that can be wrapped around curved surfaces. Attempts are being made to achieve flexibility/stretchability in e-skin while retaining a relia...

Journal ArticleDOI
TL;DR: Results demonstrate that the proposed approach to building change detection in multitemporal VHR synthetic aperture radar images allows an accurate identification of new and demolished buildings while presenting a low false-alarm rate and a high reliability.
Abstract: The increasing availability of very high resolution (VHR) images regularly acquired over urban areas opens new attractive opportunities for monitoring human settlements at the level of individual buildings. This paper presents a novel approach to building change detection in multitemporal VHR synthetic aperture radar (SAR) images. The proposed approach is based on two concepts: 1) the extraction of information on changes associated with increase and decrease of backscattering at the optimal building scale and 2) the exploitation of the expected backscattering properties of buildings to detect either new or fully demolished buildings. Each detected change is associated with a grade of reliability. The approach is validated on the following: 1) COSMO-SkyMed multitemporal spotlight images acquired in 2009 on the city of L'Aquila (Italy) before and after the earthquake that hit the region and 2) TerraSAR-X multitemporal spotlight images acquired on the urban area of the city of Trento (Italy). Results demonstrate that the proposed approach allows an accurate identification of new and demolished buildings while presenting a low false-alarm rate and a high reliability.

Journal ArticleDOI
TL;DR: A novel hierarchical CD approach is proposed, aimed at identifying all the possible change classes present between the considered images, by considering spectral change information to identify the change classes having discriminable spectral behaviors.
Abstract: The new generation of satellite hyperspectral (HS) sensors can acquire very detailed spectral information directly related to land surface materials. Thus, when multitemporal images are considered, they allow us to detect many potential changes in land covers. This paper addresses the change-detection (CD) problem in multitemporal HS remote sensing images, analyzing the complexity of this task. A novel hierarchical CD approach is proposed, which is aimed at identifying all the possible change classes present between the considered images. In greater detail, in order to formalize the CD problem in HS images, an analysis of the concept of “change” is given from the perspective of pixel spectral behaviors. The proposed novel hierarchical scheme is developed by considering spectral change information to identify the change classes having discriminable spectral behaviors. Due to the fact that, in real applications, reference samples are often not available, the proposed approach is designed in an unsupervised way. Experimental results obtained on both simulated and real multitemporal HS images demonstrate the effectiveness of the proposed CD method.

Journal ArticleDOI
TL;DR: This paper presents an effective semiautomatic method for discovering and detecting multiple changes in multitemporal hyperspectral (HS) images and proposes a novel 2-D adaptive spectral change vector representation (ASCVR) to visualize the changes.
Abstract: This paper presents an effective semiautomatic method for discovering and detecting multiple changes (i.e., different kinds of changes) in multitemporal hyperspectral (HS) images. Differently from the state-of-the-art techniques, the proposed method is designed to be sensitive to the small spectral variations that can be identified in HS images but usually are not detectable in multispectral images. The method is based on the proposed sequential spectral change vector analysis, which exploits an iterative hierarchical scheme that at each iteration discovers and identifies a subset of changes. The approach is interactive and semiautomatic and allows one to study in detail the structure of changes hidden in the variations of the spectral signatures according to a top-down procedure. A novel 2-D adaptive spectral change vector representation (ASCVR) is proposed to visualize the changes. At each level this representation is optimized by an automatic definition of a reference vector that emphasizes the discrimination of changes. Finally, an interactive manual change identification is applied for extracting changes in the ASCVR domain. The proposed approach has been tested on three hyperspectral data sets, including both simulated and real multitemporal images showing multiple-change detection problems. Experimental results confirmed the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This paper presents the results of CTR measurements on two different SiPM technologies from FBK coupled to LSO:Ce codoped 0.4%Ca crystals, and demonstrates that a CTR of 140 ± 5 ps can be achieved for longer 2 × 2 × 20 mm³ crystals, which can readily be implemented in the current generation PET systems to achieve the desired increase in the signal to noise ratio.
Abstract: The coincidence time resolution (CTR) becomes a key parameter of 511 keV gamma detection in time of flight positron emission tomography (TOF-PET). This is because additional information obtained through timing leads to a better noise suppression and therefore a better signal to noise ratio in the reconstructed image. In this paper we present the results of CTR measurements on two different SiPM technologies from FBK coupled to LSO:Ce codoped 0.4%Ca crystals. We compare the measurements performed at two separate test setups, i.e. at CERN and at FBK, showing that the obtained results agree within a few percent. We achieve a best CTR value of 85 ± 4 ps FWHM for 2 × 2 × 3 mm³ LSO:Ce codoped 0.4%Ca crystals, thus breaking the 100 ps barrier with scintillators similar to LSO:Ce or LYSO:Ce. We also demonstrate that a CTR of 140 ± 5 ps can be achieved for longer 2 × 2 × 20 mm³ crystals, which can readily be implemented in the current generation PET systems to achieve the desired increase in the signal to noise ratio.
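As a back-of-the-envelope relation (standard for two identical detectors in coincidence, not a result from the paper), the reported CTR translates into a single-detector resolution via:

```latex
\mathrm{CTR}_{\mathrm{FWHM}} = \sqrt{2}\;\sigma^{\mathrm{FWHM}}_{\mathrm{single}}
\quad\Longrightarrow\quad
\sigma^{\mathrm{FWHM}}_{\mathrm{single}} \approx \frac{85\ \mathrm{ps}}{\sqrt{2}} \approx 60\ \mathrm{ps}
\ \text{for the best crystals reported above.}
```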

Journal ArticleDOI
TL;DR: The authors' simulations show that these sensors, the so-called Ultra-Fast Silicon Detectors (UFSD), will be able to reach a time resolution a factor of 10 better than that of traditional silicon sensors.
Abstract: Low-Gain Avalanche Diodes (LGAD) are silicon detectors with output signals that are about a factor of 10 larger than those of traditional sensors. In this paper we analyze how the design of LGAD can be optimized to exploit their increased output signal to reach optimum timing performances. Our simulations show that these sensors, the so-called Ultra-Fast Silicon Detectors (UFSD), will be able to reach a time resolution a factor of 10 better than that of traditional silicon sensors.
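For context, the textbook jitter estimate that motivates enlarging the signal (not taken from the paper) is the noise divided by the signal slope at the discriminator threshold:

```latex
\sigma_t \simeq \frac{\sigma_{\mathrm{noise}}}{\left.\frac{\mathrm{d}V}{\mathrm{d}t}\right|_{V_{\mathrm{th}}}}
\;\approx\; \frac{t_{\mathrm{rise}}}{S/N}
```

so an output signal roughly ten times larger at comparable noise and rise time improves the timing by about the same factor, consistent with the simulations summarized above.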

Journal ArticleDOI
TL;DR: These findings support the hypothesis that infiltrating T cells influence the behavior of neuroblastoma and might be of clinical importance for the treatment of patients.
Abstract: Neuroblastoma grows within an intricate network of different cell types including epithelial, stromal and immune cells. The presence of tumor-infiltrating T cells is considered an important prognostic indicator in many cancers, but the role of these cells in neuroblastoma remains to be elucidated. Herein, we examined the relationship between the type, density and organization of infiltrating T cells and clinical outcome within a large collection of neuroblastoma samples by quantitative analysis of immunohistochemical staining. We found that infiltrating T cells have a prognostic value greater than, and independent of, the criteria currently used to stage neuroblastoma. A variable in situ structural organization and different concurrent infiltration of T-cell subsets were detected in tumors with various outcomes. Low-risk neuroblastomas were characterized by a higher number of proliferating T cells and a more structured T-cell organization, which was gradually lost in tumors with poor prognosis. We defined an immunoscore based on the presence of CD3+, CD4+ and CD8+ infiltrating T cells that associates with favorable clinical outcome in MYCN-amplified tumors, improving patient survival when combined with the v-myc avian myelocytomatosis viral oncogene neuroblastoma derived homolog (MYCN) status. These findings support the hypothesis that infiltrating T cells influence the behavior of neuroblastoma and might be of clinical importance for the treatment of patients.

Proceedings ArticleDOI
28 Dec 2015
TL;DR: A service-based gamification framework is presented, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and the empirical findings of an experiment conducted in the city of Rovereto are discussed.
Abstract: Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.

Proceedings ArticleDOI
13 Oct 2015
TL;DR: A novel approach for predicting the perceived safety of a scene from Google Street View Images using a Convolutional Neural Network (CNN), significantly improving the accuracy of predictions over previous methods.
Abstract: Cities' visual appearance plays a central role in shaping human perception and response to the surrounding urban environment. For example, the visual qualities of urban spaces affect the psychological states of their inhabitants and can induce negative social outcomes. Hence, it becomes critically important to understand people's perceptions and evaluations of urban spaces. Previous works have demonstrated that algorithms can be used to predict high level attributes of urban scenes (e.g. safety, attractiveness, uniqueness), accurately emulating human perception. In this paper we propose a novel approach for predicting the perceived safety of a scene from Google Street View Images. Opposite to previous works, we formulate the problem of learning to predict high level judgments as a ranking task and we employ a Convolutional Neural Network (CNN), significantly improving the accuracy of predictions over previous methods. Interestingly, the proposed CNN architecture relies on a novel pooling layer, which permits to automatically discover the most important areas of the images for predicting the concept of perceived safety. An extensive experimental evaluation, conducted on the publicly available Place Pulse dataset, demonstrates the advantages of the proposed approach over state-of-the-art methods.
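A minimal sketch of the ranking formulation mentioned above, using a standard margin-based pairwise loss on the scores of two images; the scoring network, its pooling layer and all names are placeholders, not the authors' architecture.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(score_a, score_b, target, margin=1.0):
    """score_a, score_b: scalar safety scores predicted by a CNN for two images;
    target: +1 if image A was judged safer than image B by annotators, -1 otherwise."""
    return F.margin_ranking_loss(score_a, score_b, target, margin=margin)

# Usage sketch (scoring_cnn is any network mapping an image tensor to one scalar):
# loss = pairwise_ranking_loss(scoring_cnn(img_a), scoring_cnn(img_b), torch.tensor([1.0]))
```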

Journal ArticleDOI
16 Sep 2015
TL;DR: The hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime is supported by the findings.
Abstract: The wealth of information provided by real-time streams of data has paved the way for life-changing technological advancements, improving the quality of life of people in many ways, from facilitating knowledge exchange to self-understanding and self-monitoring. Moreover, the analysis of anonymized and aggregated large-scale human behavioral data offers new possibilities to understand global patterns of human behavior and helps decision makers tackle problems of societal importance. In this article, we highlight the potential societal benefits derived from big data applications with a focus on citizen safety and crime prevention. First, we introduce the emergent new research area of big data for social good. Next, we detail a case study tackling the problem of crime hotspot classification, that is, the classification of which areas in a city are more likely to witness crimes based on past data. In the proposed approach we use demographic information along with human mobility characteristics as der...
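A minimal sketch of the classification setup outlined above: one feature row per city area combining demographic and aggregated mobility descriptors, and a binary hotspot label. The random-forest choice, the cross-validation scheme and all names are illustrative assumptions rather than the authors' pipeline.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_hotspot_classifier(X, y):
    """X: (n_areas, n_features) demographic + mobility features; y: 1 if hotspot, else 0."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()  # cross-validated AUC
    return clf.fit(X, y), auc
```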

Journal ArticleDOI
TL;DR: A more severe histological profile in paediatric NAFLD is associated with LITAF over-expression in HSCs, which in turn correlates with hepatic and circulating IL-1β levels outlining a panel of potential biomarkers of NASH-related liver damage.
Abstract: Lipopolysaccharide (LPS) is currently considered one of the major players in non-alcoholic fatty liver disease (NAFLD) pathogenesis and progression. Here, we aim to investigate the possible role of LPS-induced TNF-α factor (LITAF) in inducing a pro-inflammatory and pro-fibrogenic phenotype of non-alcoholic steatohepatitis (NASH). We found that children with NAFLD displayed, in different liver-resident cells, an increased expression of LITAF which correlated with histological traits of hepatic inflammation and fibrosis. Total and nuclear LITAF expression increased in mouse and human hepatic stellate cells (HSCs). Moreover, LPS induced LITAF-dependent transcription of IL-1β, IL-6 and TNF-α in the clonal myofibroblastic HSC LX-2 cell line, and this effect was hampered by LITAF silencing. We showed, for the first time in HSCs, that LITAF recruitment to these cytokine promoters is LPS dependent. However, when LITAF nuclear translocation was prevented by a p38MAPK inhibitor, the expression of IL-6 and TNF-α was significantly reduced with the aid of p65NF-κB, while IL-1β transcription exclusively required LITAF expression/activity. Finally, IL-1β levels in plasma mirrored those in the liver and correlated with LPS levels and LITAF-positive HSCs in children with NASH. In conclusion, a more severe histological profile in paediatric NAFLD is associated with LITAF over-expression in HSCs, which in turn correlates with hepatic and circulating IL-1β levels, outlining a panel of potential biomarkers of NASH-related liver damage. The in vitro study highlights the role of LITAF as a key regulator of the LPS-induced pro-inflammatory pattern in HSCs and suggests p38MAPK inhibitors as a possible therapeutic approach against hepatic inflammation in NASH.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive design methodology is proposed to optimize the organic Rankine cycle (ORC), considering a wide range of design variables as well as practical aspects such as component limitations and costs.

Journal ArticleDOI
01 Aug 2015-Talanta
TL;DR: A simple and low-cost biosensor is presented that takes advantage of a plastic optical fiber for the detection of vascular endothelial growth factor, selected as a circulating protein potentially associated with cancer.

Posted Content
TL;DR: This work proposes SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and shows how the recorded multiple cues synergetically aid automatic analysis of social interactions.
Abstract: Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging due to the difficulty in extracting behavioral cues such as target locations, their speaking activity and head/body pose due to crowdedness and presence of extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under the poster presentation and cocktail party contexts presenting difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) To alleviate these problems we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising the microphone, accelerometer, bluetooth and infrared sensors. In addition to raw data, we also provide annotations concerning individuals' personality as well as their position, head, body orientation and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at this http URL.

Journal ArticleDOI
TL;DR: It is determined how to optimally set up the simulation domain, and in so doing it is found that performing scattering calculations within the near-field does not necessarily produce large errors but reduces the computational resources required.
Abstract: Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise – more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three “standard” nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna – for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large errors but reduces the computational resources required.
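The practical recommendations of the study can be collected into a compact checklist; the dictionary below is only a plain summary with descriptive keys, not the configuration of any particular FDTD package.

```python
# Recommended FDTD settings for nanoplasmonic simulations, as reported above.
recommended_fdtd_settings = {
    "dispersion_model": "Drude + two critical points",  # plain Drude is insufficient in general
    "arithmetic": "double precision",                    # avoids round-off in highly converged runs
    "mesh_size_nm": 0.25,                                # needed for ~1% convergence of results
    "staircase_meshing": {
        "complex_geometries": "per-component",
        "rectangular_structures": "uniform",             # better when the Yee cell fills the shape
    },
    "scattering_calculation": "near-field placement acceptable; reduces computational resources",
}
```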

Journal ArticleDOI
TL;DR: The radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction are addressed.
Abstract: Today the use of spaceborne Very High Resolution (VHR) optical sensors for automatic 3D information extraction is increasing in the scientific and civil communities. The 3D Optical Metrology (3DOM) unit of the Bruno Kessler Foundation (FBK) in Trento (Italy) has collected VHR satellite imagery, as well as aerial and terrestrial data over Trento for creating a complete testfield for investigations on image radiometry, geometric accuracy, automatic digital surface model (DSM) generation, 2D/3D feature extraction, city modelling and data fusion. This paper addresses the radiometric and the geometric aspects of the VHR spaceborne imagery included in the Trento testfield and their potential for 3D information extraction. The dataset consists of two stereo-pairs acquired by WorldView-2 and by GeoEye-1 in panchromatic and multispectral mode, and a triplet from Pleiades-1A. For reference and validation, a DSM from airborne LiDAR acquisition is used. The paper gives details on the project, dataset characteristics and achieved results.

Proceedings ArticleDOI
01 Sep 2015
TL;DR: This work describes the design, the evaluation setup, and the results of the DiscoMT 2015 shared task, which included two subtasks, relevant to both the machine translation (MT) and the discourse communities: pronoun-focused translation and cross-lingual pronoun prediction.
Abstract: We describe the design, the evaluation setup, and the results of the DiscoMT 2015 shared task, which included two subtasks, relevant to both the machine translation (MT) and the discourse communities: (i) pronoun-focused translation, a practical MT task, and (ii) cross-lingual pronoun prediction, a classification task that requires no specific MT expertise and is interesting as a machine learning task in its own right. We focused on the English‐French language pair, for which MT output is generally of high quality, but has visible issues with pronoun translation due to differences in the pronoun systems of the two languages. Six groups participated in the pronoun-focused translation task and eight groups in the cross-lingual pronoun prediction task.

Journal ArticleDOI
TL;DR: A novel technique for parameter estimation of the Rayleigh-Rice density that is based on a specific definition of the expectation-maximization algorithm is presented, which is characterized by good theoretical properties, iteratively updates the parameters and does not depend on specific optimization routines.
Abstract: The problem of estimating the parameters of a Rayleigh-Rice mixture density is often encountered in image analysis (e.g., remote sensing and medical image processing). In this paper, we address this general problem in the framework of change detection (CD) in multitemporal and multispectral images. One widely used approach to CD in multispectral images is based on the change vector analysis. Here, the distribution of the magnitude of the difference image can be theoretically modeled by a Rayleigh-Rice mixture density. However, given the complexity of this model, in applications, a Gaussian-mixture approximation is often considered, which may affect the CD results. In this paper, we present a novel technique for parameter estimation of the Rayleigh-Rice density that is based on a specific definition of the expectation-maximization algorithm. The proposed technique, which is characterized by good theoretical properties, iteratively updates the parameters and does not depend on specific optimization routines. Several numerical experiments on synthetic data demonstrate the effectiveness of the method, which is general and can be applied to any image processing problem involving the Rayleigh-Rice mixture density. In the CD context, the Rayleigh-Rice model (which is theoretically derived) outperforms other empirical models. Experiments on real multitemporal and multispectral remote sensing images confirm the validity of the model by returning significantly higher CD accuracies than those obtained by using the state-of-the-art approaches.
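For reference, the mixture density whose parameters the expectation-maximization procedure estimates is the standard Rayleigh-Rice form (symbols follow a common convention and are not necessarily the paper's notation):

```latex
p(x) = \alpha\,\frac{x}{b^{2}}\exp\!\left(-\frac{x^{2}}{2b^{2}}\right)
     + (1-\alpha)\,\frac{x}{\sigma^{2}}
       \exp\!\left(-\frac{x^{2}+\nu^{2}}{2\sigma^{2}}\right)
       I_{0}\!\left(\frac{x\nu}{\sigma^{2}}\right), \qquad x \ge 0,
```

where α is the prior of the no-change (Rayleigh) class, b is the Rayleigh scale, (ν, σ) parameterize the Rice change class, and I₀ is the modified Bessel function of the first kind of order zero.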

Proceedings ArticleDOI
01 Jun 2015
TL;DR: The TimeLine task (Cross-Document Event Ordering) goes a step further than previous evaluation challenges by requiring participant systems to perform both event coreference and temporal relation extraction across documents.
Abstract: This paper describes the outcomes of the TimeLine task (Cross-Document Event Ordering), that was organised within the Time and Space track of SemEval-2015. Given a set of documents and a set of target entities, the task consisted of building a timeline for each entity, by detecting, anchoring in time and ordering the events involving that entity. The TimeLine task goes a step further than previous evaluation challenges by requiring participant systems to perform both event coreference and temporal relation extraction across documents. Four teams submitted the output of their systems to the four proposed subtracks for a total of 13 runs, the best of which obtained an F1-score of 7.85 in the main track (timeline creation from raw text).