Showing papers by "Fondazione Bruno Kessler" published in 2009


Journal ArticleDOI
TL;DR: The replication of data analyses in 18 articles on microarray-based gene expression profiling published in Nature Genetics in 2005–2006 was evaluated, finding that the repeatability of published microarray studies is apparently limited.
Abstract: Given the complexity of microarray-based gene expression studies, guidelines encourage transparent design and public data availability. Several journals require public data deposition and several public databases exist. However, not all data are publicly available, and even when available, it is unknown whether the published results are reproducible by independent scientists. Here we evaluated the replication of data analyses in 18 articles on microarray-based gene expression profiling published in Nature Genetics in 2005-2006. One table or figure from each article was independently evaluated by two teams of analysts. We reproduced two analyses in principle and six partially or with some discrepancies; ten could not be reproduced. The main reason for failure to reproduce was data unavailability, and discrepancies were mostly due to incomplete data annotation or specification of data processing and analysis. Repeatability of published microarray studies is apparently limited. More strict publication rules enforcing public data availability and explicit description of data processing and analysis should be considered.

539 citations


Proceedings ArticleDOI
04 Jun 2009
TL;DR: This article presents a brief overview of the main challenges in the extraction of semantic relations from English text, discusses the shortcomings of previous data sets and shared tasks, and introduces a new task, which will be part of SemEval-2010: multi-way classification of mutually exclusive semantic relations between pairs of common nominals.
Abstract: We present a brief overview of the main challenges in the extraction of semantic relations from English text, and discuss the shortcomings of previous data sets and shared tasks. This leads us to introduce a new task, which will be part of SemEval-2010: multi-way classification of mutually exclusive semantic relations between pairs of common nominals. The task is designed to compare different approaches to the problem and to provide a standard testbed for future research, which can benefit many applications in Natural Language Processing.

336 citations
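
The shared task described above comes down to assigning one of several mutually exclusive relation labels to a pair of marked nominals in a sentence. A minimal sketch of such a multi-way classifier follows; the miniature training set, the bag-of-words features and the scikit-learn pipeline are illustrative assumptions, not the official task baseline.

    # Hypothetical miniature training set: sentences with the two nominals marked
    # <e1>/<e2>, each labelled with one mutually exclusive relation type.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "The <e1>burst</e1> was caused by a pressure <e2>spike</e2>.",
        "The <e1>keyboard</e1> is part of the <e2>laptop</e2>.",
        "The <e1>cheese</e1> was made from sheep's <e2>milk</e2>.",
    ]
    train_labels = ["Cause-Effect", "Component-Whole", "Entity-Origin"]

    # Word and bigram counts are a crude stand-in for the richer lexical, syntactic
    # and semantic features that competing systems would actually compare.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_sentences, train_labels)
    print(model.predict(["The <e1>fire</e1> was caused by a faulty <e2>wire</e2>."]))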


Journal ArticleDOI
TL;DR: This special issue of the JNLE showcases some of the most important work in the emerging area of textual entailment, which has strong ties to earlier research on automatic acquisition of paraphrases and lexical semantic relationships, and on unsupervised inference in applications such as question answering, information extraction and summarization.
Abstract: The goal of identifying textual entailment – whether one piece of text can be plausibly inferred from another – has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, which are a series of annual competitive meetings. The current work exhibits strong ties to some earlier lines of research, particularly automatic acquisition of paraphrases and lexical semantic relationships and unsupervised inference in applications such as question answering, information extraction and summarization. It has also opened the way to newer lines of research on more involved inference methods, on knowledge representations needed to support this natural language understanding challenge and on the use of learning methods in this context. RTE has fostered an active and growing community of researchers focused on the problem of applied entailment. This special issue of the JNLE provides an opportunity to showcase some of the most important work in this emerging area.

221 citations
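
To make the entailment decision concrete: an instance pairs a text T with a hypothesis H and asks whether H can plausibly be inferred from T. The wording below is an invented illustration, not an example drawn from the RTE datasets.

    # Illustrative textual entailment instances (hypothetical wording).
    positive = {
        "text": "The company was acquired by Acme Corp. in 2004.",
        "hypothesis": "Acme Corp. owns the company.",
        "label": "entailment",
    }
    negative = {
        "text": "The company held talks with Acme Corp. about a possible merger.",
        "hypothesis": "Acme Corp. owns the company.",
        "label": "no entailment",
    }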


Proceedings ArticleDOI
19 Jul 2009
TL;DR: An empirical study shows that the resulting prioritisation is more effective than existing coverage-based prioritisation techniques in terms of rate of fault detection and demonstrates that clustering (even without human input) can outperform unclustered coverage-based technologies.
Abstract: Pair-wise comparison has been successfully utilised in order to prioritise test cases by exploiting the rich, valuable and unique knowledge of the tester. However, the prohibitively large cost of the pair-wise comparison method prevents it from being applied to large test suites. In this paper, we introduce a cluster-based test case prioritisation technique. By clustering test cases, based on their dynamic runtime behaviour, we can reduce the required number of pair-wise comparisons significantly. The approach is evaluated on seven test suites ranging in size from 154 to 1,061 test cases. We present an empirical study that shows that the resulting prioritisation is more effective than existing coverage-based prioritisation techniques in terms of rate of fault detection. Perhaps surprisingly, the paper also demonstrates that clustering (even without human input) can outperform unclustered coverage-based technologies, and discusses an automated process that can be used to determine whether the application of the proposed approach would yield improvement.

200 citations
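
A minimal sketch of the clustering idea described above: represent each test case by its coverage vector, cluster the vectors, and build the prioritised order by sampling clusters round-robin. The toy data, the choice of k-means and the within-cluster ordering are illustrative assumptions, not the paper's exact algorithm (which additionally folds in human pair-wise judgements).

    import numpy as np
    from sklearn.cluster import KMeans

    # Rows = test cases, columns = statements (1 if the test executes the statement).
    coverage = np.array([
        [1, 1, 0, 0, 0, 1],
        [1, 1, 0, 0, 1, 1],
        [0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 1, 0],
        [1, 0, 0, 1, 1, 1],
    ])

    k = 3  # far fewer clusters than tests, so far fewer pair-wise comparisons are needed
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coverage)

    # Order tests inside each cluster by raw coverage, then interleave the clusters
    # round-robin so that early positions sample diverse runtime behaviours.
    clusters = {c: sorted(np.where(labels == c)[0], key=lambda t: -coverage[t].sum())
                for c in range(k)}
    prioritised = []
    while any(clusters.values()):
        for c in range(k):
            if clusters[c]:
                prioritised.append(int(clusters[c].pop(0)))
    print(prioritised)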


Journal ArticleDOI
TL;DR: In this paper, the catalytic activity of the powders has been tested by measuring the H2 generation rate and yield by the hydrolysis of NaBH4 in basic medium.
Abstract: Catalyst powders of Co–B, Ni–B, and Co–Ni–B, with different Co/Ni molar ratios, were synthesized by chemical reduction of cobalt and nickel salts with sodium borohydride at room temperature. Surface morphology and structural properties of the catalyst powders were studied using scanning electron microscopy (SEM) and X-ray diffraction (XRD), respectively. Surface electronic states and composition of the catalysts were studied by X-ray photoelectron spectroscopy (XPS). The catalytic activity of the powders was tested by measuring the H2 generation rate and yield from the hydrolysis of NaBH4 in basic medium. Co–Ni–B with a Co/(Co + Ni) molar ratio (χCo) of 0.85 exhibited superior activity, with the highest H2 generation rate compared to the other powder catalysts. The enhanced activity of the Co–Ni–B (χCo = 0.85) powder catalyst can be attributed to its large active surface area and to electron transfer from the large quantity of alloyed B to the active Co and Ni sites on the catalyst surface. The electron enrichment on the active Co and Ni sites in Co–Ni–B, detected in the XPS spectra and higher than that in Co–B and Ni–B, appears to facilitate the catalytic reaction by supplying the negative charge required by the reaction. The synergetic effect of the Co and Ni atoms in the Co–Ni–B catalyst lowers the activation energy to 34 kJ mol⁻¹, compared to 45 kJ mol⁻¹ obtained with the Co–B powder. Structural modification caused by heat treatment at 773 K for 2 h in an Ar atmosphere did not change the activity of the Co–Ni–B powder.

160 citations
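
To put the reported activation energies in perspective, a back-of-the-envelope Arrhenius comparison (assuming equal pre-exponential factors, which the abstract does not state) suggests the expected rate enhancement at room temperature:

    k \propto e^{-E_a/(RT)}, \qquad
    \frac{k_{\text{Co-Ni-B}}}{k_{\text{Co-B}}}
      \approx \exp\!\left(\frac{(45 - 34)\ \mathrm{kJ\,mol^{-1}}}
      {8.314\times 10^{-3}\ \mathrm{kJ\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right)
      \approx 85

so lowering the barrier by 11 kJ mol⁻¹ alone would account for almost two orders of magnitude in hydrolysis rate.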


Journal ArticleDOI
TL;DR: A model is proposed which couples an SIR model with selection of behaviours driven by imitation dynamics that can explain "asymmetric waves", i.e., infection waves whose rising and decaying phases differ in slope.

159 citations
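
One common way to write such a coupled system (a generic sketch, not necessarily the exact equations of the paper) is an SIR model whose transmission rate depends on the fraction x of individuals adopting a protective behaviour, with x evolving by imitation dynamics:

    \dot S = -\beta(x)\, S I, \qquad
    \dot I = \beta(x)\, S I - \gamma I, \qquad
    \dot R = \gamma I,
    \dot x = \kappa\, x (1 - x)\, \Delta P(I)

Here β(x) interpolates between the baseline and the reduced-contact transmission rates, and ΔP(I) is the perceived payoff gain of the protective behaviour, which grows with prevalence I. Because behaviour adjusts with a lag relative to the epidemic, this kind of feedback can produce waves whose rising and decaying phases have different slopes.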


01 Jan 2009
TL;DR: The resulting time-correlated pixel array is a viable candidate for time-correlated single-photon counting (TCSPC) applications such as fluorescence lifetime imaging microscopy (FLIM) and nuclear or 3D imaging, and permits scaling to larger array formats.
Abstract: We report the design and characterisation of a 32×32 time-to-digital converter (TDC) plus single photon avalanche diode (SPAD) pixel array implemented in a 130 nm imaging process. Based on a gated ring oscillator approach, the 10-bit, 50 μm pitch TDC array exhibits a minimum time resolution of 50 ps, with accuracy of ±0.5 LSB DNL and 2.4 LSB INL. Process, voltage and temperature (PVT) compensation is achieved by locking the array to a stable external clock. The resulting time-correlated pixel array is a viable candidate for time-correlated single-photon counting (TCSPC) applications such as fluorescence lifetime imaging microscopy (FLIM) and nuclear or 3D imaging, and permits scaling to larger array formats.

158 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This work proposes the use of a trust metric, an algorithm able to propagate trust over the trust network in order to find users that can be trusted by the active user, so that trust is able to alleviate the cold start problem and other weaknesses that beset Collaborative Filtering Recommender Systems.
Abstract: Recommender Systems based on Collaborative Filtering suggest to users items they might like, such as movies, songs, scientific papers, or jokes. Based on the ratings provided by users about items, they first find users similar to the user receiving the recommendations and then suggest to her items appreciated in the past by those like-minded users. However, since the ratable items are many and the ratings provided by each user are only a tiny fraction, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network in order to find users that can be trusted by the active user. Items appreciated by these trustworthy users can then be recommended to the active user. An empirical evaluation on a large dataset crawled from Epinions.com shows that Recommender Systems that make use of trust information are the most effective in terms of accuracy while preserving a good coverage. This is especially evident for users who provided few ratings, so that trust is able to alleviate the cold start problem and other weaknesses that beset Collaborative Filtering Recommender Systems.

145 citations
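
A minimal sketch of the trust-propagation step described above, using a breadth-first walk with linearly decaying trust and a trust-weighted rating average; the toy data, the decay rule and the prediction formula are illustrative assumptions rather than the specific trust metric evaluated in the chapter.

    # Hypothetical toy data: explicit trust statements and sparse item ratings.
    trust = {"alice": ["bob"], "bob": ["carol"], "carol": []}
    ratings = {"bob": {"item1": 4.0}, "carol": {"item1": 5.0, "item2": 2.0}}

    def propagate_trust(source, max_depth=3):
        """Breadth-first propagation: trust decays linearly with distance from the source."""
        scores, frontier, seen = {}, [(source, 0)], {source}
        while frontier:
            user, depth = frontier.pop(0)
            if depth >= max_depth:
                continue
            for friend in trust.get(user, []):
                if friend not in seen:
                    seen.add(friend)
                    scores[friend] = (max_depth - depth) / max_depth
                    frontier.append((friend, depth + 1))
        return scores

    def predict(source, item):
        """Trust-weighted average of the ratings given by reachable (trusted) users."""
        weights = propagate_trust(source)
        pairs = [(w, ratings[u][item]) for u, w in weights.items()
                 if item in ratings.get(u, {})]
        if not pairs:
            return None
        return sum(w * r for w, r in pairs) / sum(w for w, _ in pairs)

    print(predict("alice", "item1"))  # bob's and carol's ratings, weighted by propagated trust

Users who rated few items can still receive predictions as long as they issued a few trust statements, which is how trust propagation mitigates the cold-start problem mentioned above.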


Journal ArticleDOI
TL;DR: In this paper, a series of experiments with an Optech ILRIS 3D TLS was carried out to verify the feasibility of this application, as well as to solve problems in data acquisition protocol and data processing.
Abstract: Terrestrial Laser Scanner (TLS) is an active instrument widely used for physical surface acquisition and data modeling. TLS provides both the geometry and the intensity information of scanned objects depending on their physical and chemical properties. The intensity data can be used to discriminate different materials, since intensity is proportional, among other parameters, to the reflectance of the target at the specific wavelength of the laser beam. This article focuses on the TLS-based recognition of rocks in simple sedimentary successions mainly constituted by limestones and marls. In particular, a series of experiments with an Optech ILRIS 3D TLS was carried out to verify the feasibility of this application, as well as to solve problems in data acquisition protocol and data processing. Results indicate that a TLS intensity-based discrimination can provide reliable information about the clay content of rocks in clean outcrop conditions if the geometrical aspects of the acquisition (i.e. distance) are taken into account. Reflectance values of limestones, marls and clays show, both in controlled conditions and in the field, clear differences due to the interaction of the laser beam (having a 1535 nm wavelength) with H2O-bearing minerals and materials. Information about lithology can therefore also be obtained from real outcrops, at least when simple alternations of limestones and marls are considered. Comparison between reflectance values derived from TLS acquisition of an outcrop and the clay abundance curves obtained by gas chromatography on rock samples taken from the same stratigraphic section shows that reflectance is linked by an inverse linear relationship (correlation coefficient r = −0.85) to the abundance of clay minerals in the rocks. Reflectance series obtained from TLS data are proposed as a tool to evaluate the variation of clay content along a stratigraphic section. The possibility of linking reflectance values to lithological parameters (i.e. clay content) could provide a tool for lithological mapping of outcrops, with possible applications in various fields, ranging from petroleum geology to environmental engineering, stratigraphy and sedimentology.

129 citations
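
A sketch of how the reported inverse linear relationship could be exploited: calibrate a least-squares line between (distance-corrected) reflectance and measured clay content, then apply it along a stratigraphic section. The numbers below are synthetic and only illustrate the workflow; they are not the paper's data.

    import numpy as np

    # Synthetic calibration set: TLS reflectance (arbitrary units) vs. clay content (%).
    reflectance = np.array([0.62, 0.58, 0.55, 0.50, 0.47, 0.43, 0.40, 0.35])
    clay_pct = np.array([5.0, 12.0, 9.0, 20.0, 26.0, 24.0, 33.0, 38.0])

    r = np.corrcoef(reflectance, clay_pct)[0, 1]              # strongly negative correlation
    slope, intercept = np.polyfit(reflectance, clay_pct, 1)   # least-squares line
    print(f"r = {r:.2f}; clay % ~ {slope:.1f} * reflectance + {intercept:.1f}")

    # Once calibrated, the line converts a reflectance profile measured along a
    # section into an estimated clay-content curve.
    profile = np.array([0.60, 0.52, 0.44, 0.38])
    print(slope * profile + intercept)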


Journal ArticleDOI
TL;DR: In this article, a single-photon avalanche diode-based pixel array for the analysis of fluorescence phenomena is presented, which integrates a single photon detector combined with an active quenching circuit and a 17-bit digital events counter.
Abstract: A single-photon avalanche diode-based pixel array for the analysis of fluorescence phenomena is presented. Each 180 × 150 μm² pixel integrates a single photon detector combined with an active quenching circuit and a 17-bit digital event counter. On-chip master logic provides the digital control phases required by the pixel array with full programmability of the main timing synchronisms. The pixel exhibits an average dark count rate of 3 kcps and a dynamic range of over 120 dB in time-uncorrelated operation. A complete characterization of the single photon avalanche diode is reported. Time-resolved fluorescence measurements have been demonstrated by detecting the fluorescence decay of quantum-dot samples without the aid of any optical filters for excitation laser light cutoff.

128 citations


Proceedings ArticleDOI
05 Nov 2009
TL;DR: The acoustic and linguistic properties of children's speech for both read and spontaneous speech, and the developments in automatic speech recognition for children with application to spoken dialogue and multimodal dialogue system design are reviewed.
Abstract: In this paper, we review: (1) the acoustic and linguistic properties of children's speech for both read and spontaneous speech, and (2) the developments in automatic speech recognition for children with application to spoken dialogue and multimodal dialogue system design. First, the effect of developmental changes on the absolute values and variability of acoustic correlates is presented for read speech for children ages 6 and up. Then, verbal child-machine spontaneous interaction is reviewed and results from recent studies are presented. Age trends of acoustic, linguistic and interaction parameters are discussed, such as sentence duration, filled pauses, politeness and frustration markers, and modality usage. Some differences between child-machine and human-human interaction are pointed out. The implications for acoustic modeling, linguistic modeling and spoken dialogue system design for children are presented. We conclude with a review of relevant applications of spoken dialogue technologies for children.

Proceedings ArticleDOI
23 Nov 2009
TL;DR: Results show that EC has a positive effect on collaboration although it appears to be associated with a more complex interaction; for children with ASD, EC was also related to a higher number of "negotiation" moves, which may reflect their higher need of coordination during the collaborative activity.
Abstract: We present the design and evaluation of the Collaborative Puzzle Game (CPG), a tabletop interactive activity developed for fostering collaboration in children with Autism Spectrum Disorder (ASD). The CPG was inspired by cardboard jigsaw puzzles and runs on the MERL DiamondTouch table [7]. Digital pieces can be manipulated by direct finger touch. The CPG features a set of interaction rules called Enforced Collaboration (EC); in order to be moved, puzzle pieces must be touched and dragged simultaneously by two players. Two studies were conducted to test whether EC has the potential to serve as an interaction paradigm that would help foster collaborative skills. In Study 1, 70 boys with typical development were tested; in Study 2, 16 boys with ASD were tested. Results show that EC has a positive effect on collaboration although it appears to be associated with a more complex interaction. For children with ASD, EC was also related to a higher number of "negotiation" moves, which may reflect their higher need of coordination during the collaborative activity.

Proceedings ArticleDOI
10 Nov 2009
TL;DR: A TAC with embedded analog-to-digital conversion is implemented in a 130-nm CMOS imaging technology and can operate both as a TAC or as an analog counter, thus allowing both time-correlated or time-uncorrelated imaging operation.
Abstract: A Time-to-Amplitude Converter (TAC) with embedded analog-to-digital conversion is implemented in a 130-nm CMOS imaging technology. The proposed module is conceived for Single-Photon Avalanche Diode imagers and can operate both as a TAC and as an analog counter, thus allowing both time-correlated and time-uncorrelated imaging operation. A single-ramp, 8-bit ADC with two memory banks to allow high-speed, time-interleaved operation is also included within each module. A 32×32-TAC array has been fabricated with a 50-µm pitch in order to prove the highly parallel operation and to test uniformity and power consumption issues. The measured time resolution (LSB) is 160 ps on a 20-ns time range with a uniformity across the array within ±2 LSBs, while DNL and INL are 0.7 LSB and 1.9 LSB respectively. The average power consumption is below 300 µW/pixel when running at 500k measurements per second.

Book ChapterDOI
03 Sep 2009
TL;DR: A model-based approach to system-software co-engineering which is tailored to the specific characteristics of critical on-board systems for the aerospace domain is reported, supported by a System-Level Integrated Modeling (SLIM) Language.
Abstract: We report on a model-based approach to system-software co-engineering which is tailored to the specific characteristics of critical on-board systems for the aerospace domain. The approach is supported by a System-Level Integrated Modeling (SLIM) Language by which engineers are provided with convenient ways to describe nominal hardware and software operation, (probabilistic) faults and their propagation, error recovery, and degraded modes of operation. Correctness properties, safety guarantees, and performance and dependability requirements are given using property patterns which act as parameterized "templates" to the engineers and thus offer a comprehensible and easy-to-use framework for requirement specification. Instantiated properties are checked on the SLIM specification using state-of-the-art formal analysis techniques such as bounded SAT-based and symbolic model checking, and probabilistic variants thereof. The precise nature of these techniques together with the formal SLIM semantics yield a trustworthy modeling and analysis framework for system and software engineers supporting, among others, automated derivation of dynamic (i.e., randomly timed) fault trees, FMEA tables, assessment of FDIR, and automated derivation of observability requirements.
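
As an illustration of the property-pattern style of specification mentioned above, a typical response pattern reads "Globally, it is always the case that if P holds, then S holds within t time units"; instantiated with hypothetical atoms (not taken from any real SLIM model) this becomes, in LTL-style notation:

    \mathbf{G}\,\big(\mathit{sensor\_fault} \;\rightarrow\; \mathbf{F}_{\le 5\,\mathrm{s}}\ \mathit{recovery\_mode}\big)

i.e., whenever sensor_fault is observed, recovery_mode must be entered within 5 seconds; probabilistic variants of the same pattern yield the dependability requirements checked by the probabilistic analysis back-ends.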

Journal ArticleDOI
TL;DR: An empirical study is presented that evaluates two state-of-the-art tool-supported requirements prioritization methods, AHP and CBRank, focusing on three measures: ease of use, time consumption and accuracy.
Abstract: Requirements prioritization aims at identifying the most important requirements for a software system, a crucial step when planning for system releases and deciding which requirements to implement in each release. Several prioritization methods and supporting tools have been proposed so far. How to evaluate their properties, with the aim of supporting the selection of the most appropriate method for a specific project, is considered a relevant question. In this paper, we present an empirical study aiming at evaluating two state-of-the-art tool-supported requirements prioritization methods, AHP and CBRank. We focus on three measures: ease of use, time consumption and accuracy. The experiment has been conducted with 23 experienced subjects on a set of 20 requirements from a real project. Results indicate that for the first two characteristics CBRank outperforms AHP, while for accuracy AHP performs better than CBRank, even if the resulting ranks from the two methods are very similar. The majority of the users found CBRank the "overall best" method.
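
For context on the cost and mechanics being compared: with n requirements, AHP asks for n(n-1)/2 pair-wise judgements (190 for the 20 requirements used in the experiment), and priorities are conventionally extracted from the judgement matrix via its principal eigenvector. A minimal sketch with a hypothetical 3x3 matrix:

    import numpy as np

    # Pair-wise comparison matrix for three hypothetical requirements A, B, C:
    # entry [i, j] states how much more important requirement i is than requirement j.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ])

    # AHP priority vector: normalised principal eigenvector of the comparison matrix.
    eigenvalues, eigenvectors = np.linalg.eig(A)
    principal = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
    priorities = principal / principal.sum()
    print(dict(zip("ABC", priorities.round(3).tolist())))

    n = 20
    print("pair-wise judgements for 20 requirements:", n * (n - 1) // 2)  # 190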

Journal ArticleDOI
10 May 2009
TL;DR: This paper proposes a methodology to derive objective (fitness) functions that drive evolutionary algorithms, and evaluates the overall approach with two simulated autonomous agents, showing that the approach is effective in finding good test cases automatically.
Abstract: A system built in terms of autonomous agents may require even greater correctness assurance than one which is merely reacting to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder; by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents' developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
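
A toy illustration of the fitness-guided search described above; the "agent", the fitness function and the mutation scheme are hypothetical placeholders for whatever the system under test and the derived objective function actually are.

    import random

    random.seed(0)

    def run_agent(test_case):
        """Stand-in for executing the autonomous agent on one test scenario
        (here a test case is just a list of numeric environment parameters)."""
        return sum(abs(p) for p in test_case)

    def fitness(test_case):
        # Higher = more demanding scenario, e.g. closer to violating a requirement.
        return run_agent(test_case)

    def mutate(test_case, sigma=0.5):
        return [p + random.gauss(0, sigma) for p in test_case]

    # Simple (mu + lambda)-style evolutionary loop over candidate test cases.
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
    for _ in range(50):
        offspring = [mutate(random.choice(population)) for _ in range(20)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:20]

    print("most demanding test case found:", [round(p, 2) for p in population[0]])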

Journal ArticleDOI
TL;DR: An ultra-low-power 128 × 64 pixel vision sensor is presented, featuring pixel-level spatial contrast extraction and binarization, and the pixel-embedded binary frame buffer allows the sensor to directly process visual information, such as motion and background subtraction, which are the most useful filters in machine vision applications.
Abstract: An ultra-low-power 128 × 64 pixel vision sensor is presented here, featuring pixel-level spatial contrast extraction and binarization. The asynchronous readout dispatches only the addresses of the asserted pixels, in bursts of 80 MB/s, significantly reducing the amount of data at the output. The pixel-embedded binary frame buffer allows the sensor to directly process visual information, such as motion and background subtraction, which are the most useful filters in machine vision applications. The presented sensor consumes less than 100 μW at 50 fps with 25% pixel activity. Power consumption can be further reduced down to about 30 μW by operating the sensor in Idle-Mode, thus minimizing the sensor activity at the output.

Proceedings ArticleDOI
10 Nov 2009
TL;DR: The characteristics of the array make it an excellent candidate for in-pixel TDC in time-resolved imagers for applications such as 3-D imaging and fluorescence lifetime imaging microscopy (FLIM).
Abstract: We report on the design and characterization of a 32 × 32 time-to-digital converter (TDC) array implemented in a 130 nm imaging CMOS technology. The 10-bit TDCs exhibit a timing resolution of 119 ps with a timing uniformity across the entire array of less than 2 LSBs. The differential- and integral non-linearity (DNL and INL) were measured at ± 0.4 and ±1.2 LSBs respectively. The TDC array was fabricated with a pitch of 50µm in both directions and with a total TDC area of less than 2000µm2. The characteristics of the array make it an excellent candidate for in-pixel TDC in time-resolved imagers for applications such as 3-D imaging and fluorescence lifetime imaging microscopy (FLIM).

Proceedings ArticleDOI
17 May 2009
TL;DR: Source code obfuscation is a protection mechanism widely used to limit the possibility of malicious reverse engineering or attack activities on a software system; however, little is known about its capability to reduce attackers' efficiency and about the contexts in which that efficiency may vary.
Abstract: Source code obfuscation is a protection mechanism widely used to limit the possibility of malicious reverse engineering or attack activities on a software system. Although several code obfuscation techniques and tools are available, little knowledge is available about the capability of obfuscation to reduce attackers' efficiency, and the contexts in which such efficiency may vary.
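
As a reminder of what the studied protection does in practice, a deliberately simple before/after identifier-renaming example is shown below; real obfuscators used in such experiments also transform control flow, strings and data layout.

    # Original, readable version.
    def compute_discount(price, is_member):
        if is_member and price > 100:
            return price * 0.9
        return price

    # After identifier renaming and expression rewriting (same behaviour, harder to read).
    def f0(a0, a1):
        return a0 * (0.9 if a1 and a0 > 100 else 1.0)

    assert f0(150, True) == compute_discount(150, True)
    assert f0(80, True) == compute_discount(80, True)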

Journal ArticleDOI
TL;DR: FT-IR microspectroscopy is a reliable method for the discrimination and classification of allergenic pollen; the limits of its practical application to the monitoring performed in aerobiological stations are also discussed.
Abstract: The discrimination and classification of allergy-relevant pollen was studied for the first time by mid-infrared Fourier transform infrared (FT-IR) microspectroscopy together with unsupervised and supervised multivariate statistical methods. Pollen samples of 11 different taxa were collected, whose outdoor air concentration during the flowering time is typically measured by aerobiological monitoring networks. Unsupervised hierarchical cluster analysis provided valuable information about the reproducibility of FT-IR spectra of the same taxon acquired either from one pollen grain in a 25 × 25 μm2 area or from a group of grains inside a 100 × 100 μm2 area. As regards the supervised learning method, best results were achieved using a K nearest neighbors classifier and the leave-one-out cross-validation procedure on the dataset composed of single pollen grain spectra (overall accuracy 84%). FT-IR microspectroscopy is therefore a reliable method for discrimination and classification of allergenic pollen. The limits of its practical application to the monitoring performed in the aerobiological stations were also discussed.
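
A minimal sketch of the classification setup described above (k-nearest-neighbour classifier evaluated with leave-one-out cross-validation); the random vectors stand in for FT-IR absorbance spectra and are purely illustrative.

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    # Stand-in "spectra": 30 samples x 50 wavenumber bins, three taxa with offset means.
    X = np.vstack([rng.normal(loc=m, size=(10, 50)) for m in (0.0, 1.0, 2.0)])
    y = np.repeat(["taxon_A", "taxon_B", "taxon_C"], 10)

    knn = KNeighborsClassifier(n_neighbors=3)
    accuracy = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {accuracy:.2f}")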

Proceedings ArticleDOI
10 Nov 2009
TL;DR: A monolithic 64-pixel linear array for Fluorescence Lifetime Imaging applications is presented and preliminary fluorescence measurements have been performed obtaining a 6% lifetime precision in 660 µs accumulation time and a very good uniformity among the pixels.
Abstract: A monolithic 64-pixel linear array for Fluorescence Lifetime Imaging applications is presented. Each pixel includes four actively quenched Single Photon Avalanche Diodes, four gated 8-bit counters and is capable of measuring single and double exponential decays. The array has a 34% fill factor, a maximum throughput rate of 320 Mbps, and has been tested up to 40 MHz laser repetition rate. Preliminary fluorescence measurements have been performed obtaining a 6% lifetime precision in 660 µs accumulation time and a very good uniformity among the pixels.
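
For intuition on how gated counters yield a lifetime: with two adjacent gates of equal width Δt, a single-exponential decay gives the textbook rapid-lifetime-determination estimate (a standard relation, not necessarily the estimator used on this chip):

    I(t) = I_0\, e^{-t/\tau}, \qquad \tau = \frac{\Delta t}{\ln(N_1 / N_2)}

where N1 and N2 are the photon counts accumulated in the two gates; a double-exponential decay has four free parameters and therefore needs at least four gated measurements per pixel.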

Journal ArticleDOI
TL;DR: Experimental results show that Fit helps in the understanding of requirements without requiring a significant additional effort.
Abstract: One of the main reasons for the failure of many software projects is the late discovery of a mismatch between the customers' expectations and the pieces of functionality implemented in the delivered system. At the root of such a mismatch is often a set of poorly defined, incomplete, under-specified, and inconsistent requirements. Test driven development has recently been proposed as a way to clarify requirements during the initial elicitation phase, by means of acceptance tests that specify the desired behavior of the system. The goal of the work reported in this paper is to empirically characterize the contribution of acceptance tests to the clarification of the requirements coming from the customer. We focused on Fit tables, a way to express acceptance tests, which can be automatically translated into executable test cases. We ran two experiments with students from University of Trento and Politecnico of Torino, to assess the impact of Fit tables on the clarity of requirements. We considered whether Fit tables actually improve requirement understanding and whether this requires any additional comprehension effort. Experimental results show that Fit helps in the understanding of requirements without requiring a significant additional effort.
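
For readers unfamiliar with the format, a Fit acceptance test is a customer-readable table whose first row names a fixture and whose columns give inputs and expected outputs (columns ending in "()" are computed by the system and checked against the stated value). A hypothetical column-fixture style example:

    DiscountCalculator
    | order total | member | discount() |
    | 50.00       | no     | 0.00       |
    | 150.00      | yes    | 15.00      |
    | 150.00      | no     | 0.00       |

Each row is executed against fixture code that drives the system under test, turning the table directly into the executable acceptance tests studied in the experiments.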

Journal ArticleDOI
10 Dec 2009-Vaccine
TL;DR: Elimination turns out to be possible when cooperation is encouraged by a social planner, provided he incorporates in the "social loss function" the preferences of anti-vaccinators only, which allows an interpretation of the current Italian vaccination policy.

Book ChapterDOI
23 Jun 2009
TL;DR: The HRELTL logic is proposed, which extends Linear-time Temporal Logic with Regular Expressions (RELTL) with hybrid aspects, and it is shown that the satisfiability problem for its linear fragment can be reduced to an equi-satisfiable problem for RELTL.
Abstract: The importance of requirements for the whole development flow calls for strong validation techniques based on formal methods. In the case of discrete systems, some approaches based on temporal logic satisfiability are gaining increasing momentum. However, in many real-world domains (e.g. railways signaling), the requirements constrain the temporal evolution of both discrete and continuous variables. These hybrid domains pose substantial problems: on one side, a continuous domain requires very expressive formal languages; on the other side, the resulting expressiveness results in highly intractable problems. In this paper, we address the problem of requirements validation for real-world hybrid domains, and present two main contributions. First, we propose the HRELTL logic, which extends the Linear-time Temporal Logic with Regular Expressions (RELTL) with hybrid aspects. Second, we show that the satisfiability problem for the linear fragment can be reduced to an equi-satisfiable problem for RELTL. This makes it possible to use automatic (albeit incomplete) techniques based on Bounded Model Checking and on Satisfiability Modulo Theories. The choice of the language is inspired by and validated within a project funded by the European Railway Agency, on the formalization and validation of the European Train Control System specifications. The activity showed that most of the requirements can be formalized into HRELTL, and an experimental evaluation confirmed the practicality of the analyses.


Book ChapterDOI
23 Nov 2009
TL;DR: The fundamental components and interfaces in this architecture are discussed, the developed integrated framework is explained, and results from a qualitative evaluation of the framework in the context of an open reference case are shown.
Abstract: Service-Oriented Architectures (SOA) represent an architectural shift for building business applications based on loosely-coupled services. In a multi-layered SOA environment the exact conditions under which services are to be delivered can be formally specified by Service Level Agreements (SLAs). However, typical SLAs are specified only at the customer level and do not allow service providers to manage their IT stack accordingly, as they have no insight into how customer-level SLAs translate to metrics or parameters at the various layers of the IT stack. In this paper we present a technical architecture for a multi-level SLA management framework. We discuss the fundamental components and interfaces in this architecture and explain the developed integrated framework. Furthermore, we show results from a qualitative evaluation of the framework in the context of an open reference case.

Book ChapterDOI
22 May 2009
TL;DR: The formal framework behind the ASTRO approach is presented; the implementation of the framework and its integration within a commercial toolkit for developing Web services are presented; and the approach is evaluated on a real-world composition domain.
Abstract: One of the key ideas underlying Web services is that of allowing the combination of existing services published on the Web into a new service that achieves some higher-level functionality and satisfies some business goals. As the manual development of the new composite service is recognized as a difficult and error-prone task, the automated synthesis of the composition is considered one of the key challenges in the field of Web services. In this paper, we will present a survey of existing approaches for the synthesis of Web service compositions. We will then focus on a specific approach, the ASTRO approach, which has been shown to support complex composition requirements and to be applicable in real domains. In the paper, we will present the formal framework behind the ASTRO approach; we will present the implementation of the framework and its integration within a commercial toolkit for developing Web services; we will finally evaluate the approach on a real-world composition domain.

Journal ArticleDOI
15 Dec 2009-Toxicon
TL;DR: The construction of an inactive cytolysin with a built-in biological "trigger" that renders the toxin active in the presence of tumour-specific proteinases is considered a proof of concept demonstrating the feasibility of such activation systems in the construction of ITs based on pore-forming cytolysins from sea anemones with reduced unspecific activity.

Journal ArticleDOI
TL;DR: In this article, the authors present a finite element formulation based on a weak form of the boundary value problem for fully coupled thermoelasticity, which is calculated from the irreversible flow of entropy due to the thermal fluxes that have originated from the volumetric strain variations.
Abstract: We present a finite element formulation based on a weak form of the boundary value problem for fully coupled thermoelasticity. The thermoelastic damping is calculated from the irreversible flow of entropy due to the thermal fluxes that have originated from the volumetric strain variations. Within our weak formulation we define a dissipation function that can be integrated over an oscillation period to evaluate the thermoelastic damping. We show the physical meaning of this dissipation function in the framework of the well-known Biot's variational principle of thermoelasticity. The coupled finite element equations are derived by considering harmonic small variations of displacement and temperature with respect to the thermodynamic equilibrium state. In the finite element formulation two elements are considered: the first is a new 8-node thermoelastic element based on the Reissner–Mindlin plate theory, which can be used for modeling thin or moderately thick structures, while the second is a standard three-dimensional 20-node iso-parametric thermoelastic element, which is suitable to model massive structures. For the 8-node element the dissipation along the plate thickness has been taken into account by introducing a through-the-thickness dependence of the temperature shape function. With this assumption the unknowns and the computational effort are minimized. Comparisons with analytical results for thin beams are shown to illustrate the performances of those coupled-field elements. Copyright © 2008 John Wiley & Sons, Ltd.
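
The quantity being evaluated can be stated compactly: if the dissipation function D is integrated over one oscillation period to give the energy ΔW lost per cycle, the thermoelastic quality factor follows from the standard energy-ratio definition (a generic relation, written here more compactly than in the paper's weak formulation):

    Q^{-1} = \frac{\Delta W}{2\pi\, W_{\max}}, \qquad \Delta W = \oint_{T} D\,\mathrm{d}t

where W_max is the peak elastic energy stored during the cycle.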