
Showing papers by "Fondazione Bruno Kessler" published in 2013


Journal ArticleDOI
TL;DR: A novel implementation in ANSI C of the MINE family of algorithms for computing maximal information-based measures of dependence between two variables in large datasets, with the aim of a low memory footprint and ease of integration within bioinformatics pipelines is introduced.
Abstract: Summary: We introduce a novel implementation in ANSI C of the MINE family of algorithms for computing maximal information-based measures of dependence between two variables in large datasets, with the aim of a low memory footprint and ease of integration within bioinformatics pipelines. We provide the libraries minerva (with the R interface) and minepy for Python, MATLAB, Octave and C++. The C solution reduces the large memory requirement of the original Java implementation, has good upscaling properties and offers a native parallelization for the R interface. Low memory requirements are demonstrated on the MINE benchmarks as well as on large (n = 1340) microarray and Illumina GAII RNA-seq transcriptomics datasets. Availability and implementation: Source code and binaries are freely available for download under GPL3 licence at http://minepy.sourceforge.net for minepy and through the CRAN repository http://cran.r-project.org for the R package minerva. All software is multiplatform (MS Windows, Linux and OSX). Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.

180 citations
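
For orientation, the published Python interface centres on a MINE object configured with the algorithm's alpha and c parameters; a minimal usage sketch (synthetic data, library-default parameters, assuming minepy is installed) could look as follows:

import numpy as np
from minepy import MINE

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x ** 2 + rng.normal(scale=0.1, size=1000)   # nonlinear dependence

mine = MINE(alpha=0.6, c=15)   # default MINE parameters
mine.compute_score(x, y)
print(mine.mic())   # maximal information coefficient
print(mine.mas())   # maximum asymmetry score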


Journal ArticleDOI
TL;DR: A requirements prioritization method called Case-Based Ranking (CBRank) is described, which combines project stakeholders' preferences with requirements ordering approximations computed through machine learning techniques, bringing promising advantages.
Abstract: Deciding which, among a set of requirements, are to be considered first and in which order is a strategic process in software development. This task is commonly referred to as requirements prioritization. This paper describes a requirements prioritization method called Case-Based Ranking (CBRank), which combines project stakeholders' preferences with requirements ordering approximations computed through machine learning techniques, bringing promising advantages. First, the human effort to input preference information can be reduced, while preserving the accuracy of the final ranking estimates. Second, domain knowledge encoded as partial order relations defined over the requirement attributes can be exploited, thus supporting an adaptive elicitation process. The techniques CBRank rests on and the associated prioritization process are detailed. Empirical evaluations of properties of CBRank are performed on simulated data and compared with a state-of-the-art prioritization method, providing evidence of the method's ability to support the management of the tradeoff between elicitation effort and ranking accuracy and to exploit domain knowledge. A case study on a real software project complements these experimental measurements. Finally, a positioning of CBRank with respect to state-of-the-art requirements prioritization methods is proposed, together with a discussion of benefits and limits of the method.

161 citations
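
CBRank itself is not shown here, but the core idea of combining a few elicited pairwise preferences with a machine-learned ranking approximation can be sketched with a stand-in learner (logistic regression on attribute differences; the requirement attributes and preference pairs below are hypothetical, and this is not the authors' algorithm):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each requirement described by attribute values, e.g. (cost, risk, value).
features = np.array([[3, 1, 9], [7, 4, 6], [2, 2, 4], [8, 6, 8], [5, 3, 7]], float)

# Stakeholder-elicited preferences: (i, j) means "requirement i before j".
prefs = [(0, 2), (3, 1), (4, 2), (0, 4)]

# Train on feature differences: the sign of w.(x_i - x_j) encodes the preference.
X = np.array([features[i] - features[j] for i, j in prefs] +
             [features[j] - features[i] for i, j in prefs])
y = np.array([1] * len(prefs) + [0] * len(prefs))
model = LogisticRegression().fit(X, y)

# The learned weights induce a total order over all requirements.
scores = features @ model.coef_.ravel()
print(np.argsort(-scores))  # approximated ranking, most important first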


Proceedings Article
14 Jun 2013
TL;DR: The results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge are presented, aiming to bring together researchers in educational NLP technology and textual entailment.
Abstract: We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2-way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.

157 citations


Journal ArticleDOI
TL;DR: In this article, an experiment using the gravitational wave bar detector AURIGA explores the limits of quantum gravity-induced modifications in the ground state of a mechanical oscillator cooled to the sub-millikelvin regime.
Abstract: The elusive effects of quantum gravity could be betrayed by subtle deviations from standard quantum mechanics. An experiment using the gravitational wave bar detector AURIGA explores the limits of quantum gravity-induced modifications in the ground state of a mechanical oscillator cooled to the sub-millikelvin regime.

125 citations


Journal ArticleDOI
TL;DR: The results lead to the conclusion that rational design of intercalation-based electrode materials, such as LiFePO4, with optimized utilization and life requires the tailoring of particles that minimize kinetic barriers and mechanical strain.
Abstract: The chemical phase distribution in hydrothermally grown micrometric single crystals of LiFePO4 following partial chemical delithiation was investigated. Full field and scanning X-ray microscopy were combined with X-ray absorption spectroscopy at the Fe and O K-edges, respectively, to produce maps with high chemical and spatial resolution. The resulting information was compared to morphological insight into the mechanics of the transformation by scanning transmission electron microscopy. This study revealed the interplay at the mesoscale between microstructure and phase distribution during the redox process, as morphological defects were found to kinetically determine the progress of the reaction. Lithium deintercalation was also found to induce severe mechanical damage in the crystals, presumably due to the lattice mismatch between LiFePO4 and FePO4. Our results lead to the conclusion that rational design of intercalation-based electrode materials, such as LiFePO4, with optimized utilization and life requires the tailoring of particles that minimize kinetic barriers and mechanical strain.

118 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: A novel Multi-Task Learning framework (FEGA-MTL) that achieves state-of-the-art classification with little training data; FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions as well as partition-specific appearance variations for a given head pose to build region-specific classifiers.
Abstract: We propose a novel Multi-Task Learning framework (FEGA-MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. As the target (person) moves, distortions in facial appearance owing to camera perspective and scale severely impede performance of traditional head pose classification methods. FEGA-MTL operates on a dense uniform spatial grid and learns appearance relationships across partitions as well as partition-specific appearance variations for a given head pose to build region-specific classifiers. Guided by two graphs which a-priori model appearance similarity among (i) grid partitions based on camera geometry and (ii) head pose classes, the learner efficiently clusters appearance wise related grid partitions to derive the optimal partitioning. For pose classification, upon determining the target's position using a person tracker, the appropriate region specific classifier is invoked. Experiments confirm that FEGA-MTL achieves state-of-the-art classification with few training data.

118 citations
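
At run time, the classification step described in the abstract reduces to a dispatch from the tracked position to a region-specific classifier. A minimal sketch of that dispatch (grid size, scene extent, region map and the placeholder classifiers are all hypothetical, not the learned FEGA-MTL partitioning):

import numpy as np

GRID = (4, 4)          # uniform spatial grid over the scene floor
SCENE = (10.0, 8.0)    # assumed scene extent in metres

def cell_of(position):
    """Map a tracked (x, y) floor position to a grid cell index."""
    gx = min(int(position[0] / SCENE[0] * GRID[0]), GRID[0] - 1)
    gy = min(int(position[1] / SCENE[1] * GRID[1]), GRID[1] - 1)
    return gx, gy

# region_map would come from the learned clustering of grid partitions;
# classifiers stand in for the region-specific head pose classifiers.
region_map = {(gx, gy): gx // 2 for gx in range(GRID[0]) for gy in range(GRID[1])}
classifiers = {r: (lambda feats, r=r: f"pose-estimate-from-region-{r}")
               for r in set(region_map.values())}

def classify_head_pose(position, face_features):
    region = region_map[cell_of(position)]
    return classifiers[region](face_features)

print(classify_head_pose((7.2, 3.1), np.zeros(64)))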


Journal ArticleDOI
TL;DR: In this article, the authors proposed a multifunctional hierarchical honeycomb (MHH) with negative Poisson's ratio (NPR) sub-structures, which is constructed by replacing the solid cell walls of the original regular hexagonal honeycomb with two kinds of equal mass NPR honeycombs, the anisotropic re-entrant honeycomb or the isotropic chiral honeycomb.

117 citations


Journal ArticleDOI
TL;DR: This letter presents a technique to automatically build the extended attribute profiles with the standard deviation attribute based on the statistics of the samples belonging to the classes of interest.
Abstract: Extended attribute profiles, which are based on attribute filters, have recently been presented as efficient tools for spectral-spatial classification of remote sensing images. However, construction of these profiles usually requires manual selection of parameters for the corresponding attribute filters. In this letter, we present a technique to automatically build the extended attribute profiles with the standard deviation attribute based on the statistics of the samples belonging to the classes of interest. The methodology is tested on two widely used hyperspectral images and the results are found to be highly accurate.

111 citations
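
One plausible reading of the automatic construction step, sketched below with hypothetical data: thresholds for the standard-deviation attribute filters are derived from the statistics of the training samples of each class of interest (an illustration only, not the authors' exact procedure):

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training samples: pixel vectors grouped by class label.
samples = {"roof": rng.normal(0.8, 0.05, (50, 10)),
           "road": rng.normal(0.4, 0.15, (50, 10)),
           "grass": rng.normal(0.2, 0.08, (50, 10))}

# One threshold per class: the mean per-band standard deviation of its samples.
thresholds = sorted(float(np.mean(np.std(s, axis=0))) for s in samples.values())
print(thresholds)  # candidate lambda values for the attribute filters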


Journal ArticleDOI
TL;DR: An Interactive Genetic Algorithm (IGA) is proposed that includes incremental knowledge acquisition and combines it with the existing constraints, such as dependencies and priorities, and outperforms IAHP in terms of effectiveness, efficiency and robustness to decision maker errors.
Abstract: Context: The order in which requirements are implemented affects the delivery of value to the end-user, but it also depends on technical constraints and resource availability. The outcome of requirements prioritization is a total ordering of requirements that best accommodates the various kinds of constraints and priorities. During requirements prioritization, some decisions on the relative importance of requirements or the feasibility of a given implementation order must necessarily be made by a human (e.g., the requirements analyst) possessing the involved knowledge. Objective: In this paper, we propose an Interactive Genetic Algorithm (IGA) that includes incremental knowledge acquisition and combines it with the existing constraints, such as dependencies and priorities. We also assess the performance of the proposed algorithm. Method: The validation of IGA was conducted on a real case study, by comparing the proposed algorithm with the state-of-the-art interactive prioritization technique Incomplete Analytic Hierarchy Process (IAHP). Results: The proposed method outperforms IAHP in terms of effectiveness, efficiency and robustness to decision maker errors. Conclusion: IGA produces a good approximation of the reference requirements ranking, requiring an acceptable manual effort and tolerating a reasonable human error rate.

110 citations
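
A toy version of the interactive loop can clarify the idea: a genetic algorithm searches over requirement orderings, with a fitness that penalizes violations of both the known precedence constraints and the pairwise answers elicited from the decision maker (the data below are hypothetical, and this sketch is not the authors' IGA):

import random

random.seed(1)
N = 8                                    # number of requirements
precedences = [(0, 3), (2, 5), (1, 4)]   # (a, b): a must precede b
elicited = [(6, 2), (7, 0)]              # decision-maker pairwise answers

def violations(order, pairs):
    pos = {r: i for i, r in enumerate(order)}
    return sum(pos[a] > pos[b] for a, b in pairs)

def fitness(order):   # lower is better
    return violations(order, precedences) + violations(order, elicited)

def crossover(p1, p2):   # order crossover: keep a slice of p1, fill from p2
    i, j = sorted(random.sample(range(N), 2))
    mid = p1[i:j]
    rest = [r for r in p2 if r not in mid]
    return rest[:i] + mid + rest[i:]

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    pop = pop[:10] + [crossover(*random.sample(pop[:10], 2)) for _ in range(20)]
print(pop[0], fitness(pop[0]))   # best ordering found and its penalty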


Proceedings ArticleDOI
21 Nov 2013
TL;DR: An empirical cost/benefit analysis of two different categories of automated functional web testing approaches is presented, finding that, in the majority of the cases, the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.
Abstract: There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing, and the cost saving gets amplified over the successive releases.

101 citations
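
For readers unfamiliar with the second category: a programmable test encodes the interaction as code against the WebDriver API, which is what makes maintenance cheaper when the page evolves. A minimal example with the Selenium Python bindings (the URL, locators and expected text are hypothetical placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8080/login")
    driver.find_element(By.NAME, "username").send_keys("admin")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # A code-level assertion: when the page changes, typically only the
    # locators above need updating, not a re-recorded interaction.
    assert "Welcome" in driver.find_element(By.TAG_NAME, "h1").text
finally:
    driver.quit()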


Proceedings Article
01 Jun 2013
TL;DR: The proposed approach indirectly (and automatically) exploits the scope of negation cues and the semantic roles of involved entities for reducing the skewness in the training data as well as discarding possible negative instances from the test data.
Abstract: This paper presents the multi-phase relation extraction (RE) approach which was used for the DDI Extraction task of SemEval 2013. As a preliminary step, the proposed approach indirectly (and automatically) exploits the scope of negation cues and the semantic roles of involved entities for reducing the skewness in the training data as well as discarding possible negative instances from the test data. Then, a state-of-the-art hybrid kernel is used to train a classifier which is later applied on the instances of the test data not filtered out by the previous step. The official results of the task show that our approach yields an F-score of 0.80 for DDI detection and an F-score of 0.65 for DDI detection and classification. Our system obtained significantly higher results than all the other participating teams in this shared task and has been ranked 1st.

Posted Content
TL;DR: The authors compare the most often used techniques together with newly proposed ones and incorporate all of them in a learning framework to see whether blending them can further improve the estimation of prior polarity scores.
Abstract: Assigning a positive or negative score to a word out of context (i.e. a word's prior polarity) is a challenging task for sentiment analysis. In the literature, various approaches based on SentiWordNet have been proposed. In this paper, we compare the most often used techniques together with newly proposed ones and incorporate all of them in a learning framework to see whether blending them can further improve the estimation of prior polarity scores. Using two different versions of SentiWordNet and testing regression and classification models across tasks and datasets, our learning approach consistently outperforms the single metrics, providing a new state-of-the-art approach in computing words' prior polarity for sentiment analysis. We conclude our investigation showing interesting biases in calculated prior polarity scores when word Part of Speech and annotator gender are considered.
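
Two of the commonly used single metrics that such a learning framework can blend are the first-sense score and the mean over all senses; both are easy to compute through NLTK's SentiWordNet interface (a sketch, assuming nltk with the sentiwordnet and wordnet corpora downloaded):

from nltk.corpus import sentiwordnet as swn

def first_sense_polarity(word, pos="a"):
    s = list(swn.senti_synsets(word, pos))
    return s[0].pos_score() - s[0].neg_score() if s else 0.0

def mean_polarity(word, pos="a"):
    s = list(swn.senti_synsets(word, pos))
    return sum(x.pos_score() - x.neg_score() for x in s) / len(s) if s else 0.0

for w in ("good", "bad", "cold"):
    print(w, first_sense_polarity(w), mean_polarity(w))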

Journal ArticleDOI
24 Apr 2013-Autism
TL;DR: In this article, the authors examined the effectiveness of a school-based, collaborative technology intervention combined with cognitive behavioral therapy to teach the concepts of social collaboration and social conversation to children with high-functioning autism spectrum disorders.
Abstract: This study examined the effectiveness of a school-based, collaborative technology intervention combined with cognitive behavioral therapy to teach the concepts of social collaboration and social conversation to children with high-functioning autism spectrum disorders (n = 22) as well as to enhance their actual social engagement behaviors (collaboration and social conversation) with peers. Two computer programs were included in the intervention: "Join-In" to teach collaboration and "No-Problem" to teach conversation. Assessment in the socio-cognitive area included concept perception measures, problem solving, Theory of Mind, and a dyadic drawing collaborative task to examine change in children's social engagement. Results demonstrated improvement in the socio-cognitive area with children providing more active social solutions to social problems and revealing more appropriate understanding of collaboration and social conversation after intervention, with some improvement in Theory of Mind. Improvement in actual social engagement was more scattered.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: An unsupervised approach for the automatic detection of static interactive groups is presented, based on a competition of different voting sessions, each one specialized for a particular group cardinality; all the votes are then evaluated using information-theoretic criteria, producing the final set of groups.
Abstract: We present an unsupervised approach for the automatic detection of static interactive groups. The approach builds upon a novel multi-scale Hough voting policy, which incorporates in a flexible way the sociological notion of a group as an F-formation; the goal is to model at the same time small arrangements of close friends and aggregations of many individuals spread over a large area. Our technique is based on a competition of different voting sessions, each one specialized for a particular group cardinality; all the votes are then evaluated using information-theoretic criteria, producing the final set of groups. The proposed technique has been applied to public benchmark sequences and a novel cocktail party dataset, evaluating new group detection metrics and obtaining state-of-the-art performance.
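
The voting intuition is simple to picture: each person casts a vote for the centre of a candidate o-space at a distance d along their body orientation, and votes that accumulate identify a group. A minimal single-scale sketch (the paper's multi-scale competition and information-theoretic selection are omitted; positions, orientations and d are hypothetical):

import numpy as np

# Hypothetical scene: positions (x, y) in metres and orientations (radians).
pos = np.array([[0.0, 0.0], [1.6, 0.0], [0.8, 1.4], [6.0, 5.0]])
theta = np.array([np.pi / 4, 3 * np.pi / 4, -np.pi / 2, 0.0])
d = 0.8  # assumed o-space radius for this scale

votes = pos + d * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Greedy clustering of votes: people whose votes fall within d of a
# seed vote are assigned to the same F-formation.
groups, unassigned = [], list(range(len(votes)))
while unassigned:
    seed = unassigned.pop(0)
    group = [seed] + [i for i in unassigned
                      if np.linalg.norm(votes[i] - votes[seed]) < d]
    unassigned = [i for i in unassigned if i not in group]
    groups.append(group)
print(groups)   # here: [[0, 1, 2], [3]]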

Journal ArticleDOI
TL;DR: In this article, the authors presented ultra-thin silicon chips (flex-chips) on flexible foils, realized through post-processing steps such as wafer thinning, dicing, and transferring the thinned chips to flexible polyimide foils.
Abstract: This paper presents ultra-thin silicon chips (flex-chips) on flexible foils, realized through post-processing steps such as wafer thinning, dicing, and transferring the thinned chips to flexible polyimide foils. Cost-effective chemical etching is adopted for wafer thinning, and the transfer printing approach used to transfer quasi-1-D structures such as micro/nanoscale wires and ribbons is adapted for transferring large ultra-thin flex-chips (widths 4.5-15 mm, lengths 8-36 mm, and thickness ≈ 15 μm). The post-processing capability is demonstrated with passive structures such as metal interconnects realized on the flex-chips before carrying out the chip thinning step. The resistance values of the metal interconnects do not show any appreciable change upon bending of the chips over the tested range, viz. radius of curvature 9 mm and above. Further, the bending mechanics of silicon membranes on foil is investigated to evaluate the bending limits before a mechanical fracture/failure occurs. The distinct advantages of this work are: attaining bendability through post-processing of chips, a cost-effective fabrication process, and easy transfer of chips to the flexible substrates without using conventional and sophisticated equipment such as a pick-and-place setup.

Journal ArticleDOI
TL;DR: In this article, a 32 × 32 pixel image sensor for time-gated fluorescence lifetime detection based on single-photon avalanche diodes is presented, which uses an analog counting approach to minimize the area occupation of pixel electronics while maintaining a nanosecond timing resolution and shot-noise limited operation.
Abstract: This paper presents a 32 × 32 pixel image sensor for time-gated fluorescence lifetime detection based on single-photon avalanche diodes. The sensor, fabricated in a high-voltage 0.35-μm CMOS technology, uses an analog counting approach to minimize the area occupation of pixel electronics while maintaining a nanosecond timing resolution and shot-noise-limited operation. The all-nMOS pixel is formed by 12 transistors and features 25-μm pitch and 20.8% fill factor. The chip includes a phase-locked loop circuit for gating window generation, working at a maximum repetition frequency of 40 MHz, while the sensor can be gated at frequencies up to 80 MHz using an external delay generator. Optical characterization with a picosecond-pulsed laser showed a minimum gating window width of 1.1 ns. Example images acquired in both continuous and time-gated mode are presented, together with a lifetime image obtained with the sensor mounted on a fluorescence microscope.

Journal ArticleDOI
TL;DR: Results show that the N-terminal region needs to be inserted in the lipid membrane before oligomerization into the final pore, implying that there is no need for stable prepore formation.

Proceedings ArticleDOI
08 Sep 2013
TL;DR: This paper provides the first evidence that daily happiness of individuals can be automatically recognized using an extensive set of indicators obtained from mobile phone usage data and "background noise" indicators coming from the weather factor and personality traits.
Abstract: In this paper we provide the first evidence that daily happiness of individuals can be automatically recognized using an extensive set of indicators obtained from mobile phone usage data (call log, SMS and Bluetooth proximity data) and "background noise" indicators coming from the weather factor and personality traits. Our final machine learning model, based on the Random Forest classifier, obtains an accuracy score of 80.81% for a 3-class daily happiness recognition problem. Moreover, we identify and discuss the indicators which have strong predictive power in the source and the feature spaces, discuss different approaches and machine learning models, and provide an insight for future research.
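
The modelling setup is a standard supervised pipeline; a sketch with synthetic stand-ins for the paper's indicators (the feature names and data are hypothetical, so the score printed is chance level, not the paper's 80.81%):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.poisson(5, n),       # calls per day
    rng.poisson(12, n),      # SMS per day
    rng.poisson(8, n),       # Bluetooth proximity contacts
    rng.normal(15, 8, n),    # temperature ("background noise" indicator)
    rng.normal(0, 1, n),     # a personality trait score
])
y = rng.integers(0, 3, n)    # 3-class daily happiness label (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())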

Journal ArticleDOI
TL;DR: An extended MVAR (eMVAR) framework is introduced, allowing either exclusive consideration of time-lagged effects according to the classic notion of Granger causality, or consideration of combined instantaneous and lagged effects according to an extended causality definition.
Abstract: We present an approach for the quantification of directional relations in multiple time series exhibiting significant zero-lag interactions. To overcome the limitations of the traditional multivariate autoregressive (MVAR) modelling of multiple series, we introduce an extended MVAR (eMVAR) framework allowing either exclusive consideration of time-lagged effects according to the classic notion of Granger causality, or consideration of combined instantaneous and lagged effects according to an extended causality definition. The spectral representation of the eMVAR model is exploited to derive novel frequency domain causality measures that generalize to the case of instantaneous effects the known directed coherence (DC) and partial DC measures. The new measures are illustrated in theoretical examples showing that they reduce to the known measures in the absence of instantaneous causality, and describe peculiar aspects of directional interaction among multiple series when instantaneous causality is non-negligible. Then, the issue of estimating eMVAR models from time-series data is faced, proposing two approaches for model identification and discussing problems related to the underlying model assumptions. Finally, applications of the framework on cardiovascular variability series and multichannel EEG recordings are presented, showing how it allows one to highlight patterns of frequency domain causality consistent with well-interpretable physiological interaction mechanisms.
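
For reference, the structural difference between the two model classes admits a compact statement (standard eMVAR notation assumed). The classic MVAR model captures lagged effects only:

y(n) = A_1 y(n-1) + ... + A_p y(n-p) + u(n),

while the extended model adds a zero-lag term,

y(n) = A_0 y(n) + A_1 y(n-1) + ... + A_p y(n-p) + u(n),

in which the instantaneous effects are encoded by A_0; the frequency domain measures generalizing directed coherence are then derived from the spectral representation of this extended model.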

Journal ArticleDOI
TL;DR: A 64 × 64-pixel ultra-low power vision sensor is presented, performing pixel-level dynamic background subtraction as the low-level processing layer of an algorithm for scene interpretation.
Abstract: A 64 × 64-pixel ultra-low power vision sensor is presented, performing pixel-level dynamic background subtraction as the low-level processing layer of an algorithm for scene interpretation. The pixel embeds two digitally-programmable Switched-Capacitor Low-Pass Filters (SC-LPF) and two clocked comparators, aimed at detecting any anomalous behavior of the current photo-generated signal with respect to its past history. The 45 T, 26 μm square pixel has a fill factor of 12%. The vision sensor has been fabricated in a 0.35 μm 2P3M CMOS process, is powered at 3.3 V, and consumes 33 μW at 13 fps, which corresponds to 620 pW/(frame·pixel).

Proceedings ArticleDOI
22 Oct 2013
TL;DR: A novel approach for gesture recognition based on global alignment kernels is shown to be effective in the challenging scenario of user-independent recognition and is embedded into a system targeted to visually impaired users, which will also integrate several other modules.
Abstract: Modern mobile devices provide several functionalities and new ones are being added at a breakneck pace. Unfortunately, browsing the menu and accessing the functions of a mobile phone is not a trivial task for visually impaired users. Low-vision people typically rely on screen readers and voice commands. However, depending on the situation, screen readers are not ideal because blind people may need their hearing for safety, and automatic recognition of voice commands is challenging in noisy environments. Novel smart watch technologies provide an interesting opportunity to design new forms of user interaction with mobile phones. We present our first work towards the realization of a system, based on the combination of a mobile phone and a smart watch for gesture control, for assisting low-vision people during daily life activities. More specifically, we propose a novel approach for gesture recognition which is based on global alignment kernels and is shown to be effective in the challenging scenario of user-independent recognition. This method is used to build a gesture-based user interaction module and is embedded into a system targeted to visually impaired users, which will also integrate several other modules. We present two of them: one for identifying wet floor signs, the other for automatic recognition of predefined logos.
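
The global alignment kernel named above can be implemented in a few lines: following the standard recursion, it accumulates, over all monotone alignments of two sequences, the product of local Gaussian similarities (a compact sketch; sigma and the toy gesture data are hypothetical, and no underflow protection is included):

import numpy as np

def ga_kernel(X, Y, sigma=1.0):
    """Global alignment kernel between sequences X (n x d) and Y (m x d)."""
    n, m = len(X), len(Y)
    G = np.zeros((n + 1, m + 1))
    G[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dist2 = np.sum((X[i - 1] - Y[j - 1]) ** 2)
            k = np.exp(-dist2 / (2 * sigma ** 2))
            G[i, j] = k * (G[i - 1, j - 1] + G[i - 1, j] + G[i, j - 1])
    return G[n, m]

rng = np.random.default_rng(0)
a = rng.normal(size=(20, 3))                   # one gesture recording
b = a[::2] + 0.05 * rng.normal(size=(10, 3))   # a time-warped, noisy variant
c = rng.normal(size=(20, 3))                   # an unrelated gesture
print(ga_kernel(a, b), ga_kernel(a, c))        # the warped variant should score higher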

Journal ArticleDOI
TL;DR: In this paper, the authors presented the results of the characterization of the first high-density (HD) cell silicon photomultipliers produced at FBK, which has a cell size of 15 × 15 μm2 featuring a nominal fill factor of 48%.
Abstract: In this paper, we present the results of the characterization of the first high-density (HD) cell silicon photomultipliers produced at FBK. The most advanced prototype manufactured with this technology has a cell size of 15 × 15 μm2, featuring a nominal fill factor of 48%. To reach this high area coverage, we developed a new border structure to confine the high electric-field region of each single-photon avalanche diode. The measured detection efficiency approaches 30% in the green part of the light spectrum and is above 20% from 400 to 650 nm. At these efficiency values, the correlated noise is very low, giving an excess charge factor below 1.1. We coupled a 2 × 2 × 10 mm3 LYSO scintillator crystal to a 2.2 × 2.2 mm2 silicon photomultiplier, obtaining very promising results for PET applications: energy resolution of less than 11% full-width at half maximum (FWHM) with negligible loss of linearity, and a coincidence resolving time of 200 ps FWHM at 20°C.

Journal ArticleDOI
TL;DR: In this article, the main design and technological characteristics related to the latest 3D sensor process developments at Fondazione Bruno Kessler (FBK, Trento, Italy) are reported.
Abstract: We report on the main design and technological characteristics related to the latest 3D sensor process developments at Fondazione Bruno Kessler (FBK, Trento, Italy). With respect to the previous version of this technology, which involved columnar electrodes of both doping types etched from both wafer sides and stopping at a short distance from the opposite surface, passing-through columns are now available. This feature ensures better performance, but also a higher reproducibility, which is of concern in medium volume productions. In particular, this R&D project was aimed at establishing a suitable technology for the production of 3D pixel sensors to be installed into the ATLAS Insertable B-Layer. An additional benefit is the feasibility of slim edges, which consist of a multiple ohmic column termination with an overall size as low as 100 μm. Eight batches with two different wafer layouts have been fabricated using this approach, and including several design options, among them the ATLAS 3D sensor prototypes compatible with the new read-out chip FE-I4.

Journal ArticleDOI
TL;DR: Transmission X-ray microscopy has been used to investigate individual Co/TiO2 Fischer-Tropsch catalyst particles in 2-D and 3-D with 30 nm spatial resolution, showing that Co is heterogeneously concentrated in the centre of the catalyst particles.

Journal ArticleDOI
TL;DR: It is demonstrated that bacterial and fungal communities in vineyards are mostly stable over the considered seasons, with the presence of a stable core microbiome of operational taxonomic units (OTUs) within each transect.

Proceedings ArticleDOI
11 Dec 2013
TL;DR: A novel approach based on an extension of the IC3 algorithm for infinite-state transition systems with linear constraints is proposed, which finds the feasible region of parameters by complement, incrementally finding and blocking sets of “bad” parameters which lead to system failures.
Abstract: Parametric systems arise in different application domains, such as software, cyber-physical systems or task scheduling. A key challenge is to estimate the values of parameters that guarantee the desired behaviours of the system. In this paper, we propose a novel approach based on an extension of the IC3 algorithm for infinite-state transition systems. The algorithm finds the feasible region of parameters by complement, incrementally finding and blocking sets of “bad” parameters which lead to system failures. If the algorithm terminates, we obtain the precise region of feasible parameters of the system. We describe an implementation for symbolic transition systems with linear constraints and perform an experimental evaluation on benchmarks taken from the domain of hybrid systems. The results demonstrate the potential of the approach.
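
The "by complement" loop can be pictured on a toy one-parameter system with an off-the-shelf SMT solver: repeatedly ask for a parameter value that still makes the system fail, then block a region around it until no bad value remains. This didactic sketch (using z3-solver, with a hypothetical failure condition and an arbitrary blocking width) is not the IC3-based algorithm of the paper, which generalizes blocked regions soundly:

from z3 import Real, Solver, Or, And, sat

p = Real("p")
# Hypothetical system: assume analysis shows it fails iff 1 < p < 3.
bad = And(p > 1, p < 3)

blocked = []   # constraints excluding regions found to contain failures
s = Solver()
s.add(bad)
while True:
    s.push()
    for c in blocked:
        s.add(c)
    if s.check() != sat:   # no unblocked bad parameter remains
        s.pop()
        break
    v = s.model()[p]       # a concrete "bad" parameter value
    s.pop()
    # Block an interval around the bad value. A real implementation
    # generalizes soundly (blocked sets hold only bad parameters);
    # the half-width 0.5 here is an arbitrary placeholder.
    blocked.append(Or(p <= v - 0.5, p >= v + 0.5))

print("bad region covered by", len(blocked), "blocked interval(s)")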

Journal ArticleDOI
TL;DR: A single-quadrature feedback scheme able to overcome the conventional 3 dB limit on parametric squeezing is presented and might be used to squeeze one quadrature of a mechanical resonator below the quantum noise level, even without the need for a quantum limited detector.
Abstract: We present a single-quadrature feedback scheme able to overcome the conventional 3 dB limit on parametric squeezing. The method is experimentally demonstrated in a micromechanical system based on a cantilever with a magnetic tip. The cantilever is detected at low temperature by a SQUID susceptometer, while parametric pumping is obtained by modulating the magnetic field gradient at twice the cantilever frequency. A maximum squeezing of 11.5 dB and 11.3 dB is observed, respectively, in the response to a sinusoidal test signal and in the thermomechanical noise. So far, the maximum squeezing factor is limited only by the maximum achievable parametric modulation. The proposed technique might be used to squeeze one quadrature of a mechanical resonator below the quantum noise level, even without the need for a quantum limited detector.
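
For context, the conventional limit follows from the standard below-threshold treatment of parametric squeezing (a textbook result, with g the pump strength normalized to the instability threshold): the steady-state quadrature noise variances scale as

<δX1²> = <δX1²>_th / (1 + g),   <δX2²> = <δX2²>_th / (1 - g),

and stability requires g < 1, so the noise reduction in the squeezed quadrature cannot exceed a factor of 2, i.e. 3 dB. Feedback acting on the amplified quadrature permits stable operation beyond this threshold, which is, in essence, how the scheme above reaches squeezing factors above 11 dB.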

Journal ArticleDOI
TL;DR: In this article, analytical solutions in closed form are proposed for double peeling of an elastic tape as well as for axisymmetric peeling on a flat smooth rigid substrate, and the results are validated by a fully numerical analysis performed with the aid of a finite element commercial software.
Abstract: The mechanism of detachment of thin films from a flat smooth rigid substrate is investigated. In particular, analytical solutions in closed form are proposed for the double peeling of an elastic tape as well as for the axisymmetric peeling of a membrane. We show that in the case of double peeling of an endless elastic tape, a critical value of the pull-off force is found, above which the tape is completely detached from the substrate. In particular, as the detachment process advances, the peeling angle is stabilized on a limiting value, which only depends on the geometry of the tape, its elastic modulus and on the interfacial energy Δγ. This predicted behavior agrees with the “theory of multiple peeling” and clarifies some aspects of this theory. Moreover, it is also corroborated by experimental results (work in progress) we are carrying out on a standard adhesive tape adhered to a smooth flat poly(methyl methacrylate) surface. In the case of the axisymmetric adhering membrane, a different behavior is observed. In such case, the system is always stable, and the detached area monotonically increases with the peeling force, i.e., the elastic membrane can sustain in principle any applied force. Results are validated by a fully numerical analysis performed with the aid of a finite element commercial software.
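
The single-tape ingredient behind such closed-form solutions is Kendall's energy balance for the peeling of an elastic tape (standard form; F is the peel force, w the tape width, h its thickness, E its elastic modulus, θ the peeling angle and Δγ the interfacial energy):

(F/w)(1 - cos θ) + (F/w)²/(2Eh) = Δγ.

In the double-peeling configuration, writing this balance for the two symmetric tape arms is what yields the critical pull-off force and the stable limiting angle discussed above.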

Proceedings ArticleDOI
09 Dec 2013
TL;DR: It is demonstrated that social attention features are excellent predictors of the Extraversion and Neuroticism personality traits and that while prediction performance for both traits is affected by head pose estimation errors, the impact is more adverse for Extraversion.
Abstract: Correlates between social attention and personality traits have been widely acknowledged in social psychology studies. Head pose has commonly been employed as a proxy for determining the social attention direction in small group interactions. However, the impact of head pose estimation errors on personality estimates has not been studied to our knowledge. In this work, we consider the unstructured and dynamic cocktail party scenario where the scene is captured by multiple, large field-of-view cameras. Head pose estimation is a challenging task under these conditions owing to the uninhibited motion of persons (whose facial appearance varies with perspective and scale changes) and the low resolution of captured faces. Based on proxemic and social attention features computed from position and head pose annotations, we first demonstrate that social attention features are excellent predictors of the Extraversion and Neuroticism personality traits. We then repeat the classification experiments with behavioral features computed from automated estimates; the obtained experimental results show that while prediction performance for both traits is affected by head pose estimation errors, the impact is more adverse for Extraversion.

Proceedings ArticleDOI
20 May 2013
TL;DR: The results emphasise that training can significantly improve the efficiency of subjects working with graphical representations, and challenge the general assumption that graphical representations are more efficient than textual ones, at least in the case of developers not familiar with the graphical representation.
Abstract: Graphical representations are used to visualise, specify, and document software artifacts in all stages of the software development process. In contrast with text, graphical representations are presented in two-dimensional form, which seems easy to process. However, few empirical studies have investigated the efficiency of graphical representations vs. textual ones in modelling and presenting software requirements. Therefore, in this paper, we report the results of an eye-tracking experiment involving 28 participants to study the impact of structured textual vs. graphical representations on subjects' efficiency while performing requirement comprehension tasks. We measure subjects' efficiency in terms of the percentage of correct answers (accuracy) and of the time and effort spent to perform the tasks. We observe no statistically significant difference in terms of accuracy. However, our subjects spent more time and effort while working with the graphical representation, although this extra time and effort does not affect accuracy. Our findings challenge the general assumption that graphical representations are more efficient than textual ones, at least in the case of developers not familiar with the graphical representation. Indeed, our results emphasise that training can significantly improve the efficiency of our subjects working with graphical representations. Moreover, by comparing the visual paths of our subjects, we observe that the spatial structure of the graphical representation leads our subjects to follow two different strategies (top-down vs. bottom-up), and subsequently this hierarchical structure helps developers to ease the difficulty of model comprehension tasks.