
Showing papers by "National Technical University of Athens published in 2011"


Journal ArticleDOI
Marcos Daniel Actis1, G. Agnetta2, Felix Aharonian3, A. G. Akhperjanian  +682 moreInstitutions (109)
TL;DR: Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. CTA, as described in this paper, is an international initiative to build the next-generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and an extension to energies well below 100 GeV and above 100 TeV.
Abstract: Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. Ground-based gamma-ray astronomy has a huge potential in astrophysics, particle physics and cosmology. CTA is an international initiative to build the next generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and the extension to energies well below 100 GeV and above 100 TeV. CTA will consist of two arrays (one in the north, one in the south) for full sky coverage and will be operated as open observatory. The design of CTA is based on currently available technology. This document reports on the status and presents the major design concepts of CTA.

1,006 citations


Journal ArticleDOI
TL;DR: The current status of knowledge and evidence on the mechanisms and involvement of intracellular oxidative stress and DNA damage in human malignancy evolution and possible use of these parameters as cancer biomarkers are presented and controversies related to specific methodologies used for the measurement of oxidatively induced DNA lesions in human cells or tissues are discussed.
Abstract: Cells in tissues and organs are continuously subjected to oxidative stress and free radicals on a daily basis. This free radical attack is of exogenous or endogenous (intracellular) origin. The cells withstand and counteract this attack by using several different defense mechanisms, ranging from free radical scavengers like glutathione (GSH), vitamins C and E, and antioxidant enzymes like catalase, superoxide dismutase and various peroxidases to sophisticated and elaborate DNA repair mechanisms. The outcome of this dynamic equilibrium is usually the induction of oxidatively induced DNA damage: a variety of lesions of varying severity for the cell, from isolated base lesions and single-strand breaks (SSBs) to complex lesions like double-strand breaks (DSBs) and other non-DSB oxidatively generated clustered DNA lesions (OCDLs). The accumulation of DNA damage through misrepair or incomplete repair may lead to mutagenesis and, consequently, transformation, particularly if combined with a deficient apoptotic pathway. In this review, we present the current status of knowledge and evidence on the mechanisms and involvement of intracellular oxidative stress and DNA damage in human malignancy evolution and the possible use of these parameters as cancer biomarkers. At the same time, we discuss controversies related to potential artifacts inherent to specific methodologies used for the measurement of oxidatively induced DNA lesions in human cells or tissues.

820 citations


Journal ArticleDOI
TL;DR: Differences and similarities between these two approaches to data analysis are discussed, relevant literature is reviewed and a set of insights are provided for selecting the appropriate approach.
Abstract: In the field of transportation, data analysis is probably the most important and widely used research tool available. In the data analysis universe, there are two ‘schools of thought’: the first uses statistics as the tool of choice, while the second uses one of the many methods of Computational Intelligence. Although the goal of both approaches is the same, the two have kept each other at arm’s length. Researchers frequently fail to communicate and even to understand each other’s work. In this paper, we discuss differences and similarities between these two approaches, review relevant literature and attempt to provide a set of insights for selecting the appropriate approach.

752 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3034 moreInstitutions (179)
TL;DR: In this article, a search for squarks and gluinos in final states containing jets, missing transverse momentum and no electrons or muons is presented, and the data were recorded by the ATLAS experiment in √s = 7 TeV proton-proton collisions at the Large Hadron Collider.

452 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3104 moreInstitutions (190)
TL;DR: In this paper, the charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity, and the relationship between the mean transverse momentum and the charged-particle multiplicity are measured.
Abstract: Measurements are presented from proton-proton collisions at centre-of-mass energies of √s = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the LHC. Events were collected using a single-arm minimum-bias trigger. The charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity and the relationship between the mean transverse momentum and charged-particle multiplicity are measured. Measurements in different regions of phase space are shown, providing diffraction-reduced measurements as well as more inclusive ones. The observed distributions are corrected to well-defined phase-space regions, using model-independent corrections. The results are compared to each other and to various Monte Carlo (MC) models, including a new AMBT1 pythia6 tune. In all the kinematic regions considered, the particle multiplicities are higher than predicted by the MC models. The central charged-particle multiplicity per event and unit of pseudorapidity, for tracks with pT > 100 MeV, is measured to be 3.483 +/- 0.009 (stat) +/- 0.106 (syst) at √s = 0.9 TeV and 5.630 +/- 0.003 (stat) +/- 0.169 (syst) at √s = 7 TeV.

435 citations


Proceedings ArticleDOI
24 Jul 2011
TL;DR: The controller aims to optimize the operation of the microgrid during interconnected operation, i.e., maximize its value by optimizing the production of the local DGs and power exchanges with the main distribution grid.
Abstract: Microgrids are Low Voltage distribution networks comprising various distributed generators (DG), storage devices and controllable loads that can operate either interconnected with or isolated from the main distribution grid, as a controlled entity. This paper describes the operation of a Central Controller for Microgrids. The controller aims to optimize the operation of the Microgrid during interconnected operation, i.e. to maximize its value by optimizing the production of the local DGs and the power exchanges with the main distribution grid. Two market policies are assumed, including Demand Side Bidding options for controllable loads. The developed optimization algorithms are applied to a typical LV study-case network operating under various market policies and assuming realistic spot market prices and DG bids reflecting realistic operational costs. The effects on the Microgrid and the Distribution network operation are presented and discussed.
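The interconnected-mode optimization can be pictured as a merit-order dispatch, in which local DG bids and the grid import price are stacked by cost. The sketch below is a deliberately simplified illustration, not the paper's algorithm; the unit names, bids and prices are hypothetical.

```python
def dispatch(load_kw, grid_price, dg_bids):
    """Serve load_kw at minimum cost from DG offers and grid import.

    dg_bids: list of (name, price_eur_per_kwh, capacity_kw) tuples.
    The main grid is modelled as one more offer with unlimited capacity.
    """
    offers = sorted(dg_bids + [("grid", grid_price, float("inf"))],
                    key=lambda o: o[1])  # cheapest first (merit order)
    schedule, remaining = {}, load_kw
    for name, price, cap in offers:
        take = min(cap, remaining)
        if take > 0:
            schedule[name] = take
            remaining -= take
    return schedule

# A 50 kW load: the cheap CHP unit is dispatched first, the balance is
# imported from the grid, and the expensive fuel cell stays off.
plan = dispatch(50, grid_price=0.09,
                dg_bids=[("CHP", 0.06, 30), ("fuel_cell", 0.12, 10)])
# plan == {"CHP": 30, "grid": 20}
```

A real Microgrid Central Controller would add storage, Demand Side Bidding and the option of selling back to the grid, but the cost-stacking idea is the same.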

429 citations


Journal ArticleDOI
TL;DR: In this article, a survey regarding methods and tools presently available to determine potential and exploitable energy in the most important renewable sectors (i.e., solar, wind, wave, biomass and geothermal energy) is presented.
Abstract: The recent statements of both the European Union and the US Presidency pushed in the direction of using renewable forms of energy, in order to act against climate changes induced by the growing concentration of carbon dioxide in the atmosphere. In this paper, a survey regarding methods and tools presently available to determine potential and exploitable energy in the most important renewable sectors (i.e., solar, wind, wave, biomass and geothermal energy) is presented. Moreover, challenges for each renewable resource are highlighted as well as the available tools that can help in evaluating the use of a mix of different sources.

378 citations


Proceedings ArticleDOI
27 Jun 2011
TL;DR: This paper investigates the application of CS to data collection in wireless sensor networks, and aims at minimizing the network energy consumption through joint routing and compressed aggregation, and proposes a mixed-integer programming formulation along with a greedy heuristic.
Abstract: As a burgeoning technique for signal processing, compressed sensing (CS) is being increasingly applied to wireless communications. However, little work has been done on applying CS to multihop networking scenarios. In this paper, we investigate the application of CS to data collection in wireless sensor networks, aiming to minimize the network energy consumption through joint routing and compressed aggregation. We first characterize the optimal solution to this optimization problem, then prove its NP-completeness. We further propose a mixed-integer programming formulation along with a greedy heuristic, from which both the optimal (for small-scale problems) and near-optimal (for large-scale problems) aggregation trees are obtained. Our results validate the efficacy of the greedy heuristic, as well as the great improvement in energy efficiency achieved by our joint routing and aggregation scheme.
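The energy argument behind compressed aggregation can be sketched with a back-of-the-envelope model on a chain topology. This is a simplified stand-in for the joint routing/aggregation problem, with hypothetical network size and measurement count:

```python
def chain_cost_raw(n):
    # Plain data collection on an n-node chain: node i must relay its
    # own reading plus those of all i-1 upstream nodes, so it sends i
    # packets; the network total is n(n+1)/2.
    return sum(range(1, n + 1))

def chain_cost_hybrid_cs(n, m):
    # Hybrid compressed aggregation: once a subtree carries more than
    # m readings, a node forwards only m coded CS measurements, so
    # node i sends min(i, m) packets.
    return sum(min(i, m) for i in range(1, n + 1))

raw = chain_cost_raw(100)           # 5050 packet transmissions
cs = chain_cost_hybrid_cs(100, 10)  # 955 packet transmissions
```

With 100 nodes and 10 measurements, the hybrid scheme cuts transmissions by more than a factor of five on this toy chain; on a general graph, choosing the routing tree that minimizes this cost is exactly the NP-complete problem the paper formulates.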

358 citations


Journal ArticleDOI
TL;DR: Autonomous control of surgical robotic platforms may offer enhancements such as higher precision, intelligent manoeuvres, tissue‐damage avoidance, etc.
Abstract: Background Autonomous control of surgical robotic platforms may offer enhancements such as higher precision, intelligent manoeuvres, tissue-damage avoidance, etc. Autonomous robotic systems in surgery are largely at the experimental level, although some have reached clinical application. Methods A literature review pertaining to commercial medical systems which incorporate autonomous and semi-autonomous features, as well as experimental work involving automation of various surgical procedures, is presented. Results are drawn from major databases, excluding papers whose methods were not experimentally implemented on real robots. Results Our search yielded several experimental and clinical applications, describing progress in autonomous surgical manoeuvres, ultrasound guidance, optical coherence tomography guidance, cochlear implantation, motion compensation, and orthopaedic, neurological and radiosurgery robots. Conclusion Autonomous and semi-autonomous systems are beginning to emerge in various interventions, automating important steps of the operation. These systems are expected to become a standard modality and to revolutionize the face of surgery. Copyright © 2011 John Wiley & Sons, Ltd.

324 citations


Journal ArticleDOI
01 May 2011-Fuel
TL;DR: In this paper, an experimental study was conducted to evaluate the effects of using blends of diesel fuel with either ethanol, in proportions of 5% and 10%, or n-butanol, in proportions of 8% and 16% (by vol.), on the combustion behavior of a fully instrumented, six-cylinder, turbocharged and after-cooled, heavy-duty, direct-injection (DI) ‘Mercedes-Benz’ engine installed at the laboratory.

306 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a novel approach to UV-completion of a class of non-renormalizable theories, according to which the high-energy scattering amplitudes get unitarized by production of extended classical objects (classicalons), playing a role analogous to black holes, in the case of nongravitational theories.
Abstract: We suggest a novel approach to UV-completion of a class of non-renormalizable theories, according to which the high-energy scattering amplitudes get unitarized by production of extended classical objects (classicalons), playing a role analogous to black holes, in the case of non-gravitational theories. The key property of classicalization is the existence of a classicalizer field that couples to energy-momentum sources. Such localized sources are excited in high-energy scattering processes and lead to the formation of classicalons. Two kinds of natural classicalizers are Nambu-Goldstone bosons (or, equivalently, longitudinal polarizations of massive gauge fields) and scalars coupled to energy-momentum type sources. Classicalization has interesting phenomenological applications for the UV-completion of the Standard Model both with and without the Higgs. In the Higgsless Standard Model the high-energy scattering amplitudes of longitudinal W-bosons self-unitarize via classicalization, without the help of any new weakly-coupled physics. Alternatively, in the presence of a Higgs boson, classicalization could explain the stabilization of the hierarchy. In both scenarios the high-energy scatterings are dominated by the formation of classicalons, which subsequently decay into many particle states. The experimental signatures at the LHC are quite distinctive, with sharp differences in the two cases.

Journal ArticleDOI
TL;DR: In this paper, molecular dynamics simulations of a composite consisting of an ungrafted or a grafted spherical silica nanoparticle embedded in a melt of 20-monomer atactic polystyrene chains have been performed.
Abstract: Atomistic molecular dynamics simulations of a composite consisting of an ungrafted or a grafted spherical silica nanoparticle embedded in a melt of 20-monomer atactic polystyrene chains have been performed. The structural properties of the polymer in the vicinity of a nanoparticle have been studied. The nanoparticle modifies the polymer structure in its neighborhood. These changes increase for higher grafting densities and larger particle diameters. Mass and number density profiles show layering of the polymer chains around the nanoparticle, which extends to ∼2 nm. In contrast, the increase in the polymer’s radius of gyration and other induced ordering (alignment of the chains parallel to the surface and orientation of backbone segments) are shorter-ranged. The infiltration of free polystyrene chains into the grafted chains region is reduced with increasing grafting density. Therefore, the interpenetration of grafted and free chains at high grafting densities, which is responsible for the mechanical anchoring…

Journal ArticleDOI
Georges Aad1, Brad Abbott2  +5600 moreInstitutions (187)
TL;DR: In this article, measurements of luminosity obtained using the ATLAS detector during early running of the Large Hadron Collider (LHC) at root s = 7 TeV are presented, independently determined using several detectors and multiple algorithms, each having different acceptances, systematic uncertainties and sensitivity to background.
Abstract: Measurements of luminosity obtained using the ATLAS detector during early running of the Large Hadron Collider (LHC) at √s = 7 TeV are presented. The luminosity is independently determined using several detectors and multiple algorithms, each having different acceptances, systematic uncertainties and sensitivity to background. The ratios of the luminosities obtained from these methods are monitored as a function of time and of μ, the average number of inelastic interactions per bunch crossing. Residual time- and μ-dependence between the methods is less than 2% for 0 < μ < 2.5. Absolute luminosity calibrations, performed using beam separation scans, have a common systematic uncertainty of +/- 11%, dominated by the measurement of the LHC beam currents. After calibration, the luminosities obtained from the different methods differ by at most +/- 2%. The visible cross sections measured using the beam scans are compared to predictions obtained with the PYTHIA and PHOJET event generators and the ATLAS detector simulation.

Journal ArticleDOI
TL;DR: In this article, a hereditarily indecomposable Banach space with dual space isomorphic to ℓ1 is constructed, and every bounded linear operator on this space is expressible as λI + K, with λ a scalar and K compact.
Abstract: We construct a hereditarily indecomposable Banach space with dual space isomorphic to ℓ1. Every bounded linear operator on this space is expressible as λI + K, with λ a scalar and K compact.

Journal ArticleDOI
TL;DR: In this article, a distributed state estimation method for multi-area power systems is presented, where each area performs its own state estimation, using local measurements, and exchanges border information (estimated boundary states and measurements) at a coordination state estimator, which computes the system-wide state.
Abstract: This paper presents a new distributed state estimation method for multi-area power systems. Each area performs its own state estimation, using local measurements, and exchanges border information (estimated boundary states and measurements) with a coordination state estimator, which computes the system-wide state. Furthermore, observability and bad-data analysis are accomplished in a distributed manner. The proposed method is illustrated with the IEEE 14-bus system, and test results with the IEEE 118-bus system are also given.
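The coordination step can be illustrated with a scalar toy example, in which two areas independently estimate a shared boundary voltage and the coordinator fuses the estimates by inverse-variance weighting. The measurement values and the assumption of unit-variance meters below are purely illustrative, not taken from the paper:

```python
def local_estimate(measurements):
    # Local WLS with unit-variance meters reduces to the sample mean;
    # the variance of the resulting estimate is 1/n.
    n = len(measurements)
    return sum(measurements) / n, 1.0 / n

def coordinate(estimates):
    # The coordinator fuses the areas' boundary-state estimates by
    # inverse-variance weighting, the WLS-optimal linear combination.
    weights = [1.0 / var for _, var in estimates]
    return sum(w * est for w, (est, _) in zip(weights, estimates)) / sum(weights)

area1 = local_estimate([1.02, 1.00, 1.01, 1.01])  # four meters see the tie-line
area2 = local_estimate([1.03, 1.01])              # two meters see it
v_boundary = coordinate([area1, area2])           # ≈ 1.0133 p.u.
```

Area 1's estimate gets twice the weight because it pools twice as many measurements; the full method does the same fusion over vectors of boundary states with proper gain matrices.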

Journal ArticleDOI
TL;DR: In this paper, the design and performance of a spark-resistant bulk-micromegas chamber for large-area muon detectors at the Large Hadron Collider at CERN for luminosities in excess of 10^34 cm^-2 s^-1 was described.
Abstract: We report on the design and performance of a spark-resistant bulk-micromegas chamber. The principle of this design lends itself to the construction of large-area muon chambers for the upgrade of the detectors at the Large Hadron Collider at CERN for luminosities in excess of 10^34 cm^-2 s^-1 or other high-rate applications.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah, A. A. Abdelalim3  +3042 moreInstitutions (179)
TL;DR: In this paper, the cross-section and fraction of J/psi mesons produced in B-hadron decays are measured in proton-proton collisions at √s = 7 TeV with the ATLAS detector at the LHC, using 2.3 pb(-1) of integrated luminosity.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3072 moreInstitutions (177)
TL;DR: A measurement of the production cross-section for top-quark pairs (tt̄) in pp collisions at √s = 7 TeV is presented in this article, using data recorded with the ATLAS detector at the Large Hadron Collider.
Abstract: A measurement of the production cross-section for top-quark pairs (tt̄) in pp collisions at √s = 7 TeV is presented using data recorded with the ATLAS detector at the Large Hadron Collider.

Journal ArticleDOI
TL;DR: In this paper, experimental tests were conducted at the authors' laboratory on a bus/truck, turbocharged diesel engine in order to investigate the formation mechanisms of nitric oxide (NO), smoke, and combustion noise radiation during hot starting for various alternative fuel blends.

Journal ArticleDOI
21 Apr 2011
TL;DR: A generic Internet of Things architecture is introduced, trying to resolve the existing restrictions of current architectural models by integrating both RFID and smart object-based infrastructures, while also exploring a third parameter, i.e. the social potentialities of the Internet of Things building blocks, towards shaping the “Social Internet of Things”.
Abstract: The term Internet of Things refers to the networked interconnection of objects of diverse nature, such as electronic devices and sensors, but also physical objects and beings as well as virtual data and environments. Although the basic concept of the Internet of Things sounds simple, its application is difficult and, so far, the respective existing architectural models are rather monolithic and are dominated by several limitations. The paper introduces a generic Internet of Things architecture that tries to resolve the existing restrictions of current architectural models by integrating both RFID and smart object-based infrastructures, while also exploring a third parameter, i.e. the social potentialities of the Internet of Things building blocks, towards shaping the “Social Internet of Things”. The proposed architecture is based on a layered, lightweight and open middleware solution following the paradigm of Service Oriented Architecture and the Semantic Model Driven Approach, which is realized at both design time and deployment time, covering the whole service lifecycle for the corresponding services and applications provided.

Journal ArticleDOI
TL;DR: A K-means clustering approach is proposed for the automated diagnosis of defective rolling element bearings, which presents a 100% classification success and is tested in one literature established laboratory test case and in three different industrial test cases.
Abstract: A K-means clustering approach is proposed for the automated diagnosis of defective rolling element bearings. Since K-means clustering is an unsupervised learning procedure, the method can be directly implemented on measured vibration data. Thus, the need to train the method with data measured on the specific machine under defective bearing conditions is eliminated. This constitutes the major advantage of the method, especially in industrial environments. Critical to the success of the method is the feature set used, which consists of appropriately selected frequency-domain parameters, extracted both from the raw signal and from the signal envelope, as a result of the engineering expertise gained from understanding the physical behavior of defective rolling element bearings. Other advantages of the method are its ease of programming, simplicity and robustness. In order to overcome the sensitivity of the method to the choice of the initial cluster centers, the initial centers are selected using features extracted from simulated signals, resulting from a well-established model for the dynamic behavior of defective rolling element bearings. Then, the method is implemented as a two-stage procedure. At the first stage, the method decides whether a bearing fault exists or not. At the second stage, the type of the defect (e.g. inner or outer race) is identified. The effectiveness of the method is tested on one literature-established laboratory test case and on three different industrial test cases. Each test case includes successive measurements from bearings under different types of defects. In all cases, the method presents a 100% classification success. In contrast, a K-means clustering approach based on typical statistical time-domain features presents an unstable classification behavior.
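The two-stage procedure can be sketched with a 1-D K-means on scalar features. The feature values and seed centers below are hand-picked stand-ins for the paper's envelope-spectrum features and simulated-signal initialization, chosen only to make the mechanics visible:

```python
def assign(p, centers):
    # Nearest-center rule used by K-means to classify a new sample.
    return min(range(len(centers)), key=lambda j: abs(p - centers[j]))

def kmeans_1d(points, centers, iters=50):
    # Lloyd's algorithm on scalar features. The initial centers are
    # fixed a priori (here hand-picked, standing in for the paper's
    # model-based initialization), removing the usual sensitivity of
    # K-means to random seeding.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            groups[assign(p, centers)].append(p)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Stage 1: healthy vs faulty, using an impulsiveness (kurtosis-like) feature.
stage1 = kmeans_1d([2.9, 3.1, 3.0, 8.0, 12.0, 9.5], [3.0, 9.0])
is_faulty = assign(10.2, stage1) == 1        # new sample lands in faulty cluster

# Stage 2 (faulty samples only): inner vs outer race, using a hypothetical
# dominant envelope-frequency feature normalized by shaft speed.
stage2 = kmeans_1d([5.4, 5.2, 3.6, 3.5], [5.0, 3.5])
defect = ["inner race", "outer race"][assign(5.3, stage2)]
```

The real method works in the same way, only with multi-dimensional frequency-domain feature vectors and Euclidean distances instead of scalars.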

Journal ArticleDOI
TL;DR: In this article, the authors compare the implementation of two semi-quantitative landslide assessment approaches, using landslide susceptibility maps compiled in a GIS environment, and show that even though both methods correctly capture the landslide status of the second site, the RES map exhibits better behavior in the spatial distribution of the various landslide susceptibility zones.
Abstract: As landslides are very common in Greece, causing serious problems to the social and economic welfare of many communities, the implementation of a proper hazard analysis system will help the creation of a reliable susceptibility map. This will help local communities to define safe land use and urban development. The purpose of this study is to compare the implementation of two semi-quantitative landslide assessment approaches, using landslide susceptibility maps compiled in a GIS environment. The compared methods are the rock engineering system (RES) and the analytic hierarchy process (AHP). For the landslide susceptibility analysis, the northeastern part of the Achaia County was examined. This area suffers from many landslides because of its proximity to the tectonically active Corinthian Gulf and its geological setting (Neogene sediments, flysch and other bedrock formations, with local overthrusts). Ten parameters were used in both methodologies, and each one was separated into five categories ranging from 0 to 4, representing their specific conditions derived from the investigation of the landslides in the western part of the study area (ranking area). A layer map was generated for each parameter, using GIS, while the weighting coefficients of each methodology were used for the compilation of the RES and AHP final maps of the eastern part of the study area (validating area). Examining these two maps reveals that, even though both correctly show the landslide status of the second site, the RES map exhibits better behavior in the spatial distribution of the various landslide susceptibility zones.

Journal ArticleDOI
TL;DR: In this article, liquid solvent extraction and supercritical fluid extraction, using different solvents and carbon dioxide respectively, were performed on virgin olive oil and sunflower oil; the optimal solvent extraction conditions for phenols were 180 min using ethanol, at a solvent-to-sample ratio of 5:1 v/w and at pH 2.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3163 moreInstitutions (177)
TL;DR: In this article, the anti-kt algorithm is used to identify jets, with two jet resolution parameters, R = 0.4 and 0.6, and the dominant uncertainty comes from the jet energy scale, which is determined to within 7% for central jets above 60 GeV transverse momentum.
Abstract: Jet cross sections have been measured for the first time in proton-proton collisions at a centre-of-mass energy of 7 TeV using the ATLAS detector. The measurement uses an integrated luminosity of 17 nb-1 recorded at the Large Hadron Collider. The anti-kt algorithm is used to identify jets, with two jet resolution parameters, R = 0.4 and 0.6. The dominant uncertainty comes from the jet energy scale, which is determined to within 7% for central jets above 60 GeV transverse momentum. Inclusive single-jet differential cross sections are presented as functions of jet transverse momentum and rapidity. Dijet cross sections are presented as functions of dijet mass and the angular variable χ. The results are compared to expectations based on next-to-leading-order QCD, which agree with the data, providing a validation of the theory in a new kinematic regime.

Journal ArticleDOI
TL;DR: In this paper, the effect of successively replacing wheat flour with dietary fiber from wheat, oat, barley, and maize, or cereal bran (CB) from wheat, oat, and rice, on cake batter, final cake quality parameters, as well as on product shelf-life was studied.
Abstract: The effect of successively replacing (10%, 20%, and 30%) wheat flour with dietary fiber (DF) from wheat, oat, barley, and maize or cereal bran (CB) from wheat, oat, and rice on cake batter, final cake quality parameters, as well as on product shelf-life was studied. Batter viscosity (control, 2.96; wheat fiber 30%, 20.21; rice bran 10%, 0.47 Pa·s^n), cake-specific volume (control, 2.27; wheat fiber 20%, 2.83; rice bran 30%, 1.94 cm3/g), porosity (control, 0.75; wheat fiber 30%, 0.81; rice bran 30%, 0.69), and crumb moisture content (control, 20.07%; wheat fiber 30%, 26.45%; oat bran 30%, 13.89%) increased significantly (P < 0.05) with DF addition but decreased with CB addition. Addition of DF resulted in softer crumb texture (control, 4.20 N; wheat fiber 20%, 3.19 N), while CB addition increased crumb firmness (rice bran 30%, 10.84 N). Minor differences were observed in the crumb and crust color of the DF cakes with respect to the control. Addition of CB decreased the L values of crumb color significantly, and the decrease grew with increased level of CB incorporation. DF addition led to cakes with greater acceptance by panelists than CB addition, similar to the control. DF cakes stored in polyethylene bags at 25 °C and 60% relative humidity for 6 days showed delayed moisture loss and lower firmness compared to CB cakes. The optimal level of incorporation, based on both the objective and sensory results, was found to be 20% for DF and 10% for CB. In conclusion, by incorporating DF or CB properly, cakes with improved nutritional value can be manufactured.

Journal ArticleDOI
TL;DR: In this paper, a study was carried out to assess the extractability of tomato waste carotenoids in different organic solvents and to optimise the extraction parameters (type of solvent, extraction time, temperature and extraction steps) for maximum yield.
Abstract: Tomato waste is an important source of natural carotenoids. This study was carried out to assess the extractability of tomato waste carotenoids in different organic solvents and to optimise the extraction parameters (type of solvent, extraction time, temperature and extraction steps) for maximum yield. Among other solvents, we tested a new environmentally friendly one, ethyl lactate, which gave the highest carotenoid yield (243.00 mg kg-1 dry tomato waste) at 70 °C, compared to acetone (51.90 mg kg-1), ethyl acetate (46.21 mg kg-1), hexane (34.45 mg kg-1) and ethanol (17.57 mg kg-1). The carotenoid recovery was significantly (P < 0.05) affected by the number of extraction steps and temperature in all solvents. Mathematical equations predicted rather satisfactorily (R2 = 0.89-0.93) the rate of carotenoid extraction in the above-mentioned solvents. Carotenoid concentration increased with time, approaching a quasi-saturated condition at approximately 30 min of extraction.
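The quasi-saturating behaviour described above is typically captured by a first-order mass-transfer model. The sketch below uses illustrative parameter values; the saturation level c_sat and rate constant k are assumptions, not the paper's fitted values:

```python
import math

def first_order_extraction(t_min, c_sat, k):
    # First-order solid-liquid extraction kinetics: the carotenoid
    # concentration rises toward the saturation level c_sat as
    # c(t) = c_sat * (1 - exp(-k * t)).
    return c_sat * (1.0 - math.exp(-k * t_min))

# With an illustrative rate constant of 0.15 per minute, about 99% of
# the extractable carotenoids are recovered by 30 min, matching the
# quasi-saturation reported in the abstract.
c30 = first_order_extraction(30, c_sat=243.0, k=0.15)
```

Fitting c_sat and k to concentration-time data for each solvent yields the kind of predictive equations (R2 ≈ 0.89-0.93) the study reports.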

Journal ArticleDOI
Georges Aad1, Brad Abbott2, J. Abdallah3, A. A. Abdelalim4  +3139 moreInstitutions (192)
TL;DR: In this paper, a measurement of the cross section for the inclusive production of isolated prompt photons in pp collisions at a center-of-mass energy √s = 7 TeV is presented.
Abstract: A measurement of the cross section for the inclusive production of isolated prompt photons in pp collisions at a center-of-mass energy √s = 7 TeV is presented. The measurement covers the pseudorapidity ranges |η(γ)| < 1.37 and 1.52 ≤ |η(γ)| < 1.81 in the transverse energy range 15 ≤ E_T(γ) < 100 GeV. The results are based on an integrated luminosity of 880 nb(-1), collected with the ATLAS detector at the Large Hadron Collider. Photon candidates are identified by combining information from the calorimeters and from the inner tracker. Residual background in the selected sample is estimated from data based on the observed distribution of the transverse isolation energy in a narrow cone around the photon candidate. The results are compared to predictions from next-to-leading-order perturbative QCD calculations.

Journal ArticleDOI
TL;DR: The proposed algorithm reconstructs a background instance on demand under any traffic conditions, and demonstrates a rather robust performance in various operating conditions including unstable lighting, different view-angles and congestion.
Abstract: An innovative system for detecting and extracting vehicles in traffic surveillance scenes is presented. This system locates moving objects present in complex road scenes by implementing an advanced background subtraction methodology. The innovation concerns a histogram-based filtering procedure, which collects scattered background information carried in a series of frames, at pixel level, generating reliable instances of the actual background. The proposed algorithm reconstructs a background instance on demand under any traffic conditions. The background reconstruction algorithm demonstrated a rather robust performance in various operating conditions including unstable lighting, different view-angles and congestion.
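The histogram-based idea can be sketched in a few lines: over a series of frames, the most frequent intensity at each pixel is taken as background, so transient vehicles are filtered out. This is a minimal illustration of the principle, not the paper's full filtering procedure:

```python
from collections import Counter

def reconstruct_background(frames):
    # Histogram-based filtering at pixel level: the modal (most
    # frequent) intensity across the frame series is kept as the
    # background value for each pixel.
    h, w = len(frames[0]), len(frames[0][0])
    return [[Counter(f[y][x] for f in frames).most_common(1)[0][0]
             for x in range(w)] for y in range(h)]

def foreground_mask(frame, background, thresh=25):
    # Background subtraction: flag pixels deviating from the
    # reconstructed background by more than `thresh`.
    return [[abs(p - b) > thresh for p, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

# Three 1x3 grey-scale frames; a bright "vehicle" crosses the middle pixel.
frames = [[[10, 10, 10]], [[10, 200, 10]], [[10, 10, 10]]]
bg = reconstruct_background(frames)    # [[10, 10, 10]]
mask = foreground_mask(frames[1], bg)  # [[False, True, False]]
```

Because the mode is insensitive to short-lived outliers, the background can be rebuilt on demand even when vehicles occupy each pixel for part of the observation window.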

Journal ArticleDOI
22 Jun 2011-Polymer
TL;DR: In this article, the authors study the effect of in situ synthesized 10 nm silica nanoparticles on the glass transition and dynamics of natural rubber networks, using differential scanning calorimetry, broadband dielectric relaxation spectroscopy and thermally stimulated depolarization currents.

Journal ArticleDOI
TL;DR: It is shown that a coupling of the inflaton kinetic term to the Einstein tensor allows f ≪ M_p by enhancing the gravitational friction acting on the inflaton during inflation.
Abstract: In natural inflation, the inflaton is a pseudo-Nambu-Goldstone boson which acquires a mass by explicit breaking of a global shift symmetry at scale f. In this case, for small field values, the potential is flat and stable under radiative corrections. Nevertheless, slow roll conditions enforce f ≫ M_p, making the validity of the whole scenario questionable. In this Letter, we show that a coupling of the inflaton kinetic term to the Einstein tensor allows f ≪ M_p by enhancing the gravitational friction acting on the inflaton during inflation. This new unique interaction (a) keeps the theory perturbative in the whole inflationary trajectory, (b) preserves the tree-level shift invariance of the pseudo-Nambu-Goldstone boson, and (c) avoids the introduction of any new degrees of freedom with respect to standard natural inflation.