
Showing papers by "University of Texas at Arlington published in 2012"


Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4  +2964 moreInstitutions (200)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess of events has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9.
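As a quick illustration, the quoted correspondence between significance and background fluctuation probability follows from the one-sided Gaussian tail convention. A minimal Python sketch (the printed value is about 1.8×10−9, consistent with the quoted 1.7×10−9 once the significance is rounded to 5.9):

# Sketch: converting a significance in standard deviations to a one-sided
# Gaussian tail probability, the convention behind the quoted 5.9 sigma figure.
from scipy.stats import norm

significance = 5.9                 # standard deviations
p_value = norm.sf(significance)    # one-sided upper-tail probability
print(f"{p_value:.2e}")            # ~1.8e-09, consistent with the quoted 1.7e-09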

9,282 citations


Journal ArticleDOI
TL;DR: This paper reviews the status of hierarchical control strategies applied to microgrids and discusses the future trends.
Abstract: Advanced control strategies are vital components for realization of microgrids. This paper reviews the status of hierarchical control strategies applied to microgrids and discusses the future trends. This hierarchical control structure consists of primary, secondary, and tertiary levels, and is a versatile tool in managing stationary and dynamic performance of microgrids while incorporating economical aspects. Various control approaches are compared and their respective advantages are highlighted. In addition, the coordination among different control hierarchies is discussed.
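To make the primary/secondary split concrete, here is a minimal Python sketch of droop-based primary control with a secondary frequency-restoration integrator; this is a generic textbook realization with illustrative gains and setpoints, not a controller from any specific work in the review:

# Sketch: droop-based primary control plus secondary frequency restoration
# (one common realization of the hierarchy; all constants are illustrative).
F_NOM = 60.0       # nominal frequency (Hz)
KP_DROOP = 0.05    # droop gain (Hz per per-unit active power)
KI_SEC = 0.5       # secondary integral gain

def primary_droop(p_pu: float) -> float:
    """Primary level: local frequency reference set by droop on measured active power."""
    return F_NOM - KP_DROOP * p_pu

def secondary_restore(f_meas: float, correction: float, dt: float) -> float:
    """Secondary level: slow integral action that restores frequency to nominal."""
    return correction + KI_SEC * (F_NOM - f_meas) * dt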

1,234 citations


Journal ArticleDOI
TL;DR: The conceptual structure underlying agile scholarship is delineated through an analysis of authors who have made notable contributions to the field, and agile researchers are urged to embrace a theory-based approach in their scholarship.

944 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the use of reinforcement learning to design feedback controllers for discrete and continuous-time dynamical systems that combine features of adaptive control and optimal control, which are not usually designed to be optimal in the sense of minimizing user-prescribed performance functions.
Abstract: This article describes the use of principles of reinforcement learning to design feedback controllers for discrete- and continuous-time dynamical systems that combine features of adaptive control and optimal control. Adaptive control [1], [2] and optimal control [3] represent different philosophies for designing feedback controllers. Optimal controllers are normally designed offline by solving Hamilton-Jacobi-Bellman (HJB) equations, for example, the Riccati equation, using complete knowledge of the system dynamics. Determining optimal control policies for nonlinear systems requires the offline solution of nonlinear HJB equations, which are often difficult or impossible to solve. By contrast, adaptive controllers learn online to control unknown systems using data measured in real time along the system trajectories. Adaptive controllers are not usually designed to be optimal in the sense of minimizing user-prescribed performance functions. Indirect adaptive controllers use system identification techniques to first identify the system parameters and then use the obtained model to solve optimal design equations [1]. Adaptive controllers may satisfy certain inverse optimality conditions [4].
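For context, the offline design the abstract contrasts with online adaptive learning looks roughly like the following Python sketch: solving the algebraic Riccati equation for a linear-quadratic regulator requires complete knowledge of the system matrices. The matrices and cost weights below are illustrative assumptions, not taken from the article:

# Sketch: offline LQR design via the algebraic Riccati equation, which needs
# full knowledge of A and B -- the design philosophy contrasted with online
# adaptive/reinforcement-learning control (2-state example, illustrative values).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # assumed system dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                               # state cost weight
R = np.array([[1.0]])                       # control cost weight

P = solve_continuous_are(A, B, Q, R)        # solves A'P + PA - P B R^-1 B' P + Q = 0
K = np.linalg.solve(R, B.T @ P)             # optimal state-feedback gain, u = -K x
print(K)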

841 citations


Journal ArticleDOI
TL;DR: A practical design method is developed for cooperative tracking control of higher-order nonlinear systems with a dynamic leader using a robust adaptive neural network controller for each follower node such that all follower nodes ultimately synchronize to the leader node with bounded residual errors.

805 citations


Journal ArticleDOI
TL;DR: It is argued that the conflict between hosts and viruses has led to the invention and diversification of molecular arsenals, which, in turn, promote the cellular co-option of endogenous viruses.
Abstract: Recent studies have uncovered myriad viral sequences that are integrated or 'endogenized' in the genomes of various eukaryotes. Surprisingly, it appears that not just retroviruses but almost all types of viruses can become endogenous. We review how these genomic 'fossils' offer fresh insights into the origin, evolutionary dynamics and structural evolution of viruses, which are giving rise to the burgeoning field of palaeovirology. We also examine the multitude of ways through which endogenous viruses have influenced, for better or worse, the biology of their hosts. We argue that the conflict between hosts and viruses has led to the invention and diversification of molecular arsenals, which, in turn, promote the cellular co-option of endogenous viruses.

741 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, S. Abdel Khalek  +3081 moreInstitutions (197)
TL;DR: A combined search for the Standard Model Higgs boson with the ATLAS experiment at the LHC using datasets corresponding to integrated luminosities from 1.04 fb(-1) to 4.9 fb(-1) of pp collisions is described in this paper.

572 citations


Journal ArticleDOI
TL;DR: The symposium includes articles from 10 urban analysts working on 30 cities around the globe, whose collaborative work aims to understand different types of city shrinkage and the role that different approaches, policies and strategies have played in the regeneration of these cities.
Abstract: Urban shrinkage is not a new phenomenon. It has been documented in a large literature analyzing the social and economic issues that have led to population flight, resulting, in the worst cases, in the eventual abandonment of blocks of housing and neighbourhoods. Analysis of urban shrinkage should take into account the new realization that this phenomenon is now global and multidimensional, but also little understood in all its manifestations. Thus, as the world's population increasingly becomes urban, orthodox views of urban decline need redefinition. The symposium includes articles from 10 urban analysts working on 30 cities around the globe. These analysts belong to the Shrinking Cities International Research Network (SCIRN), whose collaborative work aims to understand different types of city shrinkage and the role that different approaches, policies and strategies have played in the regeneration of these cities. In this way the symposium will inform both a rich diversity of analytical perspectives and country-based studies of the challenges faced by shrinking cities. It will also disseminate SCIRN's research results from the last 3 years.

570 citations


Journal ArticleDOI
TL;DR: This paper presents three design techniques for cooperative control of multiagent systems on directed graphs, namely, Lyapunov design, neural adaptive design, and linear quadratic regulator (LQR)-based optimal design.
Abstract: This paper presents three design techniques for cooperative control of multiagent systems on directed graphs, namely, Lyapunov design, neural adaptive design, and linear quadratic regulator (LQR)-based optimal design. Using a carefully constructed Lyapunov equation for digraphs, it is shown that many results of cooperative control on undirected graphs or balanced digraphs can be extended to strongly connected digraphs. A neural adaptive control technique is adopted to solve the cooperative tracking problems of networked nonlinear systems with unknown dynamics and disturbances. Results for both first-order and high-order nonlinear systems are given. Two examples, i.e., cooperative tracking control of coupled Lagrangian systems and modified FitzHugh-Nagumo models, demonstrate the feasibility of the proposed neural adaptive control technique. For cooperative tracking control of general linear systems, which include integrator dynamics as special cases, it is shown that the control gain design can be decoupled from the topology of the graph by using the LQR-based optimal control technique. Moreover, the synchronization region is unbounded, which is a desired property of the controller. The proposed optimal control method is applied to cooperative tracking control of two-mass-spring systems, which are well-known models for vibration in many mechanical systems.
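To illustrate the LQR-based idea of decoupling the gain design from the graph topology, here is a minimal Python sketch in which one gain K, designed from the local dynamics only, is applied to each agent's neighbourhood tracking error. The double-integrator dynamics, coupling gain, and pinning term are illustrative assumptions rather than the paper's exact construction:

# Sketch: a single LQR gain shared by all agents and applied to the local
# neighbourhood error (illustrative dynamics, gains, and graph interface).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])            # assumed double-integrator agents
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = B.T @ P                                        # shared feedback gain (R = I)
c = 2.0                                            # coupling gain, assumed large enough

def local_control(x_i, neighbor_states, x_leader, pin_gain):
    """Control input of agent i from its neighbourhood tracking error (leader pinned)."""
    err = sum(x_j - x_i for x_j in neighbor_states) + pin_gain * (x_leader - x_i)
    return c * (K @ err)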

550 citations


Journal ArticleDOI
TL;DR: In this paper, a one-day-ahead PV power output forecasting model for a single station is derived based on the weather forecasting data, actual historical power output data, and the principle of SVM.
Abstract: Due to the growing demand for renewable energy, photovoltaic (PV) generation systems have increased considerably in recent years. However, the power output of PV systems is affected by different weather conditions. Accurate forecasting of PV power output is important for system reliability and for promoting large-scale PV deployment. This paper proposes algorithms to forecast the power output of PV systems based upon weather classification and support vector machines (SVM). In the process, weather conditions are divided into four types: clear sky, cloudy day, foggy day, and rainy day. A one-day-ahead PV power output forecasting model for a single station is derived from weather forecast data, actual historical power output data, and the principles of SVM. When applied to a 20-kW PV station in China, the proposed forecasting model for grid-connected PV systems proves effective and promising.
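A minimal Python sketch of the weather-classification-plus-SVM idea, training one regressor per weather type and selecting it from the day-ahead forecast; the features, hyperparameters, and data interface are illustrative assumptions, not the authors' settings:

# Sketch: one SVM regressor per weather class; tomorrow's forecast weather type
# selects which model predicts the day-ahead PV output (assumed data interface).
from sklearn.svm import SVR

WEATHER_TYPES = ["clear", "cloudy", "foggy", "rainy"]

def train_models(samples):
    """samples: dict mapping weather type -> (X, y) of historical features and power."""
    models = {}
    for w in WEATHER_TYPES:
        X, y = samples[w]
        models[w] = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    return models

def forecast_day_ahead(models, weather_type, forecast_features):
    """Predict power output using the model matching the forecast weather type."""
    return models[weather_type].predict(forecast_features)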

547 citations


Journal ArticleDOI
Georges Aad, B. Abbott1, Jalal Abdallah2, A. A. Abdelalim3  +3013 moreInstitutions (174)
TL;DR: In this article, detailed measurements of the electron performance of the ATLAS detector at the LHC were reported, using decays of the Z, W and J/psi particles.
Abstract: Detailed measurements of the electron performance of the ATLAS detector at the LHC are reported, using decays of the Z, W and J/psi particles. Data collected in 2010 at root s = 7 TeV are used, corresponding to an integrated luminosity of almost 40 pb(-1). The inter-alignment of the inner detector and the electromagnetic calorimeter, the determination of the electron energy scale and resolution, and the performance in terms of response uniformity and linearity are discussed. The electron identification, reconstruction and trigger efficiencies, as well as the charge misidentification probability, are also presented.

Journal ArticleDOI
TL;DR: 2 studies demonstrate the psychometric strength, clinical utility, and the initial construct validity of the CSI in evaluating CS‐related clinical symptoms in chronic pain populations.
Abstract: Central Sensitization (CS) has been proposed as a common pathophysiological mechanism to explain related syndromes for which no specific organic cause can be found. The term Central Sensitivity Syndrome (CSS) has been proposed to describe these poorly understood disorders related to CS. The goal of this investigation was to develop the Central Sensitization Inventory (CSI), which identifies key symptoms associated with CSSs, and quantifies the degree of these symptoms. The utility of the CSI, to differentiate among different types of chronic pain patients that presumably have different levels of CS impairment, was then evaluated. Study 1 demonstrated strong psychometric properties (test-retest reliability = 0.817; Cronbach's alpha = 0.879) of the CSI in a cohort of normative subjects. A factor analysis (including both normative and chronic pain subjects) yielded 4 major factors (all related to somatic and emotional symptoms), accounting for 53.4% of the variance in the dataset. In Study 2, the CSI was administered to four groups: fibromyalgia (FM); chronic widespread pain (CWP) without FM; work-related regional chronic low back pain (CLBP); and normative control group. Analyses revealed that the FM patients reported the highest CSI scores, and the normative population the lowest (p<.05). Analyses also demonstrated that the prevalence of previously diagnosed CSSs and related disorders was highest in the FM group and lowest in the normative group (p<.001). Taken together, these two studies demonstrate the psychometric strength, clinical utility, and the initial construct validity of the CSI in evaluating CS-related clinical symptoms in chronic pain populations.
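For reference, the internal-consistency figure quoted above (Cronbach's alpha = 0.879) is computed from the item-score matrix as in the short Python sketch below; this is the standard formula, not code from the study:

# Sketch: Cronbach's alpha from an (n_respondents x n_items) score matrix,
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    n_items = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1.0 - item_var_sum / total_var)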

Journal ArticleDOI
Georges Aad1, Brad Abbott2, J. Abdallah3, S. Abdel Khalek4  +3073 moreInstitutions (193)
TL;DR: In this paper, a Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>.
Abstract: Differential measurements of charged particle azimuthal anisotropy are presented for lead-lead collisions at root s(NN) = 2.76 TeV with the ATLAS detector at the LHC, based on an integrated luminosity of approximately 8 mu b(-1). This anisotropy is characterized via a Fourier expansion of the distribution of charged particles in azimuthal angle relative to the reaction plane, with the coefficients v(n) denoting the magnitude of the anisotropy. Significant v(2)-v(6) values are obtained as a function of transverse momentum (0.5 < p(T) < 20 GeV), pseudorapidity (|eta| < 2.5), and centrality using an event plane method. The v(n) values for n >= 3 are found to vary weakly with both eta and centrality, and their p(T) dependencies are found to follow an approximate scaling relation, v(n)(1/n)(p(T)) proportional to v(2)(1/2)(p(T)), except in the top 5% most central collisions. A Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>. For pairs of charged particles with a large pseudorapidity gap (|Delta eta| = |eta(a) - eta(b)| > 2) and one particle with p(T) < 3 GeV, the v(2,2)-v(6,6) values are found to factorize as v(n,n)(p(T)(a), p(T)(b)) approximate to v(n)(p(T)(a))v(n)(p(T)(b)) in central and midcentral events. Such factorization suggests that these values of v(2,2)-v(6,6) are primarily attributable to the response of the created matter to the fluctuations in the geometry of the initial state. A detailed study shows that the v(1,1)(p(T)(a), p(T)(b)) data are consistent with the combined contributions from a rapidity-even v(1) and global momentum conservation. A two-component fit is used to extract the v(1) contribution. The extracted v(1) is observed to cross zero at p(T) approximate to 1.0 GeV, reaches a maximum at 4-5 GeV with a value comparable to that for v(3), and decreases at higher p(T).
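As a worked illustration of the two-particle definition above, a short Python sketch estimating v(n,n) = <cos(n Delta phi)> from an array of pair angle differences, with the single-particle v(n) recovered under the factorization assumption (toy input; detector effects and centrality selection are ignored):

# Sketch: two-particle Fourier coefficients v_{n,n} = <cos(n * delta_phi)> and
# the single-particle v_n implied by the factorization assumption (equal pT).
import numpy as np

def vnn(delta_phi: np.ndarray, n_max: int = 6) -> dict:
    """Return {n: <cos(n * delta_phi)>} for n = 1..n_max."""
    return {n: float(np.mean(np.cos(n * delta_phi))) for n in range(1, n_max + 1)}

def vn_from_factorization(vnn_value: float) -> float:
    """v_n from v_{n,n} ~ v_n(a) v_n(b) when both particles are in the same pT bin."""
    return float(np.sqrt(max(vnn_value, 0.0)))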

Journal ArticleDOI
TL;DR: The proposed approaches in each category, including the modified MPPT techniques that properly detect the global MPP, are surveyed and a brief discussion of their characteristics is provided.
Abstract: Partial shading in photovoltaic (PV) arrays renders conventional maximum power point tracking (MPPT) techniques ineffective. The reduced efficiency of shaded PV arrays is a significant obstacle in the rapid growth of the solar power systems. Thus, addressing the output power mismatch and partial shading effects is of paramount value. Extracting the maximum power of partially shaded PV arrays has been widely investigated in the literature. The proposed solutions can be categorized into four main groups. The first group includes modified MPPT techniques that properly detect the global MPP. They include power curve slope, load-line MPPT, dividing rectangles techniques, the power increment technique, instantaneous operating power optimization, Fibonacci search, neural networks, and particle swarm optimization. The second category includes different array configurations for interconnecting PV modules, namely series-parallel, total-cross-tie, and bridge-link configurations. The third category includes different PV system architectures, namely centralized architecture, series-connected microconverters, parallel-connected microconverters, and microinverters. The fourth category includes different converter topologies, namely multilevel converters, voltage injection circuits, generation control circuits, module-integrated converters, and multiple-input converters. This paper surveys the proposed approaches in each category and provides a brief discussion of their characteristics.
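To make the global-MPP problem concrete, the Python sketch below pairs a coarse voltage sweep with local perturb-and-observe refinement; it is a generic illustration of avoiding local maxima under partial shading, not any specific technique from the survey, and the measurement interface is assumed:

# Sketch: coarse global sweep to find the neighbourhood of the global maximum
# power point, then local perturb-and-observe refinement (generic illustration).
def track_global_mpp(measure_power, v_min, v_max, coarse_step, fine_step, iters=50):
    """measure_power(v): assumed callback returning array output power at voltage v."""
    v_best, p_best = v_min, measure_power(v_min)
    v = v_min
    while v <= v_max:                       # coarse sweep over the operating range
        p = measure_power(v)
        if p > p_best:
            v_best, p_best = v, p
        v += coarse_step
    v, p_prev, direction = v_best, p_best, 1.0
    for _ in range(iters):                  # perturb-and-observe around the best point
        v += direction * fine_step
        p = measure_power(v)
        if p < p_prev:
            direction = -direction          # reverse when the perturbation reduces power
        p_prev = p
    return v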

Journal ArticleDOI
Georges Aad1, Georges Aad2, Brad Abbott3, Brad Abbott1  +5592 moreInstitutions (189)
TL;DR: The ATLAS trigger system as discussed by the authors selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy.
Abstract: Proton-proton collisions at root s = 7 TeV and heavy ion collisions at root s(NN) = 2.76 TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch crossing rate of 40 MHz and the ATLAS trigger system is designed to record approximately 200 of these per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010 and the performance of the trigger system components and selections based on the 2010 collision data are shown. A brief outline of plans for the trigger system in 2011 is presented.

Journal ArticleDOI
TL;DR: A new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking and a semi- supervised long-term RF algorithm to refine the multimedia data representation.
Abstract: We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.
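For intuition about the ranking step, here is a minimal Python sketch of generic graph-Laplacian ranking: given a learned Laplacian and a query indicator vector, scores come from a regularized linear system. This is a simplified stand-in for the step LRGA feeds into, not the LRGA construction itself:

# Sketch: generic Laplacian-regularized ranking -- solve (I + alpha * L) f = y,
# where y marks the query items and f gives relevance scores (illustrative only).
import numpy as np

def rank_with_laplacian(L: np.ndarray, y: np.ndarray, alpha: float = 0.99) -> np.ndarray:
    n = L.shape[0]
    return np.linalg.solve(np.eye(n) + alpha * L, y)   # higher score = more relevant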

Journal ArticleDOI
TL;DR: A cooperative policy iteration algorithm is given for graphical games that converges to the best response when the neighbors of each agent do not update their policies, and to the cooperative Nash equilibrium when all agents update their policies simultaneously.

Journal ArticleDOI
TL;DR: It is recommended that cost-effective transition of hydrologic DA from research to operations should be helped by developing community-based, generic modeling and DA tools or frameworks, and through fostering collaborative efforts among hydrologic modellers, DA developers, and operational forecasters.
Abstract: Data assimilation (DA) holds considerable potential for improving hydrologic predictions as demonstrated in numerous research studies. However, advances in hydrologic DA research have not been adequately or timely implemented in operational forecast systems to improve the skill of forecasts for better informed real-world decision making. This is due in part to a lack of mechanisms to properly quantify the uncertainty in observations and forecast models in real-time forecasting situations and to conduct the merging of data and models in a way that is adequately efficient and transparent to operational forecasters. The need for effective DA of useful hydrologic data into the forecast process has become increasingly recognized in recent years. This motivated a hydrologic DA workshop in Delft, the Netherlands in November 2010, which focused on advancing DA in operational hydrologic forecasting and water resources management. As an outcome of the workshop, this paper reviews, in relevant detail, the current status of DA applications in both hydrologic research and operational practices, and discusses the existing or potential hurdles and challenges in transitioning hydrologic DA research into cost-effective operational forecasting tools, as well as the potential pathways and newly emerging opportunities for overcoming these challenges. Several related aspects are discussed, including (1) theoretical or mathematical aspects in DA algorithms, (2) the estimation of different types of uncertainty, (3) new observations and their objective use in hydrologic DA, (4) the use of DA for real-time control of water resources systems, and (5) the development of community-based, generic DA tools for hydrologic applications. It is recommended that cost-effective transition of hydrologic DA from research to operations should be helped by developing community-based, generic modeling and DA tools or frameworks, and through fostering collaborative efforts among hydrologic modellers, DA developers, and operational forecasters.

Journal ArticleDOI
TL;DR: For example, trace-metal/TOC ratios can provide insight into the degree of watermass restriction and estimates of deepwater renewal times in restricted anoxic marine systems.

Journal ArticleDOI
TL;DR: The core idea is to enlarge the distance between different classes under the conceptual framework of LSR, and a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged.
Abstract: This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of the L2,1 norm of a matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
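A minimal Python sketch of the ε-dragging idea described above: class targets are pushed apart along class-dependent sign directions before a ridge-regularized least squares fit. For simplicity the dragging magnitudes are fixed here, whereas the paper learns them; all parameter values are illustrative:

# Sketch: epsilon-dragging with fixed dragging magnitudes, followed by a
# ridge-regularized least squares fit (simplified; the paper learns the drags).
import numpy as np

def dlsr_fit(X, labels, n_classes, eps=0.5, lam=1e-2):
    n, d = X.shape
    T = -np.ones((n, n_classes))
    T[np.arange(n), labels] = 1.0          # one-vs-rest +1/-1 targets
    T = T + eps * np.sign(T)               # drag targets apart along sign directions
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
    return W

def dlsr_predict(W, X):
    return np.argmax(X @ W, axis=1)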

Journal ArticleDOI
TL;DR: The results support that iCMBA technology is highly translational and could have broad impact on surgeries where surgical tissue adhesives, sealants, and hemostatic agents are used.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: A two-stage multi-task sparse learning (MTSL) framework is proposed to efficiently locate the common and specific patches which are important to discriminate all the expressions and only a particular expression, respectively.
Abstract: In this paper, we present a new idea to analyze facial expression by exploring some common and specific information among different expressions. Inspired by the observation that only a few facial parts are active in expression disclosure (e.g., around the mouth and eyes), we try to discover the common and specific patches which are important to discriminate all the expressions and only a particular expression, respectively. A two-stage multi-task sparse learning (MTSL) framework is proposed to efficiently locate those discriminative patches. In the first-stage MTSL, expression recognition tasks, each of which aims to find dominant patches for one expression, are combined to locate common patches. Second, two related tasks, facial expression recognition and face verification, are coupled to learn specific facial patches for each individual expression. Extensive experiments validate the existence and significance of common and specific patches. Utilizing these learned patches, we achieve superior performance on expression recognition compared to the state of the art.
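A minimal Python sketch of the common-patch idea: a multi-task sparse model (scikit-learn's MultiTaskLasso standing in for the paper's MTSL formulation) ranks patches by the weight they receive across all expression tasks; the patch features and regularization value are assumptions for illustration:

# Sketch: rank patches by their shared importance across expression tasks using
# a multi-task sparse regression (stand-in for MTSL; assumed precomputed features).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

def find_common_patches(X, Y, n_patches, feats_per_patch, alpha=0.05):
    """X: (n_samples, n_patches * feats_per_patch); Y: (n_samples, n_tasks) one-vs-all labels."""
    model = MultiTaskLasso(alpha=alpha).fit(X, Y)
    coef = np.abs(model.coef_)                                    # (n_tasks, n_features)
    patch_weight = coef.sum(axis=0).reshape(n_patches, feats_per_patch).sum(axis=1)
    return np.argsort(patch_weight)[::-1]                         # patches by shared importance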

Posted Content
TL;DR: Tests of differences in effect sizes showed that policy availability was more strongly related to job satisfaction, affective commitment, and intentions to stay than was policy use, and number of policies and sample characteristics moderated the effects of policy availability and use on outcomes.
Abstract: This meta-analysis examines relationships between work-family support policies, which are policies that provide support for dependent care responsibilities, and employee outcomes by developing a conceptual model detailing the psychological mechanisms through which policy availability and use relate to work attitudes. Bivariate results indicated that availability and use of work-family support policies had modest positive relationships with job satisfaction, affective commitment, and intentions to stay. Further, tests of differences in effect sizes showed that policy availability was more strongly related to job satisfaction, affective commitment, and intentions to stay than was policy use. Subsequent meta-analytic structural equation modeling results indicated that policy availability and use had modest effects on work attitudes, which were partially mediated by family-supportive organization perceptions and work-to-family conflict, respectively. Additionally, number of policies and sample characteristics (percent women, percent married-cohabiting, percent with dependents) moderated the effects of policy availability and use on outcomes. Implications of these findings and directions for future research on work-family support policies are discussed.

Journal ArticleDOI
TL;DR: A review of available data on the iron-containing nanomaterials can be found in this article, where main attention is paid to the following themes: synthetic methods, structures, composition and properties of the nano zerovalent iron (NZVI), and polymorphic forms of iron oxides and FeOOH.
Abstract: Available data on the iron-containing nanomaterials are reviewed. Main attention is paid to the following themes: synthetic methods, structures, composition and properties of the nano zerovalent iron (NZVI), and polymorphic forms of iron oxides and FeOOH. Synthetic methods summarized here include a series of physico-chemical methods such as microwave heating, electrodeposition, laser ablation, radiolytical techniques, arc discharge, metal-membrane incorporation, pyrolysis, combustion, reverse micelle and co-deposition routes. We have also included a few “greener” methods. Coated, doped, supported with polymers or inert inorganic materials, core–shell nanostructures, in particular those of iron and its oxides with gold, are discussed. Studies of remediation involving iron-containing nanomaterials are discussed and special attention is paid to the processes of remediation of organic contaminants (chlorine-containing pollutants, benzoic and formic acids, dyes) and inorganic cations (Zn(II), Cu(II), Cd(II) and Pb(II)) and anions (nitrates, bromates, arsenates). Water disinfection (against viruses and bacteria), toxicity and risks of iron nanomaterials application are also examined.

Journal ArticleDOI
Georges Aad1, Georges Aad2, Brad Abbott3, Brad Abbott2  +5559 moreInstitutions (188)
TL;DR: In this paper, the performance of the missing transverse momentum reconstruction was evaluated using data collected in pp collisions at a centre-of-mass energy of 7 TeV in 2010.
Abstract: The measurement of missing transverse momentum in the ATLAS detector, described in this paper, makes use of the full event reconstruction and a calibration based on reconstructed physics objects. The performance of the missing transverse momentum reconstruction is evaluated using data collected in pp collisions at a centre-of-mass energy of 7 TeV in 2010. Minimum bias events and events with jets of hadrons are used from data samples corresponding to an integrated luminosity of about 0.3 nb(-1) and 600 nb(-1) respectively, together with events containing a Z boson decaying to two leptons (electrons or muons) or a W boson decaying to a lepton (electron or muon) and a neutrino, from a data sample corresponding to an integrated luminosity of about 36 pb(-1). An estimate of the systematic uncertainty on the missing transverse momentum scale is presented.
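For readers outside the field, the quantity being measured reduces to a simple definition: the missing transverse momentum is the magnitude of the negative vector sum of the transverse momenta of reconstructed objects. A short Python sketch of that definition (the calibration and soft-term handling described in the paper are omitted):

# Sketch: missing transverse momentum as the magnitude of the negative vector
# sum of object transverse momenta (calibration and soft terms omitted).
import numpy as np

def missing_et(pts: np.ndarray, phis: np.ndarray) -> float:
    """pts, phis: transverse momenta and azimuthal angles of reconstructed objects."""
    px = -np.sum(pts * np.cos(phis))
    py = -np.sum(pts * np.sin(phis))
    return float(np.hypot(px, py))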

Journal ArticleDOI
TL;DR: Some of the research issues, challenges and opportunities in the convergence between the cyber and physical worlds are presented, with a goal to stimulate new research activities in the emerging areas of CPW convergence.

Journal ArticleDOI
T. Aaltonen1, V. M. Abazov2, Brad Abbott3, Bobby Samir Acharya4  +868 moreInstitutions (117)
TL;DR: An excess of events in the data is interpreted as evidence for the presence of a new particle consistent with the standard model Higgs boson, which is produced in association with a weak vector boson and decays to a bottom-antibottom quark pair.
Abstract: We combine searches by the CDF and D0 Collaborations for the associated production of a Higgs boson with a W or Z boson and subsequent decay of the Higgs boson to a bottom-antibottom quark pair. The data, originating from Fermilab Tevatron p (p) over bar collisions at root s = 1.96 TeV, correspond to integrated luminosities of up to 9.7 fb(-1). The searches are conducted for a Higgs boson with mass in the range 100-150 GeV/c(2). We observe an excess of events in the data compared with the background predictions, which is most significant in the mass range between 120 and 135 GeV/c(2). The largest local significance is 3.3 standard deviations, corresponding to a global significance of 3.1 standard deviations. We interpret this as evidence for the presence of a new particle consistent with the standard model Higgs boson, which is produced in association with a weak vector boson and decays to a bottom-antibottom quark pair.

Proceedings Article
22 Jul 2012
TL;DR: A novel Schatten p-Norm optimization framework that unifies different norm formulations is proposed; an efficient algorithm is derived to solve the new objective, followed by a rigorous theoretical proof of convergence.
Abstract: As an emerging machine learning and information retrieval technique, matrix completion has been successfully applied to many scientific applications, such as collaborative prediction in information retrieval and video completion in computer vision. Matrix completion is to recover a low-rank matrix with a fraction of its entries arbitrarily corrupted. Instead of solving the popularly used trace norm or nuclear norm based objective, we directly minimize the original formulations of trace norm and rank norm. We propose a novel Schatten p-Norm optimization framework that unifies different norm formulations. An efficient algorithm is derived to solve the new objective, followed by a rigorous theoretical proof of convergence. The previous main solution strategy for this problem requires computing singular value decompositions, a task whose cost grows rapidly as matrix size and rank increase. Our algorithm has a closed form solution in each iteration, hence it converges fast. As a consequence, our algorithm has the capacity of solving large-scale matrix completion problems. Empirical studies on recommendation system data sets demonstrate the promising performance of our new optimization framework and efficient algorithm.
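To make the problem setup concrete, here is a minimal Python sketch of iterative singular-value shrinkage for matrix completion; it illustrates the standard nuclear-norm-style approach (with the per-iteration SVD cost mentioned above) rather than the Schatten p-norm algorithm proposed in the paper, and the threshold and iteration count are illustrative:

# Sketch: iterative singular-value shrinkage for matrix completion (nuclear-norm
# style baseline; per-iteration SVD is the cost the paper's method aims to beat).
import numpy as np

def complete_matrix(M: np.ndarray, mask: np.ndarray, tau: float = 1.0, n_iters: int = 200):
    """M: observed matrix with zeros at unknown entries; mask: boolean array of observed entries."""
    X = M.copy()
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X[mask] = M[mask]                                # keep observed entries fixed
    return X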

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah, A. A. Abdelalim3  +3002 moreInstitutions (178)
TL;DR: In this article, the authors describe the measurement of elliptic flow of charged particles in lead-lead collisions at root s(NN) = 2.76 TeV using the ATLAS detector at the Large Hadron Collider (LHC).