Author

V. Dodonov

Other affiliations: RWTH Aachen University
Bio: V. Dodonov is an academic researcher from the Max Planck Society. The author has contributed to research on the topics of HERA and deep inelastic scattering. The author has an h-index of 46, has co-authored 138 publications, and has received 8,687 citations. Previous affiliations of V. Dodonov include RWTH Aachen University.
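
For context, the h-index quoted in the bio is the largest h such that the author has at least h papers with at least h citations each. A minimal illustrative computation (not SciSpace's implementation; the example citation counts are made up):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for five papers; the h-index here is 4.
print(h_index([120, 45, 17, 9, 3]))  # -> 4
```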


Papers
ReportDOI
18 Jun 1999

1,107 citations

Journal ArticleDOI
F. D. Aaron1, Halina Abramowicz2, I. Abt3, Leszek Adamczyk4  +538 moreInstitutions (69)
TL;DR: In this article, a combination of the inclusive deep inelastic cross sections measured by the H1 and ZEUS Collaborations in neutral and charged current unpolarised e±p scattering at HERA during the period 1994-2000 is presented.
Abstract: A combination is presented of the inclusive deep inelastic cross sections measured by the H1 and ZEUS Collaborations in neutral and charged current unpolarised e±p scattering at HERA during the period 1994-2000. The data span six orders of magnitude in negative four-momentum-transfer squared, Q^2, and in Bjorken x. The combination method used takes the correlations of systematic uncertainties into account, resulting in an improved accuracy. The combined data are the sole input in an NLO QCD analysis which determines a new set of parton distributions, HERAPDF1.0, with small experimental uncertainties. This set includes an estimate of the model and parametrisation uncertainties of the fit result.
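
The combination method referred to above treats the correlated systematic uncertainties as nuisance parameters. A schematic form of the chi-square minimised in such averaging procedures (the notation here is ours and omits refinements of the actual H1/ZEUS procedure, such as the multiplicative treatment of some error sources) is

$$\chi^2(\mu, b) = \sum_i \frac{\big[\,m_i - \mu_i - \sum_j \Gamma_{ij}\, b_j\,\big]^2}{\Delta_i^2} + \sum_j b_j^2 ,$$

where $m_i$ are the individual measurements, $\mu_i$ the combined values, $\Delta_i$ the uncorrelated uncertainties, $\Gamma_{ij}$ the sensitivity of measurement $i$ to correlated systematic source $j$, and $b_j$ the nuisance parameters expressed in units of standard deviations.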

624 citations

Journal ArticleDOI
C. Adloff, H. Henschel1, Wolfram Erdmann, P. Dixon1  +338 moreInstitutions (1)
TL;DR: In this article, a precise measurement of the inclusive deep-inelastic e^+p scattering cross section is reported in the kinematic range 1.5 ≤ Q^2 ≤ 150 GeV^2 and 3×10^{-5} ≤ x ≤ 0.2.
Abstract: A precise measurement of the inclusive deep-inelastic e^+p scattering cross section is reported in the kinematic range 1.5 ≤ Q^2 ≤ 150 GeV^2 and 3×10^{-5} ≤ x ≤ 0.2. The data were recorded with the H1 detector at HERA in 1996 and 1997, and correspond to an integrated luminosity of 20 pb^{-1}. The double differential cross section, from which the proton structure function F_2(x,Q^2) and the longitudinal structure function F_L(x,Q^2) are extracted, is measured with typically 1% statistical and 3% systematic uncertainties. The measured partial derivative (dF_2(x,Q^2)/d ln Q^2)_x is observed to rise continuously towards small x for fixed Q^2. The cross section data are combined with published H1 measurements at high Q^2 for a next-to-leading order DGLAP QCD analysis. The H1 data determine the gluon momentum distribution in the range 3×10^{-4} ≤ x ≤ 0.1 to within an experimental accuracy of about 3% for Q^2 = 20 GeV^2. A fit of the H1 measurements and the μp data of the BCDMS collaboration allows the strong coupling constant alpha_s and the gluon distribution to be determined simultaneously. A value of alpha_s(M_Z^2) = 0.1150 ± 0.0017 (exp) +0.0009/-0.0005 (model) is obtained in NLO, with an additional theoretical uncertainty of about ±0.005, mainly due to the uncertainty of the renormalisation scale.
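
For reference, in standard neutral current DIS notation (neglecting Z exchange, a good approximation at these Q^2 values), the double differential cross section is related to the two structure functions mentioned above by

$$\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2\, Y_+}{x\,Q^4}\left[F_2(x,Q^2) - \frac{y^2}{Y_+}\,F_L(x,Q^2)\right], \qquad Y_+ = 1 + (1-y)^2 ,$$

with $y$ the inelasticity; this standard relation underlies the extraction of $F_2$ and $F_L$ quoted in the abstract.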

448 citations

Journal ArticleDOI
A. Aktas, Calin Alexa, V. P. Andreev, T. Anthonis1  +283 moreInstitutions (35)
TL;DR: In this article, a new set of diffractive parton distribution functions is obtained through a simultaneous fit to the diffractive inclusive and dijet cross sections, which allows for a precise determination of both the diffractive quark and gluon distributions in the range 0.05 < z_IP < 0.9.
Abstract: Differential dijet cross sections in diffractive deep-inelastic scattering are measured with the H1 detector at HERA using an integrated luminosity of 51.5 pb^{-1}. The selected events are of the type ep → eXY, where the system X contains at least two jets and is well separated in rapidity from the low-mass proton dissociation system Y. The dijet data are compared with QCD predictions at next-to-leading order based on diffractive parton distribution functions previously extracted from measurements of inclusive diffractive deep-inelastic scattering. The prediction describes the dijet data well at low and intermediate z_IP (the fraction of the momentum of the diffractive exchange carried by the parton entering the hard interaction), where the gluon density is well determined from the inclusive diffractive data, supporting QCD factorisation. A new set of diffractive parton distribution functions is obtained through a simultaneous fit to the diffractive inclusive and dijet cross sections. This allows for a precise determination of both the diffractive quark and gluon distributions in the range 0.05 < z_IP < 0.9. In particular, the precision on the gluon density at high momentum fractions is improved compared to previous extractions.
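
The QCD factorisation being tested can be written schematically (our notation, not quoted from the paper) as

$$d\sigma^{\,ep \to eXY} = \sum_i f_i^{D}(z_{IP}, Q^2, x_{IP}, t) \otimes d\hat{\sigma}^{\,ei}(z_{IP}, Q^2),$$

where the $f_i^{D}$ are the diffractive parton distribution functions and $d\hat{\sigma}$ are the partonic hard-scattering cross sections, calculable to next-to-leading order.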

312 citations

Journal ArticleDOI
A. Aktas, V. Andreev1, T. Anthonis2, Biljana Antunović3  +293 moreInstitutions (33)
TL;DR: In this article, a detailed analysis of the diffractive deep-inelastic scattering process is presented, where the cross section is measured for photon virtualities in the range $3.5 \leq Q^2 \leq 1600 \ {\rm GeV^2}$, triple differentially in $x_{IP}$, $Q^2$ and $\beta = x / x_{IP}$, where $x$ is the Bjorken scaling variable.
Abstract: A detailed analysis is presented of the diffractive deep-inelastic scattering process $ep \to eXY$, where $Y$ is a proton or a low mass proton excitation carrying a fraction $1 - x_{IP} > 0.95$ of the incident proton longitudinal momentum and the squared four-momentum transfer at the proton vertex satisfies $|t| < 1 \ {\rm GeV^2}$. Using data taken by the H1 experiment, the cross section is measured for photon virtualities in the range $3.5 \leq Q^2 \leq 1600 \ {\rm GeV^2}$, triple differentially in $x_{IP}$, $Q^2$ and $\beta = x / x_{IP}$, where $x$ is the Bjorken scaling variable. At low $x_{IP}$, the data are consistent with a factorisable $x_{IP}$ dependence, which can be described by the exchange of an effective pomeron trajectory with intercept $\alpha_{IP}(0) = 1.118 \pm 0.008 \ {\rm (exp.)} \ ^{+0.029}_{-0.010} \ {\rm (model)}$. Diffractive parton distribution functions and their uncertainties are determined from a next-to-leading order DGLAP QCD analysis of the $Q^2$ and $\beta$ dependences of the cross section. The resulting gluon distribution carries an integrated fraction of around 70% of the exchanged momentum in the $Q^2$ range studied. Total and differential cross sections are also measured for the diffractive charged current process $e^+ p \to \bar{\nu}_e XY$ and are found to be well described by predictions based on the diffractive parton distributions. The ratio of the diffractive to the inclusive neutral current $ep$ cross sections is studied. Over most of the kinematic range, this ratio shows no significant dependence on $Q^2$ at fixed $x_{IP}$ and $x$, or on $x$ at fixed $Q^2$ and $\beta$.
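
The factorisable $x_{IP}$ dependence referred to above is commonly implemented as proton-vertex (Regge) factorisation; schematically, in the standard parameterisation,

$$f_i^{D}(x_{IP}, t, \beta, Q^2) = f_{IP/p}(x_{IP}, t)\, f_i^{IP}(\beta, Q^2), \qquad f_{IP/p}(x_{IP}, t) \propto \frac{e^{B_{IP} t}}{x_{IP}^{\,2\alpha_{IP}(t) - 1}},$$

with a linear trajectory $\alpha_{IP}(t) = \alpha_{IP}(0) + \alpha_{IP}'\, t$; the quoted intercept $\alpha_{IP}(0)$ is the parameter of such a flux factor.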

260 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
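
As a concrete toy version of the mail-filtering example above, here is a hedged sketch using scikit-learn; the library choice, the tiny hand-made dataset, and the labels are ours and purely illustrative:

```python
# Toy illustration of learning a user's mail-filtering rules from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",          # this user rejected these
    "limited offer, click here",     # rejected
    "meeting moved to 3pm",          # kept
    "draft of the paper attached",   # kept
]
labels = ["reject", "reject", "keep", "keep"]

# Bag-of-words features plus naive Bayes: a minimal "learn the filtering rules" setup.
mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, labels)

print(mail_filter.predict(["free prize offer"]))            # likely ['reject']
print(mail_filter.predict(["paper draft for the meeting"])) # likely ['keep']
```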

13,246 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) Calciturbidites, comprising mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones which are characterised by planar laminated and unlaminated mud-dominated facies; and 3) Calcidebrites which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4  +2964 moreInstitutions (200)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10^{-9}.
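
The quoted background fluctuation probability is simply the one-sided Gaussian tail corresponding to the significance; a quick check (scipy is used here only for illustration, and the small difference from the quoted 1.7×10^{-9} comes from rounding the significance to 5.9):

```python
# One-sided p-value corresponding to a Gaussian significance Z.
from scipy.stats import norm

Z = 5.9
p = norm.sf(Z)     # survival function: P(X > Z) for a standard normal variable
print(f"{p:.1e}")  # ~1.8e-09, consistent with the quoted 1.7e-09 given rounding of Z
```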

9,282 citations

Journal ArticleDOI
TL;DR: MadGraph5_aMC@NLO, as discussed by the authors, is a computer program capable of handling all these computations (parton-level fixed order, shower-matched, merged) in a unified framework whose defining features are flexibility, a high level of parallelisation, and human intervention limited to input physics quantities.
Abstract: We discuss the theoretical bases that underpin the automation of the computations of tree-level and next-to-leading order cross sections, of their matching to parton shower simulations, and of the merging of matched samples that differ by light-parton multiplicities. We present a computer program, MadGraph5_aMC@NLO, capable of handling all these computations (parton-level fixed order, shower-matched, merged) in a unified framework whose defining features are flexibility, a high level of parallelisation, and human intervention limited to input physics quantities. We demonstrate the potential of the program by presenting selected phenomenological applications relevant to the LHC and to a 1 TeV e^+e^- collider. While next-to-leading order results are restricted to QCD corrections to SM processes in the first public version, we show that from the user viewpoint no changes have to be expected in the case of corrections due to any given renormalisable Lagrangian, and that the implementation of these is well under way.
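
As a rough sketch of the "human intervention limited to input physics quantities" workflow, one can script the program from Python; this assumes a local MadGraph5_aMC@NLO installation, and the installation path, process choice, and output directory name below are ours, purely as an example:

```python
# Illustrative driver for MadGraph5_aMC@NLO: write a command file and run it in batch mode.
import subprocess
from pathlib import Path

MG5_PATH = Path("MG5_aMC_v3")  # hypothetical installation directory

commands = """\
generate p p > t t~ [QCD]
output pp_ttbar_nlo
launch
"""

card = Path("ttbar_nlo.mg5")
card.write_text(commands)

# The mg5_aMC executable accepts a command file as its argument.
subprocess.run([str(MG5_PATH / "bin" / "mg5_aMC"), str(card)], check=True)
```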

6,509 citations

Journal ArticleDOI
TL;DR: In this paper, a new generation of parton distribution functions with increased precision and quantitative estimates of uncertainties is presented, using a recently developed eigenvector-basis approach to the hessian method, which provides the means to quickly estimate the uncertainties of a wide range of physical processes at these high-energy hadron colliders, based on current knowledge of the parton distributions.
Abstract: A new generation of parton distribution functions with increased precision and quantitative estimates of uncertainties is presented. This work significantly extends previous CTEQ and other global analyses on two fronts: (i) a full treatment of available experimental correlated systematic errors for both new and old data sets; (ii) a systematic and pragmatic treatment of uncertainties of the parton distributions and their physical predictions, using a recently developed eigenvector-basis approach to the Hessian method. The new gluon distribution is considerably harder than that of previous standard fits. A number of physics issues, particularly relating to the behavior of the gluon distribution, are addressed in more quantitative terms than before. Extensive results on the uncertainties of parton distributions at various scales, and on parton luminosity functions at the Tevatron Run II and the LHC, are presented. The latter provide the means to quickly estimate the uncertainties of a wide range of physical processes at these high-energy hadron colliders, based on current knowledge of the parton distributions. In particular, the uncertainties on the production cross sections of the W, Z at the Tevatron and the LHC are estimated to be ±4% and ±5%, respectively, and that of a light Higgs at the LHC to be ±5%.
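
With eigenvector-basis Hessian error sets of this kind, the uncertainty on an observable $X$ is typically estimated from the $2N$ displaced sets as (symmetric master formula; asymmetric variants are also in use)

$$\Delta X = \frac{1}{2}\sqrt{\sum_{i=1}^{N} \big[\,X(S_i^{+}) - X(S_i^{-})\,\big]^2},$$

where $S_i^{\pm}$ are the PDF sets displaced along the $i$-th eigenvector direction of the Hessian matrix; the quoted W, Z and Higgs cross-section uncertainties are estimates of this type.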

4,427 citations