Journal ArticleDOI

The ALICE experiment at the CERN LHC

K. Aamodt1, A. Abrahantes Quintana, R. Achenbach2, S. Acounis3  +1151 moreInstitutions (76)
14 Aug 2008-Journal of Instrumentation (IOP Publishing)-Vol. 3, Iss: 08, pp 1-245
TL;DR: ALICE (A Large Ion Collider Experiment), as discussed by the authors, is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model.
Abstract: ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration including currently over 1000 physicists and engineers from 105 Institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems each with its own specific technology choice and design constraints, driven both by the physics requirements and the experimental conditions expected at LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008 when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.

Summary (15 min read)


5.1.1 Introduction

  • The number of participant nucleons is the observable most directly related to the geometry of A-A collisions.
  • In ALICE, spectator nucleons are detected by means of Zero-Degree Calorimeters (ZDC).
  • The centrality information provided by the ZDC is also used for triggering at Level 1 (L1).
  • Finally, since the ZDC is also a position-sensitive detector, it can give an estimate of the reaction plane in nuclear collisions.

5.1.2 Detector layout

  • In addition, two small electromagnetic calorimeters (ZEM) are placed at about 7 m from the IP, on both sides of the LHC beam pipe, opposite to the muon arm.
  • Spectator protons are spatially separated from neutrons by the magnetic elements of the LHC beam line.
  • Therefore, each ZDC set is made of two distinct detectors: one for spectator neutrons (ZN), placed between the beam pipes at 0° relative to the LHC axis, and one for spectator protons (ZP), placed externally to the outgoing beam pipe on the side where positive particles are deflected.
  • The ZN and ZP are installed on lifting platforms, in order to lower them out of the horizontal beam plane (where the radiation levels are highest) when not in use.

5.1.3 Signal transmission and readout

  • The analogue signals from the calorimeter's PMTs are transmitted through 215 m long low-loss coaxial cables to the closest counting room.
  • They are then fanned out, sending an output to the trigger logic and another one to the readout chain.
  • For the trigger, signals from the ZDCs and the ZEMs are summed separately and then discriminated.
  • An appropriate combination of these signals will provide three ALICE L1 triggers, defining different centrality intervals (central events, semi-central events, and minimum bias events).
  • For the readout, each analogue signal from the photomultipliers will be sent to commercial ADC modules hosted in a VME crate.

5.1.4 Monitoring and calibration

  • Two monitoring procedures are implemented for the ZDCs.
  • The first is based on the detection of cosmic rays crossing the absorber material, by means of scintillators placed above and below the calorimeters.
  • These events induce a very small amount of Cherenkov light, that may give a single photoelectron signal in the PMTs.
  • The peak of the resulting single-photoelectron amplitude distribution from the calorimeters' PMTs is used to monitor their gain.
  • With these two procedures it is possible to monitor both the PMT gain and the light transmission efficiency in the fibres, which is expected to deteriorate with the radiation dose.

5.2.1 Design considerations

  • These measurements also provide estimates of the transverse electromagnetic energy and of the reaction plane on an event-by-event basis.
  • The measurement of photon multiplicity gives important information in terms of limiting fragmentation, order of phase transition, the equation of state of matter and the formation of disoriented chiral condensates.
  • The PMD uses the preshower method where a three radiation length thick converter (1.5 cm thick lead with a 0.5 cm stainless steel backing) is sandwiched between two planes of highly granular gas proportional counters.
  • The cell cross-section and depth are 0.22 cm² and 0.5 cm respectively, optimised on the basis of detailed simulation and test-beam results.
  • The response of the detector to charged particles was studied using a 5 GeV/c pion beam.

5.2.2 Detector layout

  • The PMD chambers are fabricated in the form of modules consisting of 4608 honeycomb cells.
  • Each module is a gas tight enclosure and care is taken for high voltage isolation of each of the modules.
  • Each half has independent cooling, gas supply and electronics accessories.
  • The PMD is supported from a stainless-steel girder, which forms part of the baby space frame in the forward region, with provision made for movement in both the x and z directions.
  • The two halves can be moved on the girder to bring them together for data taking operation or separated for servicing.

5.2.3 Front-End Electronics and readout

  • The schematic of the Front-End Electronics for the PMD, shown in figure 5.9, is similar to the setup for the tracking chambers of the ALICE muon spectrometer [12].
  • The signals are processed using the MANAS chip which handles 16 channels providing multiplexed analogue outputs.
  • Each FEE board consists of four MANAS chips, two 12-bit ADCs, and a custom-built ASIC called the MARC chip, which controls all 64 channels.
  • The MARC chip controls the four MANAS chips and the two ADCs and performs zero suppression of the data (a minimal sketch of this step follows the list).
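
The zero suppression carried out by the MARC chip can be pictured as a pedestal-subtraction-and-threshold operation over the 64 channels of one board. The following is a minimal sketch of that idea, not the actual MARC firmware; the pedestal, noise, and 3-sigma cut values are placeholders.

    // Minimal sketch (not the actual MARC firmware): pedestal-subtract and
    // zero-suppress the 64 channels handled by one PMD FEE board.
    #include <array>
    #include <cstdint>
    #include <vector>

    struct Hit { uint8_t channel; uint16_t adc; };

    // Keep only channels whose pedestal-subtracted ADC value exceeds a
    // threshold expressed in units of the pedestal noise sigma.
    std::vector<Hit> zeroSuppress(const std::array<uint16_t, 64>& raw,
                                  const std::array<float, 64>& pedestal,
                                  const std::array<float, 64>& noise,
                                  float nSigma = 3.0f)   // hypothetical cut
    {
      std::vector<Hit> hits;
      for (uint8_t ch = 0; ch < 64; ++ch) {
        float signal = static_cast<float>(raw[ch]) - pedestal[ch];
        if (signal > nSigma * noise[ch])
          hits.push_back({ch, static_cast<uint16_t>(signal + 0.5f)});
      }
      return hits;   // only non-empty channels are shipped to the readout
    }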

5.3.1 Design considerations

  • The main functionality of the Forward Multiplicity Detector (FMD) is to provide charged-particle multiplicity information in the pseudo-rapidity ranges −3.4 < η < −1.7 and 1.7 < η < 5.0. Figure 5.11 shows the combined pseudo-rapidity coverage of the FMD and the ITS pixel detector seen from the nominal vertex position.
  • The overlap between the FMD silicon rings and the ITS inner pixel layer provides redundancy and cross-checks of measurements between subdetectors and ensures continuous coverage for a distribution of vertices along the z-axis.
  • Additionally, high radial detector segmentation allows for the study of multiplicity fluctuations on an event-by-event basis while azimuthal segmentation allows for the determination of the reaction plane for each event and the analysis of flow within the FMD's pseudo-rapidity coverage.
  • The segmentation of the FMD was chosen such that, on average, one charged particle would occupy each strip in central events (see the occupancy illustration after this list).
  • Simulations of central Pb-Pb collisions with dN_ch/dη ≈ 8000 in the mid-rapidity region were used to study the FMD design parameters [18].
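
Given the design target above of, on average, one charged particle per strip in central events, simple Poisson statistics indicate how often two or more particles overlap in a single strip. The short illustration below is a back-of-the-envelope exercise, not code from the FMD simulations.

    // Illustration only: Poisson estimate of FMD strip occupancy for a given
    // mean number of charged particles per strip (mu = 1 is the design target).
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
      for (double mu : {0.5, 1.0, 2.0}) {
        double pEmpty  = std::exp(-mu);          // P(0 particles)
        double pSingle = mu * std::exp(-mu);     // P(exactly 1 particle)
        double pMulti  = 1.0 - pEmpty - pSingle; // P(>= 2 particles overlap)
        std::printf("mu = %.1f: empty %.2f, single %.2f, multiple %.2f\n",
                    mu, pEmpty, pSingle, pMulti);
      }
      return 0;
    }

For a mean of one particle per strip this gives roughly 26% of strips traversed by two or more particles.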

5.3.2 Detector layout

  • Figure 5.12 shows the location of each FMD ring in ALICE as well as the basic layout of the silicon sensors within an FMD ring.
  • FMD2 and FMD3 each consist of both an inner and an outer ring of silicon sensors and are located on either side of the ITS detector.
  • Another ring, FMD1, was placed further from the interaction point opposite to the muon spectrometer to extend the charged particle multiplicity coverage.
  • The radial span (distance from inner radius to outer radius) is limited by the 15 cm diameter wafers from which the silicon sensors are made.
  • Outer sensors also consist of two azimuthal sectors each with 256 silicon strips with radii ranging from 15.4 cm to 28.4 cm.

5.3.3 Front-end electronics and readout

  • The signals from each silicon strip in the FMD must be collected and transferred for processing.
  • The digitizer card provides the low voltage to power the silicon modules as well as controlling readout.
  • It allows analogue-to-digital conversion to be done on the detector, thereby avoiding long cables for analogue signals.
  • Additionally, the use of an FPGA allows changes to be made to the readout and monitoring algorithm without having access to the physical hardware.
  • The board controller then manages the serial readout of the 128 signals from each VA chip to the ALTRO, where they are digitized and stored in one of four or eight buffers.

5.4.1 Design considerations

  • The V0 detector [18] is a small angle detector consisting of two arrays of scintillator counters, called V0A and V0C, which are installed on either side of the ALICE interaction point.
  • These triggers are given by particles originating from initial collisions and from secondary interactions in the vacuum chamber elements.
  • As the dependence of the number of particles registered by the V0 arrays on the number of primary emitted particles is monotonic, the V0 serves as an indicator of the centrality of the collision via the multiplicity recorded in the event.
  • There are three such triggers, the multiplicity, semi-central and central triggers.

5.4.2 Detector layout

  • The V0A detector is located 340 cm from the vertex on the side opposite to the muon spectrometer whereas V0C is fixed to the front face of the hadronic absorber, 90 cm from the vertex.
  • The material consists of BC404 scintillating material (2.5 and 2.0 cm in thickness for V0A and V0C respectively) with 1 mm diameter BCF9929A Wave-Length Shifting (WLS) fibres.
  • The fibres, spaced by 1 cm, are embedded in the two transverse faces of the segments following the 'megatile' technique [188] for the V0A array.
  • There are 48 elementary counters of this type distributed following two inner rings of 8 counters and two outer rings of 16 counters.
  • Therefore, spare inner V0C elements are foreseen if large irradiation effects require their replacement.

5.4.3 Front-end electronics

  • Two signals are delivered to the Front-End Electronics (FEE).
  • Signal charge and arrival time relative to the LHC bunch clock are measured for the 32 channels of both arrays.
  • Two types of triggers are provided from each array [18] .
  • The first one is based on pre-adjusted time windows in coincidence with the time signals from the counters.
  • The Channel Interface Unit (CIU) board performs the PMT's anode signal integration (dual charge integrator), the digitization of the time, the pre-processing for the generation of the various triggers, and the data storage during a L0 and a L2 trigger.

5.5.1 Design considerations

  • This timing signal corresponds to the real time of the collision (plus a fixed time delay) and is independent of the position of the vertex (see the mean-timer sketch after this list).
  • In addition, T0 provides redundancy to the V0 counters and can generate minimum bias (one or both arrays hit) and multiplicity triggers (semi-central and central).
  • Since the T0 detector generates the earliest (L0) trigger signals, they must be generated online without the possibility of any offline corrections.
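
The vertex independence of the timing signal noted in the first bullet follows from the mean-timer relations: the sum of the arrival times in the two arrays depends only on the collision time, while their difference measures the vertex position. The sketch below illustrates these relations; it is not ALICE code, and the array distances used in the example (375 cm and 70 cm) are placeholders, the first taken from the layout described in the next subsection.

    // Sketch of the mean-timer relations behind the T0 signal (illustration,
    // not ALICE code).  The distances zA and zC of the two arrays from the
    // nominal interaction point are placeholders.
    #include <cstdio>

    constexpr double kC = 29.9792458; // speed of light in cm/ns

    struct T0Result { double collisionTime; double vertexZ; };

    T0Result meanTimer(double tA, double tC, double zA, double zC) {
      T0Result r;
      // Sum of arrival times: independent of the vertex position.
      r.collisionTime = 0.5 * (tA + tC) - 0.5 * (zA + zC) / kC;
      // Difference of arrival times: measures the vertex position along z.
      r.vertexZ = 0.5 * (zA - zC) - 0.5 * kC * (tA - tC);
      return r;
    }

    int main() {
      // Hypothetical numbers: arrays at 375 cm and 70 cm, a vertex shifted
      // by +5 cm and a collision at t = 0.
      double zA = 375.0, zC = 70.0, zVtx = 5.0;
      double tA = (zA - zVtx) / kC, tC = (zC + zVtx) / kC;
      T0Result r = meanTimer(tA, tC, zA, zC);
      std::printf("collision time %.3f ns, vertex z %.1f cm\n",
                  r.collisionTime, r.vertexZ);
      return 0;
    }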

5.5.2 Detector layout

  • The detector consists of two arrays of Cherenkov counters, 12 counters per array.
  • On the opposite side of the Interaction Point (IP), the array is located at a distance of about 375 cm, comfortably far from the congested central region.
  • The triggering efficiency of the detector for minimum bias pp collisions, estimated by Monte Carlo simulations, is about 40% for all inelastic processes at 14 TeV.

5.5.3 Fast electronics and readout

  • The T0 electronics consists of the front-end electronics, located close to the arrays inside the so-called shoeboxes, and the main T0 electronics, placed inside the T0 electronics racks outside the L3 magnet, close to the racks assigned to the ALICE trigger electronics.
  • The T0-C and T0-A arrays generate timing signals, which feed directly to the TRD, to be used as a pre-trigger 'wake-up' signal.
  • In order to achieve the very high online time resolution of about 50 ps, the T0 detector is equipped with sophisticated fast timing electronics [194] .
  • All modules are quite similar to those used for the TOF detector.
  • The DRM card receives the trigger information from the CTP via the TTCrx chip and performs a slow-control function with a dedicated CPU.

6.1.1 Design considerations

  • The ALICE Central Trigger Processor (CTP) [17, [196] [197] [198] is designed to select events having a variety of different features at rates which can be scaled down to suit physics requirements and the restrictions imposed by the bandwidth of the Data Acquisition (DAQ) system, see section 6.2, and the High-Level Trigger (HLT), see section 6.3.
  • The challenge for the ALICE trigger is to make optimum use of the component detectors, which are busy for widely different periods following a valid trigger, and to perform trigger selections in a way which is optimised for several different running modes: ion (Pb-Pb and several lighter species), pA, and pp, varying by almost two orders of magnitude in counting rate.
  • In some cases this has led to the use of non-pipelined 'track and hold' electronics (e.g. for those detectors using the GASSIPLEX [124] front-end chip) and these can require a strobe at 1.2 µs.
  • To achieve this, the 'fast' part of the trigger is split into two levels: a Level 0 (L0) signal, which reaches detectors at 1.2 µs, but which is too fast to receive all the trigger inputs, and a Level 1 (L1) signal arriving at 6.5 µs, which picks up all remaining fast inputs.
  • The CTP consists of seven different types of 6U VME boards housed in a single VME crate.

6.1.2 Trigger logic

  • The number of trigger inputs and classes required for ALICE (24 L0 inputs, 24 L1 inputs, 12 L2 inputs, 50 trigger classes; see below) means that the trigger logic requires some restrictions, since a simple enumeration of all outcomes (look-up-table approach) is not feasible.
  • In order to do this, four specific L0 inputs are selected for use in a look-up table to which any arbitrary logic can be applied (illustrated in the sketch after this list).
  • These features are described in more detail below.
  • The other TRD L0 inputs in fact reflect different requirements on the pre-trigger signals.
  • The exact allocation of trigger logic will be based on charged-track p_T cuts and electron identification in combination with multiplicity requirements (e.g. like-sign high-p_T dielectron pair, four or more high-p_T tracks, identified electron, etc.).
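
The look-up-table treatment of the four selected L0 inputs mentioned above can be illustrated very compactly: any Boolean function of four inputs is fully specified by a 16-bit mask, one bit per input combination. The sketch below shows the idea; it is an illustration, not the CTP implementation.

    // Illustration of the look-up-table idea for the four selected L0 inputs:
    // any Boolean function of 4 inputs is described by a 16-bit mask, one bit
    // per input combination (not the actual CTP implementation).
    #include <bitset>
    #include <cstdint>

    bool lutDecision(uint16_t lut, std::bitset<4> inputs) {
      // Pack the 4 input bits into an index 0..15 and look the answer up.
      unsigned index = static_cast<unsigned>(inputs.to_ulong());
      return (lut >> index) & 0x1u;
    }

    // Example mask: fire when input 0 AND input 1 are present, regardless of
    // the other two inputs (a '1' at every index with bits 0 and 1 set).
    uint16_t makeAnd01Lut() {
      uint16_t lut = 0;
      for (unsigned index = 0; index < 16; ++index)
        if ((index & 0x3u) == 0x3u) lut |= static_cast<uint16_t>(1u << index);
      return lut;
    }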

6.1.3 Trigger inputs and classes

  • The trigger inputs and classes were introduced in the previous section.
  • Trigger inputs are sent as LVDS signals using twisted pair cables, and are put in time by delays in the CTP input circuits.
  • A slightly different approach is required for proton-proton interactions, where the increased luminosity makes pile-up a certainty, but where the multiplicities are much lower than in ion-ion interactions, and therefore some degree of pile-up is tolerable.
  • The boundaries between these different azimuthal sectors, which define 'Regions-of-Interest' (RoI), line up in the larger central detectors TPC, TRD and TOF, and equivalent boundaries could be imposed in software in the ITS.

6.1.4 Trigger data

  • The CTP must be able to process triggers for all trigger clusters concurrently, and therefore has a tight time budget for collecting and distributing data.
  • For this reason these data are kept to a minimum; for example, all information concerning how a trigger signal was generated must be.

Snapshots.

  • For completeness, the authors note that it will also be possible to run 'snapshots' in which all steps in the trigger (input and output patterns, busy status, etc.) are recorded on a bunch-crossing by bunch-crossing basis for a period of about 30 ms.
  • The purpose originally foreseen for this facility is for diagnostic tests of the CTP itself.
  • Snapshots can also be used for checks of correlations between CTP inputs, since an unbiased record of all input data received during the 30 ms snapshot period can be analysed.
  • Time correlations between a pulse on one input and pulses in other input channels in adjacent bunch crossings can be studied; trigger efficiencies for different inputs can also be studied.

6.1.5 Event rates and rare events

  • When several trigger classes are running concurrently, it becomes necessary to adjust the rates at which they are read out to reflect the physics requirements and the overall DAQ bandwidth.
  • These factors may dictate rates quite different from the natural interaction rates.
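
One standard way to implement such rate adjustment is per-class downscaling (downscaling is also mentioned in the rare-event strategy of section 6.2.4): a class with downscale factor N accepts only every N-th candidate trigger. The minimal sketch below shows this bookkeeping; it is not the CTP firmware and the factor values are placeholders.

    // Minimal sketch of per-class downscaling (illustration, not CTP firmware):
    // a trigger class with downscale factor N keeps only every N-th candidate.
    #include <cstdint>

    class DownscaledClass {
     public:
      explicit DownscaledClass(uint32_t factor) : factor_(factor), counter_(0) {}

      // Returns true if this candidate trigger should be accepted.
      bool accept() {
        bool keep = (counter_ % factor_ == 0);
        ++counter_;
        return keep;
      }

     private:
      uint32_t factor_;   // e.g. 1 for rare classes, large for frequent ones
      uint32_t counter_;  // counts candidate triggers seen so far
    };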

6.2.1 Design considerations

  • ALICE will study a variety of observables, using different beam conditions.
  • A large number of trigger classes will be used to select and characterise the events.
  • These triggers will use a very large fraction of the total data acquisition bandwidth.
  • The task of the ALICE Trigger, Data AcQuisition (DAQ) and High-Level Trigger (HLT) systems is therefore to select interesting physics events, to provide an efficient access to these events for the execution of high-level trigger algorithms and finally to archive the data to permanent data storage for later analysis.
  • While the current estimate of event sizes,.

System architecture

  • The detectors receive the trigger signals and the associated information from the Central Trigger Processor (CTP), through a dedicated Local Trigger Unit (LTU) interfaced to a Timing, Trigger and Control (TTC) system.
  • The data produced by the detectors (event fragments) are injected on the DDLs using the same standard protocol.
  • The D-RORCs are hosted by the front-end machines (commodity PCs), called Local Data Concentrators (LDCs).
  • The CTP receives a busy signal from each detector.
  • The role of the LDCs is to ship the sub-events to a farm of machines (also commodity PCs) called Global Data Collectors (GDCs), where the whole events are built (from all the sub-events pertaining to the same trigger).

Data transfer

  • The Detector Data Link (DDL) is the common hardware and protocol interface between the frontend electronics and the DAQ system.
  • The transmission medium is a pair of optical fibres linking sites several hundred metres apart.
  • It can be used to send commands and bulky information (typically pedestals or other calibration constants) to the detectors [204, 205] .
  • The D-RORC is the readout board of the DDL link.
  • The card is able to transfer the event fragments from the link directly into the PC memory at 200 MB/s, without on-board buffering; this bandwidth fulfils (and far exceeds) the original requirement of the ALICE data-acquisition system.

Event building

  • The sub-events prepared by the LDCs are transferred to one GDC where the full event can be assembled.
  • The event building is managed by the Event Building and Distribution System (EBDS).
  • The EBDS distributed protocol runs on all the machines, LDCs and GDCs, participating as data sources or destinations.
  • The underlying network protocol is the standard to which the networking industry has converged, and it runs on a wide range of hardware layers.
  • It has therefore become the baseline choice for the event-building network.
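
The event-building step described above can be sketched as grouping sub-events by their event identifier until every expected LDC has contributed. The code below illustrates that bookkeeping only; it is not the actual DATE/EBDS implementation and the structure names are hypothetical.

    // Minimal sketch of event building at a GDC (illustration, not the actual
    // DATE/EBDS implementation): sub-events from the LDCs are grouped by event
    // identifier; an event is complete when every expected LDC has contributed.
    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct SubEvent {
      uint32_t eventId;             // trigger/orbit-based event identifier
      uint32_t ldcId;               // source LDC
      std::vector<uint8_t> payload; // detector event fragment(s)
    };

    class EventBuilder {
     public:
      explicit EventBuilder(std::size_t nLdcs) : nLdcs_(nLdcs) {}

      // Add one sub-event; returns true and fills fullEvent when complete.
      bool add(const SubEvent& sub, std::vector<SubEvent>* fullEvent) {
        auto& parts = pending_[sub.eventId];
        parts.push_back(sub);
        if (parts.size() < nLdcs_) return false;
        *fullEvent = std::move(parts);
        pending_.erase(sub.eventId);
        return true;   // caller can now record or forward the built event
      }

     private:
      std::size_t nLdcs_;
      std::map<uint32_t, std::vector<SubEvent>> pending_;
    };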

AFFAIR: the DAQ performance monitoring software

  • The performance of a system as large as the ALICE Data Acquisition, including several processes distributed on many processors, needs to be continuously and closely monitored.
  • The fundamental requirement for a detailed, real-time assessment of the DAQ machines (LDCs and GDCs), for the usage of the systems resources, and for the DATE performance is addressed by the AFFAIR package (see p. 217 in [17] ).
  • AFFAIR gathers performance metrics from the LDCs and GDCs and performs the centralised handling of them.
  • Statistics and trends are stored in HTML format and can be viewed using any Web browser.

MOOD: the DAQ framework for the Monitoring Of Online Data

  • To monitor the quality of the data stream created by any of the ALICE detectors, the MOOD [211, 212] toolkit has been developed.
  • MOOD is a data visualisation and data quality monitoring tool.
  • It includes a generic part which implements the interface with DATE and a detector-specific part that can be tailored to detector-specific requirements and setups.
  • MOOD is fully integrated with the ROOT development toolkit, the AliRoot environment, and uses the ALICE common event data format.
  • MOOD can handle online and offline data streams, available on the LDCs and on the GDCs.

AMORE: the DAQ framework for the Automatic MOnitoRing Environment

  • Each detector defines a set of physics plots which have to be continuously filled and checked against reference ones.
  • The AMORE framework [213] includes three components: the client part which collects the data, the server part which accumulates the plots and archives them, and the display program which provides an interactive distributed access to the plots archives.
  • In addition, alarms are raised as soon as the collected plots no longer conform to the expected reference.
  • These alarms are displayed on the operator screens and initiate automatic recovery actions.

6.2.3 System flexibility and scalability

  • The requirements for the DAQ system evolve with data taking and with new ideas of high-level triggers.
  • Currently, the only hard limit present in the design of the DAQ system is the number of DDLs used to read out the TPC and the TRD.
  • The number of links has been fixed so that these two.

6.2.4 Event rates and rare events

  • The full data flow behaviour is modelled, using an interaction rate of 8 kHz at nominal luminosity and realistic event size distributions of different trigger classes.
  • This model is based on the public-domain tool Ptolemy [217] .
  • This rate control is based on a back-pressure from the DAQ that detects the amount of data filled in each computer, and informs the trigger whenever this level reaches a high fraction of the buffer size.
  • When this happens, the frequent events, as the main bandwidth contributors, and events of lower priority are then blocked by the trigger.
  • In summary, an effective and robust method to maximise the L2 rates of rare and interesting events can be achieved by controlling the rates of peripheral, semi-peripheral and central events with feedback from the LDC occupancy and with downscaling.
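
The back-pressure mechanism described above amounts to pausing the frequent, low-priority trigger classes whenever an LDC buffer approaches a high-water mark. The sketch below illustrates this decision; it is not the ALICE DAQ code, and the 80% threshold is a placeholder.

    // Sketch of the back-pressure idea (illustration only): when any LDC
    // buffer is nearly full, low-priority frequent trigger classes are paused
    // so that rare-event classes keep their bandwidth.
    #include <vector>

    struct LdcStatus { double bufferOccupancy; };   // fraction of buffer in use

    bool allowFrequentClasses(const std::vector<LdcStatus>& ldcs,
                              double highWaterMark = 0.8 /* placeholder */) {
      for (const auto& ldc : ldcs)
        if (ldc.bufferOccupancy > highWaterMark)
          return false;   // assert back-pressure: block frequent classes
      return true;        // plenty of buffer left: accept all classes
    }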

6.2.5 Data challenges

  • The anticipated performances of the ALICE DAQ were verified one year before data taking.
  • The two curves of figure 6.5 show, respectively, the aggregate bandwidth to and from the event building and the bandwidth of the data archived to the MSS in the computing centre.

6.3.2 Architecture

  • The raw data of all ALICE detectors are received via 454 Detector Data Links (DDLs) at layer 1.
  • This is done in part with hardware coprocessors (see section 6.3.2.2) and therefore simultaneously with the receiving of the data.
  • The third layer reconstructs the event for each detector individually.
  • Using the reconstructed physics observables layer 5 performs the selection of events or regions of interest, based on run specific physics selection criteria.
  • The selected data is further subjected to complex data compression algorithms.

6.3.3 Cluster management

  • The HLT Cluster is managed using the SysMES Framework (System Management for Networked Embedded Systems and Clusters) and Lemon (LHC Era Monitoring).
  • SysMES is a scalable, decentralised, fault tolerant, dynamic and rule-based tool set for the monitoring of networks of target systems based on industry standards.
  • The resources to be monitored are the applications running on the computer nodes as well as their hardware, the network switches, and the RMS (Rack Monitoring System).

6.3.4 Software architecture

  • The HLT software is divided into two functional parts, the data transportation framework and the data analysis.
  • The first part is mainly of a technical nature: it covers the communication between the components, the steering, and the data transfer between components/nodes.
  • The second part contains the physics analysis.
  • Development of Analysis Components is independent from data transportation which allows for a high degree of flexibility.
  • The Analysis Components can be run in the offline analysis without changes, making a direct comparison of results possible.

6.3.5 HLT interfaces to other online systems and offline

  • The HLT communicates with the other ALICE systems through various external interfaces and portal nodes.
  • Fail-safety and redundancy in the design of these interfaces avoid single points of failure and reduce the risk of time delays or data loss.

6.4 Offline computing 6.4.1 Introduction

  • The role of the Offline Project is the development and operation of the framework for data processing.
  • This includes tasks such as simulation, reconstruction, calibration, alignment, visualisation and analysis.
  • In particular, the authors distinguish three main areas, among them distributed computing and quasi-online operations: during proton-proton collisions, the data are reconstructed and analysed quasi-online.
  • The coordinated operation of all these elements, to realise timely and reliably the physics discovery potential of the data collected by the ALICE experiment, is called the ALICE computing model.

6.4.2 Computing model

  • The inputs to the ALICE Computing Model are the computing resources needed to process the data, the amount of data foreseen and the time lapse between the moment data are recorded and the moment results are needed.
  • The constraints are the foreseeable amount of resources available, their location and the offered service level.
  • In the case of ALICE the number of such centres is around sixty, spread in more than thirty countries.
  • The amount of different tasks to be performed, the necessity to perform them in a timely and documented fashion, and the number of users involved require that this heterogeneous distributed system works as a single integrated computing centre.
  • Large regional computing centres, called Tier-1, share with CERN the role of safely storing the data on highly reliable storage media (at present, magnetic tapes) and of performing the bulk of the organised processing of the data.

6.4.4 AliRoot framework

  • The ALICE offline framework, AliRoot [250], is shown schematically in figure 6.12.
  • These fundamental technical choices result in one single framework, entirely written in C++, with some external programs (hidden to the users) still in FORTRAN.
  • In the preparation phase, before the start of data taking, it was used to evaluate the physics performance of the full ALICE detector and to assess the functionality of the framework towards the final goal of extracting physics from the data.
  • The data produced by the event generators contain full information about the generated particles: type, momentum, charge, and mother-daughter relationship.
  • Finally, the digits are stored in the specific hardware format of each detector as raw data.

Hardware architecture

  • The hardware architecture of the control system can be divided into three layers: a supervisory, a control, and a field layer, as shown in figure 7.1.
  • The supervisory layer consists of a number of computers (Operator Nodes, ON) that provide the user interfaces to the operators.
  • The supervisory level interfaces to the control layer, where computers (Worker Nodes, WN) and PLC or PLC-like devices interface to the experimental equipment.
  • These devices collect and process information from the lower, so called field layer, and make it available for the supervisory layer.
  • The field layer comprises all field devices such as power supplies and fieldbus nodes, sensors, actuators, etc.

Software architecture

  • The software architecture is a tree-like structure that represents the structure of subdetectors, their sub-systems and devices.
  • The structure is composed of nodes which each have a single 'parent' except for the top node called the 'root node' that has no parent.
  • Nodes may have zero, one or more children.

Control, Logical and Device Units

  • There are three types of nodes, a Control Unit (CU), a Logical Unit (LU) and a Device Unit (DU) that serve as basic building blocks for the entire hierarchical control system.
  • The CU and LU model and control the sub-tree below them, while the device unit 'drives' a device.
  • The hierarchy can have an arbitrary number of levels to provide the sub-detectors with as many abstraction layers as required.
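
The hierarchy of Control, Logical and Device Units can be pictured as a simple tree of nodes, each owning its children, with Device Units as the leaves that drive hardware. The sketch below is purely illustrative; the real system is built with PVSSII and SMI++, not with classes like these, and all names are hypothetical.

    // Illustrative sketch of the control hierarchy (not the PVSSII/SMI++ API):
    // Control and Logical Units own children; Device Units are the leaves that
    // drive actual hardware.
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    class Node {
     public:
      explicit Node(std::string name) : name_(std::move(name)) {}
      virtual ~Node() = default;
      void addChild(std::unique_ptr<Node> child) {
        children_.push_back(std::move(child));
      }
      const std::string& name() const { return name_; }
     protected:
      std::string name_;
      std::vector<std::unique_ptr<Node>> children_;  // empty for Device Units
    };

    class ControlUnit : public Node { public: using Node::Node; };
    class LogicalUnit : public Node { public: using Node::Node; };
    class DeviceUnit  : public Node { public: using Node::Node; };

    // Example: a sub-detector sub-tree that could become the root of a partition.
    std::unique_ptr<Node> buildExampleTree() {
      auto detector = std::make_unique<ControlUnit>("SUBDETECTOR");
      auto hv = std::make_unique<LogicalUnit>("HIGH_VOLTAGE");
      hv->addChild(std::make_unique<DeviceUnit>("HV_CHANNEL_0"));
      detector->addChild(std::move(hv));
      return detector;
    }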

Partitioning

  • The hierarchy also offers a high degree of independence between its components and allows, by means of the concept of 'partitioning', for concurrent use.
  • Partitioning is the capability of independently and concurrently controlling and monitoring parts of the system, typically sub-trees of the hierarchical control tree.
  • This functionality is essential during the installation and commissioning phase, where parts of the control system might not yet be available but sub-detectors need to control the installed equipment.
  • During longer shutdown periods, sub-detectors might want to run their sub-detector control system while other parts of the control system are still switched off.
  • Only the control units in the control tree can become the root node of a partitioned control tree.

Finite-State Machine

  • The behaviour and functionality of each unit in the control tree is modelled and implemented as a finite-state machine (FSM).
  • The finite-state machine concept is a fundamental component in the control system architecture.
  • Each unit has a set of well-defined states and can transit between these states by executing 'actions' that are triggered either by commands from an operator or another component, or by other events such as state changes of other components.
  • This concept allows for distributed and decentralised decision making and actions can be performed autonomously, even when controlled centrally.
  • PVSSII itself does not provide any FSM functionality, but this was added in the framework (SMI++, [263] ).
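
A finite-state machine of this kind can be reduced to a table of allowed (state, command) transitions, with any unknown combination flagged as an error. The sketch below illustrates the concept with hypothetical states and commands; the actual units are modelled with SMI++ on top of PVSSII.

    // Minimal finite-state-machine sketch (illustration; the real system uses
    // SMI++ on top of PVSSII): a unit has a state and transitions driven by
    // commands or by state changes reported from below.
    #include <map>
    #include <string>
    #include <utility>

    enum class State { Off, Standby, Ready, Running, Error };

    class FsmUnit {
     public:
      FsmUnit() {
        // Allowed transitions: (current state, command) -> new state.
        table_[{State::Off,     "GO_STANDBY"}] = State::Standby;
        table_[{State::Standby, "CONFIGURE"}]  = State::Ready;
        table_[{State::Ready,   "START"}]      = State::Running;
        table_[{State::Running, "STOP"}]       = State::Ready;
      }

      // Execute an action triggered by an operator command or by a state
      // change reported from a child unit; unknown combinations give Error.
      State handle(const std::string& command) {
        auto it = table_.find({state_, command});
        state_ = (it != table_.end()) ? it->second : State::Error;
        return state_;
      }

      State state() const { return state_; }

     private:
      State state_ = State::Off;
      std::map<std::pair<State, std::string>, State> table_;
    };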

PVSSII

  • The core software of the control system is a commercial SCADA (Supervisory Controls And Data Acquisition) system: PVSSII.
  • This package was selected after an extensive evaluation performed by the CERN-wide Joint Controls Project (JCOP).
  • It offers many of the basic functionalities needed by the control system.

JCOP and ALICE frameworks

  • Around PVSSII a framework was built as a joint effort between the four LHC experiments.
  • This framework provides tools and components for the implementation of all the common tasks that are expected from the control system such as FSM, database access, access control, basic user interfaces, configuration, etc.
  • The JCOP-framework also implemented interfaces to several hardware devices that are commonly used and that hide much of the PVSSII internals from a non-expert end-user.
  • In the same context an ALICE framework was developed to cater for ALICE specific needs.
  • These tools are used by the sub-detector expert to build their applications.

Operating systems

  • All PVSSII applications are running on the Windows operating system on the Worker Nodes (Windows XP) and the Operator Nodes (Windows Server 2003).
  • Specific worker nodes (those interfacing to the Front End Electronics) run the Linux (SLC4) operating system.

Communication and hardware access

  • Two different protocols cover all the needs.
  • OPC (OLE for Process Control) is a widely accepted industry standard for communication with commercial devices.
  • DIM is available for many platforms and libraries are available for several computer languages.
  • DIM implements a client-server mechanism over TCP/IP.
  • In addition PVSSII can also directly communicate with equipment through drivers.

Databases

  • The amount and the use of the data vary widely, and the data are therefore stored in a collection of databases, each optimised for its particular use, all housed in a dedicated Oracle RAC database server at the experiment site.
  • PVSSII has its own internal proprietary run-time database which is used to store the values that are read from the devices, information on the configuration of PVSSII itself and any information that is needed for the operation of the PVSSII system.
  • Data archiving is an integral part of PVSSII and is the mechanism to store the history of any data available in the system that the user decides to archive.
  • During the first commissioning and tuning of the sub-detectors a file-based system is used; in the final production system the data are archived into the database.
  • The configuration database holds the data needed for the configuration of the whole control system; this includes the configuration of the control system itself, configuration of hardware.

Front-End Electronics

  • The control of Front-End Electronics (FEE) is a complex and delicate task.
  • It involves control of voltage regulators, power switches, error registers, etc., and monitoring of temperatures, voltages, currents, status registers, etc. of the FEE boards.
  • It also involves configuration and initialization of FEE controllers and of all the various custom chips on the detector boards.

Services

  • The DCS interfaces to various services in order to keep the control system up to date with the experiment operation environment.
  • Some of the services allow active control from the DCS; however, the majority of the interactions are monitoring only.
  • Any information needed by the sub-detectors from the gas system for their operation is made available through DIP.
  • These cooling systems (and their controls) are designed, built and installed by the TS/CV team; the control is based on PLC technology.
  • Both gas and cooling systems generate, as backup to the software interface, hardware interlocks allowing sub-detectors to take actions and protect their equipment in case of serious anomalies.

Detector Safety System and Interlocks

  • The Detector Safety System (DSS, [266] ) is a robust part of the ALICE DCS, designed for highavailability and is based on a redundant PLC system.
  • It is designed to monitor the experiments environment (temperature, presence of cooling, water leaks) and to take automatic protective actions (cut power, close water valves) in case of anomalies.
  • Hardware interlocks are implemented at several levels.
  • Sub-detectors have implemented various protection mechanisms on their detector equipment; for example, a high temperature detected on the electronics automatically switches off that piece of electronics.
  • Also the DSS is used as an interlock system where independent sensors are available to detect anomalous conditions; the DSS is then programmed to take protective action.

7.2 Experiment Control System (ECS) 7.2.1 Requirements

  • The control of the ALICE experiment is based on several independent 'online systems'.
  • In the commissioning phase, however, detectors are debugged and tested as independent objects.
  • This ability to operate detectors independently will therefore remain essential during the whole life cycle of ALICE.
  • The Experiment Control System (ECS) coordinates the operations controlled by the 'online systems'.
  • It permits independent, concurrent activities on part of the experiment by different operators and coordinates the functions of the 'online systems' for all the detectors and within every partition.

7.2.2 System architecture Partitions and standalone detectors

  • From the ECS point of view, a partition is defined by a unique name that makes it different from other partitions and by two lists of detectors: the list of detectors 'assigned' to the partition and the list of detectors 'excluded' from the partition.
  • The second list, called 'excluded' detectors list, contains the names of the ALICE detectors that have been assigned to the partition, but are currently not active in it.
  • It interacts with the DAQ and HLT processes that steer the DAQ and HLT activities for the whole partition.
  • The ECS handles an individual detector operation by monitoring the DCS status of the detector, interacting with the DAQ and HLT processes that steer the DAQ and HLT activities for that particular detector, and sending commands to the Local Trigger Units (LTU) associated to it.
  • The tasks performed by a standalone detector are equal to the individual detector operations that are allowed when the detector is active in a partition.

Partition Control Agent (PCA)

  • The main tasks performed by this process are the following.
  • It handles data-acquisition runs using all the detectors active in the partition.
  • It delegates individual detector functions to the DCAs controlling the detectors active in the partition.
  • It handles the structure of the partition allowing the inclusion/exclusion of detectors whenever these tasks are compatible with the data-taking runs going on for individual detectors or for the whole partition.
  • The PCA accepts commands from one PCAHI at a time.

Human Interfaces

  • With the mastership of the DCA, one can send commands to the DCA, change the rights granted to the DCA, and send commands directly to objects in the DCS, DAQ, HLT, and TRG 'online systems'.
  • Without the mastership of the DCA, the DCAHI can only get information and cannot issue active commands.
  • An operator can run a partition with a PCA Human Interface having the mastership of a PCA.
  • One can send commands to start global and individual detector tasks, can change the rights granted to the PCA, can change the structure of the partition excluding or including detectors, and can send commands directly to objects in the DCS, DAQ, HLT, and TRG 'online systems'.
  • Without the mastership of the PCA, the PCAHI can only get information and cannot issue active commands.

7.2.3 Interfaces to the online systems

  • The main components of the ECS receive status information from the 'online systems' and send commands to them through interfaces based on Finite-State Machines.
  • The interfaces between the ECS and the 'online systems' contain access control mechanisms that manage the rights granted to the ECS.
  • The 'online systems' can either be under the control of the ECS or be operated as independent systems where the 'online systems' provide status information to the ECS but do not receive commands from it.

ECS/DCS interface

  • The interface between the ECS and the DCS consists of one object per detector: the roots of the sub-trees described above and representing the detectors within the DCS.
  • These objects can provide status information to the DCS and, at the same time, to the ECS.
  • There is only one CTP, but many partitions can be operated at the same time and all of them need access to the CTP.
  • When a detector is operated in standalone mode, the DCA controlling it interacts directly with the LTU associated to the detector and the CTP is ignored.
  • When a global operation is performed in a partition, the PCA controlling the partition interacts with the TPA that in turn interacts with CTP and LTUs.

ECS/DAQ interface

  • The interface between the ECS and the DAQ is made of SMI++ objects representing Run Control (RC) processes of two kinds:
  • (a) an RC process per detector, which steers the data acquisition for a given detector and for that detector only;
  • (b) an RC process per partition, which steers the data acquisition for the whole partition with data produced by all the active detectors.

7.3 Online Detector Calibration

  • The condition data, including parameters such as detector response calibration, bad-channel maps, pedestal values and so on, are evaluated online from data collected during normal data taking or during special runs.
  • So-called Detector Algorithms (DAs), running on the DAQ LDCs and GDCs, perform this online evaluation.

Performance

  • This section summarises the charged-particle identification and neutral-particle detection performance expected from Monte Carlo simulations.
  • The ALICE detector was designed to cope with the highest predicted charged-particle multiplicity densities for central Pb-Pb collisions at the LHC (dN_ch/dη at mid-rapidity up to 8000; see section 1.3.1 of [20]).
  • Track finding begins with the reconstruction of the primary vertex using the correlation of the hit positions in the innermost detector (SPD).
  • Charged hadrons are identified combining information provided by the ITS, TPC, TRD, TOF, and HMPID detectors.
  • The PHOS spectrometer detects and identifies photons.

8.1 Track and vertex reconstruction 8.1.1 Primary vertex determination

  • The reconstruction of the primary vertex is based on the information provided by the Silicon Pixel Detectors (SPD), which constitute the two innermost layers of the ITS.
  • The authors select pairs of reconstructed points in the two layers, which are close in azimuthal angle in the transverse plane.
  • This estimate of the primary vertex position is then used to correct the measurement of the z-coordinate, for effects due to an off-axis position of the interaction point in the transverse plane.
  • This measurement of the primary-vertex position is used as an input for the tracking.
  • The expected resolutions on the vertex z- and transverse x/y-coordinates are shown as functions of the charged-particle density (for pp collisions).
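
The correlation method described above can be sketched as follows: each pair of azimuthally matched points in the two pixel layers is extrapolated linearly to the beam axis, and the vertex z is estimated from the distribution of the extrapolated values. The code below is an illustration of the idea, not the AliRoot vertexer; the azimuthal cut and the use of the median are choices of the example.

    // Sketch of the SPD-based z-vertex estimate (illustration of the method,
    // not the AliRoot vertexer): azimuthally correlated point pairs in the two
    // pixel layers are extrapolated to the beam axis.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct SpdPoint { double r, phi, z; };   // cylindrical coordinates

    // Linear extrapolation of the line through a layer-1 and a layer-2 point
    // to r = 0 (the beam axis).
    double extrapolateToBeamAxis(const SpdPoint& p1, const SpdPoint& p2) {
      return p1.z - p1.r * (p2.z - p1.z) / (p2.r - p1.r);
    }

    double estimateVertexZ(const std::vector<SpdPoint>& layer1,
                           const std::vector<SpdPoint>& layer2,
                           double maxDeltaPhi = 0.01 /* rad, placeholder cut */) {
      std::vector<double> zCandidates;
      for (const auto& p1 : layer1)
        for (const auto& p2 : layer2)
          if (std::fabs(p2.phi - p1.phi) < maxDeltaPhi)   // azimuthal correlation
            zCandidates.push_back(extrapolateToBeamAxis(p1, p2));
      if (zCandidates.empty()) return 0.0;
      // Use the median as a robust estimate against combinatorial background.
      std::nth_element(zCandidates.begin(),
                       zCandidates.begin() + zCandidates.size() / 2,
                       zCandidates.end());
      return zCandidates[zCandidates.size() / 2];
    }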

8.1.2 Track reconstruction

  • The basic method employed for track finding and fitting is the Kalman filter, as introduced to this field by P. Billoir [252]; a one-dimensional illustration of the filter step is given after this list.
  • This seeding is done using the space points reconstructed in the TPC.
  • The authors start to combine the space points from a few outermost pad rows using, in the first pass, the primary-vertex position as a constraint.
  • For the same value of the charged-particle density, figure 8.4 shows the expected quality of the TPC-ITS track finding, for different definitions of properly found and of fake tracks.
  • Optionally, the authors proceed with an additional track-finding step using only points from the ITS, after having removed all the ITS space points already assigned to tracks.
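
The Kalman-filter approach referenced above alternates a prediction step (propagation to the next measurement surface, with the covariance inflated by process noise such as multiple scattering) and an update step (weighted combination with the new measurement). The one-dimensional sketch below illustrates these two steps; the actual track model propagates several track parameters with a full covariance matrix.

    // One-dimensional Kalman-filter sketch to illustrate the predict/update
    // cycle used in track finding and fitting (the real track model carries
    // several parameters and a full covariance matrix).
    struct KalmanState {
      double x;   // estimated coordinate at the current pad row / layer
      double C;   // variance of the estimate
    };

    // Prediction: propagate the state to the next measurement surface and
    // inflate the variance by the process noise Q (e.g. multiple scattering).
    KalmanState predict(KalmanState s, double Q) {
      s.C += Q;
      return s;
    }

    // Update: combine the prediction with a measurement m of variance R.
    KalmanState update(KalmanState s, double m, double R) {
      double K = s.C / (s.C + R);     // Kalman gain
      s.x += K * (m - s.x);           // pull the estimate towards the measurement
      s.C *= (1.0 - K);               // the combined variance shrinks
      return s;
    }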

8.1.3 Secondary-vertex finding

  • Vertices from strange particle decays are searched for at the reconstruction level.
  • The authors combine opposite-sign secondary tracks and calculate their distance of closest approach.
  • Additional cuts are then imposed in the subsequent analysis phase.
  • Tight cuts were applied on the distance of closest approach and on the pointing angle (the angle between reconstructed Λ momentum and the line joining the primary and secondary vertices), in order to maximize the signal-to-background ratio.
  • Such a Λ would not in general point towards the primary vertex (the parent Ξ − would).
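
The distance of closest approach (DCA) between two opposite-sign secondary tracks, used above as the first selection, can be illustrated with straight-line tracks: parametrise each track as a point plus a direction and minimise the distance between the two lines. The real reconstruction deals with helical tracks in the solenoidal field, so the sketch below is an illustration only.

    // Sketch of the distance of closest approach (DCA) between two tracks,
    // approximated here as straight lines (illustration only; the actual
    // reconstruction uses helical tracks in the magnetic field).
    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    static double dot(const Vec3& a, const Vec3& b) {
      return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static Vec3 sub(const Vec3& a, const Vec3& b) {
      return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    // DCA of the lines p1 + s*d1 and p2 + t*d2; midPoint receives the centre
    // of the segment of closest approach, a candidate secondary-vertex position.
    double dca(const Vec3& p1, const Vec3& d1,
               const Vec3& p2, const Vec3& d2, Vec3* midPoint) {
      Vec3 w = sub(p1, p2);
      double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
      double d = dot(d1, w),  e = dot(d2, w);
      double denom = a * c - b * b;          // ~0 for parallel tracks
      double s = (denom != 0.0) ? (b * e - c * d) / denom : 0.0;
      double t = (denom != 0.0) ? (a * e - b * d) / denom : 0.0;
      Vec3 q1{p1[0] + s * d1[0], p1[1] + s * d1[1], p1[2] + s * d1[2]};
      Vec3 q2{p2[0] + t * d2[0], p2[1] + t * d2[1], p2[2] + t * d2[2]};
      if (midPoint)
        *midPoint = {0.5 * (q1[0] + q2[0]), 0.5 * (q1[1] + q2[1]),
                     0.5 * (q1[2] + q2[2])};
      Vec3 diff = sub(q1, q2);
      return std::sqrt(dot(diff, diff));
    }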

8.2 Particle identification 8.2.1 Charged-hadron identification

  • Several detectors, the ITS, TPC, TRD, TOF and HMPID, participate in the charged-Particle IDentification (PID), each with a different momentum-dependent performance.
  • The information from the individual detectors is then combined in the second PID step, described later in the section.
  • The track probabilities for different particle types are calculated using such detector response functions.
  • In the lower momentum range, one starts with excellent separations in the 1/β² region (below the particle masses).
  • As the authors also saw above, charged-particle identification based on a dE/dx measurement performs well in the 1/β² region and, for gas-based detectors, in the multi-GeV region.
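
The combination of the per-detector response functions into track probabilities, mentioned above, can be sketched as a product of the individual conditional probabilities weighted by prior relative abundances and then normalised. The code below illustrates this combination; it is not the AliRoot PID code, and the choice of five species and the priors are assumptions of the example.

    // Sketch of combining per-detector PID information into track probabilities
    // (illustration of the combined-PID idea; not the AliRoot implementation).
    // For each species i, detectors contribute conditional probabilities
    // r_d(signal | i); these are multiplied and weighted by prior abundances.
    #include <array>
    #include <vector>

    constexpr int kNSpecies = 5;   // e, mu, pi, K, p (assumed set)

    using SpeciesVec = std::array<double, kNSpecies>;

    SpeciesVec combinePid(const std::vector<SpeciesVec>& detectorResponses,
                          const SpeciesVec& priors) {
      SpeciesVec prob;
      double norm = 0.0;
      for (int i = 0; i < kNSpecies; ++i) {
        double p = priors[i];
        for (const auto& r : detectorResponses) p *= r[i];   // product over detectors
        prob[i] = p;
        norm += p;
      }
      if (norm > 0.0)
        for (double& p : prob) p /= norm;    // normalise to unit total probability
      return prob;
    }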

8.4 Muon detection

  • The muon tracks are reconstructed by five tracking stations located behind the absorber.
  • Two algorithms are used for track reconstruction: the first combines tracklets found in the muon tracking stations (each station measures two space points on the track); the second is based on the Kalman filter.
  • The remaining contribution to the dimuon invariant-mass resolution, due to the error on the determination of the angle between the two muons, is lower.
  • This is due to the fact that the primary-vertex constraint is used in the muon reconstruction, which allows the angle to be determined using the positions of the primary vertex and of the first measured point behind the absorber, rather than by relying only on the measured track direction in the tracking stations.
  • The precision on the angle is therefore determined essentially by the resolution on the first measured point.




Citations
Book
Georges Aad1, E. Abat2, Jalal Abdallah3, Jalal Abdallah4  +3029 moreInstitutions (164)
23 Feb 2020
TL;DR: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper, where a brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
Abstract: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.

3,111 citations

Journal ArticleDOI
TL;DR: The ALICE Time-Projection Chamber (TPC) as discussed by the authors is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment at the CERN LHC.
Abstract: The design, construction, and commissioning of the ALICE Time-Projection Chamber (TPC) is described. It is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment at the CERN LHC. The TPC is cylindrical in shape with a volume close to 90 m³ and is operated in a 0.5 T solenoidal magnetic field parallel to its axis. In this paper we describe in detail the design considerations for this detector for operation in the extreme multiplicity environment of central Pb-Pb collisions at LHC energy. The implementation of the resulting requirements into hardware (field cage, read-out chambers, electronics), infrastructure (gas and cooling system, laser-calibration system), and software led to many technical innovations which are described along with a presentation of all the major components of the detector, as currently realized. We also report on the performance achieved after completion of the first round of stand-alone calibration runs and demonstrate results close to those specified in the TPC Technical Design Report. (C) 2010 CERN for the benefit of the ALICE collaboration. Published by Elsevier B.V. All rights reserved.

545 citations

Journal ArticleDOI
TL;DR: In this article, comprehensive results on π±, K±, K0_S, p(p̄) and Λ(Λ̄) production at mid-rapidity (0 < y_CMS < 0.5) in p-Pb collisions at √s_NN = 5.02 TeV, measured by the ALICE detector at the LHC, are reported.

375 citations

Journal ArticleDOI
K. Aamodt1, N. Abel2, U. Abeysekara3, A. Abrahantes Quintana  +1051 moreInstitutions (77)
TL;DR: In this article, the authors measured charged-particle pseudo-rapidity density at the LHC with the ALICE detector at centre-of-mass energies 0.9 TeV and 2.36 TeV in the pseudorapidity range.
Abstract: Charged-particle production was studied in proton-proton collisions collected at the LHC with the ALICE detector at centre-of-mass energies 0.9 TeV and 2.36 TeV in the pseudorapidity range |η| < 1.4. In the central region (|η| < 0.5), at 0.9 TeV, we measure charged-particle pseudo-rapidity density dN_ch/dη = 3.02 ± 0.01 (stat.) +0.08/−0.05 (syst.) for inelastic interactions, and dN_ch/dη = 3.58 ± 0.01 (stat.) ±0.12 (syst.) for non-single-diffractive interactions. At 2.36 TeV, we find dN_ch/dη = 3.77 ± 0.01 (stat.) +0.25/−0.12 (syst.) for inelastic, and dN_ch/dη = 4.43 ± 0.01 (stat.) +0.17/−0.12 (syst.) for non-single-diffractive collisions. The relative increase in charged-particle multiplicity from the lower to higher energy is 24.7% ± 0.5% (stat.) +5.7/−2.8% (syst.) for inelastic and 23.7% ± 0.5% (stat.) +4.6/−1.1% (syst.) for non-single-diffractive interactions. This increase is consistent with that reported by the CMS collaboration for non-single-diffractive events and larger than that found by a number of commonly used models. The multiplicity distribution was measured in different pseudorapidity intervals and studied in terms of KNO variables at both energies. The results are compared to proton-antiproton data and to model predictions.

284 citations

Journal ArticleDOI
K. Aamodt1, N. Abel2, U. Abeysekara3, A. Abrahantes Quintana  +1106 moreInstitutions (80)
TL;DR: In this paper, the alignment of the Inner Tracking System of the ALICE Large Ion Collider Experiment (ALICE ITS) with the Millepede global approach has been studied, together with the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008.
Abstract: ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at the LHC energies. The ALICE ITS, Inner Tracking System, consists of six cylindrical layers of silicon detectors with three different technologies; in the outward direction: two layers of pixel detectors, two layers each of drift and strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 μm in some cases (pixels). The sources of alignment information include survey measurements, and the reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach. An iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays that have been collected during summer 2008, with the ALICE solenoidal magnet switched off.

277 citations

References
Journal ArticleDOI
S. Agostinelli1, John Allison2, K. Amako3, J. Apostolakis4, Henrique Araujo5, P. Arce4, Makoto Asai6, D. Axen4, S. Banerjee7, G. Barrand, F. Behner4, Lorenzo Bellagamba8, J. Boudreau9, L. Broglia10, A. Brunengo8, H. Burkhardt4, Stephane Chauvie, J. Chuma11, R. Chytracek4, Gene Cooperman12, G. Cosmo4, P. V. Degtyarenko13, Andrea Dell'Acqua4, G. Depaola14, D. Dietrich15, R. Enami, A. Feliciello, C. Ferguson16, H. Fesefeldt4, Gunter Folger4, Franca Foppiano, Alessandra Forti2, S. Garelli, S. Gianì4, R. Giannitrapani17, D. Gibin4, J. J. Gomez Y Cadenas4, I. González4, G. Gracia Abril4, G. Greeniaus18, Walter Greiner15, Vladimir Grichine, A. Grossheim4, Susanna Guatelli, P. Gumplinger11, R. Hamatsu19, K. Hashimoto, H. Hasui, A. Heikkinen20, A. S. Howard5, Vladimir Ivanchenko4, A. Johnson6, F.W. Jones11, J. Kallenbach, Naoko Kanaya4, M. Kawabata, Y. Kawabata, M. Kawaguti, S.R. Kelner21, Paul R. C. Kent22, A. Kimura23, T. Kodama24, R. P. Kokoulin21, M. Kossov13, Hisaya Kurashige25, E. Lamanna26, Tapio Lampén20, V. Lara4, Veronique Lefebure4, F. Lei16, M. Liendl4, W. S. Lockman, Francesco Longo27, S. Magni, M. Maire, E. Medernach4, K. Minamimoto24, P. Mora de Freitas, Yoshiyuki Morita3, K. Murakami3, M. Nagamatu24, R. Nartallo28, Petteri Nieminen28, T. Nishimura, K. Ohtsubo, M. Okamura, S. W. O'Neale29, Y. Oohata19, K. Paech15, J Perl6, Andreas Pfeiffer4, Maria Grazia Pia, F. Ranjard4, A.M. Rybin, S.S Sadilov4, E. Di Salvo8, Giovanni Santin27, Takashi Sasaki3, N. Savvas2, Y. Sawada, Stefan Scherer15, S. Sei24, V. Sirotenko4, David J. Smith6, N. Starkov, H. Stoecker15, J. Sulkimo20, M. Takahata23, Satoshi Tanaka30, E. Tcherniaev4, E. Safai Tehrani6, M. Tropeano1, P. Truscott31, H. Uno24, L. Urbán, P. Urban32, M. Verderi, A. Walkden2, W. Wander33, H. Weber15, J.P. Wellisch4, Torre Wenaus34, D.C. Williams, Douglas Wright6, T. Yamada24, H. Yoshida24, D. Zschiesche15 
TL;DR: The Geant4 toolkit as discussed by the authors is a toolkit for simulating the passage of particles through matter, including a complete range of functionality including tracking, geometry, physics models and hits.
Abstract: Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.

18,904 citations

Journal ArticleDOI
TL;DR: The Pythia program as mentioned in this paper can be used to generate high-energy-physics 'events' (i.e. sets of outgoing particles produced in the interactions between two incoming particles).
Abstract: The Pythia program can be used to generate high-energy-physics 'events', i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a role, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest.

6,300 citations
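The manual summarised above documents the Fortran incarnation of PYTHIA; its C++ successor, PYTHIA 8, exposes essentially the same configure-then-generate workflow. A minimal, hedged sketch of that workflow (settings and event counts chosen here only for illustration):

```cpp
// Minimal PYTHIA 8 sketch (the cited manual describes the Fortran PYTHIA 6.4,
// but the C++ PYTHIA 8 interface follows the same workflow).
#include "Pythia8/Pythia.h"
#include <cstdio>

int main()
{
  Pythia8::Pythia pythia;

  // Configuration through switches and parameters, as described in the manual.
  pythia.readString("Beams:eCM = 14000.");        // 14 TeV pp collisions
  pythia.readString("HardQCD:all = on");          // QCD 2 -> 2 hard subprocesses
  pythia.readString("PhaseSpace:pTHatMin = 20."); // minimum hard-scattering pT (GeV)
  pythia.init();

  long nCharged = 0;
  const int nEvents = 100;
  for (int iEvent = 0; iEvent < nEvents; ++iEvent) {
    if (!pythia.next()) continue;                 // generate one event
    for (int i = 0; i < pythia.event.size(); ++i)
      if (pythia.event[i].isFinal() && pythia.event[i].isCharged()) ++nCharged;
  }

  pythia.stat();                                  // cross-section and error statistics
  std::printf("average charged multiplicity: %.1f\n", double(nCharged) / nEvents);
  return 0;
}
```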

Journal ArticleDOI
TL;DR: ROOT, written in C++, contains an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools.
Abstract: The ROOT system is an Object Oriented framework for large scale data analysis. ROOT, written in C++, contains, among others, an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The command and scripting language is C++ (using the interpreter) and large scripts can be compiled and dynamically linked in. The OO database design has been optimized for parallel access (reading as well as writing) by multiple processes.

4,586 citations
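A minimal ROOT macro, given only as an orientation sketch, showing the histogramming, fitting and object persistence mentioned in the abstract (names and values here are illustrative):

```cpp
// Minimal ROOT macro sketch: fill a histogram, fit it, and persist it to a ROOT file.
#include "TH1F.h"
#include "TF1.h"
#include "TFile.h"
#include "TRandom3.h"

void histo_demo()
{
  TH1F h("h", "Gaussian toy data;x;entries", 100, -5., 5.);

  // Fill with 10k Gaussian-distributed toy values.
  TRandom3 rng(0);
  for (int i = 0; i < 10000; ++i) h.Fill(rng.Gaus(0., 1.));

  // Built-in Gaussian fit (multi-dimensional histogramming and minimisation
  // are available through the same interfaces).
  h.Fit("gaus");

  // The fitted histogram is written to the hierarchical object store.
  TFile f("demo.root", "RECREATE");
  h.Write();
  f.Close();
}
```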

Journal ArticleDOI
TL;DR: In this article, the effects of gamma and electron irradiation on virgin (VPE) and recycled (RPE) polyethylene were compared and their mechanical, thermal and chemical properties were analyzed. VPE samples showed higher crosslinking percentages than RPE samples over the whole dose range studied, while the unirradiated RPE sample had higher tensile properties than VPE.

1,536 citations

Journal ArticleDOI
TL;DR: A Monte Carlo event generator HIJING is developed to study jet and multiparticle production in high energy pp, pA, and AA collisions, and a schematic mechanism of jet interactions in dense matter is described.
Abstract: Combining perturbative-QCD inspired models for multiple jet production with low-pT multistring phenomenology, we develop a Monte Carlo event generator HIJING to study jet and multiparticle production in high energy pp, pA, and AA collisions. The model includes multiple minijet production, nuclear shadowing of parton distribution functions, and a schematic mechanism of jet interactions in dense matter. Glauber geometry for multiple collisions is used to calculate pA and AA collisions. The phenomenological parameters are adjusted to reproduce essential features of pp multiparticle production data for a wide energy range (√s = 5–2000 GeV). Illustrative tests of the model on p+A and light-ion B+A data at √s = 20 GeV/nucleon and predictions for Au+Au at energies of the BNL Relativistic Heavy Ion Collider (√s = 200 GeV/nucleon) are given.

1,180 citations
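The Glauber geometry mentioned above can be illustrated with a toy Monte Carlo sketch that counts binary nucleon-nucleon collisions for a single A+A event. This is not the HIJING code (which is written in Fortran); the hard-sphere nucleus, the uniform impact-parameter sampling and the numerical values are simplifying assumptions made here purely for illustration.

```cpp
// Toy Monte Carlo Glauber: count binary nucleon-nucleon collisions in one
// A+A event at a random impact parameter.  Hard-sphere nucleus instead of
// Woods-Saxon; all values are illustrative.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Nucleon { double x, y; };

// Sample A nucleon positions uniformly inside a sphere of radius R,
// keeping only the transverse (x, y) projection.
std::vector<Nucleon> sampleNucleus(int A, double R, std::mt19937& gen)
{
  std::uniform_real_distribution<double> u(-R, R);
  std::vector<Nucleon> nucl;
  while (static_cast<int>(nucl.size()) < A) {
    double x = u(gen), y = u(gen), z = u(gen);
    if (x * x + y * y + z * z <= R * R) nucl.push_back({x, y});
  }
  return nucl;
}

int main()
{
  const double kPi   = 3.14159265358979;
  const int    A     = 208;                   // Pb
  const double R     = 1.2 * std::cbrt(A);    // fm, hard-sphere radius
  const double sigma = 7.0;                   // fm^2 (~70 mb inelastic NN cross section, illustrative)
  const double dMax2 = sigma / kPi;           // collide if transverse distance^2 < sigma/pi

  std::mt19937 gen(42);
  std::uniform_real_distribution<double> ub(0., 2. * R);
  const double b = ub(gen);                   // impact parameter (uniform, for simplicity)

  auto nuclA = sampleNucleus(A, R, gen);
  auto nuclB = sampleNucleus(A, R, gen);

  int nColl = 0;
  for (const auto& a : nuclA)
    for (const auto& c : nuclB) {
      double dx = a.x - (c.x + b), dy = a.y - c.y;   // nucleus B shifted by b
      if (dx * dx + dy * dy < dMax2) ++nColl;
    }

  std::printf("b = %.1f fm, Ncoll = %d\n", b, nColl);
  return 0;
}
```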

Frequently Asked Questions (20)
Q1. What are the contributions mentioned in the paper "The ALICE experiment at the CERN LHC"?

The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high-momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for LHC. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.

Due to charge diffusion during the drift process, the double-track resolution is a function of the drift time for a given separation efficiency. 
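As a hedged worked relation (a generic drift-chamber scaling, not numbers taken from this paper), the connection between drift time and double-track resolution can be written as:

```latex
% Generic drift-chamber scaling (illustrative; D, k and t_d are not ALICE values):
% a charge cloud that drifts for a time t_d spreads by diffusion to a width
\sigma_{\mathrm{diff}}(t_d) = D \sqrt{t_d} ,
% so at a fixed separation efficiency two tracks are resolved only if their distance satisfies
\Delta x \gtrsim k \, \sigma_{\mathrm{diff}}(t_d) = k \, D \sqrt{t_d} ,
% i.e. the double-track resolution degrades with the square root of the drift time
% (k is a factor of order a few, set by the required separation efficiency).
```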

Because of the radial dependence of the track density, the readout is segmented radially into two readout chambers with slightly different wire geometry adapted to the varying pad sizes mentioned below. 

In central Pb-Pb collisions, about eight low-pt muons from π and K decays are expected to be detected per event in the spectrometer. 

The EMCal is installed into the support structure as twelve super module units, with the lower two super modules being of one-third size. 

A C interface layer provides access to the analysis components for external applications, which include a PubSub interface wrapper program as well as a simple standalone run environment for the components. 
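The pattern of such a C interface layer can be sketched as follows; all identifiers are hypothetical and the actual ALICE HLT component interface differs, but the sketch shows how a C++ analysis component can be exposed through a plain C ABI so that a wrapper program or a standalone run environment can load and drive it.

```cpp
// Illustrative pattern only: exposing a C++ analysis component through a plain
// C ABI.  All identifiers here are hypothetical.
#include <cstddef>

class ClusterFinder {                       // hypothetical C++ analysis component
public:
  int process(const void* in, std::size_t inSize, void* out, std::size_t outCapacity);
};

int ClusterFinder::process(const void*, std::size_t, void*, std::size_t) { return 0; }

extern "C" {
  // Opaque handle and C entry points that an external application
  // (e.g. a publisher-subscriber wrapper or a standalone runner) can
  // resolve at run time via dlopen/dlsym.
  typedef void* ComponentHandle;

  ComponentHandle component_create() { return new ClusterFinder(); }

  void component_destroy(ComponentHandle h) { delete static_cast<ClusterFinder*>(h); }

  int component_process(ComponentHandle h, const void* in, std::size_t inSize,
                        void* out, std::size_t outCapacity)
  {
    return static_cast<ClusterFinder*>(h)->process(in, inSize, out, outCapacity);
  }
}
```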

The fibre spacing is smaller than the radiation length of the absorber, in order to avoid electron absorption in the passive material, which would lead to non-uniformity.

In normal operation, if a sudden failure of the cooling were to occur, the temperature at the half-stave would increase at a rate of 1 °C/s.

A liquid circulation system was implemented to purify C6F14, fill and empty the twenty-one radiator trays at a constant flow, independently, remotely and safely. 

At high pt a large fraction of J/ψ's is produced via b-decay [136]; based on Tevatron measurements [137], the contribution from b-decay to the total J/ψ yield is ≈ 10% for pt < 3–4 GeV/c and then it increases linearly to ≈ 40% for pt around 15–18 GeV/c.
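Read as a parametrization, the quoted numbers correspond to a roughly constant 10% b-decay fraction below pt ≈ 3–4 GeV/c rising linearly to about 40% at pt ≈ 15–18 GeV/c. A hedged sketch of such an interpolation is given below; the breakpoints of 3.5 and 16.5 GeV/c are midpoints assumed here for illustration and are not values from the paper.

```cpp
// Illustrative linear interpolation of the quoted b -> J/psi fraction:
// ~10% below pt ~ 3-4 GeV/c, rising linearly to ~40% around pt ~ 15-18 GeV/c.
// The breakpoints of 3.5 and 16.5 GeV/c are assumptions made for this sketch.
#include <cstdio>
#include <initializer_list>

double bFractionOfJpsi(double pt /* GeV/c */)
{
  const double ptLow  = 3.5,  fLow  = 0.10;
  const double ptHigh = 16.5, fHigh = 0.40;
  if (pt <= ptLow)  return fLow;
  if (pt >= ptHigh) return fHigh;
  return fLow + (fHigh - fLow) * (pt - ptLow) / (ptHigh - ptLow);
}

int main()
{
  for (double pt : {2.0, 5.0, 10.0, 16.0})
    std::printf("pt = %4.1f GeV/c  ->  b-decay fraction ~ %.0f%%\n",
                pt, 100. * bFractionOfJpsi(pt));
  return 0;
}
```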

The radiator trays are supported by a stiff composite panel, consisting of a 50 mm thick layer of Rohacell sandwiched between two thin 0.5 mm layers of aluminium. 

The data formatting introduced by the DDL allows the D-RORC to separate the event fragments in memory, without interrupting the host CPU. 

As the control system operates the often delicate and unique equipment of the sub-detectors, the potential danger of serious and irreversible damage imposes the need for an advanced access control mechanism to regulate the interactions of users with the control system components.

The electronics include a Detector Control System (DCS) card with a processor core for handling the Ethernet connection to the ALICE detector control system.

Tests and simulations [29, 30] have shown that the SSD can operate in a thermal neutral way with the water temperature about 5 K below ambient temperature. 

The development of the long micro-cable was particularly delicate since it was designed to minimize material while keeping excellent High-Voltage insulation, signal quality and power dissipation. 

In summary, an effective and robust method to maximise the L2 rates of rare and interesting events can be achieved by controlling the rates of peripheral, semi-peripheral and central events with feedback from the LDC occupancy and with downscaling. 
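A toy feedback loop can make the downscaling idea concrete. The sketch below is purely illustrative: the class names, the target occupancy and the adjustment rule are assumptions made here, not the ALICE trigger/DAQ implementation.

```cpp
// Toy sketch of the downscaling principle: when the measured LDC occupancy
// approaches its limit, the abundant (central, semi-peripheral, peripheral)
// classes are downscaled harder so that rare-event triggers keep their full
// rate.  Class names, target occupancy and adjustment rule are illustrative.
#include <algorithm>

struct DownscaleFactors {            // accepted fraction of a class is 1/factor
  double central, semiPeripheral, peripheral;
};

DownscaleFactors adjust(DownscaleFactors f, double ldcOccupancy /* 0..1 */)
{
  const double target = 0.80;                  // assumed target LDC occupancy
  const double gain   = ldcOccupancy / target; // >1: too busy, <1: headroom

  // Abundant classes absorb the correction; rare-event classes stay untouched.
  f.central        = std::clamp(f.central        * gain, 1.0, 1.0e4);
  f.semiPeripheral = std::clamp(f.semiPeripheral * gain, 1.0, 1.0e4);
  f.peripheral     = std::clamp(f.peripheral     * gain, 1.0, 1.0e4);
  return f;
}
```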

The complexity of the assembly and test procedures (over 20 process steps were necessary) led to an average time of 3 days to have the complete module ready for final test with laser. 

The experiment will use the data-taking periods in the most efficient way by acquiring data for several observables concurrently following different scenarios. 

The mirroring on the distant end of the fibre provides an approximate compensation for the effects of the finite attenuation length within the fibre, making the longitudinal response of the detector quite uniform.
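As a hedged worked relation (a generic single-exponential attenuation model, not numbers from this paper), the compensation can be seen as follows: light created at a distance x from the readout end arrives both directly and after reflection at the mirrored far end, and the two contributions vary with x in opposite directions.

```latex
% Generic attenuation model (illustrative): fibre of length L, attenuation length \lambda,
% far-end mirror reflectivity R; light created at distance x from the readout end gives
S(x) \propto e^{-x/\lambda} + R \, e^{-(2L-x)/\lambda} .
% The direct term falls with x while the mirrored term (path length 2L - x) rises with x, so
S(0) \propto 1 + R \, e^{-2L/\lambda} , \qquad S(L) \propto (1 + R) \, e^{-L/\lambda} ,
% and for R close to 1 the variation of S over 0 < x < L is much smaller than that of
% e^{-x/\lambda} alone, i.e. the longitudinal response is flattened.
```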