
Showing papers by "Brunel University London" published in 2012


Journal ArticleDOI
TL;DR: In this paper, results are presented from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s) = 7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb^-1 at 7 TeV and 5.3 fb^-1 at 8 TeV; an excess of events with a local significance of 5.0 standard deviations is observed at a mass near 125 GeV.

8,857 citations


Journal ArticleDOI
TL;DR: The designations employed and the presentation of the material in this publication do not imply the expression of any opinion whatsoever on the part of UNEP or WHO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.

1,192 citations


Journal ArticleDOI
TL;DR: Although there are a set of fault prediction studies in which confidence is possible, more studies are needed that use a reliable methodology and which report their context, methodology, and performance comprehensively.
Abstract: Background: The accurate prediction of where faults are likely to occur in code can help direct test effort, reduce costs, and improve the quality of software. Objective: We investigate how the context of models, the independent variables used, and the modeling techniques applied influence the performance of fault prediction models. Method: We used a systematic literature review to identify 208 fault prediction studies published from January 2000 to December 2010. We synthesize the quantitative and qualitative results of 36 studies which report sufficient contextual and methodological information according to the criteria we develop and apply. Results: The models that perform well tend to be based on simple modeling techniques such as Naive Bayes or Logistic Regression. Combinations of independent variables have been used by models that perform well. Feature selection has been applied to these combinations when models are performing particularly well. Conclusion: The methodology used to build models seems to be influential to predictive performance. Although there are a set of fault prediction studies in which confidence is possible, more studies are needed that use a reliable methodology and which report their context, methodology, and performance comprehensively.

1,012 citations


Journal ArticleDOI
29 Mar 2012
TL;DR: In this article, the authors reported results from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s) = 7 TeV in five decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair.
Abstract: Combined results are reported from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s)=7 TeV in five Higgs boson decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair. The explored Higgs boson mass range is 110-600 GeV. The analysed data correspond to an integrated luminosity of 4.6-4.8 inverse femtobarns. The expected excluded mass range in the absence of the standard model Higgs boson is 118-543 GeV at 95% CL. The observed results exclude the standard model Higgs boson in the mass range 127-600 GeV at 95% CL, and in the mass range 129-525 GeV at 99% CL. An excess of events above the expected standard model background is observed at the low end of the explored mass range making the observed limits weaker than expected in the absence of a signal. The largest excess, with a local significance of 3.1 sigma, is observed for a Higgs boson mass hypothesis of 124 GeV. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-600 (110-145) GeV is estimated to be 1.5 sigma (2.1 sigma). More data are required to ascertain the origin of this excess.

786 citations


Journal ArticleDOI
TL;DR: The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.
Abstract: In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.

711 citations


Journal ArticleDOI
J. P. Lees, V. Poireau, V. Tisserand, J. Garra Tico, +362 more (77 institutions)
TL;DR: In this article, the full BaBar data sample was used to measure the ratios R(D(*)) and to investigate their sensitivity to new physics contributions in the form of a charged Higgs boson in the type II two-Higgs-doublet model.
Abstract: Based on the full BaBar data sample, we report improved measurements of the ratios R(D(*)) = B(B -> D(*) Tau Nu)/B(B -> D(*) l Nu), where l is either e or mu. These ratios are sensitive to new physics contributions in the form of a charged Higgs boson. We measure R(D) = 0.440 +- 0.058 +- 0.042 and R(D*) = 0.332 +- 0.024 +- 0.018, which exceed the Standard Model expectations by 2.0 sigma and 2.7 sigma, respectively. Taken together, our results disagree with these expectations at the 3.4 sigma level. This excess cannot be explained by a charged Higgs boson in the type II two-Higgs-doublet model. We also report the observation of the decay B -> D Tau Nu, with a significance of 6.8 sigma.

660 citations


Journal ArticleDOI
TL;DR: This publication provides an overview and thorough review of existing technologies for nucleic acid amplification, and identifies the factors impeding the integration of the methods discussed in fully automated, sample-to-answer POCT devices.
Abstract: Nucleic Acid Testing (NAT) promises rapid, sensitive and specific diagnosis of infectious, inherited and genetic disease. The next generation of diagnostic devices will interrogate the genetic determinants of such conditions at the point-of-care, affording clinicians prompt reliable diagnosis from which to guide more effective treatment. The complex biochemical nature of clinical samples, the low abundance of nucleic acid targets in the majority of clinical samples and existing biosensor technology indicate that some form of nucleic acid amplification will be required to obtain clinically relevant sensitivities from the small samples used in point-of-care testing (POCT). This publication provides an overview and thorough review of existing technologies for nucleic acid amplification. The different methods are compared and their suitability for POCT adaptation is discussed. Current commercial products employing isothermal amplification strategies are also investigated. In conclusion, we identify the factors impeding the integration of the methods discussed in fully automated, sample-to-answer POCT devices.

618 citations


Journal ArticleDOI
TL;DR: In this article, the performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at the LHC in 2010.
Abstract: The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta)<2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.

568 citations


Journal ArticleDOI
TL;DR: An in-depth survey of the state-of-the-art of academic research in the field of EDO and other meta-heuristics in four areas: benchmark problems/generators, performance measures, algorithmic approaches, and theoretical studies is carried out.
Abstract: Optimization in dynamic environments is a challenging but important task since many real-world optimization problems are changing over time. Evolutionary computation and swarm intelligence are good tools to address optimization problems in dynamic environments due to their inspiration from natural self-organized systems and biological evolution, which have always been subject to changing environments. Evolutionary optimization in dynamic environments, or evolutionary dynamic optimization (EDO), has attracted a lot of research effort during the last 20 years, and has become one of the most active research areas in the field of evolutionary computation. In this paper, we carry out an in-depth survey of the state-of-the-art of academic research in the field of EDO and other meta-heuristics in four areas: benchmark problems/generators, performance measures, algorithmic approaches, and theoretical studies. The purpose is, for the first time, to (i) provide detailed explanations of how current approaches work; (ii) review the strengths and weaknesses of each approach; (iii) discuss the current assumptions and coverage of existing EDO research; and (iv) identify current gaps, challenges and opportunities in EDO.

566 citations


Journal ArticleDOI
TL;DR: The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at 2.76 TeV by the CMS experiment at the LHC as mentioned in this paper.
Abstract: The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at sqrt(sNN) = 2.76 TeV by the CMS experiment at the LHC. In the transverse momentum range pt = 5-10 GeV/c, the charged particle yield in the most central PbPb collisions is suppressed by up to a factor of 5 compared to the pp yield scaled by the number of incoherent nucleon-nucleon collisions. At higher pt, this suppression is significantly reduced, approaching roughly a factor of 2 for particles with pt in the range pt=40-100 GeV/c.

446 citations


Journal ArticleDOI
TL;DR: There is evidence to suggest that carefully selected music can promote ergogenic and psychological benefits during high-intensity exercise, although it appears to be ineffective in reducing perceptions of exertion beyond the anaerobic threshold.
Abstract: Since a 1997 review by Karageorghis and Terry, which highlighted the state of knowledge and methodological weaknesses, the number of studies investigating musical reactivity in relation to exercise has swelled considerably. In this two-part review paper, the development of conceptual approaches and mechanisms underlying the effects of music are explicated (Part I), followed by a critical review and synthesis of empirical work (spread over Parts I and II). Pre-task music has been shown to optimise arousal, facilitate task-relevant imagery and improve performance in simple motoric tasks. During repetitive, endurance-type activities, self-selected, motivational and stimulative music has been shown to enhance affect, reduce ratings of perceived exertion, improve energy efficiency and lead to increased work output. There is evidence to suggest that carefully selected music can promote ergogenic and psychological benefits during high-intensity exercise, although it appears to be ineffective in reducing perceptions of exertion beyond the anaerobic threshold. The effects of music appear to be at their most potent when it is used to accompany self-paced exercise or in externally valid conditions. When selected according to its motivational qualities, the positive impact of music on both psychological state and performance is magnified. Guidelines are provided for future research and exercise practitioners.

Journal ArticleDOI
04 Jun 2012-PLOS ONE
TL;DR: In a proof-of-concept study, eight patients with depression learned to upregulate brain areas involved in the generation of positive emotions during four neurofeedback sessions, and their clinical symptoms improved significantly.
Abstract: Many patients show no or incomplete responses to current pharmacological or psychological therapies for depression. Here we explored the feasibility of a new brain self-regulation technique that integrates psychological and neurobiological approaches through neurofeedback with functional magnetic resonance imaging (fMRI). In a proof-of-concept study, eight patients with depression learned to upregulate brain areas involved in the generation of positive emotions (such as the ventrolateral prefrontal cortex (VLPFC) and insula) during four neurofeedback sessions. Their clinical symptoms, as assessed with the 17-item Hamilton Rating Scale for Depression (HDRS), improved significantly. A control group that underwent a training procedure with the same cognitive strategies but without neurofeedback did not improve clinically. Randomised blinded clinical trials are now needed to exclude possible placebo effects and to determine whether fMRI-based neurofeedback might become a useful adjunct to current therapies for depression.

Journal ArticleDOI
TL;DR: Numerical simulation results verify the validity of the theory and illustrate the promising potentials of the proposed sensing framework, called Structurally Random Matrix (SRM), which has theoretical sensing performance comparable to that of completely random sensing matrices.
Abstract: This paper introduces a new framework to construct fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we prerandomize the sensing signal by scrambling its sample locations or flipping its sample signs and then fast-transform the randomized samples and finally, subsample the resulting transform coefficients to obtain the final sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications as it has fast computation and supports block-based processing. In addition, we can show that SRM has theoretical sensing performance comparable to that of completely random sensing matrices. Numerical simulation results verify the validity of the theory and illustrate the promising potentials of the proposed sensing framework.
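A minimal NumPy sketch of the SRM-style measurement pipeline described above (pre-randomize by sign flipping, apply a fast transform, then subsample) is given below. The choice of the DCT as the fast transform, the function names and the toy signal are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a structurally-random-matrix-style measurement operator:
# sign flipping -> fast transform -> random subsampling. The DCT and all names
# below are assumptions for illustration only.
import numpy as np
from scipy.fft import dct

def srm_measure(x, m, seed=0):
    """Compute m compressive measurements of signal x with an SRM-style operator."""
    rng = np.random.default_rng(seed)
    n = x.size
    # Step 1: pre-randomize the signal by flipping sample signs.
    signs = rng.choice([-1.0, 1.0], size=n)
    x_rand = signs * x
    # Step 2: apply a fast orthonormal transform (here a DCT).
    coeffs = dct(x_rand, norm="ortho")
    # Step 3: subsample m transform coefficients uniformly at random.
    keep = rng.choice(n, size=m, replace=False)
    return coeffs[keep], (signs, keep)

if __name__ == "__main__":
    n, m = 1024, 128
    rng = np.random.default_rng(1)
    x = np.zeros(n)
    x[rng.choice(n, 10, replace=False)] = 1.0   # sparse test signal
    y, op = srm_measure(x, m)
    print(y.shape)                              # (128,)
```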

Journal ArticleDOI
TL;DR: This paper investigates the robust sliding mode control (SMC) problem for a class of uncertain nonlinear stochastic systems with mixed time delays by employing the idea of delay fractioning and constructing a new Lyapunov-Krasovskii functional.
Abstract: This paper investigates the robust sliding mode control (SMC) problem for a class of uncertain nonlinear stochastic systems with mixed time delays. Both the sectorlike nonlinearities and the norm-bounded uncertainties enter into the system in random ways, and such randomly occurring uncertainties and randomly occurring nonlinearities obey certain mutually uncorrelated Bernoulli distributed white noise sequences. The mixed time delays consist of both the discrete and the distributed delays. The time-varying delays are allowed in state. By employing the idea of delay fractioning and constructing a new Lyapunov-Krasovskii functional, sufficient conditions are established to ensure the stability of the system dynamics in the specified sliding surface by solving a certain semidefinite programming problem. A full-state feedback SMC law is designed to guarantee the reaching condition. A simulation example is given to demonstrate the effectiveness of the proposed SMC scheme.

Journal ArticleDOI
01 Dec 2012
TL;DR: A quantitative survey-based study to examine the relationships between maturity, information quality, analytical decision-making culture, and the use of information for decision-making as significant elements of the success of BIS finds that BIS maturity has a stronger impact on information access quality.
Abstract: The information systems (IS) literature has long emphasized the positive impact of information provided by business intelligence systems (BIS) on decision-making, particularly when organizations operate in highly competitive environments. Evaluating the effectiveness of BIS is vital to our understanding of the value and efficacy of management actions and investments. Yet, while IS success has been well-researched, our understanding of how BIS dimensions are interrelated and how they affect BIS use is limited. In response, we conduct a quantitative survey-based study to examine the relationships between maturity, information quality, analytical decision-making culture, and the use of information for decision-making as significant elements of the success of BIS. Statistical analysis of data collected from 181 medium and large organizations is combined with the use of descriptive statistics and structural equation modeling. Empirical results link BIS maturity to two segments of information quality, namely content and access quality. We therefore propose a model that contributes to understanding of the interrelationships between BIS success dimensions. Specifically, we find that BIS maturity has a stronger impact on information access quality. In addition, only information content quality is relevant for the use of information while the impact of the information access quality is non-significant. We find that an analytical decision-making culture necessarily improves the use of information but it may suppress the direct impact of the quality of the information content.

Journal ArticleDOI
TL;DR: In this paper, a conceptual model for IT innovation adoption process in organizations is developed, which utilizes Diffusion of Innovation (DOI) theory, Theory of Reasoned Action (TRA), Technology Acceptance Model (TAM), Theory of Planned Behaviour (TPB) and a framework that contains characteristics of innovation, organization, environment, chief executive officer (CEO) and user acceptance.
Abstract: In this paper, we develop a conceptual model for IT innovation adoption process in organizations. The model utilizes Diffusion of Innovation (DOI) theory, Theory of Reasoned Action (TRA), Technology Acceptance Model (TAM), Theory of Planned Behaviour (TPB) and a framework that contains characteristics of innovation, organization, environment, chief executive officer (CEO) and user acceptance. The model presents IT adoption as a sequence of stages, progressing from initiation to adoption-decision to implementation. The study presents a model with an interactive process perspective which considers organizational level analysis until acquisition of technology and individual level analysis for the user acceptance of IT.

Journal ArticleDOI
01 Jun 2012
TL;DR: A novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems, which can enable a particle to choose the optimal strategy according to its own local fitness landscape.
Abstract: Particle swarm optimization (PSO) has been shown as an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause the lack of intelligence for a particular particle, which makes it unable to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which can enable a particle to choose the optimal strategy according to its own local fitness landscape. The experimental study on a set of 45 test functions and two real-world problems show that SLPSO has a superior performance in comparison with several other peer algorithms.
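The sketch below illustrates the general idea of per-particle adaptive strategy selection described above: each particle maintains selection probabilities over four velocity-update strategies and adjusts them based on recent success. The four strategies and the probability-update rule used here are simplified placeholders for illustration and are not the operators defined in the paper.

```python
# Simplified illustration of per-particle adaptive strategy selection (SLPSO-like idea).
import numpy as np

def sphere(x):                                   # simple test objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_particles, iters = 10, 20, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
probs = np.full((n_particles, 4), 0.25)          # per-particle strategy probabilities
w, c = 0.72, 1.49

for _ in range(iters):
    for i in range(n_particles):
        s = rng.choice(4, p=probs[i])            # pick a strategy for this particle
        r1, r2 = rng.random(dim), rng.random(dim)
        if s == 0:    # exploitation: learn from own best
            vel[i] = w * vel[i] + c * r1 * (pbest[i] - pos[i])
        elif s == 1:  # convergence: learn from own best and global best
            vel[i] = w * vel[i] + c * r1 * (pbest[i] - pos[i]) + c * r2 * (gbest - pos[i])
        elif s == 2:  # jumping out: learn from a random peer's best
            j = rng.integers(n_particles)
            vel[i] = w * vel[i] + c * r1 * (pbest[j] - pos[i])
        else:         # exploration: random perturbation
            vel[i] = w * vel[i] + rng.normal(0, 0.5, dim)
        pos[i] += vel[i]
        f = sphere(pos[i])
        improved = f < pbest_f[i]
        if improved:
            pbest[i], pbest_f[i] = pos[i].copy(), f
            if f < sphere(gbest):
                gbest = pos[i].copy()
        # Reward the chosen strategy when it improved the particle's own best.
        probs[i, s] = max(probs[i, s] + (0.02 if improved else -0.01), 0.05)
        probs[i] /= probs[i].sum()

print("best value:", sphere(gbest))
```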

Journal ArticleDOI
TL;DR: A new framework is proposed for evaluating competing prediction systems based upon an unbiased statistic, Standardised Accuracy, testing the result likelihood relative to the baseline technique of random 'predictions', that is guessing, and calculation of effect sizes, which leads to meaningful results.
Abstract: Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results. Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation to interpret results with a particular focus on continuous prediction systems. Method: A new framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random 'predictions', that is guessing, and (3) calculation of effect sizes. Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing. Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
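The sketch below illustrates the framework's core quantities under commonly used definitions: the mean absolute residual (MAR), Standardised Accuracy as SA = (1 - MAR / MAR_P0) x 100, where MAR_P0 is the MAR of random guessing (predicting the actual value of a randomly chosen other case), and an effect size relative to the guessing baseline (computed here as (MAR_P0 - MAR) / s_P0, so positive values mean better than guessing). It is an illustration under these assumed definitions, not the paper's reference implementation.

```python
# Illustrative computation of Standardised Accuracy and an effect size vs. random guessing.
import numpy as np

def mar(actual, predicted):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

def random_guess_mar(actual, runs=1000, seed=0):
    """MAR of the baseline 'predict the actual of a randomly chosen other case'."""
    rng = np.random.default_rng(seed)
    actual = np.asarray(actual, dtype=float)
    n = actual.size
    mars = []
    for _ in range(runs):
        guess = np.array([rng.choice(np.delete(actual, i)) for i in range(n)])
        mars.append(mar(actual, guess))
    return float(np.mean(mars)), float(np.std(mars, ddof=1))

def standardised_accuracy(actual, predicted, runs=1000):
    mar_p = mar(actual, predicted)
    mar_p0, sd_p0 = random_guess_mar(actual, runs)
    sa = (1.0 - mar_p / mar_p0) * 100.0          # % improvement over random guessing
    delta = (mar_p0 - mar_p) / sd_p0             # effect size relative to guessing
    return sa, delta

# Toy example: effort actuals vs. predictions from some estimation model
actual = [120, 80, 300, 45, 210, 95]
predicted = [100, 90, 280, 60, 190, 110]
print(standardised_accuracy(actual, predicted))
```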

Journal ArticleDOI
TL;DR: In vivo vitellogenin (VTG) induction studies are used to determine the relative potency of the steroid estrogens to induce VTG and, based on the relative differences in in vivo VTG induction, the authors derive PNECs of 6 and 60 ng/L for E1 and E3, respectively.
Abstract: The authors derive predicted-no-effect concentrations (PNECs) for the steroid estrogens (estrone [E1], 17β-estradiol [E2], estriol [E3], and 17α-ethinylestradiol [EE2]) appropriate for use in risk assessment of aquatic organisms. In a previous study, they developed a PNEC of 0.35 ng/L for EE2 from a species sensitivity distribution (SSD) based on all available chronic aquatic toxicity data. The present study updates that PNEC using recently published data to derive a PNEC of 0.1 ng/L for EE2. For E2, fish were the most sensitive taxa, and chronic reproductive effects were the most sensitive endpoint. Using the SSD methodology, we derived a PNEC of 2 ng/L for E2. Insufficient data were available to construct an SSD for E1 or E3. Therefore, the authors used in vivo vitellogenin (VTG) induction studies to determine the relative potency of the steroid estrogens to induce VTG. Based on the relative differences between in vivo VTG induction, they derive PNECs of 6 and 60 ng/L for E1 and E3, respectively. Thus, for long-term exposures to steroid estrogens in surface water (i.e., >60 d), the PNECs are 6, 2, 60, and 0.1 ng/L for E1, E2, E3, and EE2, respectively. Higher PNECs are recommended for short-term (i.e., a few days or weeks) exposures.
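The tiny calculation below reproduces the relative-potency scaling implied by the reported values: starting from the SSD-based PNEC of 2 ng/L for E2, relative potencies of roughly 1/3 for E1 and 1/30 for E3 (assumed here only to match the reported figures) yield PNECs of 6 and 60 ng/L.

```python
# Worked illustration of relative-potency scaling; the potency ratios are assumptions
# chosen only to reproduce the PNEC values reported in the abstract.
pnec_e2 = 2.0                                   # ng/L, from the species sensitivity distribution
relative_potency = {"E1": 1 / 3, "E3": 1 / 30}  # VTG-induction potency relative to E2 (assumed)

for estrogen, potency in relative_potency.items():
    pnec = pnec_e2 / potency                    # less potent estrogen -> higher PNEC
    print(f"PNEC({estrogen}) = {pnec:.0f} ng/L")
# Prints 6 ng/L for E1 and 60 ng/L for E3, matching the reported values.
```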

Journal ArticleDOI
TL;DR: In this paper, the authors explored the relationship between HRQoL and physical activity, and examined how this relationship differs across objective and subjective measures of PA, within the context of a large representative national survey from England.
Abstract: Research on the relationship between Health Related Quality of Life (HRQoL) and physical activity (PA) has, to date, rarely investigated how this relationship differs across objective and subjective measures of PA. The aim of this paper is to explore the relationship between HRQoL and PA, and examine how this relationship differs across objective and subjective measures of PA, within the context of a large representative national survey from England. Using a sample of 5,537 adults (40–60 years) from a representative national survey in England (Health Survey for England 2008), Tobit regressions with upper censoring were employed to model the association between HRQoL and objective, and subjective measures of PA, controlling for potential confounders. We tested the robustness of this relationship across specific types of PA. HRQoL was assessed using the summary measure of health state utility value derived from the EuroQol-5 Dimensions (EQ-5D) whilst PA was assessed via a subjective measure (questionnaire) and an objective measure (accelerometer: Actigraph model GT1M). The actigraph was worn (at the waist) for 7 days (during waking hours) by a randomly selected sub-sample of the HSE 2008 respondents (4,507 adults – 16 plus years), with a valid day constituting 10 hours. Analysis was conducted in 2010. Findings suggest that higher levels of PA are associated with better HRQoL (regression coefficient: 0.026 to 0.072). This relationship is consistent across different measures and types of PA, although differences in the magnitude of the HRQoL benefit associated with objective and subjective (regression coefficient: 0.047) measures of PA are noticeable, with the former measure being associated with a relatively better HRQoL (regression coefficient: 0.072). Higher levels of PA are associated with better HRQoL. Using an objective measure of PA compared with a subjective one shows a relatively better HRQoL.
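As a stand-in for the analysis described (Tobit regression with upper censoring of EQ-5D utilities at 1), the sketch below fits an upper-censored Tobit model by maximum likelihood on toy data. The variable names, censoring limit and simulated data are assumptions for illustration only.

```python
# Minimal upper-censored Tobit regression fitted by maximum likelihood (illustrative).
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, y, X, upper=1.0):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    censored = y >= upper
    ll = np.zeros(y.size)
    # Uncensored observations contribute the normal density.
    ll[~censored] = stats.norm.logpdf(y[~censored], mu[~censored], sigma)
    # Observations at the ceiling contribute log P(y* >= upper).
    ll[censored] = stats.norm.logsf(upper, mu[censored], sigma)
    return -ll.sum()

rng = np.random.default_rng(0)
n = 500
activity = rng.normal(size=n)                      # e.g. standardized physical activity
X = np.column_stack([np.ones(n), activity])
y_star = 0.85 + 0.05 * activity + rng.normal(0, 0.1, n)
y = np.minimum(y_star, 1.0)                        # EQ-5D utility cannot exceed 1

res = optimize.minimize(tobit_negloglik, x0=np.array([0.5, 0.0, np.log(0.2)]),
                        args=(y, X), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print("coefficients:", beta_hat, "sigma:", sigma_hat)
```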

Journal ArticleDOI
TL;DR: The extended Kalman filtering problem is investigated for a class of nonlinear systems with multiple missing measurements over a finite horizon and it is shown that the desired filter can be obtained in terms of the solutions to two Riccati-like difference equations that are of a form suitable for recursive computation in online applications.
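The sketch below is a generic illustration of filtering with randomly missing measurements, where each measurement arrives according to a Bernoulli indicator and the correction step is skipped when it does not. For simplicity it uses a linear Kalman recursion on a toy model; it is not the finite-horizon extended Kalman filter with Riccati-like difference equations derived in the paper.

```python
# Toy Kalman-type recursion with Bernoulli-missing measurements (illustration only).
import numpy as np

rng = np.random.default_rng(0)
p_arrive = 0.8                            # probability a measurement is received (assumed)
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # simple constant-velocity model
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

x_true = np.array([0.0, 1.0])
x_est = np.zeros(2)
P = np.eye(2)

for k in range(50):
    # Simulate the true system and a possibly missing measurement.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    arrived = rng.random() < p_arrive
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R) if arrived else None

    # Prediction step (always performed).
    x_est = F @ x_est
    P = F @ P @ F.T + Q

    # Correction step only when the measurement actually arrived.
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P

print("final estimate:", x_est, "truth:", x_true)
```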

Journal ArticleDOI
TL;DR: In this article, the authors presented the results of a computational study on the energy consumption and related CO2 emissions for heating and cooling of an office building within the Urban Heat Island of London, currently and in the future.

Journal ArticleDOI
TL;DR: In this paper, the suppression of the individual Υ(nS) states in PbPb collisions with respect to their yields in pp data has been measured; the Υ(nS) yields are extracted from the dimuon invariant mass spectra, and the results demonstrate the sequential suppression of the Υ(nS) states in PbPb collisions.
Abstract: The suppression of the individual Υ(nS) states in PbPb collisions with respect to their yields in pp data has been measured. The PbPb and pp data sets used in the analysis correspond to integrated luminosities of 150 μb^(-1) and 230 nb^(-1), respectively, collected in 2011 by the CMS experiment at the LHC, at a center-of-mass energy per nucleon pair of 2.76 TeV. The Υ(nS) yields are measured from the dimuon invariant mass spectra. The suppression of the Υ(nS) yields in PbPb relative to the yields in pp, scaled by the number of nucleon-nucleon collisions, R_(AA), is measured as a function of the collision centrality. Integrated over centrality, the R_(AA) values are 0.56±0.08(stat)±0.07(syst), 0.12±0.04(stat)±0.02(syst), and lower than 0.10 (at 95% confidence level), for the Υ(1S), Υ(2S), and Υ(3S) states, respectively. The results demonstrate the sequential suppression of the Υ(nS) states in PbPb collisions at LHC energies.

Journal ArticleDOI
TL;DR: Results show that the integrated geometric error modeling, identification and compensation method is effective and applicable in multi-axis machine tools.
Abstract: This paper presents an integrated geometric error modeling, identification and compensation method for machine tools. Regarding a machine tool as a rigid multi-body system (MBS), a geometric error model has been established. It supports the identification of the 21 translational geometric error parameters associated with linear-motion axes based on a laser interferometer, and 6 angular geometric error parameters for each rotation axis based on a ball-bar. Based on this model, a new identification method is proposed to recognize these geometric errors. Finally, the identified geometric errors are compensated by correcting corresponding NC codes. In order to validate our method, a prototype software system has been developed, which can be used for conducting tests on any type of CNC machine tool with not more than five axes. An experiment has been conducted on a five-axis machine center with rotary table and tilting head; the results show that the integrated geometric error modeling, identification and compensation method is effective and applicable in multi-axis machine tools.
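The sketch below illustrates the usual building block of such rigid multi-body error models: composing small-angle geometric error terms into homogeneous transformation matrices (HTMs) and comparing the resulting pose with the nominal one. The symbols and numerical values are illustrative assumptions, not the paper's identified error parameters.

```python
# Illustrative small-angle HTM error model for two linear axes (assumed values).
import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """Small-angle homogeneous transform: translational errors dx,dy,dz and angular errors ex,ey,ez."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ ez, 1.0, -ex, dy],
                     [-ey,  ex, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def nominal_translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Actual pose = nominal motion of each axis composed with its error HTM.
T_x = nominal_translation(100.0, 0, 0) @ error_htm(0.004, 0.001, -0.002, 1e-5, 2e-5, -1e-5)
T_y = nominal_translation(0, 50.0, 0) @ error_htm(-0.001, 0.003, 0.002, -2e-5, 1e-5, 3e-5)
T_actual = T_x @ T_y

tool_tip = np.array([0.0, 0.0, 0.0, 1.0])          # tool tip in its local frame
deviation = (T_actual - nominal_translation(100.0, 50.0, 0)) @ tool_tip
print("volumetric error at this pose (mm):", deviation[:3])
```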

Journal ArticleDOI
TL;DR: The addressed synchronization control problem is first formulated as an exponentially mean-square stabilization problem for a new class of dynamical networks that involve both the multiple probabilistic interval delays (MPIDs) and the sector-bounded nonlinearities (SBNs).
Abstract: This technical note is concerned with the sampled-data synchronization control problem for a class of dynamical networks. The sampling period considered here is assumed to be time-varying that switches between two different values in a random way with given probability. The addressed synchronization control problem is first formulated as an exponentially mean-square stabilization problem for a new class of dynamical networks that involve both the multiple probabilistic interval delays (MPIDs) and the sector-bounded nonlinearities (SBNs). Then, a novel Lyapunov functional is constructed to obtain sufficient conditions under which the dynamical network is exponentially mean-square stable. Both Gronwall's inequality and Jenson integral inequality are utilized to substantially simplify the derivation of the main results. Subsequently, a set of sampled-data synchronization controllers is designed in terms of the solution to certain matrix inequalities that can be solved effectively by using available software. Finally, a numerical simulation example is employed to show the effectiveness of the proposed sampled-data synchronization control scheme.

Journal ArticleDOI
TL;DR: In this article, an emic approach is proposed to identify emergent and situated categories of diversity ex post, as embedded in a specific time and place, and a five-step research guide is presented.
Abstract: This paper presents an emic approach, which is sensitive to the emergence of new categories of difference, in intersectional study of workforce diversity. The paper first provides a comprehensive review of the literature on diversity at work in the business and management field, identifying that this literature is predominantly etic in nature, as it focuses on pre-established, rather than emergent, categories of difference. Next, an emic approach to researching diversity at work is offered. In offering an emic approach, the key distinction the paper makes is the direction of the investigation. Unlike the dominant etic approach, which adopts pre-established (ex ante) diversity categories, the emic perspective proposed identifies emergent and situated categories of diversity ex post, as embedded in a specific time and place. In order to operationalize the emic approach, the use of the Bourdieuan theory of capitals is suggested, and a five-step research guide is presented.

Journal ArticleDOI
TL;DR: The H∞ filtering problem is investigated for a class of nonlinear systems with randomly occurring incomplete information, namely, randomly occurring sensor saturation, and the regional l2 gain filtering feature is specifically developed for the random saturation nonlinearity.

Journal ArticleDOI
TL;DR: In this paper, the basic concepts and applications of the phase-field-crystal (PFC) method, which is one of the latest simulation methodologies in materials science for problems, where atomic and microscales are tightly coupled.
Abstract: Here, we review the basic concepts and applications of the phase-field-crystal (PFC) method, which is one of the latest simulation methodologies in materials science for problems, where atomic- and microscales are tightly coupled. The PFC method operates on atomic length and diffusive time scales, and thus constitutes a computationally efficient alternative to molecular simulation methods. Its intense development in materials science started fairly recently following the work by Elder et al. [Phys. Rev. Lett. 88 (2002), p. 245701]. Since these initial studies, dynamical density functional theory and thermodynamic concepts have been linked to the PFC approach to serve as further theoretical fundaments for the latter. In this review, we summarize these methodological development steps as well as the most important applications of the PFC method with a special focus on the interaction of development steps taken in hard and soft matter physics, respectively. Doing so, we hope to present today's state of the art in PFC modelling as well as the potential, which might still arise from this method in physics and materials science in the nearby future.
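A minimal pseudo-spectral sketch of conserved PFC dynamics in the form popularized by Elder et al., d(psi)/dt = Laplacian[(r + (1 + Laplacian)^2) psi + psi^3], integrated with a semi-implicit Fourier step, is given below. The grid size, the value of r and the initial condition are arbitrary illustrative choices, not parameters from the review.

```python
# Semi-implicit pseudo-spectral step for the standard PFC equation (illustrative parameters).
import numpy as np

N, dx, dt, r = 128, np.pi / 4, 0.5, -0.25
rng = np.random.default_rng(0)
psi = -0.285 + 0.01 * rng.standard_normal((N, N))   # mean density plus small noise

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
lin = -k2 * (r + (1 - k2) ** 2)                     # linear operator in Fourier space

for step in range(2000):
    psi_hat = np.fft.fft2(psi)
    nonlin_hat = np.fft.fft2(psi ** 3)
    # Linear part treated implicitly, cubic term explicitly.
    psi_hat = (psi_hat - dt * k2 * nonlin_hat) / (1.0 - dt * lin)
    psi = np.real(np.fft.ifft2(psi_hat))

print("density range after relaxation:", psi.min(), psi.max())
```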

Journal ArticleDOI
06 Jun 2012
TL;DR: In this article, the dijet momentum balance and angular correlations are studied as a function of collision centrality and leading jet transverse momentum for PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV.
Abstract: Dijet production in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV is studied with the CMS detector at the LHC. A data sample corresponding to an integrated luminosity of 150 μb−1 is analyzed. Jets are reconstructed using combined information from tracking and calorimetry, using the anti-kT algorithm with R = 0.3. The dijet momentum balance and angular correlations are studied as a function of collision centrality and leading jet transverse momentum. For the most peripheral PbPb collisions, good agreement of the dijet momentum balance distributions with pp data and reference calculations at the same collision energy is found, while more central collisions show a strong imbalance of leading and subleading jet transverse momenta attributed to the jet-quenching effect. The dijets in central collisions are found to be more unbalanced than the reference, for leading jet transverse momenta up to the highest values studied.