
Showing papers by "Istanbul Technical University published in 2018"


Proceedings ArticleDOI
01 Jan 2018
TL;DR: SPReID, as discussed by the authors, integrates human semantic parsing into person re-identification and not only considerably outperforms its baseline counterpart but also achieves state-of-the-art performance; moreover, with a simple yet effective training strategy, standard deep convolutional architectures such as Inception-V3 and ResNet-152 can do the same with no modification.
Abstract: Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its baseline counterpart, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on the full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by ~17% in mAP and ~6% in rank-1, CUHK03 [24] by ~4% in rank-1 and DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.
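
To make the idea concrete, the sketch below shows one simple way pixel-level parsing masks can be used to pool backbone features into per-region descriptors. The shapes, the five-region split, and the plain mask-weighted averaging are illustrative assumptions, not the authors' exact SPReID architecture.

```python
import numpy as np

def mask_weighted_pooling(feature_map, parsing_probs):
    """Aggregate convolutional features with semantic-parsing probability maps.

    feature_map:   (C, H, W) activations from a backbone such as Inception-V3.
    parsing_probs: (K, H, W) per-pixel probabilities for K body-part regions
                   (e.g. head, upper body, lower body, shoes, full foreground).
    Returns a (K, C) matrix: one pooled descriptor per semantic region.
    """
    C, H, W = feature_map.shape
    K = parsing_probs.shape[0]
    feats = feature_map.reshape(C, H * W)              # (C, HW)
    masks = parsing_probs.reshape(K, H * W)            # (K, HW)
    weights = masks / (masks.sum(axis=1, keepdims=True) + 1e-8)
    return weights @ feats.T                           # (K, C) region descriptors

# Toy example with random activations and masks.
rng = np.random.default_rng(0)
fmap = rng.random((2048, 24, 8))         # hypothetical backbone feature map
probs = rng.random((5, 24, 8))           # 5 assumed semantic regions
descriptors = mask_weighted_pooling(fmap, probs)
print(descriptors.shape)                 # (5, 2048)
```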

467 citations


Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Federico Ambrogi1  +2238 moreInstitutions (159)
TL;DR: In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, are presented.
Abstract: Many measurements and searches for physics beyond the standard model at the LHC rely on the efficient identification of heavy-flavour jets, i.e. jets originating from bottom or charm quarks. In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV, are presented. Heavy-flavour jet identification algorithms have been improved compared to those used previously at centre-of-mass energies of 7 and 8 TeV. For jets with transverse momenta in the range expected in simulated events, these new developments result in an efficiency of 68% for the correct identification of a b jet for a probability of 1% of misidentifying a light-flavour jet. The improvement in relative efficiency at this misidentification probability is about 15%, compared to previous CMS algorithms. In addition, for the first time algorithms have been developed to identify jets containing two b hadrons in Lorentz-boosted event topologies, as well as to tag c jets. The large data sample recorded in 2016 at a centre-of-mass energy of 13 TeV has also allowed the development of new methods to measure the efficiency and misidentification probability of heavy-flavour jet identification algorithms. The b jet identification efficiency is measured with a precision of a few per cent at moderate jet transverse momenta (between 30 and 300 GeV) and about 5% at the highest jet transverse momenta (between 500 and 1000 GeV).

454 citations


Journal ArticleDOI
TL;DR: The results are compared with Pythagorean Fuzzy Failure Modes and Effects Analysis, and it is revealed that the proposed method produces reliable and informative outcomes, better representing the vagueness of the decision-making process.

344 citations


Journal ArticleDOI
TL;DR: In this article, a new surface-atmospheric dataset for driving ocean-sea-ice models based on Japanese 55-year atmospheric reanalysis (JRA-55), referred to here as JRA55-do, is presented.

340 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016.
Abstract: The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013–2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, previously. We dedicate this paper to the memory of Prof. Alberto Benvenuti, whose work was fundamental for the CMS muon detector.

303 citations


Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2240 moreInstitutions (157)
TL;DR: In this article, a measurement of the H→ττ signal strength is performed using events recorded in proton-proton collisions by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13 TeV.

250 citations


Journal ArticleDOI
TL;DR: These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.
Abstract: A search for a heavy neutral lepton N of Majorana nature decaying into a W boson and a charged lepton is performed using the CMS detector at the LHC. The targeted signature consists of three prompt charged leptons in any flavor combination of electrons and muons. The data were collected in proton-proton collisions at a center-of-mass energy of 13 TeV, with an integrated luminosity of 35.9 fb^(−1). The search is performed in the N mass range between 1 GeV and 1.2 TeV. The data are found to be consistent with the expected standard model background. Upper limits are set on the values of |V_(eN)|^2 and |V_(μN)|^2, where V_(lN) is the matrix element describing the mixing of N with the standard model neutrino of flavor l. These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.

230 citations


Journal ArticleDOI
TL;DR: In this paper, the design and manufacturing of a highly sensitive capacitive-based soft pressure sensor for wearable electronics applications are presented, which is embedded into a textile glove for grasp motion monitoring during activities of daily living.
Abstract: Parallel plate capacitive sensing technology is popular due to signal repeatability, temperature insensitivity, and relative simplicity of design and construction.[34,35] In this approach, when an external force is applied to the soft pressure sensor, the dielectric layer thickness of the sensor varies, which leads to a change in the capacitance of the sensor. However, due to relatively small changes in the capacitance of parallel plate sensors under loading, achievable sensitivities are typically very low.[21] Therefore, most studies focus on the modification of the dielectric layer to increase sensitivity. In this context, efforts toward increased sensitivity can be grouped into two main categories: surface modification of the elastomer layers and the creation of micropores within the dielectric layer. In the first approach, topographical features[36–40] (such as nanoscale pyramids, microstructured line patterns, or micrometer-scale circular pillars) are created on the elastomer surface via surface micromachining methods (such as photolithography and molding). However, it should be noted here that, even though high sensitivity can be achieved using surface micromachining, the working range is typically limited to <10 kPa, which is undesirable for most wearable applications. The latter approach focuses on the creation of a porous dielectric layer,[41–44] and a recent trend is to use solid particle leaching[44–48] to create micropores within the silicone elastomer. As commercially available sugar cubes and silicone elastomers can be used, manufacturing is quick, simple, and low cost. It has been shown that increased sensitivity over the tactile pressure range was achieved using this method due to the reduced stiffness of the dielectric material as well as the increased effective dielectric constant due to the presence of air gaps within the microporous structure. Capacitance values are typically on the order of several femtofarads due to the dielectric layer thickness (the height of the sugar cube templates is around 10 mm), but a higher baseline capacitance is needed for sufficient signal-to-noise in the presence of parasitic capacitances within the readout circuitry in these systems. Besides, carbon-based materials[46] and conductive thin films[48] are generally employed to construct electrode layers and are used in combination with the modified dielectric layer for the formation of the soft sensor. However, to integrate these sensors into the system for the creation of wearable electronic devices, the sensors themselves must be flexible and mechanically robust. In this paper, the design and manufacturing of a highly sensitive capacitive-based soft pressure sensor for wearable electronics applications are presented. Toward this aim, two types of soft conductive fabrics (knitted and woven), as well as two types of sacrificial particles (sugar granules and salt crystals) to create micropores within the dielectric layer of the capacitive sensor are evaluated, and the combined effects on the sensor's overall performance are assessed. It is found that a combination of the conductive knit electrode and higher dielectric porosity (generated using the larger sugar granules) yields higher sensitivity (121 × 10⁻⁴ kPa⁻¹) due to greater compressibility and the formation of air gaps between the silicone elastomer and the conductive knit electrode among the other design considerations in this study.
As a practical demonstration, the capacitive sensor is embedded into a textile glove for grasp motion monitoring during activities of daily living.
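
The sensing principle described above reduces to the parallel-plate relation C = ε₀ε_r·A/d: pressure thins the porous dielectric (and closes air gaps, raising its effective permittivity), which raises the capacitance. Below is a minimal sketch with purely illustrative geometry and material values, not figures from the paper.

```python
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def parallel_plate_capacitance(eps_r, area_m2, thickness_m):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Illustrative numbers only: a 10 mm x 10 mm sensor with a 1 mm dielectric.
area = 10e-3 * 10e-3
c0 = parallel_plate_capacitance(eps_r=3.0, area_m2=area, thickness_m=1.0e-3)

# A porous dielectric compresses more under load; assume 20% thickness loss
# and a slight rise in effective permittivity as air gaps close.
c_loaded = parallel_plate_capacitance(eps_r=3.2, area_m2=area, thickness_m=0.8e-3)

rel_change = (c_loaded - c0) / c0
print(f"C0 = {c0*1e12:.2f} pF, loaded = {c_loaded*1e12:.2f} pF, dC/C0 = {rel_change:.2f}")
# Sensitivity is then (dC/C0) divided by the applied pressure step.
```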

219 citations


Journal ArticleDOI
TL;DR: An integrated hybrid methodology for the analysis of Turkey's energy sector using Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis, the Analytic Network Process (ANP), and the weighted fuzzy Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) to formulate and holistically analyze the energy strategy alternatives and priorities is proposed in this paper.
Abstract: Energy planning involves a perpetual process of reevaluating alternative energy strategies. Authorities responsible for energy planning and management have to adjust their strategies according to new and improved alternative solutions based on sustainability criteria. In this study, we propose an integrated hybrid methodology for the analysis of Turkey’s energy sector using Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis, the Analytic Network Process (ANP), and the weighted fuzzy Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) to formulate and holistically analyze the energy strategy alternatives and priorities. The proposed methodology first identifies the relevant criteria and sub-criteria using a SWOT analysis. Then, the ANP approach, one of the popular multi-criteria decision making (MCDM) methods, is employed to determine the weights of the SWOT factors and sub-factors. Finally, the fuzzy TOPSIS methodology is applied to prioritize the alternative energy strategies. We discuss the obtained results for the development of long-range alternative energy strategies. The results showed that turning the country into an energy hub and an energy terminal by effectively using its geo-strategic position within the framework of regional cooperation is the most important priority. On the other hand, using nuclear energy technologies within the energy supply strategies was found to be the least favored priority.
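
As a rough illustration of the final ranking step, the sketch below applies a plain (non-fuzzy) weighted TOPSIS to a hypothetical decision matrix; the strategies, criterion weights (standing in for ANP output), and scores are invented for illustration, and the paper's fuzzy extension is not reproduced.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """Rank alternatives with classical TOPSIS.

    decision_matrix: (alternatives x criteria) scores.
    weights:         criterion weights (e.g. obtained from ANP), summing to 1.
    benefit_mask:    True where larger is better, False for cost criteria.
    """
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    v = norm * weights
    ideal = np.where(benefit_mask, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit_mask, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)  # closeness coefficient, higher = better

# Hypothetical example: 3 energy strategies scored on 4 SWOT-derived criteria.
scores = np.array([[7, 6, 8, 5],
                   [5, 8, 6, 7],
                   [6, 5, 5, 9]], dtype=float)
weights = np.array([0.4, 0.3, 0.2, 0.1])        # assumed ANP output
benefit = np.array([True, True, False, True])   # third criterion is a cost
print(topsis(scores, weights, benefit))
```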

201 citations


Journal ArticleDOI
TL;DR: In this paper, a unified model for NOMA, including uplink and downlink transmissions, along with the extensions to multiple input multiple output (MIMO) and cooperative communication scenarios is presented.
Abstract: Today’s wireless networks allocate radio resources to users based on the orthogonal multiple access (OMA) principle. However, as the number of users increases, OMA based approaches may not meet the stringent emerging requirements including very high spectral efficiency, very low latency, and massive device connectivity. Nonorthogonal multiple access (NOMA) principle emerges as a solution to improve the spectral efficiency while allowing some degree of multiple access interference at receivers. In this tutorial style paper, we target providing a unified model for NOMA, including uplink and downlink transmissions, along with the extensions to multiple input multiple output and cooperative communication scenarios. Through numerical examples, we compare the performances of OMA and NOMA networks. Implementation aspects and open issues are also detailed.
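
The power-domain NOMA principle the paper builds on, superposition coding at the transmitter and successive interference cancellation (SIC) at the stronger receiver, can be illustrated with a two-user downlink rate calculation. The channel gains, power split, and SNR below are arbitrary illustrative values, not taken from the paper.

```python
import math

def noma_downlink_rates(g_near, g_far, alpha_far, snr_tx):
    """Two-user downlink NOMA with SIC at the near (strong) user.

    g_near, g_far: channel power gains, with g_near > g_far.
    alpha_far:     fraction of transmit power allocated to the far (weak) user.
    snr_tx:        total transmit SNR (linear scale).
    """
    alpha_near = 1.0 - alpha_far
    # Far user decodes its own signal, treating the near user's signal as noise.
    r_far = math.log2(1 + alpha_far * g_far * snr_tx /
                          (alpha_near * g_far * snr_tx + 1))
    # Near user first cancels the far user's signal (SIC), then decodes its own.
    r_near = math.log2(1 + alpha_near * g_near * snr_tx)
    return r_near, r_far

def oma_rates(g_near, g_far, snr_tx):
    """Orthogonal access: each user gets half of the time/frequency resource."""
    return (0.5 * math.log2(1 + g_near * snr_tx),
            0.5 * math.log2(1 + g_far * snr_tx))

# Illustrative comparison (gains and power split are assumptions).
noma = noma_downlink_rates(g_near=1.0, g_far=0.1, alpha_far=0.8, snr_tx=100.0)
oma = oma_rates(g_near=1.0, g_far=0.1, snr_tx=100.0)
print("NOMA (near, far) bits/s/Hz:", noma, "sum:", sum(noma))
print("OMA  (near, far) bits/s/Hz:", oma, "sum:", sum(oma))
```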

195 citations


Journal ArticleDOI
TL;DR: In this paper, the LIGO interferometers detected the gravitational wave (GW) signal (GW170817) from the coalescence of binary neutron stars and this signal was also simultaneously seen throughout the electromagnetic (EM) spectrum from radio waves to gamma-rays.
Abstract: On August 17, 2017 the LIGO interferometers detected the gravitational wave (GW) signal (GW170817) from the coalescence of binary neutron stars. This signal was also simultaneously seen throughout the electromagnetic (EM) spectrum from radio waves to gamma-rays. We point out that this simultaneous detection of GW and EM signals rules out a class of modified gravity theories, termed ``dark matter emulators,'' which dispense with the need for dark matter by making ordinary matter couple to a different metric from that of GW. We discuss other kinds of modified gravity theories which dispense with the need for dark matter and are still viable. This simultaneous observation also provides the first observational test of Einstein's Weak Equivalence Principle (WEP) between gravitons and photons. We estimate the Shapiro time delay due to the gravitational potential of the total dark matter distribution along the line of sight (complementary to the calculation in arXiv:1710.05834) to be about 400 days. Using this estimate for the Shapiro delay and from the time difference of 1.7 seconds between the GW signal and gamma-rays, we can constrain violations of WEP using the parameterized post-Newtonian (PPN) parameter $\gamma$; the resulting bound is $|\gamma_{\rm {GW}}-\gamma_{\rm{EM}}|<9.8 \times 10^{-8}$.
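
The quoted bound follows from the numbers above: under the usual assumption that the Shapiro delay of each messenger scales with (1+γ)/2, the fractional PPN difference is bounded by twice the observed GW–EM arrival-time difference divided by the Shapiro delay,

$$
\left|\gamma_{\mathrm{GW}}-\gamma_{\mathrm{EM}}\right|
\;\lesssim\; \frac{2\,\Delta t}{\Delta t_{\mathrm{Shapiro}}}
\;=\; \frac{2 \times 1.7\ \mathrm{s}}{400\ \mathrm{days} \times 86400\ \mathrm{s/day}}
\;\approx\; 9.8 \times 10^{-8}.
$$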

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2357 moreInstitutions (197)
TL;DR: In this article, a low-mass search for resonances decaying into pairs of jets is performed using proton-proton collision data collected at √s = 13 TeV corresponding to an integrated luminosity of up to 36 fb−1.
Abstract: Searches for resonances decaying into pairs of jets are performed using proton-proton collision data collected at √s = 13 TeV corresponding to an integrated luminosity of up to 36 fb−1. A low-mass search, for resonances with masses between 0.6 and 1.6 TeV, is performed based on events with dijets reconstructed at the trigger level from calorimeter information. A high-mass search, for resonances with masses above 1.6 TeV, is performed using dijets reconstructed offline with a particle-flow algorithm. The dijet mass spectrum is well described by a smooth parameterization and no evidence for the production of new particles is observed. Upper limits at 95% confidence level are reported on the production cross section for narrow resonances with masses above 0.6 TeV. In the context of specific models, the limits exclude string resonances with masses below 7.7 TeV, scalar diquarks below 7.2 TeV, axigluons and colorons below 6.1 TeV, excited quarks below 6.0 TeV, color-octet scalars below 3.4 TeV, W′ bosons below 3.3 TeV, Z′ bosons below 2.7 TeV, Randall-Sundrum gravitons below 1.8 TeV and in the range 1.9 to 2.5 TeV, and dark matter mediators below 2.6 TeV. The limits on both vector and axial-vector mediators, in a simplified model of interactions between quarks and dark matter particles, are presented as functions of dark matter particle mass and coupling to quarks. Searches are also presented for broad resonances, including for the first time spin-1 resonances with intrinsic widths as large as 30% of the resonance mass. The broad resonance search improves and extends the exclusions of a dark matter mediator to larger values of its mass and coupling to quarks.

Journal ArticleDOI
TL;DR: In this article, the authors investigated meeting the electrical energy demand of off-grid vacation homes via photovoltaic/wind/fuel cell hybrid energy systems from a techno-economic perspective.

Journal ArticleDOI
TL;DR: In this paper, the existence of solutions for two types of high order fractional integro-differential equations is studied, with a focus on the CFD and DCF derivations.
Abstract: By using the fractional Caputo–Fabrizio derivative, we introduce two new types of high order derivations, called CFD and DCF. We also study the existence of solutions for two such high order fractional integro-differential equations, and we illustrate our results by providing two examples.
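
For context, the first-order Caputo–Fabrizio derivative on which these higher-order CFD and DCF constructions are built is commonly written, for 0 < α < 1 and a normalization function M(α) with M(0) = M(1) = 1, as the exponential-kernel integral below; the exact higher-order definitions used in the paper are not reproduced here.

$$
\left(^{\mathrm{CF}}D^{\alpha}f\right)(t)
\;=\; \frac{M(\alpha)}{1-\alpha}\int_{0}^{t} f'(s)\,
\exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right)\mathrm{d}s,
\qquad 0<\alpha<1.
$$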

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2314 moreInstitutions (196)
TL;DR: A statistical combination of several searches for the electroweak production of charginos and neutralinos is presented in this article, where a targeted analysis requiring three or more charged leptons (electrons or muons) is presented, focusing on the challenging scenario in which the difference in mass between the two least massive neutralinos is approximately equal to the mass of the Z boson.
Abstract: A statistical combination of several searches for the electroweak production of charginos and neutralinos is presented. All searches use proton-proton collision data at $ \sqrt{s}=13 $ TeV, recorded with the CMS detector at the LHC in 2016 and corresponding to an integrated luminosity of 35.9 fb$^{−1}$. In addition to the combination of previous searches, a targeted analysis requiring three or more charged leptons (electrons or muons) is presented, focusing on the challenging scenario in which the difference in mass between the two least massive neutralinos is approximately equal to the mass of the Z boson. The results are interpreted in simplified models of chargino-neutralino or neutralino pair production. For chargino-neutralino production, in the case when the lightest neutralino is massless, the combination yields an observed (expected) limit at the 95% confidence level on the chargino mass of up to 650 (570) GeV, improving upon the individual analysis limits by up to 40 GeV. If the mass difference between the two least massive neutralinos is approximately equal to the mass of the Z boson in the chargino-neutralino model, the targeted search requiring three or more leptons obtains observed and expected exclusion limits of around 225 GeV on the second neutralino mass and 125 GeV on the lightest neutralino mass, improving the observed limit by about 60 GeV in both masses compared to the previous CMS result. In the neutralino pair production model, the combined observed (expected) exclusion limit on the neutralino mass extends up to 650–750 (550–750) GeV, depending on the branching fraction assumed. This extends the observed exclusion achieved in the individual analyses by up to 200 GeV. The combined result additionally excludes some intermediate gaps in the mass coverage of the individual analyses.

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2291 moreInstitutions (195)
TL;DR: In this paper, a search is presented for the direct electroweak production of charginos and neutralinos in signatures with either two or more leptons (electrons or muons) of the same electric charge, or with three or more leptons, which can include up to two hadronically decaying tau leptons.
Abstract: Results are presented from a search for the direct electroweak production of charginos and neutralinos in signatures with either two or more leptons (electrons or muons) of the same electric charge, or with three or more leptons, which can include up to two hadronically decaying tau leptons. The results are based on a sample of proton-proton collision data collected at $ \sqrt{s}=13 $ TeV, recorded with the CMS detector at the LHC, corresponding to an integrated luminosity of 35.9 fb$^{−1}$. The observed event yields are consistent with the expectations based on the standard model. The results are interpreted in simplified models of supersymmetry describing various scenarios for the production and decay of charginos and neutralinos. Depending on the model parameters chosen, mass values between 180 GeV and 1150 GeV are excluded at 95% CL. These results significantly extend the parameter space probed for these particles in searches at the LHC. In addition, results are presented in a form suitable for alternative theoretical interpretations.

Book ChapterDOI
01 Jan 2018
TL;DR: In this article, a new Industry 4.0 maturity model is proposed for assessing processes, products, and organizations and understanding their maturity level; it is suitable for companies planning to transform their businesses and operations for Industry 4.0.
Abstract: Companies that transform their businesses and operations according to Industry 4.0 principles face complex processes and high budgets due to dependent technologies that affect process inputs and outputs. In addition, since an Industry 4.0 transformation changes the way business is done and the value proposition, it becomes a highly important concept that requires the support of top management for the projects and investments. It therefore requires a broad perspective on the company’s strategy, organization, operations, and products. A maturity model is thus suitable for companies planning to transform their businesses and operations for Industry 4.0: it is an important technique for companies seeking to assess their processes, products, and organizations and to understand their maturity level. In this chapter, existing maturity models for Industry 4.0 transformation are reviewed and a new Industry 4.0 maturity model is proposed.

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2340 moreInstitutions (198)
TL;DR: A measurement of the inelastic proton-proton cross section with the CMS detector at a center-of-mass energy of $ \sqrt{s}=13 $ TeV is presented in this paper.
Abstract: A measurement of the inelastic proton-proton cross section with the CMS detector at a center-of-mass energy of $ \sqrt{s}=13 $ TeV is presented. The analysis is based on events with energy deposits in the forward calorimeters, which cover pseudorapidities of −6.6 < η < −3.0 and 3.0 < η < 5.2. The cross section is measured for events with M$_{X}$ > 4.1 GeV and/or M$_{Y}$ > 13 GeV, where M$_{X}$ and M$_{Y}$ are the masses of the diffractive dissociation systems at negative and positive pseudorapidities, respectively. The results are compared with those from other experiments as well as to predictions from high-energy hadron-hadron interaction models.

Journal ArticleDOI
TL;DR: An emotion-based music recommendation framework that learns the emotion of a user from signals obtained via wearable physiological sensors and can be integrated into any recommendation engine.
Abstract: Most existing music recommendation systems use collaborative or content-based recommendation engines. However, the music choice of a user depends not only on historical preferences or music content but also on the user's mood. This paper proposes an emotion-based music recommendation framework that learns the emotion of a user from the signals obtained via wearable physiological sensors. In particular, the emotion of a user is classified by a wearable computing device integrated with galvanic skin response (GSR) and photoplethysmography (PPG) physiological sensors. This emotion information is fed to any collaborative or content-based recommendation engine as supplementary data, so the performance of existing recommendation engines can be increased. The emotion recognition problem is therefore treated as arousal and valence prediction from multi-channel physiological signals. Experimental results are obtained on 32 subjects' GSR and PPG signal data, with and without feature fusion, using decision tree, random forest, support vector machine, and k-nearest neighbors algorithms. The results of comprehensive experiments on real data confirm the accuracy of the proposed emotion classification system, which can be integrated into any recommendation engine.
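
As a rough sketch of the classification stage described above, the snippet below trains one of the mentioned classifiers (a random forest) to predict a binary arousal label from per-window GSR/PPG features; the feature matrix and labels are hypothetical placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-window features extracted from GSR and PPG signals
# (e.g. mean/std of skin conductance, heart-rate statistics from PPG).
rng = np.random.default_rng(42)
X = rng.random((320, 8))                  # 320 windows x 8 features (placeholder)
y_arousal = rng.integers(0, 2, size=320)  # binary high/low arousal labels (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y_arousal, cv=5)
print("5-fold accuracy:", scores.mean())
# A second model would be trained the same way for the valence dimension,
# and the predicted (arousal, valence) pair fed to the recommendation engine.
```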

Journal ArticleDOI
Albert M. Sirunyan1, Robin Erbacher2, Wagner Carvalho3, Maciej Górski  +2272 moreInstitutions (151)
TL;DR: The first observation of electroweak production of same-sign W boson pairs in proton-proton collisions was reported in this article, where the data sample corresponds to an integrated luminosity of 35.9 fb^(−1) collected at a center-of-mass energy of 13 TeV with the CMS detector at the LHC. Events are selected by requiring exactly two leptons (electrons or muons) of the same charge, moderate missing transverse momentum, and two jets with a large rapidity separation and a large dijet mass.
Abstract: The first observation of electroweak production of same-sign W boson pairs in proton-proton collisions is reported. The data sample corresponds to an integrated luminosity of 35.9 fb^(−1) collected at a center-of-mass energy of 13 TeV with the CMS detector at the LHC. Events are selected by requiring exactly two leptons (electrons or muons) of the same charge, moderate missing transverse momentum, and two jets with a large rapidity separation and a large dijet mass. The observed significance of the signal is 5.5 standard deviations, where a significance of 5.7 standard deviations is expected based on the standard model. The ratio of measured event yields to that expected from the standard model at leading order is 0.90 ± 0.22. A cross section measurement in a fiducial region is reported. Bounds are given on the structure of quartic vector boson interactions in the framework of dimension-8 effective field theory operators and on the production of doubly charged Higgs bosons.

Journal ArticleDOI
TL;DR: The development of personal protection systems with improved ballistic performance and reduced weight has received great interest in the last decade, with the unfortunate ever-increasing threats, as mentioned in this paper.
Abstract: The development of personal protection systems with improved ballistic performance and reduced weight has received a great interest in the last decade with the unfortunate ever-increasing threats a...

Journal ArticleDOI
TL;DR: In this paper, a search for additional neutral Higgs bosons in the τ τ final state in proton-proton collisions at the LHC was performed in the context of the minimal supersymmetric extension of the standard model (MSSM), using the data collected with the CMS detector in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1.
Abstract: A search is presented for additional neutral Higgs bosons in the ττ final state in proton-proton collisions at the LHC. The search is performed in the context of the minimal supersymmetric extension of the standard model (MSSM), using the data collected with the CMS detector in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes production of the Higgs boson in association with b quarks. No significant deviation above the expected background is observed. Model-independent limits at 95% confidence level (CL) are set on the product of the branching fraction for the decay into τ leptons and the cross section for the production via gluon fusion or in association with b quarks. These limits range from 18 pb at 90 GeV to 3.5 fb at 3.2 TeV for gluon fusion and from 15 pb (at 90 GeV) to 2.5 fb (at 3.2 TeV) for production in association with b quarks, assuming a narrow width resonance. In the $m_{\mathrm{h}}^{\mathrm{mod}+}$ scenario these limits translate into a 95% CL exclusion of tan β > 6 for neutral Higgs boson masses below 250 GeV, where tan β is the ratio of the vacuum expectation values of the neutral components of the two Higgs doublets. The 95% CL exclusion contour reaches 1.6 TeV for tan β = 60.

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Federico Ambrogi1  +2265 moreInstitutions (161)
TL;DR: In this paper, the authors improved the algorithm developed by the CMS Collaboration to reconstruct and identify τ leptons produced in proton-proton collisions at √s=7 and 8 TeV via their decays to hadrons and a neutrino.
Abstract: The algorithm developed by the CMS Collaboration to reconstruct and identify τ leptons produced in proton-proton collisions at √s=7 and 8 TeV, via their decays to hadrons and a neutrino, has been significantly improved. The changes include a revised reconstruction of π0 candidates, and improvements in multivariate discriminants to separate τ leptons from jets and electrons. The algorithm is extended to reconstruct τ leptons in highly Lorentz-boosted pair production, and in the high-level trigger. The performance of the algorithm is studied using proton-proton collisions recorded during 2016 at √s=13 TeV, corresponding to an integrated luminosity of 35.9 fb-1. The performance is evaluated in terms of the efficiency for a genuine τ lepton to pass the identification criteria and of the probabilities for jets, electrons, and muons to be misidentified as τ leptons. The results are found to be very close to those expected from Monte Carlo simulation.

Journal ArticleDOI
TL;DR: In this article, a search for new high-mass resonances decaying into electron or muon pairs is presented, where upper limits on the product of a new resonance production cross section and branching fraction to dileptons are calculated in a model-independent manner.
Abstract: A search is presented for new high-mass resonances decaying into electron or muon pairs. The search uses proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the CMS experiment at the LHC in 2016, corresponding to an integrated luminosity of 36 fb$^{−1}$. Observations are in agreement with standard model expectations. Upper limits on the product of a new resonance production cross section and branching fraction to dileptons are calculated in a model-independent manner. This permits the interpretation of the limits in models predicting a narrow dielectron or dimuon resonance. A scan of different intrinsic width hypotheses is performed. Limits are set on the masses of various hypothetical particles. For the $ {Z}_{\mathrm{SSM}}^{\prime}\left({Z}_{{}^{\psi}}^{\prime}\right) $ particle, which arises in the sequential standard model (superstring-inspired model), a lower mass limit of 4.50 (3.90) TeV is set at 95% confidence level. The lightest Kaluza-Klein graviton arising in the Randall-Sundrum model of extra dimensions, with coupling parameters k/M$_{Pl}$ of 0.01, 0.05, and 0.10, is excluded at 95% confidence level below 2.10, 3.65, and 4.25 TeV, respectively. In a simplified model of dark matter production via a vector or axial vector mediator, limits at 95% confidence level are obtained on the masses of the dark matter particle and its mediator.


Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2239 moreInstitutions (157)
TL;DR: In this paper, a search for the production of Higgs boson pairs in proton-proton collisions at a centre-of-mass energy of 13 TeV is presented, using a data sample corresponding to an integrated luminosity of 35.9fb^(−1) collected with the CMS detector at the LHC.

Journal ArticleDOI
TL;DR: In this paper, a natural fiber material in the form of wood waste is examined experimentally to assess its suitability for use as a thermal insulation material, without the addition of any binder, within a timber frame wall construction.

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2295 moreInstitutions (194)
TL;DR: In this article, the authors performed searches for resonant and nonresonant pair-produced Higgs bosons (HH) decaying respectively into ℓνℓν, through either W or Z bosons, and b$\bar{\mathrm{b}}$.
Abstract: Searches for resonant and nonresonant pair-produced Higgs bosons (HH) decaying respectively into ℓνℓν, through either W or Z bosons, and b$\bar{\mathrm{b}}$ are presented. The analyses are based on a sample of proton-proton collisions at $\sqrt{s}$ = 13 TeV, collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. Data and predictions from the standard model are in agreement within uncertainties. For the standard model HH hypothesis, the data exclude at 95% confidence level a product of the production cross section and branching fraction larger than 72 fb, corresponding to 79 times the standard model prediction. Constraints are placed on different scenarios considering anomalous couplings, which could affect the rate and kinematics of HH production. Upper limits at 95% confidence level are set on the production cross section of narrow-width spin-0 and spin-2 particles decaying to Higgs boson pairs, the latter produced with minimal gravity-like coupling.

Journal ArticleDOI
TL;DR: A detailed literature review of organosolv pretreatment, focusing on the effects of each of the pretreatment conditions for biorefinery applications, is presented in this article.
Abstract: The concept of a biorefinery that provides valuable bioproducts from biomass conversion instead of fossil based products is presented. One of the main biorefinery products, bioethanol, can be produced from sugar, starch, or lignocellulosic-based biomass. Lignocellulosic-based bioethanol could be a good alternative to sugar- or starch-based bioethanol. While sugar- and starch-based biomass includes mainly glucose or starch, lignocellulosic biomass contains cellulose, hemicellulose, and lignin. While the cellulose is essential for the biomass-to-bioethanol conversion process, hemicellulose and lignin are undesirable in this context, and therefore pretreatment is necessary to break down the lignocellulose structure and separate hemicellulose and lignin from cellulose. Organosolv pretreatment is an attractive method for separating both cellulose and nearly pure lignin from the lignocellulosic material. In a biorefinery, organosolv pretreatment is one of the best options for producing more than one valuable product (bioethanol and lignin) in the same process. For effective bioethanol production, the delignification rate and enzymatic glucose conversion are fundamental parameters. This paper presents a detailed literature review of organosolv pretreatment, focusing on the effects of each of the pretreatment conditions for biorefinery applications. The organosolv pretreatment method is first described in detail and then each of the pretreatment conditions is explored individually. A number of technical studies are reviewed, and the effects of the various conditions on the delignification rate and on enzymatic glucose conversion for effective bioethanol production are described. The current status of development of organosolv-based biorefineries around the world is discussed. In previous reviews of this topic, only the solvent and catalyst effects have been investigated. This review will contribute to the literature by showing the impacts of all pretreatment conditions on pretreatment efficiency.

Journal ArticleDOI
TL;DR: In this article, the authors present the state-of-the-art on DSSE as an enabler function for smart grid features, and broadly review the development of DSSE, challenges faced by its development, and various DSSE algorithms.
Abstract: State estimation (SE) is well established at the transmission level of the electricity grid, where it has been in use for the last few decades and is a vital component of the energy management systems employed in the monitoring and control centers of electric transmission systems. However, its use for the monitoring and control of power distribution systems (DSs) has not yet been widely implemented because DSs have been largely passive, with unidirectional power flows. This scenario is now changing with the advent of the smart grid, which is changing the nature of electric distribution networks by embracing more dispersed generation, demand-responsive loads, and measurement devices with different data rates. Thus, the development of a distribution system state estimation (DSSE) tool is inevitable for the implementation of protection, optimization, and control techniques, and of various other features envisioned by the smart grid concept. Because the inherent characteristics of DSs differ from those of transmission systems, transmission system state estimation (TSSE) is not directly applicable to DSs. This paper presents the state of the art on DSSE as an enabler function for smart grid features. It broadly reviews the development of DSSE, the challenges faced in its development, and various DSSE algorithms. Additionally, it identifies some future research lines for DSSE.
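
For concreteness, classical state estimation (in both TSSE and DSSE) is typically formulated as a weighted least squares (WLS) problem solved by Gauss–Newton iteration. The sketch below shows that generic iteration on an abstract measurement model; the measurement function, Jacobian, weights, and the toy linear example are placeholders, and a practical DSSE implementation must additionally handle the distribution-specific issues discussed above.

```python
import numpy as np

def wls_state_estimation(h, H, z, R_inv, x0, tol=1e-6, max_iter=20):
    """Generic WLS state estimation via Gauss-Newton iteration.

    h(x):   measurement function, returns predicted measurements.
    H(x):   Jacobian of h at x (m x n).
    z:      measurement vector (m,).
    R_inv:  inverse measurement covariance (m x m), i.e. the weights.
    x0:     initial state guess (n,), e.g. a flat voltage profile.
    """
    x = x0.copy()
    for _ in range(max_iter):
        r = z - h(x)                        # measurement residuals
        J = H(x)
        G = J.T @ R_inv @ J                 # gain matrix
        dx = np.linalg.solve(G, J.T @ R_inv @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Placeholder linear example: z = A x + noise, so h(x) = A x and H(x) = A.
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
x_true = np.array([1.02, -0.05])
z = A @ x_true + np.array([0.01, -0.02, 0.005])
R_inv = np.diag([100.0, 100.0, 400.0])      # assumed measurement weights
x_hat = wls_state_estimation(lambda x: A @ x, lambda x: A, z, R_inv, np.zeros(2))
print(x_hat)
```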