
Showing papers by "University of Texas at Arlington published in 2014"


Journal ArticleDOI
TL;DR: Emergent trends and gaps in understanding are identified, new approaches to more fully integrate genomics into speciation research are proposed, and an integrative definition of the field of speciation genomics is provided.
Abstract: Speciation is a fundamental evolutionary process, the knowledge of which is crucial for understanding the origins of biodiversity. Genomic approaches are an increasingly important aspect of this research field. We review current understanding of genome-wide effects of accumulating reproductive isolation and of genomic properties that influence the process of speciation. Building on this work, we identify emergent trends and gaps in our understanding, propose new approaches to more fully integrate genomics into speciation research, translate speciation theory into hypotheses that are testable using genomic tools and provide an integrative definition of the field of speciation genomics.

875 citations


Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper proposes a novel clustering model that learns the data similarity matrix and the clustering structure simultaneously, derives an efficient algorithm to optimize the resulting challenging problem, and presents a theoretical analysis of the connections between the method, K-means clustering, and spectral clustering.
Abstract: Many clustering methods partition the data into groups based on an input data similarity matrix, so the clustering results depend heavily on how the data similarity is learned. Because similarity measurement and data clustering are usually conducted in two separate steps, the learned similarity may not be optimal for clustering, which leads to suboptimal results. In this paper, we propose a novel clustering model that learns the data similarity matrix and the clustering structure simultaneously. Our new model learns the data similarity matrix by assigning adaptive, optimal neighbors to each data point based on the local distances. Meanwhile, a new rank constraint is imposed on the Laplacian matrix of the data similarity matrix, such that the number of connected components in the resulting similarity matrix is exactly equal to the cluster number. We derive an efficient algorithm to optimize the proposed challenging problem, and present a theoretical analysis of the connections between our method, K-means clustering, and spectral clustering. We further extend the new clustering model to projected clustering to handle high-dimensional data. Extensive empirical results on both synthetic data and real-world benchmark data sets show that our new clustering methods consistently outperform the related clustering approaches.
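The adaptive-neighbor idea above can be illustrated with a short sketch. The code below is not the authors' implementation; it builds a similarity matrix from local distances using a CAN-style closed form and then counts the connected components of the induced graph (equivalently, the multiplicity of the zero eigenvalue of its Laplacian). Data and parameters are invented.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def adaptive_neighbor_similarity(X, k=5):
    """Give each point a sparse similarity row over its k nearest
    neighbors, with weights decaying in local squared distance (the
    closed-form simplex solution used in CAN-type objectives; the paper
    optimizes these weights jointly with the clustering structure)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 2
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 2]        # k+1 nearest, excluding self
        d = D[i, idx]
        w = np.maximum(d[-1] - d[:-1], 0)      # s_ij proportional to d_{i,k+1} - d_ij
        S[i, idx[:-1]] = w / max(w.sum(), 1e-12)
    return (S + S.T) / 2

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
S = adaptive_neighbor_similarity(X, k=5)
n_comp, _ = connected_components(csr_matrix((S > 0).astype(float)))
print(n_comp)   # 2 for two well-separated blobs: components match clusters
```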

695 citations


Journal ArticleDOI
TL;DR: The history of pain management is reviewed, the major components of a "true" interdisciplinary pain management program are discussed, and the evidence-based outcomes that document the effectiveness of such interdisciplinary pain management programs are highlighted.
Abstract: Chronic pain is a significant and costly problem in the United States as well as throughout the industrialized world. Unfortunately, there have been concerns about the effectiveness of traditional medical interventions, suggesting the need for alternative chronic pain treatment strategies. However, the introduction of the biopsychosocial model of pain during the past decade stimulated the development of more therapeutically effective and cost-effective interdisciplinary chronic pain management programs. In the present article we briefly review the history of pain management, discuss the major components of a "true" interdisciplinary pain management program, focus on the evidence-based outcomes that have documented the effectiveness of such interdisciplinary pain management programs, and note the barriers that have blocked the wider use of such programs. Finally, we discuss future directions in interdisciplinary pain management.

577 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel unsupervised feature selection framework, termed joint embedding learning and sparse regression (JELSR), in which embedding learning and sparse regression are performed jointly to select features.
Abstract: Feature selection has attracted considerable research interest during the last few decades. Traditional learning-based feature selection methods separate embedding learning from feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed joint embedding learning and sparse regression (JELSR), in which embedding learning and sparse regression are performed jointly. To show the effectiveness of the framework, we provide an instantiation that constructs weights via local linear approximation and adds $\ell_{2,1}$-norm regularization, and we design an effective algorithm to solve the corresponding optimization problem. Furthermore, we discuss the proposed approach in depth, covering convergence analysis, computational complexity, and parameter determination. Overall, the proposed framework not only provides a new perspective on traditional methods but also suggests further directions for feature selection research. Compared with traditional unsupervised feature selection methods, our approach integrates the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice, and biological data, validate the effectiveness of the proposed algorithm.
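As an illustration of the sparse-regression half of such a framework, the sketch below solves an $\ell_{2,1}$-regularized regression with the standard iteratively reweighted least-squares update and ranks features by the row norms of the weight matrix. It is a toy stand-in, not the authors' JELSR code; in particular, the target Y here is synthetic rather than a learned embedding.

```python
# Sketch: min_W ||X W - Y||_F^2 + alpha * ||W||_{2,1}, solved by
# iteratively reweighted least squares; features ranked by row norms of W.
import numpy as np

def l21_regression(X, Y, alpha=1.0, n_iter=50):
    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(n_iter):
        # Diagonal reweighting: D_jj = 1 / (2 ||w_j||_2)
        row_norms = np.maximum(np.linalg.norm(W, axis=1), 1e-8)
        D = np.diag(1.0 / (2.0 * row_norms))
        W = np.linalg.solve(X.T @ X + alpha * D, X.T @ Y)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Y = X[:, :3] @ rng.normal(size=(3, 5))        # only first 3 features matter
W = l21_regression(X, Y, alpha=5.0)
ranking = np.argsort(-np.linalg.norm(W, axis=1))
print(ranking[:5])   # informative features should rank first
```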

468 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis of 26 empirical studies yielding 443 sales elasticities examines how review variables, including review valence and review volume, relate to retail sales.

454 citations


Journal ArticleDOI
TL;DR: In this paper, an assessment of the challenges of rotating detonation engines is provided, ranging from understanding the basic physics of the system to utilizing rotating detonations in aerospace platforms.
Abstract: Rotating detonation engines (RDEs), also known as continuous detonation engines, have recently gained much worldwide interest. Such engines have huge potential benefits arising from their simplicity of design and manufacture, lack of moving parts, high thermodynamic efficiency, and a rate of energy conversion that may surpass even that of pulse detonation engines, themselves the subject of great interest. However, due to the novelty of the concept, substantial work remains to demonstrate feasibility and bring the RDE to reality. An assessment of the challenges, ranging from understanding basic physics through utilizing rotating detonations in aerospace platforms, is provided.

451 citations


Journal ArticleDOI
TL;DR: This formulation extends the integral reinforcement learning (IRL) technique, a method for solving optimal regulation problems, to learn the solution to the optimal tracking control problem (OTCP), and it also takes the input constraints into account a priori.

440 citations


Journal ArticleDOI
TL;DR: An integral reinforcement learning algorithm on an actor-critic structure is developed to learn online the solution to the Hamilton-Jacobi-Bellman equation for partially-unknown constrained-input systems and it is shown that using this technique, an easy-to-check condition on the richness of the recorded data is sufficient to guarantee convergence to a near-optimal control law.

410 citations


Journal ArticleDOI
TL;DR: A novel approach based on the Q-learning algorithm is proposed to solve the infinite-horizon linear quadratic tracker (LQT) for unknown discrete-time systems in a causal manner, and the optimal control input is obtained by solving only an augmented algebraic Riccati equation (ARE).
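The augmented-ARE idea can be sketched with a known model: stack the plant state with a reference-generator state and solve a single discrete-time ARE for the augmented system. The paper's point is to learn this solution model-free via Q-learning; below the (invented) model is used directly, purely for illustration.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])    # plant x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
F = np.array([[0.999]])                   # reference generator r_{k+1} = F r_k
C = np.array([[1.0, 0.0]])                # tracked output y = C x

# Augmented dynamics for z = [x; r]
T = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])
# Quadratic tracking cost (y - r)' Q (y - r) + u' R u in augmented form
C1 = np.hstack([C, -np.eye(1)])
Q = C1.T @ np.array([[10.0]]) @ C1
R = np.array([[1.0]])

P = solve_discrete_are(T, B1, Q, R)       # one augmented ARE
K = np.linalg.solve(R + B1.T @ P @ B1, B1.T @ P @ T)
print(K)                                  # tracking law u_k = -K [x_k; r_k]
```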

397 citations


Journal ArticleDOI
TL;DR: A distributed algorithm based on two consensus algorithms running in parallel is presented to solve the economic power dispatch problem with transmission line losses and generator constraints; the system power mismatch is estimated using a consensus strategy called consensus on the most up-to-date information.
Abstract: A distributed algorithm is presented to solve the economic power dispatch with transmission line losses and generator constraints. The proposed approach is based on two consensus algorithms running in parallel. The first algorithm is a first-order consensus protocol modified by a correction term which uses a local estimation of the system power mismatch to ensure the generation-demand equality. The second algorithm performs the estimation of the power mismatch in the system using a consensus strategy called consensus on the most up-to-date information. The proposed approach can handle networks of different size and topology using the information about the number of nodes which is also evaluated in a distributed fashion. Simulations performed on standard test cases demonstrate the effectiveness of the proposed approach for both small and large systems.
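A toy version of the incremental-cost consensus loop reads as follows. The cost coefficients, weights, and topology are made up, and for brevity the power mismatch is computed centrally here, whereas the paper estimates it distributively with the second consensus algorithm (and also handles losses and generator limits).

```python
import numpy as np

a = np.array([0.08, 0.10, 0.12])      # quadratic costs C_i = a_i P_i^2 + b_i P_i
b = np.array([7.0, 6.5, 7.2])
demand = 300.0
W = np.array([[0.50, 0.25, 0.25],     # doubly-stochastic weights on a ring
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

lam = b.copy()                        # local incremental-cost estimates
eps = 0.002                           # correction gain
for _ in range(2000):
    P = (lam - b) / (2 * a)           # each unit's output for its current lambda
    mismatch = demand - P.sum()       # centralized here; consensus-estimated in the paper
    lam = W @ lam + eps * mismatch    # first-order consensus + correction term
print(P.round(2), lam.round(3))       # equal lambdas; generation matches demand
```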

384 citations


Journal ArticleDOI
TL;DR: A recently introduced dynamic programming algorithm for estimating species trees, which bypasses MCMC integration over gene trees, is combined with sophisticated methods for estimating the marginal likelihoods needed for Bayesian model selection, providing a rigorous and computationally tractable technique for genome-wide species delimitation.
Abstract: The multispecies coalescent has provided important progress for evolutionary inferences, including increasing the statistical rigor and objectivity of comparisons among competing species delimitation models. However, Bayesian species delimitation methods typically require brute-force integration over gene trees via Markov chain Monte Carlo (MCMC), which introduces a large computational burden and precludes their application to genomic-scale data. Here we combine a recently introduced dynamic programming algorithm for estimating species trees that bypasses MCMC integration over gene trees with sophisticated methods for estimating marginal likelihoods, needed for Bayesian model selection, to provide a rigorous and computationally tractable technique for genome-wide species delimitation. We provide a critical yet simple correction that brings the likelihoods of different species trees, and more importantly their corresponding marginal likelihoods, to the same common denominator, which enables direct and accurate comparisons of competing species delimitation models using Bayes factors. We test this approach, which we call Bayes factor delimitation (*with genomic data; BFD*), using common species delimitation scenarios with computer simulations. Varying the number of loci and the number of samples suggests that the approach can distinguish the true model even with few loci and limited samples per species. Misspecification of the prior for population size θ has little impact on support for the true model. We apply the approach to West African forest geckos (Hemidactylus fasciatus complex) using genome-wide SNP data. This new Bayesian method for species delimitation builds on a growing trend for objective species delimitation methods with explicit model assumptions that are easily tested. [Bayes factor; model testing; phylogeography; RADseq; simulation; speciation.]
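The model-comparison step reduces to a Bayes factor computed from the estimated marginal likelihoods; a minimal sketch (with invented numbers) is:

```python
# Competing delimitation models are compared via 2*ln(BF) on their
# log marginal likelihoods; values above ~10 are conventionally read as
# very strong support on the Kass & Raftery scale.
def two_ln_bayes_factor(log_ml_1, log_ml_0):
    return 2.0 * (log_ml_1 - log_ml_0)

log_ml_split = -10422.7   # e.g. marginal likelihood of a two-species model
log_ml_lump = -10461.3    # e.g. one-species model
print(two_ln_bayes_factor(log_ml_split, log_ml_lump))  # 77.2 -> very strong
```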

Journal ArticleDOI
TL;DR: In this article, a distributed adaptive droop mechanism is proposed for secondary/primary control of dc microgrids, where the conventional secondary control that adjusts the voltage set point for the local droop mechanism is replaced by a voltage regulator.
Abstract: A distributed-adaptive droop mechanism is proposed for secondary/primary control of dc microgrids. The conventional secondary control that adjusts the voltage set point for the local droop mechanism is replaced by a voltage regulator. A current regulator is also added to fine-tune the droop coefficient for different loading conditions. The voltage regulator uses an observer that processes neighbors' data to estimate the average voltage across the microgrid. This estimation is further used to generate a voltage correction term to adjust the local voltage set point. The current regulator compares the local per-unit current of each converter with the neighbors' on a communication graph and, accordingly, provides an impedance correction term. This term is then used to update the droop coefficient and synchronize per-unit currents or, equivalently, provide proportional load sharing. The proposed controller precisely accounts for the transmission/distribution line impedances. The controller on each converter exchanges data with only its neighbor converters on a sparse communication graph spanned across the microgrid. A global dynamic model of the microgrid is derived with the proposed controller engaged. A low-voltage dc microgrid prototype is used to verify the controller performance, link-failure resiliency, and the plug-and-play capability.
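Schematically, the control law for one converter combines the droop equation with the two correction terms described above. The sketch below uses placeholder gains and measurements, not the paper's tuned design.

```python
def converter_setpoint(v_ref, r_droop, i_local_pu, v_avg_est,
                       neighbor_i_pu, h_v=1.0, h_i=0.5):
    # Voltage regulator: drive the estimated microgrid-wide average
    # voltage (from the distributed observer) toward the reference.
    dv = h_v * (v_ref - v_avg_est)
    # Current regulator: synchronize per-unit currents with neighbors.
    mismatch = sum(i_n - i_local_pu for i_n in neighbor_i_pu)
    dr = -h_i * mismatch                  # impedance correction term
    # Adjusted droop law: v* = v_ref + dv - (r_droop + dr) * i_pu
    return v_ref + dv - (r_droop + dr) * i_local_pu

print(converter_setpoint(v_ref=48.0, r_droop=0.5, i_local_pu=0.8,
                         v_avg_est=47.6, neighbor_i_pu=[0.75, 0.9]))
```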

Journal ArticleDOI
TL;DR: A set of principles is identified to narrow the scope of the discussion and point to pragmatic approaches to help design and research learning experiences where important ethical and privacy issues are considered.
Abstract: The massive adoption of technology in learning processes comes with an equally large capacity to track learners. Learning analytics aims at using the collected information to understand and improve the quality of a learning experience. The privacy and ethical issues that emerge in this context are tightly interconnected with other aspects such as trust, accountability and transparency. In this paper, a set of principles is identified to narrow the scope of the discussion and point to pragmatic approaches to help design and research learning experiences where important ethical and privacy issues are considered.

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2871 more (167 institutions)
TL;DR: In this article, the authors presented the electron and photon energy calibration achieved with the ATLAS detector using about 25 fb⁻¹ of LHC proton-proton collision data taken at centre-of-mass energies of √s = 7 and 8 TeV.
Abstract: This paper presents the electron and photon energy calibration achieved with the ATLAS detector using about 25 fb⁻¹ of LHC proton-proton collision data taken at centre-of-mass energies of √s = 7 and 8 TeV. The reconstruction of electron and photon energies is optimised using multivariate algorithms. The response of the calorimeter layers is equalised in data and simulation, and the longitudinal profile of the electromagnetic showers is exploited to estimate the passive material in front of the calorimeter and reoptimise the detector simulation. After all corrections, the Z resonance is used to set the absolute energy scale. For electrons from Z decays, the achieved calibration is typically accurate to 0.05% in most of the detector acceptance, rising to 0.2% in regions with large amounts of passive material. The remaining inaccuracy is less than 0.2-1% for electrons with a transverse energy of 10 GeV, and is on average 0.3% for photons. The detector resolution is determined with a relative inaccuracy of less than 10% for electrons and photons up to 60 GeV transverse energy, rising to 40% for transverse energies above 500 GeV.

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2911 more (209 institutions)
TL;DR: In this paper, a measurement of the Z/γ* boson transverse momentum spectrum using ATLAS proton-proton collision data at a centre-of-mass energy of √s = 7 TeV at the LHC is described.
Abstract: This paper describes a measurement of the Z/γ* boson transverse momentum spectrum using ATLAS proton-proton collision data at a centre-of-mass energy of √s = 7 TeV at the LHC. The measurement is performed in the Z/γ* → e⁺e⁻ and Z/γ* → μ⁺μ⁻ channels, using data corresponding to an integrated luminosity of 4.7 fb⁻¹. Normalized differential cross sections as a function of the Z/γ* boson transverse momentum are measured for transverse momenta up to 800 GeV. The measurement is performed inclusively for Z/γ* rapidities up to 2.4, as well as in three rapidity bins. The channel results are combined, compared to perturbative and resummed QCD calculations and used to constrain the parton shower parameters of Monte Carlo generators.

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2916 more (211 institutions)
TL;DR: In this article, a search for squarks and gluinos in final states containing high-p_T jets, missing transverse momentum and no electrons or muons is presented.
Abstract: A search for squarks and gluinos in final states containing high-p_T jets, missing transverse momentum and no electrons or muons is presented. The data were recorded in 2012 by the ATLAS experiment in √s = 8 TeV proton-proton collisions at the Large Hadron Collider, with a total integrated luminosity of 20.3 fb⁻¹. Results are interpreted in a variety of simplified and specific supersymmetry-breaking models assuming that R-parity is conserved and that the lightest neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1330 GeV for a simplified model incorporating only a gluino and the lightest neutralino. For a simplified model involving the strong production of first- and second-generation squarks, squark masses below 850 GeV (440 GeV) are excluded for a massless lightest neutralino, assuming mass-degenerate (single light-flavour) squarks. In mSUGRA/CMSSM models with tan β = 30, A₀ = −2m₀ and μ > 0, squarks and gluinos of equal mass are excluded for masses below 1700 GeV. Additional limits are set for non-universal Higgs mass models with gaugino mediation and for simplified models involving the pair production of gluinos, each decaying to a top squark and a top quark, with the top squark decaying to a charm quark and a neutralino. These limits extend the region of supersymmetric parameter space excluded by previous searches with the ATLAS detector.

Proceedings ArticleDOI
24 Mar 2014
TL;DR: An evaluation of the current state of the field of learning analytics through analysis of articles and citations occurring in the LAK conferences and identified special issue journals suggests that there is some fragmentation in the major disciplines regarding conference and journal representation.
Abstract: This paper provides an evaluation of the current state of the field of learning analytics through analysis of articles and citations occurring in the LAK conferences and identified special issue journals. The emerging field of learning analytics is at the intersection of numerous academic disciplines, and therefore draws on a diversity of methodologies, theories and underpinning scientific assumptions. Through citation analysis and structured mapping we aimed to identify the emergence of trends and disciplinary hierarchies that are influencing the development of the field to date. The results suggest that there is some fragmentation in the major disciplines (computer science and education) regarding conference and journal representation. The analyses also indicate that the commonly cited papers are of a more conceptual nature than empirical research, reflecting the need for authors to define the learning analytics space. An evaluation of the current state of learning analytics provides numerous benefits for the development of the field, such as guiding under-represented areas of research and identifying the disciplines that may require more strategic and targeted support and funding opportunities.
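The kind of citation analysis described can be reproduced in miniature with a directed graph; the edge list below is invented purely for illustration.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("LAK11_paperA", "paper2010"), ("LAK12_paperB", "paper2010"),
    ("LAK12_paperB", "paper2009"), ("journal_paperC", "paper2010"),
    ("journal_paperC", "LAK11_paperA"),
])  # edge u -> v means "u cites v"; all node names are hypothetical

in_deg = sorted(G.in_degree, key=lambda kv: kv[1], reverse=True)
pr = nx.pagerank(G)
print(in_deg[:3])            # most-cited papers by raw in-degree
print(max(pr, key=pr.get))   # most central paper by PageRank
```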

Journal ArticleDOI
12 Dec 2014-Science
TL;DR: An exceptionally slow rate of genome evolution within crocodilians at all levels is observed, consistent with a single underlying cause of a reduced rate of evolutionary change rather than intrinsic differences in base repair machinery.
Abstract: To provide context for the diversification of archosaurs—the group that includes crocodilians, dinosaurs, and birds—we generated draft genomes of three crocodilians: Alligator mississippiensis (the American alligator), Crocodylus porosus (the saltwater crocodile), and Gavialis gangeticus (the Indian gharial). We observed an exceptionally slow rate of genome evolution within crocodilians at all levels, including nucleotide substitutions, indels, transposable element content and movement, gene family evolution, and chromosomal synteny. When placed within the context of related taxa including birds and turtles, this suggests that the common ancestor of all of these taxa also exhibited slow genome evolution and that the comparatively rapid evolution is derived in birds. The data also provided the opportunity to analyze heterozygosity in crocodilians, which indicates a likely reduction in population size for all three taxa through the Pleistocene. Finally, these data combined with newly published bird genomes allowed us to reconstruct the partial genome of the common ancestor of archosaurs, thereby providing a tool to investigate the genetic starting material of crocodilians, birds, and dinosaurs.

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2870 more (169 institutions)
TL;DR: The performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012 is presented, focusing mainly on data collected in 2012.
Abstract: This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements ...

Journal ArticleDOI
Georges Aad, T. Abajyan, Brad Abbott, Jalal Abdallah +2885 more (169 institutions)
TL;DR: In this article, the electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton-proton collision data collected in 2011 at √s = 7 TeV and corresponding to an integrated luminosity of 4.7 fb⁻¹.
Abstract: Many of the interesting physics processes to be measured at the LHC have a signature involving one or more isolated electrons. The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton-proton collision data collected in 2011 at √s = 7 TeV and corresponding to an integrated luminosity of 4.7 fb⁻¹. Tag-and-probe methods using events with leptonic decays of W and Z bosons and J/ψ mesons are employed to benchmark these performance parameters. The combination of all measurements results in identification efficiencies determined with an accuracy at the few per mil level for electron transverse energy greater than 30 GeV.

Journal ArticleDOI
TL;DR: The results revealed the main research themes that could form a framework of the future MOOC research: i) student engagement and learning success, ii) MOOC design and curriculum, iii) self-regulated learning and social learning, iv) social network analysis and networked learning, and v) motivation, attitude and success criteria.
Abstract: This paper reports on the results of an analysis of the research proposals submitted to the MOOC Research Initiative (MRI) funded by the Gates Foundation and administered by Athabasca University. The goal of MRI was to mobilize researchers to engage in critical interrogation of MOOCs. The submissions – 266 in Phase 1, out of which 78 were recommended for resubmission in extended form in Phase 2, and finally 28 funded – were analyzed by applying conventional and automated content analysis methods as well as citation network analysis methods. The results revealed the main research themes that could form a framework for future MOOC research: i) student engagement and learning success, ii) MOOC design and curriculum, iii) self-regulated learning and social learning, iv) social network analysis and networked learning, and v) motivation, attitude and success criteria. The theme of social learning received the greatest interest and had the highest success in attracting funding. The submissions that planned on using learning analytics methods were more successful. The use of mixed methods was by far the most popular. Design-based research methods were also suggested commonly, but questions arose about their applicability, regarding the feasibility of performing multiple iterations in the MOOC context and a rather limited focus on technological support for interventions. The submissions were dominated by researchers from the field of education (75% of the accepted proposals). Not only was this a possible cause of the complete lack of success of the educational technology innovation theme, but it could also be a worrying sign of fragmentation in the research community and of the need for increased efforts towards enhancing interdisciplinarity.

Journal ArticleDOI
TL;DR: In this article, the authors present the microgrid concept: a small-scale power system consisting of local generation, local loads, and energy storage systems that provides guaranteed power quality for local loads such as hospitals, economic centers, apartments, and universities.
Abstract: Existing electric power distribution networks are operating near full capacity and facing rapid changes to address environmental concerns and improve their reliability and sustainability. These concerns are satisfied through the effective integration and coordination of distributed generators (DGs), which facilitate the exploitation of renewable energy resources, including wind power, photovoltaics, and fuel cells [1]. Although DGs can be of rotating machinery type, more recently, DGs have been designed to support renewable energy resources by electronic interfacing through voltage source inverters (VSI). Each DG corresponds to one energy source, and its control inputs are given to the interface VSI [1]-[5]. The successful coordination of DGs can be realized through microgrids, which are small-scale power systems consisting of local generation, local loads, and energy storage systems. Microgrids are autonomous subsystems with dedicated control systems that provide guaranteed power quality for local loads such as hospitals, economic centers, apartments, and universities. The microgrid concept, with its local control and power quality support, allows for the scalable integration of local power resources and loads into the existing power grid and enables a high penetration of distributed generation [5]-[10].

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2868 more (187 institutions)
TL;DR: In this paper, an improved measurement of the mass of the Higgs boson is derived from a combined fit to the reconstructed invariant mass spectra of the decay channels H → γγ and H → ZZ* → 4ℓ.
Abstract: An improved measurement of the mass of the Higgs boson is derived from a combined fit to the reconstructed invariant mass spectra of the decay channels H → γγ and H → ZZ* → 4ℓ. The analysis uses the pp collision data sample recorded by the ATLAS experiment at the CERN Large Hadron Collider at center-of-mass energies of 7 TeV and 8 TeV, corresponding to an integrated luminosity of 25 fb⁻¹. The measured value of the Higgs boson mass is m_H = 125.36 ± 0.37 (stat) ± 0.18 (syst) GeV. This result is based on improved energy-scale calibrations for photons, electrons, and muons as well as other analysis improvements, and supersedes the previous result from ATLAS. Upper limits on the total width of the Higgs boson are derived from fits to the invariant mass spectra of the H → γγ and H → ZZ* → 4ℓ decay channels.
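Combining the quoted statistical and systematic uncertainties in quadrature, under the usual assumption that they are independent, gives the total uncertainty:

```latex
m_H = 125.36 \pm 0.37\,(\text{stat}) \pm 0.18\,(\text{syst})\ \text{GeV},
\qquad
\sigma_{\text{tot}} = \sqrt{0.37^2 + 0.18^2} \approx 0.41\ \text{GeV}.
```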

Journal ArticleDOI
TL;DR: In this paper, the authors present details of a study that deals with determination of engineering properties, identification of phases of major hydration products, and microstructural characteristics of a zinc-c...
Abstract: This paper presents details of a study that deals with determination of engineering properties, identification of phases of major hydration products, and microstructural characteristics of a zinc-c...

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, S. Abdel Khalek +2916 more (196 institutions)
TL;DR: In this paper, a measurement of the production processes of the recently discovered Higgs boson is performed in the two-photon final state using 4.5 fb⁻¹ of proton-proton collision data at √s = 7 TeV and 20.3 fb⁻¹ at √s = 8 TeV.
Abstract: A measurement of the production processes of the recently discovered Higgs boson is performed in the two-photon final state using 4.5 fb⁻¹ of proton-proton collision data at √s = 7 TeV and 20.3 fb⁻¹ at √s = 8 TeV collected by the ATLAS detector at the Large Hadron Collider. The number of observed Higgs boson decays to diphotons divided by the corresponding Standard Model prediction, called the signal strength, is found to be μ = 1.17 ± 0.27 at the value of the Higgs boson mass measured by ATLAS, m_H = 125.4 GeV. The analysis is optimized to measure the signal strengths for individual Higgs boson production processes at this value of m_H. They are found to be μ_ggF = 1.32 ± 0.38, μ_VBF = 0.8 ± 0.7, μ_WH = 1.0 ± 1.6, μ_ZH = 0.1 +3.7/−0.1, and μ_ttH = 1.6 +2.7/−1.8, for Higgs boson production through gluon fusion, vector-boson fusion, and in association with a W or Z boson or a top-quark pair, respectively. Compared with the previously published ATLAS analysis, the results reported here also benefit from a new energy calibration procedure for photons and the subsequent reduction of the systematic uncertainty on the diphoton mass resolution. No significant deviations from the predictions of the Standard Model are found.

Journal ArticleDOI
TL;DR: This note uses an inverse optimality approach together with partial stability theory to design distributed cooperative consensus and pinning control protocols that guarantee consensus and are globally optimal with respect to a positive semi-definite quadratic performance criterion.
Abstract: This note brings together stability and optimality theory to design distributed cooperative control protocols that guarantee consensus and are globally optimal with respect to a positive semi-definite quadratic performance criterion. A common problem in cooperative optimal control is that global optimization problems generally require global information, which is not available to distributed controllers. Optimal control for multi-agent systems is complicated by the fact that the communication graph topology interplays with the agent system dynamics. In this note we use an inverse optimality approach together with partial stability to treat cooperative consensus and pinning control. Agents with identical linear time-invariant dynamics are considered. Communication graphs are assumed to be directed and of fixed topology. Structured quadratic performance indices are derived that capture the topology of the graph, which allows for global optimal control that is implemented using local distributed protocols. A new class of digraphs is defined that admits a distributed solution to the global optimal control problem, namely those with simple graph Laplacian matrices.
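For intuition, the distributed protocol in question has the familiar local form u_i = c Σ_j a_ij (x_j − x_i), i.e. u = −c L x. The toy simulation below uses single-integrator agents on an invented directed cycle, rather than the general LTI agents of the note, and shows the protocol driving the agents to agreement.

```python
import numpy as np

A_adj = np.array([[0, 1, 0],      # directed adjacency: row i lists who i hears
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
L = np.diag(A_adj.sum(axis=1)) - A_adj     # graph Laplacian of the digraph
x = np.array([1.0, -2.0, 5.0])             # initial agent states
c, dt = 1.0, 0.01
for _ in range(3000):
    x = x + dt * (-c * L @ x)              # local protocol u = -c L x
print(x.round(3))                          # agents agree on a common value
```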

Journal ArticleDOI
20 Feb 2014-Nature
TL;DR: The observed spatial distribution rules out symmetric explosions even with a high level of convective mixing, as well as highly asymmetric bipolar explosions resulting from a fast-rotating progenitor, providing strong evidence for the development of low-mode convective instabilities in core-collapse supernovae.
Abstract: Asymmetry is required by most numerical simulations of stellar core-collapse explosions, but the form it takes differs significantly among models. The spatial distribution of radioactive ⁴⁴Ti, synthesized in an exploding star near the boundary between material falling back onto the collapsing core and that ejected into the surrounding medium, directly probes the explosion asymmetries. Cassiopeia A is a young, nearby, core-collapse remnant from which ⁴⁴Ti emission has previously been detected but not imaged. Asymmetries in the explosion have been indirectly inferred from a high ratio of observed ⁴⁴Ti emission to estimated ⁵⁶Ni emission, from optical light echoes, and from jet-like features seen in the X-ray and optical ejecta. Here we report spatial maps and spectral properties of the ⁴⁴Ti in Cassiopeia A. This may explain the unexpected lack of correlation between the ⁴⁴Ti and iron X-ray emission, the latter being visible only in shock-heated material. The observed spatial distribution rules out symmetric explosions even with a high level of convective mixing, as well as highly asymmetric bipolar explosions resulting from a fast-rotating progenitor. Instead, these observations provide strong evidence for the development of low-mode convective instabilities in core-collapse supernovae.

Journal ArticleDOI
TL;DR: A distributed adaptive consensus protocol is proposed to ensure the boundedness of the consensus error of linear multi-agent systems subject to different matching uncertainties, for both the leaderless case and the case with a leader having a bounded unknown control input.


Journal ArticleDOI
TL;DR: In contrast to a conventional symmetric Lorentzian resonance, Fano resonance is predominantly used to describe asymmetrically shaped resonances, which arise from the constructive and destructive interference of discrete resonance states with broadband continuum states, as discussed by the authors.