
Showing papers by "University of Texas at Arlington published in 2016"


Proceedings Article
12 Feb 2016
TL;DR: This work develops two versions of the Constrained Laplacian Rank (CLR) method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives and derives optimization algorithms to solve them.
Abstract: Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method.
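
As a concrete illustration of the rank constraint at the heart of CLR (a graph has exactly k connected components exactly when its Laplacian has k zero eigenvalues), here is a minimal Python sketch. It only checks that property on a given similarity matrix and reads clusters off as connected components; it is not the authors' optimization algorithm, and the toy matrix is an illustrative assumption.

```python
# Minimal sketch, assuming a symmetric nonnegative similarity matrix S.
# CLR *learns* S so that its Laplacian has exactly k zero eigenvalues; here we
# only verify that condition and extract clusters as connected components.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def laplacian_zero_eigs(S, tol=1e-8):
    """Count (near-)zero eigenvalues of the unnormalized Laplacian L = D - W."""
    W = (S + S.T) / 2.0
    L = np.diag(W.sum(axis=1)) - W
    return int(np.sum(np.linalg.eigvalsh(L) < tol))

def cluster_labels(S):
    """If the rank constraint holds, clusters are the connected components of S."""
    W = csr_matrix((S + S.T) / 2.0)
    return connected_components(W, directed=False)

# Toy example: two obvious blocks -> the Laplacian has 2 zero eigenvalues.
S = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(laplacian_zero_eigs(S))   # 2
print(cluster_labels(S))        # (2, array([0, 0, 0, 1, 1]))
```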

546 citations


Journal ArticleDOI
05 Jan 2016-JAMA
TL;DR: Among obese older patients with clinically stable HFPEF, caloric restriction or aerobic exercise training increased peak V̇O2, and the effects may be additive; neither intervention had a significant effect on quality of life as measured by the MLHF Questionnaire.
Abstract: Importance More than 80% of patients with heart failure with preserved ejection fraction (HFPEF), the most common form of heart failure among older persons, are overweight or obese. Exercise intolerance is the primary symptom of chronic HFPEF and a major determinant of reduced quality of life (QOL). Objective To determine whether caloric restriction (diet) or aerobic exercise training (exercise) improves exercise capacity and QOL in obese older patients with HFPEF. Design, Setting, and Participants Randomized, attention-controlled, 2 × 2 factorial trial conducted from February 2009 through November 2014 in an urban academic medical center. Of 577 initially screened participants, 100 older obese participants (mean [SD]: age, 67 years [5]; body mass index, 39.3 [5.6]) with chronic, stable HFPEF were enrolled (366 excluded by inclusion and exclusion criteria, 31 for other reasons, and 80 declined participation). Interventions Twenty weeks of diet, exercise, or both; attention control consisted of telephone calls every 2 weeks. Main Outcomes and Measures Exercise capacity measured as peak oxygen consumption (V̇O2, mL/kg/min; co-primary outcome) and QOL measured by the Minnesota Living with Heart Failure (MLHF) Questionnaire (score range: 0–105, higher scores indicate worse heart failure–related QOL; co-primary outcome). Results Of the 100 enrolled participants, 26 participants were randomized to exercise; 24 to diet; 25 to exercise + diet; 25 to control. Of these, 92 participants completed the trial. Exercise attendance was 84% (SD, 14%) and diet adherence was 99% (SD, 1%). By main effects analysis, peak V̇O2 was increased significantly by both interventions (exercise, 1.2 mL/kg body mass/min [95% CI, 0.7 to 1.7]; joint effect of exercise + diet, 2.5 mL/kg/min). There was no statistically significant change in MLHF total score with exercise or with diet (main effect: exercise, −1 unit [95% CI, −8 to 5], P = .70; diet, −6 units [95% CI, −12 to 1], P = .08). The change in peak V̇O2 was positively correlated with the change in percent lean body mass (r = 0.32; P = .003) and the change in thigh muscle:intermuscular fat ratio (r = 0.27; P = .02). There were no study-related serious adverse events. Body weight decreased by 7% (7 kg [SD, 1]) in the diet group, 3% (4 kg [SD, 1]) in the exercise group, 10% (11 kg [SD, 1]) in the exercise + diet group, and 1% (1 kg [SD, 1]) in the control group. Conclusions and Relevance Among obese older patients with clinically stable HFPEF, caloric restriction or aerobic exercise training increased peak V̇O2, and the effects may be additive. Neither intervention had a significant effect on quality of life as measured by the MLHF Questionnaire. Trial Registration clinicaltrials.gov Identifier: NCT00959660

545 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2828 more (191 institutions)
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction was evaluated using the first LHC dataset recorded at √s = 13 TeV in 2015 and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.5). For |η| > 2.2, the pT resolution for muons from Z→μμ decays is 2.9%, while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.

440 citations


Journal ArticleDOI
TL;DR: The mediating effects of salesperson information communication behaviors between social media use and customer satisfaction are investigated; using salesperson-reported data within a B2B context, the authors empirically test a model using structural equation modeling.

409 citations


Journal ArticleDOI
TL;DR: In this paper, the pore structure and fractal dimensions of the O3w-S1l shale formation in the Jiaoshiba area were investigated using field emission scanning electron microscopy (FE-SEM).

404 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2812 more (207 institutions)
TL;DR: In this paper, an independent b-tagging algorithm based on the reconstruction of muons inside jets, as well as the b-tagging algorithm used in the online trigger, are presented.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.

362 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2862 more (191 institutions)
TL;DR: The methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, are described, with a primary focus on the large 20.3 fb−1 data sample.
Abstract: The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the adverse ...

316 citations


Journal ArticleDOI
TL;DR: In this paper, a new vortex identification criterion called the Ω-method is proposed, based on the idea that vorticity overtakes deformation inside a vortex, and Ω = 0.52 is a quantity that approximately defines the vortex boundary.
Abstract: A new vortex identification criterion called the Ω-method is proposed, based on the idea that vorticity overtakes deformation inside a vortex. A comparison with other vortex identification methods such as the Q-criterion and the λ2-method is conducted, and the advantages of the new method can be summarized as follows: (1) the method captures vortices well and is very easy to apply; (2) the physical meaning of Ω is clear, while the interpretations of the iso-surface values of Q and λ2 chosen to visualize vortices are obscure; (3) unlike Q and λ2 iso-surface visualization, which requires widely varying thresholds to capture the vortex structure properly, Ω is fairly universal and does not need much adjustment across cases, and iso-surfaces of Ω = 0.52 captured the vortices properly at different time steps in all the cases we investigated; (4) both strong and weak vortices can be captured well simultaneously, while an improper Q or λ2 threshold may capture strong vortices but lose weak ones, or capture weak vortices while smearing strong ones; (5) Ω = 0.52 is a quantity that approximately defines the vortex boundary. Note that, to calculate Ω, the length and velocity must be used in non-dimensional form. From our direct numerical simulation, it is found that the vorticity direction is very different from the vortex rotation direction in general 3-D vortical flows; the Helmholtz velocity decomposition is reviewed, and vorticity is proposed to be further decomposed into vortical vorticity and non-vortical vorticity.
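
A minimal sketch of one common formulation of the Ω criterion, assuming Ω is defined as the ratio of the squared Frobenius norm of the antisymmetric (vorticity) part of the velocity-gradient tensor to the sum of the squared norms of its symmetric and antisymmetric parts; the small epsilon and the single-point example below are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch: compare the deformation and vorticity parts of the
# velocity-gradient tensor at one point and flag it as "vortex" if Omega > 0.52.
import numpy as np

def omega(grad_v, eps=1e-12):
    """grad_v: 3x3 velocity-gradient tensor (non-dimensional, per the paper)."""
    A = 0.5 * (grad_v + grad_v.T)     # symmetric (deformation) part
    B = 0.5 * (grad_v - grad_v.T)     # antisymmetric (vorticity) part
    a = np.sum(A * A)                 # squared Frobenius norm of A
    b = np.sum(B * B)                 # squared Frobenius norm of B
    return b / (a + b + eps)

grad_v = np.array([[0.0, -1.0, 0.0],  # a nearly rigid rotation about the z axis
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 0.1]])
print(omega(grad_v) > 0.52)           # True: point flagged as inside a vortex
```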

305 citations


Journal ArticleDOI
TL;DR: In this paper, a multilevel model that examines the effects of employee involvement climate on the individual-level process linking employee regulatory focus (promotion and prevention) to innovation via thriving was proposed and tested.

286 citations


Journal ArticleDOI
Morad Aaboud, Alexander Kupco, P. Davison, Samuel Webb, +2869 more (194 institutions)
TL;DR: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented in this article, where the evaluation of the luminosity scale is performed using several luminometers.
Abstract: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers ...

286 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2851 more (208 institutions)
TL;DR: The results suggest that the ridge in pp collisions arises from the same or similar underlying physics as observed in p+Pb collisions, and that the dynamics responsible for the ridge has no strong √s dependence.
Abstract: ATLAS has measured two-particle correlations as a function of relative azimuthal-angle, $\Delta \phi$, and pseudorapidity, $\Delta \eta$, in $\sqrt{s}$=13 and 2.76 TeV $pp$ collisions at the LHC using charged particles measured in the pseudorapidity interval $|\eta|$<2.5. The correlation functions evaluated in different intervals of measured charged-particle multiplicity show a multiplicity-dependent enhancement at $\Delta \phi \sim 0$ that extends over a wide range of $\Delta\eta$, which has been referred to as the "ridge". Per-trigger-particle yields, $Y(\Delta \phi)$, are measured over 2<$|\Delta\eta|$<5. For both collision energies, the $Y(\Delta \phi)$ distribution in all multiplicity intervals is found to be consistent with a linear combination of the per-trigger-particle yields measured in collisions with less than 20 reconstructed tracks, and a constant combinatoric contribution modulated by $\cos{(2\Delta \phi)}$. The fitted Fourier coefficient, $v_{2,2}$, exhibits factorization, suggesting that the ridge results from per-event $\cos{(2\phi)}$ modulation of the single-particle distribution with Fourier coefficients $v_2$. The $v_2$ values are presented as a function of multiplicity and transverse momentum. They are found to be approximately constant as a function of multiplicity and to have a $p_{\mathrm{T}}$ dependence similar to that measured in $p$+Pb and Pb+Pb collisions. The $v_2$ values in the 13 and 2.76 TeV data are consistent within uncertainties. These results suggest that the ridge in $pp$ collisions arises from the same or similar underlying physics as observed in $p$+Pb collisions, and that the dynamics responsible for the ridge has no strong $\sqrt{s}$ dependence.

Journal ArticleDOI
TL;DR: The PROGRESS CTO score is a novel useful tool for estimating technical success in CTO PCI performed using the hybrid approach.
Abstract: Objectives This study sought to develop a novel parsimonious score for predicting technical success of chronic total occlusion (CTO) percutaneous coronary intervention (PCI) performed using the hybrid approach. Background Predicting technical success of CTO PCI can facilitate clinical decision making and procedural planning. Methods We analyzed clinical and angiographic parameters from 781 CTO PCIs included in PROGRESS CTO (Prospective Global Registry for the Study of Chronic Total Occlusion Intervention) using a derivation and validation cohort (2:1 sampling ratio). Variables with strong association with technical success in multivariable analysis were assigned 1 point, and a 4-point score was developed from summing all points. The PROGRESS CTO score was subsequently compared with the J-CTO (Multicenter Chronic Total Occlusion Registry in Japan) score in the validation cohort. Results Technical success was 92.9%. On multivariable analysis, factors associated with technical success included proximal cap ambiguity (beta coefficient [b] = 0.88), moderate/severe tortuosity (b = 1.18), circumflex artery CTO (b = 0.99), and absence of “interventional” collaterals (b = 0.88). The resulting score demonstrated good calibration and discriminatory capacity in the derivation (Hosmer-Lemeshow chi-square = 2.633; p = 0.268, and receiver-operator characteristic [ROC] area = 0.778) and validation (Hosmer-Lemeshow chi-square = 5.333; p = 0.070, and ROC area = 0.720) subset. In the validation cohort, the PROGRESS CTO and J-CTO scores performed similarly in predicting technical success (ROC area 0.720 vs. 0.746, area under the curve difference = 0.026, 95% confidence interval = −0.093 to 0.144). Conclusions The PROGRESS CTO score is a novel useful tool for estimating technical success in CTO PCI performed using the hybrid approach.
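
A minimal sketch of the 4-point score exactly as the abstract describes it (one point each for proximal cap ambiguity, moderate/severe tortuosity, circumflex-artery CTO, and absence of "interventional" collaterals); illustrative only, and clinical use should follow the published instrument.

```python
# Hedged sketch of the PROGRESS CTO score as summed points, per the abstract.
def progress_cto_score(proximal_cap_ambiguity: bool,
                       moderate_or_severe_tortuosity: bool,
                       circumflex_cto: bool,
                       no_interventional_collaterals: bool) -> int:
    return sum([proximal_cap_ambiguity,
                moderate_or_severe_tortuosity,
                circumflex_cto,
                no_interventional_collaterals])

# Example: ambiguous proximal cap and no interventional collaterals -> score 2.
print(progress_cto_score(True, False, False, True))  # 2
```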

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2838 more (148 institutions)
TL;DR: In this paper, a search for a high-mass Higgs boson in four decay modes using the ATLAS detector at the CERN Large Hadron Collider is presented.
Abstract: A search is presented for a high-mass Higgs boson in four decay modes using the ATLAS detector at the CERN Large Hadron Collider. The search uses proton-proton collision data at a centre-of-mass energy of 8 TeV corresponding to an integrated luminosity of 20.3 fb−1. The results of the search are interpreted in the scenario of a heavy Higgs boson with a width that is small compared with the experimental mass resolution. The Higgs boson mass range considered extends down to as low as 140 GeV, depending on the decay mode. No significant excess of events over the Standard Model prediction is found. A simultaneous fit to the four decay modes yields upper limits on the production cross-section of a heavy Higgs boson times the branching ratio to boson pairs. The 95% confidence level upper limits range from 0.53 pb to 0.008 pb for the gluon-fusion production mode and from 0.31 pb to 0.009 pb for the vector-boson-fusion production mode, depending on the Higgs boson mass. The results are also interpreted in the context of Type-I and Type-II two-Higgs-doublet models.

Journal ArticleDOI
TL;DR: In this paper, a cooperative distributed secondary/primary control paradigm for AC microgrids is proposed, which replaces the centralized secondary control and the primary-level droop mechanism of each inverter with three separate regulators: voltage, reactive power, and active power regulators.
Abstract: A cooperative distributed secondary/primary control paradigm for AC microgrids is proposed. This solution replaces the centralized secondary control and the primary-level droop mechanism of each inverter with three separate regulators: voltage, reactive power, and active power regulators. A sparse communication network is spanned across the microgrid to facilitate limited data exchange among inverter controllers. Each controller processes its local and neighbors' information to update its voltage magnitude and frequency (or, equivalently, phase angle) set points. A voltage estimator finds the average voltage across the microgrid, which is then compared to the rated voltage to produce the first-voltage correction term. The reactive power regulator at each inverter compares its normalized reactive power with those of its neighbors, and the difference is fed to a subsequent PI controller that generates the second-voltage correction term. The controller adds the voltage correction terms to the microgrid rated voltage (provided by the tertiary control) to generate the local voltage magnitude set point. The voltage regulators collectively adjust the average voltage of the microgrid at the rated voltage. The voltage regulators allow different set points for different bus voltages and, thus, account for the line impedance effects. Moreover, the reactive power regulators adjust the voltage to achieve proportional reactive load sharing. The third module, the active power regulator, compares the local normalized active power of each inverter with its neighbors' and uses the difference to update the frequency and, accordingly, the phase angle of that inverter. The global dynamic model of the microgrid, including distribution grid, regulator modules, and the communication network, is derived, and controller design guidelines are provided. Steady-state performance analysis shows that the proposed controller can accurately handle the global voltage regulation and proportional load sharing. An AC microgrid prototype is set up, where the controller performance, plug-and-play capability, and resiliency to the failure in the communication links are successfully verified.
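
A hedged sketch of one inverter's set-point update following the structure described above (average-voltage regulator, reactive-power regulator, active-power regulator over a sparse communication graph); the discrete PI form, gains, and neighbor signals are illustrative assumptions rather than the authors' tuned design.

```python
# Hedged sketch: two voltage correction terms plus a frequency correction,
# computed from local measurements and neighbor data, per the scheme above.
class InverterController:
    def __init__(self, v_rated, kp_v, ki_v, kp_q, ki_q, k_p, dt):
        self.v_rated = v_rated
        self.kp_v, self.ki_v = kp_v, ki_v      # PI for average-voltage regulation
        self.kp_q, self.ki_q = kp_q, ki_q      # PI for reactive-power sharing
        self.k_p = k_p                          # gain for active-power sharing
        self.dt = dt
        self.int_v = 0.0
        self.int_q = 0.0

    def update(self, v_avg_estimate, q_norm_local, q_norm_neighbors,
               p_norm_local, p_norm_neighbors, f_rated=60.0):
        # First voltage correction: drive the estimated microgrid-wide average
        # voltage (from the distributed estimator) to the rated value.
        e_v = self.v_rated - v_avg_estimate
        self.int_v += e_v * self.dt
        dv1 = self.kp_v * e_v + self.ki_v * self.int_v

        # Second voltage correction: equalize normalized reactive power with
        # the neighbors on the sparse communication graph.
        e_q = sum(qn - q_norm_local for qn in q_norm_neighbors)
        self.int_q += e_q * self.dt
        dv2 = self.kp_q * e_q + self.ki_q * self.int_q

        # Frequency correction: equalize normalized active power with neighbors.
        e_p = sum(pn - p_norm_local for pn in p_norm_neighbors)
        f_set = f_rated + self.k_p * e_p

        v_set = self.v_rated + dv1 + dv2
        return v_set, f_set
```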

Journal ArticleDOI
TL;DR: In this paper, a distributed control method is proposed to handle power sharing among a cluster of dc microgrids, which uses a cooperative approach to adjust voltage set points for individual micro-grids and, accordingly, navigate the power flow among them.
Abstract: A distributed control method is proposed to handle power sharing among a cluster of dc microgrids. The hierarchical control structure of microgrids includes primary, secondary, and tertiary levels. While the load sharing among the sources within a dc microgrid is managed through primary and secondary controllers, a tertiary control level is required to provide the higher level load sharing among microgrids within a cluster. Power transfer between microgrids enables maximum utilization of renewable sources and suppresses stress and aging of the components, which improves its reliability and availability, reduces the maintenance costs, and expands the overall lifespan of the network. The proposed control mechanism uses a cooperative approach to adjust voltage set points for individual microgrids and, accordingly, navigate the power flow among them. Loading mismatch among neighbor microgrids is used in an updating policy to adjust voltage set point and mitigate such mismatches. While the voltage adjustment policy handles the load sharing among the microgrids within each cluster, at a lower level, each microgrid carries a communication network that is in contact with the secondary control system. It is this lower level network that propagates voltage set points across all sources within a microgrid. Load sharing and set point propagation are analytically studied for the higher and lower level controllers, respectively. Experimental studies on two cluster setups demonstrate excellent controller performance and validate its resiliency against converter failures and communication losses.
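
A minimal sketch of the tertiary-level updating policy described above, where a dc microgrid nudges its voltage set point in proportion to the loading mismatch with its neighbor microgrids; the gain, per-unit signals, and safety limits are illustrative assumptions.

```python
# Hedged sketch: adjust a microgrid's voltage set point from neighbor loading mismatch.
def tertiary_voltage_update(v_set, loading_local, loading_neighbors, gain=0.05,
                            v_min=0.95, v_max=1.05):
    """All quantities in per-unit; returns the adjusted voltage set point."""
    mismatch = sum(ln - loading_local for ln in loading_neighbors)
    v_new = v_set + gain * mismatch
    return min(max(v_new, v_min), v_max)   # keep the set point within safe bounds

# A lightly loaded microgrid beside two heavily loaded neighbors raises its
# voltage slightly, pushing it to export more power.
print(tertiary_voltage_update(1.0, loading_local=0.3, loading_neighbors=[0.7, 0.8]))
```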

Journal ArticleDOI
02 Aug 2016
TL;DR: The background and key features of data deduplication are reviewed, the main applications and industry trends are discussed, and the state-of-the-art research in data deduplication is classified according to the key workflow of the data deduplication process.
Abstract: Data deduplication, an efficient approach to data reduction, has gained increasing attention and popularity in large-scale storage systems due to the explosive growth of digital data. It eliminates redundant data at the file or subfile level and identifies duplicate content by its cryptographically secure hash signature (i.e., collision-resistant fingerprint), which is shown to be much more computationally efficient than the traditional compression approaches in large-scale storage systems. In this paper, we first review the background and key features of data deduplication, then summarize and classify the state-of-the-art research in data deduplication according to the key workflow of the data deduplication process. The summary and taxonomy of the state of the art on deduplication help identify and understand the most important design considerations for data deduplication systems. In addition, we discuss the main applications and industry trend of data deduplication, and provide a list of the publicly available sources for deduplication research and studies. Finally, we outline the open problems and future research directions facing deduplication-based storage systems.
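
A minimal sketch of the fingerprint-based deduplication step described above, assuming fixed-size chunking and an in-memory chunk index; production systems typically use content-defined chunking and persistent, optimized indexes.

```python
# Hedged sketch: chunk a file, fingerprint each chunk with a collision-resistant
# hash, and store only chunks whose fingerprints have not been seen before.
import hashlib

def dedup_store(path, chunk_store, chunk_size=4096):
    """chunk_store: dict mapping fingerprint -> chunk bytes. Returns the file recipe."""
    recipe = []                                   # ordered fingerprints to rebuild the file
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in chunk_store:             # duplicate chunks are stored only once
                chunk_store[fp] = chunk
            recipe.append(fp)
    return recipe

def restore(recipe, chunk_store):
    """Rebuild the original file contents from its recipe."""
    return b"".join(chunk_store[fp] for fp in recipe)
```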

Journal ArticleDOI
TL;DR: Amplification and functional divergence of genes associated with specialized feeding on plants, including genes originally obtained via horizontal gene transfer from fungi and bacteria, contributed to the addition, expansion, and enhancement of the metabolic repertoire of the Asian longhorned beetle and to a lesser degree, other phytophagous insects.
Abstract: Relatively little is known about the genomic basis and evolution of wood-feeding in beetles. We undertook genome sequencing and annotation, gene expression assays, studies of plant cell wall degrading enzymes, and other functional and comparative studies of the Asian longhorned beetle, Anoplophora glabripennis, a globally significant invasive species capable of inflicting severe feeding damage on many important tree species. Complementary studies of genes encoding enzymes involved in digestion of woody plant tissues or detoxification of plant allelochemicals were undertaken with the genomes of 14 additional insects, including the newly sequenced emerald ash borer and bull-headed dung beetle. The Asian longhorned beetle genome encodes a uniquely diverse arsenal of enzymes that can degrade the main polysaccharide networks in plant cell walls, detoxify plant allelochemicals, and otherwise facilitate feeding on woody plants. It has the metabolic plasticity needed to feed on diverse plant species, contributing to its highly invasive nature. Large expansions of chemosensory genes involved in the reception of pheromones and plant kairomones are consistent with the complexity of chemical cues it uses to find host plants and mates. Amplification and functional divergence of genes associated with specialized feeding on plants, including genes originally obtained via horizontal gene transfer from fungi and bacteria, contributed to the addition, expansion, and enhancement of the metabolic repertoire of the Asian longhorned beetle, certain other phytophagous beetles, and to a lesser degree, other phytophagous insects. Our results thus begin to establish a genomic basis for the evolutionary success of beetles on plants.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This work has shown that real-time software decoding of 4K (3840×2160) video with HEVC is feasible on current desktop CPUs using four CPU cores, and that encoding 4K video in real time, on the other hand, remains a challenge.
Abstract: In the family of video coding standards, HEVC has the promise and potential to replace/supplement all the existing standards (MPEG and H.26x series, including H.264/AVC). While the complexity of the HEVC encoder is several times that of the H.264/AVC encoder, the decoder complexity is within the range of the latter. Researchers are exploring ways to reduce the HEVC encoder complexity. Kim et al. have shown that motion estimation (ME) occupies 77–81% of the HEVC encoder implementation, so the focus has been on reducing ME complexity. Several researchers have carried out performance comparisons of HEVC with other standards such as H.264/AVC, MPEG-4 Part 2 Visual, H.262/MPEG-2 Video, H.263, and VP9, and also with image coding standards such as JPEG2000, JPEG-LS, and JPEG-XR. Several tests have shown that HEVC provides improved compression efficiency, up to 50% bit rate reduction for the same subjective video quality compared to H.264/AVC. Besides addressing all current applications, HEVC is designed and developed to focus on two key issues: increased video resolution (up to 8k×4k) and increased use of parallel processing architectures. A brief description of HEVC is provided; for details and implementation, the reader is referred to the JCT-VC documents, overview papers, keynote speeches, tutorials, panel discussions, poster sessions, special issues, test models (TM/HM), web/ftp sites, open source software, test sequences, anchor bit streams, and the latest books on HEVC. Researchers are also exploring transcoding between HEVC and other standards such as MPEG-2 and H.264. Further extensions to HEVC are scalable video coding (SVC), 3D video/multiview video coding, and range extensions, which include screen content coding (SCC), bit depths larger than 10 bits, and color sampling of 4:2:2 and 4:4:4. SCC in general refers to computer-generated objects and screen shots from computer applications (both images and videos) and may require lossless coding. Some of these extensions were finalized by the end of 2014 (the time frame for SCC is late 2016). They also provide fertile ground for R&D. Iguchi et al. have already developed a hardware encoder for super hi-vision (SHV), i.e., ultra HDTV at 7680×4320 pixel resolution, and real-time hardware implementation of an HEVC encoder for 1080p HD video has been done. NHK is planning SHV experimental broadcasting in 2016. A 249-Mpixel/s HEVC video decoder chip for 4K Ultra-HD applications has already been developed. Bross et al. have shown that real-time software decoding of 4K (3840×2160) video with HEVC is feasible on current desktop CPUs using four CPU cores; they also state that encoding 4K video in real time, on the other hand, is a challenge. The multimedia research group (MRC) predicts 2 billion HEVC-based devices by the end of 2016.

Journal ArticleDOI
TL;DR: Based on fuzzy mathematics theory, a fuzzy multi-objective optimization model with related constraints is proposed to minimize the total economic cost and network loss of a microgrid, and test results show that the proposed CBPSO has better convergence performance than BPSO.
Abstract: Based on fuzzy mathematics theory, this paper proposes a fuzzy multi-objective optimization model with related constraints to minimize the total economic cost and network loss of a microgrid. Uncontrollable microsources are considered as negative load, and stochastic net load scenarios are generated to take the uncertainty of their output power and of the load into account. Cooperating with storage devices of optimal capacity, controllable microsources are treated as variables in the optimization process, with consideration of their start and stop strategy. A chaos optimization algorithm is introduced into binary particle swarm optimization (BPSO) to propose chaotic BPSO (CBPSO). The search capability of BPSO is improved via the chaotic search approach of the chaos optimization algorithm. Tests of four benchmark functions show that the proposed CBPSO has better convergence performance than BPSO. Simulation results validate the correctness of the proposed model and the effectiveness of CBPSO.
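
A hedged sketch of a chaotic binary PSO in the spirit described above: standard BPSO velocity/position updates plus a logistic-map chaotic local search around the global best. The parameters and the exact way chaos is injected are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch: BPSO with a chaotic (logistic-map) local search around the best solution.
import numpy as np

def cbpso(fitness, n_bits, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5,
          chaos_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))        # binary positions
    v = rng.uniform(-1, 1, size=(n_particles, n_bits))        # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g, g_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = (rng.random(x.shape) < 1 / (1 + np.exp(-v))).astype(int)  # sigmoid rule
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g, g_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()

        # Chaotic local search: perturb the global best using a logistic-map sequence.
        z = rng.uniform(0.01, 0.99)
        for _ in range(chaos_steps):
            z = 4.0 * z * (1.0 - z)                            # logistic map
            cand = g.copy()
            flip = rng.random(n_bits) < z * 0.1                # chaos-driven bit flips
            cand[flip] = 1 - cand[flip]
            cf = fitness(cand)
            if cf < g_f:
                g, g_f = cand, cf
    return g, g_f

# Example: minimize the number of ones (a trivial objective, for illustration only).
best, best_f = cbpso(lambda b: b.sum(), n_bits=16)
print(best_f)
```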

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Jalal Abdallah, +2898 more (216 institutions)
TL;DR: In this paper, a measurement of the inelastic proton-proton cross section using 60 μb^{-1} of pp collisions at a center-of-mass energy √s of 13 TeV with the ATLAS detector at the LHC is presented.
Abstract: This Letter presents a measurement of the inelastic proton-proton cross section using 60 μb^{-1} of pp collisions at a center-of-mass energy √s of 13 TeV with the ATLAS detector at the LHC. Inelastic interactions are selected using rings of plastic scintillators in the forward region (2.07 < |η| < 3.86), and the cross section is measured in the fiducial region ξ = M_{X}^{2}/s > 10^{-6}, where M_{X} is the larger invariant mass of the two hadronic systems separated by the largest rapidity gap in the event. In this ξ range the scintillators are highly efficient. For diffractive events this corresponds to cases where at least one proton dissociates to a system with M_{X} > 13 GeV. The measured cross section is compared with a range of theoretical predictions. When extrapolated to the full phase space, a cross section of 78.1 ± 2.9 mb is measured, consistent with the inelastic cross section increasing with center-of-mass energy.

Journal ArticleDOI
TL;DR: A summary of electrospinning techniques for enhancing cell infiltration of electrospun scaffolds is provided, which may inspire new electrospinning techniques and new biomedical applications.

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, Ovsat Abdinov, +2814 more (212 institutions)
TL;DR: In this article, the authors describe a model-agnostic search for pairs of jets (dijets) produced by resonant and non-resonant phenomena beyond the Standard Model.

Journal ArticleDOI
TL;DR: An optimal control method is developed in this paper for unknown continuous-time systems with unknown disturbances, and it is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques.
Abstract: An optimal control method is developed in this paper for unknown continuous-time systems with unknown disturbances. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control law. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct the critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. The simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.

Journal ArticleDOI
TL;DR: In this article, an interdisciplinary review is provided on the co-evolving technical and social dynamics of decentralized energy systems, focusing on Distributed Generation (DG), MicroGrids (MG), and Smart MicroGrids (SMG), in order to draw insights for their integration in urban planning and policy, with particular reference to climate change mitigation and adaptation planning.
Abstract: The growth of Decentralized Energy Systems (DES) signals a new frontier in urban energy planning and design of local energy systems. As affordability of renewable energy technologies (RET) increases, cities and urban regions become the venues, not only for energy consumption but also for generation and distribution, which calls for systemic and paradigmatic change in local energy infrastructure. The decentralizing transitions of urban energy systems, particularly solar photovoltaic and thermal technologies, require a comprehensive assessment of their sociotechnical co-evolution – how technologies and social responses evolve together and how their co-evolution affects urban planning and energy policies. So far, urban planning literature has mainly focused on the impact of physical urban forms on efficiency of energy consumption, overlooking how the dynamics of new energy technologies and associated social responses affect local systems of energy infrastructure, the built environments and their residents. This paper provides an interdisciplinary review on the co-evolving technical and social dynamics of DES focusing on Distributed Generation (DG), MicroGrids (MG), and Smart MicroGrids (SMG), in order to draw insights for their integration in urban planning and policy, in particular reference to climate change mitigation and adaptation planning.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: From extensive experiments on the National Lung Screening Trial (NLST) lung cancer data, it is shown that the proposed DeepConvSurv model significantly outperforms four state-of-the-art methods.
Abstract: Traditional Cox proportional hazards models for survival analysis are based on structured features like patients' sex, smoke years, BMI, etc. With the development of medical imaging technology, more and more unstructured medical images are available for diagnosis, treatment, and survival analysis. Traditional survival models utilize these unstructured images by extracting human-designed features from them. However, we argue that those hand-crafted features have limited ability to represent highly abstract information. In this paper, we for the first time develop a deep convolutional neural network for survival analysis (DeepConvSurv) with pathological images. The deep layers in our model can represent more abstract information compared with hand-crafted features from the images. Hence, it will improve survival prediction performance. From our extensive experiments on the National Lung Screening Trial (NLST) lung cancer data, we show that the proposed DeepConvSurv model improves significantly compared with four state-of-the-art methods.
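
A hedged sketch of the general idea behind the approach (a CNN mapping an image to a scalar risk score, trained with the negative Cox partial log-likelihood); the architecture, hyperparameters, and random stand-in data below are illustrative assumptions, not the DeepConvSurv network.

```python
# Hedged sketch: tiny CNN risk model + Cox partial-likelihood loss (no ties handling).
import torch
import torch.nn as nn

class ConvSurv(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.risk = nn.Linear(32, 1)              # scalar log-risk per image

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.risk(h).squeeze(1)

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood, computed over a mini-batch."""
    order = torch.argsort(time, descending=True)  # sort so each risk set is a prefix
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)  # log sum_{j: t_j >= t_i} exp(risk_j)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# One illustrative training step on random tensors standing in for pathology patches.
model = ConvSurv()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
imgs = torch.randn(8, 3, 64, 64)
time = torch.rand(8) * 60                         # survival/censoring times (months)
event = torch.randint(0, 2, (8,)).float()         # 1 = event observed, 0 = censored
loss = cox_ph_loss(model(imgs), time, event)
loss.backward(); opt.step()
```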

Journal ArticleDOI
TL;DR: The authors explore the global process of urban shrinkage in different contexts, argue that the phenomenon is anchored at the local level and subject to particular manifestations, and examine the ways in which policies implemented in shrinking cities differ across national contexts.

Book ChapterDOI
26 Sep 2016
TL;DR: This paper proposes a novel, lightweight defense based on Adaptive Padding that provides a sufficient level of security against website fingerprinting, particularly in realistic evaluation conditions.
Abstract: Website Fingerprinting attacks enable a passive eavesdropper to recover the user's otherwise anonymized web browsing activity by matching the observed traffic with prerecorded web traffic templates. The defenses that have been proposed to counter these attacks are impractical for deployment in real-world systems due to their high cost in terms of added delay and bandwidth overhead. Further, these defenses have been designed to counter attacks that, despite their high success rates, have been criticized for assuming unrealistic attack conditions in the evaluation setting. In this paper, we propose a novel, lightweight defense based on Adaptive Padding that provides a sufficient level of security against website fingerprinting, particularly in realistic evaluation conditions. In a closed-world setting, this defense reduces the accuracy of the state-of-the-art attack from 91% to 20%, while introducing zero latency overhead and less than 60% bandwidth overhead. In an open-world setting, the attack precision is just 1% and drops further as the number of sites grows.
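
A hedged, much-simplified sketch of the adaptive-padding idea: dummy packets are injected whenever the gap after a real packet exceeds a delay sampled from a histogram of typical inter-arrival times, masking telltale bursts and gaps without delaying real packets. The single-state machine and histogram values are illustrative assumptions; the actual defense uses per-direction, multi-state histograms.

```python
# Hedged sketch: fill unusually long gaps in a packet trace with dummy packets.
import random

def adaptive_padding(real_times, gap_histogram):
    """real_times: sorted packet timestamps (s). Returns a (timestamp, is_dummy) trace."""
    trace = []
    for t, t_next in zip(real_times, real_times[1:] + [None]):
        trace.append((t, False))
        cursor = t
        while t_next is not None:
            sampled = random.choice(gap_histogram)        # an "expected" inter-arrival gap
            if cursor + sampled >= t_next:
                break                                     # the real packet arrives first
            cursor += sampled
            trace.append((cursor, True))                  # inject a dummy packet
    return trace

# Example: mask the long silence between two bursts.
padded = adaptive_padding([0.00, 0.01, 0.02, 1.50, 1.51], gap_histogram=[0.02, 0.05, 0.1])
print(sum(1 for _, dummy in padded if dummy), "dummy packets injected")
```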

Journal ArticleDOI
17 May 2016-JAMA
TL;DR: It is suggested that the pathogenesis of reflux esophagitis may be cytokine-mediated rather than the result of chemical injury, and stopping PPI medication was associated with T lymphocyte-predominant esophageal inflammation and basal cell and papillary hyperplasia without loss of surface cells.
Abstract: Importance The histologic changes associated with acute gastroesophageal reflux disease (GERD) have not been studied prospectively in humans. Recent studies in animals have challenged the traditional notion that reflux esophagitis develops when esophageal surface epithelial cells are exposed to lethal chemical injury from refluxed acid. Objective To evaluate histologic features of esophageal inflammation in acute GERD to study its pathogenesis. Design, Setting, and Participants Patients from the Dallas Veterans Affairs Medical Center who had reflux esophagitis successfully treated with proton pump inhibitors (PPIs) began 24-hour esophageal pH and impedance monitoring and esophagoscopy (including confocal laser endomicroscopy [CLE]) with biopsies from noneroded areas of distal esophagus at baseline (taking PPIs) and at 1 week and 2 weeks after stopping the PPI medication. Enrollment began May 2013 and follow-up ended July 2015. Interventions PPIs stopped for 2 weeks. Main Outcomes and Measures Twelve patients (men, 11; mean age, 57.6 years [SD, 13.1]) completed the study. Primary outcome was change in esophageal inflammation 2 weeks after stopping the PPI medication, determined by comparing lymphocyte, eosinophil, and neutrophil infiltrates (each scored on a 0-3 scale) in esophageal biopsies. Also evaluated were changes in epithelial basal cell and papillary hyperplasia, surface erosions, intercellular space width, endoscopic grade of esophagitis, esophageal acid exposure, and mucosal impedance (an index of mucosal integrity). Results At 1 week and 2 weeks after discontinuation of PPIs, biopsies showed significant increases in intraepithelial lymphocytes, which were predominantly T cells (median [range]: 0 (0-2) at baseline vs 1 (1-2) at both 1 week [P = .005] and 2 weeks [P = .002]); neutrophils and eosinophils were few or absent. Biopsies also showed widening of intercellular spaces (confirmed by CLE), and basal cell and papillary hyperplasia developed without surface erosions. Two weeks after stopping the PPI medication, esophageal acid exposure increased (median: 1.2% at baseline to 17.8% at 2 weeks; Δ, 16.2% [95% CI, 4.4%-26.5%], P = .005), mucosal impedance decreased (mean: 2671.3 Ω at baseline to 1508.4 Ω at 2 weeks; Δ, 1162.9 Ω [95% CI, 629.9-1695.9], P = .001), and all patients had evidence of esophagitis. Conclusions and Relevance In this preliminary study of 12 patients with severe reflux esophagitis successfully treated with PPI therapy, stopping PPI medication was associated with T lymphocyte–predominant esophageal inflammation and basal cell and papillary hyperplasia without loss of surface cells. If replicated, these findings suggest that the pathogenesis of reflux esophagitis may be cytokine-mediated rather than the result of chemical injury. Trial Registration clinicaltrials.gov Identifier: NCT01733810

Journal ArticleDOI
TL;DR: The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task.
Abstract: An intelligent human–robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human–robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
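
A minimal sketch of the outer-loop formulation: once the task is posed as a continuous-time LQR problem, the optimal feedback gain follows from the algebraic Riccati equation. The paper obtains this solution without a human model via integral reinforcement learning; the model-based computation below, with assumed 2x2 dynamics and weights, is only the baseline such a procedure would converge to.

```python
# Hedged sketch: solve a continuous-time LQR problem for an assumed task model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -2.0]])        # assumed task/impedance dynamics (illustrative)
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # penalize tracking error and velocity
R = np.array([[0.1]])              # penalize (human/robot) effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # optimal state-feedback gain: u = -K x
print(K)
```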

Journal ArticleDOI
TL;DR: The authors conducted a meta-analysis of 21 papers examining the effect of return policy leniency on purchase and return decisions, and demonstrated that, overall, leniency increases purchases more than returns, and that the return policy factors that influence purchases (money and effort leniency increase purchases) differ from those that influence returns (scope leniency increased returns, while time and exchange leniency reduced returns).