scispace - formally typeset
Author

James Taylor

Bio: James Taylor is an academic researcher from Newcastle University. The author has contributed to research in topics: Laser & Fiber laser. The author has an h-index of 95, has co-authored 1161 publications receiving 39945 citations. Previous affiliations of James Taylor include Institut national de la recherche agronomique & European Spallation Source.


Papers
Journal ArticleDOI
TL;DR: In this article, a zero-parameter energy-truncation model was proposed to predict the mass loss associated with tidal stripping, with the least-bound particles being removed first.
Abstract: Accurate models of the structural evolution of dark matter subhaloes, as they orbit within larger systems, are fundamental to understanding the detailed distribution of dark matter at the present day. Numerical simulations of subhalo evolution support the idea that the mass loss associated with tidal stripping is most naturally understood in energy space, with the particles that are the least bound being removed first. Starting from this premise, we recently proposed a zero-parameter ‘energy-truncation model’ for subhalo evolution. We tested this model with simulations of tidal stripping of satellites with initial NFW profiles, and showed that the energy-truncation model accurately predicts both the mass loss and density profiles. In this work, we apply the model to a variety of Hernquist, Einasto and King profiles. We show that it matches the simulation results quite closely in all cases, indicating that it may serve as a universal model to describe tidally stripped collisionless systems. A key prediction of the energy-truncation model is that the central density of dark matter subhaloes is conserved as they lose mass; this has important implications for dark matter annihilation calculations, and for other observational tests of dark matter.
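The core operation of the energy-truncation model (strip the least-bound particles first until the required mass has been lost) can be illustrated with a minimal sketch. The particle energies, masses, and mass-loss fraction below are made up for illustration; this is not the authors' code:

```python
import numpy as np

# Hypothetical particle data: binding energies in arbitrary units
# (more negative = more tightly bound) and equal particle masses.
rng = np.random.default_rng(0)
n_particles = 1000
energies = rng.uniform(-1.0, 0.0, size=n_particles)
masses = np.full(n_particles, 1.0 / n_particles)

def energy_truncate(energies, masses, mass_loss_fraction):
    """Strip the least-bound (highest-energy) particles first until
    the requested mass fraction has been removed; return the indices
    of the surviving particles."""
    order = np.argsort(energies)                      # most bound first
    cum_mass = np.cumsum(masses[order])
    target = (1.0 - mass_loss_fraction) * masses.sum()
    return order[cum_mass <= target]

survivors = energy_truncate(energies, masses, mass_loss_fraction=0.3)
print(len(survivors))  # roughly 70% of the particles survive
```

Because the cut is made purely in energy space, every surviving particle is more bound than every stripped one, which is the premise the paper starts from.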

2 citations

Proceedings ArticleDOI
26 Jun 1991
TL;DR: It has been learned that much more can be done to provide a fully supportive environment for controls engineering, and it has also become clear that certain things might better be done differently.
Abstract: Recent and future efforts at GE to develop modern environments for Computer-Aided Control Engineering (CACE) are discussed. The basic elements of these systems are:
* a User Interface which combines a "point-and-click" menu- and forms-driven interface with other access modes for the more experienced user,
* a Data-Base Manager organized in terms of projects, models and corresponding results and other related data elements and including control,
* an Expert System Shell, which performs routine higher-level CACE tasks, and
* a data-driven Supervisor that integrates the above elements with existing CACE packages for linear and nonlinear simulation, analysis and design.
As is usually the case, it has been learned that much more can be done to provide a fully supportive environment for controls engineering, and it has also become clear that certain things might better be done differently. This presentation will focus on such areas.

2 citations

Posted Content
TL;DR: In this paper, the authors presented 90% confidence level (CL) upper-limit maps of GW strain power with typical values between 2-20x10^-50 strain^2 Hz^-1 and 5-35x10^-49 strain^2 Hz^-1 sr^-1 for pointlike and extended sources respectively.
Abstract: The gravitational-wave (GW) sky may include nearby pointlike sources as well as astrophysical and cosmological stochastic backgrounds. Since the relative strength and angular distribution of the many possible sources of GWs are not well constrained, searches for GW signals must be performed in a model-independent way. To that end we perform two directional searches for persistent GWs using data from the LIGO S5 science run: one optimized for pointlike sources and one for arbitrary extended sources. The latter result is the first of its kind. Finding no evidence to support the detection of GWs, we present 90% confidence level (CL) upper-limit maps of GW strain power with typical values between 2-20x10^-50 strain^2 Hz^-1 and 5-35x10^-49 strain^2 Hz^-1 sr^-1 for pointlike and extended sources respectively. The limits on pointlike sources constitute a factor of 30 improvement over the previous best limits. We also set 90% CL limits on the narrow-band root-mean-square GW strain from interesting targets including Sco X-1, SN1987A and the Galactic Center as low as ~7x10^-25 in the most sensitive frequency range near 160 Hz. These limits are the most constraining to date and constitute a factor of 5 improvement over the previous best limits.

2 citations

Proceedings ArticleDOI
04 May 2008
TL;DR: In this article, the authors reported a 29 W CW supercontinuum spanning 1.06-1.67 μm with a spectral power density of 50 mW/nm up to 1.4 μm generated in a double-zero PCF.
Abstract: The authors report a 29 W CW supercontinuum spanning 1.06-1.67 μm with a spectral power density of 50 mW/nm up to 1.4 μm generated in a double-zero PCF. The dynamics of formation are analyzed.

2 citations

Proceedings ArticleDOI
01 Jan 2004
TL;DR: In this article, the passive mode locking of a Cr4+:YAG laser generating pulses shorter than 100 fs is described, and the cavity group velocity dispersion has been measured and local resonances have been observed which appear to affect the mode locking.
Abstract: The passive mode locking of a Cr4+:YAG laser generating pulses shorter than 100 fs is described. The cavity group velocity dispersion has been measured and local resonances have been observed which appear to affect the mode locking of the laser.

2 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i — the square root of minus one — which seemed an odd beast at first: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 Jan 2016
TL;DR: The using multivariate statistics is universally compatible with any devices to read, allowing you to get the most less latency time to download any of the authors' books like this one.
Abstract: Thank you for downloading using multivariate statistics. As you may know, people have look hundreds times for their favorite novels like this using multivariate statistics, but end up in infectious downloads. Rather than reading a good book with a cup of tea in the afternoon, instead they juggled with some harmful bugs inside their laptop. using multivariate statistics is available in our digital library an online access to it is set as public so you can download it instantly. Our books collection saves in multiple locations, allowing you to get the most less latency time to download any of our books like this one. Merely said, the using multivariate statistics is universally compatible with any devices to read.

14,604 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
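The mail-filtering example in the last paragraph can be sketched as a tiny learned filter. This toy word-count classifier, with made-up messages and not taken from the paper, is one minimal way rules could be learned from the messages a user keeps versus rejects:

```python
from collections import Counter

# Made-up training data: messages the user kept vs. rejected.
kept = ["meeting at noon", "project update attached"]
rejected = ["win money now", "cheap money offer now"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

kept_counts = word_counts(kept)
rejected_counts = word_counts(rejected)

def looks_rejected(message):
    """Vote word by word: each word leans toward the class in whose
    training messages it appeared more often."""
    score = 0
    for word in message.split():
        if rejected_counts[word] > kept_counts[word]:
            score += 1
        elif kept_counts[word] > rejected_counts[word]:
            score -= 1
    return score > 0

print(looks_rejected("cheap money now"))  # True
print(looks_rejected("project meeting"))  # False
```

Retraining on each new kept/rejected message is what keeps the rules up to date without a software engineer in the loop, which is the point the abstract makes.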

13,246 citations

Book
01 Jan 1994
TL;DR: In this book, the authors present a brief history of LMIs in control theory and discuss some of the standard problems involving LMIs, such as linear matrix inequalities, linear differential inclusions, and matrix problems with analytic solutions.
Abstract: Preface. 1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book. 2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions. 3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix; Positive-Definite Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation. 4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs. 5. Analysis of LDIs: State Properties: Quadratic Stability; Invariant Ellipsoids. 6. Analysis of LDIs: Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties. 7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems. 8. Lure and Multiplier Methods: Analysis of Lure Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters. 9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis. 10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems. Notation. List of Acronyms. Bibliography. Index.
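One of the simplest LMIs covered by the book is the quadratic-stability condition: find P > 0 with A^T P + P A < 0. For a single LTI system this reduces to a Lyapunov equation, which the sketch below solves directly with NumPy via Kronecker-product vectorization; the system matrix A is a made-up example, not one from the book:

```python
import numpy as np

# Quadratic stability of x' = A x asks for P > 0 satisfying the LMI
# A^T P + P A < 0.  For one LTI system it suffices to solve the
# Lyapunov equation A^T P + P A = -Q (Q > 0) and check that P > 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # hypothetical stable system matrix
Q = np.eye(2)

# Vectorize the Lyapunov equation with Kronecker products and solve
# the resulting linear system for the entries of P.
n = A.shape[0]
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

eigvals = np.linalg.eigvalsh((P + P.T) / 2)  # symmetrize, then check P > 0
print(np.all(eigvals > 0))  # True: P > 0, so the system is quadratically stable
```

For genuine LMI problems with matrix uncertainty (the LDIs of Chapters 4-7), no such closed-form reduction exists and the interior-point methods discussed in Chapter 2 are used instead.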

11,085 citations