
Showing papers by "James Taylor" published in 2013


Journal ArticleDOI
TL;DR: How providers initiate and pursue vaccine recommendations is associated with parental vaccine acceptance, and parents had significantly higher odds of resisting vaccine recommendations if the provider used a participatory rather than a presumptive initiation format.
Abstract: OBJECTIVE: To characterize provider-parent vaccine communication and determine the influence of specific provider communication practices on parent resistance to vaccine recommendations. METHODS: We conducted a cross-sectional observational study in which we videotaped provider-parent vaccine discussions during health supervision visits. Parents of children aged 1 to 19 months old were screened by using the Parent Attitudes about Childhood Vaccines survey. We oversampled vaccine-hesitant parents (VHPs), defined as a score ≥50. We developed a coding scheme of 15 communication practices and applied it to all visits. We used multivariate logistic regression to explore the association between provider communication practices and parent resistance to vaccines, controlling for parental hesitancy status and demographic and visit characteristics. RESULTS: We analyzed 111 vaccine discussions involving 16 providers from 9 practices; 50% included VHPs. Most providers (74%) initiated vaccine recommendations with presumptive (eg, “Well, we have to do some shots”) rather than participatory (eg, “What do you want to do about shots?”) formats. Among parents who voiced resistance to provider initiation (41%), significantly more were VHPs than non-VHPs. Parents had significantly higher odds of resisting vaccine recommendations if the provider used a participatory rather than a presumptive initiation format (adjusted odds ratio: 17.5; 95% confidence interval: 1.2–253.5). When parents resisted, 50% of providers pursued their original recommendations (eg, “He really needs these shots”), and 47% of initially resistant parents subsequently accepted recommendations when they did. CONCLUSIONS: How providers initiate and pursue vaccine recommendations is associated with parental vaccine acceptance.
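
The adjusted odds ratios above come from a multivariate logistic regression of parent resistance on provider communication practices, controlling for hesitancy status and visit characteristics. As a rough, hedged sketch of that general analysis pattern (with entirely hypothetical variable names and simulated data, not the study's), it might look like this:

```python
# Hedged sketch: how an adjusted odds ratio such as the one reported above is
# typically computed with multivariate logistic regression. The data frame,
# column names and covariates below are hypothetical, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 111
df = pd.DataFrame({
    "resisted":         rng.integers(0, 2, n),   # 1 = parent voiced resistance
    "participatory":    rng.integers(0, 2, n),   # 1 = participatory initiation format
    "vaccine_hesitant": rng.integers(0, 2, n),   # 1 = PACV score >= 50
    "first_child":      rng.integers(0, 2, n),   # example visit/demographic covariate
})

# Logistic regression of resistance on initiation format, controlling for covariates
model = smf.logit("resisted ~ participatory + vaccine_hesitant + first_child",
                  data=df).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("aOR"),
                 conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```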

374 citations


Journal ArticleDOI
TL;DR: Scores on the PACV predict childhood immunization status and have high reliability, and these results should be validated in different geographic and demographic samples of parents.
Abstract: Importance Acceptance of childhood vaccinations is waning, amplifying interest in developing and testing interventions that address parental barriers to immunization acceptance. Objective To determine the predictive validity and test-retest reliability of the Parent Attitudes About Childhood Vaccines survey (PACV), a recently developed measure of vaccine hesitancy. Design, Setting, and Participants Prospective cohort of English-speaking parents of children aged 2 months and born from July 10 through December 10, 2010, who belonged to an integrated health care delivery system based in Seattle and who returned a completed baseline PACV. Parents who completed a follow-up survey 8 weeks later were included in the reliability analysis. Parents who remained continuous members in the delivery system until their child was 19 months old were included in the validity analysis. Exposure The PACV, scored on a scale of 0 to 100 (100 indicates high vaccine hesitancy). Main Outcomes and Measures Child’s immunization status as measured by the percentage of days underimmunized from birth to 19 months of age. Results Four hundred thirty-seven parents completed the baseline PACV (response rate, 50.5%), and 220 (66.5%) completed the follow-up survey. Of the 437 parents who completed a baseline survey, 310 (70.9%) maintained continuous enrollment. Compared with parents who scored less than 50, parents who scored 50 to 69 on the survey had children who were underimmunized for 8.3% (95% CI, 3.6%-12.8%) more days from birth to 19 months of age; those who scored 70 to 100, 46.8% (40.3%-53.3%) more days. Baseline and 8-week follow-up PACV scores were highly concordant (ρ = 0.844). Conclusions and Relevance Scores on the PACV predict childhood immunization status and have high reliability. Our results should be validated in different geographic and demographic samples of parents.
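
The test-retest reliability above is summarised by a concordance coefficient (ρ = 0.844). The listing does not spell out the estimator, so the sketch below assumes Spearman's rank correlation as one common choice; the synthetic score arrays merely stand in for baseline and 8-week follow-up PACV scores.

```python
# Hedged sketch: computing a test-retest concordance like the rho reported above.
# Spearman's rank correlation is an assumption here; the score arrays are synthetic
# stand-ins for baseline and 8-week follow-up PACV scores (0-100 scale).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
baseline = rng.uniform(0, 100, size=220)                              # baseline PACV scores
followup = np.clip(baseline + rng.normal(0, 10, size=220), 0, 100)    # 8-week retest

rho, p_value = spearmanr(baseline, followup)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.2g}")
```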

227 citations


Journal ArticleDOI
TL;DR: Maternal report of inconsolable infant crying may have a stronger association with postpartum depressive symptoms than infant colic, and asking a mother about her ability to soothe her infant may be more relevant for potential intervention than questions about crying and fussing duration alone.
Abstract: OBJECTIVE: To quantify the extent to which maternal report of inconsolable infant crying, rather than colic (defined by Wessel’s criteria of daily duration of fussing and crying >3 hours), is associated with maternal postpartum depressive symptoms. METHODS: Participants were 587 mothers who were recruited shortly before or after delivery and followed longitudinally. At 5 to 6 weeks postpartum, mothers recorded the duration and mode (fussing, crying, or inconsolable crying) of their infant’s distress by using the Baby’s Day Diary. The Edinburgh Postnatal Depression Scale (EPDS) was administered at enrollment and at 8 weeks postpartum. Using regression models that included baseline EPDS scores and multiple confounders, we examined associations of colic and inconsolable crying with later maternal EPDS scores at 8 weeks postpartum. RESULTS: Sixty mothers (10%) met the EPDS threshold for “possible depression” (score ≥9) at 8 weeks postpartum. For mothers reporting >20 minutes of inconsolable crying per day, the adjusted odds ratio for an EPDS score ≥9 was 4.0 (95% confidence interval: 2.0–8.1), whereas the adjusted odds ratio for possible depression in mothers whose infants had colic was 2.0 (95% confidence interval: 1.1–3.7). These associations persisted after adjusting for baseline depression symptoms. CONCLUSIONS: Maternal report of inconsolable infant crying may have a stronger association with postpartum depressive symptoms than infant colic. Asking a mother about her ability to soothe her infant may be more relevant for potential intervention than questions about crying and fussing duration alone. Abbreviations: aOR, adjusted odds ratio; CI, confidence interval; EPDS, Edinburgh Postnatal Depression Scale; RCT, randomized controlled trial.

108 citations


Journal ArticleDOI
TL;DR: In this article, the authors adopt a rule-based approach, which allows incorporation of prior expert knowledge of load profiles into the statistical model, and use triple seasonal Holt-Winters-Taylor (HWT) exponential smoothing, triple seasonal autoregressive moving average (ARMA), artificial neural networks (ANNs), and triple seasonal intraweek singular value decomposition (SVD)-based exponential smoothing.
Abstract: Numerous methods have been proposed for forecasting load for normal days. Modeling of anomalous load, however, has often been ignored in the research literature. Occurring on special days, such as public holidays, anomalous load conditions pose considerable modeling challenges due to their infrequent occurrence and significant deviation from normal load. To overcome these limitations, we adopt a rule-based approach, which allows incorporation of prior expert knowledge of load profiles into the statistical model. We use triple seasonal Holt-Winters-Taylor (HWT) exponential smoothing, triple seasonal autoregressive moving average (ARMA), artificial neural networks (ANNs), and triple seasonal intraweek singular value decomposition (SVD) based exponential smoothing. These methods have been shown to be competitive for modeling load for normal days. The methodological contribution of this paper is to demonstrate how these methods can be adapted to model load for special days, when used in conjunction with a rule-based approach. The proposed rule-based method is able to model normal and anomalous load in a unified framework. Using nine years of half-hourly load for Great Britain, we evaluate point forecasts, for lead times from one half-hour up to a day ahead. A combination of two rule-based methods generated the most accurate forecasts.
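
The abstract describes combining a rule-based treatment of special days with standard seasonal forecasting methods. Purely as an illustration of that idea, the sketch below flags a hypothetical public holiday, substitutes the most recent Sunday's profile (an assumed rule, not necessarily one the authors use), and then fits an ordinary single-seasonal exponential smoothing model; the paper's triple-seasonal HWT, ARMA, ANN and SVD-based methods are considerably more elaborate.

```python
# Hedged sketch of the general rule-based idea: pre-process the load series so that
# special days are replaced by a "normal" reference profile before a standard
# seasonal forecaster is fitted. The rule used here (treat a public holiday like the
# most recent Sunday) and the model settings are illustrative assumptions only.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

periods_per_day = 48                      # half-hourly data
idx = pd.date_range("2012-01-02", periods=periods_per_day * 28, freq="30min")
daily = np.sin(2 * np.pi * np.arange(len(idx)) / periods_per_day)
load = pd.Series(30 + 10 * daily + np.random.default_rng(2).normal(0, 1, len(idx)),
                 index=idx)

holidays = pd.to_datetime(["2012-01-16"])  # hypothetical public holiday

def apply_holiday_rule(series, holidays):
    """Replace each holiday's half-hourly profile with the most recent Sunday's."""
    cleaned = series.copy()
    days = series.index.normalize()
    for day in holidays:
        back = (day.dayofweek + 1) % 7 or 7        # days back to the last Sunday
        sunday = day - pd.Timedelta(days=back)
        cleaned.loc[days == day] = series[days == sunday].to_numpy()
    return cleaned

cleaned = apply_holiday_rule(load, holidays)

# Fit a (single) seasonal exponential smoothing model to the cleaned series
model = ExponentialSmoothing(cleaned, trend=None, seasonal="add",
                             seasonal_periods=periods_per_day).fit()
forecast = model.forecast(periods_per_day)  # one day ahead, half-hour by half-hour
print(forecast.head())
```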

92 citations


Book
10 Apr 2013
TL;DR: Precision agriculture is an approach to managing the variability in production agriculture in a more economic and environmentally efficient manner as mentioned in this paper, which has been pioneered as a management tool in the grains industry, and while its development and uptake continues to grow amongst grain farmers worldwide, a broad range of other cropping industries have embraced the concept.
Abstract: Precision Agriculture (PA) is an approach to managing the variability in production agriculture in a more economic and environmentally efficient manner. It has been pioneered as a management tool in the grains industry, and while its development and uptake continues to grow amongst grain farmers worldwide, a broad range of other cropping industries have embraced the concept. This book explains general PA theory, identifies and describes essential tools and techniques, and includes practical examples from the grains industry. Readers will gain an understanding of the magnitude, spatial scale and seasonality of measurable variability in soil attributes, plant growth and environmental conditions. They will be introduced to the role of sensing systems in measuring crop, soil and environment variability, and discover how this variability may have a significant impact on crop production systems. Precision Agriculture for Grain Production Systems will empower crop and soil science students, agronomy and agricultural engineering students, as well as agronomic advisors and farmers to critically analyse the impact of observed variation in resources on crop production and management decisions.

78 citations


Journal ArticleDOI
TL;DR: A mid-infrared Raman-soliton continuum extending from 1.9 to 3 µm in a highly germanium-doped silica-clad fiber, pumped by a nanotube mode-locked thulium-doped fiber system delivering 12 kW sub-picosecond pulses at 1.95 µm, is demonstrated.
Abstract: We demonstrate a mid-infrared Raman-soliton continuum extending from 1.9 to 3 µm in a highly germanium-doped silica-clad fiber, pumped by a nanotube mode-locked thulium-doped fiber system, delivering 12 kW sub-picosecond pulses at 1.95 µm. This simple and robust source of light covers a portion of the atmospheric transmission window.

74 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe missed opportunities for meningococcal (MCV), tetanus-diphtheria-acellular pertussis (Tdap), and human papillomavirus (HPV) vaccination among adolescents.

69 citations


Journal ArticleDOI
TL;DR: In this article, a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility is proposed to estimate value at risk (VaR) for S&P 500 daily index returns.
Abstract: This paper proposes VaR estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series and the different information supplied by the market’s expectation of risk. Forecast combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models, a procedure that has not been considered in the VaR area so far. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that the implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P500 daily returns.
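
One way to picture "plugging implied volatility into the CAViaR models" is to add it as a regressor in the CAViaR recursion and estimate the parameters by minimising the quantile (tick) loss. The sketch below does that for a symmetric-absolute-value-style specification on simulated data; it is an illustration of the idea, not the paper's estimation procedure or data.

```python
# Hedged sketch: a symmetric-absolute-value CAViaR recursion augmented with implied
# volatility as an extra regressor, estimated by minimising the quantile ("tick") loss.
# Simulated data; an illustration of the idea only, not the paper's specification.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, theta = 1000, 0.01                                      # sample size, VaR level
iv = 0.01 + 0.005 * np.abs(np.sin(np.arange(T) / 50))      # stand-in implied volatility
returns = rng.normal(0, iv)                                # stand-in daily returns

def caviar_path(params, returns, iv):
    b0, b1, b2, b3 = params
    q = np.empty_like(returns)
    q[0] = np.quantile(returns[:100], theta)               # initialise with empirical quantile
    for t in range(1, len(returns)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1]) + b3 * iv[t - 1]
    return q

def tick_loss(params, returns, iv):
    q = caviar_path(params, returns, iv)
    u = returns - q
    return np.mean((theta - (u < 0)) * u)                  # quantile regression objective

res = minimize(tick_loss, x0=[-0.01, 0.8, -0.1, -0.5], args=(returns, iv),
               method="Nelder-Mead")
print("estimated CAViaR parameters:", res.x)
print("final one-day-ahead VaR estimate:", caviar_path(res.x, returns, iv)[-1])
```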

46 citations


Journal ArticleDOI
TL;DR: This data-driven approach focuses on automated methods for defining a suite of plausibility test parameter thresholds, which scrutinize the data range and variance of each measurement type by employing a suite of binary checks.
Abstract: National and international networks and observatories of terrestrial-based sensors are emerging rapidly. As such, there is demand for a standardized approach to data quality control, as well as interoperability of data among sensor networks. The National Ecological Observatory Network (NEON) has begun constructing its first terrestrial observing sites, with 60 locations expected to be distributed across the US by 2017. This will result in over 14 000 automated sensors recording more than 100 TB of data per year. These data are then used to create other datasets and subsequent "higher-level" data products. In anticipation of this challenge, an overall data quality assurance plan has been developed and the first suite of data quality control measures defined. This data-driven approach focuses on automated methods for defining a suite of plausibility test parameter thresholds. Specifically, these plausibility tests scrutinize the data range and variance of each measurement type by employing a suite of binary checks. The statistical basis for each of these tests is developed, and the methods for calculating test parameter thresholds are explored here. While these tests have been used elsewhere, we apply them in a novel approach by calculating their relevant test parameter thresholds. Finally, implementing automated quality control is demonstrated with preliminary data from a NEON prototype site.
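
The plausibility tests described above are binary pass/fail checks of each observation against pre-computed thresholds. The sketch below shows two such checks (a range test and a minimal-variance "stuck sensor" test) with purely illustrative thresholds; the actual NEON test definitions and parameter values are in the paper and are not reproduced here.

```python
# Hedged sketch: two plausibility tests of the kind described above, implemented as
# binary (pass/fail) checks against pre-computed thresholds. The threshold values and
# the exact form of the variance (persistence) test are illustrative assumptions.
import numpy as np

def range_test(x, lo, hi):
    """Flag 1 where an observation falls outside the plausible range [lo, hi]."""
    return ((x < lo) | (x > hi)).astype(int)

def persistence_test(x, window, min_variance):
    """Flag 1 where the variance over a moving window is implausibly small
    (e.g. a stuck sensor)."""
    flags = np.zeros(len(x), dtype=int)
    for i in range(window, len(x) + 1):
        if np.var(x[i - window:i]) < min_variance:
            flags[i - window:i] = 1
    return flags

# Example: half-hourly air temperature with a spike and a stuck interval
temps = np.concatenate([20 + np.random.default_rng(4).normal(0, 0.5, 100),
                        [60.0],                    # implausible spike
                        np.full(20, 21.3)])        # stuck sensor
print("range failures:      ", int(range_test(temps, lo=-40, hi=50).sum()))
print("persistence failures:", int(persistence_test(temps, window=12,
                                                    min_variance=1e-4).sum()))
```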

44 citations


Journal ArticleDOI
01 Feb 2013-Geoderma
TL;DR: In this paper, a regression tree approach was used to model soil depth and water table depth, recorded as a binary 'deep' or 'shallow' response, using ASTER imagery-derived covariates and digital terrain attributes (DTAs).
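
As a rough illustration of the modelling setup in the TL;DR (a tree-based model of a binary 'deep'/'shallow' response from imagery and terrain covariates), the sketch below fits a scikit-learn decision tree to simulated covariates; the covariates, data and software are assumptions for illustration only, not the study's.

```python
# Hedged sketch: fitting a decision tree to a binary 'deep'/'shallow' response from
# terrain and imagery-derived covariates, illustrating the modelling setup only.
# The covariates and data are simulated, not the study's ASTER bands or terrain attributes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),      # e.g. an imagery-derived band ratio (hypothetical)
    rng.uniform(0, 30, n),     # e.g. slope in degrees (hypothetical)
    rng.uniform(0, 100, n),    # e.g. a wetness-related terrain attribute (hypothetical)
])
# Simulated rule: soils tend to be 'deep' on gentle, wet positions
y = ((X[:, 1] < 10) & (X[:, 2] > 40)).astype(int)   # 1 = deep, 0 = shallow

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(tree.score(X_test, y_test), 2))
```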

40 citations


Journal ArticleDOI
TL;DR: In this article, an empirically derived spatial model was proposed to extrapolate midday stem water potential (MSWP) measurements over a furrow irrigated grapevine field from a single reference site.

Journal ArticleDOI
TL;DR: In this article, a brief discussion on the need to consider auto-correlation and the effective sample size when using Pearson's correlation in precision agriculture research is presented, supported by an example using spatial data on vine size and canopy vigour in a juice-grape vineyard.
Abstract: Pearson’s correlation is a commonly used descriptive statistic in many published precision agriculture studies, not only in the Precision Agriculture Journal, but also in other journals that publish in this domain. Very few of these articles take into consideration auto-correlation in data when performing correlation analysis, despite a statistical solution being available. A brief discussion on the need to consider auto-correlation and the effective sample size when using Pearson’s correlation in precision agriculture research is presented. The discussion is supported by an example using spatial data on vine size and canopy vigour in a juice-grape vineyard. The example data demonstrated that the p-value of the correlation between vine size and canopy vigour increased when auto-correlation was accounted for, potentially to a non-significant level depending on the desired α-level. The example data also demonstrated that the method by which data are processed (interpolated) to achieve co-located data will also affect the amount of auto-correlation and the effective sample size. The results showed that for the same variables, with different approaches to data co-location, a lower r-value may have a lower p-value and potentially hold more statistical significance.
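
The correction the abstract alludes to replaces the nominal sample size with an effective sample size before testing the significance of r. The sketch below uses one common first-order approximation, n_eff = n(1 - r1*r2)/(1 + r1*r2), where r1 and r2 are the lag-1 autocorrelations of the two series; the paper's own adjustment may differ, and the transect data here are simulated.

```python
# Hedged sketch: adjusting the significance of Pearson's r for autocorrelation by
# using an effective sample size. The first-order approximation below is one common
# choice from the spatial statistics literature; the authors' method may differ.
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def pearson_with_effective_n(x, y):
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    n_eff = max(3.0, n * (1 - r1 * r2) / (1 + r1 * r2))   # effective sample size
    t = r * np.sqrt((n_eff - 2) / (1 - r**2))             # t statistic using n_eff
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p

# Example with strongly autocorrelated (smoothed) series along a transect
rng = np.random.default_rng(6)
vine_size = np.convolve(rng.normal(size=300), np.ones(20) / 20, mode="valid")
vigour = vine_size + 0.5 * np.convolve(rng.normal(size=300), np.ones(20) / 20, mode="valid")
r, n_eff, p = pearson_with_effective_n(vine_size, vigour)
print(f"r = {r:.2f}, effective n = {n_eff:.1f}, adjusted p = {p:.3g}")
```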

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate passive mode-locking of a Raman fiber laser at 1.12 μm using a nanotube-based saturable absorber, and a regular train of pulses, with a duration of 236 ps at the fundamental repetition frequency of the cavity, is generated by the all-normal dispersion oscillator.
Abstract: We demonstrate passive mode-locking of a Raman fiber laser at 1.12 μm, using a nanotube-based saturable absorber. A regular train of pulses, with a duration of 236 ps at the fundamental repetition frequency of the cavity, is generated by the all-normal dispersion oscillator. Importantly, this simple system is pumped with a continuous wave Yb fiber laser, removing the need for complex synchronous pumping schemes; pulse-shaping instead depends on the action of the saturable absorber and a balance of dissipative processes. These results illustrate the flexibility of combining Raman amplification with a nanotube-based absorber for wavelength-versatile pulsed sources.

Journal ArticleDOI
TL;DR: A near-visible parametric wavelength converter comprising a polarization-maintaining photonic crystal fiber pumped by a highly versatile diode-seeded master-oscillator power amplifier system based around 1.06 μm is reported.
Abstract: We report a near-visible parametric wavelength converter comprising a polarization-maintaining photonic crystal fiber (PM-PCF) pumped by a highly versatile diode-seeded master-oscillator power amplifier system based around 1.06 μm. The device is broadly tunable in wavelength (0.74–0.81 μm), pulse duration (0.2–1.5 ns) and repetition rate (1–30 MHz). A maximum anti-Stokes slope conversion efficiency of 14.9% is achieved with corresponding anti-Stokes average output powers of 845 mW, at a wavelength of 0.775 μm.

Journal ArticleDOI
TL;DR: The spectral masking of a phase modulated diode laser is used to produce a train of picosecond pulses which are compressed using a fibre-grating compressor followed by subsequent adiabatic soliton compression to the femtosecond regime using a tapered photonic crystal fiber.
Abstract: We present a laser system capable of producing 190 femtosecond pulses at a repetition rate of 20 GHz. The spectral masking of a phase modulated diode laser is used to produce a train of picosecond pulses which are compressed using a fibre-grating compressor followed by subsequent adiabatic soliton compression to the femtosecond regime using a tapered photonic crystal fiber.

Journal ArticleDOI
TL;DR: In this article, the authors examined the asymmetric relationship between price and implied volatility and the associated extreme quantile dependence using linear and non linear quantile regression approach and demonstrated that the relationship between the volatility and market return as quantified by Ordinary Least Square (OLS) regression is not uniform across the distribution of the volatility price return pairs using quantile regressions.
Abstract: The purpose of this paper is to examine the asymmetric relationship between price and implied volatility and the associated extreme quantile dependence using linear and non linear quantile regression approach Our goal in this paper is to demonstrate that the relationship between the volatility and market return as quantified by Ordinary Least Square (OLS) regression is not uniform across the distribution of the volatility-price return pairs using quantile regressions We examine the bivariate relationship of six volatility-return pairs, viz CBOE-VIX and S&P-500, FTSE-100 Volatility and FTSE-100, NASDAQ-100 Volatility (VXN) and NASDAQ, DAX Volatility (VDAX) and DAX-30, CAC Volatility (VCAC) and CAC-40 and STOXX Volatility (VSTOXX) and STOXX The assumption of a normal distribution in the return series is not appropriate when the distribution is skewed and hence OLS does not capture the complete picture of the relationship Quantile regression on the other hand can be set up with various loss functions, both parametric and non-parametric (linear case) and can be evaluated with skewed marginal based copulas (for the non linear case) Which is helpful in evaluating the non-normal and non-linear nature of the relationship between price and volatility In the empirical analysis we compare the results from linear quantile regression (LQR) and copula based non linear quantile regression known as copula quantile regression (CQR) The discussion of the properties of the volatility series and empirical findings in this paper have significance for portfolio optimization, hedging strategies, trading strategies and risk management in general
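
A minimal illustration of the linear quantile regression (LQR) side of this comparison: regress returns on changes in implied volatility at several quantiles and observe how the slope varies across the distribution. The data below are simulated, and the copula-based CQR variant is not reproduced.

```python
# Hedged sketch: a linear quantile regression (LQR) of index returns on changes in
# implied volatility, estimated at several quantiles to show how the slope varies
# across the distribution. Simulated data; the copula-based CQR variant is omitted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
d_vix = rng.normal(0, 1, n)                 # stand-in change in implied volatility
ret = -0.5 * d_vix - 0.3 * d_vix * rng.exponential(1, n) + rng.normal(0, 1, n)
df = pd.DataFrame({"ret": ret, "d_vix": d_vix})

for q in (0.05, 0.50, 0.95):
    fit = smf.quantreg("ret ~ d_vix", df).fit(q=q)
    print(f"quantile {q:.2f}: slope on d_vix = {fit.params['d_vix']:.3f}")
```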

Journal ArticleDOI
TL;DR: In this paper, the authors examined and quantified the spatial and temporal variation in vine size, rather than a canopy sensor response, at a block (∼1 ha) level.
Abstract: Background and Aims Studies on vine size variation have generally been limited to small plot studies, particularly for correlation with canopy imaging. Research and anecdotal reports indicate there is a temporal stability in the spatial patterns of imagery of the canopy. This study directly examines and quantifies the spatial and temporal variation in vine size, rather than a canopy sensor response, at a block (∼1 ha) level. Methods and Results The mass of pruned wood for each individual vine was measured for 3 consecutive years in a 0.93-ha vineyard. The spatial and temporal variability in pruning mass was interrogated with geostatistics and map comparison methods. Potential management units were derived from these data and used to verify the temporal response in vine size. Conclusions The majority of variance in pruning mass occurred at a vine-to-vine scale; however, the autocorrelated variance exhibited a strong, stable spatial structure over 3 consecutive years. Map comparison methods were shown to be an alternate and visually demonstrable method of comparing spatio-temporal patterns. Significance of the Study The temporal stability of spatial patterns in vine size would indicate that it is not necessary to measure vine size annually and that historical information can drive site- or zone-specific management decisions. The large vine-to-vine variation indicates that high spatial resolution, vine-specific sensing and decision support systems are needed if the objective is to manage as much of the variability in vine size as possible.

Proceedings ArticleDOI
14 Nov 2013
TL;DR: In this study a much more difficult process was treated and the method presented by Laylabadi and Taylor was explored, refined and extended to increase efficiency (reduce computation time), making NDDR more feasible for a wider variety of applications.
Abstract: Data reconciliation is a well-known method in on-line process control engineering aimed at estimating the true values of corrupted measurements under constraints. Early nonlinear dynamic data reconciliation (NDDR) studies considered models that were simple and of low order. In such cases the ability to run the NDDR algorithm in real time for relatively slow processes is not a serious problem, despite the heavy computational burden imposed by NDDR. In this study a much more difficult process was treated and the method presented by Laylabadi and Taylor [1], [2] was explored, refined and extended to increase efficiency (reduce computation time). In addition, a new hybrid NDDR method is proposed and a demonstrative example performed to show the promise of this approach in reducing the computational burden and handling industrial processes for which a realistic dynamic model does not exist. This contribution makes NDDR more feasible for a wider variety of applications.
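
Data reconciliation adjusts noisy measurements as little as possible (in a weighted least-squares sense) while forcing them to satisfy process constraints. The sketch below shows that core idea for a trivial steady-state mass balance; the nonlinear dynamic (NDDR) formulations treated in the paper, and the hybrid method it proposes, are much richer than this.

```python
# Hedged sketch: the core idea of data reconciliation, namely adjusting noisy
# measurements as little as possible (weighted least squares) while forcing them to
# satisfy a process constraint. A steady-state mass-balance toy example, far simpler
# than the nonlinear dynamic (NDDR) formulations discussed in the paper.
import numpy as np
from scipy.optimize import minimize

measured = np.array([10.2, 4.9, 5.6])        # measured flows: feed, product A, product B
sigma = np.array([0.2, 0.1, 0.15])           # measurement standard deviations

def objective(x):
    # weighted squared deviation of reconciled values from measurements
    return np.sum(((x - measured) / sigma) ** 2)

constraints = [{"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]}]  # feed = A + B

result = minimize(objective, x0=measured, constraints=constraints, method="SLSQP")
print("measured:  ", measured)
print("reconciled:", np.round(result.x, 3))   # satisfies the mass balance exactly
```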

Journal ArticleDOI
TL;DR: An open source framework designed for integrating georeferenced and temporal data into decision support tools is presented, based upon open source toolboxes, and its design is inspired by the fuzzy software capabilities developed in FisPro for ordinary non-georeFerenced data.
Abstract: In many fields, due to the increasing number of automatic sensors and devices, there is an emerging need to integrate georeferenced and temporal data into decision support tools. Geographic Information Systems (GIS) and Geostatistics lack some functionalities for modelling and reasoning using georeferenced data. Soft computing techniques and software suited to these needs may be useful to implement new functionalities and use them for modelling and decision making. This work presents an open source framework designed for that purpose. It is based upon open source toolboxes, and its design is inspired by the fuzzy software capabilities developed in FisPro for ordinary non-georeferenced data. Two real world applications in Agronomy are included, and some perspectives are given to meet the challenge of using soft computing for georeferenced data.

Journal ArticleDOI
TL;DR: Decision-making in pediatrics is often intrinsically imperfect because there is simply an absence of or paucity of data to help quantify many of the risks, benefits, and outcomes associated with different possible therapeutic options.
Abstract: Risk is implicit in all clinical decision-making. Whether a clinician refrains from ordering a head computed tomography scan for a patient with a headache or decides to obtain blood work on a child with a fever, each decision involves a balance between accepting and limiting risk. Ideally, clinicians strive to achieve this balance. Clinicians accept some risk (eg, the risk of missing a rare disease) to avoid placing undue burden on the patient and the health care system with invasive, expensive, and/or potentially unnecessary testing. Yet, clinicians also strive not to assume too much risk so that timely diagnoses are made and morbidity and mortality prevented.1 Achieving this balance is difficult for a number of reasons. First, clinicians often lack precise estimates of the risks and harms for a given clinical scenario. There is simply an absence of or paucity of data to help quantify many of the risks, benefits, and outcomes associated with different possible therapeutic options. This confounds the ability to know whether pursuing 1 particular option corresponds to accepting too much or too little risk. As a result, decision-making is often intrinsically imperfect. Second, even if precise estimates of the involved risks are known, determining the threshold constituting acceptable risk (ie, the level above which too much risk would be assumed) is largely subjective. The issue of acceptable risk is inherently a matter of values, which, in pediatrics, includes not only the clinician’s values but also those of the parent and sometimes the child. Although there is general agreement that it is unacceptable for a parent to assume high levels of preventable risk and harm on behalf of his or her child, what constitutes a high level of risk and harm? Is it a 1 in 100 or 1 in 100 000 risk of the child …

DOI
01 Jan 2013
TL;DR: In this article, a weekly survey of canopy NDVI with a proximal-mounted canopy sensor was undertaken in a cool-climate juice-grape vineyard, where sensing was performed at different positions in the canopy.
Abstract: A weekly survey of canopy NDVI with a proximal-mounted canopy sensor was undertaken in a cool-climate juice-grape vineyard. Sensing was performed at different positions in the canopy. Sensing around the top-wire led to saturation problems; however, sensing in the growing region of the canopy led to consistently non-saturated results throughout the season. With this directed sensing, a spatial pattern in NDVI 2–4 weeks after flowering could be generated that approximated the spatial pattern in NDVI at the end of the season. This is earlier than has been previously reported and may allow for proactive within-season canopy management.

Journal ArticleDOI
01 Oct 2013-Vaccine
TL;DR: A minority of King County parents use an ACIS, no single type of ACIS predominates, and ACIS use was significantly associated with non-Hispanic white parents and with parents of children 12-23 months old.

Journal ArticleDOI
TL;DR: The nonlinear saturable absorption of an ionically-doped colored glass filter is measured directly using a Z-scan technique, and the potential of this material as a saturable absorber in fiber lasers is demonstrated for the first time.
Abstract: The nonlinear saturable absorption of an ionically-doped colored glass filter is measured directly using a Z-scan technique. For the first time, we demonstrate the potential of this material as a saturable absorber in fiber lasers. We achieve mode-locking of an ytterbium-doped system. Mode-locking of cavities with all-positive and net-negative group velocity dispersion is demonstrated, achieving pulse durations of 60 ps and 4.1 ps, respectively. This inexpensive and optically robust material, with the potential for broadband operation, could supplant other saturable absorber devices in affordable mode-locked fiber lasers.

01 Jan 2013
TL;DR: This paper proposes value-at-risk estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility, which merges information from the historical time series and the different information supplied by the market's expectation of risk.
Abstract: This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series and the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models—a procedure that has not been considered in the VaR area so far. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that the implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns.

Journal ArticleDOI
TL;DR: A variety of methods and ideas have been tried for load forecasting, with varying degrees of success as mentioned in this paper, and exponential smoothing stands out as a simple yet powerful approach whereby the prediction is constructed from a weighted average of past observations with exponentially smaller weights being assigned to older observations.
Abstract: Rainer Göb, Kristina Lurz and Antonio Pievatolo (hereinafter GLP) address a very important issue in power systems management—load forecasting. Generally, load forecasting is concerned with the accurate prediction of the electric load (or demand) for specific geographical locations and over the different periods of the planning horizon. However, as in the discussed paper, the quantity of interest is usually the hourly total load, and the planning horizon is short-term—it ranges from a few minutes to a few weeks. Short-term load forecasting has become increasingly important since the rise of competitive energy markets. Electric utilities (called ‘energy vendors’ by GLP) are the most vulnerable as they typically cannot pass costs directly to the retail consumers. When electricity sectors were regulated, utility monopolies used short-term load forecasting to ensure the reliability of supply (prevent overloading, reduce occurrences of equipment failures, etc.). Nowadays, the costs of over-contracting/under-contracting and then selling/buying power in the balancing (or real-time) market are typically so high that they can lead to huge financial losses of the utility and bankruptcy in the extreme case. Load forecasting has become the central and integral process in the planning and operation not only of electric utilities (as GLP clearly point out) but also of energy suppliers, system operators and other market participants. A variety of methods and ideas have been tried for load forecasting, with varying degrees of success. Following Weron [1], GLP classify them into two broad categories: (i) statistical approaches, including similar-day (or naive), exponential smoothing, regression and time series methods; and (ii) artificial intelligence-based techniques. Among them, exponential smoothing stands out as a simple yet powerful approach, whereby the prediction is constructed from a weighted average of past observations with exponentially smaller weights being assigned to older observations. More complex variants—such as the Holt-Winters method—have been developed to model time series with seasonal and trend components [1, 2]. Application of exponential smoothing to hourly electricity load data requires further generalization to accommodate the prevailing seasonalities (daily, weekly and possibly annual) and weather-related exogenous variables (called ‘covariates’ by GLP). This discussion paper offers comments in three sections. The first section discusses the model and its position within the exponential smoothing literature on load forecasting. The second section comments on the empirical part of the paper. The final section offers recommendations for further research.
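
The weighted-average description of exponential smoothing in the passage above corresponds to the recursion s_t = αy_t + (1 − α)s_{t−1}, which assigns weight α(1 − α)^k to the observation k steps in the past. A minimal sketch, with an arbitrary smoothing constant and simulated load:

```python
# Hedged sketch: simple exponential smoothing written as the recursion described in
# the discussion above. The smoothed level s_t serves as the forecast for period t+1,
# and is a weighted average of past observations with geometrically decaying weights.
# Data and the smoothing constant are illustrative only.
import numpy as np

def simple_exponential_smoothing(y, alpha):
    """Return the smoothed levels s_t = alpha * y_t + (1 - alpha) * s_{t-1}."""
    s = np.empty(len(y))
    s[0] = y[0]
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

alpha = 0.3
load = 100 + 10 * np.sin(np.arange(48) / 8) + np.random.default_rng(8).normal(0, 2, 48)
forecast_next = simple_exponential_smoothing(load, alpha)[-1]

# Equivalent weight on the observation k steps in the past: alpha * (1 - alpha) ** k
weights = alpha * (1 - alpha) ** np.arange(5)
print("next-period forecast:", round(forecast_next, 2))
print("weights on the 5 most recent observations:", np.round(weights, 3))
```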

DOI
01 Jan 2013
TL;DR: In this paper, a univariate segmentation algorithm was adapted to a bivariate analysis to investigate zoning based on both yield and protein responses in an eastern Australian wheat field, which provided a zone-by-zone interpretation of the agronomic response to nitrogen (N).
Abstract: A univariate segmentation algorithm has recently been developed for precision agricultural applications. This algorithm is adapted to a bivariate analysis to investigate zoning based on both yield and protein responses in an eastern Australian wheat field. The intention is to provide a zone-by-zone interpretation of the agronomic response to nitrogen (N). The segmentation algorithm provided management zone results comparable with the more widely used k-means classification. The algorithm is still under development but allows expert knowledge to be incorporated into the zone delineation process.

Proceedings ArticleDOI
09 Jun 2013
TL;DR: In this article, the authors demonstrate a Raman-soliton continuum extending from 2 to 3 μm, in a highly germanium-doped silica-clad fiber, pumped by a nanotube mode-locked thulium doped fiber system delivering 12 kW sub-picosecond pulses at 1.95 μm.
Abstract: We demonstrate a Raman-soliton continuum extending from 2 to 3 μm, in a highly germanium-doped silica-clad fiber, pumped by a nanotube mode-locked thulium-doped fiber system delivering 12 kW sub-picosecond pulses at 1.95 μm.

Proceedings ArticleDOI
09 Jun 2013
TL;DR: In this article, a PM-PCF based near-visible parametric wavelength converter, pumped by a diode-seeded master-oscillator power amplifier, is presented.
Abstract: We report a PM-PCF based near-visible parametric wavelength converter, pumped by a diode-seeded master-oscillator power amplifier. The system is broadly tunable in wavelength (740–810 nm), pulse duration (0.3–2 ns) and repetition rate (1–30 MHz).