Author

Chen-Ching Liu

Bio: Chen-Ching Liu is an academic researcher from Virginia Tech. The author has contributed to research in topics: Electric power system & Electricity market. The author has an h-index of 57, co-authored 269 publications receiving 12126 citations. Previous affiliations of Chen-Ching Liu include Washington State University & Purdue University.


Papers
Journal ArticleDOI
TL;DR: A piecewise state-space average-value model (P-AVM) is proposed that uses the actual duty ratio as the control input instead of the continuous duty ratio used by the traditional state-space AVM, together with an iterative algorithm that applies the P-AVM and an approximate ripple function to large-signal time-domain transient simulations.
Abstract: State-space average-value models (AVMs) of pulse width modulation converters are widely used in both small-signal frequency-domain analysis and large-signal time-domain transient simulations. This paper is focused on the latter category. The limitations of the traditional state-space AVM (T-AVM) are discussed. A piecewise state-space AVM, P-AVM, is proposed, which uses the actual duty ratio as the control input instead of the continuous duty ratio used by the T-AVM. In order to consider the effect of switching ripples, an approximate ripple function is obtained. An iterative algorithm is proposed for utilization of P-AVM and ripple function for large-signal time-domain transient simulations. A boost converter and a two-level three-phase ac–dc converter are used to validate the performance of the P-AVM in comparison with the T-AVM under large ripple and large disturbance conditions. The detailed models developed in PSCAD/EMTDC are used as benchmarks. Improvement in accuracy is demonstrated. The efficiency of the iterative algorithm is discussed. Experiments on a 50-kVA three-phase ac–dc converter are conducted to validate the proposed method.
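As an illustration of the average-value modeling the paper builds on, here is a minimal sketch of the traditional state-space AVM (the T-AVM baseline, not the proposed P-AVM) of a boost converter, assuming a continuous duty ratio; the component values and forward-Euler integration are invented for illustration:

```python
# Hedged sketch: traditional state-space average-value model (T-AVM) of a
# boost converter with a continuous duty ratio D. Component values, time
# step, and integration scheme are illustrative assumptions, not taken
# from the paper.

def simulate_boost_avm(v_in=12.0, duty=0.5, L=1e-3, C=1e-4, R=10.0,
                       dt=1e-6, t_end=0.05):
    """Integrate the averaged boost-converter equations:
        L di_L/dt = v_in - (1 - D) * v_C
        C dv_C/dt = (1 - D) * i_L - v_C / R
    using forward Euler, returning the final (i_L, v_C)."""
    i_l, v_c = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        di = (v_in - (1.0 - duty) * v_c) / L
        dv = ((1.0 - duty) * i_l - v_c / R) / C
        i_l += dt * di
        v_c += dt * dv
    return i_l, v_c

i_l, v_c = simulate_boost_avm()
# Ideal steady-state output is v_in / (1 - D) = 24 V for D = 0.5.
print(f"i_L = {i_l:.2f} A, v_C = {v_c:.2f} V")
```

Because the duty ratio is continuous, this model ignores switching ripple entirely, which is exactly the limitation the paper's P-AVM and ripple function address.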

29 citations

ReportDOI
01 Oct 2016
TL;DR: In this paper, the authors provide an early assessment of research and development needs by examining the benefits of, risks created by, and risks to networked micro-grids, based on inputs, estimations, and literature reviews by subject matter experts.
Abstract: Much like individual microgrids, the range of opportunities and potential architectures of networked microgrids is very diverse. The goals of this scoping study are to provide an early assessment of research and development needs by examining the benefits of, risks created by, and risks to networked microgrids. At this time there are very few, if any, examples of deployed microgrid networks. In addition, there are very few tools to simulate or otherwise analyze the behavior of networked microgrids. In this setting, it is very difficult to evaluate networked microgrids systematically or quantitatively. At this early stage, this study relies on inputs, estimations, and literature reviews by subject matter experts who are engaged in individual microgrid research and development projects, i.e., the authors of this study. The initial step of the study gathered input from these subject matter experts about the potential opportunities provided by networked microgrids. These opportunities were divided among the subject matter experts for further review. Part 2 of this study comprises these reviews. Part 1 is a summary of the benefits and risks identified in the reviews in Part 2 and a synthesis of the research needs required to enable networked microgrids.

29 citations

Proceedings ArticleDOI
01 Oct 2006
TL;DR: In this article, a factor model is proposed to forecast shadow prices for a market based on locational marginal prices, which is a useful tool for market participants as well as market operators in a wholesale electricity market.
Abstract: Day-ahead shadow price forecasting is a useful tool for market participants as well as market operators in a wholesale electricity market. Shadow price forecasting is seen by market operators as an additional decision-making support tool for congestion management. Similarly, different market participants may use shadow price forecasting as a tool for strategy improvement in day-ahead or spot markets. This paper proposes a factor model to forecast shadow prices for a market based on locational marginal prices. The proposed approach handles time series using least-squares estimation. This method performs day-ahead shadow price forecasting and provides interpretable signals for different congestion conditions.
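The least-squares forecasting idea can be sketched as follows; the lag structure, the synthetic linear price series, and the NumPy implementation are illustrative assumptions, not the paper's exact factor model:

```python
import numpy as np

# Hedged sketch: forecast tomorrow's price for one hour of the day from
# the same hour on the previous few days via least-squares regression,
# in the spirit of a factor-model approach. The lag structure and data
# are illustrative assumptions.

def fit_and_forecast(prices, n_lags=3):
    """prices: 1-D array of daily prices for a fixed hour of the day.
    Fit p[t] ~ a0 + a1*p[t-1] + ... + an*p[t-n] by least squares,
    then forecast the next day's price."""
    X = np.column_stack(
        [np.ones(len(prices) - n_lags)]
        + [prices[n_lags - k:len(prices) - k] for k in range(1, n_lags + 1)]
    )
    y = prices[n_lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    last = np.concatenate(([1.0], prices[-1:-n_lags - 1:-1]))
    return float(last @ coef)

# On a purely linear series 10, 11, ..., 29 the fit is exact, so the
# one-day-ahead forecast extrapolates to 30.
print(fit_and_forecast(np.arange(10.0, 30.0)))
```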

29 citations

Journal ArticleDOI
01 Mar 2001
TL;DR: In this paper, value-at-risk (VAR) is introduced as a technique that is applicable to quantifying price risk exposure in power systems and the methodology for applying VAR using changes in prices from corresponding hours on previous days is presented.
Abstract: Extreme short-term price volatility in competitive electricity markets creates the need for price risk management for electric utilities. Recent methods in California provide examples of lessons that can be applied to other markets worldwide. Value-at-Risk (VAR), a method for quantifying risk exposure in the financial industry, is introduced as a technique that is applicable to quantifying price risk exposure in power systems. The methodology for applying VAR using changes in prices from corresponding hours on previous days is presented. Electricity prices for the summer of 2000 are examined against previous periods to understand the hourly VAR of an entity that is obligated to serve a load but does not have a contract for supply. The VAR methodology introduced is then applied to a sample company in California that is serving a 100 MW load. Proposed remedies for the problems observed in the competitive California electric power industry are introduced.
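A minimal sketch of the historical-simulation VAR computation described here, using day-over-day price changes at corresponding hours; the confidence level and the toy price data are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of historical-simulation Value-at-Risk for hourly power
# prices, following the idea of using price changes from corresponding
# hours on previous days. Confidence level and data are illustrative.

def hourly_var(prices_by_day, confidence=0.95):
    """prices_by_day: 2-D array of shape (days, hours), price per hour
    per day. Returns, for each hour, the same-hour day-over-day price
    change not exceeded with the given confidence. For an entity that
    must buy at spot to serve load, the adverse move is a price
    increase, so VAR is an upper quantile of the changes."""
    changes = np.diff(prices_by_day, axis=0)  # same-hour change vs. previous day
    return np.quantile(changes, confidence, axis=0)

# Toy example: one hour of the day, observed over five days, giving
# day-over-day changes of 1, 2, 3, 4 $/MWh.
prices = np.array([[0.0], [1.0], [3.0], [6.0], [10.0]])
print(hourly_var(prices))
```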

28 citations

Journal ArticleDOI
TL;DR: In this paper, a system-theoretic method is proposed for identification of the fault location based on the limited data available, which is being implemented for the field test in Monterey, California.
Abstract: The objective of the North Eastern Pacific Time-Series Undersea Networked Experiment (NEPTUNE) program is to construct an underwater cabled observatory on the floor of the Pacific Ocean, encompassing the Juan de Fuca Tectonic Plate. The power system associated with the proposed observatory is unlike conventional terrestrial power systems in many ways due to the unique operating conditions of underwater cabled observatories. In the event of a backbone cable fault, the location of the fault must be identified accurately so that a repair ship can be sent to repair the cable. Due to the proposed networked, mesh structure, traditional techniques for cable fault identification cannot achieve the desired level of accuracy. In this paper, a system-theoretic method is proposed for identification of the fault location based on the limited data available. The method has been tested with extensive simulations and is being implemented for the field test in Monterey, California. In this study, a lab test is performed for the fault location function.

28 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
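The mail-filtering scenario in the fourth category can be sketched with a toy naive Bayes filter that learns from messages the user has labeled; the tiny corpus and the scoring details are invented for illustration:

```python
import math
from collections import Counter

# Hedged sketch: learn which words indicate unwanted mail from labeled
# examples, then score new messages. A bare-bones naive Bayes with
# add-one smoothing; the corpus is invented for illustration.

def train(messages):
    """messages: list of (text, is_spam) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-likelihood ratio: positive means more spam-like."""
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_ham = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

model = train([
    ("win money now", True),
    ("free money offer", True),
    ("meeting agenda attached", False),
    ("lunch tomorrow", False),
])
print(spam_score("free money", *model))        # positive: spam-like
print(spam_score("meeting tomorrow", *model))  # negative: legitimate
```

Retraining on each newly labeled message is what keeps the filter customized to one user without a programmer in the loop.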

13,246 citations

Journal ArticleDOI
TL;DR: The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) at CERN as mentioned in this paper was designed to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10^34 cm^-2 s^-1 (10^27 cm^-2 s^-1).
Abstract: The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10^34 cm^-2 s^-1 (10^27 cm^-2 s^-1). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystals electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudo-rapidity coverage to high values (|η| ≤ 5) assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12500 t.

5,193 citations

01 Jan 2003

3,093 citations

Journal ArticleDOI
TL;DR: In this paper, the authors survey the literature till 2011 on the enabling technologies for the Smart Grid and explore three major systems, namely the smart infrastructure system, the smart management system, and the smart protection system.
Abstract: The Smart Grid, regarded as the next generation power grid, uses two-way flows of electricity and information to create a widely distributed automated energy delivery network. In this article, we survey the literature till 2011 on the enabling technologies for the Smart Grid. We explore three major systems, namely the smart infrastructure system, the smart management system, and the smart protection system. We also propose possible future directions in each system. Specifically, for the smart infrastructure system, we explore the smart energy subsystem, the smart information subsystem, and the smart communication subsystem. For the smart management system, we explore various management objectives, such as improving energy efficiency, profiling demand, maximizing utility, reducing cost, and controlling emission. We also explore various management methods to achieve these objectives. For the smart protection system, we explore various failure protection mechanisms which improve the reliability of the Smart Grid, and explore the security and privacy issues in the Smart Grid.

2,433 citations

01 Jan 2012
TL;DR: This article surveys the literature till 2011 on the enabling technologies for the Smart Grid, and explores three major systems, namely the smart infrastructure system, the smart management system, and the smart protection system.

2,337 citations