
Pattern Classification and PSO Optimal Weights Based Sky Images Cloud Motion Speed Calculation Method for Solar PV Power Forecasting

TL;DR: A pattern classification and PSO optimal weights based sky images cloud motion speed calculation method for solar PV power forecasting (PCPOW) is proposed; comparisons with various benchmark methods show the effectiveness of the proposed approach for cloud tracking.
Abstract: The motion of cloud over a photovoltaic (PV) power station directly causes changes in solar irradiance, which in turn affects minute-level PV power prediction, so tracking cloud motion is crucial. Block-matching, Optical Flow, and feature matching are three prevailing methods for this task. However, as a rigid registration method, Block-matching cannot obtain the parameters of cloud deformation or rotation. The accuracy of Optical Flow, which is based on the assumption that the image grayscale does not change, is easily disturbed by noise. When the image texture information is not rich enough, the accuracy of feature matching is also reduced. In order to improve their robustness, these methods must therefore be combined through a certain strategy. Hence, a pattern classification and PSO optimal weights based sky images cloud motion speed calculation method for solar PV power forecasting (PCPOW) is proposed in this paper. The method consists of two parts. First, we use the k-means clustering method and texture features based on the Gray-Level Co-occurrence Matrix (GLCM) to classify the clouds; because texture adequately reflects image information, it captures both the macro nature and the fine structure of images better than other image features. Second, for different cloud classes, we build the corresponding combined calculation model to obtain cloud motion speed, using the Particle Swarm Optimization algorithm to assign different weights to the different methods so as to adapt to different clouds. The performance of the method is investigated using real data recorded at Yunnan Electric Power Research Institute. Under common precision indices, comparisons with various benchmark methods show the effectiveness of the proposed approach for cloud tracking.

Summary (2 min read)

Introduction

  • Previously, many studies [12]-[14] did not consider the cloud motion speed in PV power forecasting.
  • Because the clouds in the sky images may take various forms, the demerits listed above cannot be overcome when the authors use the same single method for all kinds of clouds.
  • In the second section, the authors give a detailed description of the proposed method “PCPOW”.

A. Pattern Classification of Clouds

  • When the P_ij value distribution is more dispersed, the energy is smaller.
  • Compared with other clustering algorithms, it has the advantages of simplicity, efficiency, and low time and space complexity.
  • When K is too small, the classification model cannot distinguish all the modes.

B. The Three Submethods of Combined Calculation Modeling

  • The authors first briefly explain the principles of the three submethods mentioned above, and then introduce the so-called combined calculation modeling.
  • Owing to the principle of minimum difference of accumulated gray value, the sub-block that most resembles the previous sub-block is found, namely the matching block.
  • In the combined calculation modeling below, the authors use the LK optical flow method to accomplish the cloud speed calculation.
  • In order to locate the feature points, extremum suppression in the 3-D scale space is used to find the extreme points (a feature-matching sketch follows this list).
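The paper's SURF-based matching section is only summarized here, so the following is merely an illustrative Python sketch of the feature-matching idea: detect feature points in two consecutive gray frames, match their descriptors, and average the displacements of the matched pairs. ORB (freely available in OpenCV) is used as a stand-in for SURF, which requires the non-free opencv-contrib build; all names are illustrative and not from the paper.

```python
import cv2
import numpy as np

def feature_match_motion(prev_gray, next_gray, n_features=500):
    """Mean displacement of matched feature points between two frames (ORB as a SURF stand-in)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(next_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)                       # not enough texture to extract features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return np.zeros(2)
    # Displacement of each matched point pair, then the mean as the cloud motion estimate.
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches])
    return disp.mean(axis=0)

# motion = feature_match_motion(gray_t, gray_t_plus_1)  # hypothetical consecutive frames
```

As the Results section notes, such matching degrades when texture is poor, which is exactly why the paper weights it against the other submethods.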

C. Combined Calculation Modeling

  • Particle Swarm Optimization (PSO) is similar to the Genetic Algorithm (GA) and is also an iterative optimization algorithm.
  • Compared with GA, it is easier to implement and does not need to adjust too many parameters [38]-[39].
  • The authors use PSO to assign different weights to each method in the combined calculation modeling on the basis of the diverse cloud classes.
  • Here, the authors use the correlation coefficient R1 (as shown in Formula (10)) as the optimization function, and the reciprocal of R1 as the fitness function of the particles in the PSO; a minimal sketch of this weighting scheme is given after this list.
  • Fig. 2 Cutting process (Original Image 1, Original Image 2, Shift Image 1, Shift Image 2).
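The full PSO-weighting section of the paper is not reproduced in this excerpt, so the following is only a minimal Python sketch of the idea described above, under stated assumptions: each submethod (Block-Matching, Optical Flow, SURF matching) has already produced a displacement estimate for one cloud class, the weighted displacement is used to shift image 1, and the correlation coefficient R1 between the shifted image and image 2 (Formula (10)) is maximized by minimizing 1/R1 with a plain hand-rolled PSO. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def correlation_coefficient(img1, img2):
    """R1 (Formula (10)): normalized cross-correlation of two gray images."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def shift_image(img, dx, dy):
    """Shift an image by (dx, dy) pixels using np.roll (a toy stand-in for real warping)."""
    return np.roll(np.roll(img, int(round(dy)), axis=0), int(round(dx)), axis=1)

def fitness(weights, img1, img2, candidate_vectors):
    """1 / R1 between the weighted-shifted image 1 and image 2 (smaller is better)."""
    w = np.clip(weights, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.ones_like(w) / len(w)
    dx, dy = (w[:, None] * candidate_vectors).sum(axis=0)   # weighted displacement estimate
    r1 = correlation_coefficient(shift_image(img1, dx, dy), img2)
    return 1.0 / max(r1, 1e-6)

def pso_weights(img1, img2, candidate_vectors, n_particles=20, n_iter=50):
    """Plain PSO over the submethod weights (one weight per method, each in [0, 1])."""
    rng = np.random.default_rng(0)
    dim = len(candidate_vectors)
    x = rng.random((n_particles, dim))                       # particle positions (weights)
    v = np.zeros_like(x)                                     # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p, img1, img2, candidate_vectors) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.array([fitness(p, img1, img2, candidate_vectors) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest / max(gbest.sum(), 1e-12)

# Hypothetical usage: displacement estimates from the three submethods for one cloud class.
# candidates = np.array([[3.0, 1.0], [2.5, 0.8], [3.4, 1.2]])  # Block-Matching, Optical Flow, SURF
# weights = pso_weights(gray_img1, gray_img2, candidates)
```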

A. Data description

  • Among them, 300 images were selected as training samples and 200 images were used as testing samples.
  • The simulations rely on Matlab-based toolboxes and functions: histogram equalization, Gray-Level Co-occurrence Matrix, Calinski-Harabasz criterion, k-means, Particle Image Velocimetry, Piotr's Computer Vision Toolbox, and the Computer Vision System Toolbox.

B. Results and Comparison with Other Classical Algorithms

  • The trend of the index as K is optimized shows that, when K = 10, the criterion value is adequately small and its decline becomes slow enough.
  • Because the texture information of the image is not rich enough, the texture features of each area differ little, which results in no obvious difference in the 64-dimensional SURF descriptors, so mismatches easily occur.
  • In other words, the pixels of the cloud lack a “common” displacement vector, so when the authors take the mean of the displacements of these pixels as the displacement of the cloud, the accuracy of the calculation result is reduced.
  • Obviously, the sub blocks marked by the red box in the two images are identified as the most similar blocks by the algorithm.

C. Discussion

  • In order to avoid contingency in the simulation results and to verify the robustness of the proposed model, the authors applied a cross-validation method to further develop the simulation.
  • For the 500 sample images selected in Section A, random selection is performed in a 3:2 ratio within each class of the sky image set to generate new training and testing sets, giving a total of 300 training samples and 200 testing samples.
  • Shi J, Lee WJ, Liu Y, Yang Y, and Wang P, “Forecasting power output of photovoltaic systems based on weather classification and support vector machines,” IEEE Trans.
  • Currently, he is a Professor with the Department of Electrical Engineering and the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources at NCEPU, Baoding and Beijing, China.


This is a self-archived parallel published version of this article in the
publication archive of the University of Vaasa. It might differ from the original.
Pattern Classification and PSO Optimal Weights
Based Sky Images Cloud Motion Speed
Calculation Method for Solar PV Power
Forecasting
Author(s):
Zhen, Zhao; Pang, Shuaijie; Wang, Fei; Li, Kangping; Li, Zhigang; Ren,
Hui; Shafie-khah, Miadreza
Title:
Pattern Classification and PSO Optimal Weights Based Sky Images
Cloud Motion Speed Calculation Method for Solar PV Power
Forecasting
Year:
2019
Version:
Accepted manuscript
Copyright
© 2019 IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or future
media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component
of this work in other works.
Please cite the original version:
Zhen, Z., Pang, S., Wang, F., Li, K., Li, Z., Ren, H. & Shafie-khah, M.
(2019). Pattern Classification and PSO Optimal Weights Based Sky Images
Cloud Motion Speed Calculation Method for Solar PV Power Forecasting.
IEEE Transactions on Industry Applications 55(4), 3331-3342.
https://doi.org/10.1109/TIA.2019.2904927

Pattern Classification and PSO Optimal Weights Based Sky
Images Cloud Motion Speed Calculation Method for Solar
PV Power Forecasting
Abstract—The motion of cloud over a PV power station will directly cause the change of solar irradiance, which indirectly affects the prediction of minute-level PV power. Therefore, the calculation of cloud motion speed is very crucial for PV power forecasting. However, due to the influence of the complex cloud motion process, it is very difficult to achieve accurate results using a single traditional algorithm. In order to improve the computation accuracy, a pattern classification and PSO optimal weights-based sky images cloud motion speed calculation method for solar PV power forecasting (PCPOW) is proposed. The method consists of two parts. Firstly, we use the k-means clustering method and texture features based on the Gray-Level Co-occurrence Matrix (GLCM) to classify the clouds. Secondly, for different cloud classes, we build the corresponding combined calculation model to obtain cloud motion speed. Real data recorded at Yunnan Electric Power Research Institute are used for simulation; the results show that the cloud classification and optimal combination model are effective, and that PCPOW can improve the accuracy of displacement calculation.
Index Terms—Cloud Motion Speed; Combined Modeling; Optimal Weights; Pattern Classification; Sky Image; Power Forecasting.
I. INTRODUCTION
As a significant way to utilize solar energy, photovoltaic (PV) power has expanded rapidly in recent years owing to its merits of no fuel consumption, no pollutant emission, and flexible configuration. However, PV belongs to the intermittent power supplies; to be more exact, there are randomness and volatility in PV output, as it is affected by meteorological factors such as solar irradiance, ambient temperature, moisture, wind velocity, and barometric pressure [1]-[3]. These shortcomings may bring rigorous challenges to the power balance, security, stability, and economic operation of the power system [4]-[7]. From the above, we can see that, in order to provide a credible foundation for power system scheduling decision-making and to improve its capacity to accommodate intermittent power, it is time to put forward an accurate and effective PV prediction scheme [8]-[11]. Previously, many studies [12]-[14] did not consider the cloud motion speed in PV power forecasting. The time scale of these prediction methods is 15 minutes, which not only reduces the accuracy sharply under cloudy conditions, but also fails to meet the requirements of real-time grid dispatching. Cloud motion processes, such as birth, dissipation, and deformation, are pivotal drivers of the variation of solar irradiance, thus giving rise to the change of PV output [15]-[16]. Therefore, the investigation of cloud motion has turned out to be one of the most critical tasks in completing the above-mentioned forecast methodology.
In the related research on cloud motion, most investigators in the early phase took advantage of satellite images for analysis and processing [17]-[20]. However, satellite images are not ideal for elaborating regional or low-cloud information as a result of their low spatial and temporal resolution [21]. Therefore, their prediction accuracy cannot meet practical needs. Currently, in order to track the movement of the local cloud (especially over the PV power station) more accurately, scholars obtain the speed of cloud by means of ground-based images.
There are three main ways to calculate the cloud speed: the Block-Matching, Optical Flow, and feature matching algorithms. The Block-Matching algorithm performs the work by measuring the similarity of the sub-blocks between adjacent images; it is adopted by Chow et al. in [22], Huang in [23], and Peng et al. in [24].
Z. Zhen, S. Pang, F. Wang, K. Li, and H. Ren are with the Department of Electrical Engineering, North China Electric Power University, Baoding 071003, China; F. Wang is also with the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (North China Electric Power University), Beijing 102206, China, and also with the Hebei Key Laboratory of Distributed Energy Storage and Microgrid (North China Electric Power University), Baoding 071003, China (e-mail: feiwang@ncepu.edu.cn).
Z. Li is with South China University of Technology, Guangzhou 510006, China (e-mail: lizg16@scut.edu.cn).
M. Shafie-khah is with the University of Vaasa, Finland (e-mail: miadreza@gmail.com).
J. P. S. Catalão is with INESC TEC and the Faculty of Engineering of the University of Porto, Porto 4200-465, Portugal, also with C-MAST, University of Beira Interior, Covilhã 6201-001, Portugal, and also with INESC-ID, Instituto Superior Técnico, University of Lisbon, Lisbon 1049-001, Portugal (e-mail: catalao@ubi.pt).

The Optical Flow algorithm is a pixel-level non-rigid registration method, based on the assumption that the gray level of the image remains unchanged. Following the original approaches presented by Horn and Schunck (HS) in [25] and by Lucas and Kanade (LK) in [26], the Optical Flow algorithm is used by Wood-Bradley in [27], Chow in [28], and Brox in [29]. As for the feature matching algorithm, this is a broad concept: one needs to find corresponding features (e.g., Harris points, SIFT points, or SURF points) in adjacent images to complete the task of regional matching. Based on the idea of feature matching, Cheng adopted the SIFT algorithm in [30], and F. Su used the SURF algorithm in [31]. Compared with previous studies based on satellite images, these works achieve better prediction accuracy and local applicability. However, a full-fledged and effective cloud tracking method has not yet been proposed. As a rigid registration method, Block-Matching cannot obtain the non-rigid motion parameters of cloud, such as rotation and deformation. The accuracy of the optical flow, which is based on the assumption that the image grayscale does not change, is easily disturbed by noise; for example, in the case of uneven illumination, the computational accuracy is low. As for feature matching, because the definition of a feature point usually requires a lot of texture information, it matches poorly in regions where the texture information is not rich enough. In short, the above three methods have poor robustness. Because the clouds in the sky images may take various forms, these demerits cannot be overcome when we use the same single method for all kinds of clouds. Furthermore, any simple combined calculation modeling without cloud classification can hardly achieve good results either.
In this paper, we propose a pattern classification and PSO optimal weights-based sky images cloud motion speed calculation method for solar PV power forecasting. The related work is mainly divided into the following two parts. Firstly, we classify clouds according to the texture feature information of sky images captured by a ground-based sky imager. Secondly, for the different cloud classes obtained in the previous step, we utilize PSO to optimize the weights of three methods, Block-Matching, Optical Flow, and SURF feature matching, and build the corresponding combined calculation modeling.
In light of the proposed method, we can choose different calculation strategies (set different weights for the three methods in the combined modeling) according to different clouds. In other words, it overcomes the weaknesses of the traditional single methods in their applicable scope and is a more universal modeling approach suitable for most cloud scenes. In the second section, we give a detailed description of the proposed method “PCPOW”. In the third section, we utilize measured data to validate the proposed method and compare the results with those of common single algorithms. Finally, in the fourth section, we summarize the work of the full paper and discuss future work.
II. THE METHOD OF PCPOW
The flowchart of the PCPOW is shown in Fig. 1.
A. Pattern Classification of Clouds
For clouds in sky images, we can describe them in terms of brightness, size, shape, spectrum, and texture features, etc. As a regional feature, texture is a description of the spatial distribution of each pixel in an image.
We can simply understand that texture consists of texture primitives that are repeated in accordance with certain rules or statistical rules. Because texture can adequately reflect image information, compared with other image features, it can better take into account both the macro nature and the fine structure of images. In this paper, we use the texture features based on the Gray-Level Co-occurrence Matrix (GLCM) to classify the clouds.
Fig. 1 Flowchart of the PCPOW: image preprocessing (read raw RGB image, convert to gray image, perform histogram equalization), pattern classification of clouds (extract GLCM features, k-means clustering into classes A, B, C, ..., K), and the combined calculation model (Block-Matching, Optical Flow, and SURF-Matching, with the weights optimized by Particle Swarm Optimization).
1) Histogram Equalization
Many of the sky image textures are not rich enough, and in order to improve the speed of the program, we have compressed the resolution of the original sky images. These steps lead to a loss of image information, so we must enhance the image first. In this paper, we employ histogram equalization to enhance the image.
If the pixels of an image span many gray levels and are evenly distributed, such an image tends to have high contrast and variable gray tones. Histogram equalization is a transformation that can automatically achieve this effect by relying only on the histogram information of the input image. Its basic idea is to widen the gray levels that contain more pixels in the image and to compress the gray levels that contain fewer pixels. Thus, the dynamic range of the pixel gray values is extended, the contrast and the variation of the gray tones are improved, and a clearer image is generated.
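The paper performs this step with a Matlab toolbox; purely as an illustration (OpenCV in Python is an assumption, not the authors' tooling), the following sketch applies the same preprocessing chain as Fig. 1: read the raw RGB image, downsample it, convert it to gray, and equalize its histogram.

```python
import cv2

def preprocess_sky_image(path, size=(256, 256)):
    """Read a raw RGB sky image, downsample, convert to gray, and equalize its histogram."""
    rgb = cv2.imread(path)                       # image as stored on disk (BGR order)
    rgb = cv2.resize(rgb, size)                  # compress resolution to speed up later steps
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)                # widen crowded gray levels, compress sparse ones

# equalized = preprocess_sky_image("sky_0001.jpg")  # hypothetical file name
```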
2) Gray-Level Co-occurrence Matrix
Take the points (x, y) and (x + a, y + b) as a point pair in the image. The gray values of the point pair are (i, j), that is to say, the gray value of the point (x, y) is i and the gray value of the point (x + a, y + b) is j. Fix a and b, and move the point (x, y) over the whole image; this yields all sorts of pairs (i, j). Let the number of gray levels of the image be L; then the combinations of i and j have a total of $L^2$ kinds. Over the whole image, the number of occurrences of each (i, j) is counted and then normalized to the probability $P_{ij}$; the $L \times L$ matrix $[P_{ij}]$ is the GLCM.
For slowly changing textures (i.e., coarse textures), the GLCM has larger diagonal values and smaller off-diagonal values, that is, the GLCM tends toward a diagonal distribution; on the contrary, for fast changing textures (i.e., fine textures), it tends to be evenly distributed. Obviously, different (a, b) combinations give different GLCMs.
3) Features based on the Gray-Level Co-occurrence Matrix
Based on the GLCM, we select some characteristic quantities to reflect the condition of the matrix [32], as shown below.
Energy:
$$f_1=\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}P_{ij}^{2} \qquad (1)$$
It reflects the uniformity of the image gray-scale distribution and the texture roughness. When the $P_{ij}$ value distribution is more concentrated, the energy is larger; when the $P_{ij}$ value distribution is more dispersed, the energy is smaller.
Correlation:
$$f_2=\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}\frac{(i-\mu_x)(j-\mu_y)P_{ij}}{\sigma_x\sigma_y} \qquad (2)$$
Where:
$$\mu_x=\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}i\,P_{ij} \qquad (3)$$
$$\mu_y=\sum_{j=0}^{L-1}\sum_{i=0}^{L-1}j\,P_{ij} \qquad (4)$$
$$\sigma_x^{2}=\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}(i-\mu_x)^{2}P_{ij} \qquad (5)$$
$$\sigma_y^{2}=\sum_{j=0}^{L-1}\sum_{i=0}^{L-1}(j-\mu_y)^{2}P_{ij} \qquad (6)$$
It measures the similarity of the matrix elements along the row and column directions, that is, it reflects the local gray-level correlation in the image. When the matrix elements differ little, the correlation value is large; when the matrix elements differ greatly, the correlation value is small.
Entropy:
$$f_3=-\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}P_{ij}\log_2 P_{ij} \qquad (7)$$
Entropy is a measure of the amount of information in an image and represents the complexity of the texture. If the distribution of $P_{ij}$ is relatively uniform, the entropy is large; if the distribution of $P_{ij}$ is more concentrated, the entropy is smaller.
Contrast:
$$f_4=\sum_{i=0}^{L-1}\sum_{j=0}^{L-1}(i-j)^{2}P_{ij} \qquad (8)$$
For coarse textures, the contrast is small; for fine textures, the contrast is large.
In this paper, we take four (a, b) combinations, (1, 0), (-1, 1), (-1, 0), and (-1, -1), to generate four GLCMs for each sky image, and the mean values of the corresponding energy, correlation, entropy, and contrast are calculated as the four-dimensional texture feature vector of each image. Then, we take the mean of the feature vectors of two adjacent images as their common feature vector and use it as a sample for pattern classification.
4) K-means clustering
The k-means clustering algorithm is a widely used unsupervised clustering algorithm. Its iterative partitioning minimizes the sum, over all clusters, of the within-cluster sums of point-to-cluster-centroid distances; the squared Euclidean distance is used in this approach [33]. Compared with other clustering algorithms, it has the advantages of simplicity, efficiency, and low time and space complexity. Therefore, this paper chooses k-means to cluster the clouds.
The k-means clustering method is applied to achieve the classification according to the feature vectors obtained in the previous step. However, the number of clusters K of k-means needs to be specified manually. When K is too small, the classification model cannot distinguish all the modes; when K is too large, the classification model is prone to overfitting (it cannot effectively classify new samples). It is unrealistic to obtain the optimal K manually (through simulation tests on a large amount of historical data), so we use the optimal cluster number criterion in the Matlab toolbox.
Compared with other criteria, the Calinski-Harabasz (CH) criterion is simple, easy to understand, and widely used, so we use the CH criterion to determine the optimal K.

$$\mathrm{CH}=\frac{\sum_{i=1}^{K}\left\|\bar{x}_i-\bar{x}\right\|^{2}}{\sum_{i=1}^{K}\sum_{p\in C_i}\left\|p-\bar{x}_i\right\|^{2}} \qquad (9)$$
Where: $p$ is a sample point in class $C_i$, $\bar{x}_i$ is the clustering center of $C_i$, and $\bar{x}$ is the mean of all samples. Obviously, the larger the value of CH, the more suitable the number of clusters. In this paper, in order to visually represent the change in the value of the criterion, we take the inverse of CH. In this way, we need to select the K value at which the value of the criterion is adequately small and its decline becomes slow enough.
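The paper selects K with the CH criterion from the Matlab toolbox; as a rough Python analogue only (not the authors' implementation), the sketch below clusters the 4-D texture feature vectors with k-means and computes the inverse of scikit-learn's Calinski-Harabasz score for each candidate K, so the "adequately small and slowly declining" point can be read off as in the text. Names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def choose_k_by_ch(features, k_range=range(2, 16), random_state=0):
    """Cluster GLCM feature vectors with k-means and report 1/CH for each candidate K."""
    inverse_ch = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(features)
        inverse_ch[k] = 1.0 / calinski_harabasz_score(features, labels)
    return inverse_ch   # pick the K where the curve is small enough and flattens out

# Hypothetical usage: `samples` is an (n_pairs, 4) array of common feature vectors.
# curve = choose_k_by_ch(samples)
# model = KMeans(n_clusters=10, n_init=10, random_state=0).fit(samples)   # K = 10 as in the paper
```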
B. The Three Submethods of Combined Calculation Modeling
We first briefly explain the principles of the three submethods mentioned above, and then introduce the so-called combined calculation modeling.
1) Block-Matching algorithm
The rationale of Block-Matching is that each image in the frame sequence is subdivided into sub-blocks; afterward, all the candidate sub-blocks in a given search area of the current frame are compared with a sub-block of the previous frame. Owing to the principle of minimum difference of accumulated gray value, the sub-block that most resembles the previous sub-block is found, namely the matching block. In this way, the displacement between the previous block and the matching block is the motion vector of the block.
According to literature [22] and [34], the following four formulas are used to measure the difference in gray values between the two images:
Correlation coefficient:
$$R_1=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f_1(x_i,y_j)-\bar{f}_1\right]\left[f_2(x_i,y_j)-\bar{f}_2\right]}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f_1(x_i,y_j)-\bar{f}_1\right]^{2}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f_2(x_i,y_j)-\bar{f}_2\right]^{2}}} \qquad (10)$$
$$R_2=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}f_1(x_i,y_j)\,f_2(x_i,y_j)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}f_1^{2}(x_i,y_j)\sum_{i=1}^{M}\sum_{j=1}^{N}f_2^{2}(x_i,y_j)}} \qquad (11)$$
Variance:
$$R_3=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f_1(x_i,y_j)-f_2(x_i,y_j)\right]^{2} \qquad (12)$$
Sum of gray value differences:
$$R_4=\sum_{i=1}^{M}\sum_{j=1}^{N}\left|f_1(x_i,y_j)-f_2(x_i,y_j)\right| \qquad (13)$$
Where: $f_1(x_i,y_j)$ and $f_2(x_i,y_j)$ are the gray functions of the two images $I_1$ and $I_2$; $I_1$ is the previous frame image and $I_2$ is the next frame image; $\bar{f}_1$ and $\bar{f}_2$ are the gray averages; and $M \times N$ is the size of the images, that is, the size of the corresponding gray-scale matrices.
Obviously, when $R_1$ and $R_2$ are at their maximum, or $R_3$ and $R_4$ are at their minimum, the two images are the most similar.
However, the computational cost of the original Block-Matching algorithm is very high, so it is difficult to meet the requirements of real-time computing. In order to reduce the workload, a Block-Matching Fast Fourier Transform algorithm [35] is widely used. In this method, the digital image is regarded as a discrete two-dimensional signal field varying with time, and the computation speed is greatly improved by using signal analysis methods.
To sum up, Block-Matching is a widely used and very basic method for cloud velocity calculation. In the combined calculation modeling below, we use the corresponding Particle Image Velocimetry toolbox in Matlab to accomplish the cloud speed calculation.
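To make the block-matching idea concrete, the following is a minimal Python sketch (an illustration, not the authors' PIV-toolbox implementation): one reference sub-block from the previous frame is compared with every candidate position inside a search window of the current frame, using the sum of absolute gray-value differences (the $R_4$ criterion), and the displacement of the best match is taken as the motion vector. Names and window sizes are illustrative.

```python
import numpy as np

def block_match(prev_img, curr_img, top, left, block=32, search=16):
    """Return the (dy, dx) motion vector of one sub-block by exhaustive R4 (SAD) search."""
    ref = prev_img[top:top + block, left:left + block].astype(np.float64)
    best, best_r4 = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > curr_img.shape[0] or x + block > curr_img.shape[1]:
                continue  # candidate block would fall outside the image
            cand = curr_img[y:y + block, x:x + block].astype(np.float64)
            r4 = np.abs(ref - cand).sum()          # Eq. (13): sum of gray value differences
            if r4 < best_r4:
                best, best_r4 = (dy, dx), r4
    return best

# vector = block_match(equalized_t, equalized_t_plus_1, top=100, left=120)  # hypothetical block position
```

An FFT-based cross-correlation, as in the Block-Matching FFT variant cited above, would replace the double loop when computation speed matters.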
2) Optical Flow algorithm
The optical flow is the instantaneous velocity (u, v) of the moving object in the observed image plane. The essence of the Optical Flow algorithm is to establish the optical flow constraint equation based on the assumption of image intensity conservation; by solving this equation, the velocity parameters can be acquired.
There are three premise hypotheses of optical flow:
Hypothesis 1: The gray levels of the corresponding pixels in adjacent images are constant.
Hypothesis 2: The displacement of the target between adjacent images is relatively small.
Hypothesis 3: A pixel has the same displacement as the pixels in its neighborhood.
The first step: according to Hypothesis 1, we can get:
$$f(x,y,t)=f(x+dx,\,y+dy,\,t+dt) \qquad (14)$$
Where: $f(x,y,t)$ is the gray function of the image, $(x, y)$ are the pixel coordinates, and $t$ is the time.
Taking the Taylor expansion of the right-hand side of the formula, we can get:
$$f(x+dx,\,y+dy,\,t+dt)\approx f(x,y,t)+\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\frac{\partial f}{\partial t}dt$$
Combining this with (14) and dividing by $dt$ gives the optical flow constraint equation $f_x u+f_y v+f_t=0$, where $u=dx/dt$ and $v=dy/dt$.
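The paper adopts the LK optical flow for the combined model; purely as an illustration of how such a velocity estimate can be obtained in practice (OpenCV is an assumption here, not the authors' tooling), the sketch below tracks corner points between two consecutive gray sky images with the pyramidal Lucas-Kanade routine and averages their displacements.

```python
import cv2
import numpy as np

def lk_cloud_motion(prev_gray, next_gray, max_corners=200):
    """Mean pixel displacement of tracked feature points between two gray frames (LK optical flow)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)                       # no trackable texture in the frame
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1                   # keep only successfully tracked points
    flow = (new_pts[good] - pts[good]).reshape(-1, 2)
    return flow.mean(axis=0)                     # (dx, dy) in pixels per frame interval

# motion = lk_cloud_motion(equalized_t, equalized_t_plus_1)
```

In the combined model, this estimate would then be weighted against the Block-Matching and SURF estimates according to the cloud class.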

Citations
Journal ArticleDOI
TL;DR: A support vector machine based forecasting model is proposed to forecast the aggregated SHs’ DR capacity in the day-ahead market and the case study indicates that the proposed forecasting framework could provide good performance in terms of stability and accuracy.
Abstract: The technological advancement in the communication and control infrastructure helps those smart households (SHs) that more actively participate in the incentive-based demand response (IBDR) programs. As the agent facilitating the SHs’ participation in the IBDR program, load aggregators (LAs) need to comprehend the available SHs’ demand response (DR) capacity before trading in the day-ahead market. However, there are few studies that forecast the available aggregated DR capacity from LAs’ perspective. Therefore, this article proposes a forecasting model aiming to aid LAs forecast the available aggregated SHs’ DR capacity in the day-ahead market. First, a home energy management system is implemented to perform optimal scheduling for SHs and to model the customers’ responsive behavior in the IBDR program; second, a customer baseline load estimation method is applied to quantify the SHs’ aggregated DR capacity during DR days; third, several features which may have significant impacts on the aggregated DR capacity are extracted and they are processed by principal component analysis; and finally, a support vector machine based forecasting model is proposed to forecast the aggregated SHs’ DR capacity in the day-ahead market. The case study indicates that the proposed forecasting framework could provide good performance in terms of stability and accuracy.

132 citations


Cites background from "Pattern Classification and PSO Opti..."

  • ...There will be an IBDR program in the electricity market when system operator calls for DR services [11] due to load shortage, penetration of distributed generations [12], and other reliability problem [13]....


Journal ArticleDOI
TL;DR: A hybrid mapping model based on deep learning applied for solar PV power forecasting has higher accuracy and can maintain robustness under different weather conditions and is proposed in this article.
Abstract: With the increase of solar photovoltaic (PV) penetration in power system, the impact of random fluctuation of PV power on the secure operation of power grid becomes more and more serious. High-precision PV power forecasting can effectively promote the grid's accommodation of PV power generation. Cloud is the most important factor affecting the surface irradiance and PV power. For the ultra-short-term solar PV power forecast considering the influence of cloud movement, it is necessary to be able to obtain the surface irradiance according to the sky cloud observation data. Therefore, in order to accurately achieve the real-time mapping relationship between sky image and surface irradiance, a hybrid mapping model based on deep learning applied for solar PV power forecasting is proposed in this article. First, the sky image data are clustered based on the feature extraction of convolutional autoencoder and K -means clustering algorithm after preprocess stage. Second, a hybrid mapping model based on deep learning methods are established for surface irradiance. Finally, the simulation results are compared and evaluated with different deep learning methods (CNN, LSTM, and ANN). The results show that the proposed model in this article has higher accuracy and can maintain robustness under different weather conditions.

116 citations


Cites background from "Pattern Classification and PSO Opti..."

  • ...Regarding the second step of the second category, a model that can accurately capture the mapping relationship between sky image and solar irradiance data is significant for fulfilling the ultra-short-term SPF task [18], [19]....


  • ...Based on the previous research works [7], [15], [19], [39]– [42], there are still a lot of future works need to be done....


Journal ArticleDOI
TL;DR: Results show that the cloud classification and optimal combination model are effective, and the PCPOW can improve the accuracy of displacement calculation.
Abstract: The motion of cloud over a photovoltaic (PV) power station will directly cause the change of solar irradiance, which indirectly affects the prediction of minute-level PV power. Therefore, the calculation of cloud motion speed is very crucial for PV power forecasting. However, due to the influence of complex cloud motion process, it is very difficult to achieve accurate result using a single traditional algorithm. In order to improve the computation accuracy, a pattern classification and particle swarm optimization optimal weights based sky images cloud motion speed calculation method for solar PV power forecasting (PCPOW) is proposed. The method consists of two parts. First, we use a k -means clustering method and texture features based on a gray-level co-occurrence matrix to classify the clouds. Second, for different cloud classes, we build the corresponding combined calculation model to obtain cloud motion speed. Real data recorded at Yunnan Electric Power Research Institute are used for simulation ; the results show that the cloud classification and optimal combination model are effective, and the PCPOW can improve the accuracy of displacement calculation.

85 citations


Cites background from "Pattern Classification and PSO Opti..."

  • ...In the future study based on the previous works [40]–[44], we should pay attention to the preprocessing of sky images, not only to ensure the operation speed, but also to ensure that the image information is not lost too much....


Journal ArticleDOI
TL;DR: An ultra-short-term PV power forecasting model based on the optimal frequency-domain decomposition and deep learning that can improve both forecasting accuracy and time efficiency significantly is proposed.
Abstract: Ultra-short-term photovoltaic (PV) power forecasting can support the real-time dispatching of the power grid. However, PV power has great fluctuations due to various meteorological factors, which increase energy prices and cause difficulties in managing the grid. This article proposes an ultra-short-term PV power forecasting model based on the optimal frequency-domain decomposition and deep learning. First, the optimal frequency demarcation points for decomposition components are obtained through frequency-domain analysis. Then, the PV power is decomposed into the low-frequency and high-frequency components, which supports the rationality of decomposition results and solves the problem that the current decomposition model only uses the direct decomposition method and the decomposition components are not physical. Then, a convolutional neural network (CNN) is used to forecast the low-frequency and high-frequency components, and the final forecasting result is obtained by addition reconstruction. Based on the actual PV data in heavy rain days, the mean absolute percentage error (MAPE) of the proposed forecasting model is decreased by 52.97%, 64.07%, and 31.21%, compared with discrete wavelet transform, variational mode decomposition, and direct prediction models. In addition, compared with recurrent neural network and long–short-term memory model, the MAPE of the CNN forecasting model is decreased by 23.64% and 46.22%, and the training efficiency of the CNN forecasting model is improved by 85.63% and 87.68%. The results fully show that the proposed model in this article can improve both forecasting accuracy and time efficiency significantly.

58 citations


Cites background from "Pattern Classification and PSO Opti..."

  • ...which will affect stable operation of the power grid[4-5]....


Journal ArticleDOI
TL;DR: An association rule mining based quantitative analysis framework is built to explore the impact of household characteristics on PDR under TOU price making up for the deficiencies in current research and will associate retailer to improve the benefits of TOU programs and guide policy makers to design more efficient energy saving policies for residents.

56 citations


Cites background from "Pattern Classification and PSO Opti..."

  • ...Although several technologies, including non-intermittent capacity, renewable energy forecasting that mainly includes solar energy forecasting[5] and wind power forecasting covering various time scales such as short term[6,7] as well as ultra-short term[8], microgrids, which are widely considered as an effective means to integrate distributed renewable generations into the main grid[9,10], and electricity storage[11,12], are introduced to mitigate the problem, these methods are not satisfactory after taking cost, accuracy and efficiency into consideration....


References
Journal ArticleDOI
TL;DR: In this paper, a method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image, and an iterative implementation is shown which successfully computes the Optical Flow for a number of synthetic image sequences.

10,727 citations

Proceedings ArticleDOI
12 Nov 1981
TL;DR: In this article, a method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image, and an iterative implementation is shown which successfully computes the Optical Flow for a number of synthetic image sequences.
Abstract: Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.

8,078 citations

Journal ArticleDOI
TL;DR: In this article, the directional ambiguity associated with PIV and LSV is resolved by implementing local spatial cross-correlations between two sequential single-exposed particle images, and the recovered velocity data are used to compute the spatial and temporal vorticity distribution and the circulation of the vortex ring.
Abstract: Digital particle image velocimetry (DPIV) is the digital counterpart of conventional laser speckle velocimetry (LSV) and particle image velocimetry (PIV) techniques. In this novel, two-dimensional technique, digitally recorded video images are analyzed computationally, removing both the photographic and opto-mechanical processing steps inherent to PIV and LSV. The directional ambiguity generally associated with PIV and LSV is resolved by implementing local spatial cross-correlations between two sequential single-exposed particle images. The images are recorded at video rate (30 Hz or slower) which currently limits the application of the technique to low speed flows until digital, high resolution video systems with higher framing rates become more economically feasible. Sequential imaging makes it possible to study unsteady phenomena like the temporal evolution of a vortex ring described in this paper. The spatial velocity measurements are compared with data obtained by direct measurement of the separation of individual particle pairs. Recovered velocity data are used to compute the spatial and temporal vorticity distribution and the circulation of the vortex ring.

1,976 citations


"Pattern Classification and PSO Opti..." refers methods in this paper

  • ...In order to reduce the workload, a Block-Matching Fast Fourier Transform algorithm [26] is widely used....


Journal ArticleDOI
TL;DR: A way to approach the problem of dense optical flow estimation by integrating rich descriptors into the variational optical flow setting, while reaching out to new domains of motion analysis where the requirement of dense sampling in time is no longer satisfied is presented.
Abstract: Optical flow estimation is classically marked by the requirement of dense sampling in time. While coarse-to-fine warping schemes have somehow relaxed this constraint, there is an inherent dependency between the scale of structures and the velocity that can be estimated. This particularly renders the estimation of detailed human motion problematic, as small body parts can move very fast. In this paper, we present a way to approach this problem by integrating rich descriptors into the variational optical flow setting. This way we can estimate a dense optical flow field with almost the same high accuracy as known from variational optical flow, while reaching out to new domains of motion analysis where the requirement of dense sampling in time is no longer satisfied.

1,429 citations

Journal ArticleDOI
TL;DR: In this paper, a method for intra-hour, sub-kilometer cloud forecasting and irradiance nowcasting using a ground-based sky imager at the University of California, San Diego is presented.

544 citations

Frequently Asked Questions (2)
Q1. What contributions have the authors mentioned in the paper "Pattern Classification and PSO Optimal Weights Based Sky Images Cloud Motion Speed Calculation Method for Solar PV Power Forecasting"?

In this paper, a pattern classification and PSO optimal weights-based sky images cloud motion speed calculation method for solar PV power forecasting (PCPOW) is proposed.

In future studies based on the previous works [40]-[44], the authors should pay attention to the preprocessing of sky images, not only to ensure the operation speed, but also to ensure that the image information is not lost too much.