Extreme learning machine: Theory and applications
Citations
4,835 citations
Cites background or methods from "Extreme learning machine: Theory and applications"
...ELM [12], [13] and its variants [14]–[16], [24]–[28] focus mainly on regression applications....
[...]
...ELM is to minimize the training error as well as the norm of the output weights [12], [13]...
[...]
...The original solutions (21) of ELM [12], [13], [26], TERELM [22], and the weighted regularized ELM [21] are not able to apply kernels in their implementations....
[...]
...The minimal norm least square method instead of the standard optimization method was used in the original implementation of ELM [12], [13]...
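The minimal-norm least-squares solution described in these excerpts can be sketched briefly. The data, network size, and sigmoid activation below are illustrative assumptions for a toy regression problem, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data, for illustration only.
X = rng.uniform(-1.0, 1.0, size=(100, 3))   # N = 100 samples, d = 3 features
T = np.sin(X.sum(axis=1, keepdims=True))    # targets, shape (N, 1)

L = 20  # number of hidden nodes (an arbitrary choice here)

# Step 1: randomly generate the hidden-layer parameters; they are never tuned.
W = rng.standard_normal((X.shape[1], L))    # input weights
b = rng.standard_normal(L)                  # hidden biases

# Step 2: compute the hidden-layer output matrix H (sigmoid activation).
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3: minimal-norm least-squares output weights via the
# Moore-Penrose pseudoinverse: beta = pinv(H) @ T. Among all
# least-squares solutions, this one has the smallest norm.
beta = np.linalg.pinv(H) @ T

train_error = np.sqrt(np.mean((H @ beta - T) ** 2))  # training RMSE
```

Using the pseudoinverse rather than an iterative optimizer is what makes this a single non-iterative solve: the only trained parameters are the output weights `beta`.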
[...]
2,413 citations
2,095 citations
Cites methods from "Extreme learning machine: Theory and applications"
...The ELM was introduced for efficient feature pooling and classification, making ship detection accurate and fast....
[...]
...Tang et al. [106] offered a compressed-domain ship detection framework combined with SDA and an extreme learning machine (ELM) [107] for optical spaceborne images....
[...]
1,800 citations
Cites background or methods from "Extreme learning machine: Theory and applications"
...Huang et al. [27], the basic idea of the proof can be summarized as follows....
[...]
...These have been formally stated in the following theorems [27]....
[...]
...In real applications, the number of hidden nodes will always be less than the number of training samples and, hence, the training error cannot be made exactly zero but can approach a nonzero training error. The following theorem formally states this fact [27]....
[...]
...OS-ELM originates from the batch-learning extreme learning machine (ELM) [20]–[22], [27], [30] developed for SLFNs with additive and RBF nodes....
[...]
...Huang et al. [20]–[22], [27], [30] to provide the necessary background for the development of OS-ELM in Section III....
[...]
1,767 citations
Cites background from "Extreme learning machine: Theory and applications"
...The hidden layer of ELM need not be iteratively tuned [5, 6]....
[...]
...[6–9]....
[...]
...The ith row of H is the hidden-layer feature mapping with respect to the ith input xi: h(xi). It has been proved [6] that, from the interpolation-capability point of view, if the activation function g is infinitely differentiable in any interval the hidden-layer parameters can be randomly generated....
[...]
...1 [6] Given any small positive value ε > 0, activation function g : R → R which is infinitely differentiable in any interval, and N arbitrary distinct samples (xi, ti) ∈ R^d × R^m, there exists L ≤ N such that for any Int....
[...]
...The learning capability of extreme learning machines has been studied in two aspects: interpolation capability [6] and universal approximation capability [7–9]....
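The interpolation result quoted above, that L = N randomly generated hidden nodes can fit N distinct samples exactly, can be checked numerically. All sizes and data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Interpolation setting: as many hidden nodes as distinct samples (L = N).
N = 10
X = rng.uniform(-1.0, 1.0, size=(N, 2))     # N distinct random inputs
T = rng.standard_normal((N, 1))             # arbitrary targets

# Randomly generated hidden-layer parameters, never tuned.
W = rng.standard_normal((2, N))
b = rng.standard_normal(N)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # N x N hidden-layer output matrix

# With probability one H is invertible for an infinitely differentiable
# activation, so the targets are interpolated exactly (up to round-off).
beta = np.linalg.solve(H, T)
residual = np.max(np.abs(H @ beta - T))     # should be near machine precision
```

With L < N hidden nodes, H becomes tall and rectangular, `solve` no longer applies, and only a least-squares fit with a generally nonzero residual is possible, which matches the excerpt about nonzero training error in practice.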
[...]
References
29,130 citations
12,940 citations
"Extreme learning machine: Theory an..." refers methods in this paper
...The forest cover type [2] for 30 × 30 m cells was obtained from the US Forest Service (USFS) Region 2 Resource Information System (RIS) data....
[...]
...Medium-size classification applications: The ELM performance has also been tested on the Banana database(7) and some other multiclass databases from the Statlog collection [2]: Landsat satellite image (SatImage), Image segmentation (Segment) and Shuttle landing control database....
[...]
7,601 citations
"Extreme learning machine: Theory an..." refers methods in this paper
...For this problem, as is usually done in the literature [20,21,5,25], 75% and 25% of the samples are randomly chosen for training and testing, respectively, at each trial....
[...]
...57% with 20 nodes, which is obviously higher than all the results so far reported in the literature using various popular algorithms such as SVM [20], SAOCIF [21], Cascade-Correlation algorithm [21], bagging and boosting methods [5], C4....
[...]
6,562 citations
"Extreme learning machine: Theory an..." refers methods in this paper
...As proposed by Hsu and Lin [8], for each problem, we estimate the generalized accuracy using different combinations of cost parameter C and kernel parameter γ: C = [2^12, 2^11, ..., 2^-1, 2^-2] and γ = [2^4, 2^3, ..., 2^-9, 2^-10]....
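The parameter-grid protocol quoted in this excerpt can be written out explicitly. The scoring function below is a hypothetical placeholder for the SVM cross-validation the excerpt refers to:

```python
from itertools import product

# Grids as quoted: C = 2^12, 2^11, ..., 2^-2 and gamma = 2^4, 2^3, ..., 2^-10,
# with exponents stepping down by one.
C_grid = [2.0 ** p for p in range(12, -3, -1)]       # 15 values
gamma_grid = [2.0 ** p for p in range(4, -11, -1)]   # 15 values

def validation_accuracy(C, gamma):
    """Hypothetical stand-in: a real run would train an SVM with (C, gamma)
    and return its cross-validation accuracy."""
    return 0.0

# Exhaustive search over the 15 x 15 = 225 parameter combinations.
best_C, best_gamma = max(product(C_grid, gamma_grid),
                         key=lambda cg: validation_accuracy(*cg))
```

With the constant placeholder score, `max` simply returns the first grid point; plugging in a real cross-validation score turns this into the grid search Hsu and Lin describe.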
[...]