
Showing papers by "Kuo-Chin Fan published in 2022"


Journal ArticleDOI
TL;DR: The experimental results proved that the proposed model can provide the optimal trade-off between accuracy and computational time compared to other related methods using the Indian Pines, Pavia University, and Salinas Scene hyperspectral benchmark datasets.
Abstract: The performance of hyperspectral image (HSI) classification depends heavily on spatial and spectral information and is strongly affected by factors such as data redundancy and insufficient spatial resolution. To overcome these challenges, many convolutional neural networks (CNNs), especially 2D-CNN-based methods, have been proposed for HSI classification. However, these methods produce weaker results than 3D-CNN-based methods, while the high computational complexity of 3D-CNN-based methods remains a major concern. This study therefore introduces a consolidated convolutional neural network (C-CNN) to overcome these issues. The proposed C-CNN comprises a three-dimensional CNN (3D-CNN) joined with a two-dimensional CNN (2D-CNN): the 3D-CNN represents spatial-spectral features from the spectral bands, and the 2D-CNN learns abstract spatial features. Principal component analysis (PCA) is first applied to the original HSIs before they are fed to the network to reduce redundancy across the spectral bands. Moreover, image augmentation techniques, including rotation and flipping, are used to increase the number of training samples and reduce the impact of overfitting; the C-CNN trained on the augmented images is named C-CNN-Aug. Additionally, both Dropout and L2 regularization are used to further reduce model complexity and prevent overfitting. The experimental results show that the proposed model provides the best trade-off between accuracy and computational time among the compared methods on the Indian Pines, Pavia University, and Salinas Scene hyperspectral benchmark datasets.
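
A minimal sketch of the pipeline the abstract describes (PCA-reduced HSI patches, a 3D-CNN stage for spatial-spectral features, then a 2D-CNN stage for abstract spatial features). Layer counts, kernel sizes, and the number of retained PCA components are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a consolidated 3D+2D CNN for HSI patch classification.
import torch
import torch.nn as nn


class ConsolidatedCNN(nn.Module):
    def __init__(self, pca_bands: int = 30, patch_size: int = 25, num_classes: int = 16):
        super().__init__()
        # 3D convolutions over (spectral, height, width) of each PCA-reduced patch.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
        )
        # Collapse the spectral dimension into channels for the 2D stage.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * pca_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.4),  # dropout regularization, as mentioned in the abstract
            nn.Linear(64 * patch_size * patch_size, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, pca_bands, patch, patch) -- PCA-reduced HSI patches.
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)          # merge spectral slices into channels
        x = self.conv2d(x)
        return self.head(x)


if __name__ == "__main__":
    model = ConsolidatedCNN()
    patches = torch.randn(4, 1, 30, 25, 25)    # 4 dummy PCA-reduced patches
    print(model(patches).shape)                # -> torch.Size([4, 16])
```

L2 regularization, also mentioned in the abstract, would be applied via the optimizer's weight decay rather than inside the model.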

31 citations


Proceedings ArticleDOI
04 Mar 2022
TL;DR: A new SFPN (Synthetic Fusion Pyramid Network) architecture is proposed that creates various synthetic layers between the layers of the original FPN to enhance the accuracy with which light-weight CNN backbones extract objects' visual features.
Abstract: FPN (Feature Pyramid Network) has become a basic component of most SoTA one-stage object detectors. Many previous studies have repeatedly shown that FPN captures better multi-scale feature maps, describing objects of different sizes more precisely. However, for most backbones such as VGG, ResNet, or DenseNet, the feature maps at each layer are downsized to a quarter of their size by pooling operations or convolutions with stride 2. This downscaling-by-2 gap is large and prevents the FPN from fusing features smoothly. This paper proposes a new SFPN (Synthetic Fusion Pyramid Network) architecture that creates various synthetic layers between the layers of the original FPN to enhance the accuracy with which light-weight CNN backbones extract objects' visual features. Finally, experiments show that the SFPN architecture outperforms both large backbones such as VGG16 and ResNet50 and light-weight backbones such as MobileNetV2 in terms of AP score.
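
A hedged sketch of one plausible reading of the synthetic-layer idea: between two adjacent FPN maps (which differ in resolution by a factor of 2) an intermediate-scale map is synthesized by resizing both neighbours to a common size and fusing them. The fusion operator (concatenation followed by a 1x1 convolution) and the choice of intermediate size are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: synthesize an intermediate pyramid level between two FPN maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SyntheticFusionLayer(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, p_fine: torch.Tensor, p_coarse: torch.Tensor) -> torch.Tensor:
        # p_fine:   (B, C, H, W)      -- higher-resolution pyramid level
        # p_coarse: (B, C, H/2, W/2)  -- lower-resolution pyramid level
        h, w = p_fine.shape[-2:]
        mid_size = (int(h / 1.5), int(w / 1.5))          # intermediate scale (assumed)
        fine_resized = F.interpolate(p_fine, size=mid_size, mode="bilinear",
                                     align_corners=False)
        coarse_resized = F.interpolate(p_coarse, size=mid_size, mode="bilinear",
                                       align_corners=False)
        return self.fuse(torch.cat([fine_resized, coarse_resized], dim=1))


if __name__ == "__main__":
    layer = SyntheticFusionLayer(channels=256)
    p3 = torch.randn(1, 256, 64, 64)
    p4 = torch.randn(1, 256, 32, 32)
    print(layer(p3, p4).shape)   # synthetic level between P3 and P4, e.g. (1, 256, 42, 42)
```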

4 citations


Proceedings ArticleDOI
28 May 2022
TL;DR: In this paper, the authors propose a new lightweight convolution method, the Cross-Stage Lightweight Module (CSL-M), which combines the Inverted Residual Block (IRB) and the Cross-Stage Partial (CSP) concept.
Abstract: The development of lightweight object detectors is essential due to limited computation resources. To reduce the computation cost, how features are generated plays a significant role. This paper proposes a new lightweight convolution method, the Cross-Stage Lightweight Module (CSL-M), which combines the Inverted Residual Block (IRB) and the Cross-Stage Partial (CSP) concept. Experiments conducted on CIFAR-10 show that the proposed CSL-Net built from CSL-M performs better with fewer FLOPs than other lightweight backbones. Finally, we use CSL-Net as the backbone to construct a lightweight detector, CSL-YOLO, which achieves better detection performance than Tiny-YOLOv4 with only 43% of the FLOPs and 52% of the parameters.
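
A hedged sketch of a CSL-M-style block as the abstract describes it: the input channels are split in the cross-stage-partial manner, one half passes through an inverted residual block (1x1 expand, depthwise 3x3, 1x1 project), and the two halves are concatenated and mixed by a 1x1 transition. The expansion ratio and the exact split are illustrative assumptions.

```python
# Hedged sketch: cross-stage partial split around an inverted residual block.
import torch
import torch.nn as nn


def inverted_residual(channels: int, expansion: int = 4) -> nn.Sequential:
    hidden = channels * expansion
    return nn.Sequential(
        nn.Conv2d(channels, hidden, kernel_size=1, bias=False),      # expand
        nn.BatchNorm2d(hidden),
        nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                  groups=hidden, bias=False),                        # depthwise
        nn.BatchNorm2d(hidden),
        nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, channels, kernel_size=1, bias=False),      # project
        nn.BatchNorm2d(channels),
    )


class CSLModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0
        self.irb = inverted_residual(channels // 2)
        self.transition = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        part1, part2 = torch.chunk(x, 2, dim=1)      # cross-stage partial split
        part2 = part2 + self.irb(part2)              # IRB path with residual connection
        return self.transition(torch.cat([part1, part2], dim=1))


if __name__ == "__main__":
    block = CSLModule(channels=64)
    print(block(torch.randn(2, 64, 32, 32)).shape)   # -> torch.Size([2, 64, 32, 32])
```

Only half of the channels pass through the expensive IRB path, which is how the CSP-style split keeps the FLOP count low.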

Journal ArticleDOI
TL;DR: Siamese-Predictor, as discussed by the authors, is a predictor-based NAS method that achieves the SOTA level, especially under limited computation budgets, and can be applied to the proposed Tiny-NanoBench to find lightweight CNN architectures.
Abstract: In the past decade, many convolutional neural network architectures, such as VGG16, ResNet, and DenseNet, were designed by hand, and each achieved state-of-the-art performance on different tasks in its time. However, handcrafted design still relies on human intuition and experience and consumes a great deal of time in trial and error. Neural Architecture Search (NAS) addresses this issue. In recent work, the Neural Predictor has improved significantly while using only a few trained architectures as training samples, and its sampling efficiency is already considerable. In this paper, the proposed Siamese-Predictor is inspired by past work on predictor-based NAS. It is constructed with the proposed Estimation Code, which encodes prior knowledge about the training procedure. The Siamese-Predictor benefits significantly from this idea, which enables it to surpass the current SOTA predictor on NAS-Bench-201. To explore the impact of the Estimation Code, we analyze its relationship with accuracy. We also propose Tiny-NanoBench, a search space for lightweight CNN architectures; in this well-designed search space it is easier to find good architectures with few FLOPs than in NAS-Bench-201. In summary, the proposed Siamese-Predictor is a predictor-based NAS method that achieves the SOTA level, especially under limited computation budgets. Applied to the proposed Tiny-NanoBench, it can find extremely lightweight CNN architectures using only a few trained samples.
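
A generic, hedged sketch of predictor-based NAS in the spirit described above: a small Siamese comparator embeds two architecture encodings with a shared MLP and predicts which one is likely to be more accurate; trained on a handful of evaluated architectures, it is then used to rank unseen candidates. The encoding length, network sizes, and the pairwise-ranking formulation are illustrative assumptions, and the paper's Estimation Code input is not reproduced here.

```python
# Hedged sketch: a Siamese-style comparator for predictor-based NAS ranking.
import torch
import torch.nn as nn


class SiameseComparator(nn.Module):
    def __init__(self, encoding_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(            # shared branch (the "Siamese" part)
            nn.Linear(encoding_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(2 * hidden, 1)     # logit: "first architecture is better"

    def forward(self, enc_a: torch.Tensor, enc_b: torch.Tensor) -> torch.Tensor:
        za, zb = self.encoder(enc_a), self.encoder(enc_b)
        return self.head(torch.cat([za, zb], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SiameseComparator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Dummy "evaluated" architectures: random encodings with random accuracies.
    encodings = torch.randn(20, 32)
    accuracies = torch.rand(20)

    for _ in range(200):                          # train on pairwise comparisons
        i, j = torch.randint(0, 20, (2,)).tolist()
        label = (accuracies[i] > accuracies[j]).float().view(1)
        logit = model(encodings[i:i + 1], encodings[j:j + 1])
        loss = loss_fn(logit, label)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Rank unseen candidates by comparing each against a reference encoding.
    candidates = torch.randn(5, 32)
    reference = encodings[:1].expand(5, -1)
    scores = model(candidates, reference)
    print(scores.argsort(descending=True))       # candidate indices, best first
```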