Journal Article DOI: 10.1080/01431161.2020.1842543

Semantic segmentation of major macroalgae in coastal environments using high-resolution ground imagery and deep learning

04 Mar 2021 - International Journal of Remote Sensing (Taylor & Francis) - Vol. 42, Iss. 5, pp. 1785-1800
Abstract: Macroalgae are a fundamental component of coastal ecosystems and play a key role in shaping community structure and functioning. Macroalgae are currently threatened by diverse stressors, particular...


Citations

7 results found


Open access Journal Article DOI: 10.3390/RS13091741
30 Apr 2021 - Remote Sensing
Abstract: Intertidal seagrass plays a vital role in estimating the overall health and dynamics of coastal environments due to its interaction with tidal changes. However, most seagrass habitats around the globe have been in steady decline due to human impacts, disturbing the already delicate balance in the environmental conditions that sustain seagrass. Miniaturization of multi-spectral sensors has facilitated very high resolution mapping of seagrass meadows, which significantly improves the potential for ecologists to monitor changes. In this study, two analytical approaches used for classifying intertidal seagrass habitats are compared—Object-based Image Analysis (OBIA) and Fully Convolutional Neural Networks (FCNNs). Both methods produce pixel-wise classifications in order to create segmented maps. FCNNs are an emerging set of algorithms within Deep Learning. Conversely, OBIA has been a prominent solution within this field, with many studies leveraging in-situ data and multiresolution segmentation to create habitat maps. This work demonstrates the utility of FCNNs in a semi-supervised setting to map seagrass and other coastal features from an optical drone survey conducted at Budle Bay, Northumberland, England. Semi-supervision is also an emerging field within Deep Learning that has practical benefits of achieving state of the art results using only subsets of labelled data. This is especially beneficial for remote sensing applications where in-situ data is an expensive commodity. For our results, we show that FCNNs have comparable performance with the standard OBIA method used by ecologists.


3 Citations
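
To make the FCNN approach described above concrete, here is a minimal sketch (not the study's model) of a fully convolutional network that outputs a per-pixel class map for a multispectral drone tile; the band count, class count, and layer sizes are illustrative assumptions.

```python
# Minimal fully convolutional network: every layer is convolutional, so the
# output is a dense grid of class scores rather than a single image label.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_bands=5, n_classes=6):   # hypothetical band/class counts
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)   # 1x1 conv gives per-pixel scores

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.classifier(self.encoder(x))
        # upsample back to input resolution to obtain a full segmentation map
        return nn.functional.interpolate(logits, size=(h, w), mode="bilinear",
                                         align_corners=False)

x = torch.randn(1, 5, 128, 128)      # one 5-band image tile
print(TinyFCN()(x).shape)            # -> torch.Size([1, 6, 128, 128])
```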


Journal Article DOI: 10.1109/TIM.2021.3102745
Pengxin Wang, Liuyang Song, Xudong Guo, Huaqing Wang, +1 more (2 institutions)
Abstract: Recently, the diagnosis of rotating machines based on deep learning models has achieved great success. Many of these intelligent diagnosis models assume that training and test data are independent and identically distributed (IID). Unfortunately, such an assumption is generally invalid in practical applications due to noise disturbances and changes in workload. To address this problem, this article presents a high-stability diagnosis model named the multiscale feature fusion convolutional neural network (MFF-CNN). MFF-CNN does not rely on tedious data preprocessing or target-domain information. It is composed of multiscale dilated convolution, self-adaptive weighting, and the new form of maxout (NFM) activation. It extracts, modulates, and fuses the input samples' multiscale features so that the model focuses on differences in health state rather than on noise disturbances and workload differences. Two diagnostic cases, covering noisy and variable-load conditions, are used to verify the effectiveness of the model. The results show that the model has strong health-state identification and anti-interference capability under variable loads and noise disturbances.


Topics: Deep learning (56%), Convolutional neural network (55%), Noise (52%)

2 Citations
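
The abstract above combines multiscale dilated convolutions with self-adaptive weighting; the sketch below is one plausible reading of that idea (not the published MFF-CNN), with the channel counts, dilation rates, and input shape chosen purely for illustration.

```python
# Multiscale feature extraction with dilated 1-D convolutions over a vibration
# signal, fused with learnable, softmax-normalised per-scale weights.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # self-adaptive weighting: one learnable scalar per scale
        self.weights = nn.Parameter(torch.zeros(len(dilations)))

    def forward(self, x):
        w = torch.softmax(self.weights, dim=0)
        feats = [wi * torch.relu(b(x)) for wi, b in zip(w, self.branches)]
        return torch.stack(feats, dim=0).sum(dim=0)   # fused multiscale features

signal = torch.randn(8, 1, 1024)            # batch of signal segments (hypothetical shape)
print(MultiScaleFusion()(signal).shape)     # -> torch.Size([8, 16, 1024])
```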


Open access Journal Article DOI: 10.1016/J.ALGAL.2021.102568
Abstract: Microalgae are single-celled organisms that have been extensively utilized in biotechnology, pharmacology and foodstuffs in recent years. The description and classification of many existing microalgae groups are carried out with classical methods that take a long time and require a highly qualified labor force. Deep learning methods, which have achieved success in many fields, are applied here to the classification of microalgae groups. In this study, images of the Cyanobacteria and Chlorophyta microalgae groups are captured using an inverted microscope. A data augmentation process is carried out to increase the classification success of Convolutional Neural Network (CNN) models. The collected images are classified with two different methods. In the first method, classification is performed with seven different CNN models. In the second method, a Support Vector Machine (SVM) is used to increase the classification success of the AlexNet model, which has the lowest accuracy. For this, deep features extracted from the AlexNet model are classified with an SVM, using four different kernel functions. The highest accuracy among the CNN models is 99.66%. AlexNet, which has the lowest accuracy at 98%, reaches 99.66% accuracy when combined with the SVM.
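
A hedged sketch of the second method described above: extract deep features from a pretrained AlexNet and classify them with an SVM. The torchvision/scikit-learn pipeline, the RBF kernel shown, and the commented data placeholders are assumptions for illustration, not the authors' code.

```python
# Deep features from AlexNet's penultimate layer, classified by an SVM.
import torch
import torchvision.models as models
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final layer, keep 4096-d features
alexnet.eval()

def deep_features(batch):                      # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return alexnet(batch).numpy()

# X_train_imgs, y_train, X_test_imgs, y_test are placeholders for the
# preprocessed microscope images and their labels:
# svm = SVC(kernel="rbf").fit(deep_features(X_train_imgs), y_train)
# accuracy = svm.score(deep_features(X_test_imgs), y_test)
```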



Open access Posted Content
Abstract: Since 2011, significant and atypical arrivals of two species of surface-dwelling algae, Sargassum natans and Sargassum fluitans, have been detected in the Mexican Caribbean. This massive accumulation of algae has had a great environmental and economic impact. Therefore, for the government, ecologists, and local businesses, it is important to keep track of the amount of sargassum that arrives on the Caribbean coast. High-resolution satellite imagery is expensive or may be time delayed. Therefore, we propose to estimate the amount of sargassum from ground-level smartphone photographs. From the computer vision perspective, the problem is quite difficult since no information about the 3D world is provided; consequently, we model it as a classification problem in which a set of five labels defines the amount. For this purpose, we built a dataset with more than one thousand examples from public forums such as Facebook and Instagram, and we tested several state-of-the-art convolutional networks. The fine-tuned VGG network showed the best performance. Even though the accuracy could be improved with more examples, the current prediction distribution is narrow, so the predictions are adequate for keeping a record and taking quick ecological action.
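
As a rough illustration of the fine-tuning approach mentioned above, assuming torchvision's VGG16 and the five-level sargassum scale; the frozen-backbone choice, the learning rate, and the omitted data pipeline are my assumptions, not details from the paper.

```python
# Fine-tune a pretrained VGG16: freeze the convolutional backbone and replace
# the final fully connected layer with a 5-way sargassum-amount classifier.
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():       # freeze the convolutional backbone
    p.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, 5)    # new head: five sargassum-amount labels

# training loop omitted; in practice something like
# optimizer = torch.optim.Adam(vgg.classifier.parameters(), lr=1e-4)
# with nn.CrossEntropyLoss() over the smartphone photo dataset
```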



Proceedings Article DOI: 10.1109/CCCI52664.2021.9583195
Jinghu Li, Lili Wang, Qianguo Xing (2 institutions)
15 Oct 2021
Abstract: Video surveillance is an important method for observing the dynamic changes of green macroalgae along the coast. The paper proposes a coastal green macroalgae extraction method based on SLIC superpixel segmentation, a CNN, and an SVM to realize automated recognition of green macroalgae from large volumes of high-resolution RGB video data collected by unmanned aerial vehicles (UAVs) and handheld devices. First, the SLIC algorithm is used to generate multi-scale patches on the original high-resolution image. Then, a three-class CNN is used to divide the multi-scale patches into three types: green macroalgae, background, and mixed. Finally, an SVM is used to extract green macroalgae at the pixel level within the mixed patches to improve accuracy. To evaluate the performance of the proposed method, experiments are conducted on our coastal green macroalgae image dataset. Compared with RGB vegetation indices (such as ExR, RGBVI, NGBDI), the overall accuracy (OA), F1 score, and Kappa of green macroalgae extraction with the proposed method reach 95.23%, 0.9612, and 0.9436, respectively. The results show that our method is significantly better than RGB vegetation indices, since it effectively reduces the influence of sea waves and light on the recognition results. The automated extraction method for coastal green macroalgae proposed in this paper can provide a reference for high-precision automatic monitoring of coastal green macroalgae.
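
The sketch below covers only the first stage of the pipeline described above, generating SLIC superpixels with scikit-image; the sample image and the parameter values are placeholders rather than the paper's settings.

```python
# SLIC superpixel segmentation: group pixels into compact, color-homogeneous
# regions that later stages can classify as patches.
from skimage.segmentation import slic
from skimage.data import astronaut   # stand-in RGB image; a UAV frame would be loaded instead

image = astronaut()
labels = slic(image, n_segments=500, compactness=10, start_label=1)
print(labels.shape, labels.max())    # one superpixel id per pixel

# Each superpixel would then be cropped to a patch and passed to the three-class
# CNN (green macroalgae / background / mixed); mixed patches go on to the
# pixel-level SVM stage described in the abstract.
```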



References

44 results found


Open access Proceedings Article DOI: 10.1109/CVPR.2016.90
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (1 institution)
27 Jun 2016
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.


Topics: Deep learning (53%), Residual (53%), Convolutional neural network (53%)

93,356 Citations
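
A minimal residual block, sketched here only to illustrate the shortcut idea the abstract describes; the fixed channel count and absence of downsampling are simplifications of the full ResNet design.

```python
# Residual block: the layers learn a residual function F(x), and the identity
# shortcut adds the input x back before the final activation.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # shortcut connection

print(ResidualBlock(64)(torch.randn(1, 64, 32, 32)).shape)   # spatial size unchanged
```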


Open access Proceedings Article DOI: 10.1109/CVPR.2018.00474
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, +1 more (1 institution)
18 Jun 2018
Abstract: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], and VOC image segmentation [3]. We evaluate the trade-offs between accuracy and number of operations measured by multiply-adds (MAdd), as well as actual latency and the number of parameters.


Topics: Mobile architecture (54%), Object detection (53%), Image segmentation (52%)

5,263 Citations
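
The inverted residual structure described above can be sketched as follows: a simplified, stride-1 block with illustrative channel and expansion values, not the full MobileNetV2 implementation.

```python
# Inverted residual block: expand with a 1x1 conv, filter with a depthwise 3x3
# conv, project back to a thin linear bottleneck; the shortcut joins the thin layers.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, ch, expansion=6):
        super().__init__()
        hidden = ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),   # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),   # linear projection, no ReLU
        )

    def forward(self, x):
        return x + self.block(x)   # shortcut between thin bottleneck layers

print(InvertedResidual(16)(torch.randn(1, 16, 56, 56)).shape)
```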


Open access Proceedings Article DOI: 10.1109/CVPR.2017.195
François Chollet (1 institution)
21 Jul 2017
Abstract: We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.


5,200 Citations
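
To illustrate the depthwise separable convolution the abstract builds on, the sketch below contrasts it with a regular convolution; the channel counts and input size are arbitrary.

```python
# Depthwise separable convolution = depthwise conv (one filter per channel)
# followed by a pointwise 1x1 conv (mixing channels).
import torch
import torch.nn as nn

in_ch, out_ch = 32, 64
regular = nn.Conv2d(in_ch, out_ch, 3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),   # depthwise
    nn.Conv2d(in_ch, out_ch, 1),                            # pointwise
)

x = torch.randn(1, in_ch, 64, 64)
print(regular(x).shape, separable(x).shape)     # same output shape
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(regular), count(separable))         # far fewer parameters in the separable version
```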


Open access Book Chapter DOI: 10.1007/978-3-030-01234-2_49
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, +1 more (1 institution)
08 Sep 2018
Abstract: Spatial pyramid pooling modules and encoder-decoder structures are used in deep neural networks for semantic segmentation tasks. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages of both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets, achieving test set performance of 89% and 82.1% without any post-processing. Our paper is accompanied by a publicly available reference implementation of the proposed models in TensorFlow at https://github.com/tensorflow/models/tree/master/research/deeplab.


Topics: Pooling (54%), Segmentation (51%)

2,887 Citations
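
A simplified sketch of the Atrous Spatial Pyramid Pooling idea referenced above; the image-level pooling branch, batch normalization, and the DeepLabv3+ decoder module are omitted, and the rates and channel counts are illustrative.

```python
# ASPP: probe the same feature map with convolutions at several dilation rates
# (different effective fields-of-view), then concatenate and project the results.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                      padding=r if r > 1 else 0, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))

print(ASPP(256, 128)(torch.randn(1, 256, 33, 33)).shape)   # spatial size preserved
```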


Journal Article DOI: 10.1016/J.RSE.2009.05.012
Chuanmin Hu (1 institution)
Abstract: Various types of floating algae have been reported in open oceans and coastal waters, yet accurate and timely detection of these relatively small surface features using traditional satellite data and algorithms has been difficult or even impossible due to lack of spatial resolution, coverage, revisit frequency, or due to inherent algorithm limitations. Here, a simple ocean color index, namely the Floating Algae Index (FAI), is developed and used to detect floating algae in open ocean environments using the medium-resolution (250- and 500-m) data from operational MODIS (Moderate Resolution Imaging Spectroradiometer) instruments. FAI is defined as the difference between reflectance at 859 nm (vegetation “red edge”) and a linear baseline between the red band (645 nm) and short-wave infrared band (1240 or 1640 nm). Through data comparison and model simulations, FAI has shown advantages over the traditional NDVI (Normalized Difference Vegetation Index) or EVI (Enhanced Vegetation Index) because FAI is less sensitive to changes in environmental and observing conditions (aerosol type and thickness, solar/viewing geometry, and sun glint) and can “see” through thin clouds. The baseline subtraction method provides a simple yet effective means for atmospheric correction, through which floating algae can be easily recognized and delineated in various ocean waters, including the North Atlantic Ocean, Gulf of Mexico, Yellow Sea, and East China Sea. Because similar spectral bands are available on many existing and planned satellite sensors such as Landsat TM/ETM+ and VIIRS (Visible Infrared Imager/Radiometer Suite), the FAI concept is extendable to establish a long-term record of these ecologically important ocean plants.


423 Citations
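
The FAI definition in the abstract translates directly into code. The sketch below assumes Rayleigh-corrected reflectances in the MODIS 645, 859, and 1240 nm bands; the example pixel values are invented only to show the sign of the index.

```python
# Floating Algae Index: NIR reflectance minus a linear baseline interpolated
# between the red (645 nm) and short-wave infrared (1240 nm) bands.
import numpy as np

def fai(r_red, r_nir, r_swir, lam_red=645.0, lam_nir=859.0, lam_swir=1240.0):
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline

# Two example pixels: one with a vegetation "red edge" (algae-like), one without.
r_red = np.array([0.02, 0.03])
r_nir = np.array([0.08, 0.025])
r_swir = np.array([0.01, 0.02])
print(fai(r_red, r_nir, r_swir))   # the algae-like pixel gives a clearly positive FAI
```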


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2021    7