Author

Jagannath Aryal

Bio: Jagannath Aryal is an academic researcher from the University of Tasmania. The author has contributed to research in topics: Computer science & Decision support system. The author has an h-index of 23 and has co-authored 98 publications receiving 1940 citations. Previous affiliations of Jagannath Aryal include the University of Salzburg & the University of Otago.


Papers
Journal ArticleDOI
TL;DR: The CNN method is still in its infancy, as most researchers either use predefined parameters in solutions like Google TensorFlow or apply different settings in a trial-and-error manner. Nevertheless, deep learning can improve landslide mapping in the future if the effects of the different designs are better understood, enough training samples exist, and the effects of augmentation strategies to artificially increase the number of existing samples are better understood.
Abstract: There is a growing demand for detailed and accurate landslide maps and inventories around the globe, but particularly in hazard-prone regions such as the Himalayas. Most standard mapping methods require expert knowledge, supervision and fieldwork. In this study, we use optical data from the Rapid Eye satellite and topographic factors to analyze the potential of machine learning methods, i.e., artificial neural network (ANN), support vector machines (SVM) and random forest (RF), and different deep-learning convolution neural networks (CNNs) for landslide detection. We use two training zones and one test zone to independently evaluate the performance of different methods in the highly landslide-prone Rasuwa district in Nepal. Twenty different maps are created using ANN, SVM and RF and different CNN instantiations and are compared against the results of extensive fieldwork through a mean intersection-over-union (mIOU) and other common metrics. This accuracy assessment yields the best result of 78.26% mIOU for a small window size CNN, which uses spectral information only. The additional information from a 5 m digital elevation model helps to discriminate between human settlements and landslides but does not improve the overall classification accuracy. CNNs do not automatically outperform ANN, SVM and RF, although this is sometimes claimed. Rather, the performance of CNNs strongly depends on their design, i.e., layer depth, input window sizes and training strategies. Here, we conclude that the CNN method is still in its infancy as most researchers will either use predefined parameters in solutions like Google TensorFlow or will apply different settings in a trial-and-error manner. Nevertheless, deep-learning can improve landslide mapping in the future if the effects of the different designs are better understood, enough training samples exist, and the effects of augmentation strategies to artificially increase the number of existing samples are better understood.
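
As a point of reference for the accuracy assessment described above, the short Python sketch below computes the mean intersection-over-union (mIOU) between a predicted landslide mask and a reference mask. The arrays, class coding and example values are illustrative assumptions, not the authors' data or code.

import numpy as np

def mean_iou(prediction, reference, num_classes=2):
    """Mean intersection-over-union over all classes (e.g. 0 = background, 1 = landslide)."""
    ious = []
    for c in range(num_classes):
        pred_c = prediction == c
        ref_c = reference == c
        union = np.logical_or(pred_c, ref_c).sum()
        if union > 0:  # skip classes absent from both maps
            intersection = np.logical_and(pred_c, ref_c).sum()
            ious.append(intersection / union)
    return float(np.mean(ious))

# Tiny 3 x 3 example maps (hypothetical values)
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
ref = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(f"mIOU = {mean_iou(pred, ref):.3f}")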

458 citations

Journal ArticleDOI
TL;DR: In this article, the authors developed a methodology using object-oriented classification techniques and very high-resolution multispectral Ikonos imagery to automatically map the extent, distribution and density of private gardens in the city of Dunedin, New Zealand.

359 citations

Journal ArticleDOI
20 Nov 2007 - Sensors
TL;DR: This approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes and is an efficient way to generate accurate and detailed maps in a significantly shorter time.
Abstract: Effective assessment of biodiversity in cities requires detailed vegetation maps. To date, most remote sensing of urban vegetation has focused on thematically coarse land-cover products. Detailed habitat maps are created by manual interpretation of aerial photographs, but this is time consuming and costly at large scale. To address this issue, we tested the effectiveness of object-based classifications that use automated image segmentation to extract meaningful ground features from imagery. We applied these techniques to very high resolution multispectral Ikonos images to produce vegetation community maps in Dunedin City, New Zealand. An Ikonos image was orthorectified and a multi-scale segmentation algorithm used to produce a hierarchical network of image objects. The upper level included four coarse strata: industrial/commercial (commercial buildings), residential (houses and backyard private gardens), vegetation (vegetation patches larger than 0.8/1 ha), and water. We focused on the vegetation stratum, which was segmented at a more detailed level to extract and classify fifteen classes of vegetation communities. The first classification yielded a moderate overall classification accuracy (64%, κ = 0.52), which led us to consider a simplified classification with ten vegetation classes. The overall classification accuracy from the simplified classification was 77% with a κ value close to the excellent range (κ = 0.74). These results compared favourably with similar studies in other environments. We conclude that this approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes. It is an efficient way to generate accurate and detailed maps in a significantly shorter time. The final map accuracy could be improved by integrating segmentation, automated and manual classification in the mapping process, especially when considering important vegetation classes with limited spectral contrast.
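
The two accuracy figures quoted above (overall accuracy and the kappa coefficient) can be reproduced from a confusion matrix as in the small Python sketch below; the matrix values are invented for demonstration and are not taken from the study.

import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference, columns = classified)."""
    total = cm.sum()
    p_o = np.trace(cm) / total                                  # observed agreement (overall accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return float(p_o), float(kappa)

# Hypothetical 3-class confusion matrix
cm = np.array([[50, 5, 5],
               [4, 40, 6],
               [6, 4, 30]])
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {kappa:.2f}")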

160 citations

Journal ArticleDOI
TL;DR: An ensemble method based on a two-layered machine learning model is developed to establish a relationship between fire incidence and climatic data; the model provides highly accurate bush-fire incidence hot-spot estimation from weekly climatic surfaces.
Abstract: Increasing Australian bush-fire frequencies over the last decade indicate a major climatic change in the coming future. Understanding of such climatic change for Australian bush-fires is limited, and there is an urgent need for scientific research capable of contributing to Australian society. Bush-fire frequency carries information on the spatial, temporal and climatic aspects of bush-fire events and provides contextual information for modelling various climate data to accurately predict future bush-fire hot spots. In this study, we develop an ensemble method based on a two-layered machine learning model to establish a relationship between fire incidence and climatic data. In a 336-week data trial, we demonstrate that the model provides highly accurate bush-fire incidence hot-spot estimation (91% global accuracy) from the weekly climatic surfaces. Our analysis also indicates that Australian weekly bush-fire frequencies increased by 40% over the last 5 years, particularly during summer months, implying a serious climatic shift.
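
The paper describes a two-layered (stacked) ensemble relating climatic variables to fire incidence. As a rough sketch of that idea only, the Python code below stacks two base learners under a logistic-regression meta-learner on synthetic weekly data; the features, estimators and 336-week array are placeholders and do not reflect the authors' actual model or data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(336, 6))  # 336 weeks x 6 synthetic climatic variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=336) > 0).astype(int)  # fire / no-fire label

# First layer: base learners; second layer: a meta-learner trained on their cross-validated outputs
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5,
)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
stack.fit(X_train, y_train)
print(f"hold-out accuracy: {stack.score(X_test, y_test):.2%}")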

105 citations

Journal ArticleDOI
TL;DR: A methodology that incorporates object-based image analysis with three machine learning methods, including the multilayer perceptron neural network (MLP-NN) and random forest (RF), enhanced landslide detection when tested for detecting earthquake-triggered landslides in Rasuwa district, Nepal.
Abstract: Landslides represent a severe hazard in many areas of the world. Accurate landslide maps are needed to document the occurrence and extent of landslides and to investigate their distribution, types, and the pattern of slope failures. Landslide maps are also crucial for determining landslide susceptibility and risk. Satellite data have been widely used for such investigations, next to data from airborne or unmanned aerial vehicle (UAV)-borne campaigns and Digital Elevation Models (DEMs). We have developed a methodology that incorporates object-based image analysis (OBIA) with three machine learning (ML) methods, including the multilayer perceptron neural network (MLP-NN) and random forest (RF), for landslide detection. We identified the optimal scale parameters (SP) and used them for multi-scale segmentation and further analysis. We evaluated the resulting objects using the object pureness index (OPI), object matching index (OMI), and object fitness index (OFI) measures. We then applied two different methods to optimize the landslide detection task: (a) an ensemble method of stacking that combines the different ML methods for improving the performance, and (b) Dempster–Shafer theory (DST), to combine the multi-scale segmentation and classification results. Through the combination of three ML methods and the multi-scale approach, the framework enhanced landslide detection when it was tested for detecting earthquake-triggered landslides in Rasuwa district, Nepal. PlanetScope optical satellite images and a DEM were used, along with the derived landslide conditioning factors. Different accuracy assessment measures were used to compare the results against a field-based landslide inventory. All ML methods yielded their highest overall accuracies, ranging from 83.3% to 87.2%, when using objects with the optimal SP compared to other SPs. However, applying DST to combine the multi-scale results of each ML method significantly increased the overall accuracies to almost 90%. Overall, the integration of OBIA with ML methods resulted in appropriate landslide detections, but using the optimal SP and ML method is crucial for success.
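
Dempster–Shafer theory (DST) is used above to fuse the multi-scale classification results. The minimal Python sketch below applies Dempster's rule of combination to two hypothetical mass functions over the classes {landslide, non-landslide}; the mass values and two-classifier setup are illustrative assumptions, and the paper's full DST workflow is considerably richer.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose keys are frozensets of class labels."""
    combined, conflict = {}, 0.0
    for (a, m_a), (b, m_b) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + m_a * m_b
        else:
            conflict += m_a * m_b  # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}  # normalise by 1 - K

L, N = frozenset({"landslide"}), frozenset({"non-landslide"})
theta = L | N  # the frame of discernment (full uncertainty)
m_a = {L: 0.6, N: 0.3, theta: 0.1}  # hypothetical evidence from one classifier
m_b = {L: 0.7, N: 0.2, theta: 0.1}  # hypothetical evidence from another
print(dempster_combine(m_a, m_b))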

102 citations


Cited by
Journal ArticleDOI
TL;DR: This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way.
Abstract: Remote sensing imagery needs to be converted into tangible information which can be utilised in conjunction with other data sets, often within widely used Geographic Information Systems (GIS). As long as pixel sizes remained typically coarser than, or at the best, similar in size to the objects of interest, emphasis was placed on per-pixel analysis, or even sub-pixel analysis for this conversion, but with increasing spatial resolutions alternative paths have been followed, aimed at deriving objects that are made up of several pixels. This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way. The most common approach used for building objects is image segmentation, which dates back to the 1970s. Around the year 2000 GIS and image processing started to grow together rapidly through object based image analysis (OBIA - or GEOBIA for geospatial object based image analysis). In contrast to typical Landsat resolutions, high resolution images support several scales within their images. Through a comprehensive literature review several thousand abstracts have been screened, and more than 820 OBIA-related articles comprising 145 journal papers, 84 book chapters and nearly 600 conference papers, are analysed in detail. It becomes evident that the first years of the OBIA/GEOBIA developments were characterised by the dominance of ‘grey’ literature, but that the number of peer-reviewed journal articles has increased sharply over the last four to five years. The pixel paradigm is beginning to show cracks and the OBIA methods are making considerable progress towards a spatially explicit information extraction workflow, such as is required for spatial planning as well as for many monitoring programmes.
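
The segmentation step at the heart of OBIA, grouping pixels into image objects and attaching per-object attributes for later classification, can be illustrated with the short Python sketch below. It uses scikit-image's Felzenszwalb segmentation on a bundled sample RGB image; the library, parameters and feature choice are illustrative assumptions, not methods prescribed by the review.

import numpy as np
from skimage import data, segmentation

image = data.astronaut()  # sample RGB image shipped with scikit-image
segments = segmentation.felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print(f"number of image objects: {segments.max() + 1}")

# Per-object mean band values: the kind of object attributes a classifier would use
means = np.array([image[segments == s].mean(axis=0) for s in range(segments.max() + 1)])
print(f"object feature table shape: {means.shape}")  # (n_objects, 3 bands)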

3,809 citations

Journal Article
TL;DR: Place the animal in an induction chamber, anesthetize the mouse and ensure sedation; move it to a nose cone for hair removal using cream, and reduce anesthesia during imaging to maintain a proper heart rate.
Abstract: 1. Place animal in induction chamber and anesthetize the mouse and ensure sedation. 2. Once the animal is sedated, move it to a nose cone for hair removal using cream. Only apply cream to the area of the chest that will be utilized for imaging. Once the hair is removed, wipe area with wet gauze to ensure all hair is removed. 3. Move the animal to the imaging platform and tape its paws to the ECG lead plates and insert rectal probe. Body temperature should be maintained at 36-37°C. During imaging, reduce anesthesia to maintain proper heart rate. If the animal shows signs of being awake, use a higher concentration of anesthetic.

1,557 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a review of the state-of-the-art of Big Data applications in Smart Farming and identify the related socio-economic challenges to be addressed.

1,477 citations