
Showing papers by "Pijush Samui" published in 2018


Journal ArticleDOI
31 Oct 2018 - Sensors
TL;DR: A new methodology for spatial prediction of flash floods based on Sentinel-1 SAR imagery and a new hybrid machine learning technique is proposed; the results indicate that the combination of the firefly algorithm (FA) and Levenberg–Marquardt (LM) backpropagation is very effective and that the proposed FA-LM-ANN is a new and useful tool for predicting flash flood susceptibility.
Abstract: Flash floods are widely recognized as one of the most devastating natural hazards in the world; therefore, prediction of flash flood-prone areas is crucial for public safety and emergency management. This research proposes a new methodology for spatial prediction of flash floods based on Sentinel-1 SAR imagery and a new hybrid machine learning technique. The SAR imagery is used to detect flash flood inundation areas, whereas the new machine learning technique, a hybrid of the firefly algorithm (FA), Levenberg–Marquardt (LM) backpropagation, and an artificial neural network (named FA-LM-ANN), is used to construct the prediction model. The Bac Ha Bao Yen (BHBY) area in the northwestern region of Vietnam was used as a case study. Accordingly, a Geographical Information System (GIS) database was constructed using 12 input variables (elevation, slope, aspect, curvature, topographic wetness index, stream power index, toposhade, stream density, rainfall, normalized difference vegetation index, soil type, and lithology), and the flood inundation areas were subsequently mapped as the output. Using the database and FA-LM-ANN, the flash flood model was trained and verified. The model performance was validated via various performance metrics, including the classification accuracy rate, the area under the curve (AUC), precision, and recall. The flash flood model that produced the highest performance was then compared with benchmarks; the comparison indicates that the combination of FA and LM backpropagation is very effective and that the proposed FA-LM-ANN is a useful tool for predicting flash flood susceptibility.
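
A minimal sketch of the validation step described above, assuming a generic binary susceptibility classifier: the FA-LM-ANN itself is not reproduced, and the labels and susceptibility scores below are hypothetical stand-ins, not the BHBY data.

# Computes the four metrics named in the abstract (classification accuracy,
# area under the ROC curve, precision, recall) for a stand-in classifier.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                    # 1 = flash-flood pixel, 0 = non-flood pixel
y_score = np.clip(0.4 * y_true + 0.6 * rng.random(200), 0, 1)  # hypothetical susceptibility scores
y_pred = (y_score >= 0.5).astype(int)                    # threshold the susceptibility map

print("Accuracy :", accuracy_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))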

91 citations


Journal ArticleDOI
TL;DR: The study ascertains that the GRNN model is a capable data-intelligent tool for temperature estimation without the need for climate-based inputs, at least in the present investigation; the model can be explored for its utility in energy management, building and construction, agriculture, heatwave studies, health, and other socio-economic areas, particularly in data-sparse regions.
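
A minimal numpy sketch of a generalized regression neural network of the kind named above (Specht's GRNN, i.e. a Gaussian-kernel Nadaraya-Watson regressor). The paper's temperature data are not available here, so the training inputs and the smoothing parameter sigma are hypothetical.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Squared Euclidean distances between every query and every training pattern
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # weighted average of training targets

rng = np.random.default_rng(1)
X_train = rng.random((50, 2))                                 # e.g. normalized location / day-of-year
y_train = 20 + 10 * X_train[:, 0] + rng.normal(0, 0.5, 50)    # stand-in temperatures (deg C)
X_query = rng.random((5, 2))
print(grnn_predict(X_train, y_train, X_query))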

60 citations


Journal ArticleDOI
TL;DR: A comparison is made between the results obtained from all the models considered, the model that provides the best fit is identified, and the results show that the proposed models are robust for determining the compressive strength of concrete.
Abstract: In the present study, soft computing techniques, i.e., machine learning and regression algorithms, have gained much importance for the prediction of various parameters in different fields of science and engineering. This paper shows how regression models can be implemented for the prediction of the compressive strength of concrete. Three models are taken into consideration: Gaussian Process Regression (GPR), Multivariate Adaptive Regression Splines (MARS), and Minimax Probability Machine Regression (MPMR). The contents of cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, and fine aggregate, together with age in days, have been taken as inputs, and compressive strength as the output, for the GPR, MARS, and MPMR models. A comparatively large data set of 1030 normalized, previously published experimental results was utilized. A comparison is made between the results obtained from all the above-mentioned models, and the model that provides the best fit is identified. The experimental results show that the proposed models are robust for the determination of the compressive strength of concrete.
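
A hedged sketch of fitting one of the three models (Gaussian Process Regression) on the eight inputs named in the abstract. MARS and MPMR have no standard scikit-learn implementation and are omitted; the data below are synthetic stand-ins for the 1030-record concrete data set.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

features = ["cement", "slag", "fly_ash", "water",
            "superplasticizer", "coarse_agg", "fine_agg", "age_days"]
rng = np.random.default_rng(2)
X = rng.random((200, len(features)))                           # normalized stand-in mix proportions
y = 30 + 25 * X[:, 0] - 10 * X[:, 3] + rng.normal(0, 2, 200)   # stand-in strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
print("R^2 on held-out data:", gpr.score(X_te, y_te))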

37 citations


Journal ArticleDOI
TL;DR: In this article, three methods have been used to predict the liquefaction susceptibility of some highly seismically active regions of Bihar, such as Darbhanga, Samastipur, West Champaran, and Araria Sangram.
Abstract: The built environment and human life have been regularly affected by earthquakes. One of the most catastrophic earthquake hazards is liquefaction, which causes failure of embankments, earth structures, foundations, natural slopes, and superstructures. In this paper, three methods have been used to predict the liquefaction susceptibility of some highly seismically active regions of Bihar, such as Darbhanga, Samastipur, West Champaran, and Araria Sangram. The first method is based on the semi-empirical approach given by Idriss and Boulanger (in: Proceedings of 11th International conference on soil dynamics and earthquake engineering and 3rd International conference on earthquake geotechnical engineering vol 1, pp 32–56, 2006). The stress reduction factor (rd), the overburden correction factor for cyclic stress ratios (Kσ), the earthquake magnitude scaling factor for cyclic stress ratios (MSF), and recently modified relations have been considered in this approach. The result is obtained in the form of a factor of safety at a particular depth beneath the ground surface. The second approach is based on Muduli's probabilistic method (2013), and the third on the relevance vector machine (RVM) method developed by Karthikeyan et al. (Eur J Environ Civ Eng 17(4):248–262, 2013). The RVM model has been used to determine the liquefaction potential of soil based on the standard penetration test (SPT) blow count, with the SPT data used as the input parameter for the analysis. Sites in high seismic zones are more susceptible to liquefaction during an earthquake, which indicates that earthquake magnitude plays a very important role in the assessment of liquefaction, irrespective of the soil type and water table of the site. This paper presents deterministic and probabilistic analyses of the liquefaction behavior of soil.
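
A minimal sketch of the overall structure of the simplified procedure described above: CSR = 0.65 (amax/g)(σv/σ'v) rd and FS = CRR·MSF·Kσ / CSR. The depth-dependent factors rd, MSF, Kσ and the SPT-based CRR should be evaluated with the Idriss and Boulanger (2006) relations; they are treated as precomputed inputs here, and the numerical values are hypothetical.

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, rd):
    """CSR induced by the design earthquake at a given depth."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def factor_of_safety(crr, csr, msf, k_sigma):
    """Factor of safety against liquefaction; FS < 1 indicates likely liquefaction."""
    return crr * msf * k_sigma / csr

# Hypothetical values for one depth of one borehole (not the paper's data):
csr = cyclic_stress_ratio(a_max_g=0.24, sigma_v=95.0, sigma_v_eff=60.0, rd=0.95)
fs = factor_of_safety(crr=0.18, csr=csr, msf=1.1, k_sigma=1.0)
print(f"CSR = {csr:.3f}, FS = {fs:.2f}")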

13 citations


Journal ArticleDOI
TL;DR: In this paper, a reliability approach is presented for finding the probability of liquefaction; it is formulated on the basis of reliability analyses of 234 field records, using the deterministic simplified Idriss and Boulanger method.
Abstract: There are many deterministic and probabilistic liquefaction assessment measures for classifying whether soil liquefaction will take place or not. Different approaches give dissimilar safety factors and liquefaction probabilities, so reliability analysis is required to deal with these uncertainties. This paper describes a reliability technique for predicting the seismic liquefaction potential of soils in some areas of Bihar State. A reliability approach is presented in order to find the probability of liquefaction, formulated on the basis of the results of reliability analyses of 234 field data. Using the deterministic simplified Idriss and Boulanger method, the factor of safety of the soil has been assessed. The reliability index, as well as the corresponding probability of liquefaction, has been determined based on the First Order Second Moment (FOSM) method. The developed method can be used as a robust tool for engineers concerned with the estimation of liquefaction potential.
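
A hedged sketch of a First Order Second Moment computation of the kind described above, assuming the limit state is g = FS - 1 and that FS is treated as approximately normal, so that beta = (mean FS - 1) / std FS and P(liquefaction) = Phi(-beta). The factor-of-safety values below are hypothetical, not the 234 field records.

import numpy as np
from scipy.stats import norm

fs_samples = np.array([1.20, 0.95, 1.10, 1.35, 0.88, 1.05])  # stand-in FS values
mean_fs, std_fs = fs_samples.mean(), fs_samples.std(ddof=1)

beta = (mean_fs - 1.0) / std_fs          # reliability index
p_liq = norm.cdf(-beta)                  # probability of liquefaction
print(f"beta = {beta:.2f}, P(liquefaction) = {p_liq:.2%}")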

9 citations


Book ChapterDOI
01 Jan 2018
TL;DR: This paper proposes the Deep Belief Network (DBN) learning technique, one of the state-of-the-art machine learning algorithms, for the classification of drought and non-drought images; its effectiveness is measured by various performance metrics.
Abstract: Drought is a condition of land in which the groundwater faces a severe shortage. This condition affects the survival of plants and animals, and drought can severely impact ecosystems and agricultural productivity; hence, the economy is also affected. This paper proposes the Deep Belief Network (DBN) learning technique, one of the state-of-the-art machine learning algorithms, for the classification of drought and non-drought images. In addition, k-nearest neighbour (kNN) and random forest learning methods have been applied to the classification of the same drought images, and the performance of the DBN has been compared with that of kNN and random forest. The data set has been split into train and test sets in the ratios 80:20, 70:30, and 60:40. Finally, the effectiveness of the three proposed models has been measured by various performance metrics.
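
A hedged sketch of the kNN and random forest baselines and the three train/test splits named in the abstract. The DBN is not reproduced here, and the image features below are synthetic stand-ins for the drought data set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.random((300, 64))                     # stand-in image feature vectors
y = rng.integers(0, 2, size=300)              # 1 = drought, 0 = non-drought

for train_pct in (80, 70, 60):                # 80:20, 70:30, 60:40 splits
    test_size = (100 - train_pct) / 100
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=0)
    for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                      ("Random forest", RandomForestClassifier(random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(f"{train_pct}:{100 - train_pct} {name}: accuracy = {clf.score(X_te, y_te):.2f}")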

3 citations


Book ChapterDOI
01 Jan 2018
TL;DR: This chapter adopts three intelligent models (Extreme Learning Machine, Minimax Probability Machine Regression, and Generalized Regression Neural Network) for the determination of the elastic modulus (Ej) of jointed rock mass.
Abstract: The elastic modulus (Ej) of jointed rock mass is a key parameter for deformation analysis of rock mass. This chapter adopts three intelligent models (Extreme Learning Machine (ELM), Minimax Probability Machine Regression (MPMR), and Generalized Regression Neural Network (GRNN)) for the determination of Ej of jointed rock mass. MPMR is derived in a probabilistic framework, ELM is a modified version of the single hidden layer feedforward network, and GRNN approximates any arbitrary function between the input and output variables. Joint frequency (Jn), joint inclination parameter (n), joint roughness parameter (r), confining pressure (σ3, MPa), and elastic modulus of intact rock (Ei, GPa) have been taken as inputs of the ELM, GRNN, and MPMR models, and the output is Ej of the jointed rock mass. In this study, ELM, GRNN, and MPMR have been used as regression techniques, and the developed GRNN, ELM, and MPMR models have been compared with Artificial Neural Network (ANN) models.
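
A minimal sketch of one of the three models, an Extreme Learning Machine: a single hidden layer with random, fixed input weights and output weights solved by least squares. The five inputs follow the abstract (Jn, n, r, σ3, Ei), but the data below are hypothetical, not the chapter's jointed-rock database.

import numpy as np

rng = np.random.default_rng(4)
X = rng.random((100, 5))                                        # stand-ins for Jn, n, r, sigma3, Ei
y = 5 + 12 * X[:, 4] - 3 * X[:, 0] + rng.normal(0, 0.3, 100)    # stand-in Ej (GPa)

n_hidden = 30
W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (not trained)
b = rng.normal(size=n_hidden)                  # random hidden biases
H = np.tanh(X @ W + b)                         # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
y_hat = H @ beta
print("Training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))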

3 citations


Book ChapterDOI
01 Jan 2018
TL;DR: In this article, four modeling techniques, namely Ordinary Kriging (OK), Generalized Regression Neural Network (GRNN), Genetic Programming (GP), and Minimax Probability Machine Regression (MPMR), have been used for predicting the rock depth at any point in Chennai.
Abstract: This study adopts four modeling techniques, Ordinary Kriging (OK), Generalized Regression Neural Network (GRNN), Genetic Programming (GP), and Minimax Probability Machine Regression (MPMR), for the prediction of rock depth (d) at Chennai (India). Latitude (Lx) and longitude (Ly) have been used as inputs of the models. A semivariogram has been constructed for developing the OK model, and the developed GP model gives an equation for the prediction of d at any point in Chennai. A comparison of the four modeling techniques has been carried out; the performance of MPMR is slightly better than that of the other models. The developed models capture the spatial variability of rock depth at Chennai.
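
A hedged sketch of interpolating rock depth from latitude and longitude with a Gaussian process, a close relative of Ordinary Kriging. The OK, GP, and MPMR models of the chapter are not reproduced; the borehole coordinates and depths below are hypothetical.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
XY = rng.random((80, 2))                                                   # stand-in normalized (Lx, Ly)
d = 3 + 4 * np.sin(3 * XY[:, 0]) + 2 * XY[:, 1] + rng.normal(0, 0.2, 80)   # stand-in rock depth (m)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(), normalize_y=True)
gpr.fit(XY, d)
query = np.array([[0.5, 0.5], [0.1, 0.9]])              # hypothetical query locations
depth_pred, depth_std = gpr.predict(query, return_std=True)
print(depth_pred, depth_std)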

1 citation


Journal ArticleDOI
TL;DR: To develop a model that accurately predicts Ʀ, four models (support vector machine, relevance vector machine, Gaussian process regression, and generalized regression neural network) have been considered, validated, and compared to arrive at the best one.
Abstract: The rotation capacity (Ʀ) of steel beams is a physical factor that indicates the ductility of a structural member. This information is most useful in severe conditions such as earthquakes; in fact, Ʀ is a deciding factor in the plastic design of wide flange beams. To simplify the calculation of Ʀ, soft computing techniques can be applied. In this paper, the various attributes that govern Ʀ have been obtained from a wide database of previously conducted experiments. To develop a model that accurately predicts Ʀ, four models (support vector machine, relevance vector machine, Gaussian process regression, and generalized regression neural network) have been considered. These models have been trained and tested with the collected data, then validated and compared to arrive at the best one. Such efforts could go a long way in helping determine the Ʀ of wide flange steel beams and contribute to the better design of structural members.
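
A hedged sketch of fitting two of the four models named above (support vector regression and Gaussian process regression) to predict rotation capacity. RVM and GRNN have no standard scikit-learn implementation and are omitted; the beam features and rotation capacities below are hypothetical, not the paper's experimental database.

import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.random((150, 6))                                        # stand-in flange/web slenderness, etc.
y = 4 + 6 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.3, 150)     # stand-in rotation capacity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("SVR", make_pipeline(StandardScaler(), SVR(C=10.0))),
                    ("GPR", GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True))]:
    model.fit(X_tr, y_tr)
    print(f"{name} R^2 on held-out data: {model.score(X_te, y_te):.2f}")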

1 citation