
Showing papers in "Archives of Computational Methods in Engineering in 2019"


Journal ArticleDOI
TL;DR: The performance of state-of-the-art techniques is analyzed to identify those that seem to work well across several crops or crop categories, and a set of acceptable techniques is identified.
Abstract: The symptoms of plant diseases are evident in different parts of a plant; however, leaves are found to be the most commonly observed part for detecting an infection. Researchers have thus attempted to automate the process of plant disease detection and classification using leaf images. Several works have utilized computer vision technologies effectively and contributed a lot to this domain. This manuscript summarizes the pros and cons of all such studies to throw light on various important research aspects. A discussion on commonly studied infections and the research scenario in different phases of a disease detection system is presented. The performance of state-of-the-art techniques is analyzed to identify those that seem to work well across several crops or crop categories. Having identified a set of acceptable techniques, the manuscript highlights several points of consideration along with future research directions. The survey should help researchers gain an understanding of computer vision applications in plant disease detection.

187 citations


Journal ArticleDOI
TL;DR: This paper presents a state-of-the-art review describing different types of IM faults and their diagnostic schemes; several monitoring techniques available for fault diagnosis of IMs are identified and reviewed.
Abstract: There is a constant call for reduction of the operational and maintenance costs of induction motors (IMs). These costs can be significantly reduced if the health of the system is monitored regularly. This allows for early detection of the degeneration of the motor health, enabling a proactive response, minimizing unscheduled downtime, and avoiding unexpected breakdowns. Condition-based monitoring has become an important task for engineers and researchers, mainly in industrial applications such as railways, oil-extracting mills, industrial drives, agriculture, the mining industry, etc. Owing to the demand and influence of condition monitoring and fault diagnosis in IMs, and keeping in mind the prerequisites for future research, this paper presents a state-of-the-art review describing different types of IM faults and their diagnostic schemes. Several monitoring techniques available for fault diagnosis of IMs are identified and reviewed. The utilization of non-invasive techniques for data acquisition in automatic, timely scheduling of maintenance and in predicting failure aspects of dynamic machines holds great scope for the future.
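A widely used non-invasive scheme in this family is motor current signature analysis (MCSA), which inspects the stator-current spectrum for fault-related sidebands. The following is a minimal sketch of the idea on a synthetic signal, not a method from a specific reviewed paper; the 50 Hz supply frequency, slip value and sideband amplitudes are illustrative assumptions.

```python
import numpy as np

# --- Synthetic stator current: supply tone plus broken-rotor-bar sidebands ---
# Assumed values for illustration only: 50 Hz supply, slip s = 0.03.
fs, T = 5000.0, 10.0                       # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1.0 / fs)
f_supply, slip = 50.0, 0.03
f_side = [f_supply * (1 - 2 * slip), f_supply * (1 + 2 * slip)]  # (1 +/- 2s)*f

current = np.sin(2 * np.pi * f_supply * t)
for f in f_side:                           # weak fault signatures
    current += 0.02 * np.sin(2 * np.pi * f * t)
current += 0.01 * np.random.randn(t.size)  # measurement noise

# --- Spectrum: a fault shows up as sidebands around the supply frequency ---
spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

band = (freqs > 40) & (freqs < 60)
for f, a in zip(freqs[band], spectrum[band]):
    if a > 0.001 * spectrum.max():         # crude peak report
        print(f"{f:6.2f} Hz  amplitude {a:10.1f}")
```

For a broken-rotor-bar fault the characteristic sidebands appear at (1 +/- 2s)f, so the printed peaks near 47 and 53 Hz are the diagnostic signature next to the 50 Hz supply peak.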

155 citations


Journal ArticleDOI
TL;DR: The numerical study reveals consistent performance of one model out of all the surrogates utilized; the best-performing model is then used to solve a large-scale practical RDO problem, and all results are compared against Monte Carlo simulation.
Abstract: Robust design optimization (RDO) has been eminent in ascertaining the optimal configuration of engineering systems in the presence of uncertainties. However, conventional RDO can often become computationally intensive, as neighborhood assessments of every solution are required to compute the performance variance and ensure feasibility. Surrogate-assisted optimization is one of the efficient approaches for mitigating this computational expense. However, the performance of the surrogate model is a key factor in determining the optima in multi-modal and highly non-linear landscapes in the presence of uncertainties. In other words, the approximation accuracy of the model is principal in yielding the actual optima and thus in avoiding misguiding the decision maker with false or local optimum points. Therefore, an extensive survey has been carried out by employing most of the well-known surrogate models in the framework of RDO. It is worth mentioning that the numerical study has revealed consistent performance of one model out of all the surrogates utilized. Finally, the best-performing model has been utilized in solving a large-scale practical RDO problem. All the results have been compared with Monte Carlo simulation results.
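To make the surrogate-assisted RDO loop concrete, the sketch below, an illustration under assumed settings rather than the paper's benchmark, fits a simple Gaussian radial-basis-function surrogate to a few expensive evaluations and then optimizes the robust objective mean + k*std, estimating both statistics cheaply by sampling the surrogate around each candidate design. The test function, shape parameter and uncertainty level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal(500)           # common random numbers for fair comparison

def expensive(x):
    """Stand-in for a costly simulation response (assumed test function)."""
    return np.sin(3 * x) + 0.5 * x**2

# --- Fit a Gaussian RBF surrogate on a small design of experiments ---
X = np.linspace(-2, 2, 15)
y = expensive(X)
eps = 2.0                              # assumed RBF shape parameter
Phi = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(Phi + 1e-10 * np.eye(X.size), y)

def surrogate(x):
    x = np.atleast_1d(x)
    return np.exp(-(eps * (x[:, None] - X[None, :])) ** 2) @ w

# Robust objective mean + k*std under input uncertainty x ~ N(x0, 0.1^2),
# estimated entirely on the cheap surrogate instead of the expensive model.
def robust_obj(x0, k=3.0):
    s = surrogate(x0 + 0.1 * Z)
    return s.mean() + k * s.std()

candidates = np.linspace(-1.8, 1.8, 181)   # a real RDO would use an optimizer
best = min(candidates, key=robust_obj)
print(f"robust optimum near x = {best:.3f}, objective = {robust_obj(best):.3f}")
```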

115 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed review of existing formulations of Kirchhoff–Love and Simo–Reissner type for highly slender beams is presented, and two different rotation interpolation schemes with strong or weak Kirchhoff constraint enforcement, as well as two different choices of nodal triad parametrization in terms of rotation or tangent vectors, are proposed.
Abstract: The present work focuses on geometrically exact finite elements for highly slender beams. It aims at the proposal of novel formulations of Kirchhoff–Love type, a detailed review of existing formulations of Kirchhoff–Love and Simo–Reissner type, as well as a careful evaluation and comparison of the proposed and existing formulations. Two different rotation interpolation schemes with strong or weak Kirchhoff constraint enforcement, respectively, as well as two different choices of nodal triad parametrizations in terms of rotation or tangent vectors, are proposed. The combination of these schemes leads to four novel finite element variants, all of them based on a $$C^1$$-continuous Hermite interpolation of the beam centerline. Essential requirements such as representability of general 3D, large-deformation, dynamic problems involving slender beams with arbitrary initial curvatures and anisotropic cross-section shapes, preservation of objectivity and path-independence, consistent convergence orders, avoidance of locking effects as well as conservation of energy and momentum by the employed spatial discretization schemes, but also a range of practically relevant secondary aspects, will be investigated analytically and verified numerically for the different formulations. It will be shown that the geometrically exact Kirchhoff–Love beam elements proposed in this work are the first ones of this type that fulfill all these essential requirements. By contrast, Simo–Reissner type formulations fulfilling these requirements are already well established in the literature. However, it will be argued that the shear-free Kirchhoff–Love formulations can provide considerable numerical advantages, such as lower spatial discretization error levels, improved performance of time integration schemes as well as linear and nonlinear solvers, and smooth geometry representation, as compared to shear-deformable Simo–Reissner formulations when applied to highly slender beams. Concretely, several representative numerical test cases confirm that the proposed Kirchhoff–Love formulations exhibit a lower discretization error level as well as considerably improved nonlinear solver performance in the range of high beam slenderness ratios as compared to two representative Simo–Reissner element formulations from the literature.
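The $$C^1$$-continuous Hermite interpolation of the centerline underlying these elements can be illustrated compactly. The following is a minimal sketch, not the authors' element implementation: a 3D centerline segment is interpolated from its end positions and end tangents with the standard cubic Hermite shape functions, which is what guarantees tangent continuity across element boundaries. The quarter-circle test geometry is an assumed example.

```python
import numpy as np

def hermite_centerline(x0, t0, x1, t1, xi):
    """Cubic Hermite interpolation of a beam centerline segment.

    x0, x1: end positions (3,); t0, t1: end tangents (3,); xi in [0, 1].
    """
    H1 = 1 - 3 * xi**2 + 2 * xi**3        # weights position x0
    H2 = xi - 2 * xi**2 + xi**3           # weights tangent  t0
    H3 = 3 * xi**2 - 2 * xi**3            # weights position x1
    H4 = -xi**2 + xi**3                   # weights tangent  t1
    return H1 * x0 + H2 * t0 + H3 * x1 + H4 * t1

# Example: quarter circle of radius 1 approximated by a single Hermite segment.
x0, x1 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
# Tangents scaled by the arc length (pi/2) for a consistent parametrization.
t0 = (np.pi / 2) * np.array([0, 1.0, 0])
t1 = (np.pi / 2) * np.array([-1.0, 0, 0])

for xi in np.linspace(0, 1, 5):
    p = hermite_centerline(x0, t0, x1, t1, xi)
    print(f"xi={xi:.2f}  point={np.round(p, 4)}  |radius error|="
          f"{abs(np.linalg.norm(p) - 1):.1e}")
```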

112 citations


Journal ArticleDOI
TL;DR: Six heuristic algorithms are studied for the Travelling Salesman Problem: Nearest Neighbor, Genetic Algorithm, Simulated Annealing, Tabu Search, Ant Colony Optimization and Tree Physiology Optimization.
Abstract: The Travelling Salesman Problem (TSP) is an NP-hard problem with a high number of possible solutions; the complexity grows with the factorial of the number of nodes in each specific problem. Meta-heuristic algorithms are optimization algorithms able to drive the TSP toward a satisfactory solution. To date, many meta-heuristic algorithms have been introduced in the literature, built on different philosophies of intensification and diversification. This paper focuses on six heuristic algorithms: Nearest Neighbor, Genetic Algorithm, Simulated Annealing, Tabu Search, Ant Colony Optimization and Tree Physiology Optimization. The study compares their computation time, accuracy and convergence.
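Of the six, the Nearest Neighbor construction heuristic is the simplest baseline: start from a city and repeatedly visit the closest unvisited one. A minimal sketch on random coordinates (illustrative, not the paper's test instances):

```python
import numpy as np

def nearest_neighbor_tour(coords, start=0):
    """Greedy TSP construction: always move to the closest unvisited city."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    unvisited, tour = set(range(n)) - {start}, [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last, j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour, dist

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

rng = np.random.default_rng(42)
coords = rng.random((50, 2))               # 50 random cities in the unit square
tour, dist = nearest_neighbor_tour(coords)
print(f"nearest-neighbor tour length: {tour_length(tour, dist):.3f}")
```

Meta-heuristics such as Simulated Annealing or Ant Colony Optimization are then expected to improve on this greedy tour by escaping its locally optimal decisions.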

89 citations


Journal ArticleDOI
TL;DR: The idea behind this work is to summarize core Soft Computing methodologies, along with various terminologies such as evaluation parameters, tools, databases and noise types, which can be advantageous for researchers.
Abstract: Image segmentation is a part of nearly all computer vision schemes, serving as a pre-processing phase to extract more meaningful and useful information for analysing the objects within an image. Segmentation of an image is one of the most common scientific topics, an essential technology and a critical prerequisite for image analysis and processing. A lot of research has been carried out on emerging algorithms and approaches for segmentation, but even at present no single standard technique has been established. The available methodologies are broadly classified into two classes: traditional approaches and Soft Computing (SC) or Computational Intelligence (CI) approaches. In this article, our emphasis is on the SC techniques that have been adopted for segmenting an image. Nowadays, SC or CI is used frequently in Information Technology and Computer Technology. Soft Computing approaches working synergistically provide flexible information-processing capability for handling real-life ambiguous situations. The aim of these methodologies is to exploit the tolerance for ambiguity, roughness, imprecise reasoning and partial truth in order to achieve tractability, robustness and low-cost solutions. Neural Networks (NNs), Fuzzy Logic (FL) and Genetic Algorithms (GAs) are the fundamental approaches of the SC discipline. SC approaches have been broadly implemented and studied in a number of applications including scientific analysis, medicine, engineering, management, the humanities, etc. The paper focuses on introducing the various SC methodologies and presenting numerous applications in image segmentation. The intent is to demonstrate the possibilities of applying computational intelligence to segmentation of an image. The available articles on the usage of SC in segmentation are investigated, focusing especially on the core approaches of FL, NN and GA, and effort has also been made to cover newer techniques such as Fuzzy C-Means from the FL family and Deep Neural Networks or Convolutional Neural Networks from the NN family. The idea behind this work is to summarize core Soft Computing methodologies, along with various terminologies such as evaluation parameters, tools, databases, noise types, etc., which can be advantageous for researchers. This study also identifies the SC approaches being used, often in combination, to resolve the distinctive difficulties of image segmentation, concluding with a general discussion of methodologies and applications, followed by proposed work.
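Since Fuzzy C-Means (FCM) is singled out from the FL family, here is a compact sketch of the standard FCM iteration, applied to segmenting pixel intensities of a synthetic image into c clusters. The fuzzifier m = 2 and the tolerance are conventional choices, not values from this survey.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, tol=1e-5, max_iter=200, seed=0):
    """Standard FCM on 1-D features x (e.g., pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # fuzzy memberships, columns sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # Standard membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Synthetic "image": three intensity populations with noise.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(mu, 8, 500) for mu in (40, 120, 200)])
centers, u = fuzzy_c_means(img, c=3)
labels = u.argmax(axis=0)                    # defuzzify: hard segmentation
print("cluster centers:", np.sort(np.round(centers, 1)))
```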

88 citations


Journal ArticleDOI
TL;DR: This study presents an up-to-date review of the application of NIOAs in the image enhancement domain, and discusses the key issues involved in the formulation of NIOA-based image enhancement models.
Abstract: In the field of image processing, there are several problems where an efficient search has to be performed in a complex search domain to find an optimal solution. Image enhancement, which improves the quality of an image for visual analysis and/or machine understanding, is one of these problems. There is no unique image enhancement technique, nor a measurement criterion, that satisfies all the requirements and quantitatively judges the quality of a given image. Thus, proper image enhancement sometimes becomes a hard problem and takes large computational time. In order to overcome this, researchers have formulated image enhancement as an optimization problem and solved it using Nature-Inspired Optimization Algorithms (NIOAs), which has started a new era in the image enhancement field. This study presents an up-to-date review of the application of NIOAs in the image enhancement domain. The key issues involved in the formulation of NIOA-based image enhancement models are also discussed here.
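A minimal sketch of this formulation on a synthetic gray image follows: a parametric transfer function (gamma correction plus a linear contrast stretch is an assumed, simple choice) is tuned to maximize a fitness combining histogram entropy and edge content, with plain random search standing in for a fully fledged nature-inspired optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.35, 0.08, (64, 64)), 0, 1)   # dull synthetic image

def enhance(img, gamma, a, b):
    """Assumed transfer function: gamma correction + linear stretch."""
    return np.clip(a * img ** gamma + b, 0, 1)

def fitness(out):
    """Histogram entropy plus mean gradient magnitude (edge content)."""
    hist, _ = np.histogram(out, bins=64, range=(0, 1))
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()
    gx, gy = np.gradient(out)
    return entropy + 10.0 * np.hypot(gx, gy).mean()

# Random search as a stand-in for GA / PSO / other NIOAs.
best, best_fit = None, -np.inf
for _ in range(300):
    gamma, a, b = rng.uniform(0.3, 3), rng.uniform(0.5, 2), rng.uniform(-0.3, 0.3)
    f = fitness(enhance(img, gamma, a, b))
    if f > best_fit:
        best, best_fit = (gamma, a, b), f
print(f"best gamma={best[0]:.2f}, a={best[1]:.2f}, b={best[2]:.2f}; "
      f"fitness={best_fit:.3f}")
```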

84 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare the bandwidths of one-dimensional periodic structures for wave propagation in the non-dimensional domain, examining the effects of different parameters, such as damping, stiffness and mass ratios, and nonlinearity, on the bandwidth.
Abstract: Wave propagation through a structured medium has attracted the attention of researchers for centuries due to its relevance to problems in condensed matter physics, chemistry, optics, phononics, composites, acoustics and mechanics. Waves within a certain band of frequencies can either propagate, known as the transmission band, or be attenuated, known as the attenuation band. This band structure for a continuum and its equivalent lumped spring-mass model are not identical, although a continuous medium is often modelled as a chain of discrete periodic structures, because the distinction between continuous and discrete depends on the scale. These band characteristics depend on the properties of the units; thus, the effects of different parameters, such as damping, stiffness and mass ratios, and nonlinearity, on the bandwidth are compared with each other in this review. To cloak, modulate, guide, filter out or attenuate unwanted frequencies from propagating waves, metamaterials have been widely investigated as a special form of periodic structure over the past two decades. The main aim of this review is to compare the bandwidths of one-dimensional periodic structures. Waves through two- and three-dimensional periodic media are not considered, because the key band characteristics of periodic systems can already be perceived in one dimension. The methods for computing wave transmission are evaluated in the non-dimensional domain, and the band characteristics of different one-dimensional periodic structures are critically assessed. This review will help future researchers to choose a proper periodic medium for obtaining a specific band phenomenon.
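The band structure of the simplest one-dimensional periodic system, a diatomic spring-mass chain, already shows the transmission/attenuation behaviour discussed above. The sketch below (unit spring stiffness and a 1:3 mass ratio are arbitrary illustrative values) solves the Bloch eigenproblem K(mu) u = omega^2 M u over the non-dimensional wavenumber mu and reports the band-gap edges between the acoustic and optical branches.

```python
import numpy as np
from scipy.linalg import eigh

k, m1, m2 = 1.0, 1.0, 3.0           # spring stiffness and the two masses (assumed)
mu = np.linspace(1e-6, np.pi, 400)  # non-dimensional wavenumber, half Brillouin zone

omega = []
for q in mu:
    # Bloch-reduced stiffness matrix of the two-mass unit cell (Hermitian).
    K = np.array([[2 * k, -k * (1 + np.exp(-1j * q))],
                  [-k * (1 + np.exp(1j * q)), 2 * k]])
    M = np.diag([m1, m2])
    w2 = eigh(K, M, eigvals_only=True)       # generalized eigenvalues omega^2
    omega.append(np.sqrt(np.abs(w2)))
omega = np.array(omega)                      # columns: acoustic, optical branch

print(f"acoustic branch: 0 .. {omega[:, 0].max():.3f}")
print(f"band gap       : {omega[:, 0].max():.3f} .. {omega[:, 1].min():.3f}")
print(f"optical branch : {omega[:, 1].min():.3f} .. {omega[:, 1].max():.3f}")
```

Frequencies inside the printed gap cannot propagate through the chain; this is the attenuation band that metamaterial design seeks to place and widen.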

79 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the use of optimization algorithms and Artificial Neural Networks (ANNs) for structural monitoring in the form of a brief review, which aims to help engineers and researchers find a better alternative for their specific structural monitoring problems.
Abstract: Structural Health Monitoring (SHM) is today the principal approach to managing the detection and identification of damage in the most varied engineering areas. The need to monitor structural behavior is increasing every day due to the development of new materials and increasingly complex structures, which leads to the development of increasingly robust and sensitive SHM methodologies and techniques. Damage identification by means of intelligent signal processing and optimization algorithms based on vibration metrics is particularly emphasized in this paper. The methods discussed here mainly rely on the evaluation of vibrational and modal data, owing to their great potential and relative ease of application. This article discusses the use of optimization algorithms and Artificial Neural Networks (ANNs) for structural monitoring in the form of a brief review, and can be seen as a starting point for developing SHM systems and data analysis. The content of this paper aims to help engineers and researchers find a better alternative for their specific structural monitoring problems.

78 citations


Journal ArticleDOI
TL;DR: This paper carries out a comprehensive review of dehazing techniques to show that they could be effectively applied in real-life practice, and encourages researchers to use these techniques for the removal of haze from hazy images.
Abstract: The visibility of outdoor images is greatly degraded due to the presence of fog, haze, smog, etc. The poor visibility may cause the failure of computer vision applications such as intelligent transportation systems, surveillance systems, object tracking systems, etc. To resolve this problem, many image dehazing techniques have been developed. These techniques play an important role in improving the performance of various computer vision applications. Due to this, researchers are attracted toward dehazing techniques. This paper carries out a comprehensive review of dehazing techniques to show that they could be effectively applied in real-life practice, and encourages researchers to use these techniques for the removal of haze from hazy images. The main classes of dehazing techniques, namely depth estimation, wavelet, enhancement, filtering, supervised learning, fusion, meta-heuristic and variational-model techniques, are addressed. In addition, this paper focuses on the mathematical models of dehazing techniques along with their implementation aspects. Finally, some considerations about challenges and future scope in dehazing techniques are discussed.
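Within the depth-estimation class, the dark channel prior is the classic example: it estimates the transmission map from the observation that haze-free patches contain some channel with near-zero intensity, then inverts the haze model I = J*t + A*(1 - t). A hedged sketch on a synthetic hazy image follows; the patch size, omega = 0.95 and t0 = 0.1 are the commonly used settings, not prescriptions from this survey.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Minimum over color channels, then over a local square patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(I, patch=15, omega=0.95, t0=0.1):
    """Dark channel prior: invert the haze model I = J*t + A*(1 - t)."""
    dc = dark_channel(I, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    idx = dc.ravel().argsort()[-max(1, dc.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission from the normalized dark channel, kept away from zero.
    t = np.clip(1.0 - omega * dark_channel(I / A, patch), t0, 1.0)
    return np.clip((I - A) / t[..., None] + A, 0.0, 1.0)

# Synthetic check: haze a random scene with depth-dependent transmission.
rng = np.random.default_rng(0)
J = rng.random((64, 64, 3)) * 0.8                  # haze-free scene
depth = np.linspace(0.1, 3.0, 64)[:, None]         # depth grows down the image
t_true = np.exp(-1.0 * depth) * np.ones((64, 64))  # Beer-Lambert transmission
I = J * t_true[..., None] + 0.9 * (1 - t_true[..., None])
print("mean |I - J|:", np.abs(I - J).mean().round(3),
      " mean |dehazed - J|:", np.abs(dehaze(I) - J).mean().round(3))
```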

74 citations


Journal ArticleDOI
TL;DR: Different models of QNN, which combine the basics of ANN with the quantum computation paradigm and are superior to the traditional ANN, are reviewed, together with their implementation in various applications.
Abstract: The quantum neural network is a useful tool which has seen increasing development over the years, mainly since the twentieth century. Following the artificial neural network (ANN), a novel, useful and applicable concept known as the quantum neural network (QNN) has been proposed recently. The QNN has been developed by combining the basics of ANN with the quantum computation paradigm, and is superior to the traditional ANN. QNNs are being used in computer games, function approximation, handling big data, etc. Algorithms of QNN are also used in modelling social networks, associative memory devices, automated control systems, etc. Different models of QNN have been proposed by different researchers throughout the world, but a systematic study of these models has not been done to date. Moreover, applications of QNN may also be seen in some of the related research papers. As such, this paper covers the different models which have been developed and, further, their implementation in various applications. In order to understand the power of QNN, a few results and reasons are incorporated to show that these new models are more useful and efficient than the traditional ANN.

Journal ArticleDOI
TL;DR: This article is a review of the main metamodels that use function gradients in addition to function values and indicates that there is a trade-off between the better computing time of least squares methods and the larger versatility of kernel-based approaches.
Abstract: Metamodeling, the science of modeling functions observed at a finite number of points, benefits from all auxiliary information it can account for. Function gradients are a common auxiliary information and are useful for predicting functions with locally changing behaviors. This article is a review of the main metamodels that use function gradients in addition to function values. The goal of the article is to give the reader an overview of the principles involved in gradient-enhanced metamodels while also providing insightful formulations. The following metamodels have gradient-enhanced versions in the literature and are reviewed here: classical, weighted and moving least squares, Shepard weighting functions, and the kernel-based methods that are radial basis functions, kriging and support vector machines. The methods are set in a common framework of linear combinations between a priori chosen functions and coefficients that depend on the observations. The characteristics common to all kernel-based approaches are underlined. A new ν-GSVR metamodel which uses gradients is given. Numerical comparisons of the metamodels are carried out for approximating analytical test functions. The experiments are replicable, as they are performed with an openly available open-source toolbox. The results indicate that there is a trade-off between the better computing time of least squares methods and the larger versatility of kernel-based approaches.
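The least-squares branch of this common framework is easy to illustrate: if the metamodel is a linear combination of a priori chosen basis functions, observed gradients simply contribute extra rows, built from the basis derivatives, to the regression system. A minimal sketch follows; the cubic monomial basis and the 1D test function are assumptions for illustration, not the article's setup.

```python
import numpy as np

deg = 3                                   # assumed monomial basis x^0 .. x^3

def Phi(x):
    """Basis functions phi_j(x) = x^j evaluated at points x."""
    return np.vander(np.asarray(x, float), deg + 1, increasing=True)

def dPhi(x):
    """Derivatives d(phi_j)/dx = j * x^(j-1)."""
    x = np.asarray(x, float)
    cols = [np.zeros_like(x)] + [j * x ** (j - 1) for j in range(1, deg + 1)]
    return np.stack(cols, axis=1)

def f(x):  return np.sin(2 * np.asarray(x))      # test function (assumed)
def df(x): return 2 * np.cos(2 * np.asarray(x))  # its observed gradient

x = np.linspace(-1, 1, 5)                        # only five observation points
# Gradient-enhanced least squares: value rows AND gradient rows in one system.
A = np.vstack([Phi(x), dPhi(x)])
b = np.concatenate([f(x), df(x)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

xt = np.linspace(-1, 1, 200)
print("coefficients:", np.round(coef, 3))
print("max |error|  :", np.abs(Phi(xt) @ coef - f(xt)).max().round(4))
```

The same doubling of information per sample is what makes gradient-enhanced kriging and RBF attractive when adjoint gradients are cheap to obtain.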

Journal ArticleDOI
TL;DR: According to the present article, there is ample scope for extended research toward realizing the optimal design of the disc brake system by truly emulating all the relevant practical situations.
Abstract: The disc brake system is one of the most critical components in a vehicle, and is always exposed to nonlinear transient thermoelastic conditions. Optimal design of a brake system to suit the heat transfer, weight and packaging requirements is an ongoing challenge. Substantial research has been carried out, and is underway, to address the diverse issues related to the thermal, mechanical and structural performance of automobile disc brakes. With the extensive application of numerical tools and techniques, the analyses involved have become easier and more effective. The present article provides an exhaustive review of the numerical and experimental studies reported so far on the analysis and design of solid and ventilated disc brakes. Directions for future work are also described. The review reveals that there is ample scope for extended research toward realizing the optimal design of the disc brake system by truly emulating all the relevant practical situations.

Journal ArticleDOI
TL;DR: Hierarchical collocation is used to approximate the numerical solution of parametric models; it can be interfaced with no particular effort to existing third-party simulation software, making the proposed approach particularly appealing and well adapted to practical engineering problems of industrial interest.
Abstract: We discuss the use of hierarchical collocation to approximate the numerical solution of parametric models. With respect to traditional projection-based reduced order modeling, the use of collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This makes it possible to recover the low-dimensional structure of the parametric solution subspace while also learning the functional dependency on the parameters in explicit form. A sparse low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. Non-intrusiveness makes this approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms. As we show in the various examples presented in the paper, the method can be interfaced with no particular effort to existing third-party simulation software, making the proposed approach particularly appealing and well adapted to practical engineering problems of industrial interest.
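The non-intrusive character is the key point: the reduced model only ever calls the deterministic solver as a black box at sampled parameter values. The one-parameter sketch below illustrates this idea with Chebyshev collocation of a stand-in "solver"; the hierarchical, sparse adaptive machinery of the paper is replaced here by a fixed node set for brevity, and the solver and parameter range are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solver(mu):
    """Stand-in for an expensive third-party deterministic solver (assumed)."""
    x = np.linspace(0.0, 1.0, 200)
    return np.exp(-mu * x) * np.sin(np.pi * x)   # "solution field" for parameter mu

# Parameter domain and collocation nodes (Chebyshev points mapped to [0.5, 5]).
a, b, n_nodes = 0.5, 5.0, 9
ref = C.chebpts1(n_nodes)                        # reference nodes in [-1, 1]
nodes = 0.5 * (ref + 1) * (b - a) + a
snapshots = np.array([solver(m) for m in nodes]) # one black-box call per node

# Fit a Chebyshev interpolant (in the reference coordinate) per spatial DOF:
# the parametric dependency is now available in explicit form.
coefs = C.chebfit(ref, snapshots, n_nodes - 1)   # shape (n_nodes, n_dofs)

def surrogate(mu):
    """Evaluate the parametric solution without calling the solver again."""
    s = 2 * (mu - a) / (b - a) - 1               # map back to [-1, 1]
    return C.chebval(s, coefs)

mu_test = 2.37
err = np.abs(surrogate(mu_test) - solver(mu_test)).max()
print(f"max surrogate error at mu = {mu_test}: {err:.2e}")
```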

Journal ArticleDOI
TL;DR: The recent advances made by the authors' teams in ALE-VMS and ST-VMS computational aerodynamic and fluid–structure interaction (FSI) analysis of wind turbines are described.
Abstract: This is the first part of a two-part article on computer modeling of wind turbines. We describe the recent advances made by our teams in ALE-VMS and ST-VMS computational aerodynamic and fluid–structure interaction (FSI) analysis of wind turbines. The ALE-VMS method is the variational multiscale version of the Arbitrary Lagrangian–Eulerian method. The VMS components are from the residual-based VMS method. The ST-VMS method is the VMS version of the Deforming-Spatial-Domain/Stabilized Space–Time method. The ALE-VMS and ST-VMS serve as the core methods in the computations. They are complemented by special methods that include the ALE-VMS versions for stratified flows, sliding interfaces and weak enforcement of Dirichlet boundary conditions, ST Slip Interface (ST-SI) method, NURBS-based isogeometric analysis, ST/NURBS Mesh Update Method (STNMUM), Kirchhoff–Love shell modeling of wind-turbine structures, and full FSI coupling. The VMS feature of the ALE-VMS and ST-VMS addresses the computational challenges associated with the multiscale nature of the unsteady flow, and the moving-mesh feature of the ALE and ST frameworks enables high-resolution computation near the rotor surface. The ST framework, in a general context, provides higher-order accuracy. The ALE-VMS version for sliding interfaces and the ST-SI enable moving-mesh computation of the spinning rotor. The mesh covering the rotor spins with it, and the sliding interface or the SI between the spinning mesh and the rest of the mesh accurately connects the two sides of the solution. The ST-SI also enables prescribing the fluid velocity at the turbine rotor surface as weakly-enforced Dirichlet boundary condition. The STNMUM enables exact representation of the mesh rotation. The analysis cases reported include both the horizontal-axis and vertical-axis wind turbines, stratified and unstratified flows, standalone wind turbines, wind turbines with tower or support columns, aerodynamic interaction between two wind turbines, and the FSI between the aerodynamics and structural dynamics of wind turbines. Comparisons with experimental data are also included where applicable. The reported cases demonstrate the effectiveness of the ALE-VMS and ST-VMS computational analysis in wind-turbine aerodynamics and FSI.

Journal ArticleDOI
TL;DR: This review provides a comprehensive overview of the existing and emerging time-integration practices used in the operational global NWP and climate industry, where global refers to weather and climate simulations performed on the entire globe.
Abstract: The continuous partial differential equations governing a given physical phenomenon, such as the Navier–Stokes equations describing the fluid motion, must be numerically discretized in space and time in order to obtain a solution otherwise not readily available in closed (i.e., analytic) form. While the overall numerical discretization plays an essential role in the algorithmic efficiency and physically-faithful representation of the solution, the time-integration strategy commonly is one of the main drivers in terms of cost-to-solution (e.g., time- or energy-to-solution), accuracy and numerical stability, thus constituting one of the key building blocks of the computational model. This is especially true in time-critical applications, including numerical weather prediction (NWP), climate simulations and engineering. This review provides a comprehensive overview of the existing and emerging time-integration (also referred to as time-stepping) practices used in the operational global NWP and climate industry, where global refers to weather and climate simulations performed on the entire globe. While there are many flavors of time-integration strategies, in this review we focus on the most widely adopted in NWP and climate centers and we emphasize the reasons why such numerical solutions were adopted. This allows us to make some considerations on future trends in the field such as the need to balance accuracy in time with substantially enhanced time-to-solution and associated implications on energy consumption and running costs. In addition, the potential for the co-design of time-stepping algorithms and underlying high performance computing hardware, a keystone to accelerate the computational performance of future NWP and climate services, is also discussed in the context of the demanding operational requirements of the weather and climate industry.
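The cost/stability trade-off driving these operational choices can be seen on even the simplest stiff model problem. The sketch below, where a scalar ODE with an assumed "fast" coefficient stands in for the fast linear terms of an atmospheric model, contrasts explicit Euler, stable only for small time steps, with the implicit treatment that underlies the semi-implicit schemes favored in NWP.

```python
import numpy as np

# Model problem: dy/dt = -lam * y, with a "fast" (stiff) coefficient (assumed).
lam, y0, T = 50.0, 1.0, 1.0
exact = y0 * np.exp(-lam * T)

for dt in (0.05, 0.001):
    n = int(T / dt)
    y_ex, y_im = y0, y0
    for _ in range(n):
        y_ex = y_ex + dt * (-lam * y_ex)        # explicit (forward) Euler
        y_im = y_im / (1 + lam * dt)            # implicit (backward) Euler
    print(f"dt={dt:6.3f}  explicit={y_ex: .3e}  implicit={y_im: .3e}"
          f"  exact={exact: .3e}")

# With dt=0.05 the explicit step violates the stability limit (lam*dt > 2)
# and blows up, while the implicit step stays bounded at any dt: this is why
# fast linear terms (e.g., gravity and acoustic waves) are treated
# (semi-)implicitly in operational models, at the cost of a solve per step.
```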

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of the current approaches to predict damage and failure of composite laminates at the micro-, meso-, and macro-level, and their application to understand the underlying physical phenomena that govern the mechanical response of thin-ply composites.
Abstract: This paper provides an overview of the current approaches to predict damage and failure of composite laminates at the micro- (constituent), meso- (ply), and macro- (structural) levels, and their application to understand the underlying physical phenomena that govern the mechanical response of thin-ply composites. In this context, computational micro-mechanics is used in the analysis of ply thickness effects, with a focus on the prediction of in-situ strengths. At the mesoscale, to account for ply thickness effects, theoretical results are presented related to the implementation of failure criteria that account for the in-situ strengths. Finally, at the structural level, analytical and computational fracture approaches are proposed to predict the strength of composite structures made of thin plies. While computational mechanics models at the lower (micro- and meso-) length-scales already show a sufficient level of maturity, the strength prediction of thin-ply composite structures subjected to complex loading scenarios is still a challenge. The former (micro- and meso-models) already provide interesting bases for in-silico material design and virtual testing procedures, with most current and future research focused on reducing the computational cost of such strategies. At the latter (structural) level, analytical Finite Fracture Mechanics models, when closed-form solutions can be used, or the phase field approach to brittle fracture seem to be the most promising techniques to predict structural failure of thin-ply composite structures.

Journal ArticleDOI
TL;DR: This paper summarizes and analyses the various soft computing and feature extraction techniques used for LULC classification and change detection and concludes that the broad usage of multispectral remote sensing images, object-based change detection, neural networks and various levels of image fusion methods offer more potential in LULC monitoring.
Abstract: Multispectral remote sensing images are the primary source in land use and land cover (LULC) monitoring, which is achieved by LULC classification and LULC change detection. Change detection in LULC includes the detection of water bodies, forest fires, forest degradation, agricultural area monitoring, etc. The various change detection and LULC classification methods have their own advantages and disadvantages, and no single method is optimal and applicable to all cases. This paper summarizes and analyses the various soft computing and feature extraction techniques used for LULC classification and change detection. Based on the average error rate, the performances of the different soft computing techniques are evaluated. The broad usage of multispectral remote sensing images, object-based change detection, neural networks and various levels of image fusion methods offer more potential in LULC monitoring.

Journal ArticleDOI
TL;DR: Most deep learning tools are moving closer to the mobile terminal and the role of ASICs is gradually emerging; it is believed that future deep learning applications will be inseparable from ASIC support.
Abstract: With the rapid development of deep learning in various fields, big companies and research teams have developed independent and unique tools. This paper collects 18 common deep learning frameworks and libraries (Caffe, Caffe2, TensorFlow, Theano (including Keras, Lasagne and Blocks), MXNet, CNTK, Torch, PyTorch, Pylearn2, Scikit-learn, Matlab (including MatConvNet, Matlab Deep Learning and the Deep Learning Toolbox), Chainer and Deeplearning4j) and presents a large amount of benchmarking data. In addition, we give overall scores for the current eight mainstream deep learning frameworks from six aspects (model design ability, interface property, deployment ability, performance, framework design and prospects for development). Based on our overview, deep learning researchers can choose the appropriate development tools according to the evaluation criteria. By summarizing the 18 deep learning frameworks and libraries, we have found that most of the deep learning tools are moving closer to the mobile terminal, and the role of ASICs is gradually emerging. It is believed that future deep learning applications will be inseparable from ASIC support.

Journal ArticleDOI
TL;DR: The theoretical background of conjugate heat transfer is given, along with directions to its application envelope, to help the researchers and scientists who work in this area to progress in their research.
Abstract: This paper documents all the important works in the field of conjugate heat transfer study. Theoretical and applied aspects of conjugate heat transfer analysis are reviewed and summarized to a great extent in the light of the available literature in this field. Over the years, conjugate heat transfer analysis has evolved as the most effective method of heat transfer study. In this approach, the mutual effects of thermal conduction in the solid and convection in the fluid are considered in the analysis. Various analytical and computational studies are reported in this field. A comprehension of the analytical as well as computational studies of this field will help the researchers and scientists who work in this area to progress in their research; that is the focus of this review. Early analytical studies related to conjugate heat transfer are reviewed and summarized in the first part of this paper. The background of theoretical studies is discussed briefly. More importance is given to summarizing the computational studies in this field. The different coupling techniques proposed to date are presented in great detail. Important studies narrating the application of conjugate heat transfer analysis are also discussed under separate headings. Hence, the present paper gives a complete theoretical background of conjugate heat transfer along with directions to its application envelope.
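The partitioned coupling techniques reviewed can be illustrated with the simplest conjugate configuration: two 1D layers exchanging heat at an interface, iterated with a Dirichlet-Neumann scheme and under-relaxation. The material values, layer thicknesses and relaxation factor below are illustrative assumptions, not data from a reviewed study.

```python
# Dirichlet-Neumann coupling of two steady 1-D conduction domains.
# Domain 1: thickness L1, conductivity k1, outer temperature T_left fixed.
# Domain 2: thickness L2, conductivity k2, outer temperature T_right fixed.
# Interface condition: continuity of temperature and of heat flux.
k1, L1, T_left = 20.0, 0.1, 400.0      # e.g. a solid wall (assumed values)
k2, L2, T_right = 2.0, 0.05, 300.0     # e.g. a stagnant fluid layer (assumed)

Ti, relax = 350.0, 0.2                 # interface guess and under-relaxation
for it in range(1, 100):
    q = k1 * (T_left - Ti) / L1        # domain 1 (Dirichlet): flux to interface
    Ti_new = T_right + q * L2 / k2     # domain 2 (Neumann): returned temperature
    if abs(Ti_new - Ti) < 1e-9:
        break
    Ti += relax * (Ti_new - Ti)        # relaxed interface update

T_exact = (k1 / L1 * T_left + k2 / L2 * T_right) / (k1 / L1 + k2 / L2)
print(f"{it} iterations: T_interface = {Ti:.4f} K (exact {T_exact:.4f} K)")
```

The admissible relaxation factor is dictated by the ratio of the two domains' conductances, which is precisely the kind of stability consideration the reviewed coupling techniques are designed to handle.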

Journal ArticleDOI
TL;DR: This study reviews several image processing methods in the feature extraction of leaves and discusses certain machine learning classifiers for an analysis of different species of leaves.
Abstract: Plants are fundamentally important to life. Key research areas in plant science include plant species identification, weed classification using hyper spectral images, monitoring plant health and tracing leaf growth, and the semantic interpretation of leaf information. Botanists easily identify plant species by discriminating between the shape of the leaf, tip, base, leaf margin and leaf vein, as well as the texture of the leaf and the arrangement of leaflets of compound leaves. Because of the increasing demand for experts and calls for biodiversity, there is a need for intelligent systems that recognize and characterize leaves so as to scrutinize a particular species, the diseases that affect them, the pattern of leaf growth, and so on. We review several image processing methods in the feature extraction of leaves, given that feature extraction is a crucial technique in computer vision. As computers cannot comprehend images, they are required to be converted into features by individually analyzing image shapes, colors, textures and moments. Images that look the same may deviate in terms of geometric and photometric variations. In our study, we also discuss certain machine learning classifiers for an analysis of different species of leaves.
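Given the emphasis on shape, texture and moments, the sketch below computes two classic leaf descriptors from a synthetic binary leaf mask: geometric central moments (the ingredients of Hu's invariants) and a simple texture proxy. It is a self-contained illustration of the feature-extraction step, not a method from a specific reviewed paper; the elliptic mask and noise model are assumptions.

```python
import numpy as np

def central_moment(mask, p, q):
    """Central image moment mu_pq of a binary shape."""
    ys, xs = np.nonzero(mask)
    xbar, ybar = xs.mean(), ys.mean()
    return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()

# Synthetic elliptic "leaf" mask (semi-axes 40 and 20 pixels).
h = w = 128
yy, xx = np.mgrid[:h, :w]
mask = ((xx - 64) / 40.0) ** 2 + ((yy - 64) / 20.0) ** 2 <= 1

area = mask.sum()
mu20 = central_moment(mask, 2, 0) / area     # normalized second moments
mu02 = central_moment(mask, 0, 2) / area
elongation = mu20 / mu02                     # ~4 for a 2:1 ellipse

# Texture proxy: mean local intensity difference inside the leaf region.
rng = np.random.default_rng(0)
gray = np.where(mask, 0.5 + 0.1 * rng.standard_normal((h, w)), 0.0)
texture = np.abs(np.diff(gray, axis=1))[mask[:, 1:]].mean()

print(f"area={area}, elongation={elongation:.2f}, texture={texture:.4f}")
```

Descriptors of this kind, stacked into a feature vector, are what the reviewed machine learning classifiers consume to discriminate between species.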

Journal ArticleDOI
TL;DR: This paper presents a review of finite element approaches to cracking focusing on the development and use of tracking algorithms, and the most utilised criteria for the selection of the crack propagation direction are summarized.
Abstract: The importance of crack propagation in the structural behaviour of concrete and masonry structures has led to the development of a wide range of finite element methods for crack simulation. A common standpoint in many of them is the use of tracking algorithms, which identify and designate the location of cracks within the analysed structure. In this way, the crack modelling techniques, smeared or discrete, are applied only to a restricted part of the discretized domain. This paper presents a review of finite element approaches to cracking focusing on the development and use of tracking algorithms. These are presented in four categories according to the information necessary for the definition and storage of the crack-path. In addition to that, the most utilised criteria for the selection of the crack propagation direction are summarized. The various algorithmic issues involved in the development of a tracking algorithm are discussed through the presentation of a local tracking algorithm based on the smeared crack approach. Challenges such as the modelling of arbitrary and multiple cracks propagating towards more than one direction, as well as multi-directional and intersecting cracking, are detailed. The presented numerical model is applied to the analysis of small- and large-scale masonry and concrete structures under monotonic and cyclic loading.

Journal ArticleDOI
TL;DR: Numerical methods are suggested for developing P–I diagrams for new structural elements, helping to better understand the effect of blast loads on structures and to design better against specific threats.
Abstract: In recent years, many studies have been conducted by governmental and nongovernmental organizations across the world in an attempt to better understand the effect of blast loads on structures, in order to design better against specific threats. The pressure–impulse (P–I) diagram is one of the easiest methods for describing a structure's response to blast load. Therefore, this paper presents a comprehensive overview of P–I diagrams for RC structures under blast loads. The effects of different parameters on the P–I diagram are examined. Three major methods to develop P–I diagrams for various damage criteria are discussed in this research. Analytical methods are easy and simple to use but are limited in the kinds of failure modes they can capture, and are unsuitable for complex geometries and irregular pulse-load shapes. The experimental method is a good way to study the structural response to blast loads; however, it requires special and expensive instrumentation and is not possible in many cases due to safety and environmental considerations. Numerical methods, by contrast, are capable of incorporating complex features of material behaviour, geometry and boundary conditions. Hence, numerical methods are suggested for developing P–I diagrams for new structural elements.
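Numerically, a P–I curve is often traced with a single-degree-of-freedom (SDOF) idealization: for each load duration, find the peak pressure at which the peak displacement just reaches the damage threshold. A hedged sketch of this procedure follows; the SDOF properties, the triangular pulse shape and the displacement threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

m, k = 100.0, 4.0e5          # SDOF mass [kg] and stiffness [N/m] (assumed)
x_crit = 0.05                # damage threshold: peak displacement [m] (assumed)

def peak_response(P, td, dt=1e-4, T=0.5):
    """Peak displacement under a triangular pulse F(t) = P*(1 - t/td)."""
    x, v, x_max = 0.0, 0.0, 0.0
    for i in range(int(T / dt)):          # semi-implicit Euler time stepping
        t = i * dt
        F = P * (1 - t / td) if t < td else 0.0
        v += dt * (F - k * x) / m
        x += dt * v
        x_max = max(x_max, abs(x))
    return x_max

def critical_pressure(td, lo=1e2, hi=1e8):
    """Bisect for the peak pressure that just causes x_max = x_crit."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if peak_response(mid, td) < x_crit else (lo, mid)
    return 0.5 * (lo + hi)

for td in (0.001, 0.005, 0.02, 0.1):      # sweep the load duration [s]
    P = critical_pressure(td)
    print(f"td={td:5.3f} s  P={P:10.1f} N  I={0.5 * P * td:9.2f} N*s")
```

Short durations reproduce the impulsive asymptote (constant critical impulse) and long durations the quasi-static asymptote (constant critical pressure), the two limbs that bound every P–I diagram.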

Journal ArticleDOI
TL;DR: This work considers constant- and variable-coefficient, second-order eigenvalue problems discretized through the (isogeometric) Galerkin method based on B-splines of degree p and smoothness $$C^k$$, and predicts the existence of $$p-k$$ spectral branches and the divergence to infinity with respect to p of the largest optical branch in the case of classical finite element analysis.
Abstract: Symbol-based analysis of finite element and isogeometric B-spline discretizations of eigenvalue problems: Exposition and review
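The flavour of these symbol-based results can be reproduced numerically on the 1D Laplacian eigenvalue problem: with classical $$C^0$$ finite elements of degree p the normalized discrete spectrum splits into p branches, the highest of which overestimates the exact frequencies. A minimal sketch for linear elements (p = 1, a single branch), an illustration assembled here for illustration rather than code from the paper:

```python
import numpy as np
from scipy.linalg import eigh

# Linear (p = 1) FEM for -u'' = lambda * u on (0, pi), u(0) = u(pi) = 0.
n = 200                                 # interior nodes
h = np.pi / (n + 1)
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = h * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / 6

lam_h = eigh(K, M, eigvals_only=True)   # discrete eigenvalues (ascending)
lam = np.arange(1, n + 1) ** 2          # exact eigenvalues k^2

# Normalized spectrum ("branch"): ratio of discrete to exact eigenvalues.
ratio = lam_h / lam
print("first eigenvalues, discrete vs exact:", np.round(lam_h[:3], 4), lam[:3])
print(f"branch ratio: {ratio[0]:.4f} (low modes) .. {ratio[-1]:.4f} (highest)")
# For p = 1 the branch tops out near 12/pi^2 ~ 1.216: the upper part of the
# discrete spectrum overestimates the exact one, an effect that grows with p
# in classical FEA but is tamed by the smoother C^(p-1) B-spline spaces.
```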

Journal ArticleDOI
TL;DR: Heterogeneous parallel computing technology is the most powerful acceleration method for hydrological model parameter calibration, but research on the acceleration of SCE-UA and NSGA-II based on heterogeneous parallel computing techniques is rare and should be a focus in the future.
Abstract: In this paper, computer-aided numerical methods for hydrological model calibration are reviewed. The content includes a review of watershed hydrological models (data-driven models, conceptual models, and distributed models), a review of model calibration methods (manual calibration, single-objective automatic calibration, multi-objective automatic calibration, objective functions, termination criteria, and data utilized for calibration), and a review of parallel-computing-accelerated model calibration (multi-node computer clusters, multi-core CPUs, many-core GPUs, and heterogeneous parallel computing). Recent developments and the state of the art are also analyzed. Three conclusions can be drawn: (1) Nowadays, different types of hydrological models have their own application fields and perform very well; the distributed hydrological model is the direction of development and has a good future. (2) Computer-aided automatic hydrological model calibration has become the mainstream; single-objective optimization methods such as SCE-UA and multi-objective optimization methods such as NSGA-II are very well suited to model parameter calibration. (3) Heterogeneous parallel computing technology is the most powerful acceleration method for hydrological model parameter calibration; however, research on the acceleration of SCE-UA and NSGA-II based on heterogeneous parallel computing techniques is rare and should be a focus in the future.
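The single-objective calibration loop is easy to sketch: run the hydrological model for candidate parameters and score the simulated discharge with an objective such as the Nash-Sutcliffe efficiency (NSE). Below, a toy linear-reservoir model and uniform random sampling stand in for a real watershed model and SCE-UA; both substitutions, and all parameter values, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_reservoir(rain, k_rec):
    """Toy rainfall-runoff model: storage S' = rain - k_rec*S, discharge q = k_rec*S."""
    S, q = 0.0, np.empty_like(rain)
    for i, r in enumerate(rain):
        S += r - k_rec * S
        q[i] = k_rec * S
    return q

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 means a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "observations" generated from a known parameter plus noise.
rain = rng.exponential(2.0, 365) * (rng.random(365) < 0.3)
obs = linear_reservoir(rain, 0.15) + 0.02 * rng.standard_normal(365)

# Calibration by random sampling (a stand-in for SCE-UA / NSGA-II).
best_k, best_nse = None, -np.inf
for k_try in rng.uniform(0.01, 0.9, 2000):
    score = nse(linear_reservoir(rain, k_try), obs)
    if score > best_nse:
        best_k, best_nse = k_try, score
print(f"calibrated k = {best_k:.3f} (true 0.15), NSE = {best_nse:.3f}")
```

Each candidate evaluation is an independent model run, which is exactly why the calibration loop parallelizes so naturally onto clusters, multi-core CPUs and GPUs.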

Journal ArticleDOI
TL;DR: The complete RADO procedure, i.e., uncertainty modeling, establishment of uncertainty quantification approach as well as robust optimization subject to reliability constraints under uncertainty, is elaborated and a brief survey of the main applications of RADO in the aerodynamic design of transonic flow and natural-laminar-flow is presented.
Abstract: The ever-increasing demands for risk-free, resource-efficient and environment-friendly air vehicles motivate the development of advanced design methodologies. As a particularly promising design methodology considering uncertainties, robust aerodynamic design optimization (RADO) is capable of providing robust and reliable aerodynamic configurations and reducing cost under probable uncertainties across the flight envelope and the whole life cycle of an air vehicle. However, the major challenges, including the high computational cost with increasing dimensionality of uncertainty and the complexity of the RADO procedure, hinder the wider application of RADO. In this paper, the complete RADO procedure, i.e., uncertainty modeling, establishment of an uncertainty quantification approach, as well as robust optimization subject to reliability constraints under uncertainty, is elaborated. Systematic reviews of RADO methodology, including uncertainty modeling methods, comprehensive uncertainty quantification approaches, and robust optimization methods, are provided. Further, this paper presents a brief survey of the main applications of RADO in the aerodynamic design of transonic flow and natural-laminar-flow, and discusses the application prospects of RADO methodology for air vehicles. The detailed statement of the paper indicates the intention, i.e., to present the state of the art in RADO methodology, to highlight the key techniques and primary challenges in RADO, and to provide beneficial directions for future research.

Journal ArticleDOI
TL;DR: A thorough review is given of the application to internal flows of a promising family of approaches that aim to find a compromise between cost and accuracy, hybrid RANS–LES methods; hybrid approaches have been shown to offer significant benefits to industrial CFD.
Abstract: When scale-resolving simulation approaches are employed for the simulation of turbulent flow, the computational cost can often be prohibitive. This is particularly true for internal wall-bounded flows, including flows of industrial relevance which may involve both high Reynolds number and geometrical complexity. Modelling the turbulence-induced stresses (at all scales) has proven to lack the requisite accuracy in many situations. In this work we review a promising family of approaches which aim to find a compromise between cost and accuracy: hybrid RANS–LES methods. We place particular emphasis on the emergence of embedded large eddy simulation. These approaches are summarised and key features relevant to internal flows are highlighted. A thorough review of the application of these methods to internal flows is given, where hybrid approaches have been shown to offer significant benefits to industrial CFD (relative to an empirical broadband modelling of turbulence). This paper concludes by providing a cost analysis and a discussion of the emerging novel use-modalities for hybrid RANS–LES methods in industrial CFD, such as automated embedded simulation and multi-dimensional coupling.

Journal ArticleDOI
TL;DR: A CAD-integrated template-based modeling framework is presented that streamlines the construction of solid non-uniform rational B-spline vascular models for performing isogeometric finite element analysis.
Abstract: We review the literature on patient-specific vascular modeling, with particular attention paid to three-dimensional arterial networks. Patient-specific vascular modeling typically involves three main steps: image processing, analysis-suitable model generation, and computational analysis. The analysis-suitable model generation techniques currently utilized suffer from several difficulties and complications, which often necessitate manual intervention and crude approximations. Because the modeling pipeline spans multiple disciplines, the benefits of integrating a computer-aided design (CAD) component for the geometric modeling tasks have been largely overlooked. Upon completion of our review, we adopt this philosophy and present a CAD-integrated template-based modeling framework that streamlines the construction of solid non-uniform rational B-spline vascular models for performing isogeometric finite element analysis. Examples of arterial models for mouse and human circles of Willis and a porcine coronary tree are presented.

Journal ArticleDOI
TL;DR: This review article is intended to identify, highlight and summarize research works on topics that are of substantial interest in the field of computational biomechanics in which meshfree or particle methods have been employed for analysis, simulation or/and modeling of biological systems such as soft matters, cells, biological soft and hard tissues and organs.
Abstract: The use of meshfree and particle methods in the field of bioengineering and biomechanics has significantly increased. This may be attributed to their unique abilities to overcome most of the inherent limitations of mesh-based methods in dealing with problems involving large deformation and complex geometry that are common in bioengineering and computational biomechanics in particular. This review article is intended to identify, highlight and summarize research works on topics that are of substantial interest in the field of computational biomechanics in which meshfree or particle methods have been employed for analysis, simulation or/and modeling of biological systems such as soft matters, cells, biological soft and hard tissues and organs. We also anticipate that this review will serve as a useful resource and guide to researchers who intend to extend their work into these research areas. This review article includes 333 references.

Journal ArticleDOI
TL;DR: It is observed that work on writer identification systems achieving good accuracy rates for Indic scripts is limited compared to non-Indic scripts, which truly presents a future direction.
Abstract: Writer identification is a challenging problem in the field of pattern recognition and reflects advanced insights into handwriting research. It is the process of determining the author or writer of a text by matching it against a training database. It is an exacting task because the writing style of an individual is distinct from others' because of unique intrinsic characteristics, and is different even if the same writer writes that text with the same pen the next time. It is concerned with the writing styles, feelings, perception, behavior and the brain of an individual, and it is one of the newer applications of biometric identification. Biometric identification is the branch of computer science that deals with identifying an individual from a group using unique identifiers such as fingerprints, the retina, handwriting and signatures; it is a term used for body measurements and calculations. This paper presents a comprehensive and transparent panorama of the work done on writer identification systems for different Indic and non-Indic scripts, and a widespread view of this peculiar research area. The structure of the paper comprises the introduction, motivation for the work, background, sources of information, schemes, process, reported works, synthesis analysis, a study of features and classifiers for writer identification, and finally the conclusion and future directions. The main focus of this paper is to present, in a systematic way, the reported works on writer identification systems for Indic scripts such as Bengali, Gujarati, Gurumukhi, Kannada, Malayalam, Oriya, Tamil and Telugu, and non-Indic scripts such as Arabic, Chinese, French, Persian and Roman, and finally to expose the synthesis analysis based on the findings. This study gives cognizance and beneficial assistance to novice researchers in this field by providing, in a nutshell, studies of the various feature extraction methods and classification techniques required for writer identification for both Indic and non-Indic scripts. It is observed that work on writer identification systems achieving good accuracy rates for Indic scripts is limited compared to non-Indic scripts, which truly presents a future direction.