
Showing papers in "Mathematical Problems in Engineering in 2016"


Journal ArticleDOI
TL;DR: A novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), attempts to preserve the sparse representation structure of the data while maximizing between-class separability, and can be regarded as a combination of manifold learning and sparse representation.
Abstract: Dimensionality reduction is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in dimensionality reduction. In this paper, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed. SPDP, which attempts to preserve the sparse representation structure of the data and maximize the between-class separability simultaneously, can be regarded as a combination of manifold learning and sparse representation. Specifically, SPDP first creates a concatenated dictionary from classwise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Second, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDP integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the feasibility and effectiveness of the proposed approach.
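
As a rough, heavily simplified sketch of the pipeline shape the abstract describes (classwise PCA dictionary, least-squares codes, generalized eigenvalue problem), the snippet below is illustrative only: the scatter definitions, the local between-class function, and names such as `spdp` are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of an SPDP-style projection:
# classwise PCA dictionary -> least-squares codes -> generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def spdp(X, y, n_pca=5, n_dims=2):
    """X: (n_samples, n_features), y: class labels. Returns a projection matrix."""
    classes = np.unique(y)
    # 1) concatenated dictionary from classwise PCA bases
    atoms = []
    for c in classes:
        Xc = X[y == c] - X[y == c].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # top principal directions of class c
        atoms.append(Vt[:n_pca])
    D = np.vstack(atoms).T                                  # (n_features, n_atoms)
    # 2) representation coefficients by least squares, as in the abstract
    S = np.linalg.lstsq(D, X.T, rcond=None)[0].T            # (n_samples, n_atoms)
    Xr = S @ D.T                                            # reconstruction from the dictionary
    Sw = (X - Xr).T @ (X - Xr)                              # reconstruction scatter (stand-in)
    # 3) between-class scatter (a global stand-in for the paper's local function)
    mu = X.mean(axis=0)
    Sb = sum((y == c).sum()
             * np.outer(X[y == c].mean(axis=0) - mu, X[y == c].mean(axis=0) - mu)
             for c in classes)
    # 4) generalized eigenvalue problem: maximize between-class vs. reconstruction scatter
    w, V = eigh(Sb, Sw + 1e-6 * np.eye(X.shape[1]))
    return V[:, np.argsort(w)[::-1][:n_dims]]               # top eigenvectors as projections
```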

120 citations


Journal ArticleDOI
TL;DR: In this paper, a channel coordination and quantity discount policy between a vendor and a buyer under a single-setup multi-delivery (SSMD) strategy was proposed to reduce the joint total cost among supply chain players.
Abstract: This paper illustrates channel coordination and quantity discounts between a vendor and a buyer under a single-setup multi-delivery (SSMD) strategy to reduce the joint total cost among supply chain players. Coordination is beneficial when the vendor asks the buyer to change the order quantity so that the vendor benefits from lower inventory costs. After accepting the buyer's condition, the vendor compensates the buyer for the increased inventory cost and allows additional savings by offering a quantity discount. Centralized decision making is examined to assess the effect of this strategy in the presence of backorders for the buyer and inspection costs for the vendor. The quantity discount strategy, with variable backorders and inspections, can provide more savings for all players of the supply chain. Numerical examples, sensitivity analysis, and graphical representations are given to illustrate the additional savings over the existing literature and to compare several demand values.

108 citations


Journal ArticleDOI
TL;DR: This review sheds light on the practical application of fractional calculus to deformation analysis of circular tunnels, rheological settlement of subgrade, and related loess research, in view of the achievements acquired in geotechnical engineering.
Abstract: Over the past couple of decades, fractional calculus, as a new mathematical tool for addressing a number of tough problems, has been gaining continually increasing interest in diverse scientific fields, including geotechnical engineering, due primarily to geotechnical rheology phenomena. Unlike classical constitutive models, whose simulations gradually fail to achieve reasonable accuracy, fractional derivative models capture hereditary phenomena with long memory. The fractional derivative model has also proven to be one of the most effective and accurate approaches for describing rheological phenomena. In relation to this, an overview is first provided of model structure and parameter determination, together with application cases based on fractional calculus. Furthermore, this review sheds light on the practical application of deformation analysis of circular tunnels, rheological settlement of subgrade, and related loess research, in view of the achievements acquired in geotechnical engineering. Finally, concluding remarks and important directions for future investigation are pointed out.
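
For readers unfamiliar with the operator behind such models, the snippet below is a minimal numerical sketch of the Grünwald-Letnikov approximation of a fractional derivative, the kind of operator fractional rheological (creep) models are built on; the function name and grid are illustrative, not taken from the review.

```python
# Minimal sketch: Grünwald-Letnikov approximation of a fractional derivative of order alpha.
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """f_vals: samples of f on a uniform grid with step h; returns D^alpha f at each grid point."""
    n = len(f_vals)
    # coefficients via the standard recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1)/k)
    c = np.ones(n)
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.zeros(n)
    for i in range(n):
        out[i] = np.dot(c[: i + 1], f_vals[i::-1]) / h**alpha
    return out

# example: D^0.5 of f(t) = t on [0, 1]; the exact result is 2*sqrt(t/pi)
t = np.linspace(0, 1, 201)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
```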

100 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive fuzzy multicriteria decision-making (MCDM) approach for green supplier selection and evaluation, using both economic and environmental criteria, was proposed, in which a fuzzy analytic hierarchy process was employed to determine the importance weights of the criteria in a vague environment.
Abstract: Due to the challenge of rising public awareness of environmental issues and governmental regulations, green supply chain management (SCM) has become an important issue for companies seeking environmental sustainability. Supplier selection is one of the key operational tasks necessary to construct a green SCM. To select the most suitable suppliers, many economic and environmental criteria must be considered in the decision process. Although numerous studies have used economic criteria such as cost, quality, and lead time in the supplier selection process, only some studies have taken into account environmental issues. This study proposes a comprehensive fuzzy multicriteria decision-making (MCDM) approach for green supplier selection and evaluation, using both economic and environmental criteria. In the proposed approach, a fuzzy analytic hierarchy process (AHP) is employed to determine the importance weights of the criteria in a vague environment. In addition, a fuzzy technique for order preference by similarity to ideal solution (TOPSIS) is used to evaluate and rank the potential suppliers. Finally, a case study in the Luminance Enhancement Film (LEF) industry is presented to illustrate the applicability and efficiency of the proposed method.
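
The ranking step can be illustrated with a compact crisp TOPSIS sketch; the paper's fuzzy AHP/fuzzy TOPSIS generalizes this with triangular fuzzy numbers, and the weights and supplier data below are placeholders, not values from the case study.

```python
# Compact sketch of the (crisp) TOPSIS ranking step that fuzzy TOPSIS generalizes.
# Rows of the decision matrix = suppliers, columns = criteria; weights stand in for AHP output.
import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    """benefit_criteria: boolean array, True if larger is better for that criterion."""
    X = np.asarray(decision_matrix, dtype=float)
    V = weights * X / np.linalg.norm(X, axis=0)          # vector-normalize, then weight
    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))   # positive ideal solution
    anti = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))    # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness coefficient; rank descending

# illustrative data: 3 suppliers, criteria = [cost, quality, green score]
scores = topsis([[100, 0.8, 0.6], [120, 0.9, 0.7], [90, 0.7, 0.9]],
                weights=np.array([0.4, 0.35, 0.25]),
                benefit_criteria=np.array([False, True, True]))
```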

98 citations


Journal ArticleDOI
TL;DR: An intention-aware decision-making algorithm is proposed for uncontrolled intersection scenarios; it can lead an autonomous vehicle to pass through uncontrolled intersections safely and efficiently.
Abstract: Autonomous vehicles need to perform socially accepted behaviors in complex urban scenarios that include human-driven vehicles with uncertain intentions. This leads to many difficult decision-making problems, such as deciding on a lane change maneuver and generating policies to pass through intersections. In this paper, we propose an intention-aware decision-making algorithm to solve this challenging problem in an uncontrolled intersection scenario. In order to consider uncertain intentions, we first develop a continuous hidden Markov model to predict both the high-level motion intention (e.g., turn right, turn left, and go straight) and the low-level interaction intentions (e.g., yield status for related vehicles). Then a partially observable Markov decision process (POMDP) is built to model the general decision-making framework. Due to the difficulty of solving POMDPs, we use proper assumptions and approximations to simplify this problem. A human-like policy generation mechanism is used to generate the possible candidates. A future motion model for human-driven vehicles is proposed for the state transition process, and the intention is updated at each prediction time step. The reward function, which considers driving safety, traffic laws, time efficiency, and so forth, is designed to calculate the optimal policy. Finally, our method is evaluated in simulation with PreScan software and a driving simulator. The experiments show that our method could lead an autonomous vehicle to pass through uncontrolled intersections safely and efficiently.
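
The intention-tracking idea can be illustrated with a toy discrete Bayesian update over candidate intentions; the paper itself uses a continuous HMM plus a POMDP, so the intentions and likelihood values below are purely illustrative assumptions.

```python
# Toy sketch of updating a belief over another vehicle's intention from an observation.
import numpy as np

intentions = ["turn_left", "go_straight", "turn_right"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])            # prior over the other vehicle's intention

def update_belief(belief, likelihood):
    """likelihood[i] = p(observation | intention i); returns the normalized posterior."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# observation: the vehicle decelerates while drifting left -> most consistent with a left turn
belief = update_belief(belief, likelihood=np.array([0.6, 0.3, 0.1]))
print(dict(zip(intentions, belief.round(3))))
```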

97 citations


Journal ArticleDOI
TL;DR: A new personalized ranking algorithm (MERR_SVD) that exploits both explicit and implicit feedback data in order to overcome the defects of prior research.
Abstract: The problem with previous research on personalized ranking is that it focused on either explicit feedback data or implicit feedback data rather than making full use of the information in the dataset. Until now, no personalized ranking algorithm has exploited both explicit and implicit feedback. In order to overcome the defects of prior research, a new personalized ranking algorithm (MERR_SVD) is proposed that exploits both explicit and implicit feedback data.

96 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a review of mathematical contributions made in the last decade in the field of humanitarian supply chain (HSC) and highlight the potential research areas which require attention of the researchers.
Abstract: In the past decade, the humanitarian supply chain (HSC) has attracted the attention of researchers due to the increasing frequency of disasters. The uncertainty in the time, location, and severity of a disaster during the predisaster phase and the poor condition of available infrastructure during the postdisaster phase make HSC operations difficult to handle. In order to overcome the difficulties during these phases, we need to ensure that HSC operations are designed efficiently to minimize human and economic losses. In recent times, several mathematical optimization techniques and algorithms have been developed to increase the efficiency of HSC operations. These techniques and algorithms developed for the field of HSC motivate the need for a systematic literature review. Owing to the importance of mathematical modelling techniques, this paper presents a review of the mathematical contributions made in the last decade in the field of HSC. A systematic literature review methodology is used for this paper due to its transparent procedure. There are two objectives of this study: the first is to conduct an up-to-date survey of mathematical models developed in the HSC area, and the second is to highlight the potential research areas which require the attention of researchers.

96 citations


Journal ArticleDOI
TL;DR: The proposed approach could overcome the problem of different decomposition scales adopting the same threshold value, effectively filter the noise in the signals, improve the SNR, and reduce the MSE of the output signals.
Abstract: In order to improve denoising performance, this paper introduces the basic principles of wavelet threshold denoising and the structures of traditional threshold functions, and proposes an improved wavelet threshold function and an improved fixed-threshold formula. First, the paper studies the problems existing in traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the wavelet decomposition level to design a new fixed-threshold formula. Finally, the paper uses the hard threshold, soft threshold, Garrote threshold, and improved threshold functions to denoise different signals, and calculates the signal-to-noise ratio (SNR) and mean square error (MSE) obtained with each of these threshold functions after denoising. Theoretical analysis and experimental results show that the proposed approach overcomes the constant deviation of the soft threshold function and the discontinuity of the hard threshold function. It also addresses the problem of different decomposition scales adopting the same threshold value, effectively filters the noise in the signals, improves the SNR, and reduces the MSE of the output signals.
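
A hedged sketch of level-dependent wavelet threshold denoising in the spirit of the abstract is shown below using PyWavelets. The blend-style threshold function with adjustment factor `a` and the log-based level scaling are illustrative assumptions; the paper's exact improved threshold function and fixed-threshold formula differ.

```python
# Hedged sketch of level-dependent wavelet threshold denoising (illustrative, not the paper's formulas).
import numpy as np
import pywt

def improved_threshold(coeffs, thr, a=0.5):
    """Blend of soft and hard thresholding controlled by adjustment factor a (a=1 soft, a=0 hard)."""
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - a * thr, 0.0)
    return np.where(np.abs(coeffs) > thr, shrunk, 0.0)

def denoise(signal, wavelet="db4", level=4, a=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest details
    n = len(signal)
    out = [coeffs[0]]                                         # keep approximation coefficients
    for j, d in enumerate(coeffs[1:], start=1):
        # level-dependent threshold: universal threshold scaled by a log factor (assumed form)
        thr = sigma * np.sqrt(2.0 * np.log(n)) / np.log(j + 1.0)
        out.append(improved_threshold(d, thr, a))
    return pywt.waverec(out, wavelet)[:n]

# usage: denoise a noisy sine
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
denoised = denoise(clean + 0.3 * np.random.randn(t.size))
```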

91 citations


Journal ArticleDOI
TL;DR: An improved version of the MFO algorithm based on a Levy-flight strategy, named LMFO, is proposed; a comparison with ABC, BA, GGSA, DA, PSOGSA, and MFO on 19 unconstrained benchmark functions and 2 constrained engineering design problems demonstrates the superior performance of LMFO.
Abstract: The moth-flame optimization (MFO) algorithm is a novel nature-inspired heuristic paradigm. Its main inspiration is the navigation method of moths in nature called transverse orientation: moths fly at night by maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling in a straight line over long distances. However, these fancy insects are trapped in a spiral path around artificial lights. To address the slow convergence and low precision of the MFO algorithm, an improved version based on a Levy-flight strategy, named LMFO, is proposed. Levy flight can increase the diversity of the population against premature convergence and make the algorithm jump out of local optima more effectively. This approach helps obtain a better trade-off between the exploration and exploitation abilities of MFO, which makes LMFO faster and more robust than MFO. A comparison with ABC, BA, GGSA, DA, PSOGSA, and MFO on 19 unconstrained benchmark functions and 2 constrained engineering design problems demonstrates the superior performance of LMFO.
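
The core ingredient, the Levy-flight perturbation, can be sketched with Mantegna's algorithm as below; the step scale and the way the step is mixed into the MFO position update are illustrative choices, not the paper's exact LMFO update.

```python
# Sketch of a Levy-flight perturbation (Mantegna's algorithm) used to diversify moth positions.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    """Draw one Levy-distributed step of dimension `dim` via Mantegna's algorithm."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_perturb(moth, best_flame, alpha=0.01):
    """Move a moth with a Levy step scaled by its distance to the best flame (illustrative)."""
    return moth + alpha * levy_step(moth.size) * (moth - best_flame)

# usage
moth = np.array([0.2, -1.3, 0.7])
new_moth = levy_perturb(moth, best_flame=np.zeros(3))
```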

91 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used remote sensing and GIS (geographic information system) to generate five thematic layers, lithology, lineament density, topology, slope, and river density, and integrated the multicriteria decision model (MCDM) with C5.0 and CART, respectively, to generate the decision tree with 80 surveyed tube wells divided into four classes on the basis of the yield.
Abstract: Groundwater plays an important role in global climate change and satisfying human needs. In the study, RS (remote sensing) and GIS (geographic information system) were utilized to generate five thematic layers, lithology, lineament density, topology, slope, and river density considered as factors influencing the groundwater potential. Then, the multicriteria decision model (MCDM) was integrated with C5.0 and CART, respectively, to generate the decision tree with 80 surveyed tube wells divided into four classes on the basis of the yield. To test the precision of the decision tree algorithms, the 10-fold cross validation and kappa coefficient were adopted and the average kappa coefficient for C5.0 and CART was 90.45% and 85.09%, respectively. After applying the decision tree to the whole study area, four classes of groundwater potential zones were demarcated. According to the classification result, the four grades of groundwater potential zones, “very good,” “good,” “moderate,” and “poor,” occupy 4.61%, 8.58%, 26.59%, and 60.23%, respectively, with C5.0 algorithm, while occupying the percentages of 4.68%, 10.09%, 26.10%, and 59.13%, respectively, with CART algorithm. Therefore, we can draw the conclusion that C5.0 algorithm is more appropriate than CART for the groundwater potential zone prediction.
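
The CART side of this workflow can be sketched with scikit-learn as below (C5.0 itself is not available there). The well data, attribute columns, and the `study_area_features` placeholder are assumptions standing in for the surveyed tube wells and thematic layers.

```python
# Sketch: train a CART-style decision tree on well attributes and estimate kappa by 10-fold CV.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import cohen_kappa_score

# X: one row per surveyed tube well; columns = lithology, lineament density, topology, slope, river density
# y: yield class 0-3 ("poor" ... "very good"); placeholders below stand in for the 80 wells
rng = np.random.default_rng(0)
X = rng.random((80, 5))
y = np.repeat(np.arange(4), 20)

cart = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=0)   # CART-style tree
pred = cross_val_predict(cart, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print("10-fold kappa:", cohen_kappa_score(y, pred))

# after validation, fit on all wells and classify every raster cell of the study area
cart.fit(X, y)
# potential_map = cart.predict(study_area_features)   # study_area_features: cells x 5 layers (hypothetical)
```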

77 citations


Journal ArticleDOI
TL;DR: The proposed technique provides a computationally efficient and reliable way of copy-move forgery detection that increases the credibility of images in evidence centered applications.
Abstract: Due to the powerful image editing tools images are open to several manipulations; therefore, their authenticity is becoming questionable especially when images have influential power, for example, in a court of law, news reports, and insurance claims. Image forensic techniques determine the integrity of images by applying various high-tech mechanisms developed in the literature. In this paper, the images are analyzed for a particular type of forgery where a region of an image is copied and pasted onto the same image to create a duplication or to conceal some existing objects. To detect the copy-move forgery attack, images are first divided into overlapping square blocks and DCT components are adopted as the block representations. Due to the high dimensional nature of the feature space, Gaussian RBF kernel PCA is applied to achieve the reduced dimensional feature vector representation that also improved the efficiency during the feature matching. Extensive experiments are performed to evaluate the proposed method in comparison to state of the art. The experimental results reveal that the proposed technique precisely determines the copy-move forgery even when the images are contaminated with blurring, noise, and compression and can effectively detect multiple copy-move forgeries. Hence, the proposed technique provides a computationally efficient and reliable way of copy-move forgery detection that increases the credibility of images in evidence centered applications.
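
A hedged sketch of the described pipeline (overlapping blocks, DCT features, Gaussian-kernel PCA, lexicographic sort and neighbor comparison) follows; the block size, thresholds, and matching rule are illustrative and sized for small images, not the paper's exact settings.

```python
# Hedged sketch of block-DCT + kernel-PCA copy-move candidate matching (illustrative settings).
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import KernelPCA

def block_features(img, b=8, step=4):
    """Collect flattened DCT features for overlapping b x b blocks and their locations."""
    feats, locs = [], []
    for i in range(0, img.shape[0] - b + 1, step):
        for j in range(0, img.shape[1] - b + 1, step):
            feats.append(dctn(img[i:i + b, j:j + b].astype(float), norm="ortho").ravel())
            locs.append((i, j))
    return np.array(feats), np.array(locs)

def candidate_matches(img, n_components=16, feat_thresh=1e-2, min_offset=16):
    feats, locs = block_features(img)
    reduced = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3).fit_transform(feats)
    order = np.lexsort(reduced.T[::-1])              # lexicographic sort of reduced features
    pairs = []
    for a, b_ in zip(order[:-1], order[1:]):         # compare only lexicographic neighbors
        if (np.linalg.norm(reduced[a] - reduced[b_]) < feat_thresh and
                np.linalg.norm(locs[a] - locs[b_]) > min_offset):   # skip overlapping blocks
            pairs.append((tuple(locs[a]), tuple(locs[b_])))
    return pairs

# usage on a small grayscale array (real use would load an image instead)
suspect_pairs = candidate_matches(np.random.rand(64, 64))
```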

Journal ArticleDOI
TL;DR: A survey of the compressive sensing idea and prerequisites, together with the commonly used reconstruction methods, for signal processing applications assuming some of the commonly used transformation domains.
Abstract: Compressive sensing has emerged as an area that opens new perspectives in signal acquisition and processing. It appears as an alternative to the traditional sampling theory, endeavoring to reduce the required number of samples for successful signal reconstruction. In practice, compressive sensing aims to provide saving in sensing resources, transmission, and storage capacities and to facilitate signal processing in the circumstances when certain data are unavailable. To that end, compressive sensing relies on the mathematical algorithms solving the problem of data reconstruction from a greatly reduced number of measurements by exploring the properties of sparsity and incoherence. Therefore, this concept includes the optimization procedures aiming to provide the sparsest solution in a suitable representation domain. This work, therefore, offers a survey of the compressive sensing idea and prerequisites, together with the commonly used reconstruction methods. Moreover, the compressive sensing problem formulation is considered in signal processing applications assuming some of the commonly used transformation domains, namely, the Fourier transform domain, the polynomial Fourier transform domain, Hermite transform domain, and combined time-frequency domain.
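
As a minimal, self-contained illustration of the reconstruction problem the survey covers, the sketch below recovers a signal that is sparse in the DCT domain from a few random time samples using orthogonal matching pursuit; the sizes, sparsity, and choice of basis are assumptions for the example.

```python
# Minimal compressive-sensing sketch: sparse-in-DCT signal recovered from random samples via OMP.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

N, M, K = 256, 64, 3                                 # signal length, measurements, sparsity
Psi = idct(np.eye(N), axis=0, norm="ortho")          # columns = orthonormal DCT synthesis atoms
c_true = np.zeros(N)
c_true[[10, 37, 80]] = [1.0, -0.7, 0.4]              # K-sparse coefficient vector
signal = Psi @ c_true

rng = np.random.default_rng(1)
rows = rng.choice(N, size=M, replace=False)          # random time-domain sampling
A = Psi[rows]                                        # sensing matrix acting on sparse coefficients
y = signal[rows]                                     # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(A, y)
reconstructed = Psi @ omp.coef_                      # back to the time domain
print("max reconstruction error:", np.abs(reconstructed - signal).max())
```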

Journal ArticleDOI
TL;DR: In this article, a methodology for supplier selection using Z-numbers is proposed considering information transformation, which includes two parts: one solves the issue of how to convert a Z-number to a classic fuzzy number according to the fuzzy expectation; the other solves the problem of obtaining the optimal priority weight for supplier selection with a genetic algorithm (GA), an efficient and flexible method for calculating the priority weights of the judgement matrix.
Abstract: Supplier selection is a significant issue in multicriteria decision-making (MCDM), which has been heavily studied with classical fuzzy methodologies, but the reliability of the knowledge from domain experts is not efficiently taken into consideration. The Z-number introduced by Zadeh has more power to describe human knowledge under uncertain information, considering both restraint and reliability. In this paper, a methodology for supplier selection using Z-numbers is proposed that considers information transformation. It includes two parts: one solves the issue of how to convert a Z-number to a classic fuzzy number according to the fuzzy expectation; the other solves the problem of how to obtain the optimal priority weight for supplier selection with a genetic algorithm (GA), which is an efficient and flexible method for calculating the priority weights of the judgement matrix. Finally, an example of supplier selection is used to illustrate the effectiveness of the proposed methodology.
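
The first part of the methodology can be sketched with the widely cited fuzzy-expectation conversion (defuzzify the reliability part, then scale the restraint by the square root of that weight). Triangular membership functions are assumed, and the paper's exact conversion and the GA weighting step are not reproduced here.

```python
# Hedged sketch: convert a Z-number Z = (A, B) to a classical triangular fuzzy number.
import numpy as np

def z_to_fuzzy(restraint, reliability):
    """restraint, reliability: triangular fuzzy numbers (a1, a2, a3) for A and B."""
    # 1) defuzzify the reliability part into a crisp weight alpha (centroid of a triangle)
    alpha = sum(reliability) / 3.0
    # 2) fold the weight back into the restraint by scaling its support with sqrt(alpha),
    #    which keeps the fuzzy expectation of the weighted Z-number unchanged
    return tuple(np.sqrt(alpha) * a for a in restraint)

# example: supplier score "around 0.8" stated with reliability "quite sure"
A = (0.7, 0.8, 0.9)       # restraint
B = (0.6, 0.7, 0.8)       # reliability
print(z_to_fuzzy(A, B))   # a regular triangular fuzzy number usable in a classical fuzzy MCDM step
```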

Journal ArticleDOI
TL;DR: A method to determine generalized basic probability assignment in an open world is proposed and the triangular fuzzy number models to identify target in the proposed frame of discernment are established.
Abstract: Dempster-Shafer evidence theory (D-S theory) has been widely used in many information fusion systems since it was proposed by Dempster and extended by Shafer. However, how to determine the basic probability assignment (BPA), which is the main and first step in D-S theory, is still an open issue, especially when the given environment is an open world, which means the frame of discernment is incomplete. In this paper, a method to determine generalized basic probability assignments in an open world is proposed. A frame of discernment in an open world is established first, and then triangular fuzzy number models for identifying targets in the proposed frame of discernment are established. A pessimistic strategy based on the differentiation degree between model and sample is defined to yield the BPAs for known targets. If the sum of all the BPAs of known targets exceeds one, they are normalized and the BPA of the unknown target is assigned 0; otherwise, the BPA of the unknown target is equal to 1 minus the sum of the BPAs of all known targets. Iris classification examples illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: Experimental results show that CAASS can dynamically adjust the service level according to the environment variation and outperforms the existing streaming approaches in adaptive streaming media distribution according to peak signal-to-noise ratio (PSNR).
Abstract: We consider the problem of streaming media transmission in a heterogeneous network from a multisource server to multiple home terminals. In wired networks, the transmission performance is limited by the network state (e.g., bandwidth variation, jitter, and packet loss). In wireless networks, multiple user terminals can cause bandwidth competition. Thus, streaming media distribution in a heterogeneous network becomes a severe challenge that is critical for QoS guarantees. In this paper, we propose a context-aware adaptive streaming media distribution system (CAASS), which implements a context-aware module to perceive the environment parameters and uses a strategy analysis (SA) module to deduce the most suitable service level. This approach is able to improve video quality and guarantee streaming QoS. We formulate the optimization problem relating QoS to the environment parameters based on the QoS testing algorithm for IPTV in ITU-T G.1070. We evaluate the performance of the proposed CAASS through 12 types of experimental environments using a prototype system. Experimental results show that CAASS can dynamically adjust the service level according to environment variation (e.g., network state and terminal performance) and outperforms existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).

Journal ArticleDOI
TL;DR: This paper proposes a fault localization method based on deep neural network (DNN) that is capable of achieving the complex function approximation and attaining distributed representation for input data by learning a deep nonlinear network structure and shows a strong capability of learning representation from a small sized training dataset.
Abstract: With software’s increasing scale and complexity, software failure is inevitable. To date, although many kinds of software fault localization methods have been proposed and have achieved respective successes, they also have limitations. In particular, for fault localization techniques based on machine learning, the models available in the literature are all shallow-architecture algorithms. Because of shortcomings such as a restricted ability to express complex functions with a limited amount of sample data and restricted generalization ability for intricate problems, those methods cannot analyze faults accurately. To that end, we propose a fault localization method based on a deep neural network (DNN). This approach is capable of achieving complex function approximation and attaining distributed representations of input data by learning a deep nonlinear network structure. It also shows a strong capability of learning representations from a small training dataset. Our DNN-based model is trained using the coverage data and the results of test cases as input, and we further locate the faults by testing the trained model with a virtual test suite. This paper conducts experiments on the Siemens suite and the Space program. The results demonstrate that our DNN-based fault localization technique outperforms other fault localization methods such as BPNN, Tarantula, and so forth.
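
A hedged sketch of the localization idea follows: train a network that maps a test case's statement-coverage vector to its pass/fail result, then query it with a "virtual test suite" that covers one statement at a time and rank statements by the predicted failure score. The network size and the synthetic data are placeholders, not the paper's configuration.

```python
# Hedged sketch of DNN-style fault localization on synthetic coverage data.
import numpy as np
from sklearn.neural_network import MLPClassifier

n_tests, n_statements = 200, 50
rng = np.random.default_rng(0)
coverage = rng.integers(0, 2, size=(n_tests, n_statements))          # 1 = statement executed
faulty = 17                                                           # ground-truth faulty statement
fails = (coverage[:, faulty] == 1) & (rng.random(n_tests) < 0.8)      # synthetic test outcomes

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(coverage, fails.astype(int))

virtual_suite = np.eye(n_statements)                   # virtual test i covers only statement i
suspiciousness = model.predict_proba(virtual_suite)[:, 1]
ranking = np.argsort(suspiciousness)[::-1]
print("top suspicious statements:", ranking[:5])
```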

Journal ArticleDOI
TL;DR: In this work, the typical extreme learning machine (ELM) and an efficient learning model, upper-layer-solution-aware (USA), have been used in ROP prediction, and it is shown that ELM and USA outperform the commonly used conventional artificial neural network.
Abstract: Predicting the rate of penetration (ROP) is critical for drilling optimization because maximization of ROP can greatly reduce expensive drilling costs. In this work, the typical extreme learning machine (ELM) and an efficient learning model, upper-layer-solution-aware (USA), have been used in ROP prediction. Because formation type, rock mechanical properties, hydraulics, bit type and properties (weight on the bit and rotary speed), and mud properties are the most important parameters that affect ROP, they have been considered to be the input parameters to predict ROP. The prediction model has been constructed using industrial reservoir data sets that are collected from an oil reservoir at the Bohai Bay, China. The prediction accuracy of the model has been evaluated and compared with the commonly used conventional artificial neural network (ANN). The results indicate that ANN, ELM, and USA models are all competent for ROP prediction, while both of the ELM and USA models have the advantage of faster learning speed and better generalization performance. The simulation results have shown a promising prospect for ELM and USA in the field of ROP prediction in new oil and gas exploration in general, as they outperform the ANN model. Meanwhile, this work provides drilling engineers with more choices for ROP prediction according to their computation and accuracy demand.
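
A minimal ELM regressor of the kind used here for ROP prediction is sketched below: random input weights, a nonlinear hidden layer, and output weights solved in a single least-squares step. The feature columns and data sizes are placeholders for the drilling parameters named in the abstract.

```python
# Minimal extreme learning machine (ELM) regressor sketch.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y                             # output weights by pseudo-inverse
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# usage with placeholder data: columns could stand for WOB, rotary speed, mud weight, depth, ...
rng = np.random.default_rng(1)
X_train, y_train = rng.random((300, 6)), rng.random(300)
rop_model = ELMRegressor(n_hidden=80).fit(X_train, y_train)
rop_pred = rop_model.predict(rng.random((10, 6)))
```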

Journal ArticleDOI
TL;DR: Based on the traditional Kalman filtering method, this paper puts forward three revised models for real-time passenger flow forecasting, which are accurate and stable enough for on-line predictions, especially during the peak periods.
Abstract: Short-term prediction of passenger flow is very important for the operation and management of a rail transit system. Based on the traditional Kalman filtering method, this paper puts forward three revised models for real-time passenger flow forecasting. First, the paper introduces the historical prediction error into the measurement equation and formulates a revised Kalman filtering model based on error correction coefficient (KF-ECC). Second, this paper employs the deviation between real-time passenger flow and corresponding historical data as state variable and presents a revised Kalman filtering model based on Historical Deviation (KF-HD). Third, the paper integrates nonparametric regression forecast into the traditional Kalman filtering method using a Bayesian combined technique and puts forward a revised Kalman filtering model based on Bayesian combination and nonparametric regression (KF-BCNR). A case study is implemented using statistical passenger flow data of rail transit line 13 in Beijing during a one-month period. The reported prediction results show that KF-ECC improves the applicability to historical trend, KF-HD achieves excellent accuracy and stability, and KF-BCNR yields the best performances. Comparisons among different periods further indicate that results during peak periods outperform those during nonpeak periods. All three revised models are accurate and stable enough for on-line predictions, especially during the peak periods.
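
As a rough scalar illustration of the error-correction idea in KF-ECC, the sketch below runs a standard Kalman filter whose predicted measurement is adjusted by a coefficient times the most recent prediction error; the state model, noise variances, and correction coefficient are illustrative assumptions, not the paper's formulation.

```python
# Hedged scalar sketch of Kalman filtering with an error-correction term (KF-ECC-style idea).
import numpy as np

def kf_ecc(observations, q=4.0, r=16.0, ecc=0.5):
    """observations: passenger counts per time interval; returns one-step-ahead predictions."""
    x, p = observations[0], 1.0          # state estimate (flow) and its variance
    last_err, preds = 0.0, []
    for z in observations[1:]:
        # predict (random-walk state model), then correct the predicted measurement
        x_pred, p_pred = x, p + q
        z_pred = x_pred + ecc * last_err # error-correction term from the previous step
        preds.append(z_pred)
        # update
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - z_pred)
        p = (1 - k) * p_pred
        last_err = z - z_pred
    return np.array(preds)

flow = np.array([120, 135, 150, 170, 210, 260, 240, 200, 160, 140], dtype=float)
print(kf_ecc(flow).round(1))
```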

Journal ArticleDOI
TL;DR: The Cahn-Hilliard equation has been extended to a variety of chemical, physical, biological, and other engineering fields such as spinodal decomposition, diblock copolymer, image inpainting, multiphase fluid flows, microstructures with elastic inhomogeneity, tumor growth simulation, and topology optimization as discussed by the authors.
Abstract: The celebrated Cahn–Hilliard (CH) equation was proposed to model the process of phase separation in binary alloys by Cahn and Hilliard. Since then the equation has been extended to a variety of chemical, physical, biological, and other engineering fields such as spinodal decomposition, diblock copolymer, image inpainting, multiphase fluid flows, microstructures with elastic inhomogeneity, tumor growth simulation, and topology optimization. Therefore, it is important to understand the basic mechanism of the CH equation in each modeling type. In this paper, we review the applications of the CH equation and describe the basic mechanism of each modeling type with helpful references and computational simulation results.
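
For reference, the standard form of the equation the review discusses is given below; the notation (and the choice of the double-well potential) may differ slightly from the paper's.

```latex
% Standard Cahn--Hilliard equation: c is the concentration, M the mobility,
% F(c) a double-well potential, and \epsilon the interfacial parameter.
\frac{\partial c}{\partial t} = \nabla \cdot \big( M \, \nabla \mu \big),
\qquad
\mu = F'(c) - \epsilon^{2} \nabla^{2} c,
\qquad
F(c) = \tfrac{1}{4}\,(c^{2}-1)^{2}.
```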

Journal ArticleDOI
TL;DR: A modified cosine similarity to measure the similarity between vectors is proposed, and a new similarity measure of basic probability assignments (BPAs) is proposed based on the modified cosine similarity.
Abstract: Dempster-Shafer (D-S) evidence theory has been widely used in various fields. However, how to measure the degree of conflict (similarity) between bodies of evidence is an open issue. In this paper, in order to solve this problem, we first propose a modified cosine similarity to measure the similarity between vectors. Then a new similarity measure of basic probability assignments (BPAs) is proposed based on the modified cosine similarity. The new similarity measure can achieve a reasonable measure of the similarity of BPAs and then efficiently measure the degree of conflict among bodies of evidence. Numerical examples are used to illustrate the effectiveness of the proposed method. Finally, a weighted average method based on the new BPA similarity is proposed, and an example is used to show the validity of the proposed method.
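
As a baseline for the measure being modified, the sketch below computes the plain cosine similarity between two BPAs expressed as vectors over the same ordered list of focal elements; the paper's modification of the cosine similarity is not reproduced here.

```python
# Minimal sketch: cosine similarity between two BPAs viewed as vectors over focal elements.
import numpy as np

def bpa_cosine_similarity(m1, m2, focal_elements):
    v1 = np.array([m1.get(f, 0.0) for f in focal_elements])
    v2 = np.array([m2.get(f, 0.0) for f in focal_elements])
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

focal = [("a",), ("b",), ("a", "b")]
m1 = {("a",): 0.6, ("b",): 0.2, ("a", "b"): 0.2}
m2 = {("a",): 0.1, ("b",): 0.7, ("a", "b"): 0.2}
print(bpa_cosine_similarity(m1, m2, focal))   # values near 1 indicate low conflict between the bodies of evidence
```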

Journal ArticleDOI
TL;DR: Extensive experiments and comparisons conducted on Corel-A, Caltech-256, and Ground Truth image datasets demonstrate that the proposed image representation increases the performance of image retrieval.
Abstract: Content-based image retrieval (CBIR) provides a sustainable solution to retrieve similar images from an image archive. In the last few years, the Bag-of-Visual-Words (BoVW) model gained attention and significantly improved the performance of image retrieval. In the standard BoVW model, an image is represented as an orderless global histogram of visual words by ignoring the spatial layout. The spatial layout of an image carries significant information that can enhance the performance of CBIR. In this paper, we are presenting a novel image representation that is based on a combination of local and global histograms of visual words. The global histogram of visual words is constructed over the whole image, while the local histogram of visual words is constructed over the local rectangular region of the image. The local histogram contains the spatial information about the salient objects. Extensive experiments and comparisons conducted on Corel-A, Caltech-256, and Ground Truth image datasets demonstrate that the proposed image representation increases the performance of image retrieval.

Journal ArticleDOI
TL;DR: In this article, the effect of the calculation of the longitudinal location of a wheel rail contact point on the wheelset's motion in a vehicle dynamic simulation was investigated, and it was shown that using a more accurate contact point location results in a smaller wheelset yaw angle, although the effect is small.
Abstract: This paper investigates the effect of the calculation of the longitudinal location of the wheel-rail contact point on the wheelset’s motion in a vehicle dynamic simulation. All current vehicle dynamic software programs assume that the contact between wheel and rail takes place in the vertical plane through the wheelset’s rolling axis. However, when the yaw angle of the wheelset is nonzero, the contact point is situated up to 10 mm from that plane. This difference causes a difference in the yaw moment on the wheelset which is used in the vehicle dynamic simulation. To this end, an existing analytical method to determine the longitudinal contact point location was validated using a numerical approach. Then vehicle dynamic simulations with both the classic and the new contact location were performed, concluding that using a more accurate contact point location results in a smaller wheelset yaw angle in a vehicle dynamic simulation, although the effect is small.

Journal ArticleDOI
TL;DR: In this paper, a bidirectional evolutionary algorithm based on discrete level set functions is combined with the local level set method to replace the numerical process of hole nucleation for topology optimization.
Abstract: The local level set method (LLSM) achieves higher computational efficiency than level set methods (LSMs) with global models because it uses a narrow-band model. The computational efficiency of the LLSM can be further increased by avoiding the reinitialization procedure through the introduction of a distance regularized equation (DRE). The numerical stability of the DRE can be ensured by a proposed conditionally stable difference scheme under reverse diffusion constraints. Nevertheless, the proposed method possesses no mechanism to nucleate new holes in the material domain for two-dimensional structures, so a bidirectional evolutionary algorithm based on discrete level set functions is combined with the LLSM to replace the numerical process of hole nucleation. Numerical examples are given to show the high computational efficiency and numerical stability of this algorithm for topology optimization.

Journal ArticleDOI
TL;DR: In this paper, a homotopy analysis method is used to obtain an infinite series solution for the space-time fractional nonlinear partial differential coupled mKdV equation, for the special case when the limit of the integral order of the time derivative is considered.
Abstract: We present new analytical approximated solutions for the space-time fractional nonlinear partial differential coupled mKdV equation. A homotopy analysis method is considered to obtain an infinite series solution. The effectiveness of this method is demonstrated by finding exact solutions of the fractional equation proposed, for the special case when the limit of the integral order of the time derivative is considered. The comparison shows a precise agreement between these solutions.
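
For orientation, the series construction underlying the homotopy analysis method is recalled below in its standard form; the paper's fractional nonlinear operator and auxiliary choices may differ from this generic statement.

```latex
% Zeroth-order deformation equation of the homotopy analysis method (standard form):
(1-q)\,\mathcal{L}\big[\phi(x,t;q) - u_{0}(x,t)\big]
  = q\,\hbar\, H(x,t)\,\mathcal{N}\big[\phi(x,t;q)\big],
\qquad q \in [0,1],
% with the series solution recovered at q = 1:
u(x,t) = u_{0}(x,t) + \sum_{m=1}^{\infty} u_{m}(x,t),
\qquad
u_{m}(x,t) = \frac{1}{m!}\,
\frac{\partial^{m}\phi(x,t;q)}{\partial q^{m}}\Big|_{q=0}.
```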

Journal ArticleDOI
TL;DR: The experimental results show that the proposed improved firefly algorithm (IFA) can efficiently segment multilevel images and obtain better performance than the other three methods.
Abstract: Multilevel image segmentation is time-consuming and involves heavy computation. The firefly algorithm has been applied to enhance the efficiency of multilevel image segmentation. However, in some cases, the firefly algorithm is easily trapped in local optima. In this paper, an improved firefly algorithm (IFA) is proposed to search for multilevel thresholds. In IFA, in order to help fireflies escape from local optima and accelerate convergence, two strategies (i.e., a diversity-enhancing strategy with Cauchy mutation and a neighborhood strategy) are proposed and adaptively chosen according to different stagnation situations. The proposed IFA is compared with three benchmark optimization algorithms, namely, Darwinian particle swarm optimization, hybrid differential evolution optimization, and the firefly algorithm. The experimental results show that the proposed method can efficiently segment multilevel images and obtain better performance than the other three methods.
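
The two diversity mechanisms named in the abstract can be sketched as below for a firefly position vector: a heavy-tailed Cauchy-mutation jump and a small neighborhood move. The step scales and the stagnation-based rule for choosing between them are illustrative assumptions, not the paper's exact IFA rules.

```python
# Sketch of Cauchy mutation and a neighborhood move for a firefly position (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def cauchy_mutation(position, scale=1.0):
    """Heavy-tailed jump that helps a stagnating firefly escape a local optimum."""
    return position + scale * rng.standard_cauchy(position.size)

def neighborhood_move(position, best_neighbor, step=0.1):
    """Small move toward the best firefly in the neighborhood for local refinement."""
    return position + step * (best_neighbor - position) + 0.01 * rng.normal(size=position.size)

def diversify(position, best_neighbor, stagnation_count, limit=5):
    # adaptive choice: long stagnation -> Cauchy jump, otherwise local neighborhood move
    if stagnation_count >= limit:
        return cauchy_mutation(position)
    return neighborhood_move(position, best_neighbor)

x = np.array([0.4, -0.2])
print(diversify(x, best_neighbor=np.zeros(2), stagnation_count=6))
```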

Journal ArticleDOI
TL;DR: In this paper, the elastic properties of sandstone and shale are analyzed based on petrophysics tests and the tensile strength is calibrated using compressive strength, which correlates with confinement pressure and elastic modulus.
Abstract: The tight gas reservoir in the fifth member of the Xujiahe formation contains heterogeneous interlayers of sandstone and shale that are low in both porosity and permeability. Elastic characteristics of sandstone and shale are analyzed in this study based on petrophysics tests. The tests indicate that sandstone and mudstone samples have different stress-strain relationships. The rock tends to exhibit elastic-plastic deformation. The compressive strength correlates with confinement pressure and elastic modulus. The results based on thin-bed log interpretation match dynamic Young’s modulus and Poisson’s ratio predicted by theory. The compressive strength is calculated from density, elastic impedance, and clay contents. The tensile strength is calibrated using compressive strength. Shear strength is calculated with an empirical formula. Finally, log interpretation of rock mechanical properties is performed on the fifth member of the Xujiahe formation. Natural fractures in downhole cores and rock microscopic failure in the samples in the cross section demonstrate that tensile fractures were primarily observed in sandstone, and shear fractures can be observed in both mudstone and sandstone. Based on different elasticity and plasticity of different rocks, as well as the characteristics of natural fractures, a fracture propagation model was built.

Journal ArticleDOI
Liguo Fei1, Yong Hu2, Fuyuan Xiao1, Luyuan Chen1, Yong Deng 
TL;DR: In this article, a D-TOPSIS method is proposed for MCDM problems based on a new effective and feasible representation of uncertain information, called D numbers, and an application to human resources selection, which essentially is a multicriteria decision-making problem, is conducted to demonstrate the effectiveness of the proposed D-TOPSIS method.
Abstract: Multicriteria decision-making (MCDM) is an important branch of operations research that combines multiple criteria to make decisions. TOPSIS is an effective method for handling MCDM problems, but it still has some shortcomings. When facing an MCDM problem, various types of uncertainty are inevitable, such as incompleteness, fuzziness, and imprecision, resulting from the limitations of human subjective judgment. However, the TOPSIS method cannot adequately deal with these types of uncertainty. In this paper, a D-TOPSIS method is proposed for MCDM problems based on a new effective and feasible representation of uncertain information, called D numbers. The D-TOPSIS method is an extension of the classical TOPSIS method. Within the proposed method, D numbers theory is used to denote the decision matrix given by experts, considering the interrelation of multiple criteria. An application to human resources selection, which essentially is a multicriteria decision-making problem, is conducted to demonstrate the effectiveness of the proposed D-TOPSIS method.

Journal ArticleDOI
TL;DR: A novel convolutional neural network model is proposed to integrate product-related review features through a product word composition model, and a bagging model is introduced to reduce overfitting and high variance.
Abstract: Product reviews are now widely used by individuals in making their decisions. However, for profit, reviewers game the system by posting fake reviews to promote or demote the target products. In the past few years, fake review detection has attracted significant attention from both industrial organizations and academic communities. However, the issue remains a challenging problem due to the lack of labelled material for supervised learning and evaluation. Existing work has made many attempts to address this problem from the perspectives of the reviewer and the review. However, there has been little discussion of product-related review features, which are the main focus of our method. This paper proposes a novel convolutional neural network model to integrate product-related review features through a product word composition model. To reduce overfitting and high variance, a bagging model is introduced to bag the neural network model with two efficient classifiers. Experiments on the real-life Amazon review dataset demonstrate the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: This paper summarizes the strategies and implementation processes of classical moving object compression algorithms, presents the related definitions of moving objects and their trajectories, and surveys application scenarios to point out potential applications in the future.
Abstract: Compression technology is an efficient way to preserve useful and valuable data as well as remove redundant and inessential data from datasets. With the development of RFID and GPS devices, more and more moving objects can be traced and their trajectories can be recorded. However, the exponential increase in the amount of such trajectory data has caused a series of problems in the storage, processing, and analysis of data. Therefore, moving object trajectory compression has undoubtedly become one of the hotspots in moving object data mining. To provide an overview, we survey and summarize the development and trends of moving object compression and analyze typical moving object compression algorithms presented in recent years. In this paper, we first summarize the strategies and implementation processes of classical moving object compression algorithms. Second, the related definitions of moving objects and their trajectories are discussed. Third, validation criteria are introduced for evaluating the performance and efficiency of compression algorithms. Finally, some application scenarios are summarized to point out potential applications in the future. It is hoped that this research will serve as a stepping stone for those interested in advancing moving object data mining.
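
One classical line-simplification algorithm of the kind such surveys cover is Douglas-Peucker, sketched below for planar trajectory points; real GPS trajectories would also carry timestamps, which this illustrative version ignores.

```python
# Sketch of Douglas-Peucker trajectory compression: keep only points whose removal would
# change the shape by more than a tolerance.
import numpy as np

def douglas_peucker(points, tol):
    """points: (n, 2) array of trajectory positions; returns the compressed subset of points."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    dx, dy = end - start
    # perpendicular distance of every interior point to the start-end chord
    dists = np.abs(dx * (pts[1:-1, 1] - start[1]) - dy * (pts[1:-1, 0] - start[0])) \
        / (np.hypot(dx, dy) + 1e-12)
    idx = int(np.argmax(dists)) + 1
    if dists[idx - 1] > tol:
        left = douglas_peucker(pts[: idx + 1], tol)       # keep the farthest point, recurse
        right = douglas_peucker(pts[idx:], tol)
        return np.vstack([left[:-1], right])              # drop the duplicated split point
    return np.vstack([start, end])                        # all interior points can be dropped

track = np.array([[0, 0], [1, 0.1], [2, -0.1], [3, 5], [4, 5.1], [5, 5]])
print(douglas_peucker(track, tol=0.5))
```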

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors used the data envelopment analysis (DEA) model to evaluate development after the transformation of Jiaozuo with the aim of providing a basis for its future developing plan.
Abstract: Jiaozuo is a typical resource-based city, and its economic transformation has been an example of success in China. However, quantitative evaluation of the city’s development has scarcely been performed, and its future development is not clear. Because of this, using relevant data from 1999 to 2013, this paper applies the data envelopment analysis (DEA) model to evaluate development after the transformation of Jiaozuo, with the aim of providing a basis for its future development plan. The results show that DEA was effective in 2000, 2004, 2006, 2010, and 2012, was weakly effective in 1999, 2001, 2002, 2003, and 2013, and was ineffective in 2005, 2007, 2008, 2009, and 2011. By evaluating the development of Jiaozuo, this paper provides policy implications for Jiaozuo’s sustainable development, and it may serve as a reference for the sustainable development of China’s other resource-based cities.
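
A hedged sketch of the input-oriented CCR efficiency score used in DEA evaluations of this kind is shown below, solved as a linear program per decision-making unit (DMU); the tiny input/output data are placeholders, not Jiaozuo's actual indicators, and the paper may use a different DEA variant.

```python
# Hedged sketch: input-oriented CCR DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """inputs: (n_dmus, n_in), outputs: (n_dmus, n_out); returns theta for DMU index o."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                              # minimize theta; variables = [theta, lambdas]
    A_in = np.hstack([-X[o][:, None], X.T])                  # sum_j lambda_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])     # sum_j lambda_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]                                          # theta = 1 means DEA-efficient

inputs = [[50, 30], [60, 25], [45, 40]]                      # e.g., investment, labor per year (placeholders)
outputs = [[100, 20], [110, 18], [90, 30]]                   # e.g., GDP, environmental index (placeholders)
print([round(ccr_efficiency(inputs, outputs, o), 3) for o in range(3)])
```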