
Showing papers in "Engineering With Computers in 2013"


Journal ArticleDOI
TL;DR: The performance of the CS algorithm is compared with various algorithms representative of the state of the art, and the optimal solutions obtained by CS are mostly far better than the best solutions reported by existing methods.
Abstract: In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with Levy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail.

1,701 citations
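
For readers unfamiliar with the Lévy-flight mechanism the abstract refers to, a minimal sketch of one such step is given below; this is not the authors' implementation, and the Mantegna exponent, step-size scaling and dimension handling are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm (beta = 1.5 is an assumed default)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def new_nest(nest, best_nest, alpha=0.01):
    """Candidate solution generated by a Levy flight biased towards the current best nest."""
    return nest + alpha * levy_step(nest.size) * (nest - best_nest)
```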


Journal ArticleDOI
TL;DR: In this paper, the authors extend the recently developed firefly algorithm to solve multi-objective optimization problems and validate the proposed approach using a selected subset of test functions and then apply it to solve design optimization benchmarks.
Abstract: Design problems in industrial engineering often involve a large number of design variables with multiple objectives, under complex nonlinear constraints. The algorithms for multiobjective problems can be significantly different from the methods for single objective optimization. To find the Pareto front and non-dominated set for a nonlinear multiobjective optimization problem may require significant computing effort, even for seemingly simple problems. Metaheuristic algorithms start to show their advantages in dealing with multiobjective optimization. In this paper, we extend the recently developed firefly algorithm to solve multiobjective optimization problems. We validate the proposed approach using a selected subset of test functions and then apply it to solve design optimization benchmarks. We will discuss our results and provide topics for further research.

414 citations
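
As background to the Pareto front mentioned above, here is a minimal non-domination check, independent of the firefly mechanics themselves; minimization of all objectives is assumed.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated(points):
    """Return the non-dominated subset of a set of objective vectors."""
    points = np.asarray(points)
    keep = [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return points[keep]
```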


Journal ArticleDOI
TL;DR: An automated method for enhancement and segmentation of blood vessels in retinal images is presented, using 2-D Gabor wavelets for vessel enhancement, owing to their ability to enhance directional structures, and a new multilayered thresholding technique for accurate vessel segmentation.
Abstract: Diabetic retinopathy screening involves assessment of the retina with attention to a series of indicative features, i.e., blood vessels, optic disk, macula, etc. The detection of changes in blood vessel structure and flow due to vessel narrowing, complete occlusions or neovascularization is of great importance. Blood vessel segmentation is the basic foundation when developing retinal screening systems, since vessels serve as one of the main retinal landmark features. This article presents an automated method for enhancement and segmentation of blood vessels in retinal images. The method uses 2-D Gabor wavelets for vessel enhancement, owing to their ability to enhance directional structures, and a new multilayered thresholding technique for accurate vessel segmentation. The strength of the proposed segmentation technique is that it performs well under large variations in illumination and even captures the thinnest vessels. The system is tested on publicly available databases of manually labeled retinal images, i.e., DRIVE and STARE. The proposed method for blood vessel segmentation achieves an average accuracy of 94.85% and an average area under the receiver operating characteristic curve of 0.9669. We compare our method with recently published methods, and the experimental results show that the proposed method gives better results.

125 citations
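
A rough idea of the Gabor-based enhancement step can be sketched with OpenCV's stock Gabor kernels; the kernel size, wavelength and number of orientations below are illustrative assumptions, not the parameters used in the paper.

```python
import cv2
import numpy as np

def gabor_vessel_response(green_channel, n_orientations=12):
    """Maximum response over a bank of oriented Gabor kernels (illustrative parameters)."""
    img = green_channel.astype(np.float32) / 255.0
    response = np.zeros_like(img)
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0.0)
        response = np.maximum(response, cv2.filter2D(img, cv2.CV_32F, kernel))
    return response  # thresholding (e.g. a multilayered scheme) is applied afterwards
```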


Journal ArticleDOI
TL;DR: Where current industrial practice treats mechanical system design and control design as separate design loops, this paper discusses their integration in a model-based design process at all design stages, turning concepts such as software-in-the-loop and hardware-in-the-loop into basic elements of an industrial design approach.
Abstract: The product race has become an innovation race, reconciling challenges of branding, performance, time to market and competitive pricing while complying with ecological, safety and legislation constraints. The answer lies in “smart” products of high complexity, relying on heterogeneous technologies and involving active components. To keep pace with this evolution and further accelerate the design cycle, the design engineering process must be rethought. The paper presents a mechatronic simulation approach to achieve this goal. The starting point is the current virtual prototyping paradigm that is widely adopted and that continues to improve in terms of model complexity, accuracy, robustness and automated optimization. Two evolutions are discussed. The first is the extension to multi-physics simulation, answering the design needs of the inherent multi-disciplinarity of “intelligent” products. Integration of thermal, hydraulic, mechanical, haptic and electrical functions requires simulation to extend beyond the traditional CAD-FEM approach, supporting the use of system, functional and perception models. The second evolution is the integration of control functions in the products. Where current industrial practice treats mechanical system design and control design as different design loops, this paper discusses their integration in a model-based design process at all design stages, turning concepts such as software-in-the-loop and hardware-in-the-loop into basic elements of an industrial design approach. These concepts are illustrated by a number of automotive design engineering cases, which demonstrate that the combined use of perception, geometric and system models makes it possible to develop innovative solutions for the active safety, low-emission and high-comfort performance of next-generation vehicles. This process in turn poses new challenges to the design in terms of the specification and validation of such innovative products, including their failure modes and fault-tolerant behaviour. This will imply adopting a model-based system engineering approach that is already common practice in software engineering.

74 citations


Journal ArticleDOI
TL;DR: The authors correct errors in the original publication: since the lower bound of the plate thickness (t) is defined in Sect. 3.2.4, Eq. (33) should be removed, and several typos in the constraint equations quoted from earlier articles are corrected.
Abstract: The authors would like to correct errors in the original publication as detailed below: Since the lower bound of the plate thickness (t) is defined in Sect. 3.2.4, Eq. (33) should be removed. There were some typos in the earlier cited articles concerning the problem formulation, which should be corrected as: \( g_1 = 85.334407 + 0.0056858\,x_2 x_5 + 0.0006262\,x_1 x_4 - 0.0022053\,x_3 x_5 \) (4), \( g_2 = 80.51249 + 0.0071317\,x_2 x_5 + 0.0029955\,x_1 x_2 + 0.0021813\,x_3^{2} \) (5), \( g_3 = 9.300961 + 0.0047026\,x_3 x_5 + 0.0012547\,x_1 x_3 + 0.0019085\,x_3 x_4 \) (6).

51 citations
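
Translated into code, the corrected expressions (4)–(6) read as follows; the variable names x1…x5 simply follow the benchmark formulation quoted above.

```python
def g1(x1, x2, x3, x4, x5):
    """Corrected constraint (4)."""
    return 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5

def g2(x1, x2, x3, x4, x5):
    """Corrected constraint (5)."""
    return 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2

def g3(x1, x2, x3, x4, x5):
    """Corrected constraint (6)."""
    return 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
```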


Journal ArticleDOI
TL;DR: This case study relates to the usability assessment of two control devices for a wheelchair-mounted robot manipulator to assist physically disabled people, on the basis of two currently used methods: multi-criteria decision analysis and Saaty's analytic hierarchy process.
Abstract: The aim of the paper is to address an innovative methodology for assessing the usability of a product. This methodology is particularly suitable for designing products that provide their main functions through their control interfaces. In particular, this case study relates to the usability assessment of two control devices for a wheelchair-mounted robot manipulator to assist physically disabled people. The study focuses on defining a synthetic usability index on the basis of two currently used methods: multi-criteria decision analysis and Saaty's analytic hierarchy process. Several virtual reality (VR)-based experiments have been conducted, set up in accordance with a cross-array experimental plan that adequately caters for both control and noise factors. Quantitative measures and subjective user evaluations have been collected to maximize the effectiveness, the efficiency and the satisfaction perceived by users while using the product. Compared to the literature on the subject, the proposed approach provides both more flexibility in defining quantitative indexes and more adequate results, even when involving only a small sample of users in the participatory design session. The use of VR technologies for the collection of the experimental data has been essential in terms of safety, costs and repeatability of the tests, as well as the robustness with respect to noise factors.

48 citations


Journal ArticleDOI
TL;DR: In this article, the rotational speed of a pump-turbine during a transient process of load rejection was analyzed using the computational fluid dynamics method, which was based on a prototype of a pumped storage power station, and the computational method could be used in the fault diagnosis of transient operation.
Abstract: Three-dimensional (3D) unsteady, incompressible flows in a pump-turbine during a transient process of load rejection were studied using the computational fluid dynamics method. The dynamic mesh (DM) method and remeshing method combined with non-conformal grid boundaries were applied to simulate the rotation of guide vanes. The fluid coupling and DM method were used to calculate the rotational speed for each time-step. Calculations were performed based on the \( \overline{v^{2}} - f \) turbulence model to incorporate the near-wall turbulence anisotropy as well as non-local pressure–strain effects. Results show that the \( \overline{v^{2}} - f \) model is suitable for predicting characteristics of the transient process during load rejection in a pump-turbine. The transient explicit characteristics, such as the flow-rate, head, torque of the runner, etc., during the process of load rejection are analyzed. Pressure fluctuation was larger in the region close to the runner during the load rejection. Stalls and reverse flow in the runner resulted in a decrease of the runner torque in turbine mode and turbine-braking mode. Simulation results were also compared and verified by experimental results. 3D simulations can be used instead of experiments to study the influence of inner flow on the external characteristics during transient processes. This calculation was based on a prototype of a pumped storage power station, and the computational method could be used in the fault diagnosis of transient operation.

35 citations


Journal ArticleDOI
TL;DR: This work proposes a new RBDO framework: PS2, which combines Particle Swarm Optimization (PSO), Support Vector Machine (SVM), and Subset Simulation (SS), and shows that in various trials the PS2 framework consistently outperforms both the double-loop and single-loop approaches in terms of computational efficiency, solution quality, and model flexibility.
Abstract: Reliability-based design optimization (RBDO) is concerned with designing an engineering system to minimize a cost function subject to the reliability requirement that failure probability should not exceed a threshold. Conventional RBDO methods are less than satisfactory in dealing with discrete design parameters and complex limit state functions (nonlinear and non-differentiable). Methods that are flexible enough to address the concerns above, however, come at a high computational cost. To enhance computational efficiency without sacrificing model flexibility, we propose a new RBDO framework: PS2, which combines Particle Swarm Optimization (PSO), Support Vector Machine (SVM), and Subset Simulation (SS). SS can efficiently estimate small failure probabilities, based on which SVM is adopted to evaluate the reliability of candidate solutions using binary classification. PSO is employed to solve the discrete optimization problem. Primary emphasis is placed upon the cooperation between SVM and PSO. The cooperation is mutually beneficial since the SVM classifier helps PSO evaluate the feasibility of solutions with high efficiency while the optimal solutions obtained by PSO assist in retraining the SVM classifier to attain better accuracy. The PS2 framework is implemented to find the optimal design of a ten-bar truss, whose component sizes are selected from a commercial standard. The reliability constraints are non-differentiable with two failure modes: yield stress and buckling stress. The interactive process between PSO and SVM contributes greatly to the success of the PS2 framework. It is shown that in various trials the PS2 framework consistently outperforms both the double-loop and single-loop approaches in terms of computational efficiency, solution quality, and model flexibility.

34 citations
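
The cooperation the abstract describes, an SVM acting as a cheap feasibility filter inside the PSO fitness evaluation, can be sketched as follows; scikit-learn's SVC stands in for the classifier, and the penalty formulation is an assumption rather than the authors' exact scheme.

```python
import numpy as np
from sklearn.svm import SVC

class FeasibilityFilter:
    """SVM surrogate for the reliability constraint: label +1 = feasible, -1 = infeasible."""

    def __init__(self):
        self.clf = SVC(kernel="rbf")

    def fit(self, designs, labels):
        # Labels come from reliability estimates (e.g. subset simulation) on sampled designs.
        self.clf.fit(designs, labels)

    def penalized_cost(self, design, cost_fn, penalty=1e6):
        """Cheap PSO fitness: exact cost plus a large penalty if the SVM predicts infeasibility."""
        feasible = self.clf.predict(np.atleast_2d(design))[0] > 0
        return cost_fn(design) + (0.0 if feasible else penalty)
```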


Journal ArticleDOI
TL;DR: A numerical form-finding procedure of tensegrity structures using singular value decompositions of the force density and equilibrium matrices to find the feasible sets of nodal coordinates and force densities which satisfy the minimum required deficiencies of these two matrices.
Abstract: A numerical form-finding procedure of tensegrity structures is developed. The only required information is the topology and the types of members. The singular value decompositions of the force density and equilibrium matrices are performed iteratively to find the feasible sets of nodal coordinates and force densities which satisfy the minimum required deficiencies of these two matrices, respectively. An approach of defining a unique configuration of tensegrity structure by specifying an independent set of nodal coordinates is provided. An explanation is given for the preservation in self-equilibrium status of the tensegrity structures under affine transformation. Two- and three-dimensional examples are illustrated to demonstrate the efficiency and robustness of the proposed method in searching stable self-equilibrium configurations of tensegrity structures.

31 citations
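
The core numerical step, extracting feasible force densities from the null space revealed by the singular value decomposition, can be sketched with NumPy; the assembly of the equilibrium matrix A and the iterative update loop are omitted.

```python
import numpy as np

def feasible_force_densities(A, tol=1e-10):
    """Basis of the (near) null space of the equilibrium matrix A: candidate force-density vectors."""
    U, s, Vt = np.linalg.svd(A)
    s_full = np.concatenate([s, np.zeros(Vt.shape[0] - s.size)])  # pad singular values to n
    return Vt[s_full <= tol].T  # each column q satisfies A @ q ~ 0
```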


Journal ArticleDOI
Raşit Köker
TL;DR: In this study, the simulated annealing (SA) algorithm was used together with the neural-network-based inverse kinematics solution for robots to minimize the error at the end effector.
Abstract: The neural-network-based inverse kinematics solution is one of the recent topics in robotics because many traditional inverse kinematics solutions, such as geometric, iterative and algebraic methods, are inadequate for redundant robots. However, since neural networks work with an acceptable error, the error at the end of inverse kinematics learning should be minimized. In this study, the simulated annealing (SA) algorithm was used together with the neural-network-based inverse kinematics solution for robots to minimize the error at the end effector. The solution method is applied to the Stanford and Puma 560 six-joint robot models to show its efficiency. The proposed algorithm combines the characteristics of neural networks and an optimization technique to obtain the best solution for critical robotic applications. Three Elman neural networks were trained using separate training sets and different parameters, since one of them can give better results than the others. The best result is selected among the three neural network results by computing the end-effector error via the direct kinematics equation of the robotic manipulator. The obtained best solution is then given to the simulated annealing algorithm, which improves the decimal part of the result by finding the best-fitting 10 digits. The end-effector error was reduced significantly.

25 citations
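
The SA refinement stage can be pictured as a local search around the network's joint estimate; the forward-kinematics function, cooling schedule and step size below are placeholders, not the paper's settings.

```python
import numpy as np

def sa_refine(theta_nn, forward_kin, target_pos, iters=2000, step=1e-3, T0=1.0):
    """Locally refine a neural-network joint estimate by simulated annealing on end-effector error."""
    theta = theta_nn.copy()
    err = np.linalg.norm(forward_kin(theta) - target_pos)
    for i in range(iters):
        T = T0 * (1 - i / iters)                       # linear cooling (assumed schedule)
        cand = theta + np.random.uniform(-step, step, theta.size)
        cand_err = np.linalg.norm(forward_kin(cand) - target_pos)
        if cand_err < err or np.random.rand() < np.exp(-(cand_err - err) / max(T, 1e-12)):
            theta, err = cand, cand_err                # accept improvement or uphill move
    return theta, err
```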


Journal ArticleDOI
TL;DR: The modelling of the worm gearing proves that such a type is viable and machinable, and the main advantage of the construction in comparison to other modified worm gearing types is the lack of transmission error, owing to the point-like connection.
Abstract: The modelling and simulation of a worm gearing generated by an intermediary generating helicoid with a circle profile in the axle plane is considered. The goals of the paper are to verify the viability of such a worm gearing and to determine the proper parameters of a machinable construction. For the modelling and for the analysis of contact patterns the software Surface Constructor was applied. First the original theory behind the software is introduced and then its application to modelling gear types with point-like connection is explained. After a short overview of the capabilities of the software, two sample tasks are introduced: the generation of the gear surfaces of a hypoid gearing and the detailed description of modelling the novel worm gear drive with point-like contact. The modelling of the worm gearing proves that such a type is viable and machinable. The main advantage of the construction in comparison to other modified worm gearing types is the lack of transmission error, owing to the point-like connection. Due to this feature, the gearing tolerates misalignment well.

Journal ArticleDOI
TL;DR: The Manufactured Analytical Solution Abstraction (MASA) library for applying the method of manufactured solutions to the verification of software used for solving a large class of problems stemming from numerical methods in mathematical physics.
Abstract: In this paper we introduce the Manufactured Analytical Solution Abstraction ( MASA ) library for applying the method of manufactured solutions to the verification of software used for solving a large class of problems stemming from numerical methods in mathematical physics including nonlinear equations, systems of algebraic equations, and ordinary and partial differential equations. We discuss the process of scientific software verification, manufactured solution generation using symbolic manipulation with computer algebra systems such as Maple™ or SymPy, and automatic differentiation for forcing function evaluation. We discuss a hierarchic methodology that can be used to alleviate the combinatorial complexity in generating symbolic manufactured solutions for systems of equations based on complex physics. Finally, we detail the essential features and examples of the Application Programming Interface behind MASA , an open source library designed to act as a central repository for manufactured and analytical solutions over a diverse range of problems.
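
The symbolic workflow the abstract mentions (manufacture a solution, then differentiate it to obtain the forcing term) can be illustrated in a few lines of SymPy for a 1-D Poisson problem; the chosen manufactured solution is arbitrary, and this is not the MASA API itself.

```python
import sympy as sp

x = sp.symbols("x")
u_manufactured = sp.sin(sp.pi * x) * sp.exp(x)   # assumed manufactured solution
# For -u'' = f, differentiate the manufactured solution to obtain the forcing term.
f = sp.simplify(-sp.diff(u_manufactured, x, 2))
print(sp.pretty(f))   # source term to feed to the solver under verification
```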

Journal ArticleDOI
TL;DR: Fatigue crack growth simulation of alloy/ceramic functionally graded materials using the extended finite element method (XFEM) shows that the presence of inhomogeneities/discontinuities in the domain significantly influences the fatigue life of the components.
Abstract: The present work deals with the fatigue crack growth simulation of alloy/ceramic functionally graded materials (FGMs) using extended finite element method (XFEM). Various cases of FGM containing multiple inhomogeneities/discontinuities along with either a major edge or a center crack are taken for the purpose of simulation. The fatigue life of the FGM plate is calculated using Paris law of fatigue crack growth under cyclic loading. The effect of multiple inhomogeneities/discontinuities (minor cracks, holes/voids, and inclusions) on the fatigue life of cracked FGM plate is studied in detail. These simulations show that the presence of inhomogeneities/discontinuities in the domain significantly influences the fatigue life of the components.
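
The fatigue-life calculation rests on the Paris law \( \mathrm{d}a/\mathrm{d}N = C\,(\Delta K)^{m} \); a generic numerical integration of that law is sketched below, with the material constants and the stress-intensity model left as placeholders rather than the paper's data.

```python
def paris_life(a0, a_crit, delta_K, C=1e-11, m=3.0, da=1e-5):
    """Cycles to grow a crack from a0 to a_crit under the Paris law da/dN = C*(dK)^m."""
    a, N = a0, 0.0
    while a < a_crit:
        dK = delta_K(a)              # stress-intensity range supplied by the (e.g. XFEM) solution
        N += da / (C * dK ** m)      # dN = da / (C * dK^m)
        a += da
    return N
```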

Journal ArticleDOI
TL;DR: A new boundary shape representation for 3D boundary value problems based on parametric triangular Bézier surface patches is presented, which eliminates the need for both boundary and domain discretizations in PIES.
Abstract: This paper presents a new boundary shape representation for 3D boundary value problems based on parametric triangular Bézier surface patches. Formed by the surface patches, the graphical representation of the boundary is directly incorporated into the formula of the parametric integral equation system (PIES). This allows us to eliminate the need for both boundary and domain discretizations. The possibility of eliminating the discretization of the boundary and the domain in PIES significantly reduces the number of input data necessary to define the boundary. In this case, the boundary is described by a small set of control points of surface patches. Three numerical examples were used to validate the solutions of PIES with analytical and numerical results available in the literature.

Journal ArticleDOI
TL;DR: A usage coverage model is established to integrate users and their expected usage scenarios into product family assessment; it maps the user's individual capacity together with a given product into the usage context space, and several usage coverage indices are proposed.
Abstract: Computation techniques have provided designers with deeper understanding of the market niches that were neglected before. Usage contextual information has been studied in marketing research since the last century; however, little research in design engineering focuses on it. Therefore, in this paper, we analyzed the relations between usage context information and the design of products. A usage coverage model is established to integrate users and their expected usage scenarios into product family assessment. We map the user’s individual capacity together with a given product into the usage context space. The overlapping between the required usage and feasible usage can be measured. Based on this mechanism, several usage coverage indices are proposed to assess the compliance of a given product family to the expected set of usage scenarios to be covered. The original method is demonstrated on a scale-based product family of jigsaws in a redesign context. Constraint programming technique is applied to solve the physics-based causal loops that determine usage performances in a set-based design approach. Designers can rely on the results to eliminate redundant units in the family or modify the configuration of each product. The contribution of the paper is to provide an inter-disciplinary point of view to assessing the composition and configuration of a product family design.

Journal ArticleDOI
TL;DR: It is demonstrated that the simplex method is more effective than the simulated annealing method for optimization problems where a single analysis loop requires a long time, even if the solutions are quite similar in terms of the calculated values of the objective function, design and state variables.
Abstract: In this paper, two optimization approaches to improve the product design process have been analysed. Through the analysis of a case study concerning the design of a new High Energy Absorption Rear Underrun Protective Device (HEARUPD), two different optimization approaches (simplex and simulated annealing) have been compared. In the implemented optimization processes, the crash between an economy car and the rear part of a truck has been simulated by dynamic numerical (FEM) analyses. Moreover, the authors have proposed the use of a suitable linear function of four variables with the purpose of reducing the multi-objective optimization processes to mono-objective ones. This has been done to simplify the analysis procedures without affecting the quality and the completeness of the optimization processes. The obtained results, as well as showing the high effectiveness of the integrated use of numerical crash analyses and optimization methods, demonstrate that the simplex method is more effective than the simulated annealing one for optimization problems where a single analysis loop requires a long time. Even if the solutions are quite similar in terms of the calculated values of the objective function, design and state variables, the simplex method needs a shorter computational time than simulated annealing to obtain an optimized solution.

Journal ArticleDOI
TL;DR: This paper extends the recent “Futures” concept permitting the algorithm to circumvent synchronization barriers in cluster optimization of an A/C duct of a passenger car, using a refined CFD legacy software model along with an adaptive meta-model based on Proper Orthogonal Decomposition and diffuse approximation.
Abstract: Multi-processor HPC tools have become commonplace in industry and research today. Evolutionary algorithms may be elegantly parallelized by broadcasting a whole population of designs to an array of processors in a computing cluster or grid. However, issues arise due to synchronization barriers: subsequent iterations have to wait for the successful execution of all jobs of the previous generation. When other users load a cluster or a grid, individual tasks may be delayed and some of them may never complete, slowing down and eventually blocking the optimization process. In this paper, we extend the recent “Futures” concept permitting the algorithm to circumvent such situations. The idea is to set the default values to the cost function values calculated using a high-quality surrogate model, progressively improving when “exact” numerical results are received. While waiting for the exact result, the algorithm continues using the approximation and when the data finally arrives, the surrogate model is updated. At convergence, the final result is not only an optimized set of designs, but also a surrogate model that is precise within the neighborhood of the optimal solution. We illustrate this approach with the cluster optimization of an A/C duct of a passenger car, using a refined CFD legacy software model along with an adaptive meta-model based on Proper Orthogonal Decomposition (POD) and diffuse approximation.
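
The "Futures" mechanism described above, carrying on with a surrogate value while the exact CFD job is still running and correcting it once the result arrives, can be sketched with Python's concurrent.futures; the surrogate's update/predict interface is an assumption, not the paper's API.

```python
from concurrent.futures import ProcessPoolExecutor, wait

def evaluate_generation(designs, exact_cfd, surrogate, pool, budget_s=60.0):
    """One generation: exact CFD values that arrive within the budget, surrogate values otherwise."""
    futures = [pool.submit(exact_cfd, d) for d in designs]
    wait(futures, timeout=budget_s)               # do not block on straggling cluster jobs
    fitness = []
    for d, fut in zip(designs, futures):
        if fut.done():
            y = fut.result()
            surrogate.update(d, y)                # assumed surrogate interface
            fitness.append(y)
        else:
            fitness.append(surrogate.predict(d))  # provisional value, corrected when the job lands
    return fitness

# pool is an Executor created once by the caller, e.g. ProcessPoolExecutor(max_workers=32),
# so that unfinished jobs keep running across generations.
```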

Journal ArticleDOI
TL;DR: The solution of kinematics problems for serial manipulators is fundamental for their synthesis, analysis, simulation, and computer control; for this reason, this paper introduces a public domain package and open software called SnAM (Serial n-Axis Manipulators), which is developed under the ADEFID framework.
Abstract: The solution of kinematics problems for serial manipulators is fundamental for their synthesis, analysis, simulation, and computer control; for this reason, this paper introduces a public domain package and open software called SnAM (Serial n-Axis Manipulators), which is developed under the ADEFID (ADvanced Engineering soFtware for Industrial Development) framework, where the manipulator is conceptualized as a derived class from CRobokin, CMachine and CIpiSModel, which are fundamental ADEFID classes. SnAM has been developed with efficient algorithms in a closed-loop solution to solve direct kinematics, whereas for the case of inverse kinematics, matrix formulation, elimination and numerical methods are implemented. Furthermore, for the architecture definition, the user is able to display a dialog box in which the design parameters are set based on the Denavit–Hartenberg convention with the aid of sliding bars, while the solid model is updated simultaneously showing the actual configuration. Since ADEFID provides tools for graphical interfacing with embedded control components, SnAM adopts them not only for virtual simulation but also for use with an adaptive prototype designed for this purpose. Furthermore, SnAM assists the user in tasks related to trajectory planning, collision avoidance and three-dimensional object scanning.

Journal ArticleDOI
TL;DR: A prediction model was derived for the effective angle of shearing resistance of soils using a novel hybrid method coupling genetic programming (GP) and orthogonal least squares algorithm (OLS).
Abstract: In the present study, a prediction model was derived for the effective angle of shearing resistance (ϕ′) of soils using a novel hybrid method coupling genetic programming (GP) and orthogonal least squares algorithm (OLS). The proposed nonlinear model relates ϕ′ to the basic soil physical properties. A comprehensive experimental database of consolidated-drained triaxial tests was used to develop the model. Traditional GP and least square regression analyses were performed to benchmark the GP/OLS model against classical approaches. Validity of the model was verified using a part of laboratory data that were not involved in the calibration process. The statistical measures of correlation coefficient, root mean squared error, and mean absolute percent error were used to evaluate the performance of the models. Sensitivity and parametric analyses were conducted and discussed. The GP/OLS-based formula precisely estimates the ϕ′ values for a number of soil samples. The proposed model provides a better prediction performance than the traditional GP and regression models.

Journal ArticleDOI
TL;DR: A comparison among the hybridized solutions and the solutions obtained through GAs and gradient-based optimization methods shows that the proposed algorithm often leads to very different topologies having better performances.
Abstract: In a recent project the authors have developed an approach to assist the identification of the optimal topology of a technical system, capable of overcoming geometrical contradictions that arise from conflicting design requirements. The method is based on the hybridization of partial solutions obtained from mono-objective topology optimization tasks. In order to investigate the efficiency, effectiveness and potentialities of the developed hybridization algorithm, a comparison between the proposed approach and traditional topology optimization techniques such as Genetic Algorithms (GAs) and gradient-based methods is presented here. The benchmark has been performed by applying the hybridization algorithm to several case studies of multi-objective optimization problems available in the literature. The obtained results demonstrate that the proposed approach is considerably less expensive in terms of computational requirements than the conventional application of GAs to topology optimization tasks, while keeping the same effectiveness in terms of searching for the global optimum solution. Moreover, the comparison between the hybridized solutions and the solutions obtained through GAs and gradient-based optimization methods shows that the proposed algorithm often leads to very different topologies having better performances.

Journal ArticleDOI
TL;DR: A simple and efficient higher-order explicit numerical scheme is presented for the integration of a system of ordinary differential equations constrained by an algebraic condition (DAE); its accuracy is significantly higher than that of the classical backward-Euler scheme.
Abstract: The article introduces, as a result of further development of the first-order scheme NICE, a simple and efficient higher-order explicit numerical scheme for the integration of a system of ordinary differential equations which is constrained by an algebraic condition (DAE). The scheme is based on the truncated Taylor expansion of the constraint equation with order h of the scheme being determined by the highest exponent in the truncated Taylor series. The integration scheme thus conceived will be named NICEh, considering both principal premises of its construction. In conjunction with a direct solution technique used to solve the boundary value problem, the NICEh scheme is very convenient for integrating constitutive models in plasticity. The plasticity models are defined mostly by a system of algebraic and differential equations in which the yield criterion represents the constraint condition. To study the properties of the new integration scheme, which, like the forward-Euler scheme, is characterised by its implementation simplicity due to the explicitness of its formulations, a damage constitutive model (Gurson–Tvergaard–Needleman model) is considered. The general opinion that the implicit backward-Euler scheme is much more accurate than the thus-far known explicit schemes is challenged by the introduction of the NICEh scheme. The accuracy of the higher-order explicit scheme in the studied cases is significantly higher than the accuracy of the classical backward-Euler scheme, if we compare them under the condition of a similar CPU time consumption.

Journal ArticleDOI
TL;DR: The derivation of parametric effectiveness is presented for optimization both with and without the constraint of constant volume; the measure can be used to select an initial search direction in parameter space, or to select the sets of model parameters which have the greatest ability to improve model performance.
Abstract: The motivation for this paper is to present an approach for rating the quality of the parameters in a computer-aided design model for use as optimization variables. Parametric Effectiveness is computed as the ratio of change in performance achieved by perturbing the parameters in the optimum way, to the change in performance that would be achieved by allowing the boundary of the model to move without the constraint on shape change enforced by the CAD parameterization. The approach is applied in this paper to optimization based on adjoint shape sensitivity analyses. The derivation of parametric effectiveness is presented for optimization both with and without the constraint of constant volume. In both cases, the movement of the boundary is normalized with respect to a small root mean squared movement of the boundary. The approach can be used to select an initial search direction in parameter space, or to select sets of model parameters which have the greatest ability to improve model performance. The approach is applied to a number of example 2D and 3D FEA and CFD problems.
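
In symbols, and paraphrasing the definition above rather than quoting the paper's notation, parametric effectiveness can be written as \( \eta = \Delta J^{*}_{\mathrm{CAD}} / \Delta J^{*}_{\mathrm{free}} \), where \( \Delta J^{*}_{\mathrm{CAD}} \) is the performance change achieved by the optimal perturbation of the CAD parameters and \( \Delta J^{*}_{\mathrm{free}} \) is the change achieved by unconstrained boundary movement of the same root-mean-squared magnitude; values of \( \eta \) close to 1 thus indicate a parameterization that captures most of the improvement available to free-form shape change.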

Journal ArticleDOI
TL;DR: Performance of a flying cross bar, instead of the flying plate, to incapacitate the long-rod penetrator, has been evaluated numerically based on a finite element model.
Abstract: The performance of a flying cross bar, instead of a flying plate, in incapacitating a long-rod penetrator has been evaluated numerically based on a finite element model. The length to diameter ratio, L/D, of the penetrator was 30 and the velocity was 2.0 km/s. The length of the cross bar was fixed at 0.5L and the velocity of the bar was determined from its mass and given kinetic energy. The bar was assumed to impact the midpoint of the penetrator at 45° obliquity. The efficiency of the flying cross bar is maximum when the diameter of the bar is between 1D and 4D, depending on the energy of the bar and the distance to the witness block. The protection capability of the bar has been discussed in terms of the shape and lateral displacement of the penetrator disturbed by the flying cross bar.

Journal ArticleDOI
TL;DR: This paper proposes a novel method to efficiently solve inverse problems and to sample Quasi-Optimal Regions (QORs) in the input (design) space more densely.
Abstract: The use of Surrogate Based Optimization (SBO) is widespread in engineering design to find optimal performance characteristics of expensive simulations (forward analysis: from input to optimal output). However, often the practitioner knows a priori the desired performance and is interested in finding the associated input parameters (reverse analysis: from desired output to input). A popular method to solve such reverse (inverse) problems is to minimize the error between the simulated performance and the desired goal. However, there might be multiple quasi-optimal solutions to the problem. In this paper, the authors propose a novel method to efficiently solve inverse problems and to sample Quasi-Optimal Regions (QORs) in the input (design) space more densely. The development of this technique, based on the probability of improvement criterion and kriging models, is driven by a real-life problem from bio-mechanics, i.e., determining the elasticity of the (rabbit) tympanic membrane, a membrane that converts acoustic sound waves into vibrations of the middle-ear ossicular bones.
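
The sampling criterion named above, the probability of improvement under a kriging model, has a standard closed form; a sketch is given below, assuming a Gaussian predictive mean mu and standard deviation sigma from the kriging model and a minimization setting.

```python
from scipy.stats import norm

def probability_of_improvement(mu, sigma, y_best, xi=0.0):
    """P[f(x) < y_best - xi] under a Gaussian kriging prediction N(mu, sigma^2), minimization."""
    if sigma <= 0.0:
        return float(mu < y_best - xi)   # deterministic prediction: improvement or not
    return norm.cdf((y_best - xi - mu) / sigma)
```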

Journal ArticleDOI
TL;DR: A new comparative analysis is presented between classical functional formulations and a new functional introduced by the authors, and it is proved under which conditions the minimization of the analyzed functionals leads to a full-rank linear system.
Abstract: Sweep methods are one of the most robust techniques to generate hexahedral meshes in extrusion volumes. The main issue in sweep algorithms is the projection of cap surface meshes along the sweep path. The most competitive technique to determine this projection is to find a least-squares approximation of an affine mapping. Several functional formulations have been defined to carry out this least-squares approximation. However, these functionals generate unacceptable meshes for several common geometries in CAD models. In this paper we present a new comparative analysis between these classical functional formulations and a new functional presented by the authors. In particular, we prove under which conditions the minimization of the analyzed functionals leads to a full-rank linear system. Moreover, we also prove the equivalences between these formulations. These results allow us to point out the advantages of the proposed functional. Finally, from this analysis we outline an automatic algorithm to compute the node locations in the inner layers.
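
The least-squares affine mapping at the heart of the projection step can be sketched generically with NumPy (2-D cap meshes, homogeneous coordinates); this is a plain least-squares fit, not one of the specific functionals compared in the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map (A, t) with dst ~ src @ A.T + t, for n x 2 point arrays."""
    n = src.shape[0]
    H = np.hstack([src, np.ones((n, 1))])          # homogeneous source coordinates
    M, *_ = np.linalg.lstsq(H, dst, rcond=None)    # 3 x 2 solution stacking [A.T; t]
    return M[:2].T, M[2]

# Inner-layer node positions can then be obtained by blending the projections of the two cap meshes.
```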

Journal ArticleDOI
TL;DR: A prototype system is implemented to show that, by employing the right tools on the stored knowledge and data, a designer can reconstruct a migrated model based on new local requirements of the receiving system without having to repair it, avoiding delays and corrective processes that only produce patched-up design bases and limit the effective use of such models.
Abstract: This paper presents a methodology to repair flawed migrated models that are designed with knowledge-guided NURBS (KGN). Incompatible traditional CAD packages that are based on NURBS technology have for many years carried out the exchange of models and data through either neutral file formats (IGES or STEP) or proprietary formats. The exchange process most often produces poor-quality data and results in errors in the migrated CAD models, costing the industry billions of dollars every year. While previous studies by and large focus on finding means to repair migrated models through patching, we propose to repair errors in CAD models generated using KGN through a reconstruction process instead of patching. We have implemented a prototype system to show that, by employing the right tools on the stored knowledge and data, a designer can reconstruct a migrated model based on new local requirements of the receiving system without having to repair it, avoiding delays and corrective processes that only produce patched-up design bases and limit the effective use of such models.

Journal ArticleDOI
Yusheng Liu, Chuhua Xian, Ming Li, Haibin Xia, Shuming Gao
TL;DR: A novel local adaptation-based approach to generating the MA for efficient engineering analysis by adapting the MAs of the operand models in a certain way, instead of regenerating the MA from scratch.
Abstract: Currently engineering analysis is regarded as an integrated part of design process and medial axis (MA) is often utilized. However, the generation of MA of complicated models is computation intensive since it is always generated from scratch even if a tiny modification is imposed. A novel local adaptation-based approach to generating the MA for efficient engineering analysis is proposed in this study. With this method, the MA of a resultant model constructed from two other models via a Boolean operation or parameter modification is generated by adapting the MAs of the operand models in a certain way, instead of regenerating the MA from scratch. First, several new properties of the MA which are the fundamental basis of the proposed method are investigated. Then, the boundaries that will vanish from or be added into the resultant model during the Boolean operation or parameter modification are found, and the region in which the MA segments (MASs) need to be regenerated is determined. Finally, the new MASs are generated for the region using an improved tracing method. The final MA of the resultant model is thus constructed by combining the newly generated MASs with the reserved MASs of the operated model(s). Some examples are given to illustrate the high computational efficiency of the proposed method for engineering analysis.

Journal ArticleDOI
TL;DR: A MR environment specifically addressed to the usability evaluation of a product interface is described, which allows the simulation of the HMI behaviour using the same models and the same software employed by engineers during the design phase.
Abstract: The validation of a product interface is often a critical issue in the design process. Virtual reality and mixed reality (MR) are able to enhance the interactive simulation of the product human-machine interface (HMI), as these technologies allow engineers to directly involve end users in the usability assessment. This paper describes an MR environment specifically addressed to the usability evaluation of a product interface, which allows the simulation of the HMI behaviour using the same models and the same software employed by engineers during the design phase. Our approach is based on the run-time connection between the visualisation software and the simulators used for product design and analysis. In particular, we use Matlab/Simulink to model and simulate the product behaviour, and Virtools to create the interactive MR environment in which the end user can test the product. Thanks to this architecture, any modification done on the behaviour models is immediately testable in MR.

Journal ArticleDOI
TL;DR: A semi-automatic block-structured grid generation technique for hexahedral meshing of porous open-cell Kelvin foam structures for investigation of the pore-scale fluid flow is presented, and the performance of the algorithm is compared with a fully automatic tetrahedral Delaunay meshing technique.
Abstract: A semi-automatic block-structured grid generation technique for hexahedral meshing of porous open-cell Kelvin foam structures for investigation of the pore-scale fluid flow is presented. The performance of the algorithm is compared with a fully automatic tetrahedral Delaunay meshing technique. In the first part of the paper the meshing strategies are explained. In the second part grid generation times, simulation times and the mesh quality are evaluated. For this, Computational Fluid Dynamics (CFD) simulations for both a diffusion-dominated case (Re = 0.129) and a convection-dominated case (Re = 129) are carried out and analysed on four different cell resolutions of each mesh type. For the quality evaluation three different a posteriori error estimates are studied for the two mesh types on the different mesh sizes. The results are: the block-structured grid generation technique is about 10–20 times faster than the fully automatic tetrahedral technique. While the mean field error estimates are comparable for both meshes, the maximum field error estimates for the block-structured meshes are only half of those for the tetrahedral meshes. To reach simulation results of the same quality, the hexahedral mesh needs about 20–40% fewer iterations with comparable mesh sizes. The time per iteration for the hexahedral meshes is up to 94% smaller than for the tetrahedral meshes. This makes the semi-automatic block-structured grid generation technique especially suitable for parameter studies and for the investigation of micro-scale flows in foam structures consisting of large quantities of Kelvin cells.

Journal ArticleDOI
TL;DR: Parametric analysis of a two-layered axially loaded strand is performed using the recently developed p-version finite element code, which describes the geometry well and takes into account all possible inter-wire motions and frictional contact between the wires.
Abstract: Parametric analysis of a two-layered axially loaded strand is performed using the recently developed p-version finite element code, which describes the geometry well and takes into account all possible inter-wire motions and frictional contact between the wires. A special nonlinear contact theory was developed based on the Hertz-theory. It is assumed that the wires have homogenous, isotropic, linear elastic material properties. The developed code is a tool for designing wire rope strands that require low computer resources and short computational time. Case studies are performed to verify and demonstrate the efficiency and applicability of the method. Design curves are presented according to the strand geometry parameters such as helix angle and ratio of the wire radius in the different layers. The optimal geometry parameters for a given strand can be determined using these design curves.