
Showing papers presented at "International Conference on Reliability, Maintainability and Safety in 2018"


Proceedings Article•DOI•
01 Oct 2018
TL;DR: A battery RUL prediction approach based on a new recurrent neural network (RNN), i.e. the RNN with Gated Recurrent Unit (GRU), is proposed which overcomes the drawback of standard RNNs in capturing long-term dependencies.
Abstract: The lithium-ion battery has been widely applied as an energy storage component in various industrial applications, including electric vehicles, distributed grids and spacecraft. However, battery performance degrades gradually due to SEI growth, lithium plating and other irreversible electro-chemical reactions. These inevitable reactions directly influence the reliability of the energy storage system and may further cause catastrophic consequences to the host system. Remaining useful life (RUL) is one of the critical indicators for evaluating battery performance. This paper proposes a battery RUL prediction approach based on a new recurrent neural network (RNN), i.e. the RNN with Gated Recurrent Unit (GRU). The proposed method overcomes the drawback of standard RNNs in capturing long-term dependencies. The structure of the RNN-GRU is much simpler, which contributes to lower computational complexity. Experiments based on NMC lithium-ion battery cycle life testing data are conducted, and the results indicate that the mean errors for the different battery cells are both less than 3%, which shows that the proposed method is accurate and robust for battery RUL prediction.
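The gated update at the core of such a model can be sketched with the standard GRU equations. This is a generic NumPy illustration of one GRU step, not the authors' trained network; the dimensions, random weights and capacity series below are purely hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step. W/U/b hold update (z), reset (r) and candidate (n) parameters."""
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])        # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])        # reset gate
    n = np.tanh(W['n'] @ x + U['n'] @ (r * h) + b['n'])  # candidate state
    return (1 - z) * h + z * n                           # blended new hidden state

# Toy dimensions: 1 input feature (e.g. measured capacity), 4 hidden units.
rng = np.random.default_rng(0)
dim_x, dim_h = 1, 4
W = {k: rng.standard_normal((dim_h, dim_x)) * 0.1 for k in 'zrn'}
U = {k: rng.standard_normal((dim_h, dim_h)) * 0.1 for k in 'zrn'}
b = {k: np.zeros(dim_h) for k in 'zrn'}

h = np.zeros(dim_h)
for capacity in [1.00, 0.98, 0.96, 0.95]:   # hypothetical degradation series
    h = gru_step(np.array([capacity]), h, W, U, b)
```

In practice the gate weights would be learned from cycle-life data, and the final hidden state fed to a regression head that outputs the RUL estimate.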

30 citations


Proceedings Article•DOI•
Jochen Link, Karl Waedt, Ines Ben Zid, Xinxin Lou
01 Oct 2018
TL;DR: Provides an overview of the general challenges that are upcoming and the organizational challenges that companies must solve to fulfill the requirements of industrial projects which have to consider both functional safety and cybersecurity.
Abstract: The interconnection of technical devices by new technology has to be based on the requirements needed to fulfill legal compliance. This interconnection, and the increasing complexity that comes with it, requires knowledge of the interrelations within an organization in order to progress towards an integrated organization. In this paper we provide an overview of the general challenges that are upcoming and the organizational challenges that have to be solved in companies to fulfill the requirements of industrial projects which have to consider both functional safety and cybersecurity. The domain of functional safety and the expanding domain of cybersecurity both come with high complexity and with their own grading and certification schemes, defense-in-depth architectures, and comprehensive sets of standards and guidelines endorsed by specialists familiar with one of these domains. Accordingly, agreement on effective and affordable approaches to handle the safety & security interface is still ongoing, both for generic solutions and for business-domain-specific regulation. Parts of this work result from an ongoing exchange in the Sino-German Industry 4.0 / Intelligent Manufacturing Standardization Sub-Working Group, which includes a track on functional safety and a track on cybersecurity, in addition to further tracks relevant to cyber-physical systems, e.g. on reference architecture models, preventive maintenance, and edge and fog computing.

8 citations


Proceedings Article•DOI•
Luyi Li, Minyan Lu, Tingyang Gu
01 Oct 2018
TL;DR: A systematic modeling approach based on set theory is proposed to depict failure indicators of complex software-intensive systems accurately and to provide a guideline for selecting appropriate failure indicators.
Abstract: With the increasing complexity of software-intensive systems, software failures have become the dominant cause of overall system failures. To ensure runtime reliability, researchers have come up with online failure prediction techniques to predict potential software failures in advance and these provide the basis for subsequent online fault management and other health management activities. Failure indicators are the abnormal system variables that indicate potential failures, which are the basis of failure prediction. However, in previous failure prediction studies, researchers have focused mainly on failure indicators at the hardware level, network level, or operating system level, ignoring application-level failure indicators. Furthermore, there is no systematic model for failure indicators, which means that failure indicators lack a precise definition and the use of failure indicators is quite arbitrary. This paper proposes a systematic modeling approach for failure indicators of complex software-intensive systems. Firstly, failure indicators at the application level are extracted based on the analysis of new features of complex software-intensive systems. Then, failure indicators at all levels are summarized and classified into several categories. In addition, these indicators are analyzed, and their features are extracted from different dimensions, including the time/space domain, system/application dimension, and from the internal/external viewpoint. Based on these different attributes from different dimensions, a systematic modeling approach based on set theory is proposed to depict these failure indicators accurately and to provide a guideline for selecting appropriate failure indicators. Finally, some examples are given to show how to use the proposed method and to verify its effectiveness.

7 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: Based on multi-source perception signals of the machining process, the CNN's convolutional layers extract features, batch normalization (BN) preserves feature information, and the pooling layers reduce the feature matrix, together establishing a state prediction model that can predict and identify the degree of tool wear and abnormal states of the machining process.
Abstract: In modern manufacturing systems and industries, an increasing number of studies focused on recognizing abnormal processes apply data-driven models to analyze the manufacturing process. However, because sequence data cannot be fed directly into regression and classification models, researchers must invest most of their time in feature extraction methods to recognize the abnormal process. In the last few years, with the development of deep learning methods, which redefine how raw data are processed, neural networks have been used to handle raw sensory data directly. In this study, based on multi-source perception signals of the machining process, the CNN's convolutional layers extract features, batch normalization (BN) preserves feature information, and the pooling layers reduce the feature matrix; together these establish a state prediction model that can predict and identify the degree of tool wear and abnormal states of the machining process. Finally, a tool wear test is introduced to compare the results of the traditional signal processing method with the CNN algorithm, and the feasibility of the CNN algorithm is demonstrated.
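The conv → BN → pool pipeline described above can be illustrated on a single sensor channel. This is a generic NumPy sketch of the three layer types, not the paper's trained model; the signal length, kernel width and filter count are made up:

```python
import numpy as np

def conv1d(signal, kernels):
    """Cross-correlate one sensor channel with several filters (CNN-style 'convolution');
    np.convolve flips its second argument, so we pre-flip each kernel."""
    return np.stack([np.convolve(signal, k[::-1], mode='valid') for k in kernels])

def batch_norm(feats, eps=1e-5):
    """Normalize each feature map to zero mean / unit variance (inference-style BN)."""
    mu = feats.mean(axis=1, keepdims=True)
    var = feats.var(axis=1, keepdims=True)
    return (feats - mu) / np.sqrt(var + eps)

def max_pool(feats, width=2):
    """Non-overlapping max pooling to shrink the feature matrix."""
    n = feats.shape[1] // width * width
    return feats[:, :n].reshape(feats.shape[0], -1, width).max(axis=2)

rng = np.random.default_rng(1)
vibration = rng.standard_normal(64)    # hypothetical raw sensor window
kernels = rng.standard_normal((3, 5))  # 3 filters of width 5 (learned, in a real model)
feats = max_pool(batch_norm(conv1d(vibration, kernels)))
```

A real model would stack several such blocks and end with a classifier over wear states; here the point is only how each layer reshapes the data (64 samples → 3×60 feature maps → 3×30 after pooling).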

6 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: A method of a functional model-driven FMEA that starts from the source of the failure and has a good formalization process, which effectively avoids the duplication and omission of failure analysis results is proposed.
Abstract: Current FMEA methods face problems such as lack of formalization of the failure mode and mechanism analysis process, ambiguous results of failure propagation and impact analysis, and weak support for product development throughout the analysis process. This paper proposes a functional model-driven FMEA method. The structure and function are modeled using the structural tree method. The failure mode and mechanism are analyzed through stress, and the failure impact set is determined by a directed graph algorithm. Based on the above theories, a computer-aided FMEA analysis system was developed. The system starts from the source of the failure and has a good formalization process, which effectively avoids duplication and omission of failure analysis results.

5 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: In this paper, a comprehensive risk evaluation model for airport operation safety is proposed based on assigning severity weight to each safety performance indicator, including unsafe event type, foundation type, management type, and operation type.
Abstract: In order to comprehensively evaluate risk in airport operation safety, a safety performance indicator system is established by using a forward analysis process and backward analysis process. These safety performance indicators are categorized into four types: unsafe event type, foundation type, management type, and operation type. A framework used for evaluating airport operation risk is presented. A comprehensive risk evaluation model for airport operation safety is then proposed based on assigning severity weight to each safety performance indicator. Ten months' actual operation data from an airport is used to verify the feasibility and applicability of the comprehensive risk evaluation model. The evaluation results are consistent with the actual operation situation.

5 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: Two ANN-based algorithms (Back propagation (BP) and Self-Organizing Maps) and their applications for the recognition of surface defect on images taken from bridges are discussed and a combined network algorithm with BP and SOM is designed in order to improve the performance in crack image segmentation.
Abstract: In bridge health monitoring, the detection and localization of surface defects are highly important for health condition evaluation. Due to the limitations of manual detection, it is preferable to measure those defects in a more automatic way. Machine learning has been a hot topic in the recent decade, and the contribution of the Artificial Neural Network (ANN), the most widely used family of machine learning models in the image-processing field, is especially remarkable. In this paper, we discuss two ANN-based algorithms (Back Propagation (BP) and Self-Organizing Maps (SOM)) and their applications for the recognition of surface defects in images taken from bridges. Moreover, a combined network algorithm with BP and SOM is designed to improve performance in crack image segmentation, and an analysis of this network is carried out specifically.

5 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: A novel reliability growth model is proposed to model the reliability growth and tracking process to solve the foregoing problems and the results illustrate that the proposed method is more accurate and effective than traditional models.
Abstract: Reliability growth techniques are an effective means to track and predict reliability growth by planning growth paths in advance, and can achieve quantitative improvement in the reliability of a product over a period of time. To evaluate this process, researchers have proposed several extensively used reliability growth models, such as the Duane model and the AMSAA model. However, some of these models still have limitations, such as limited application scope, complicated model parameter calculation, and a delayed tracking process. Applying these models to reliability growth may affect prediction accuracy and tracking efficiency. In this paper, a novel reliability growth model is proposed to model the reliability growth and tracking process and solve the foregoing problems. First, a GA-Elman neural network is chosen for short-term prediction of reliability growth. Second, based on this short-term prediction method, the reliability growth prediction and tracking model is established to achieve real-time tracking of reliability growth. Finally, the proposed predictive model is verified using simulated data and real reliability growth data from the U.S.S. Grampus diesel engine. The results illustrate that the proposed method is more accurate and effective than traditional models.

4 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: The fuzzy comprehensive evaluation method is used to consider the influence of the reliability of the CNC machine tools and the operators on the system reliability, and the results show that this method is suitable for the reliability evaluation of CNC machine tools.
Abstract: The reliability evaluation indices of a computer numerical control (CNC) machine tool system have many levels and characteristics. In this paper, the CNC machine tool system is regarded as a man-machine system, and the fuzzy comprehensive evaluation method is used to consider the influence of the reliability of both the CNC machine tools and the operators on the system reliability. First, an evaluation index system is established and the criteria layer and the indicator layer are determined. Then, the weights of the indicators at each level are calculated by expert scoring. Finally, a comprehensive evaluation score is obtained through step-by-step evaluation. Taking the JW4050AL machining center as an example, the results show that the reliability of this type of machine tool is at a high level, which is consistent with the actual situation. The results show that this method is suitable for the reliability evaluation of CNC machine tools.
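The step-by-step fuzzy comprehensive evaluation reduces to composing a weight vector with a membership matrix. A minimal sketch with hypothetical indicator weights and expert-scoring memberships (not the paper's JW4050AL data):

```python
import numpy as np

# Hypothetical indicator weights from expert scoring (sum to 1):
# machine reliability, operator reliability, environment.
weights = np.array([0.4, 0.35, 0.25])

# Membership of each indicator in the grades (high, medium, low),
# e.g. the fraction of experts assigning each grade.
R = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
])

# Weighted-average fuzzy operator M(*, +): B = w composed with R.
B = weights @ R
grade = ['high', 'medium', 'low'][int(np.argmax(B))]
```

For a multi-level index system this composition is applied bottom-up: each criterion's result vector B becomes a row of the next level's membership matrix.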

4 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: Through the comparison of different roads, this work can identify the worst resilient roads that cannot recover soon from the congestion during the rush hours and these identified roads with minimal resilience can be the targets of traffic improvement in the corresponding reliability management.
Abstract: As the lifeline system of a city, transportation systems may be degraded for various reasons, leading to uncertainty in the reliability of traffic operation. While different reliability measures have been proposed for city traffic, it remains challenging to characterize how congested roads recover in daily traffic operation. Based on the concept of resilience and the use of real-time traffic data, we study the resilience of roads during daily traffic. Through the comparison of different roads, we can identify the worst resilient roads that cannot recover soon from congestion during the rush hours. These identified roads with minimal resilience can be the targets of traffic improvement in the corresponding reliability management.
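One simple resilience proxy consistent with this idea is the time a road's speed stays below a fraction of its free-flow speed before recovering. The threshold, speed profiles and road names below are invented for illustration and are not the paper's actual measure or data:

```python
import numpy as np

def recovery_time(speed, free_flow, threshold=0.8):
    """Time steps from the first congestion onset (speed below threshold *
    free_flow) until performance first recovers above that threshold."""
    ratio = np.asarray(speed, dtype=float) / free_flow
    below = np.where(ratio < threshold)[0]
    if below.size == 0:
        return 0                      # never congested
    start = below[0]
    after = np.where(ratio[start:] >= threshold)[0]
    return int(after[0]) if after.size else len(ratio) - start

# Hypothetical 5-minute average speeds (km/h) over a rush hour.
road_a = [55, 54, 30, 28, 35, 50, 56]   # recovers quickly
road_b = [55, 30, 25, 24, 26, 28, 30]   # stays congested
worst = 'B' if recovery_time(road_b, 60) > recovery_time(road_a, 60) else 'A'
```

Ranking all monitored roads by this recovery time would surface the least resilient ones as candidates for traffic improvement.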

4 citations


Proceedings Article•DOI•
01 Oct 2018
TL;DR: This paper proposes a control strategy with environment identification to minimize cost while achieving the effect of an expensive multiline lidar, using computer vision and deep learning trained on existing data sets.
Abstract: As sensor accuracy and controller capability keep improving, there is more room for developing the perception of the road environment and the operation of Connected Automated Vehicles in complex traffic conditions. In this paper, we propose a control strategy with environment identification to minimize cost while achieving the effect of an expensive multiline lidar. We use computer vision and deep learning to train on existing data sets. More specifically, we train an efficient neural network on the German Traffic Sign Recognition Benchmark and KITTI data sets, respectively, to realize classification of traffic signs and detection of vehicles, and functions of OpenCV are used to identify and locate traffic identification lines. To plan and make decisions on the driving route, a vehicle driving simulator based on Model Predictive Control is also used to collect, control and train the data. Finally, our method is validated practically through a case study with data from Udacity's Self-Driving Car Nanodegree project and road scenes in real life.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: Based on the analysis of actual environmental conditions, the main failure mechanisms and the corresponding sensitive stresses of the metering module are determined, which provide the basis for the smart meter's reliability index verification test design.
Abstract: The Failure Mode, Mechanism and Effect Analysis (FMMEA) method is a reliability analysis method that investigates the possible failure mechanisms and their failure modes for each component of the product, then determines how each failure mechanism affects the other components. This article uses the FMMEA method to analyze the failure mechanism of the smart meter's metering module. Based on the analysis of actual environmental conditions, the main failure mechanisms and the corresponding sensitive stresses of the metering module are determined, which provide the basis for the smart meter's reliability index verification test design.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: In this article, a statistical analysis and classification of scheduled commercial flight occurrences during 2008-2017 were performed using the classification criteria of the CAST/ICAO Common Taxonomy Team (CICTT), and preventive measures proposed for Enterprise Safety Culture (ESC), Security Management System (SMS), Crew and Ground Staff (CaGS), Environment Factor (EF) and Physical Entity (PE).
Abstract: Since the 1950s, aircraft have become a secure method of transportation and their role in commercial services has continuously expanded, as reflected in the rapid growth of passenger and cargo traffic. Due to its advantages, such as being comfortable and quick, commercial aviation occupies a unique position in the global transportation structure. However, civil aviation accidents occur annually, resulting in heavy losses. Using the classification criteria of the CAST/ICAO Common Taxonomy Team (CICTT), a statistical analysis and classification of scheduled commercial flight occurrences during 2008-2017 were performed. This period included 105 accidents and 457 incidents. For the types of accidents with high incidence and death rates, the causes were analyzed using the "2-4" Model, and preventive measures were proposed for Enterprise Safety Culture (ESC), Security Management System (SMS), Crew and Ground Staff (CaGS), Environment Factor (EF) and Physical Entity (PE).

Proceedings Article•DOI•
01 Oct 2018
TL;DR: This paper proposes a service-based testing framework for NFV platform performance evaluation, under workloads and fault loads, and takes advantage of the Microservice Architecture to integrate some existing open-source testing frameworks.
Abstract: Availability is considered a critical factor of Network Function Virtualization (NFV) platforms. Many existing NFV testing tools with fault injection features aim only at availability, while more and more attention is now being paid to the performance degradation that occurs when failures happen. Moreover, performance data on hardware/software resources, such as CPU consumption and memory usage, must be collected during performance and fault-tolerance testing. To fulfill the requirements above, this paper proposes a service-based testing framework for NFV platform performance evaluation under workloads and fault loads. By taking advantage of the Microservice Architecture, the proposed framework integrates some existing open-source testing frameworks, including performance testing tools, fault injection tools, and monitoring tools, and builds a more comprehensive testing scenario. A case study is conducted on Clearwater, a widely used open-source NFV application, to validate the efficiency and usability of the proposed framework.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: This study investigates the failure mechanism of the data storage system of a supercomputer and introduces an age-based cost-minimization preventive maintenance (PM) model, and establishes an optimization model to identify the optimal PM interval with the goal of minimizing the long-run average cost.
Abstract: In this study we investigate the failure mechanism of the data storage system of a supercomputer and introduce an age-based cost-minimization preventive maintenance (PM) model. The data storage system of a supercomputer consists of hundreds of storage nodes, and each storage node contains several independently and identically distributed nonrepairable hard disks (HDs). Based on the data storage mechanism of the HDs and the failure mechanism of the storage nodes, the k-out-of-n:F system is used to model the data storage nodes. The lifetime of all HDs is assumed to be exponentially distributed. The age-based PM policy is to replace the failed HDs every PM interval or upon system failure. We establish an optimization model to identify the optimal PM interval with the goal of minimizing the long-run average cost while meeting the requirements for system reliability. Our numerical case study shows the application of the present model and method, and analyzes the relationship between the long-run average cost and PM interval.
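The optimization can be illustrated with a classic age-replacement cost-rate formulation for a single storage node: the node is a k-out-of-n:F system of i.i.d. exponential HDs, and the long-run average cost rate is minimized over the PM interval T. This is a simplified sketch of the approach, not the paper's exact model; all rates, costs and k/n values below are hypothetical:

```python
import numpy as np
from math import comb, exp

def sys_reliability(t, n=10, k=3, lam=1e-4):
    """k-out-of-n:F node survives while fewer than k of its n HDs have failed
    (i.i.d. exponential HD lifetimes with rate lam)."""
    p_fail = 1.0 - exp(-lam * t)
    return sum(comb(n, j) * p_fail**j * (1 - p_fail)**(n - j) for j in range(k))

def cost_rate(T, cp=1.0, cf=10.0, n=10, k=3, lam=1e-4):
    """Age-replacement long-run cost rate: planned PM at T costs cp,
    replacement upon system failure costs cf (renewal-reward argument)."""
    ts = np.linspace(0.0, T, 200)
    R = np.array([sys_reliability(t, n, k, lam) for t in ts])
    mean_cycle = np.sum(0.5 * (R[1:] + R[:-1]) * np.diff(ts))  # E[cycle length]
    return (cp * R[-1] + cf * (1.0 - R[-1])) / mean_cycle

# Grid search for the cost-minimizing PM interval (hours).
Ts = np.linspace(500, 20000, 100)
best_T = Ts[np.argmin([cost_rate(T) for T in Ts])]
```

With cf much larger than cp and the system's increasing failure rate, the cost rate has an interior minimum: too-frequent PM wastes planned replacements, too-rare PM pays the failure premium.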

Proceedings Article•DOI•
Tingyang Gu, Minyan Lu, Luyi Li
01 Oct 2018
TL;DR: This paper focuses on runtime models used for analysis and evaluation of quality attributes of self-adaptive software, and introduced two types of typical construction methods and their general construction processes.
Abstract: Self-adaptive software has the capability to sense a change in its environment and its own behaviour, and then adjust itself accordingly during runtime to meet the desired requirements. Analysing and evaluating quality attributes is popular in self-adaptive software research. In recent years, several studies have proposed runtime models which analyse and evaluate quality attributes of self-adaptive software. This paper focuses on runtime models used for analysis and evaluation of quality attributes of self-adaptive software. Firstly, self-adaptive software, runtime models, analysis and evaluation of quality attributes based on runtime models, and related concepts are introduced. Studies describing runtime models are investigated and the meaning of runtime models used in analysis and evaluation of quality attributes of self-adaptive software is clarified. Two types of typical construction methods and their general construction processes are described. Runtime models were analysed and categorized considering multiple aspects, including type, modelling language, application scenarios, and relationship between runtime models and quality attributes. The extension mechanisms for runtime models were also analysed and extracted into two types. The weaknesses of current research are listed and analysed, with future research directions suggested.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: The failure prediction method based on the Time Series Model proposed in this paper is proved to be feasible and can predict the spacecraft load's failure state by comparing predictions with the normal data.
Abstract: As the Chinese aerospace industry develops rapidly, a great number of spacecraft have been launched into space. As these craft are far beyond human reach, we can hardly predict their failure state except by analyzing temperature data transmitted with their remote sensing information. In this paper, a failure prediction method is put forward on the basis of the Time Series Model and the temperature data of the spacecraft load. First, the load's real-time temperature data are tested by White Noise Analysis to check the validity of the model. Then stationarity analysis and differencing are performed on the data to obtain a smooth time series. Next, the parameters of the ARIMA model can be determined according to the time series. Finally, the load's future temperature can be predicted by the model, and the spacecraft load's failure state can be predicted by comparing the prediction with the normal data. At the end of this paper, a remote sensing camera's real-time temperature data are used as an example to verify this method. After comparison with the real situation of the load, the failure prediction method based on the Time Series Model proposed in this paper is proved to be feasible.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: The Software Failure Modes and Effects Analysis method was used to analyze the safety weaknesses of the LPAR software system, to determine the cause of the safety failure and to propose measures for improvement.
Abstract: This study investigates the safety problems for a Large Phased Array Radar (LPAR) software system. Firstly, the Software Fault Tree Analysis method was used to analyze the safety of the LPAR software system, and to identify any problems that may cause safety failure of the software system. Secondly, the problem of safety analysis was transformed into a Bayesian network risk assessment model. GeNIe software was used for the simulation and a risk assessment was performed which identified the safety weaknesses in the system. Finally, the Software Failure Modes and Effects Analysis method was used to analyze the safety weaknesses of the LPAR software system, to determine the cause of the safety failure and to propose measures for improvement.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: The present study adopted the principles of the quality management cycle to propose a procedure based on quality function deployment for product quality and reliability improvement and a case study of a common household appliance was used to illustrate the method.
Abstract: This study outlines the potential of the quality function deployment method as an effective tool in determining product quality and reliability improvement. The main failure modes and causes can be obtained from the quality warranty data statistical analysis and translated into customer requirements. The quality function deployment method converts customer requirements into improvement measures. Through quantitative analysis of the relationship between customer requirements and engineering measures, the degree of importance of engineering measures can be obtained and key measures determined. The quality function deployment method utilises a series of planning matrices (houses of quality). The present study adopted the principles of the quality management cycle to propose a procedure based on quality function deployment for product quality and reliability improvement. Finally, a case study of a common household appliance was used to illustrate the method.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: In this article, the effects of flow rate and printing head speed on UTS were examined and an effective Design of Experiments (DOE) method was applied to determine the settings of parameters to obtain the optimal UTS.
Abstract: Mechanical properties are important indexes of the quality of Fused Deposition Modeling (FDM) parts, and Ultimate Tensile Strength (UTS) is one of the most frequently considered mechanical properties. In the manufacturing process of FDM parts, UTS is affected by many controllable process parameters, such as raster orientation, fill rate, layer thickness, head speed, flow rate and so on. The effects of some parameters have been studied in detail, but others have not. In this paper, the effects of flow rate and printing head speed on UTS are examined. An effective Design of Experiments (DOE) method is applied to determine the settings of parameters that obtain the optimal UTS. The two process parameters are set to twenty-four different combinations of six levels of printing head speed and four levels of flow rate in the manufacturing process, and FDM specimens of Polylactic Acid (PLA) are manufactured according to the test standard. Five specimens are manufactured under each condition. UTS values of the specimens are measured by experimental tensile testing. The response surface model and Signal-to-Noise Ratio (SNR) are applied to analyze these UTS data. As a result, flow rate has a significant effect on UTS but printing head speed does not show a significant effect in this case, and optimal settings of flow rate and printing head speed are given.
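The larger-the-better SNR used in Taguchi-style analysis of such UTS data is straightforward to compute. The two flow-rate settings and the five measurements per condition below are hypothetical, not the paper's results:

```python
import numpy as np

def snr_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio, in dB:
    -10 log10( mean(1/y^2) ). Higher SNR = stronger, more consistent UTS."""
    values = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / values**2))

# Hypothetical UTS measurements (MPa), five specimens per flow-rate setting.
uts_flow_90 = [48.2, 47.9, 48.5, 47.6, 48.1]
uts_flow_110 = [52.3, 51.8, 52.6, 52.0, 52.4]

better = 110 if snr_larger_is_better(uts_flow_110) > snr_larger_is_better(uts_flow_90) else 90
```

Computing this SNR per parameter combination and comparing level means is how a DOE analysis separates significant factors (flow rate, here) from insignificant ones (head speed).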

Proceedings Article•DOI•
01 Oct 2018
TL;DR: This paper model sequential attacks, which involve multiple sequence-dependent hazardous actions for a successful attack and explores a Markov-based method to estimate the occurrence probability of security risks for systems subject to the sequential attacks.
Abstract: One of the biggest challenges during the overall promotion of computer industry is the security risk issue. Most of the existing approaches for quantifying security risks are based on simple multiplications of frequencies and quantitative consequences of hazard occurrence without considering dependencies among the hazards. In this paper, we model sequential attacks, which involve multiple sequence-dependent hazardous actions for a successful attack. We also explore a Markov-based method to estimate the occurrence probability of security risks for systems subject to the sequential attacks. The method is demonstrated through a detailed case study where Trojan attacks in the banking application are modeled and analyzed.
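A Markov model of such a sequence-dependent attack can be evaluated with standard absorbing-chain machinery. The states and transition probabilities below are invented for a banking-Trojan-style scenario and are not taken from the paper's case study:

```python
import numpy as np

# States: 0 = start, 1 = Trojan delivered, 2 = credentials stolen,
# 3 = funds transferred (attack success, absorbing), 4 = detected (absorbing).
# Hypothetical per-step transition probabilities between attack stages.
P = np.array([
    [0.2, 0.6, 0.0, 0.0, 0.2],
    [0.0, 0.3, 0.5, 0.0, 0.2],
    [0.0, 0.0, 0.2, 0.5, 0.3],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

# Absorption probabilities via the fundamental matrix N = (I - Q)^-1.
Q = P[:3, :3]            # transient-to-transient block
Rm = P[:3, 3:]           # transient-to-absorbing block
N = np.linalg.inv(np.eye(3) - Q)
B = N @ Rm               # B[i, j]: P(absorbed in state 3+j | start in i)
p_success = B[0, 0]      # probability the full attack sequence succeeds
```

Because each hazardous action is conditioned on the previous stage succeeding, this captures the dependency structure that a simple frequency-times-consequence product ignores.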

Proceedings Article•DOI•
01 Oct 2018
TL;DR: In this article, the variation of spring torque under different stresses is taken as the degradation of the spring, and reliability analysis of the springs' experimental data based on the degradation distribution is performed using the stochastic process degradation modeling method and the generalized degradation modeling method respectively.
Abstract: As basic components, springs are widely used in the aerospace, mechanical and electrical fields. Due to their various functions, such as buffering, energy storage and polarization, springs are playing an increasingly important role in various fields. In the long-term storage process, due to environmental factors such as temperature, the spring degrades, manifested as the stress relaxation phenomenon: as the storage time increases, the elasticity of the spring slowly declines, its load capacity slowly decreases, and the torque changes when the spring is released. In this paper, the variation of spring torque under different stresses is taken as the degradation of the spring, and reliability analysis of the springs' experimental data based on the degradation distribution is performed using the stochastic process degradation modeling method and the generalized degradation modeling method respectively. The advantages and disadvantages of the different methods are given, providing a reliable and effective way to analyze the degradation process of springs.

Proceedings Article•DOI•
01 Oct 2018
TL;DR: A multi-domain expert rating assignment optimization model based on grey prediction theory is proposed, suitable for the reliability allocation of large complex systems with multiple influence factors.
Abstract: The reliability allocation of a product distributes the required system reliability among the components of the system. In view of the many factors influencing the reliability allocation process, it is necessary to simplify the analysis and give priority to the important factors. A multi-domain expert rating assignment optimization model based on grey prediction theory is proposed. The model analyzes the relationship between the number of experts and the final reliability allocation results. By establishing an equivalent evaluation matrix, the least influential allocation factors are omitted, and the reliability of the system is redistributed. Taking the reliability allocation of a certain aero-engine as an example, by comparing the results of the reliability allocation before and after optimization, we obtain a better reliability allocation result while minimizing the influence on the allocation outcome. The new model is suitable for the reliability allocation of large complex systems with multiple influence factors. It is mainly applied to select and eliminate the minimally influential factors in the process of reliability allocation, which has practical engineering value.
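Grey prediction typically rests on the GM(1,1) model, which fits an exponential trend to the one-time accumulated series. A minimal sketch with an invented score series; the data and single-step forecast are illustrative only, not the aero-engine case:

```python
import numpy as np

def gm11_fit(x0):
    """Grey GM(1,1): fit (a, b) on the accumulated series x1 and return
    a predictor for the original series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])            # background (mean) values
    Bm = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(Bm, x0[1:], rcond=None)[0]

    def predict(k):
        # x1_hat(k) = (x0[0] - b/a) e^{-a k} + b/a; x0_hat is its difference.
        x1_hat = lambda j: (x0[0] - b / a) * np.exp(-a * j) + b / a
        return x1_hat(k) - x1_hat(k - 1)

    return predict

series = [2.87, 3.28, 3.34, 3.77, 3.99]   # hypothetical expert rating series
pred = gm11_fit(series)(5)                # forecast the next (6th) value
```

GM(1,1) is suited to exactly the small-sample expert-rating data such allocation models work with, where classical statistical fits are unreliable.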

Proceedings Article•DOI•
01 Oct 2018
TL;DR: A fault propagation model covering shared-input failures and hardware failures in IMA systems is proposed; the fault propagation characteristics caused by failures of different shared elements are obtained, and a safety-importance-based reliability allocation for sensors is explored.
Abstract: Integrated Modular Avionics (IMA) is one of the core landmark technologies of large aircraft. By integrating physical resources and functions, an IMA system reduces aircraft weight, improves performance and cuts life-cycle costs, which is of great significance. At the same time, however, this sharing and integration gives rise to a series of reliability and safety issues. Because traditional reliability and safety analysis methods struggle to capture the complex effects of shared-element failures in IMA systems, this study builds, on the basis of the IMA system structure and its failure modes, a fault propagation model covering shared-input failures and hardware failures. Using this model, the reachability matrix is calculated and topological and non-topological factors are analyzed together, yielding the fault propagation characteristics of the IMA system under failures of different shared elements. On this basis, preliminary ideas on allocating sensor reliability according to safety importance are explored.
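
The reachability-matrix step can be sketched in a few lines (with a made-up four-node topology, not the paper's IMA model): Warshall's algorithm turns the direct fault-propagation adjacency matrix into a reachability matrix, from which the set of items affected by a shared element's failure can be read off.

```python
import numpy as np

def reachability(adj):
    """Transitive closure of a directed fault-propagation graph via Warshall's
    algorithm; adj[i][j] = 1 means a fault in node i directly reaches node j."""
    n = len(adj)
    r = np.array(adj, dtype=bool)
    for k in range(n):
        r |= np.outer(r[:, k], r[k, :])   # add paths that pass through node k
    return r.astype(int)

# Hypothetical model: node 0 is a shared sensor feeding partitions 1 and 2,
# and partition 2 drives display 3.
adj = [[0, 1, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
R = reachability(adj)
affected_by_0 = [j for j in range(4) if R[0][j]]   # everything the shared sensor can corrupt
```

Here the shared sensor's failure reaches nodes 1, 2 and 3, while a partition-local failure in node 1 propagates nowhere, which is the kind of asymmetry the paper's importance ranking builds on.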

Proceedings Article•DOI•
01 Oct 2018
TL;DR: The multi-echelon inventory allocation of civil aircraft spare parts with lateral transshipments and importance degree is validated to be feasible and effective and the optimal configuration quantity for each spare part is illustrated.
Abstract: This paper proposes a method for multi-echelon inventory allocation of civil aircraft spare parts that accounts for lateral transshipments and importance degree, in order to investigate the influence of importance degree on spare-parts inventory allocation. Firstly, the procedure of multi-echelon optimal inventory allocation for civil aircraft spare parts is elaborated, considering importance degree and lateral transshipments. Secondly, an inventory optimization model is constructed with fleet availability as the objective function and the inventory system's total cost and the fill rate as constraints. Thirdly, the marginal analysis method is applied to solve the model. Lastly, the spare parts of a civil aircraft door are selected as the study object and their optimal inventory allocation is investigated. The results give the optimal configuration quantity for each spare part, together with the total inventory cost and fleet availability obtained while satisfying the constraints. Comparison with the results of the normal lateral-transshipments model validates that the proposed allocation method is feasible and effective.
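
The marginal-analysis step can be sketched as a greedy loop (an illustrative Poisson-demand fill-rate model with made-up part data, not the paper's availability model): each iteration stocks one unit of the part whose importance-weighted fill-rate gain per unit cost is largest, until the budget is exhausted.

```python
import math

def marginal_allocation(parts, budget):
    """Greedy marginal analysis. Each part is a dict with 'lam' (Poisson demand
    over the resupply lead time), 'cost' and 'importance'. Adding one spare to a
    part holding s units raises its fill rate by P(demand == s)."""
    stock = [0] * len(parts)
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, p in enumerate(parts):
            if spent + p['cost'] > budget:
                continue
            gain = math.exp(-p['lam']) * p['lam'] ** stock[i] / math.factorial(stock[i])
            ratio = p['importance'] * gain / p['cost']
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:                 # nothing affordable or no further gain
            return stock, spent
        stock[best] += 1
        spent += parts[best]['cost']
```

Importance degree enters as a simple weight on the marginal gain, so a highly important part wins a stock unit even when its raw fill-rate gain is smaller.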

Proceedings Article•DOI•
01 Oct 2018
TL;DR: A simulation method based on an extended object-oriented Petri net (EOOPN) is proposed to obtain an approximate solution for time-redundant PMS with multiple dependent missions.
Abstract: In a phased mission system (PMS), several missions sometimes need to be completed within the mission time, and these missions are not always independent because they share common components. Moreover, within a phase the system may only need to keep working for a minimal length of time shorter than the phase duration, so time redundancy exists in the phase. Most existing works on PMS do not consider these two factors. This paper proposes a simulation method based on an extended object-oriented Petri net (EOOPN) to obtain an approximate solution for such time-redundant PMS with multiple dependent missions. A three-level modeling procedure with five general EOOPN models is defined to describe the mission execution process, and a simplified example illustrates the use of the method.
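
The EOOPN models themselves are beyond an abstract, but the time-redundancy idea can be illustrated with a plain Monte Carlo sketch (a deliberate simplification, not the paper's method): a single non-repairable component with an exponential lifetime is shared across all phases, and a phase of duration T with required minimum working time tau succeeds as long as the component survives at least tau of it.

```python
import random

def simulate_pms(phases, mtbf, runs=20000, seed=1):
    """Monte Carlo estimate of mission success for a phased mission with time
    redundancy. `phases` is a list of (T, tau) pairs with tau <= T; the shared
    component's lifetime is exponential with the given MTBF."""
    random.seed(seed)
    successes = 0
    for _ in range(runs):
        life = random.expovariate(1.0 / mtbf)   # one lifetime shared by all phases
        t, ok = 0.0, True
        for T, tau in phases:
            if life < t + tau:                  # died before the minimum working time
                ok = False
                break
            t += T                              # the phase still occupies its full duration
        successes += ok
    return successes / runs
```

For a single phase this has the closed form exp(-tau / MTBF), which gives a quick sanity check on the simulator.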

Proceedings Article•DOI•
01 Oct 2018
TL;DR: A way to use state variables containing a large amount of component fault information to predict system faults is proposed; it relies on a Restricted Boltzmann Machine (RBM) and its derived algorithms, which have excellent information filtering, component analysis and feature extraction capabilities.
Abstract: The high efficiency and small size of the switching power amplifier (SPA) make it more attractive than other types of amplifier, and it has been widely utilized. Effective fault prognosis of the SPA is essential for improving system reliability. This paper proposes a way to use state variables that carry a large amount of component fault information to predict system faults. The method relies on the Restricted Boltzmann Machine (RBM) and its derived algorithms, which have excellent information filtering, component analysis and feature extraction capabilities. A classification RBM model was constructed and its classification performance was tested on data sets. It performs well in avoiding over-fitting and local optima while reducing the complexity of the machine learning process.
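
As a rough, self-contained illustration of the RBM machinery the abstract invokes (a generic Bernoulli RBM with mean-field CD-1, not the authors' classification RBM or their SPA data), the sketch below learns to reconstruct two binary patterns:

```python
import numpy as np

def train_rbm(data, n_hidden=4, lr=0.2, epochs=3000, seed=0):
    """Bernoulli RBM trained with mean-field one-step contrastive divergence."""
    rng = np.random.default_rng(seed)
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_v, b_h = np.zeros(n_vis), np.zeros(n_hidden)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        p_h0 = sig(data @ W + b_h)            # positive phase: hidden probabilities
        p_v1 = sig(p_h0 @ W.T + b_v)          # mean-field reconstruction
        p_h1 = sig(p_v1 @ W + b_h)            # negative phase
        W += lr * (data.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (data - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    recon = sig(sig(data @ W + b_h) @ W.T + b_v)
    return W, recon

patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
W, recon = train_rbm(patterns)
```

The learned hidden units act as the feature extractors the abstract mentions; a classification RBM additionally attaches label units to the same energy function.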

Proceedings Article•DOI•
01 Oct 2018
TL;DR: This paper synthesizes existing studies into a detailed analysis of the differences and relationships between functional safety and information security, and provides a preliminary description and framework for achieving safety and security integration.
Abstract: Functional safety and industrial control information security are two important aspects of guaranteeing the complete stability of a manufacturing process; they are inseparable in elements such as the control system, communication and software. Traditionally, independent functional safety standards and information security standards are applied in many high-risk industries. However, with the growth of intelligent manufacturing and the continuing electronisation, digitisation and informatisation of the production process, the functional safety and information security of complicated interconnected systems have become a challenge. On one hand, the connection between functional safety and information security is so close that it is difficult to clearly separate the two. On the other hand, new contradictions and conflicts may occur when safety and security problems are considered independently. Therefore, how to consider functional safety and information security cooperatively and comprehensively remains an open problem. At present, the published literature contains some research and theory on the coordination or integration of functional safety and information security. This paper synthesizes these existing studies into a detailed analysis of the differences and relationships between functional safety and information security. The concepts of integrated risk assessment and an integrated lifecycle model are also proposed. Finally, a preliminary description and framework for achieving safety and security integration are provided.

Proceedings Article•DOI•
Xiaoxi Liu, Yan Song, He Zongke, Kaiwei Wang, Kai Li 
01 Oct 2018
TL;DR: This work proposes an assessment model of continuous working ability (CWA), based on minimal order statistics, to ensure the stability and reliability of a radar.
Abstract: Radars are the eyes and ears of military equipment, and the continuous working ability of a radar is key to guaranteeing combat effectiveness. This work proposes an assessment model of continuous working ability (CWA) based on minimal order statistics to ensure the stability and reliability of a radar. The relationship between CWA and mean time between failures (MTBF) is analyzed to reduce the time and cost of reliability testing. Expanded CWA models incorporating maintainability and testability are then developed, and with these models the CWA of a radar is verified in light of its maintainability or testability. Finally, the relationships between CWA design, basic reliability, maintainability and testability are examined, and methods for improving CWA are introduced.
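
For series units with exponential lifetimes, the order-statistics link between CWA and MTBF is simple: the first failure among n units is the minimum of n exponential lifetimes, itself exponential with rate n/MTBF. A hedged sketch with illustrative numbers (not the paper's radar data):

```python
import math

def cwa_time(mtbf, n_units, confidence):
    """Longest continuous working time t with P(no failure in [0, t]) >= confidence,
    for n_units series units with i.i.d. exponential lifetimes of the given MTBF.
    The minimum of n exponentials is Exp(n / MTBF), so solving
    exp(-n t / MTBF) = confidence gives t = -MTBF * ln(confidence) / n."""
    return -mtbf * math.log(confidence) / n_units
```

This is why a higher required confidence or a larger unit count shortens the guaranteed continuous working time for a fixed MTBF.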

Proceedings Article•DOI•
01 Oct 2018
TL;DR: The effectiveness of the reliability assessment proposed in this paper is tested with a simulation, and an actual case about a machine tool guideway wear testing is studied.
Abstract: An important issue in validating high-reliability, high-cost mechanical parts by testing is reliability assessment from small samples. Since the number of samples should be as small and the tests as short as possible, we combine statistical approaches with physics-of-failure models to estimate the reliability of products subject to degradation mechanisms. To improve the accuracy of small-sample reliability assessment, virtual samples are first generated from the physics-of-failure model using a virtual-sample generation method; only wear failure is studied in this paper. Then, a performance degradation model based on a Normal process is used to assess reliability. The effectiveness of the proposed reliability assessment is tested with a simulation, and an actual case of machine tool guideway wear testing is studied.
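
A minimal sketch of the Normal-process assessment step (illustrative parameters, not the guideway's fitted values): with linear mean wear mu*t and variance growing as sigma^2*t, reliability at time t is the probability the wear path is still below the failure limit.

```python
import math

def degradation_reliability(t, limit, mu, sigma):
    """R(t) = P(X(t) < limit) for a Gaussian degradation process with
    X(t) ~ N(mu * t, sigma**2 * t): linear mean wear, variance growing with t."""
    if t <= 0:
        return 1.0
    z = (limit - mu * t) / (sigma * math.sqrt(t))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF at z
```

Reliability passes 0.5 exactly when the mean wear reaches the limit, and the virtual samples described above would be used to tighten the estimates of mu and sigma before evaluating this curve.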