Showing papers in "Journal of Intelligent Manufacturing in 2014"


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed models can predict the bead width and height with reasonable accuracy, and that the neural network model outperforms the second-order regression model owing to its greater capacity for approximating nonlinear processes.
Abstract: The single weld bead geometry has critical effects on the layer thickness, surface quality, and dimensional accuracy of metallic parts in the layered deposition process. The present study highlights the application of a neural network and a second-order regression analysis for predicting bead geometry in robotic gas metal arc welding for rapid manufacturing. A series of experiments was carried out by applying a central composite rotatable design. The results demonstrate that the proposed models can predict the bead width and height with reasonable accuracy, and that the neural network model performs better than the second-order regression model owing to its greater capacity for approximating nonlinear processes. The neural network model can efficiently be used to predict the desired bead geometry with high precision for the adaptive slicing principle in layer additive manufacturing.
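
As a rough illustration of the second-order regression side of the paper, the sketch below fits a quadratic response-surface model to a handful of invented welding settings; the input variables, data, and library choice are assumptions, not the authors' experimental design.

```python
# Minimal sketch of a second-order (quadratic) regression model for bead
# geometry; variable names and data are illustrative, not the paper's
# actual central composite design.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical process inputs: wire feed rate, welding speed, arc voltage.
X = np.array([[5.0, 0.30, 20.0],
              [6.0, 0.35, 22.0],
              [7.0, 0.40, 24.0],
              [8.0, 0.45, 26.0],
              [9.0, 0.50, 28.0]])
y_width = np.array([4.1, 4.6, 5.0, 5.3, 5.9])  # measured bead width (mm)

# Second-order model: all linear, interaction, and squared terms.
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y_width)

# Predict bead width for an unseen parameter setting.
print(model.predict(quad.transform([[6.5, 0.38, 23.0]])))
```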

216 citations


Journal ArticleDOI
TL;DR: The possibility of generating such a system by applying a combination of Cluster Analysis and Supervised Machine Learning to product state data along the manufacturing programme is presented.
Abstract: Increasing market demand for higher product and process quality and efficiency forces companies to think of new and innovative ways to optimize their production. For high-tech manufacturing products, even slight variations of the product state during production can lead to costly and time-consuming rework or even scrappage. Describing an individual product's state along the entire manufacturing programme, including all relevant information involved, e.g., for in-process adjustments of process parameters, can be one way to meet the quality requirements and stay competitive. Ideally, the gathered information can be analyzed directly and, in case of an identified critical trend or event, adequate action, such as an alarm, can be triggered. Traditional methods based on modelling cause-effect relations reach their limits due to the rapidly increasing complexity and high dimensionality of modern manufacturing programmes. There is a need for new approaches that can cope with this complexity and high dimensionality while generating applicable results with reasonable effort. Within this paper, the possibility of generating such a system by applying a combination of Cluster Analysis and Supervised Machine Learning to product state data along the manufacturing programme is presented. After elaborating on the key aspects of the approach, its applicability to the identified problem in an industrial environment is discussed briefly.
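
The two-stage idea (unsupervised clustering of product state data, then supervised learning against quality outcomes) can be sketched as follows; the features, labels, and scikit-learn models are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch: cluster high-dimensional product state data into
# regimes, then learn which regimes precede rework or scrap.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical product state vectors (e.g. dimensions, forces, temperatures).
states = rng.normal(size=(500, 8))

# Stage 1: Cluster Analysis on the product state data.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(states)

# Stage 2: Supervised Machine Learning against quality labels (e.g. from
# end-of-line tests); the labels here are synthetic for illustration.
labels = (states[:, 0] + states[:, 3] > 1.0).astype(int)
features = np.column_stack([states, clusters])
clf = RandomForestClassifier(random_state=0).fit(features, labels)

# The classifier can now score in-process states and trigger an alarm.
print(clf.predict(features[:3]))
```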

190 citations


Journal ArticleDOI
TL;DR: This paper focuses on the design of multiobjective evolutionary algorithms (MOEAs) to solve a variety of scheduling problems, introducing fitness assignment mechanisms and performance measures for solving multiple objective optimization problems.
Abstract: Scheduling is an important tool for a manufacturing system, where it can have a major impact on the productivity of a production process. Finding an optimal solution to scheduling problems gives rise to complex combinatorial optimization problems, most of which are NP-hard. In this paper, we focus on the design of multiobjective evolutionary algorithms (MOEAs) to solve a variety of scheduling problems. First, we introduce fitness assignment mechanisms and performance measures for solving multiple objective optimization problems, together with evolutionary representations and hybrid evolutionary operations designed specifically for scheduling problems. We then apply these EAs to different types of scheduling problems, including the job shop scheduling problem (JSP), flexible JSP, automatic guided vehicle (AGV) dispatching in flexible manufacturing systems (FMS), and integrated process planning and scheduling (IPPS). Through a variety of numerical experiments, we demonstrate the effectiveness of these hybrid EAs (HEAs) across a wide range of manufacturing scheduling applications. The paper also summarizes a classification of scheduling problems and illustrates how to design EAs for each type, providing guidance for designing effective EAs for practical manufacturing scheduling problems, which are very complex and almost always combinations of different typical scheduling problems.

166 citations


Journal ArticleDOI
TL;DR: An overview of recent advances in the field is provided in order to highlight the new trends in solution methodology and ideas for future research are presented by identifying gaps in the current literature.
Abstract: The buffer allocation problem is an NP-hard combinatorial optimization problem and an important research issue in designing manufacturing systems. The problem deals with finding optimal buffer sizes to be allocated to buffer areas in a production system to achieve a specific objective. This paper presents a comprehensive survey of the buffer allocation problem in production systems. To provide a systematic review of current relevant research, the studies are first grouped into two categories: 1. reliable production lines, and 2. unreliable production lines. Next, the studies in each group are reviewed based on the topology of the production line, the solution methodologies suggested, and the objective function employed. The aim of this review is twofold. First, it provides an overview of recent advances in the field in order to highlight the new trends in solution methodology. Second, it presents ideas for future research by identifying gaps in the current literature.

134 citations


Journal ArticleDOI
TL;DR: A qualitative study to determine how organizations implement security strategies to protect their information systems in Korea reveals a deeply entrenched preventive mindset, driven by the desire to ensure availability of technology and services, and a comparative ignorance of exposure to business security risks.
Abstract: There is considerable advice in both the research- and practice-oriented literature on the topic of information security. Most of the discussion in the literature focuses on how to prevent security attacks using technical countermeasures, even though there are a number of other viable strategies such as deterrence, deception, detection and response. This paper reports on a qualitative study, conducted in Korea, to determine how organizations implement security strategies to protect their information systems. The findings reveal a deeply entrenched preventive mindset, driven by the desire to ensure availability of technology and services, and a comparative ignorance of exposure to business security risks. Whilst there was some evidence of usage of other strategies, they were also deployed in a preventive capacity. The paper presents a research agenda that calls for research on enterprise-wide multiple strategy deployment with a focus on how to combine, balance and optimize strategies.

121 citations


Journal ArticleDOI
TL;DR: This paper first employs fuzzy synthetic decision to evaluate cloud service providers according to cloud users' preferences, and then adopts the cloud model to compute the uncertainty of cloud services based on monitored QoCS data, yielding an accurate evaluation of QoCS in service-oriented cloud computing.
Abstract: Cloud computing promises to provide high quality, on-demand services with service-oriented architecture. However, cloud services typically come with various levels of service and performance characteristics, which makes the Quality of Cloud Service (QoCS) highly variable. Hence, it is difficult for users to evaluate these cloud services and select those that fit their QoCS requirements. In this paper, we propose an accurate evaluation approach for QoCS in service-oriented cloud computing. We first employ fuzzy synthetic decision to evaluate cloud service providers according to cloud users' preferences, and then adopt the cloud model to compute the uncertainty of cloud services based on monitored QoCS data. Finally, we obtain the evaluation results of QoCS using fuzzy logic control. The simulation results demonstrate that our proposed approach can perform an accurate evaluation of QoCS in service-oriented cloud computing.
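
The fuzzy synthetic decision step can be illustrated in a few lines: a user preference (weight) vector is combined with a fuzzy membership matrix estimated from monitored data. All numbers below are invented for illustration.

```python
# Minimal sketch of fuzzy synthetic evaluation for provider ranking.
import numpy as np

# Weights over QoCS criteria (e.g. availability, latency, cost, security),
# reflecting one user's preferences; they sum to 1.
w = np.array([0.4, 0.3, 0.2, 0.1])

# Membership matrix R: row i = criterion i, column j = degree to which the
# provider's performance on that criterion belongs to grade j
# (e.g. poor / fair / good / excellent), estimated from monitored data.
R = np.array([[0.0, 0.1, 0.5, 0.4],
              [0.1, 0.2, 0.4, 0.3],
              [0.0, 0.3, 0.5, 0.2],
              [0.2, 0.3, 0.3, 0.2]])

# Weighted-average fuzzy operator: B = w . R gives the provider's overall
# membership in each grade; providers can then be ranked by a grade score.
B = w @ R
print(B, B @ np.array([0.25, 0.5, 0.75, 1.0]))
```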

119 citations


Journal ArticleDOI
TL;DR: This research addresses the disassembly line balancing problem (DLBP) using a Monte-Carlo based reinforcement learning technique and results indicate that the reinforcement learning based method is able to perform effectively, even on a complex large scale problem, within a reasonable amount of computational time.
Abstract: Due to increasing environmental concerns, manufacturers are forced to take back their products at the end of the products' useful functional life. Manufacturers explore various options, including disassembly operations, to recover components and subassemblies for reuse, remanufacture, and recycling, to extend the life of materials in use and cut down the disposal volume. However, disassembly operations are problematic due to the high degree of uncertainty associated with the quality and configuration of product returns. In this research we address the disassembly line balancing problem (DLBP) using a Monte-Carlo based reinforcement learning technique. This reinforcement learning approach is tailor-fit to the underlying dynamics of a DLBP. The research results indicate that the reinforcement learning based method is able to perform effectively, even on a complex large scale problem, within a reasonable amount of computational time. The proposed method performed on par with or better than the benchmark methods for solving DLBP reported in the literature. Unlike other methods, which are usually limited to deterministic environments, the reinforcement learning based method is able to operate in deterministic as well as stochastic environments.
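
For readers unfamiliar with the technique, the following toy sketch shows Monte-Carlo control with an epsilon-greedy policy on an invented three-task sequencing problem; it illustrates the family of methods used, not the paper's DLBP formulation.

```python
# Toy Monte-Carlo control: learn a task ordering that minimizes total cost.
import random
from collections import defaultdict

TASKS = (0, 1, 2)
# Hypothetical cost of performing task t when k tasks are already done.
cost = lambda t, k: (t + 1) * (k + 1)

Q = defaultdict(float)   # state-action value estimates (expected cost-to-go)
N = defaultdict(int)     # visit counts
eps = 0.2                # exploration rate

def episode():
    remaining, done, steps = set(TASKS), 0, []
    while remaining:
        s = frozenset(remaining)
        acts = sorted(remaining)
        a = (random.choice(acts) if random.random() < eps
             else min(acts, key=lambda x: Q[(s, x)]))
        steps.append((s, a, cost(a, done)))
        remaining.remove(a)
        done += 1
    ret = 0.0
    for s, a, c in reversed(steps):   # cost-to-go from each state-action
        ret += c
        N[(s, a)] += 1
        Q[(s, a)] += (ret - Q[(s, a)]) / N[(s, a)]  # incremental mean

for _ in range(2000):
    episode()

s0 = frozenset(TASKS)
print("best first task:", min(TASKS, key=lambda a: Q[(s0, a)]))
```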

99 citations


Journal ArticleDOI
TL;DR: A data mining approach to estimate realistic SOTs and unknown dispatching rules from RFID-enabled shopfloor production data is proposed, aiming at improving the quality and stability of production plans and schedules.
Abstract: Radio frequency identification (RFID) has been widely used in the manufacturing field and has created a ubiquitous production environment in which advanced production planning and scheduling (APS) might be enabled. Within such an environment, APS usually requires standard operation times (SOTs) and dispatching rules, which have been obtained from time studies or based on past experience. Wide variations exist and frequently cause serious discrepancies in executing plans and schedules. This paper proposes a data mining approach to estimate realistic SOTs and unknown dispatching rules from RFID-enabled shopfloor production data. The approach is evaluated using real-world data from a collaborating company which has used RFID technology to support its shopfloor production for over seven years. The key impact factors on SOTs are quantitatively examined. A reference table with the mined precise and practical SOTs is established for typical operations, and suitable dispatching rules are identified as managerial implications, aiming at improving the quality and stability of production plans and schedules.
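
The core mining step, turning RFID event logs into realistic SOTs, can be sketched with pandas; the column names and the trimmed-mean estimator are assumptions, not the paper's exact procedure.

```python
# Sketch: estimate standard operation times (SOTs) from RFID-logged events.
import pandas as pd

# Hypothetical RFID event log: one row per finished operation instance.
log = pd.DataFrame({
    "operation": ["drill", "drill", "drill", "mill", "mill", "mill"],
    "start": pd.to_datetime(["2014-01-01 08:00", "2014-01-01 09:00",
                             "2014-01-01 10:00", "2014-01-01 08:30",
                             "2014-01-01 09:30", "2014-01-01 10:30"]),
    "end":   pd.to_datetime(["2014-01-01 08:12", "2014-01-01 09:11",
                             "2014-01-01 10:35", "2014-01-01 08:52",
                             "2014-01-01 09:51", "2014-01-01 10:49"]),
})
log["minutes"] = (log["end"] - log["start"]).dt.total_seconds() / 60

# Robust per-operation SOT: keep observations inside the 10th-90th
# percentile band to drop RFID misreads and interruptions, then average.
def sot(x):
    lo, hi = x.quantile(0.1), x.quantile(0.9)
    return x[(x >= lo) & (x <= hi)].mean()

print(log.groupby("operation")["minutes"].apply(sot))
```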

88 citations


Journal ArticleDOI
TL;DR: This paper addresses the simulation of weld bead geometry in FCAW process using artificial neural networks (ANN) and optimization of process parameters using particle swarm optimization (PSO) algorithm.
Abstract: The flux cored arc welding (FCAW) process is a fusion welding process in which the welding electrode is a tubular wire that is continuously fed to the weld area. It is widely used in industries and shipyards for welding heavy plates. Welding input parameters play a very significant role in determining the quality of a weld joint. This paper addresses the simulation of weld bead geometry in the FCAW process using artificial neural networks (ANN) and the optimization of process parameters using the particle swarm optimization (PSO) algorithm. The input process variables considered here include wire feed rate (F), voltage (V), welding speed (S) and torch angle (A), each having 5 levels. The process output characteristics are weld bead width, reinforcement and depth of penetration. As per the statistical design of experiments with a Taguchi L25 orthogonal array, bead-on-plate weldments were made. The experimental results were fed to the ANN algorithm to establish a relationship between the input and output parameters. The results were then embedded into the PSO algorithm, which optimizes the process parameters subject to the objectives. In this study the objectives considered are maximization of depth of penetration, minimization of bead width and minimization of reinforcement.
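
A minimal PSO loop of the kind used for the final optimization step might look like the sketch below; the surrogate objective stands in for the trained ANN, and all bounds and coefficients are invented for illustration.

```python
# Minimal particle swarm optimization sketch over the four FCAW inputs.
import numpy as np

rng = np.random.default_rng(1)
lo = np.array([2.0, 24.0, 0.20, 70.0])   # F, V, S, A lower bounds (assumed)
hi = np.array([6.0, 32.0, 0.60, 110.0])  # upper bounds (assumed)

def objective(x):
    # Stand-in for the trained ANN: fold -penetration, +bead width,
    # +reinforcement into one scalar to be minimized.
    f, v, s, a = x
    return -(0.8 * f + 0.1 * v) + 0.5 * (f / s) + 0.3 * abs(a - 90) / 90

n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 4))
vel = np.zeros((n, 4))
pbest = pos.copy()
pbest_f = np.apply_along_axis(objective, 1, pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 4)), rng.random((n, 4))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep particles in bounds
    f = np.apply_along_axis(objective, 1, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("near-optimal parameters:", gbest)
```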

85 citations


Journal ArticleDOI
TL;DR: There are strong similarities between these networks in spite of the basic differences in the type of objects that prevent an integral transposition, and this impact will be assessed by means of an analytic model based on a method of continuous approximations.
Abstract: The logistics networks currently formed by supply chains are intertwined but remain heterogeneous and not very interconnected. In computer networks, this stage was overcome with the arrival of the Internet. In this paper we explore the possible analogies and transpositions between computer networks, in particular the Internet, and logistics networks. To this end, a new logistical concept is proposed: the Physical Internet, which aims at the interconnection of networks of logistic services. In fact, there are strong similarities between these networks in spite of the basic differences in the type of objects, which prevent an integral transposition. To illustrate the pertinence of this analogy, the authors illustrate the interconnection potential of logistics networks with a stylised model. In view of the exploratory nature of this work, this impact is assessed by means of an analytic model based on a method of continuous approximations. This illustration provides an indication of the potential inherent in the interconnection of logistics networks.

82 citations


Journal ArticleDOI
TL;DR: In this method, machining processes are divided into a series of activities, and Therblig, as one of the basic concepts of Motion study, is introduced to represent the basic energy demand unit.
Abstract: Energy efficiency has become an important factor that should be included in intelligent manufacturing due to rising energy prices and severe energy shortage issues. Energy demand modeling is the foundation of improving the energy efficiency of manufacturing; therefore, an energy demand modeling methodology for machining processes is proposed. In this method, machining processes are divided into a series of activities, and the Therblig, one of the basic concepts of motion study, is introduced to represent the basic energy demand unit. Moreover, a mathematical model of the energy demand of machining processes is established by linking activities and Therbligs with the machining state. Finally, case studies are performed to illustrate the validity and feasibility of the proposed methodology.
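
The modelling idea reduces to a nested sum, E = Σ over activities of Σ over Therbligs of P·t; the sketch below makes that concrete with invented power ratings and an invented activity decomposition.

```python
# Sketch of Therblig-based energy demand: a machining process is a sequence
# of activities, each decomposed into Therblig-like units with a power
# demand and a duration. All numbers below are assumptions.

# Assumed average power demand per Therblig-like unit (watts).
THERBLIG_POWER = {"standby": 300.0, "spindle": 1200.0,
                  "feed": 150.0, "coolant": 90.0}

# Each activity = list of (therblig, duration in seconds).
activities = {
    "rapid_positioning": [("standby", 4.0), ("feed", 4.0)],
    "rough_cutting":     [("standby", 30.0), ("spindle", 30.0),
                          ("feed", 30.0), ("coolant", 30.0)],
    "finish_cutting":    [("standby", 20.0), ("spindle", 20.0),
                          ("feed", 20.0)],
}

def energy(acts):
    # E = sum over activities of sum over Therbligs of P * t.
    return sum(THERBLIG_POWER[t] * dt
               for steps in acts.values() for t, dt in steps)

print(f"process energy demand: {energy(activities) / 1000:.1f} kJ")
```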

Journal ArticleDOI
TL;DR: A genetic algorithm-based heuristic is presented which results in a significant increase in the speed of finding near-optimal solutions, and a mixed-integer programming model has been developed for the problem.
Abstract: This study deals with the problem of sequencing the feeding tasks of a single mobile robot which provides parts for the feeders of machines on production lines. The mobile robot has to be scheduled in order to prevent stoppages due to lack of parts on the production line. A method based on the characteristics of the feeders and inspired by the (s, Q) inventory system is thus applied to define time windows for the feeding tasks of the robot. The capacity of the robot is also taken into consideration. The performance criterion is to minimize the total traveling time of the robot over a given planning horizon. A genetic algorithm-based heuristic is presented which results in a significant increase in the speed of finding near-optimal solutions. To evaluate the performance of the genetic algorithm-based heuristic, a mixed-integer programming model has been developed for the problem. A case study is implemented at an impeller production line in a real factory, and computational experiments are also conducted to demonstrate the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: From the experimental results, the performance of ABC was much superior: the estimated minimum Ra value was 28, 42, 45, 2 and 0.9 % lower compared to actual machining, regression, artificial neural network (ANN), genetic algorithm (GA) and simulated annealing (SA) respectively.
Abstract: Modern machining processes such as abrasive waterjet (AWJ) are widely used in manufacturing industries nowadays. Optimizing the machining control parameters is essential in order to provide better quality and more economical machining. Previous research reported that the artificial bee colony (ABC) algorithm requires less computation time and offers optimal solutions, owing to its excellent global and local search capability, compared to other soft computing optimization techniques. This research employed the ABC algorithm to optimize the machining control parameters that lead to a minimum surface roughness (Ra) value for AWJ machining. The five machining control parameters optimized using the ABC algorithm are traverse speed (V), waterjet pressure (P), standoff distance (h), abrasive grit size (d) and abrasive flow rate (m). From the experimental results, the performance of ABC was much superior: the estimated minimum Ra value was 28, 42, 45, 2 and 0.9 % lower compared to actual machining, regression, artificial neural network (ANN), genetic algorithm (GA) and simulated annealing (SA) respectively.
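
A compact sketch of the ABC loop is given below; the surrogate Ra model, parameter bounds, and the merged employed/onlooker phase are simplifying assumptions, not the paper's regression or full algorithm.

```python
# Compact artificial bee colony (ABC) sketch minimizing a surrogate Ra model.
import numpy as np

rng = np.random.default_rng(2)
lo = np.array([60.0, 100.0, 2.0, 60.0, 0.3])    # V, P, h, d, m (assumed)
hi = np.array([140.0, 300.0, 5.0, 120.0, 0.7])

def ra(x):  # stand-in roughness model: smoother with high P, low V and h
    v, p, h, d, m = x
    return 2.0 + 0.01 * v - 0.004 * p + 0.2 * h + 0.002 * d - 0.5 * m

SN, limit, iters = 10, 15, 200                   # sources, abandonment limit
food = rng.uniform(lo, hi, (SN, 5))
fit = np.apply_along_axis(ra, 1, food)
trials = np.zeros(SN, dtype=int)

def neighbour(i):
    # Perturb one dimension toward/away from a random other source.
    k, j = rng.integers(SN), rng.integers(5)
    cand = food[i].copy()
    cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
    return np.clip(cand, lo, hi)

for _ in range(iters):
    for i in range(SN):                          # employed/onlooker merged
        cand = neighbour(i)
        f = ra(cand)
        if f < fit[i]:
            food[i], fit[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    for i in np.flatnonzero(trials > limit):     # scout phase
        food[i] = rng.uniform(lo, hi)
        fit[i] = ra(food[i])
        trials[i] = 0

best = fit.argmin()
print("min Ra estimate:", fit[best], "at", food[best])
```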

Journal ArticleDOI
TL;DR: Results show that increased modularity is advantageous for the time-based performance of a supply chain network, whereas decreased modularity yields superiority in terms of cost performance.
Abstract: The benefit of integrating product design decisions and supply chain design decisions has been recognized by researchers. Such integration can facilitate better communication between design teams and operations groups. Consequently, potential supply chain risks can be highlighted and addressed before the launch of a new product. Modularization is one of the most critical elements for both product design and supply chain design decisions as it impacts the assembly sequence and hence the selection of component and module suppliers. However, the impact of modularity level on supply chain performance is still unclear, and thus is the focus of this study. The proposed analytical method incorporates both product design and supply chain design functions, and hence, enables simultaneous consideration of these decisions. The supply chain performances of all two-module and three-module design concepts are fully investigated in an effort to explore the impact of modularity level on supply chain performance. Results show that increased modularity is advantageous for the time-based performance of a supply chain network, whereas decreased modularity yields superiority in terms of cost performance.

Journal ArticleDOI
TL;DR: An enhanced Ant Colony optimization (ACO) algorithm is introduced which will be referred to as ACO II and its performance is compared to other existing and new algorithms including ACO I, MetaRaPS, and SA to prove the superiority of this algorithm.
Abstract: In a previous paper (Arnaout et al. in J Intell Manuf 21:693–701, 2010), an Ant Colony Optimization (ACO I) algorithm was introduced for minimizing the schedule's makespan on unrelated parallel machines with sequence-dependent setup times. Optimal solutions for small instances of this problem were obtained by solving a mixed integer program. However, for larger instances (up to 10 machines and 120 jobs), heuristic and approximate algorithms were necessary to reach solutions in reasonable computational times. ACO I's performance was evaluated by comparing its solutions to solutions obtained using Tabu Search and MetaRaPS (metaheuristic for Randomized Priority Search). While the results indicated that ACO I outperformed the other heuristics, they also showed that MetaRaPS had a better performance when all ratios of N/M (jobs to machines ratio) were considered. In this paper, we introduce an enhanced ACO which will be referred to as ACO II and compare its performance to other existing and new algorithms including ACO I, MetaRaPS, and SA. The extensive and expanded experiments conducted prove the superiority of the enhanced ACO II.

Journal ArticleDOI
TL;DR: The paper reaches the conclusion that the multi-class LAD based fault detection and identification is a promising diagnostic approach in CBM.
Abstract: This paper presents the implementation of a novel multi-class diagnostic technique for the detection and identification of faults based on an approach called logical analysis of data (LAD). LAD is a data mining, artificial intelligence approach based on pattern recognition. In the context of condition based maintenance (CBM), historical data containing condition indices and the state of the machine are the inputs to LAD. After the training and testing phases, LAD generates patterns that characterize the faulty states according to the type of fault, and differentiate between these states and the normal state. These patterns are found by solving a mixed 0–1 integer linear programming problem. They are then used to detect and identify a future unknown state of the equipment. The diagnostic technique has already been tested on several known machine learning datasets. The results proved that the performance of this technique is comparable to other conventional approaches, such as neural networks and support vector machines, with the added advantage of the clear interpretability of the generated patterns, which are rules characterizing the fault types. To demonstrate its merit in fault diagnosis, the technique is used in the detection and identification of faults in power transformers using dissolved gas analysis data. The paper reaches the conclusion that multi-class LAD based fault detection and identification is a promising diagnostic approach in CBM.

Journal ArticleDOI
TL;DR: The new method is named the exponential risk priority number (ERPN); it uses a simple addition function on the exponential forms of S, O, and D to replace the conventional RPN method, which is a mathematical product of the three parameters.
Abstract: Failure modes and effects analysis (FMEA) has been used to identify critical risk events and predict system failures in order to avoid or reduce potential failure modes and their effects on operations. The risk priority number (RPN) is the classical method to evaluate the risk of failure in conventional FMEA. The RPN, which ranges from 1 to 1000, is a mathematical product of three parameters, severity (S), occurrence (O), and detection (D), used to rank and assess the risk of potential failure modes. However, the conventional RPN method has some shortcomings: the RPN elements contain many duplicate numbers, the method violates the assumption of measurement scales, and it does not consider the weights of S, O, and D. In order to remedy these shortcomings of the conventional RPN calculation, this paper presents an easy yet effective method to enhance the risk evaluation capability of FMEA. The new method is named the exponential risk priority number (ERPN); it uses a simple addition function on the exponential forms of S, O, and D to replace the conventional RPN method, which is a mathematical product of the three parameters. Two practical cases are used to demonstrate that the ERPN method can not only resolve some problems of the conventional RPN method but is also able to provide a more accurate and reasonable risk assessment in FMEA.
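
The contrast between the two scoring schemes can be made concrete in a few lines. The exact exponential form and weights belong to the paper; the base-2 variant below is only an assumed illustration of how an additive exponential score removes duplicate rankings that the product form produces.

```python
# Conventional RPN versus an assumed additive-exponential ERPN form.
def rpn(s, o, d):
    return s * o * d                 # conventional: product, ranges 1..1000

def erpn(s, o, d, ws=1.0, wo=1.0, wd=1.0):
    # Assumed base-2 exponential form with optional weights on S, O, D.
    return ws * 2**s + wo * 2**o + wd * 2**d

# The product form yields many duplicate scores for different rating
# combinations; the exponential sum separates them.
print(rpn(1, 4, 5), rpn(2, 2, 5))    # 20 20  -> duplicate ranking
print(erpn(1, 4, 5), erpn(2, 2, 5))  # 50.0 40.0 -> resolved
```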

Journal ArticleDOI
TL;DR: The numerical comparisons show that HSS-MOEA has better efficacy (convergence and distribution) than a generalized Pareto-based scale-independent fitness function based genetic algorithm combined with VEGA, while the efficiency is nearly equivalent.
Abstract: Process planning and scheduling (PPS) is an important and practical topic but a very intractable problem in manufacturing systems. Many research studies have used multiobjective evolutionary algorithms (MOEAs) to solve such problems; however, they cannot achieve satisfactory results in both quality and computational speed. This paper proposes a hybrid sampling strategy-based multiobjective evolutionary algorithm (HSS-MOEA) to deal with the PPS problem. HSS-MOEA tactfully combines the advantages of the vector evaluated genetic algorithm (VEGA) with a sampling strategy based on a new Pareto dominating and dominated relationship-based fitness function (PDDR-FF). The sampling strategy of VEGA prefers the edge regions of the Pareto front, while the PDDR-FF-based sampling strategy tends to converge toward the central area of the Pareto front. These two mechanisms preserve both the convergence rate and the distribution performance. The numerical comparisons show that HSS-MOEA has better efficacy (convergence and distribution) than a generalized Pareto-based scale-independent fitness function based genetic algorithm combined with VEGA, while the efficiency is nearly equivalent. Moreover, the efficacy of HSS-MOEA is also better than that of NSGA-II and SPEA2, and its efficiency is clearly better than theirs.
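
The PDDR-FF idea of scoring individuals from dominance counts can be sketched roughly as below; the exact formula (here, the number of dominating individuals plus a reciprocal term over the number dominated, smaller being better) is an assumption for illustration, not necessarily the authors' definition.

```python
# Sketch of a Pareto dominating/dominated relationship-based fitness.
def dominates(a, b):
    # Minimization on all objectives.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pddr_ff(pop):
    scores = []
    for s in pop:
        q = sum(dominates(t, s) for t in pop)   # individuals dominating s
        p = sum(dominates(s, t) for t in pop)   # individuals s dominates
        # Non-dominated individuals score at most 1; dominated ones exceed 1.
        scores.append(q + 1.0 / (p + 1))
    return scores

pop = [(1, 5), (2, 2), (5, 1), (4, 4)]          # objective vectors
print(pddr_ff(pop))   # (4,4) is dominated by (2,2) -> score 2.0
```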

Journal ArticleDOI
TL;DR: Two multi-objective metaheuristics based on simulated annealing and tabu search to solve the multiple allocation p-hub median problem under intentional disruptions by a bi-level model with two objective functions at the upper level and a single objective at the lower level are developed.
Abstract: The main issue in the p-hub median problem is locating hub facilities and allocating spokes to those hubs in order to minimize the total transportation cost. However, hub facilities may occasionally fail due to disruptions, which can lead to excessive costs. One of the most effective ways to hedge against disruptions, especially intentional disruptions, is to design more reliable hub networks. In this paper, we formulate the multiple allocation p-hub median problem under intentional disruptions as a bi-level model with two objective functions at the upper level and a single objective at the lower level. In this model, the leader aims at identifying the location of hubs so as to minimize normal and worst-case transportation costs. The worst-case scenario is modeled at the lower level, where the follower's objective is to identify the hubs that, if lost, would increase the transportation cost the most. We develop two multi-objective metaheuristics based on simulated annealing and tabu search to solve the problem. Computational results indicate the viability and effectiveness of the proposed algorithms for exploring the non-dominated solutions.

Journal ArticleDOI
TL;DR: A hybrid M5′-genetic programming (M5′-GP) approach is proposed for empirical modelling of the FDM process in an attempt to resolve the issue of improving the trustworthiness of the prediction ability of empirical models on test (unseen) samples.
Abstract: Recent years have seen various rapid prototyping (RP) processes such as fused deposition modelling (FDM) and three-dimensional printing being used for fabricating prototypes, leading to shorter product development times and less human intervention. The literature reveals that the properties of RP-built parts such as surface roughness, strength, dimensional accuracy and build cost are related to, and can be improved by, appropriate settings of the input process parameters. Researchers have formulated physics-based models and applied empirical modelling techniques such as regression analysis and artificial neural networks for the modelling of RP processes. Physics-based models require in-depth understanding of the processes, which is a formidable task due to their complexity. The issue of improving the trustworthiness of the prediction ability of empirical models on test (unseen) samples has received little attention. In the present work, a hybrid M5′-genetic programming (M5′-GP) approach is proposed for empirical modelling of the FDM process in an attempt to resolve this issue of ensuring trustworthiness. This methodology is based on error compensation, achieved using a GP model in parallel with an M5′ model. The performance of the proposed hybrid model is compared to those of support vector regression (SVR) and adaptive neuro-fuzzy inference system (ANFIS) models, and it is found that the M5′-GP model has better goodness of fit than the SVR and ANFIS models.
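
The error-compensation scheme (a second model trained on the first model's residuals, with predictions summed) can be sketched as below; a decision tree and a small MLP stand in for the M5′ model tree and the GP symbolic regressor, so this shows the structure of the hybrid, not the authors' implementation.

```python
# Sketch of residual error compensation with two stacked regressors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 3))   # e.g. layer thickness, raster angle, speed
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=200)

# Base model (stand-in for the M5' model tree).
base = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
residuals = y - base.predict(X)

# Compensation model (stand-in for the GP model) learns the systematic
# part of the base model's error.
comp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                    random_state=0).fit(X, residuals)

def hybrid_predict(Xnew):
    # Hybrid prediction = base prediction + estimated error correction.
    return base.predict(Xnew) + comp.predict(Xnew)

Xtest = rng.uniform(0, 1, (5, 3))
print(hybrid_predict(Xtest))
```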

Journal ArticleDOI
TL;DR: A multi-objective genetic algorithm has been applied to simultaneously optimize service period and product EOL decisions and a case study of a notebook market is provided to show the applications of the model.
Abstract: A sustainable product service system (S-PSS) is regarded as an evolution of existing product development approaches that incorporates services as well as products to achieve sustainable development. Accordingly, several IT manufacturers around the world have developed plans for fulfilling consumers' needs by providing a combination of products and services, together with management of the product at the end of its life (EOL). Hence, it is essential to develop an analytical model that can help manufacturers analyze their S-PSS plans. We model the optimization of both the environmental and economic impacts of the product during the consumption and end-of-life phases from the points of view of the consumer and the manufacturer. A multi-objective genetic algorithm is applied to simultaneously optimize the service period and product EOL decisions. Finally, a case study of a notebook market is provided to show the applications of the model.

Journal ArticleDOI
TL;DR: This paper formally proposes a business correlation model including both quality correlations and selection correlations, and presents an efficient approach for correlation-driven QoS-aware optimal service selection based on a genetic algorithm.
Abstract: A virtual enterprise is an emerging business cooperation model which allows rapid response to unpredictable market behavior and opportunities. For service-oriented enterprises, where computing resources are encapsulated as services and published online, establishing a virtual enterprise can be regarded as a process of service composition. As there are increasing numbers of available services providing similar functionalities but with different quality values, and with potential business correlations among them, it is not trivial to orchestrate a composite service with optimal overall quality of service (QoS). In this paper, we formally propose a business correlation model including both quality correlations and selection correlations, and then present an efficient approach for correlation-driven QoS-aware optimal service selection based on a genetic algorithm. The genetic algorithm is tailored with niching technology, a repair operator and a penalty mechanism. Finally, the effectiveness and efficiency of the approach are demonstrated via empirical studies.

Journal ArticleDOI
TL;DR: A new Petri net extension for modelling workflow activities together with their required resources is presented: resource-aware Petri nets (RAPN), and an intelligent workflow management system for process monitoring and delay prediction is introduced.
Abstract: Nowadays business process management is becoming a fundamental piece of many industrial processes. To manage the evolution of and interactions between business actions it is important to accurately model the steps to follow and the resources needed by a process. Workflows provide a way of describing the order of execution and the dependencies between the constituent activities of business processes. Workflow monitoring can help to improve performance and avoid delays in industrial environments where concurrent processes are carried out. In this article a new Petri net extension for modelling workflow activities together with their required resources is presented: resource-aware Petri nets (RAPN). An intelligent workflow management system for process monitoring and delay prediction is also introduced. Resource-aware Petri nets include time and resources within the classical Petri net workflow representation, facilitating the task of modelling and monitoring workflows. The workflow management system monitors the execution of workflows and detects possible delays using RAPN. In order to test this new approach, different services from a medical maintenance environment have been modelled and simulated.

Journal ArticleDOI
TL;DR: This paper describes two heterarchical architectures which use distributed intelligence for the control (operation scheduling, resource allocation and product routing) of a manufacturing cell and the implementation methodology along with experimental results is presented, emphasizing the advantages of each solution.
Abstract: This paper describes two heterarchical architectures which use distributed intelligence for the control (operation scheduling, resource allocation and product routing) of a manufacturing cell. In these architectures, intelligence is placed at the product level and distributed through the network of workstation computers, respectively. The cost of the embedded specialized hardware required for the first option is still very high. The second option requires the use of Automatic Identification and Data Capture technology close to the product to permit its identification and tracking by the computer system. These technologies enable the automated identification of objects, the collection of data about them, and the storage of that data directly in computer systems. Through this network, the computer system can access or update information in databases about the products and connect with software agents that represent production orders and resources and make joint decisions through negotiation and contract-based rules. The theory of the proposed solutions is based on the holonic manufacturing and product-driven automation concepts. The implementation methodology, along with experimental results, is presented, emphasizing the advantages of each solution.

Journal ArticleDOI
TL;DR: An automatic LED defect detection system based on positioning and image acquisition, appearance feature recognition, and defect classification, whose total recognition rate reached 97.83 %, proving that the proposed detection method can efficiently detect LED chip defects.
Abstract: This study proposed an automatic LED defect detection system to investigate the defects of LED chips. Such defects include fragmented chips, scratch marks and residual gold on the pad area, scratch marks on the luminous zone, and missing luminous zones. The system was based on positioning and image acquisition, appearance feature recognition, and defect classification. The normalized correlation coefficient method was used to locate the chip and acquire its image, and the K-means clustering method was used to distinguish the appearance, pad area, and luminous zone of chips. In terms of pad area detection, histogram equalization was used to enhance the pad image contrast, and statistical threshold selection and morphological closing were applied to handle the impure points in the pad. Feature values of the pad area were then calculated. The optimal statistical threshold separated the luminous zone and background from the substrate. After processing with the closing operation, features of the luminous zone were extracted. Finally, the features of each part were classified by an efficient two-step back-propagation neural network, in which a designed appearance classifier and an internal structure classifier were used for recognition. In the experiments, the total recognition rate reached 97.83 %, proving that the detection method proposed by this study can efficiently detect LED chip defects.
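
The chip-positioning step can be illustrated with OpenCV's implementation of the normalized correlation coefficient; the file names and acceptance threshold below are placeholders.

```python
# Sketch: locate a chip via normalized cross-correlation template matching.
import cv2

image = cv2.imread("wafer_region.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("chip_template.png", cv2.IMREAD_GRAYSCALE)

# TM_CCOEFF_NORMED computes the normalized correlation coefficient at
# every placement of the template over the image.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.8:   # assumed acceptance threshold
    x, y = max_loc
    h, w = template.shape
    chip_roi = image[y:y + h, x:x + w]   # acquired chip image for later stages
    print("chip located at", max_loc, "score", round(float(max_val), 3))
```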

Journal ArticleDOI
TL;DR: This study aims to develop a retrospective DOE data mining that matches potential designs with a huge amount of data automatically collected in semiconductor manufacturing to enable effective and meaningful knowledge extraction from the data.
Abstract: To maintain competitive advantages, the semiconductor industry has strived for continuous technology migrations and quick response to yield excursions. As wafer fabrication has become increasingly complicated at nanoscale technology nodes, many factors, including recipe, process, tool, and chamber, affect yield with a multicollinearity that is hard to detect and interpret. Although design of experiment (DOE) is a cost-effective approach that considers multiple factors simultaneously, it is difficult to follow the design to conduct experiments in real settings. Alternatively, data mining has been widely applied to extract potentially useful patterns for manufacturing intelligence. However, because hundreds of factors must be considered simultaneously to accurately characterize the yield performance of newly released technologies and tools for diagnosis, data mining requires tremendous time for analysis and often generates too many patterns that are hard for domain experts to interpret. To address these needs in real settings, this study aims to develop a retrospective DOE data mining approach that matches potential designs with the huge amount of data automatically collected in semiconductor manufacturing to enable effective and meaningful knowledge extraction from the data. DOE can detect high-order interactions and show how interconnected factors respond to a wide range of values. To validate the proposed approach, an empirical study was conducted in a semiconductor manufacturing company in Taiwan, and the results demonstrated its practical viability.

Journal ArticleDOI
TL;DR: A new hybrid algorithm based on artificial immune systems (AIS) and particle swarm optimization (PSO) theory is proposed for this problem with the objective of makespan minimization and shows great competitiveness and potential.
Abstract: The static job shop scheduling problem (JSSP) is a class of JSSP, a combinatorial optimization problem, under the assumption of no disruptions and previously known knowledge about the jobs and machines. A new hybrid algorithm based on artificial immune systems (AIS) and particle swarm optimization (PSO) theory is proposed for this problem with the objective of makespan minimization. AIS is a metaheuristic inspired by the human immune system. Two of its theories, namely clonal selection and immune network theory, are integrated with PSO in this research. Clonal selection theory builds up the framework of the algorithm, which consists of selection, cloning, hypermutation, memory cell extraction and receptor editing processes. Immune network theory increases the diversity of the antibody set, which represents the solution repertoire. To improve the antibody hypermutation process and accelerate the search procedure, a modified version of PSO is inserted. The proposed algorithm is tested on 25 benchmark problems of different sizes. The results demonstrate the effectiveness of the PSO component and of the specific memory cell extraction process, which is one of the key features of AIS theory. Compared with other popular approaches reported in the existing literature, this algorithm shows great competitiveness and potential, especially for small problems in terms of computation time.
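
A minimal sketch of one clonal-selection generation, the mechanism the framework is built on, is given below; the permutation encoding, swap hypermutation, and toy cost function are generic stand-ins rather than the paper's operators.

```python
# One clonal-selection generation: select, clone proportionally to rank,
# hypermutate clones inversely to quality, keep the best.
import random

def clonal_step(pop, makespan, n_select=5, n_clones=3):
    # pop: list of job permutations; makespan: callable to minimize.
    ranked = sorted(pop, key=makespan)[:n_select]
    next_pop = list(ranked)                          # memory of the best
    for rank, ab in enumerate(ranked):
        for _ in range(n_clones * (n_select - rank)):  # better -> more clones
            clone = ab[:]
            # Hypermutation: worse-ranked antibodies mutate more heavily.
            for _ in range(1 + rank):
                i, j = random.sample(range(len(clone)), 2)
                clone[i], clone[j] = clone[j], clone[i]
            next_pop.append(clone)
    return sorted(next_pop, key=makespan)[:len(pop)]

# Toy fitness: weighted position cost stands in for a JSSP makespan.
cost = lambda perm: sum((k + 1) * j for k, j in enumerate(perm))
pop = [random.sample(range(6), 6) for _ in range(10)]
for _ in range(30):
    pop = clonal_step(pop, cost)
print("best sequence:", pop[0], "cost:", cost(pop[0]))
```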

Journal ArticleDOI
TL;DR: A scheme for the coordination of agents and an architecture to dynamically generate context-aware graphical user interfaces to guide users and control resource allocation based on the state of the VE are proposed.
Abstract: Although virtual enterprises (VE) make it possible for small flexible enterprises to form a collaborative network to respond to business opportunities through dynamic coalition and the sharing of core competencies and resources, they also pose new challenges and issues. The creation of a VE involves dynamically established partnerships between the partners and relies on a flexible coordination scheme. The dynamic organizations formed in VE present a challenge for the development of a new methodology to dynamically allocate resources and deliver the relevant information to the right people at the right time. A key issue is the development of an effective workflow management scheme for VE. Multi-agent systems (MAS) provide a flexible architecture to deal with changes based on the dynamic organization and collaboration of autonomous agents. Despite the extensive studies and research results on MAS, the development of a design methodology to support coordination and operations is critical to the success and adoption of VE. The objectives of this research are to propose a design methodology to facilitate the coordination and development of context-aware workflow management systems and to achieve effective resource allocation for VE based on a MAS architecture. To achieve these objectives, a scheme for the coordination of agents is proposed. Petri net models are used in the coordination scheme to describe workflows and capture resource activities in the VE. The interactions between agents lead to a dynamic workflow model for the VE. Based on the aforementioned model, we propose an architecture to dynamically generate context-aware graphical user interfaces to guide users and control resource allocation based on the state of the VE. An order management example is used throughout this paper to illustrate the proposed design methodology.

Journal ArticleDOI
TL;DR: A real-time quality-monitoring model based on process variation trajectory chart is proposed and a novel equipment synthetic failure probability exponent of machining status nodes in MEPN is established to synthesize equipment’s parameters by using logistic regression to quantitatively analyze the potential-failure and forecast the equipment degradation trend.
Abstract: To ensure the stability of multistage machining processes (MMPs) and improve the quality of machining processes, a real-time quality monitoring and prediction model based on error propagation networks for MMPs is proposed in this paper. As there are complicated interactions among the different stages of MMPs, a machining error propagation network (MEPN) is proposed and its complexity is discussed in order to analyze the correlation among the different stages. On this basis, a real-time quality monitoring model based on the process variation trajectory chart is proposed to monitor the key machining stages extracted from the MEPN. Due to the complexity of the correlations in the MEPN, it is important and necessary to explore its variation propagation mechanism. For this issue, a machining error propagation model of the machining form feature nodes in the MEPN is established with a neuron model, which is solved with a back-propagation neural network. The mapping relationship among the machining errors of quality attributes is described through this node model. Furthermore, a novel equipment synthetic failure probability exponent for the machining status nodes in the MEPN is established, which synthesizes the equipment's parameters using logistic regression to quantitatively analyze potential failures and forecast the equipment degradation trend. Finally, the machining process of a connecting rod is used to verify the proposed method.
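
The logistic-regression step of fusing equipment parameters into a failure probability can be sketched as follows; the health indicators and synthetic training labels are assumptions for illustration.

```python
# Sketch: fuse equipment health indicators into a failure probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Hypothetical monitored parameters: vibration, spindle temperature, wear index.
Xh = rng.normal(size=(300, 3))
# Synthetic historical outcome: 1 = degradation event observed afterwards.
yh = (0.9 * Xh[:, 0] + 0.7 * Xh[:, 2]
      + 0.3 * rng.normal(size=300) > 1.0).astype(int)

model = LogisticRegression().fit(Xh, yh)

# The class-1 probability for the current machine state can be tracked
# over time to forecast the equipment degradation trend.
current_state = np.array([[1.2, 0.1, 0.8]])
print("synthetic failure probability:", model.predict_proba(current_state)[0, 1])
```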

Journal ArticleDOI
TL;DR: The obtained results indicate that the introduced intelligent framework is applicable for optimal design of complex engineering systems where there exists no analytical formulation that describes the phenomenon as well as information of optimal operating parameters.
Abstract: The purpose of the current investigation is to develop a robust intelligent framework to achieve efficient and reliable operating process parameters for the laser solid freeform fabrication (LSFF) process, a recent and ongoing topic of investigation. Firstly, based on the mutable smart bee algorithm (MSBA) and a fuzzy inference system (FIS), two models are developed to identify the clad height (deposited layer thickness) and the melt pool depth as functions of scanning speed, laser power and powder mass. Using the obtained models, the well-known multiobjective evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is used for multi-criterion optimization of the LSFF process. According to the available reported information and also the authors' experiments, it is observed that the obtained Pareto front is not justifiable, since it fails to cover the entire Pareto hyper-volume due to a lack of intensified exploration. To tackle this deficiency, the authors execute a post-optimization process utilizing a competitive unsupervised machine learning approach known as the self-organizing map (SOM) with a cubic spatial topology. The achieved results indicate that this grid-based network is capable of enhancing the intensification of Pareto solutions, since its synaptic weights successfully imitate the characteristics of the non-dominated solutions (optimal values of powder mass, laser power and scanning speed). For extracting the corresponding objective functions of these non-dominated synaptic weights, MSBA-FIS is used again to map the operating parameters to the objective function space. After the termination of the abovementioned procedures, a valuable archive containing a set of non-dominated solutions is obtained, which lets the authors make a deliberate engineering trade-off. Simulation experiments reveal that the proposed intelligent framework is highly capable of coping with complex engineering systems. Besides, it is observed that MSBA is more efficient in evolving the structure of a hierarchical fuzzy inference system in comparison with the classic hierarchical GA-FIS model. This arises from the simple structure of MSBA, which makes it a fast and robust algorithm for handling constrained distributed systems (i.e. the hierarchical FIS in the current investigation). The obtained results also indicate that the introduced intelligent framework is applicable for the optimal design of complex engineering systems where there exists no analytical formulation that describes the phenomenon, nor information about optimal operating parameters.