
Showing papers in "IEEE Transactions on Automation Science and Engineering in 2012"


Journal ArticleDOI
TL;DR: The iterative adaptive dynamic programming algorithm using the globalized dual heuristic programming technique is introduced to obtain, forward-in-time, the optimal controller for a class of unknown discrete-time nonlinear systems, with convergence analysis in terms of the cost function and control law.
Abstract: In this paper, a neuro-optimal control scheme for a class of unknown discrete-time nonlinear systems with a discount factor in the cost function is developed. The iterative adaptive dynamic programming algorithm using the globalized dual heuristic programming technique is introduced to obtain the optimal controller with convergence analysis in terms of cost function and control law. In order to carry out the iterative algorithm, a neural network is constructed first to identify the unknown controlled system. Then, based on the learned system model, two other neural networks are employed as parametric structures to facilitate the implementation of the iterative algorithm, which aims at approximating at each iteration the cost function and its derivatives and the control law, respectively. Finally, a simulation example is provided to verify the effectiveness of the proposed optimal control approach. Note to Practitioners-The increasing complexity of real-world industrial processes inevitably leads to nonlinearity and high dimensionality, and their mathematical models are often difficult to build. How to design the optimal controller for nonlinear systems without knowing the explicit model has become one of the main foci of control practitioners. However, this problem cannot be handled by relying only on the traditional dynamic programming technique because of the "curse of dimensionality". To make things worse, the backward-in-time solution procedure of dynamic programming precludes its wide application in practice. Therefore, in this paper, the iterative adaptive dynamic programming algorithm is proposed to deal with the optimal control problem for a class of unknown nonlinear systems forward-in-time. Moreover, the detailed implementation of the iterative ADP algorithm through the globalized dual heuristic programming technique is also presented by using neural networks.
Finally, the effectiveness of the control strategy is illustrated via simulation study.
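The iterative cost-function update at the heart of such ADP schemes can be sketched on a toy problem. Below, the paper's neural-network approximators are replaced by a lookup table, and the scalar dynamics x' = 0.8x + u, the grids, and the discount factor are all assumptions made for this sketch, not values from the paper.

```python
# Toy forward-in-time value iteration standing in for the neural-network-
# based iterative ADP algorithm; every numeric choice here is illustrative.
GAMMA = 0.9
STATES = [i / 10 for i in range(-20, 21)]   # state grid on [-2, 2]
ACTIONS = [i / 10 for i in range(-10, 11)]  # control grid on [-1, 1]

def f(x, u):                 # assumed system model
    return 0.8 * x + u

def utility(x, u):           # quadratic stage cost
    return x * x + u * u

def nearest(x):              # snap a successor state onto the grid
    x = max(-2.0, min(2.0, x))
    return round(x * 10) / 10

def value_iteration(sweeps=50):
    """V_{i+1}(x) = min_u [U(x,u) + gamma * V_i(f(x,u))]."""
    V = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        V = {s: min(utility(s, u) + GAMMA * V[nearest(f(s, u))]
                    for u in ACTIONS)
             for s in STATES}
    return V

def greedy_action(V, x):
    """Control law recovered from the converged cost function."""
    return min(ACTIONS,
               key=lambda u: utility(x, u) + GAMMA * V[nearest(f(x, u))])
```

Querying `greedy_action` near x = 1 after convergence yields a stabilizing (negative) control, mirroring the cost-function/control-law pair that the paper approximates with neural networks.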

229 citations


Journal ArticleDOI
TL;DR: A cluster tool derived as a not-always-schedulable system by the existing methods is shown to be always-schedulable by using the proposed novel method.
Abstract: Because of residency time constraints and activity time variation of cluster tools, it is very difficult to operate such integrated semiconductor manufacturing equipment. This paper addresses their real-time operational issues. To characterize their schedulability and achieve the minimum cycle time at their steady-state operation, Petri net (PN) models are developed to describe them, which are very compact and independent of wafer flow pattern. Owing to the proposed models, scheduling cluster tools is converted into determining robot wait times. A two-level operational architecture is proposed to include an offline periodic schedule and a real-time controller. The former determines when a wafer should be placed into a process module for processing, while the latter regulates robot wait times online in order to reduce the effect of activity time variation on wafer sojourn times in process modules. Therefore, the system can adapt to activity time variation. A cluster tool derived as a not-always-schedulable system by the existing methods is shown to be always-schedulable by using the proposed novel method.
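The wait-time regulation idea can be illustrated with a small sketch. Assuming a wafer that finishes processing at time `p` may reside in its module for at most `delta` longer, a real-time controller must place the robot pick inside the window [p, p + delta]; the function and its numbers below are a hypothetical illustration, not the paper's control law.

```python
# Hypothetical residency-window check: the robot's pick time must fall in
# [p, p + delta], so the controller inserts a wait when the robot is early
# and reports failure when the window has already been missed.
def robot_wait(p, delta, robot_ready):
    """Return the wait the robot should insert before unloading, or None
    if the residency window is already missed (unschedulable event)."""
    if robot_ready <= p:
        return p - robot_ready           # wait until processing ends
    if robot_ready <= p + delta:
        return 0.0                       # pick immediately, still in window
    return None                          # window missed
```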

201 citations


Journal ArticleDOI
TL;DR: In this paper, a Petri net (PN) model and a control policy are presented to derive closed-form schedulability conditions and an algorithm is developed to obtain an offline periodic schedule.
Abstract: With the wafer residency time constraint of cluster tools in semiconductor manufacturing, activity time variation can make an originally feasible schedule infeasible. Thus, it is difficult to schedule them, and schedulability is a vitally important issue. With bounded activity time variation considered, this paper addresses their real-time scheduling issues and conducts their schedulability analysis. A Petri net (PN) model and a control policy are presented. Based on them, this paper derives closed-form schedulability conditions. If a tool is schedulable, an algorithm is developed to obtain an offline periodic schedule. This schedule together with the control policy forms a real-time schedule. It is optimal in terms of cycle time and can be analytically computed, which represents a significant advance in this area.

176 citations


Journal ArticleDOI
TL;DR: The main objective of this work is to design optimal ATO speed profiles of metro trains taking into account the energy recovered from regenerative brake in order to minimize the net energy at substations.
Abstract: Traffic operation has a significant impact on energy consumption in metro lines, and thus it is important to analyze strategies to minimize it. In lines equipped with Automatic Train Operation (ATO) systems, the traffic regulation system selects one ATO speed profile online from a preprogrammed set of optimized speed profiles. Previous works only minimize the energy demanded by the train at the pantograph, without considering in detail the energy savings measured at substations due to regenerative energy. The main objective of this work is to design optimal ATO speed profiles of metro trains taking into account the energy recovered from regenerative braking, in order to minimize the net energy at substations. A model of a train with an on-board energy storage device, as well as a network model for estimating the energy recovered by the train, is presented. Different scenarios are analyzed to assess the achievable energy savings due to possible investments such as installing power inverters or storage devices, and the energy savings due to the optimal design of ATO speed profiles are estimated. A real line of the Madrid Underground has been considered, obtaining comparative results that facilitate an evaluation of the most advantageous scenario and possible investment. A future application would be the optimal design of driving for the newest train signaling system, communications-based train control (CBTC). CBTC improves train operation and control through continuous communication with each train. However, its features have not yet been fully exploited to reduce energy consumption. The proposed method could be applied to design and execute in real time the most energy-efficient driving according to the traffic and electrical situation of the line.
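The substation-level bookkeeping behind this objective can be sketched in a few lines. The function below uses hypothetical quantities: net energy drawn from the substation is traction energy minus the share of braking energy actually reused (returned through an inverter or buffered on board); the rest is dissipated.

```python
# Illustrative net-energy accounting at the substation; the receptivity
# parameter (share of regenerated energy actually reused) is an assumed
# scenario knob, e.g. higher with inverters or on-board storage installed.
def net_substation_energy(traction_kwh, braking_kwh, receptivity):
    """receptivity in [0, 1]: fraction of regenerated energy reused."""
    if not 0.0 <= receptivity <= 1.0:
        raise ValueError("receptivity must lie in [0, 1]")
    return traction_kwh - receptivity * braking_kwh
```

Comparing scenarios then reduces to evaluating this net energy for each candidate speed profile and receptivity level.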

176 citations


Journal ArticleDOI
Raffaello D'Andrea1
TL;DR: The author reflects on the automation and robotics innovations that Kiva Systems developed that allow the company to successfully develop and deploy warehouse systems with hundreds - sometimes thousands - of autonomous mobile robots.
Abstract: The author reflects on the automation and robotics innovations that Kiva Systems developed, which allow the company to successfully develop and deploy warehouse systems with hundreds - sometimes thousands - of autonomous mobile robots, and describes some of the remaining research questions.

166 citations


Journal ArticleDOI
TL;DR: This paper presents a novel compliant parallel XY micromotion stage driven by piezoelectric actuators (PZT) designed using a symmetric 4-PP structure in which double four-bar flexure is chosen as the prismatic joint.
Abstract: This paper presents a novel compliant parallel XY micromotion stage driven by piezoelectric actuators (PZTs). To obtain complete kinematic decoupling and good stiffness performance, the stage is designed using a symmetric 4-PP structure in which a double four-bar flexure is chosen as the prismatic joint. The matrix method is employed to establish the compliance model of the mechanism. Based on this model, static analysis is carried out, followed by dynamic analysis. The dimensions of the mechanism are optimized using the particle swarm optimization (PSO) algorithm in order to maximize the natural frequencies. Finite-element analysis (FEA) results indicate that the mechanism has an almost linear force-deflection relationship, a high first natural frequency (720.52 Hz), and an ideal decoupling property. To cope with nonlinearities such as the hysteresis that exists in PZTs, the control system is constructed from a proportional-integral-derivative (PID) feedback controller with a feedforward compensator based on the Preisach model. The fabricated prototype has a 19.2 μm × 8.8 μm rectangular workspace with coupling less than 5%. The results of the closed-loop tests show that the XY stage can achieve positioning, tracking, and contouring tasks with small errors.
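The dimension-optimization step can be sketched with a minimal particle swarm. Each particle below is a candidate pair of flexure dimensions (t, l), and the score is a hypothetical surrogate for the first natural frequency; in the real design the score would come from the compliance model or FEA, so everything numeric here is an assumption.

```python
import random

# Minimal PSO sketch (not the authors' implementation): maximize a
# hypothetical frequency surrogate over assumed dimension bounds.
random.seed(0)

def surrogate_frequency(t, l):
    # stiffer (thicker, shorter) flexures -> higher frequency; the
    # quadratic penalty is arbitrary, to give the demo an interior optimum
    return 1000 * t / l - 50 * t * t

def pso(iters=60, n=20, lo=(0.2, 20.0), hi=(1.0, 60.0)):
    dim = 2
    pos = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=lambda p: surrogate_frequency(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            if surrogate_frequency(*pos[i]) > surrogate_frequency(*pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=lambda p: surrogate_frequency(*p))[:]
    return gbest
```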

155 citations


Journal ArticleDOI
TL;DR: A novel method for optimizing the energy consumption of robotic manufacturing systems that embeds detailed evaluations of robots' energy consumptions into a scheduling model of the overall system and shows that there exists a real possibility for a significant reduction of theEnergy consumption in comparison to state-of-the-art scheduling approaches.
Abstract: Reduction of energy consumption is important for reaching a sustainable future. This paper presents a novel method for optimizing the energy consumption of robotic manufacturing systems. The method embeds detailed evaluations of robots' energy consumptions into a scheduling model of the overall system. The energy consumption for each operation is modeled and parameterized as a function of the operation execution time, and the energy-optimal schedule is derived by solving a mixed-integer nonlinear programming problem. The objective function for the optimization problem is then the total energy consumption for the overall system. A case study of a sample robotic manufacturing system and an experiment on an industrial robot are presented. They show that there exists a real possibility for a significant reduction of the energy consumption in comparison to state-of-the-art scheduling approaches.
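The core trade-off can be sketched in miniature: model each operation's energy as a function of its execution time and pick durations that minimize the total subject to a deadline. The coefficients, the form E(t) = a/t + b·t (dynamic vs. standby terms), and the brute-force search below are all illustrative stand-ins for the paper's MINLP formulation.

```python
import itertools

# Toy energy-vs-time scheduling: hypothetical per-operation coefficients
# and a brute-force search in place of a mixed-integer nonlinear solver.
OPS = [(4.0, 0.5), (9.0, 0.25), (2.0, 1.0)]   # (a, b) per operation
CHOICES = [1.0, 2.0, 3.0, 4.0]                # candidate durations

def energy(a, b, t):
    return a / t + b * t

def best_schedule(deadline):
    best = None
    for durs in itertools.product(CHOICES, repeat=len(OPS)):
        if sum(durs) > deadline:
            continue
        tot = sum(energy(a, b, t) for (a, b), t in zip(OPS, durs))
        if best is None or tot < best[0]:
            best = (tot, durs)
    return best
```

Tightening the deadline forces faster, more energy-hungry operation settings, which is exactly the tension the scheduling model exploits.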

115 citations


Journal ArticleDOI
TL;DR: This paper investigates iterative deepening A* algorithms (rather than branch and bound) using new lower bound measures and heuristics, and shows that this approach is able to solve much larger instances of the container relocation problem in a time frame that is suitable for practical application.
Abstract: The container relocation problem, where containers that are stored in bays are retrieved in a fixed sequence, is a crucial port operation. Existing approaches using branch and bound algorithms are only able to optimally solve small cases in a practical time frame. In this paper, we investigate iterative deepening A* algorithms (rather than branch and bound) using new lower bound measures and heuristics, and show that this approach is able to solve much larger instances of the problem in a time frame that is suitable for practical application. We also examine a more difficult variant of the problem that has been largely ignored in existing literature.
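A typical lower-bound measure in this line of work can be sketched directly (this is the standard blocking-count bound from the general literature, not necessarily the authors' exact measure): every container sitting above one with an earlier retrieval order must be relocated at least once, so counting such containers bounds the number of moves from below, which is what an IDA* search prunes against.

```python
# Blocking-container lower bound for the container relocation problem.
def blocking_lower_bound(bays):
    """bays: list of stacks listed bottom-up; values are retrieval
    priorities (lower number = retrieved earlier)."""
    blocking = 0
    for stack in bays:
        for i, c in enumerate(stack):
            # c must be relocated if some container beneath it leaves first
            if any(below < c for below in stack[:i]):
                blocking += 1
    return blocking
```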

112 citations


Journal ArticleDOI
Qingsong Xu1
TL;DR: Experimental results not only confirm the superiority of the dual-servo stage over the standalone coarse stage but also reveal the effectiveness of the proposed decoupling design.
Abstract: Dual-servo systems (DSSs) are highly desirable in micro-/nanomanipulation when high positioning accuracy, long stroke motion, and high servo bandwidth are required simultaneously. This paper presents the design and development of a new flexure-based dual-stage nanopositioning system. A coarse voice coil motor (VCM) and a fine piezoelectric stack actuator (PSA) are adopted to provide long stroke and quick response, respectively. A new decoupling design is carried out to minimize the interference behavior between the coarse and fine stages by taking into account actuation schemes as well as guiding mechanism implementations. Both analytical results and finite-element model (FEM) results show that the system is capable of a travel range of over 10 mm while possessing a compact structure. To verify the decoupling property, a single-input-single-output (SISO) control scheme is realized on a prototype to demonstrate the performance of the DSS without considering the interference behavior. Experimental results not only confirm the superiority of the dual-servo stage over the standalone coarse stage but also reveal the effectiveness of the proposed decoupling design.
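The coarse/fine division of labor in such dual-servo stages can be sketched as a command split. The resolution and stroke figures below are assumptions for illustration only: the coarse stage takes the bulk of the motion and the fine stage absorbs the residual, provided the residual fits inside the fine stroke.

```python
# Hypothetical allocation of a position command between a coarse VCM
# stage (long stroke, coarse granularity) and a fine PSA stage.
VCM_RESOLUTION = 1e-6     # 1 um coarse positioning granularity (assumed)
PSA_STROKE = 20e-6        # 20 um fine range (assumed)

def allocate(target):
    coarse = round(target / VCM_RESOLUTION) * VCM_RESOLUTION
    fine = target - coarse
    if abs(fine) > PSA_STROKE:
        raise ValueError("residual exceeds fine-stage stroke")
    return coarse, fine
```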

103 citations


Journal ArticleDOI
TL;DR: A new method based on mean change detection is proposed to estimate the probability density functions of the univariate process variable in the normal and abnormal conditions and a systematic design of alarm systems is investigated based on the three performance indices and the tradeoffs among them.
Abstract: The performance of a univariate alarm system can be assessed in many cases by three indices, namely, the false alarm rate (FAR), missed alarm rate (MAR), and averaged alarm delay (AAD). First, this paper studies the definition and computation of the FAR, MAR, and AAD for the basic mechanism of alarm generation solely based on a trip point, and for the advanced mechanism of alarm generation by exploiting alarm on/off delays. Second, a systematic design of alarm systems is investigated based on the three performance indices and the tradeoffs among them. The computation of FAR, MAR, and AAD and the design of alarm systems require the probability density functions (PDFs) of the univariate process variable in the normal and abnormal conditions. Thus, a new method based on mean change detection is proposed to estimate the two PDFs. Numerical examples and an industrial case study are provided to validate the obtained theoretical results on the FAR, MAR and AAD, and to illustrate the proposed performance assessment and alarm system design procedures.
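For the basic trip-point mechanism, two of the three indices have direct closed forms once the two PDFs are known. The sketch below assumes Gaussian PDFs purely for illustration: for a high alarm, FAR is the normal-state probability above the trip point and MAR is the abnormal-state probability below it.

```python
import math

# FAR/MAR for a high alarm with trip point `trip`, under assumed
# Gaussian PDFs for the normal and abnormal conditions.
def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def far(trip, mu_normal, sigma_normal):
    return 1.0 - gauss_cdf(trip, mu_normal, sigma_normal)

def mar(trip, mu_abnormal, sigma_abnormal):
    return gauss_cdf(trip, mu_abnormal, sigma_abnormal)
```

Raising the trip point lowers FAR while raising MAR, which is the tradeoff the paper's design procedure (together with AAD and on/off delays) navigates.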

94 citations


Journal ArticleDOI
TL;DR: A Petri net model is developed to describe the real-time scheduling problem of single-arm cluster tools with wafer residency time constraints and bounded activity time variation and an efficient algorithm is proposed to find a periodical schedule if it is schedulable.
Abstract: It is very challenging to schedule cluster tools subject to wafer residency time constraints and activity time variation. This work develops a Petri net model to describe the system and proposes a two-level real-time scheduling architecture. At the lower level, a real-time control policy is used to offset the activity time variation as much as possible. At the upper level, a periodical off-line schedule is derived under the normal condition. This work presents the schedulability conditions and scheduling algorithms for an off-line schedule. The schedulability conditions can be analytically checked. If they are satisfied, an off-line schedule can be analytically found. The off-line schedule together with a real-time control policy forms the real-time schedule for the system. It is optimal in terms of cycle time minimization. Illustrative examples are given to show the application of the proposed approach. Note to Practitioners-This paper discusses the real-time scheduling problem of single-arm cluster tools with wafer residency time constraints and bounded activity time variation. With a Petri net model, schedulability is analyzed and schedulability conditions are presented by using analytical expressions. Then, an efficient algorithm is proposed to find a periodical schedule if it is schedulable. Such a schedule is optimal in terms of cycle time and can adapt to bounded activity time variation. Therefore, it is applicable to the scheduling and real-time control of cluster tools in semiconductor manufacturing plants.

Journal ArticleDOI
TL;DR: A decoupled and prioritized path planning approach is developed by sequentially applying a partially observable Markov decision process algorithm to every particle that needs to be transported, with goal locations assigned via an iterative version of a maximum bipartite graph matching algorithm.
Abstract: Automated transport of multiple particles using optical tweezers requires real-time path planning to move them in coordination by avoiding collisions among themselves and with randomly moving obstacles. This paper develops a decoupled and prioritized path planning approach by sequentially applying a partially observable Markov decision process algorithm on every particle that needs to be transported. We use an iterative version of a maximum bipartite graph matching algorithm to assign given goal locations to such particles. We then employ a three-step method consisting of clustering, classification, and branch and bound optimization to determine the final collision-free paths. We demonstrate the effectiveness of the developed approach via experiments using silica beads in a holographic tweezers setup. We also discuss the applicability of our approach and challenges in manipulating biological cells indirectly by using the transported particles as grippers.
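The goal-assignment step can be sketched by brute force for a handful of particles (the paper uses an iterative maximum bipartite graph matching algorithm; exhaustive search over permutations serves the same purpose at toy size). Coordinates below are hypothetical.

```python
import itertools, math

# Assign each particle a goal so that total travel distance is minimized.
def assign_goals(particles, goals):
    n = len(particles)
    best_perm, best_cost = None, math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum(math.dist(particles[i], goals[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```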

Journal ArticleDOI
TL;DR: Simulation and experimental results show that the proposed RL-based adaptive controller (RLAC) outperforms both the DAC and FLAC.
Abstract: This paper exploits reinforcement learning (RL) for developing real-time adaptive control of tip trajectory and deflection of a two-link flexible manipulator handling variable payloads. The proposed adaptive controller consists of a proportional derivative (PD) tracking loop and an actor-critic-based RL loop that adapts the actor and critic weights in response to payload variations while suppressing the tip deflection and tracking the desired trajectory. The actor-critic-based RL loop uses recursive least square (RLS)-based temporal difference (TD) learning with an eligibility trace and an adaptive memory to estimate the critic weights, and a gradient-based estimator for estimating the actor weights. Tip trajectory tracking and tip deflection suppression performances of the proposed RL-based adaptive controller (RLAC) are compared with those of a nonlinear regression-based direct adaptive controller (DAC) and a fuzzy learning-based adaptive controller (FLAC). Simulation and experimental results show that the RLAC outperforms both the DAC and FLAC.
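The critic-side learning rule can be sketched for a linear critic. The paper uses a recursive-least-squares variant with adaptive memory; plain gradient TD(λ) with an eligibility trace is shown here for brevity, and the features, rewards, and gains are illustrative.

```python
# Linear-critic TD(lambda) with an eligibility trace (simplified stand-in
# for the paper's RLS-based TD learning).
def td_lambda(episode, n_features, alpha=0.1, gamma=0.95, lam=0.8):
    """episode: list of (features, reward, next_features); features are
    lists of length n_features. Returns learned critic weights."""
    w = [0.0] * n_features
    z = [0.0] * n_features               # eligibility trace
    for phi, r, phi_next in episode:
        v = sum(wi * fi for wi, fi in zip(w, phi))
        v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
        delta = r + gamma * v_next - v   # temporal-difference error
        z = [gamma * lam * zi + fi for zi, fi in zip(z, phi)]
        w = [wi + alpha * delta * zi for wi, zi in zip(w, z)]
    return w
```

On a constant-reward, constant-feature stream the weights approach the discounted value r/(1 - γ) = 20, the fixed point of the TD error.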

Journal ArticleDOI
TL;DR: The VM automation levels are defined, the concept of automatic virtual metrology (AVM), and an AVM system for automatic and fab-wide VM deployment are developed and successfully deployed in a fifth-generation thin-film-transistor-liquid-crystal-display factory in Chi Mei Optoelectronics, Taiwan.
Abstract: Virtual Metrology (VM) is a method to conjecture manufacturing quality of a process tool based on data sensed from the process tool and without physical metrology operations. VM has now been designated by the International SEMATECH Manufacturing Initiative and International Technology Roadmap for Semiconductors as one of the focus areas for the next-generation factory realization roadmap of the semiconductor industry. This paper defines the VM automation levels, proposes the concept of automatic virtual metrology (AVM), and develops an AVM system for automatic and fab-wide VM deployment. The example of automatic VM model refreshing for chemical vapor deposition (CVD) tools is also illustrated in this paper. The AVM system has been successfully deployed in a fifth-generation thin-film-transistor-liquid-crystal-display (TFT-LCD) factory in Chi Mei Optoelectronics (CMO), Taiwan.

Journal ArticleDOI
TL;DR: A new macroscopic network-flow model is established where fire, smoke, and psychological factors can evoke a crowd's desire to escape (the desired flow rate), together with a guidance approach that can evacuate more people more rapidly by preventing or mitigating potential disorder and blocking at bottleneck passages.
Abstract: In building emergency evacuation, the perception of hazards can stress crowds, evoke their competitive behaviors, and trigger disorder and blocking as they pass through narrow passages (e.g., a small exit). This is a serious concern threatening evacuees' survivability and egress efficiency. How to optimize crowd guidance while considering such effects is an important problem. Based on advanced microscopic pedestrian models and simulations, this paper establishes a new macroscopic network-flow model where fire, smoke, and psychological factors can evoke a crowd's desire to escape—the desired flow rate. Disorder and blocking occur when the desired flow rate exceeds the passage capacity, resulting in a drastic decrease of crowd movement in a nonlinear and random fashion. To effectively guide crowds, a divide-and-conquer approach is developed based on groups to reduce computational complexity and to reflect psychological findings. Egress routes for individual groups are optimized by using a novel combination of stochastic dynamic programming and the rollout scheme. These routes are then coordinated so that limited passage capacities are shared to meet the total need for joint movement. Numerical testing and simulation demonstrate that, compared with a strategy of merely using nearest exits, our solution can evacuate more people more rapidly by preventing or mitigating potential disorder and blocking at bottleneck passages.
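The key nonlinearity, throughput collapsing rather than saturating when desire exceeds capacity, can be sketched as a one-line passage model. The functional form and penalty below are hypothetical; the paper derives its model from microscopic simulations.

```python
# Macroscopic passage model sketched from the description: throughput
# follows demand up to capacity; past capacity, disorder drives effective
# throughput BELOW capacity instead of saturating at it.
def passage_throughput(desired_rate, capacity, disorder_penalty=0.3):
    if desired_rate <= capacity:
        return desired_rate
    overload = (desired_rate - capacity) / capacity
    return capacity / (1.0 + disorder_penalty * overload)
```

This is why routing crowds so that desired rates stay at or below passage capacities evacuates more people than simply sending everyone to the nearest exit.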

Journal ArticleDOI
TL;DR: This work proposes a novel probability analysis method for disassembly cost with random removal times and differing removal labor costs, presenting typical probability evaluation models of disassembly cost.
Abstract: Disassembly is a systematic method to separate an end-of-life product into its constituent parts and components. However, the disassembly process of products can experience great uncertainty due to a variety of unpredictable factors. To deal with such uncertainty, this work proposes a novel probability analysis method of disassembly cost with random removal time and different removal labor cost. According to different constraints and actual execution of disassembly, it presents typical probability evaluation models of disassembly cost. Moreover, a solution algorithm based on stochastic simulation is used to solve the proposed probability models. Some numerical examples are given to illustrate the proposed concepts and the effectiveness of the proposed algorithm.
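The stochastic-simulation idea can be sketched directly: draw random removal times, price them at task-specific labor rates, and average. The tasks, rates, and triangular distributions below are all hypothetical.

```python
import random

# Monte Carlo estimate of expected disassembly cost under random removal
# times; all numbers are illustrative, not from the paper.
random.seed(42)
TASKS = [
    # (min, mode, max) removal time in minutes, labor rate per minute
    ((1.0, 2.0, 4.0), 0.8),
    ((0.5, 1.0, 3.0), 1.2),
    ((2.0, 3.0, 6.0), 0.5),
]

def sample_total_cost():
    return sum(random.triangular(lo, hi, mode) * rate
               for (lo, mode, hi), rate in TASKS)

def expected_cost(n=20000):
    return sum(sample_total_cost() for _ in range(n)) / n
```

The analytic expectation here is the sum of triangular means times rates, about 5.5, which the simulation recovers.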

Journal ArticleDOI
TL;DR: This paper presents a multiscale assembly and packaging system (MAPS) comprising 20 degrees of freedom (DoFs) that can be arranged in several reconfigurable micromanipulation modules depending on the specific task.
Abstract: Reliable manufacturability has always been a major issue in the commercialization of complex and heterogeneous microsystems. Though successful for simpler and monolithic microdevices such as the accelerometers and pressure sensors of early days, conventional surface micromachining techniques and in-plane mechanisms do not suffice to address the manufacturing of today's wide range of microsystem designs. This has led to the evolution of microassembly as an alternative and enabling technology which can, in principle, build complex systems by assembling heterogeneous microparts of comparatively simpler design, thus reducing the overall footprint of the device and providing high structural rigidity in a cost-efficient manner. However, unlike macroscale assembly systems, microassembly does not enjoy the flexibility of ready-to-use manipulation systems or standard off-the-shelf components. System-specific designs of microparts and mechanisms make the fabrication process expensive and assembly schemes diverse. This warrants a modular microassembly cell which can execute the assembly process of multiple microsystems by reconfiguring the kinematic setup, end-effectors, feedback system, etc., thus minimizing the cost of production. In this paper, we present a multiscale assembly and packaging system (MAPS) comprising 20 degrees of freedom (DoFs) that can be arranged in several reconfigurable micromanipulation modules depending on the specific task. The system has been equipped with multiple custom-designed microgrippers and end-effectors for different applications. Stereo microscopic vision is achieved through four high-resolution cameras. We demonstrate the construction of two different microsystems using this microassembly cell: the first is a miniature optical spectrum analyzer called a microspectrometer, and the second is a MEMS mobile robot/conveyor called Arripede.

Journal ArticleDOI
TL;DR: A hybrid model based on a new Petri net formalism that merges the concepts of Hybrid Petri Nets and Colored Petri nets to obtain modular and compact models for automated warehouse systems analysis and performance evaluation is presented.
Abstract: An automated warehouse system has two main components: an automated storage and retrieval subsystem consisting of a number of aisles, each one served by a crane, and a picking area which is formed by bays where stock units coming from the aisles are partially emptied by human operators. These two components are connected via an interface area consisting of carousels, conveyors, and buffers. This area is usually modeled as a discrete event system, while the overall system performance depends also on continuous time phenomena. Part I presents a hybrid model based on a new Petri net formalism that merges the concepts of Hybrid Petri Nets and Colored Petri Nets to obtain modular and compact models for these systems. An example is discussed in detail to motivate the introduction of a new formalism. A control oriented simulation tool is also presented. Part II will focus on the application of this formalism to automated warehouse systems analysis and performance evaluation. Finally, a real case study is considered to show the effectiveness of the approach.

Journal ArticleDOI
TL;DR: The experimental results indicate that multiple kernel SVM is a kind of highly competitive data-driven modeling method for the blast furnace system and can provide reliable indication for blast furnace operators to take control actions.
Abstract: This paper constructs the framework of the reproducing kernel Hilbert space for multiple kernel learning, which provides clear insights into the reason that multiple kernel support vector machines (SVM) outperform single kernel SVM. These results can serve as a fundamental guide to account for the superiority of multiple kernel to single kernel learning. Subsequently, the constructed multiple kernel learning algorithms are applied to model a nonlinear blast furnace system only based on its input-output signals. The experimental results not only confirm the superiority of multiple kernel learning algorithms, but also indicate that multiple kernel SVM is a kind of highly competitive data-driven modeling method for the blast furnace system and can provide reliable indication for blast furnace operators to take control actions.
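The central object in multiple kernel learning can be sketched directly: a convex combination of base kernels is itself a valid kernel, and the combined Gram matrix is what the SVM consumes. The weights and kernel choices below are illustrative, not the paper's learned values.

```python
import math

# A convex combination of an RBF and a linear kernel, plus the Gram
# matrix an SVM would be trained on.
def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def combined_kernel(x, y, weights=(0.6, 0.4)):
    w_rbf, w_lin = weights
    return w_rbf * rbf(x, y) + w_lin * linear(x, y)

def gram_matrix(points, kernel):
    return [[kernel(p, q) for q in points] for p in points]
```

In MKL the weights themselves are learned from data, which is what lets the combined kernel adapt to heterogeneous input-output behavior such as a blast furnace's.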

Journal ArticleDOI
TL;DR: This paper presents a Petri net approach to mediation-aided composition of Web services that is modeled as open WorkFlow Nets and are composed using mediation transitions (MTs), and an Event-Condition-Action rule-based technique is developed to automatically generate the BPEL code of the composition.
Abstract: Recently, mediation-aided composition has been widely adopted when dealing with incompatibilities of services. However, existing approaches suffer from state space explosion in compatibility verification and cannot automatically generate the BPEL code. This paper presents a Petri net approach to mediation-aided composition of Web services. First, services are modeled as open WorkFlow Nets (oWFNs) and are composed using mediation transitions (MTs). Second, the modular reachability graph (MRG) of composition is automatically constructed and the compatibility is analyzed, so that the problem of state space explosion is significantly alleviated. Furthermore, an Event-Condition-Action (ECA) rule-based technique is developed to automatically generate the BPEL code of the composition, which can significantly save the time and labor of designers. Finally, the prototype system has been developed.

Journal ArticleDOI
TL;DR: Redesign methodologies from PLC control to the event-driven architecture of IEC 61499 function blocks are proposed, with three different redesign approaches: an object-oriented conversion approach, an object-oriented reuse approach, and a class-oriented approach.
Abstract: The IEC 61499 Function Block architecture is considered the next generation of programmable control technology promoting distributed control in automation. In this paper, redesign methodologies from PLC control to the event-driven architecture of IEC 61499 function blocks are proposed. A general set of translation steps and mapping rules for redesigning applications from IEC 61131-3 PLC to IEC 61499 Function Blocks is provided. Three different redesign approaches are proposed: an object-oriented conversion approach, an object-oriented reuse approach, and a class-oriented approach. These approaches are to be applied for different design styles of the source PLC code. The data handling efficiency of all approaches is investigated. An airport baggage handling system is chosen as the case study for the redesign process. The rules and limitations are summarized and guidelines for the redesign process are provided.

Journal ArticleDOI
TL;DR: The sequential and holistic coalition methods presented in this paper provide both online and offline solutions for optimal multirobot task allocation.
Abstract: We propose a coalition-based approach to solve the task allocation problem of multiple robots with resource constraints. The resources required by task execution characterize the robots and tasks. Robots must form coalitions to accomplish the assigned tasks because an individual robot may not be able to complete a task on its own due to resource limitations. We consider both online and offline assignment manners of the task allocation problem. For online assignment, a sequential coalition method is proposed to efficiently select suitable robots to form coalitions for the assigned task. For offline assignment, a holistic coalition method is proposed for global optimization of all the assigned tasks. Both sequential and holistic coalition methods are compared with existing approaches. Numerous simulations and experiments performed on heterogeneous multiple mobile robots demonstrate the effectiveness of the proposed coalition-based task allocation methods. Note to Practitioners - Allocating multiple tasks to a group of heterogeneous mobile robots is a challenge in multirobot applications. To maximize the robot group utility and optimize the task allocation solution, the most suitable coalition must be found and organized for each task. The sequential and holistic coalition methods presented in this paper provide both online and offline solutions for optimal multirobot task allocation. The advantage of the sequential coalition method lies in its efficiency in selecting best-fitted robots during coalition forming, and the advantage of the holistic coalition method lies in its effectiveness in finding the global optimal solution for all tasks. We illustrate the effectiveness of the proposed methods through numerous case studies with comparisons in this paper.
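The sequential (online) coalition idea can be sketched greedily: keep adding the robot that covers the task's remaining resource demand at least cost per unit of coverage until the demand is met. The resource vectors, costs, and the greedy rule itself are made up for illustration; the paper's method is more elaborate.

```python
# Greedy resource-covering sketch of sequential coalition formation.
def form_coalition(task_demand, robots):
    """task_demand: dict resource -> amount. robots: dict name ->
    (resources dict, cost). Returns (coalition names, total cost) or None."""
    remaining = dict(task_demand)
    available = dict(robots)
    coalition, total = [], 0.0
    while any(v > 0 for v in remaining.values()):
        def coverage(item):
            _, (res, _cost) = item
            return sum(min(res.get(r, 0), need)
                       for r, need in remaining.items() if need > 0)
        candidates = [it for it in available.items() if coverage(it) > 0]
        if not candidates:
            return None                 # demand cannot be met
        name, (res, cost) = min(candidates,
                                key=lambda it: it[1][1] / coverage(it))
        coalition.append(name)
        total += cost
        for r in remaining:
            remaining[r] = max(0, remaining[r] - res.get(r, 0))
        del available[name]
    return coalition, total
```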

Journal ArticleDOI
TL;DR: A new energy model is developed based on the kinematic and dynamic behaviors of a chosen machine tool; the approach can be extended and applied to other machines to establish their energy models for green and sustainable manufacturing.
Abstract: In this paper, a new energy model is developed based on the kinematic and dynamic behaviors of a chosen machine tool. One significant benefit of the developed energy model is its inherent relationship to the design variables involved in the manufacturing processes. Without radical changes to the machine tool's structure, the proposed model can be readily applied to optimize process parameters to reduce energy consumption. A new parallel kinematic machine, Exechon, is used as a case study to demonstrate the modeling procedure. The derived energy model is then used for simulation of drilling operations on aircraft components to verify its feasibility. Simulation results indicate that the developed energy model has led to an optimized machine setup which consumes less than one-third of the energy of an average machine setup over the workspace. This approach can be extended and applied to other machines to establish their energy models for green and sustainable manufacturing.

Journal ArticleDOI
TL;DR: An efficient outpatient scheduling approach is proposed by specifying a bidding method and converting the problem to a group role assignment problem, making automatic outpatient scheduling practical.
Abstract: Outpatient scheduling is a complex problem, and efficient solutions to it are required by many health care facilities. This paper proposes an efficient approach to outpatient scheduling by specifying a bidding method and converting the problem to a group role assignment problem. The proposed approach is validated by simulations and experiments with randomly generated patient requests for available time slots. The major contribution of this paper is an efficient outpatient scheduling approach that makes automatic outpatient scheduling practical. This result stems from treating outpatient scheduling as a collaborative activity and constructing a qualification matrix so that the group role assignment algorithm can be applied.
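At its core, group role assignment with a qualification matrix is an assignment problem: pick one slot (role) per patient (agent) to maximize total qualification. A brute-force sketch is shown below; it is adequate only for tiny instances, whereas the paper's algorithm (and standard solvers such as the Hungarian method) scale far better. The matrix and names here are illustrative.

```python
from itertools import permutations

def group_role_assignment(Q):
    """Q[i][j]: qualification of agent i for role j (square matrix assumed).
    Exhaustively search permutations for the maximum-total assignment."""
    n = len(Q)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(Q[i][perm[i]] for i in range(n))
        if score > best:
            best, best_perm = score, perm
    return best, list(best_perm)
```

A bidding method, in this reading, would populate Q from patients' bids for time slots before the assignment step.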

Journal ArticleDOI
TL;DR: The proposed DSM controller, with the sliding plane selected for a dead-beat scheme, ensures full demand satisfaction for arbitrary bounded demand and achieves a given service level with smaller holding costs and a reduced order-to-demand variance ratio compared with the classical order-up-to policy.
Abstract: In this paper, we apply a control-theoretic approach to the design of an inventory policy for systems with perishable goods. In the considered systems, the stock used to fulfill unknown, variable demand is subject to exponential decay. It is replenished from multiple supply sources characterized by different lead times. The challenge is to achieve high demand satisfaction at minimum cost when replenishment orders are realized with non-negligible delay. In contrast to classical stochastic or heuristic approaches, a formal design methodology based on discrete-time sliding mode (DSM) control is employed. The proposed DSM controller, with the sliding plane selected for a dead-beat scheme, ensures full demand satisfaction for arbitrary bounded demand. It achieves a given service level with smaller holding costs and a reduced order-to-demand variance ratio compared with the classical order-up-to policy.
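The control setting can be sketched as a discrete-time simulation: stock decays exponentially each period, orders arrive after a lead time, and the controller orders up to a reference level while netting out the pipeline. This is a simplified order-up-to-style rule for intuition only; the exact DSM control law, parameter names, and values are our assumptions, not the paper's design.

```python
def simulate(demand, target=100.0, rho=0.9, lead=2):
    """Replenishment of a perishable stock (illustrative sketch).
    Stock decays by factor rho per period; an order placed now arrives
    after `lead` periods. Returns (stock, order, sold) per period."""
    stock, pipeline, history = target, [0.0] * lead, []
    for d in demand:
        stock = rho * stock + pipeline.pop(0)   # decay, then receive
        sold = min(stock, d)                    # satisfy what we can
        stock -= sold
        # order up to the target, netting out orders already on the way
        order = max(0.0, target - stock - sum(pipeline))
        pipeline.append(order)
        history.append((stock, order, sold))
    return history
```

For bounded demand below the sustainable throughput, a sufficiently high target keeps the stock positive in every period, i.e., full demand satisfaction, which is the property the dead-beat sliding plane guarantees formally in the paper.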

Journal ArticleDOI
TL;DR: An ontology providing a conceptual architecture is developed for an HPS, so that a general interpretation of a manufacturing system's implementation becomes possible, and a formalized application method is devised for replacing simulations with real processes and vice versa.
Abstract: Hardware-in-the-loop (HIL) is a widely used testing approach for embedded systems, where real components and/or controllers are tested in closed-loop with a simulation model. In this paper, we generalize HIL by combining multiple simulations and real components into a Hybrid Process Simulation (HPS). An HPS is a test setup that contains at least one simulated and one actual component, but may contain many of both. It is implemented such that each simulated component can be swapped out with its real counterpart without making changes to the existing system, and vice versa. In this paper, an ontology which provides a conceptual architecture is developed for an HPS, such that a general interpretation of a manufacturing system's implementation is made possible. A formalized application method is then devised for replacing simulations with real processes and vice versa. A conceptual architecture is put forth that separates the effect of a component from its spatial essence (volume or mass). This separation allows workpieces in a manufacturing process, for example, to go from the physical world into the virtual world (computer simulation) and back again repeatedly. The conceptual architecture is applied to a small manufacturing line in the following scenarios: replacing a real robot with a simulated robot, replacing a manufacturing cell with a simulated manufacturing cell, and adding a new simulated manufacturing cell to the existing system. These applications successfully demonstrate how an HPS can be used to test a manufacturing system setup with multiple regions of real and simulated components.
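The swap-without-changes property hinges on real and simulated components exposing one common interface, so a cell can move between the physical and virtual worlds without touching the rest of the line. A minimal sketch follows; the class and method names are ours, not the paper's ontology.

```python
from abc import ABC, abstractmethod

class Cell(ABC):
    """Common interface: any cell, real or simulated, exposes process()."""
    @abstractmethod
    def process(self, part):
        ...

class SimulatedCell(Cell):
    def process(self, part):
        # stand-in for a simulation model of the operation
        return part + ":sim-drilled"

class RealCell(Cell):
    def process(self, part):
        # a real setup would command the physical controller here
        return part + ":drilled"

def run_line(cells, part):
    """The line only sees the Cell interface, so any mix of real and
    simulated cells can be composed without changing this code."""
    for c in cells:
        part = c.process(part)
    return part
```

Replacing a real robot with a simulated one then amounts to swapping one element of `cells`, which is the essence of the HPS scenarios described above.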

Journal ArticleDOI
TL;DR: In Part I, a hybrid modeling approach based on a new Petri net formalism and a freeware simulation tool were presented; here, a real case study is considered to show the effectiveness of the approach.
Abstract: An automated warehouse system has two main components: an automated storage and retrieval subsystem consisting of a number of aisles, each one served by a crane, and a picking area which is formed by bays where stock units coming from the aisles are partially emptied by human operators. These two components are connected via an interface area consisting of carousels, conveyors and buffers. This area is usually modeled as a discrete event system, while the overall system performance depends also on continuous time phenomena. In Part I, a hybrid modeling approach based on a new Petri net formalism and a freeware simulation tool have been presented. The concepts of Hybrid Petri Nets and Colored Petri Nets are merged to obtain modular and compact models for automated warehouse systems. Part II now focuses on the application of this formalism to automated warehouse systems analysis and performance evaluation. Liveness analysis is performed by means of a hybrid automaton obtained from the net model. A deadlock prevention policy is synthesized working on an aggregated model. Finally, a real case study is considered to show the effectiveness of the approach.
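Deadlock analysis on a net model ultimately reduces to checking whether any transition is enabled at the current marking. The toy place/transition engine below illustrates that check; it is a plain discrete Petri net, not the paper's hybrid colored formalism, and all names are illustrative.

```python
class PetriNet:
    """Minimal place/transition net: marking is place -> tokens;
    each transition is (consume, produce) arc-weight dicts."""
    def __init__(self, marking, transitions):
        self.m = dict(marking)
        self.t = transitions
    def enabled(self):
        return [n for n, (pre, _) in self.t.items()
                if all(self.m.get(p, 0) >= w for p, w in pre.items())]
    def fire(self, name):
        pre, post = self.t[name]
        assert name in self.enabled(), "transition not enabled"
        for p, w in pre.items():
            self.m[p] -= w
        for p, w in post.items():
            self.m[p] = self.m.get(p, 0) + w
```

A marking with no enabled transition is a deadlock; a prevention policy of the kind synthesized in the paper restricts firing so that such markings are never reachable.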

Journal ArticleDOI
TL;DR: An Intelligent Quick Prediction Algorithm (IQPA), which employs an extended ELM (ELME) in producing fast, stable, and accurate prediction results for control and loading problems, is devised and can finish the prediction task with accurate results within a prespecified time limit.
Abstract: The Artificial Neural Network (ANN) and its variations have been well studied for applications in the prediction of industrial control and loading problems. Despite satisfactory accuracy, ANN models are notorious for being slow compared with, e.g., traditional statistical models, which substantially hinders their real-world application to control and loading prediction problems. Recently, a novel ANN learning approach called the Extreme Learning Machine (ELM) has emerged and has been shown to be very fast compared with traditional ANN training. In this paper, an Intelligent Quick Prediction Algorithm (IQPA), which employs an extended ELM (ELME) to produce fast, stable, and accurate prediction results for control and loading problems, is devised. The algorithm is versatile in that it can be used for short-, medium-, and long-term predictions with both time-series and non-time-series data. Publicly available power plant operations and aircraft control data are employed for analysis with the proposed model. Experimental results show that IQPA is effective and efficient, and can finish the prediction task with accurate results within a prespecified time limit.
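The speed of ELM comes from never training the hidden layer: input weights are drawn at random and only the output layer is solved in closed form by least squares. A basic ELM sketch (not the paper's ELME extension) on a toy curve-fitting task:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Extreme Learning Machine: random hidden weights, least-squares output."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy stand-in for a loading curve: fit y = sin(x)
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because the only optimization is one linear solve, training cost is a single `lstsq` call, which is what makes the prespecified-time-limit guarantee plausible in practice.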

Journal ArticleDOI
Chengbin Chu1, Feng Chu, MengChu Zhou1, Haoxun Chen, Qingning Shen 
TL;DR: A strongly polynomial dynamic programming algorithm is developed to solve the crude oil transportation problem by tankers or trucks, and it is shown that under some realistic assumptions this problem can be transformed into a single-item lot-sizing problem with limited production and inventory capacities.
Abstract: Crude oil transportation is a central logistics operation in the petrochemical industry because its cost represents a significant part of the cost of petrochemical products. In this paper, we consider transportation by tankers or trucks. We show that under some realistic assumptions, this problem can be transformed into a single-item lot-sizing problem with limited production and inventory capacities, and we develop a strongly polynomial dynamic programming algorithm to solve it. The crude oil transportation problem is very difficult, and few efficient methods exist in this domain. In the model considered in this paper, crude oil is directly shipped from a supplier port to n client ports to satisfy customer demands over T future periods. The supplier port operates a fleet of identical tankers with limited capacity. The inventory capacities of customers are limited and time-varying, and backlogging is permitted. The objective is to find an optimal shipment plan minimizing the total cost over the T-period horizon. When the number of tankers is unlimited and customer demands are independent, the shipment plans of different customers become independent, so the problem decomposes into n independent problems. Each of them can be transformed into a single-item lot-sizing problem with limited production and inventory capacities, where tanker capacity corresponds to production capacity in classical lot-sizing models. The main contributions of this paper are: 1) the transformation of a transportation planning problem into a lot-sizing problem; 2) an O(T^3) algorithm to solve it; and 3) results that also apply to terrestrial transportation with direct deliveries.
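To make the lot-sizing correspondence concrete, here is a plain dynamic program over end-of-period inventory levels for the capacitated single-item problem, with tanker capacity playing the role of production capacity. This is a simple integer-state DP for intuition (and it omits backlogging); the paper's algorithm is the much stronger strongly polynomial O(T^3) method, which this does not reproduce.

```python
def lot_sizing(demand, cap, inv_cap, setup, hold):
    """Min-cost shipping plan: at most `cap` units shipped per period,
    at most `inv_cap` units stored, fixed `setup` cost per nonzero
    shipment, `hold` cost per unit carried over. No backlogging."""
    INF = float("inf")
    # cost[i] = min cost so far with inventory i at the period boundary
    cost = [0.0] + [INF] * inv_cap
    for d in demand:
        new = [INF] * (inv_cap + 1)
        for i, c in enumerate(cost):
            if c == INF:
                continue
            for q in range(cap + 1):        # quantity shipped this period
                j = i + q - d               # resulting end inventory
                if 0 <= j <= inv_cap:
                    extra = (setup if q > 0 else 0) + hold * j
                    new[j] = min(new[j], c + extra)
        cost = new
    return min(cost)                        # any feasible ending inventory
```

In the crude oil reading, `demand` is one client port's per-period demand and `q` is the oil delivered by tankers in that period.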

Journal ArticleDOI
TL;DR: A method is developed to relate local nanostructure variability (a quality measure) to nanostructure interactions under the framework of Gaussian Markov random fields, which also enables automatic defect detection and pattern identification based on the underlying interaction patterns.
Abstract: Since the properties of nanomaterials are determined by their structures, characterizing nanostructure feature variability and diagnosing structure defects are of great importance for quality control in scale-up nanomanufacturing. It is known that nanostructure interactions, such as competing for source materials during growth, contribute strongly to nanostructure uniformity and defect formation. However, there is a lack of rigorous formulation to describe nanostructure interactions and their effects on nanostructure variability. In this work, we develop a method to relate local nanostructure variability (a quality measure) to nanostructure interactions under the framework of Gaussian Markov random fields. With the developed modeling and estimation approaches, we are able to extract the nanostructure interactions of any local region, with or without defects, based on its feature measurements. The established connection between nanostructure variability and interactions not only provides a metric for assessing nanostructure quality, but also enables a method to automatically detect defects and identify their patterns based on the underlying interaction patterns. Both simulation and real case studies are conducted to demonstrate the developed methods, and the insights obtained from the real case study agree with physical understanding.