
Showing papers in "IEEE Transactions on Automation Science and Engineering in 2007"


Journal ArticleDOI
TL;DR: This paper addresses the constraint that captures the inability of a fixed wing aircraft to turn at any arbitrary yaw rate and gives an algorithm to solve this problem by combining ideas from the traveling salesman problem and the path planning literature.
Abstract: This paper is about the allocation of tours of m targets to n vehicles. The motion of the vehicles satisfies a nonholonomic constraint (i.e., the yaw rate of the vehicle is bounded). Each target is to be visited by one and only one vehicle. Given a set of targets and the yaw rate constraints on the vehicles, the problem addressed in this paper is 1) to assign each vehicle a sequence of targets to visit, and 2) to find a feasible path for each vehicle that passes through the assigned targets, with the requirement that each vehicle return to its initial position. The heading angle at each target location need not be specified. The objective is to minimize the sum of the distances traveled by all vehicles. A constant-factor approximation algorithm is presented for the above resource allocation problem for both the single- and the multiple-vehicle case. Note to Practitioners-The motivation for this paper stems from the need to develop resource allocation algorithms for unmanned aerial vehicles (UAVs). Small autonomous UAVs are seen as ideal platforms for many applications, such as searching for targets, mapping a given area, traffic surveillance, fire monitoring, etc. The main advantage of using these small autonomous vehicles is that they can be used in situations where a manned mission is dangerous or not possible. Resource allocation problems naturally arise in these applications, where one would want to optimally assign a given set of vehicles to the tasks at hand. The feature that differentiates these resource allocation problems from similar problems previously studied in the literature is that there are constraints on the motion of the vehicle. This paper addresses the constraint that captures the inability of a fixed-wing aircraft to turn at an arbitrary yaw rate. The basic problem addressed in this paper is as follows: Given n vehicles and m targets, find a path for each vehicle satisfying yaw rate constraints such that each target is visited exactly once by a vehicle and the total distance traveled by all vehicles is minimized. We assume that the targets are at least 2r apart, where r is the minimum turning radius of the vehicle. This is a reasonable assumption because the sensors on these vehicles can map or see an area whose width is at least 2r. We give an algorithm to solve this problem by combining ideas from the traveling salesman problem and the path planning literature. We also show how these algorithms perform in the worst-case scenario.
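
The sequencing half of such an approach can be illustrated with a standard Euclidean TSP heuristic. Below is a minimal Python sketch (nearest-neighbor construction plus 2-opt improvement) for ordering the targets; the coordinates are made up, and the Dubins path construction that would connect consecutive targets subject to the minimum turning radius r is deliberately omitted, so this shows only the sequencing idea, not the authors' algorithm.

    import math, itertools

    def tour_length(points, order):
        # Euclidean length of the closed tour visiting points in the given order.
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def nearest_neighbor_tour(points):
        # Greedy construction: always visit the closest unvisited target next.
        unvisited = set(range(1, len(points)))
        order = [0]
        while unvisited:
            last = order[-1]
            nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
            order.append(nxt)
            unvisited.remove(nxt)
        return order

    def two_opt(points, order):
        # Repeatedly reverse segments while doing so shortens the tour.
        improved = True
        while improved:
            improved = False
            for i, j in itertools.combinations(range(1, len(order)), 2):
                new = order[:i] + order[i:j][::-1] + order[j:]
                if tour_length(points, new) < tour_length(points, order):
                    order, improved = new, True
        return order

    targets = [(0, 0), (10, 2), (6, 9), (2, 7), (9, 6)]   # illustrative targets
    tour = two_opt(targets, nearest_neighbor_tour(targets))
    print(tour, round(tour_length(targets, tour), 2))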

199 citations


Journal ArticleDOI
TL;DR: The modeling methodology helps to conceive the model in a natural way from the description of the system's components, leading to modules that are easily interconnected, and the diagnosability test is stated as a linear programming problem that can be programmed straightforwardly.
Abstract: This paper is concerned with online model-based fault diagnosis of discrete event systems. The model of the system is built using the interpreted Petri nets (IPN) formalism. The model includes the normal system states as well as all possible faulty states. Moreover, it assumes the general case in which events and states are partially observed. One of the contributions of this work is a bottom-up modeling methodology. It describes the behavior of system elements using the required state variables and assigning a range to each state variable. Then, each state variable is represented by an IPN model, herein named a module. Afterwards, using two composition operators over all the modules, a monolithic model for the whole system is derived. It is a very general modeling methodology that avoids the tuning phases and state-space combinatorics found in finite state automata (FSA) approaches. Another contribution is a definition of diagnosability for IPN models built with the above methodology and a structural characterization of this property; polynomial algorithms for checking diagnosability of IPN are proposed, avoiding the reachability analysis of other approaches. The last contribution is a scheme for online diagnosis; it is based on the IPN model of the system and an efficient algorithm to detect and locate the faulty state. Note to Practitioners-The results proposed in this paper allow: 1) building discrete event system models in which faults may arise; 2) testing the diagnosability of the model; and 3) implementing an online diagnoser. The modeling methodology helps to conceive the model in a natural way from the description of the system's components, leading to modules that are easily interconnected. The diagnosability test is stated as a linear programming problem that can be programmed straightforwardly. Finally, the algorithm for online diagnosis leads to an efficient procedure that monitors the system's outputs and handles the normal behavior model. This provides timely detection and location of faults occurring within the system.

168 citations


Journal ArticleDOI
TL;DR: A framework to classify supply chain risk-management problems and approaches for the solution of these problems is developed, and two mathematical programming-based preventive models for strategic level deviation and disruption management are developed.
Abstract: In this paper, we develop a framework to classify supply chain risk-management problems and approaches for the solution of these problems. We argue that risk-management problems need to be handled at three levels: 1) strategic, 2) operational, and 3) tactical. In addition, risk within the supply chain might manifest itself in the form of deviations, disruptions, and disasters. To handle unforeseen events in the supply chain, there are two obvious approaches: 1) to design chains with built-in risk tolerance and 2) to contain the damage once the undesirable event has occurred. Both of these approaches require a clear understanding of undesirable events that may take place in the supply chain and the associated consequences and impacts from these events. Having described these approaches, we then focus our efforts on mapping out the propagation of events in the supply chain due to supplier nonperformance, and employ our insight to develop two mathematical programming-based preventive models for strategic level deviation and disruption management. The first model, a simple integer quadratic optimization model, adapted from the Markowitz model, determines optimal partner selection with the objective of minimizing both the operational cost and the variability of total operational cost. The second model, a simple mixed integer programming optimization model, adapted from the credit risk minimization model, determines optimal partner selection such that the supply shortfall is minimized even in the face of supplier disruptions. Hence, both of these models offer possible approaches to robust supply chain design.
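
The first model's cost-variance tradeoff can be conveyed with a toy brute-force search over partner subsets. The supplier data, the risk-aversion weight lam, and the requirement of exactly k partners below are all illustrative assumptions, not the paper's formulation.

    from itertools import combinations

    # Hypothetical suppliers: name -> (expected cost, cost variance); the
    # demand is assumed to be covered by exactly k partners.
    suppliers = {"A": (100, 25), "B": (90, 64), "C": (110, 9), "D": (95, 36)}
    k, lam = 2, 0.5          # lam = risk-aversion weight (an assumption)

    def objective(subset):
        # Markowitz-style objective: expected cost + lam * variance
        # (variances add under the simplifying independence assumption).
        cost = sum(suppliers[s][0] for s in subset)
        var = sum(suppliers[s][1] for s in subset)
        return cost + lam * var

    best = min(combinations(suppliers, k), key=objective)
    print(best, objective(best))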

166 citations


Journal ArticleDOI
TL;DR: This paper surveys applications of queueing theory for semiconductor manufacturing systems (SMSs), conducts a survey on the important works and proposes a novel solution by relaxing a key assumption in the classical queueing theory.
Abstract: This paper surveys applications of queueing theory to semiconductor manufacturing systems (SMSs). Due to sophisticated tool specifications and process flows in semiconductor manufacturing, queueing models can be very complicated. Research efforts have focused on improving model assumptions and model inputs, mainly in the first moment (averages) and the second moment (variations). However, practice shows that the implementation of classical queueing theory in the semiconductor industry has been unsatisfactory. In this paper, open problems in queueing modeling of SMSs are discussed. A potential solution is also proposed by relaxing the independence assumptions of classical queueing theory. Cycle time reduction has constantly been a key focus of semiconductor manufacturing. Compared with simulation, queueing theory-based analytical modeling is much faster in estimating manufacturing system performance and provides more insights for performance improvement. Therefore, queueing modeling attracts generous semiconductor research grants. Unfortunately, existing queueing models focus on simple extensions of classical queueing theory and fail to question its applicability to complicated SMSs. Hence, the related research has not been widely adopted in the semiconductor industry. In this paper, we conduct a survey of the important works and also present some open problems. We also propose a novel solution by relaxing a key assumption of classical queueing theory. We are currently funded by Intel to explore this potential solution, and we hope it can foster an interesting research field for the years to come.

146 citations


Journal ArticleDOI
TL;DR: A distributed fault diagnosis algorithm is presented which allows each module in the distributed system to diagnose its faults independently unless completion of a task requires the use of coupled components.
Abstract: This paper studies online fault detection and isolation of modular dynamic systems modeled as sets of place-bordered Petri nets. The common places among the set of Petri nets modeling a system capture the coupling of various system components. The transitions are labeled by events, some of which are unobservable (i.e., not directly recorded by the sensors attached to the system). The events whose occurrence must be diagnosed have unobservable transition labels. These events model faults or other significant changes in the system state. The existing theory of diagnosis of discrete-event systems is extended in the context of the above model. The modular structure of the system is exploited by a distributed algorithm for fault diagnosis. A Petri net diagnoser is associated with every Petri net, and the diagnosers communicate in real time during the diagnostic process when the token count of common places changes. A merge function is defined to combine the individual diagnoser states and recover the complete diagnoser state that would be obtained under a monolithic approach. Strategies that reduce the communication overhead are presented. The software implementation of the distributed algorithm is discussed. Note to Practitioners-In the last decade, monitoring, fault detection, and diagnosis methodologies based on the use of discrete-event models have been successfully used in a variety of technological systems ranging from document processing systems to intelligent transportation systems. This paper was motivated by the problem of fault diagnosis for modular (distributed) dynamic discrete-event systems (DES). As a DES modeling formalism, Petri nets offer potential advantages in terms of the distributed representation of the system and the ability to represent coupling of the system components. The systems studied in this paper are sets of modules coupled with each other through various system components and modeled using Petri nets. We present a distributed fault diagnosis algorithm which allows each module in the distributed system to diagnose its faults independently unless completion of a task requires the use of coupled components. In the case of coupling, modules communicate with each other to accurately diagnose the fault. The distributed fault diagnosis algorithm recovers the monolithic diagnosis information at the cost of communication, and the communication overhead can grow. To mitigate that problem, we present an improved version of the algorithm that significantly reduces the communication overhead. Finally, we introduce the software toolbox (written in Matlab and integrated with AT&T Graphviz) and we present a case study of a heating, ventilation, and air-conditioning system where we use the software tool for modeling and analyzing the system.

145 citations


Journal ArticleDOI
TL;DR: This paper develops a generic FDD scheme for centrifugal chillers and also develops a nominal data-driven model of the chiller that can predict the system response under new loading conditions.
Abstract: Chillers constitute a significant portion of energy consumption equipment in heating, ventilating, and air-conditioning (HVAC) systems. The growing complexity of building systems has made it a major challenge for field technicians to troubleshoot problems manually; this calls for automated "smart-service systems" that perform fault detection and diagnosis (FDD). The focus of this paper is to develop a generic FDD scheme for centrifugal chillers and also to develop a nominal data-driven ("black-box") model of the chiller that can predict the system response under new loading conditions. In this vein, support vector machines, principal component analysis, and partial least squares are the candidate fault classification techniques in our approach. We present a genetic algorithm-based approach to select a sensor suite for maximum diagnosability and also evaluate the performance of the selected classification procedures with the optimized sensor suite. The responses of these selected sensors are predicted under new loading conditions using the nominal model developed via the black-box modeling approach. We used benchmark data from a real 90-t centrifugal chiller test facility, provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, to demonstrate and validate our proposed diagnostic procedure. The database consists of data from 64 monitored variables of the chiller under 27 different modes of operation, covering nominal conditions and eight faulty conditions with different severities.
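
A minimal sketch of the classification side of such an FDD scheme, using scikit-learn's PCA and SVM on synthetic stand-in data; the real procedure would use the ASHRAE chiller dataset and the GA-selected sensor suite, and the fault signatures injected here are fabricated purely for illustration.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 64 sensor channels, nominal class 0 plus 8 fault
    # classes, each fault shifting a different subset of sensors.
    X = rng.normal(size=(900, 64))
    y = rng.integers(0, 9, size=900)
    for i, label in enumerate(y):
        if label > 0:
            X[i, label * 7:(label * 7) + 7] += 2.0   # fabricated fault signature

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    clf.fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))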

120 citations


Journal ArticleDOI
TL;DR: A force feedback interface is developed that can measure cell injection forces and provide a haptic display of them in real time; experiments confirmed the research hypothesis that the use of combined vision and force feedback leads to a higher success rate in the cell injection task compared to using vision feedback alone.
Abstract: Conventional methods of manipulating individual biological cells have been prevalent in the field of molecular biology. These methods do not have the ability to provide force feedback to an operator. Poor control of cell injection force is one of the primary reasons for low success rates in cell injection, and in transgenesis in particular. Therefore, there exists a need to incorporate force feedback into a cell injection system. We have developed a force feedback interface that can measure cell injection forces and provide a haptic display of them in real time. Using this force feedback interface, we performed several human factors studies to evaluate the effect of force feedback on cell injection outcomes. We tested our system with 40 human subjects; our experimental results indicate that the subjects were able to feel the cell injection force, and the results confirmed our research hypothesis that the use of combined vision and force feedback leads to a higher success rate in the cell injection task compared to using vision feedback alone.

115 citations


Journal ArticleDOI
TL;DR: A new deadlock control policy is proposed that treats robots as both material handling devices and buffers; because robots can serve as temporary part storage devices to resolve deadlocks, the new policy outperforms the existing ones.
Abstract: An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS). The MHS can be an automated guided vehicle (AGV) system and/or a system that consists of multiple robots. Deadlock resolution in AMSs is an important issue. For an AMS with an AGV system as its MHS, the problem of deadlock resolution for the part processing process and the AGV system as an integrated whole has been studied. It has been shown that AGVs can serve as both material handling devices and central buffers at the same time to help resolve deadlocks. For an AMS with robots as its MHS, the existing work treated the robots merely as material handling devices and showed that the robots contributed to deadlocks. In this paper, such an AMS is modeled by resource-oriented Petri nets. Contrary to the existing work, it is shown that when such nets are adopted to control the AMS, the robots make no contribution to deadlocks. More interestingly, they can be used to resolve deadlocks by serving as temporary part storage devices. A new deadlock control policy is proposed by treating robots as both material handling devices and buffers. The new policy outperforms the existing ones.

103 citations


Journal ArticleDOI
TL;DR: This paper focuses on faulty behaviors modeled with ordinary Petri nets with some "fault" transitions, and methods are proposed to decide, in a systematic way, if the considered failures can be detected and isolated according to the existing sensors.
Abstract: The diagnosis of discrete event systems is strongly related to event estimation. This paper focuses on faulty behaviors modeled with ordinary Petri nets containing some "fault" transitions. Partial but unbiased measurement of the marking variation of the places is used in order to estimate the firing sequences. The main contribution is to decide which sets of places must be observed for the exact estimation of some given firing sequences. Minimal diagnosers are defined that detect and isolate the firing of fault transitions immediately. Causality relationships and directed paths are also investigated to characterize the influence and dependence areas of the fault transitions. Delayed diagnosers are obtained as a consequence. Note to Practitioners-Structural tools are provided for the analysis of models used in the context of fault detection and isolation for discrete event systems. The systems concerned are manufacturing processes, batch processes, digital devices, or communication protocols with single or multiple failures. Methods are proposed to decide, in a systematic way, whether the considered failures can be detected and isolated with the existing sensors. The obtained results can also be used by designers for sensor selection.

98 citations


Journal ArticleDOI
TL;DR: This research shows that positional as well as intensity information, related to potential defects, can be extracted from the acquired laser projections, and describes novel strategies created for the automation of defect classification in tubular structures.
Abstract: Closed-circuit television (CCTV) is currently used in many inspection applications, such as the inspection of nonaccessible pipe surfaces. This human-oriented approach, based on offline analysis of the raw images, is highly subjective and prone to error because of the exorbitant amount of data to be assessed. Laser profilers have recently been proposed to project well-defined light patterns, improving the illumination of standard CCTV systems as well as enhancing the capability of automating the assessment process. This research shows that positional (geometrical) as well as intensity information related to potential defects can be extracted from the acquired laser projections. While most researchers focus on the analysis of positional information obtained from the acquired profiler signals, here the intensity information contained within the reflected light is also exploited for the purpose of defect classification and visualization. This paper describes novel strategies created for the automation of defect classification in tubular structures and explores new methods to fuse intensity and positional information, achieving improved multivariable defect classification. The acquired camera/laser images are processed in order to extract signal information for the purpose of visualization and map creation for further assessment. Then, a two-stage approach based on image processing and artificial neural networks is used to classify the images. First, a binary classifier identifies defective pipe sections, and then, in a second stage, the defects are classified into different types, such as holes, cracks, and protruding obstacles. Experimental results are provided. Note to Practitioners-The method presented in this paper aims to automate the inspection of nonaccessible pipe surfaces. The method was conceived for sewer inspection; however, it could be used in many other industrial applications and could also be extended to shapes other than tubular structures. A laser ring profiler, consisting, for instance, of a laser diode and a ring projector, can be easily integrated into existing closed-circuit television systems. The proposed algorithm identifies defective areas and categorizes the types of defects, analyzing the successively recorded camera images that contain the reflected ring of light. The algorithm, which can be used online, makes use of the deformation of the reflected laser ring together with its changes in intensity. Combining the two kinds of data using artificial-intelligence algorithms makes the method robust enough to work in harsh environments.
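
The two-stage classification strategy (defective/non-defective first, then defect type) is straightforward to prototype; a sketch with scikit-learn on placeholder feature vectors, where the real inputs would be the fused positional and intensity descriptors extracted from the laser profiles.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 20))                # placeholder profile features
    defect_type = rng.integers(0, 4, size=600)    # 0=clean, 1..3=hole/crack/obstacle
    X[defect_type > 0, :5] += 1.5                 # synthetic defect signature

    # Stage 1: binary defective / non-defective classifier.
    stage1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    stage1.fit(X, (defect_type > 0).astype(int))

    # Stage 2: defect-type classifier, trained only on defective samples.
    mask = defect_type > 0
    stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    stage2.fit(X[mask], defect_type[mask])

    def classify(x):
        # Returns 0 for a clean section, else the predicted defect type (1..3).
        if stage1.predict(x.reshape(1, -1))[0] == 0:
            return 0
        return int(stage2.predict(x.reshape(1, -1))[0])

    print(classify(X[0]))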

95 citations


Journal ArticleDOI
TL;DR: The survey delineates different representative scenarios in e-procurement where auctions can be deployed, describes the conceptual and mathematical aspects of different categories of procurement auctions, and presents the mathematical formulations under each of these categories.
Abstract: Auction-based mechanisms are extremely relevant in modern day electronic procurement systems since they enable a promising way of automating negotiations with suppliers and achieving the ideal goals of procurement efficiency and cost minimization. This paper surveys recent research and current art in the area of auction-based mechanisms for e-procurement. The survey delineates different representative scenarios in e-procurement where auctions can be deployed and describes the conceptual and mathematical aspects of different categories of procurement auctions. We discuss three broad categories: 1) single-item auctions: auctions for procuring a single unit or multiple units of a single homogeneous type of item; 2) multi-item auctions: auctions for procuring a single unit or multiple units of multiple items; and 3) multiattribute auctions where the procurement decisions are based not only on costs but also on attributes, such as lead times, maintenance contracts, quality, etc. In our review, we present the mathematical formulations under each of the above categories, bring out the game theoretic and computational issues involved in solving the problems, and summarize the current art. We also present a significant case study of auction-based e-procurement at General Motors.
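
For the simplest setting surveyed (a reverse auction for multiple units of one homogeneous item with divisible bids), winner determination reduces to a greedy allocation by unit price; a minimal sketch with hypothetical bids. The richer multi-item and multiattribute formats require the integer-programming formulations discussed in the survey.

    # Hypothetical bids: supplier -> (unit price, max quantity offered).
    bids = {"S1": (9.0, 40), "S2": (8.5, 25), "S3": (9.5, 100)}
    demand = 80

    def winner_determination(bids, demand):
        # Allocate from the cheapest unit price upward until demand is met.
        allocation, remaining = {}, demand
        for s, (price, qty) in sorted(bids.items(), key=lambda kv: kv[1][0]):
            take = min(qty, remaining)
            if take > 0:
                allocation[s] = take
                remaining -= take
        if remaining > 0:
            raise ValueError("total offered quantity cannot cover demand")
        return allocation

    alloc = winner_determination(bids, demand)
    cost = sum(q * bids[s][0] for s, q in alloc.items())
    print(alloc, cost)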

Journal ArticleDOI
TL;DR: A linear model is developed to describe the dimensional variation propagation of machining processes through kinematic analysis of the relationships among fixture error, datum error, machine geometric error, and the dimensional quality of the product.
Abstract: Recently, the modeling of variation propagation in complex multistage manufacturing processes has drawn significant attention. In this paper, a linear model is developed to describe the dimensional variation propagation of machining processes through kinematic analysis of the relationships among fixture error, datum error, machine geometric error, and the dimensional quality of the product. The developed modeling technique can handle general fixture layouts rather than being limited to the 3-2-1 layout case. The dimensional error accumulation and transformation within the multistage process are quantitatively described in this model. A systematic procedure to build the model is presented and validated. This model has great potential to be applied to fault diagnosis and process design evaluation for complex machining processes. Note to Practitioners-Variation reduction is essential to improve process efficiency and product quality in order to gain a competitive advantage in manufacturing. Unfortunately, variation reduction presents difficult challenges, particularly for large-scale modern manufacturing processes. Due to the increasing complexity of products, modern manufacturing processes often involve multiple stations or operations. For example, multiple setups and operations are often needed in machining processes to finish the final product. When the workpiece passes through multiple stages, machining errors at each stage accumulate onto the workpiece and can further influence the subsequent operations. The variation accumulation and propagation pose significant challenges to final product variation analysis and reduction. This paper focuses on a systematic technique for modeling dimensional variation propagation in multistage machining processes. The relationships between typical process faults and product quality characteristics are established through a kinematic analysis. One salient feature of the proposed technique is that the interactions among different operations with general fixture layouts are captured systematically through the modeling of setup errors. This model has great potential to be applied to fault diagnosis and process design evaluation for a complex machining process.
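
Models of this kind are commonly written in state-space form, x_k = A_k x_{k-1} + B_k u_k + w_k with measurements y_k = C_k x_k + v_k, where x_k is the accumulated part deviation after stage k and u_k collects the fixture, datum, and machine errors introduced at that stage. A small numerical sketch of how the deviation covariance accumulates across stages follows; the matrices are arbitrary placeholders, not the kinematically derived ones of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_stages, n = 4, 3                    # stages and deviation-state dimension
    A = [np.eye(n) + 0.1 * rng.normal(size=(n, n)) for _ in range(n_stages)]
    B = [np.eye(n) for _ in range(n_stages)]
    Su = 0.01 * np.eye(n)                 # covariance of per-stage process errors
    Q = 0.001 * np.eye(n)                 # unmodeled noise covariance

    # Propagate the deviation covariance: S_k = A S A^T + B Su B^T + Q
    S = np.zeros((n, n))
    for k in range(n_stages):
        S = A[k] @ S @ A[k].T + B[k] @ Su @ B[k].T + Q
        print(f"stage {k + 1}: total deviation variance = {np.trace(S):.4f}")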

Journal ArticleDOI
TL;DR: Computational results reported in this paper show that significant savings could be realized by optimizing the composition of modules, and a simulated annealing algorithm improves on the previously generated solutions.
Abstract: The assemble-to-order (ATO) production strategy considers a tradeoff between the size of a product portfolio and the assembly lead time. The concept of modular design is often used in support of the ATO strategy. Modular design impacts the assembly of products and the supply chain; in particular, storage, transport, and production are affected by the selected modular structure. The demand for products in a product family impacts the cost of the supply chain. Based on the demand patterns, a mix of modules and their stock levels are determined by solving an integer programming model. This model cannot be solved optimally due to its high computational complexity, and therefore two heuristic algorithms are proposed. A simulated annealing algorithm improves on the previously generated solutions. The computational results reported in this paper show that significant savings could be realized by optimizing the composition of modules. The best performance is obtained by simulated annealing combined with a heuristic approach.
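
The improvement step can be illustrated with a generic simulated annealing loop over binary module-selection vectors; the cost function below is a made-up stand-in for the integer program's objective, and the temperature schedule is an arbitrary choice.

    import math, random

    random.seed(0)
    n_modules = 12
    module_cost = [random.uniform(1, 5) for _ in range(n_modules)]

    def cost(x):
        # Placeholder objective: stocking cost plus a penalty when too few
        # modules are selected to cover the product family (>= 4 assumed needed).
        return sum(c for c, on in zip(module_cost, x) if on) + 50 * max(0, 4 - sum(x))

    def anneal(x, T=10.0, cooling=0.995, steps=5000):
        best = cur = x[:]
        for _ in range(steps):
            cand = cur[:]
            cand[random.randrange(n_modules)] ^= 1        # flip one module in/out
            if cost(cand) < cost(cur) or random.random() < math.exp(
                    (cost(cur) - cost(cand)) / T):        # accept worse moves early
                cur = cand
                if cost(cur) < cost(best):
                    best = cur[:]
            T *= cooling
        return best

    x0 = [random.randint(0, 1) for _ in range(n_modules)]
    best = anneal(x0)
    print(best, round(cost(best), 2))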

Journal ArticleDOI
TL;DR: A unified language is introduced which aims to support integrated design specifications of automated systems, including the dynamics of heterogeneous physical assemblies, the discrete-event behavior of distributed control software, and the specification of interface ports between the plant and the control system.
Abstract: This paper describes a modeling language that aims to provide a unified framework for representing control systems, namely, physical plants coupled with computer-based control devices. The proposed modeling methodology is based on the cardinal principle of object orientation, which allows describing both control software and physical components using the same basic concepts, particularly those of capsules, ports, and protocols. Furthermore, it is illustrated how the well-known object-oriented specification language Unified Modeling Language can be adopted, provided an adequate formalization of its semantics, to describe structural and behavioral aspects of control systems related to both logical and physical parts. Note to Practitioners-The development of an automated system within an industrial setting is a complex task whose successful result depends on the joint efforts of a team of designers with different scientific backgrounds and specialized knowledge. In fact, an automated system is typically composed of a mechanical assembly, which must be precisely designed and manufactured, and a set of sensors and actuators (e.g., electrical drives, pneumatic systems, etc.), which are, in turn, usually controlled by means of digital processors. Of course, both the electrical parts and the control algorithms (e.g., proportional, integral, and derivative (PID) regulators, logic and supervisory control, reference trajectories for mechanical motions, etc.) should be designed with the same care given to mechanical aspects. Moreover, it is undeniable that none of the various parts composing the automated system design specification can, on their own, allow engineers to understand the actual behavior of the whole system, especially without a common description language that is understandable to all of the designers. The present paper introduces a unified language which aims to support integrated design specifications of automated systems, including the dynamics of heterogeneous physical assemblies, the discrete-event behavior of distributed control software, and the specification of interface ports between the plant and the control system. With the proposed language, it is possible to obtain a complete picture of the automated system suitable for its simulation, documentation, and validation. The modeling language described in the paper supports the principles of object orientation. This choice moves in the direction of enhancing the modularity and reusability of design specifications, which are aspects of great importance in design practice. Moreover, the object-oriented approach to automated systems design proposed in the paper aims to introduce the concept of "design by extension" in the manufacturing industry. This means that the definition of specialization relationships between classes of components implies that those components should be designed to be substitutable with each other, especially from a dynamic point of view. This aspect will be the subject of further papers illustrating other practical insights on the use of object-oriented models for automated systems.

Journal ArticleDOI
TL;DR: The approach in this paper develops efficient techniques for constraining and reconstructing a product represented by free-form surfaces around reference objects with different shapes, so that this design automation problem can be fundamentally solved.
Abstract: This paper addresses the problem of volume parameterization, which serves as the geometric kernel for design automation of customized free-form products. The purpose of volume parameterization is to establish a mapping between the spaces that are near to two reference free-form models, so that the shape of a product presented in free-form surfaces can be transferred from the space around one reference model to another reference model. The mapping is expected to keep the spatial relationship between the product model and the reference models as much as possible. We separate the mapping into a rigid body transformation and an elastic warping. The rigid body transformation is determined by anchor points defined on the reference models using a least-squares fitting approach. The elastic warping function is more difficult to obtain, especially when the meshes of the reference objects are inconsistent. A three-stage approach is conducted. First, a coarse-level warping function is computed based on the anchor points. In the second phase, topology consistency is maintained through a surface fitting process. Finally, the mapping of volume parameterization is established on the surface fitting result. Compared to previous methods, the approach presented here is more efficient. Also, benefiting from the separation of rigid body transformation and elastic warping, the transient shape of a transferred product does not exhibit unexpected distortion. At the end of this paper, various industry applications of our approach in design automation are demonstrated. Note to Practitioners-The motivation of this research is to develop a geometric solution for the design automation of customized free-form objects, which can greatly improve the efficiency of design processes in various industries involving customized products (e.g., garment design, toy design, jewelry design, shoe design, and glasses design). The products in the above industries usually have a very complex geometric shape (represented by free-form surfaces) and are driven not by a parameter table but by a reference object with free-form shapes (e.g., mannequin, toy, wrist, foot, and head models). After carefully designing a product around one particular reference model, it is desirable to have an automated tool for "grading" this product to other shape-changed reference objects while retaining the original spatial relationship between the product and the reference models. This is called the design automation of a customized free-form object. Current commercial 3-D/2-D computer-aided design (CAD) systems, developed for the design automation of models with regular shapes, cannot support design automation in this manner. The approach in this paper develops efficient techniques for constraining and reconstructing a product represented by free-form surfaces around reference objects with different shapes, so that this design automation problem can be fundamentally solved. Although the approach has not been integrated into commercial CAD systems, the results based on our preliminary implementation are encouraging: the spatial relationship between the reference models and the customized products is well preserved.

Journal ArticleDOI
TL;DR: This paper investigates optimal grasp points on an arbitrary-shaped grasped object using a required external force set and presents an algorithm based on a branch-and-bound method to solve the problem.
Abstract: In this paper, we investigate optimal grasp points on an arbitrarily shaped grasped object using a required external force set. The required external force set is given based on a task and consists of the external forces and moments which must be balanced by virtue of contact forces applied by a robotic hand. When the origin is in the interior of the set, a force-closure grasp is required. When the dimension of the set is one, an equilibrium grasp is required. Therefore, we can investigate whatever the desired grasp is, such as when the desired grasp is a force-closure or an equilibrium grasp. Also, we only have to consider the forces contained in a given required external force set, not the whole set of possible resulting forces. Furthermore, we can avoid the frame-invariance problem (the criterion value changing with a change of the task (object) coordinate frame). We consider an optimization problem from the viewpoint of decreasing the magnitudes of the contact forces needed to balance any external force and moment contained in a given required external force set. In order to solve the problem, we present an algorithm based on a branch-and-bound method. We also present some numerical examples to show the validity of our approach. Note to Practitioners-This paper is concerned with grasping an object by a robotic hand. It addresses how to grasp the object, namely, how to position every finger on the object. Recently, robots have been expected to serve in housekeeping and in caring for elderly people. For this purpose, robots are equipped with multifingered hands as general-purpose end effectors. The robot hands are required to move automatically to accomplish such tasks. In this case, the most fundamental issue for robot hands is to grasp the object. At home, there are many objects of various shapes. Consider the case where the robot (hand) is commanded to perform a certain task, such as putting an object into a box. In this case, the robot (hand) must grasp such an object (of any arbitrary shape) at grasp positions appropriate for completing the task. Therefore, the appropriate grasp positions must be calculated automatically. This paper addresses a method to solve this problem. To complete the grasping task, the following problems remain: the calculation and control of the appropriate grasping forces.

Journal ArticleDOI
TL;DR: The results of this paper are that reconfigurability is highly dependent on the level of modularity of the logic control system, and that not all "modular" structures are reconfigurable.
Abstract: The contribution of this paper is the introduction of the event-condition-action (ECA) paradigm for the design of modular logic controllers that are reconfigurable. ECA rules have been used extensively to specify the behavior of active database and expert systems and are recognized as a highly reconfigurable tool to design reactive behavior. This paper develops a method to design modular logic controllers whose dynamics are governed by ECA rules, with the ultimate goal of producing reconfigurable control. Modularity, integrability, and diagnosability measures that have in the past been used to measure the reconfigurability of manufacturing systems are used to assess the reconfigurability of the developed controllers. For the modularity measure, criteria found in computer science to evaluate the modularity of object-oriented programs are adapted to evaluate the modularity of modular logic controllers. The results of this paper are that reconfigurability is highly dependent on the level of modularity of the logic control system, and that not all "modular" structures are reconfigurable. There are approaches, such as the one shown in this paper using ECA rules, that can greatly increase the modularity, integrability, and diagnosability of the logic control system, thus increasing its reconfigurability. Note to Practitioners-This paper has been motivated by the problem of designing reconfigurable modular logic controllers. Reconfiguration is important in manufacturing, but it has also been an issue in the software design domain. There are software systems that currently exist, such as active databases or expert systems, with very powerful reconfiguration capabilities enabled by event-condition-action (ECA) rules. This paper applies the ECA concept to the design of modular logic controllers. This paper begins by describing what an ECA logic system is and then focuses on how ECA logic systems can be implemented with modular control approaches. To this end, two designs are considered. First, modular finite state machines are used to construct ECA logic systems, and a theoretical framework is built using this approach. Three qualitative measures for reconfigurability (modularity, integrability, and diagnosability) are presented and the controllers are evaluated using these measures. Second, an implementation using the IEC 61499 function block standard is presented, as it is a widely understood and accepted standard for modular control applications. Future work entails theoretical analysis using modular verification techniques that exploit a controller structure.
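
The ECA paradigm itself is compact: a rule is an (event, condition, action) triple, fired when its event occurs and its condition holds over the current state. A minimal Python sketch with a made-up two-station transfer example (not the paper's finite-state-machine or IEC 61499 realization):

    # State of a toy two-station line.
    state = {"m1_done": False, "buffer_free": True, "m2_busy": False}

    def transfer(s):
        s.update(m1_done=False, buffer_free=False, m2_busy=True)
        print("part transferred to M2")

    # ECA rules: on Event, if Condition(state), do Action(state).
    rules = [
        ("m1_finished", lambda s: s["buffer_free"] and not s["m2_busy"], transfer),
        ("m2_finished", lambda s: True,
         lambda s: s.update(m2_busy=False, buffer_free=True)),
    ]

    def dispatch(event):
        # Fire every rule whose event matches and whose condition holds.
        for ev, cond, act in rules:
            if ev == event and cond(state):
                act(state)

    state["m1_done"] = True
    dispatch("m1_finished")   # condition holds -> transfer fires
    dispatch("m2_finished")   # frees M2 and the buffer
    print(state)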

Journal ArticleDOI
TL;DR: This paper obtains a small model theorem showing that a supervisor exists if and only if it exists over a certain finite state space, namely the power set of the Cartesian product of the system and specification state spaces.
Abstract: This paper studies supervisory control of discrete event systems subject to specifications modeled as nondeterministic automata. The control is exercised so that the controlled system is simulation equivalent to the (nondeterministic) specification. Properties expressed in the universal fragment of the branching-time logic can equivalently be expressed as simulation equivalence specifications. This makes simulation equivalence a natural choice for behavioral equivalence in many applications, and it has found wide applicability in abstraction-based approaches to verification. While simulation equivalence is more general than language equivalence, we show that existence as well as synthesis for both the target and range control problems remain polynomially solvable. Our development shows that the simulation relation is a preorder over automata, with the union and the synchronization of the automata serving as an infimal upper bound and a supremal lower bound, respectively. For the special case when the plant is deterministic, the notion of state-controllable-similar is introduced as a necessary and sufficient condition for the existence of a similarity enforcing supervisor. We also present conditions for the existence of a similarity enforcing supervisor that is deterministic.
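
Whether one automaton simulates another is decidable by a simple fixpoint computation; a sketch that computes the largest simulation relation between two nondeterministic transition systems (illustrative only, leaving aside the paper's supervisor-synthesis machinery).

    def largest_simulation(states_a, states_b, trans_a, trans_b):
        # trans_x: dict mapping (state, event) -> set of successor states.
        # Returns R = {(s, t)} such that t simulates s: every move of s can be
        # matched by some move of t on the same event, ending in a related pair.
        events = {e for (_, e) in list(trans_a) + list(trans_b)}
        R = {(s, t) for s in states_a for t in states_b}
        changed = True
        while changed:
            changed = False
            for (s, t) in list(R):
                for e in events:
                    for s2 in trans_a.get((s, e), set()):
                        if not any((s2, t2) in R
                                   for t2 in trans_b.get((t, e), set())):
                            R.discard((s, t))
                            changed = True
                            break
                    else:
                        continue
                    break
        return R

    # Tiny example: b0 simulates a0 (it can match the only move on event "x").
    trans_a = {("a0", "x"): {"a1"}}
    trans_b = {("b0", "x"): {"b1"}, ("b0", "y"): {"b1"}}
    R = largest_simulation({"a0", "a1"}, {"b0", "b1"}, trans_a, trans_b)
    print(("a0", "b0") in R)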

Journal ArticleDOI
TL;DR: A set of recursive equations is derived for the fast calculation of divergence when a band is added, overcoming the computational restrictions of real-time processing; the resulting band selection method shows high detection accuracy with low false positive rates compared to canonical analysis at a small number of spectral bands.
Abstract: This paper presents a spectral band selection method for feature dimensionality reduction in hyperspectral image analysis for detecting skin tumors on poultry carcasses. A hyperspectral image contains spatial information measured as a sequence of individual wavelengths across broad spectral bands. Despite the useful information for skin tumor detection, real-time processing of hyperspectral images is often a challenging task due to the large amount of data. Band selection finds a subset of significant spectral bands, in terms of information content, for dimensionality reduction. This paper presents a band selection method for hyperspectral images based on recursive divergence for the automatic detection of skin tumors on poultry carcasses. For this, we derive a set of recursive equations for the fast calculation of divergence with an additional band, to overcome the computational restrictions of real-time processing. A support vector machine is used as a classifier for tumor detection. In our experiments, the proposed band selection method shows high detection accuracy with low false positive rates compared to canonical analysis at a small number of spectral bands. Also, compared with the enumeration approach's detection rate of 93.75%, our proposed recursive divergence approach gives a 90.6% detection rate, which is within the industry-accepted accuracy of 90-95%, while achieving the computational savings needed for real-time processing.
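
The flavor of divergence-based band selection can be conveyed with the per-band symmetric Kullback-Leibler divergence between two Gaussian classes; a simplified sketch that assumes band independence (which makes greedy selection collapse to top-k ranking), whereas the paper's recursive equations handle the joint, correlated case efficiently.

    import numpy as np

    def symmetric_kl_1d(mu1, var1, mu2, var2):
        # Symmetric KL divergence between two 1-D Gaussians.
        return 0.5 * (var1 / var2 + var2 / var1 - 2) \
             + 0.5 * (mu1 - mu2) ** 2 * (1 / var1 + 1 / var2)

    def select_bands(tumor, normal, k):
        # Rank each spectral band by class separability and keep the top k.
        scores = [symmetric_kl_1d(tumor[:, b].mean(), tumor[:, b].var(),
                                  normal[:, b].mean(), normal[:, b].var())
                  for b in range(tumor.shape[1])]
        return np.argsort(scores)[::-1][:k]

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(200, 60))     # 60 spectral bands
    tumor = rng.normal(0.0, 1.0, size=(200, 60))
    tumor[:, [5, 17, 42]] += 2.0                      # informative bands
    print(select_bands(tumor, normal, 3))             # expect bands 5, 17, 42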

Journal ArticleDOI
TL;DR: This study employs a novel method to incorporate a speed metric into economic efficiency evaluation and thereby provide a guideline for improving fab efficiency in manufacturing practice, and integrates factory productivity and cycle time into a relative efficiency analysis model that jointly evaluates the impact of these two factors in manufacturing performance.
Abstract: Economic efficiency analysis of semiconductor fabrication facilities (fabs) involves tradeoffs among cost, yield, and cycle time. Due to the disparate units involved, direct evaluation and comparison is difficult. This article employs data envelopment analysis (DEA) to determine relative efficiencies among fabs over time on the basis of empirical data, whereby cycle time performance is transformed into monetary value according to an estimated price decline rate. Two alternative DEA models are formulated to evaluate the influence of cycle time and other performance attributes. The results show that cycle time and yield follow increasing returns to scale, just as do cost and resource utilization. Statistical analyses are performed to investigate the DEA results, leading to specific improvement directions and opportunities for relatively inefficient fabs. Note to Practitioners-Speed of manufacturing is an important metric of factory performance, yet it has long been a challenge to integrate its value into overall performance evaluation. However, for many semiconductor products, a predictable rate of decline in selling prices makes it possible to transform time value into monetary value. This study employs a novel method to incorporate a speed metric into economic efficiency evaluation and thereby provide a guideline for improving fab efficiency in manufacturing practice. Furthermore, this study integrates factory productivity and cycle time into a relative efficiency analysis model that jointly evaluates the impact of these two factors in manufacturing performance. In particular, we validate this approach with data from ten leading wafer fabs obtained by the Competitive Semiconductor Manufacturing Program and we discuss managerial implications.
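
The core DEA computation is one linear program per fab (decision-making unit); a sketch of the standard input-oriented CCR envelopment model with SciPy, on made-up input/output data. The paper's models additionally fold cycle time into monetary terms before such an analysis.

    import numpy as np
    from scipy.optimize import linprog

    # Made-up data: rows = fabs (DMUs); inputs X (cost, resource use) and a
    # single normalized output Y. Units are arbitrary, for illustration only.
    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
    Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
    n = X.shape[0]

    def ccr_efficiency(o):
        # Input-oriented CCR envelopment LP for DMU o:
        #   min theta  s.t.  X^T lam <= theta * X_o,  Y^T lam >= Y_o,  lam >= 0
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        A_in = np.c_[-X[o], X.T]                      # X^T lam - theta*X_o <= 0
        A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]     # -Y^T lam <= -Y_o
        A = np.vstack([A_in, A_out])
        b = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    for o in range(n):
        print(f"fab {o}: efficiency = {ccr_efficiency(o):.3f}")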

Journal ArticleDOI
TL;DR: This paper extends popular approximate mean cycle time formulae to address practical manufacturing issues, and tests the approximations using parameters gleaned from production tool groups in IBM's 200 mm semiconductor wafer fabricator.
Abstract: Approximate closed form expressions for the mean cycle time in a G/G/m-queue often serve as practical and intuitive alternatives to more exact but less tractable analyses. However, the G/G/m-queue model may not fully address issues that arise in practical manufacturing systems. Such issues include tools with production parallelism, tools that are idle with work in process, travel to the queue, and the tendency of lots to defect from a failed server and return to the queue even after they have entered production. In this paper, we extend popular approximate mean cycle time formulae to address these practical manufacturing issues. Employing automated data extraction algorithms embedded in software, we test the approximations using parameters gleaned from production tool groups in IBM's 200 mm semiconductor wafer fabricator.
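
A common baseline of the kind being extended is Sakasegawa's approximation for the mean queueing time in a G/G/m queue; a sketch follows. The paper's contribution lies in adjusting such formulae for production parallelism, tools idle with WIP, travel time, and lots that defect from a failed server, none of which is captured below.

    import math

    def ggm_cycle_time(ta, ca2, te, ce2, m):
        """Approximate mean cycle time at a G/G/m tool group.
        ta: mean interarrival time, te: mean process time,
        ca2/ce2: squared coefficients of variation of arrivals/process,
        m: number of parallel tools."""
        u = te / (m * ta)                 # utilization
        if not 0 < u < 1:
            raise ValueError("utilization must be in (0, 1)")
        wq = ((ca2 + ce2) / 2) \
           * (u ** (math.sqrt(2 * (m + 1)) - 1) / (m * (1 - u))) * te
        return wq + te                    # queue time plus process time

    # Example: 4 tools, 2 h process time, a lot arriving every 0.6 h,
    # moderate variability -> roughly 3.5 h mean cycle time.
    print(round(ggm_cycle_time(ta=0.6, ca2=1.0, te=2.0, ce2=0.5, m=4), 2), "hours")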

Journal ArticleDOI
TL;DR: This paper develops the ACO-based scheduling framework and provides the system parameter tuning strategy; the system implementation at an Intel chipset factory demonstrates a significant reduction in machine conversions compared to a traditional scheduling approach.
Abstract: In semiconductor assembly and test manufacturing (ATM), a station normally consists of multiple machines (possibly of different types) for a certain operation step. It is critical to optimize the utilization of ATM stations for productivity improvement. In this paper, we first formulate the bottleneck station scheduling problem and then apply ant colony optimization (ACO) to solve it metaheuristically. ACO is a biologically inspired optimization mechanism. It incorporates each ant agent's feedback information to collaboratively search for good solutions. We develop the ACO-based scheduling framework and provide the system parameter tuning strategy. The system implementation at an Intel chipset factory demonstrates a significant reduction in machine conversions compared to a traditional scheduling approach.
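
A stripped-down ant colony optimization loop for sequencing jobs at a station so as to minimize machine conversions (setup changes) conveys the mechanism; the pheromone and heuristic parameters here are illustrative guesses, not the tuned values from the paper's framework.

    import random

    random.seed(0)
    jobs = ["A", "A", "B", "C", "B", "A", "C", "B"]   # product types at the station
    n = len(jobs)

    def conversions(seq):
        # Number of setup changes when processing jobs in this order.
        return sum(jobs[seq[i]] != jobs[seq[i + 1]] for i in range(n - 1))

    tau = [[1.0] * n for _ in range(n)]               # pheromone on "j follows i"

    def build_tour():
        cur = random.randrange(n)
        tour, left = [cur], set(range(n)) - {cur}
        while left:
            cands = list(left)
            # Prefer successors with high pheromone and no conversion (heuristic).
            weights = [tau[tour[-1]][j]
                       * (5.0 if jobs[j] == jobs[tour[-1]] else 1.0)
                       for j in cands]
            nxt = random.choices(cands, weights=weights)[0]
            tour.append(nxt)
            left.remove(nxt)
        return tour

    best = None
    for _ in range(200):                              # colony iterations
        ants = [build_tour() for _ in range(10)]
        it_best = min(ants, key=conversions)
        if best is None or conversions(it_best) < conversions(best):
            best = it_best
        for i in range(n):                            # evaporate, then deposit
            for j in range(n):
                tau[i][j] *= 0.9
        for a, b in zip(it_best, it_best[1:]):
            tau[a][b] += 1.0 / (1 + conversions(it_best))

    print([jobs[i] for i in best], conversions(best))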

Journal ArticleDOI
TL;DR: This study presents a hybrid design methodology and a cost-effectiveness comparison of the vertical and the horizontal automated-guided-vehicle transportation systems, demonstrating that the horizontal AGV transportation system is more effective than the vertical AGV transportation system under most demand scenarios.
Abstract: In this paper, we analyze and compare the performance of the vertical and the horizontal automated-guided-vehicle transportation systems. We use results from queueing network theory and a transportation simulator to design a hybrid strategy for this study and to set the appropriate number of agents in the systems. Next, these two transportation systems are evaluated based on cost-effectiveness criteria. For this purpose, the total construction costs of the systems for various transportation demands are compared. Finally, we provide analytical results to evaluate the systems and to identify the most efficient one under different demand scenarios. Note to Practitioners-A good design methodology is essential for the study of the optimal layout of an automated container terminal. Port designers need to select the most efficient automated-guided-vehicle (AGV) transportation system and to set the appropriate number of agents operating in the system. This study presents a hybrid design methodology and a cost-effectiveness comparison of the vertical and the horizontal transportation systems. Our proposed design methodology is able to derive the combinatorial optimal design solutions rapidly and, at the same time, pinpoint the bottleneck in the system. This proposed methodology can be easily applied to any transportation or logistics system, provided the system can be divided into components represented as nodes in a graph. Our results demonstrate that the horizontal AGV transportation system is more effective than the vertical AGV transportation system under most demand scenarios.

Journal ArticleDOI
TL;DR: A model to quantify paint quality in terms of quality buy rate as a function of repair capacity is developed and it is shown that the QBR can be improved and unnecessary repaints can be reduced by increasing the repair capacity.
Abstract: Manufacturing system design has an impact on product quality. In this paper, we investigate this impact through an application study at an automotive paint shop. Specifically, for repair and rework systems in paint operations, we develop a model to quantify paint quality [in terms of quality buy rate (QBR)] as a function of repair capacity. We show that the QBR can be improved and unnecessary repaints can be reduced by increasing the repair capacity. Note to Practitioners-Manufacturing system design and quality management are important in many manufacturing industries. Although they have attracted substantial research effort, little attention has been paid to the coupling or interactions between system design and product quality. Empirical evidence and analytical studies have shown that manufacturing system design does impact quality. In this paper, through an application study of a repair and rework system at an automotive paint shop, we show that paint quality, as measured by the quality buy rate, can be improved by designing the system more effectively. Similar problems are also often encountered in other manufacturing systems. Results obtained in this work, along with other results, demonstrate both the theoretical and practical importance of analyzing the effect of manufacturing system design on product quality, and suggest a largely unexplored but promising research area.

Journal ArticleDOI
TL;DR: An intelligent system is introduced which tackles the most difficult instance of this problem, where two-dimensional irregular shapes must be packed on a regularly or irregularly shaped surface, and achieves high-quality solutions with short computational times.
Abstract: Packing two-dimensional shapes on a surface such that no shapes overlap and the uncovered surface area is minimized is an important problem that arises in a variety of industrial applications. This paper introduces an intelligent system which tackles the most difficult instance of this problem, where two-dimensional irregular shapes have to be packed on a regularly or irregularly shaped surface. The proposed system utilizes techniques not previously applied to packing, drawn from computer vision and artificial intelligence, and achieves high-quality solutions with short computational times. In addition, the system deals with complex shapes and constraints that occur in industrial applications, such as defective regions and irregularly shaped sheets. We evaluate the effectiveness and efficiency of the proposed method using 14 established benchmark problems that are available from the EURO Special Interest Group on Cutting and Packing.

Journal ArticleDOI
TL;DR: A two-level hierarchical planning methodology is proposed to generate a complete capacity planning solution using mixed-integer linear programming, with a focus on MaxIt modeling with kit reconfiguration; it is verified by numerical experiments in a real production environment.
Abstract: Kits (such as accessories, fixtures, jigs, etc.) are widely used in production in many industries. They are normally product- and machine-specific, so a large kit inventory must be maintained when product-mix variation is high. Fortunately, many kits are reconfigurable. That means they can be disassembled into components, and then these components themselves (or together with some other components) can be reassembled into new types of kits. Therefore, we can save money and improve supply chain responsiveness by purchasing components instead of entire kits. However, research on capacity planning with reconfigurable kits has not been reported. We propose a two-level hierarchical planning methodology to generate a complete capacity planning solution using mixed-integer linear programming. MaxIt covers mid-range monthly planning, and the automated capacity allocation system covers short-range weekly planning. These systems are integrated to generate optimal capacity plans considering kit components. This methodology has been successfully implemented in Intel's global semiconductor assembly and test manufacturing since 2004. In this paper, we present the hierarchical modeling framework and focus on MaxIt modeling with kit reconfiguration. We also verify the methodology by numerical experiments in a real production environment.
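
The kit-reconfiguration idea (buy components, assemble whichever kits the product mix needs) fits naturally into a small mixed-integer program; a toy sketch using scipy.optimize.milp (SciPy >= 1.9), with hypothetical component prices and kit recipes that are far simpler than the MaxIt model.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Hypothetical data: 2 kit types, 3 component types.
    R = np.array([[2, 1, 0],        # components per assembled kit of type 1
                  [0, 1, 2]])       # components per assembled kit of type 2
    p = np.array([3.0, 4.0, 2.0])   # component purchase prices
    w = np.array([15.0, 14.0])      # price of buying a whole, fixed kit instead
    d = np.array([10, 8])           # kit demand

    # Variables z = [a1, a2, b1, b2, x1, x2, x3]: a = kits assembled from
    # components, b = whole kits bought, x = components bought.
    c = np.r_[0, 0, w, p]                                   # minimize total cost
    A_dem = np.c_[np.eye(2), np.eye(2), np.zeros((2, 3))]   # a + b >= d
    A_cmp = np.c_[R.T, np.zeros((3, 2)), -np.eye(3)]        # R^T a - x <= 0
    cons = [LinearConstraint(A_dem, d, np.inf),
            LinearConstraint(A_cmp, -np.inf, 0)]
    res = milp(c, constraints=cons, integrality=np.ones(7),
               bounds=Bounds(0, np.inf))
    print(res.x, res.fun)   # here assembling from components beats whole kits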

Journal ArticleDOI
TL;DR: A fast simulation-based methodology is designed through an innovative integration of ordinal optimization (OO) and design of experiments (DOE) to efficiently select a good scheduling policy for fab operations by comparing policies' relative orders of performance to a specified level of confidence.
Abstract: Semiconductor wafer fab operations are characterized by complex and reentrant production processes over many heterogeneous machine groups with stringent performance requirements. Efficient composition of good scheduling policies from combinatorial options of wafer release and machine dispatching rules has posed a significant challenge to competitive fab operations. In this paper, we design a fast simulation-based methodology through an innovative integration of ordinal optimization (OO) and design of experiments (DOE) to efficiently select a good scheduling policy for fab operations. Instead of finding the exact performance of scheduling policies, our approach compares their relative orders of performance to a specified level of confidence. Our new approach consists of three stages: performance estimation model construction using DOE, a policy option screening process, and final simulation evaluation with intelligent computing budget allocation. The exponential convergence of OO is integrated into all three stages to significantly improve computational efficiency. Simulation results of applications to scheduling wafer fabrication not only screen out good scheduling policies but also provide insights into how factors such as wafer release and the dispatching of each machine group may affect production cycle times and smoothness under a reentrant process flow. Most of the OO-based DOE simulations require 2-3 orders of magnitude less computation time than a traditional approach. Such a high speedup enables decision makers to explore much larger problems. Note to Practitioners-This paper designs a fast simulation-based methodology to compose a good scheduling policy from the various dispatching rules of fab operations. The methodology innovatively applies DOE to estimate the performance of dispatching rule combinations (policies) over the various machine groups in a fab, screens out good-enough policy options by using OO over the performance estimates, and allocates computation time intelligently to simulate potentially good options. Our study shows that OO-based DOE simulations require 2-3 orders of magnitude less computation time than a traditional approach. The high speedup enables fab managers to identify good scheduling policies from the many combinations of wafer release and dispatching rules.

Journal ArticleDOI
TL;DR: The artificial color contrast and statistically based fast bounded box methods can significantly improve the success rate of detection by reducing the standard deviation of both the target and noise pixels, enlarging the separation between feature clusters in color space, and more tightly characterizing the feature color against its background.
Abstract: Color information is useful in vision-based feature detection, particularly for food processing applications, where color variability often renders grayscale-based machine-vision algorithms difficult or impossible to use. This paper presents a color machine vision algorithm that consists of two components. The first creates an artificial color contrast as a prefilter that aims at highlighting the target while suppressing its surroundings. The second, referred to here as the statistically based fast bounded box (SFBB), utilizes principal component analysis to characterize target features in color space from a set of training data so that color classification can be performed accurately and efficiently. We evaluate the algorithm in the context of food processing applications and examine the effects of the color characterization on computational efficiency by comparing the proposed solution against two commonly used color classification algorithms: a neural-network classifier and a support vector machine. Comparison among the three methods demonstrates that the statistically based fast bounded box is relatively easy to train, efficient, and effective, since with sufficient training data it does not require any additional optimization steps; these advantages make SFBB an ideal candidate for high-speed automation involving live and/or natural objects. Note to Practitioners-Variability in natural objects is usually several orders of magnitude higher than that for manufactured goods and has remained a challenge. As a result, most solutions to inspection problems of natural products today still have humans in the loop. One of the factors influencing the success rate of color machine vision in detecting a target is its ability to characterize colors. When unrelated features are very close to the target in color space, which may not pose a significant problem to an experienced operator, they appear as noise and often result in false detections. This paper illustrates the applicability of the algorithm with a number of representative automation problems in the context of food processing applications. As demonstrated experimentally, the artificial color contrast and statistically based fast bounded box methods can significantly improve the success rate of detection by reducing the standard deviation of both the target and noise pixels, enlarging the separation between feature clusters in color space, and more tightly characterizing the feature color against its background. The algorithm presented here has several advantages, including simplicity in training and fast classification, since only three simple checks of rectangular bounds are performed.
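
The bounded-box idea (characterize the target's color distribution once, then classify pixels with cheap rectangular bound checks in a decorrelated color space) takes only a few lines of NumPy. A minimal sketch of the concept, assuming a simple 3-sigma box; the paper's exact training and thresholding details may differ.

    import numpy as np

    def train_sfbb(target_pixels, n_sigma=3.0):
        # PCA on the target's RGB samples: rotate into decorrelated axes,
        # then record a box of +/- n_sigma along each principal axis.
        mean = target_pixels.mean(axis=0)
        cov = np.cov(target_pixels - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        half_widths = n_sigma * np.sqrt(eigvals)
        return mean, eigvecs, half_widths

    def classify_sfbb(pixels, model):
        # A pixel is "target" iff its projection lies inside the box on all
        # axes: three simple bound checks per pixel, hence the method's speed.
        mean, eigvecs, half_widths = model
        proj = (pixels - mean) @ eigvecs
        return np.all(np.abs(proj) <= half_widths, axis=1)

    rng = np.random.default_rng(0)
    target = rng.normal([200, 80, 60], [10, 5, 5], size=(500, 3))  # reddish target
    scene = rng.uniform(0, 255, size=(10000, 3))                   # arbitrary pixels
    model = train_sfbb(target)
    print("pixels flagged as target:", int(classify_sfbb(scene, model).sum()))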

Journal ArticleDOI
TL;DR: It is shown that the small model theorem remains valid even when there is partial observation of events so that a supervisor must be both control and observation compatible ((Σ_u, M)-compatible for short).
Abstract: This paper extends our prior result on decidability of bisimulation equivalence control from the setting of complete observations to that of partial observations. Besides being control compatible, the supervisor must now also be observation compatible. We show that the "small model theorem" remains valid by showing that a control and observation compatible supervisor exists if and only if it exists over a certain finite state space, namely the power set of the Cartesian product of the system and the specification state spaces. Note to Practitioners-Nondeterminism in discrete-event systems arises due to abstraction and/or unmodeled dynamics. This paper addresses the control of nondeterministic systems subject to nondeterministic specifications, under partial observation of events. Nondeterministic plants and specifications are useful when designing a system at a higher level of abstraction, where lower level details of the system and its specification are omitted to obtain higher level models that are nondeterministic. The control goal is to ensure that the controlled system has behavior equivalent to that of the specification system, where the notion of equivalence used is bisimilarity. Bisimilarity requires the existence of an equivalence relation between the states of the two systems such that transitions on common events beginning from a pair of equivalent states end up in a pair of equivalent successor states. Supervisors are also allowed to be nondeterministic, where the nondeterminism in control is implemented by selecting control actions nondeterministically from among a set of precomputed choices. The main contribution of this paper is to show that a supervisor exists if and only if one exists whose state space is bounded in size by the power set of the Cartesian product of the system and specification state spaces, so it suffices to search over this finite state space. We illustrate our results through a manufacturing example.
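To make the notion of bisimilarity concrete, here is a naive partition-refinement check for two finite labeled transition systems, sketched in Python. It illustrates only the equivalence relation described above; the paper's actual contribution, the existence test for a control- and observation-compatible supervisor over the power-set state space, is not reproduced here, and control/observation compatibility play no role in this sketch.

```python
def bisimilar(trans1, init1, trans2, init2):
    """Decide bisimilarity of two finite labeled transition systems by
    naive partition refinement. transX maps each state to a list of
    (event, next_state) pairs."""
    # Work on the disjoint union of the two state sets.
    trans = {("A", s): [(e, ("A", t)) for e, t in nxt] for s, nxt in trans1.items()}
    trans.update({("B", s): [(e, ("B", t)) for e, t in nxt] for s, nxt in trans2.items()})
    states = list(trans)
    block = dict.fromkeys(states, 0)        # start from a single block
    while True:
        # Signature: which blocks a state can reach, and on which events.
        sig = {s: frozenset((e, block[t]) for e, t in trans[s]) for s in states}
        ids, new_block = {}, {}
        for s in states:                    # relabel blocks by signature
            new_block[s] = ids.setdefault(sig[s], len(ids))
        if new_block == block:              # partition stable: done refining
            return block[("A", init1)] == block[("B", init2)]
        block = new_block

# s0 -a-> s1 -b-> s2   versus a nondeterministic split on the a-transition:
t1 = {"s0": [("a", "s1")], "s1": [("b", "s2")], "s2": []}
t2 = {"q0": [("a", "q1"), ("a", "q2")],
      "q1": [("b", "q3")], "q2": [("b", "q3")], "q3": []}
print(bisimilar(t1, "s0", t2, "q0"))  # True: the split is behaviorally harmless
```

The usage example shows why bisimilarity is the right notion for nondeterministic abstractions: the two systems differ as state machines, yet every move of one can be matched by the other from equivalent states.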

Journal ArticleDOI
TL;DR: A real-life single-item dynamic lot sizing problem arising in a refinery for crude oil procurement is addressed and it is shown that the backlogging model can be solved in O(T²) time with general concave inventory holding and backlogging cost functions, where T is the number of periods in the planning horizon.
Abstract: This paper addresses a real-life single-item dynamic lot sizing problem arising in a refinery for crude oil procurement. It can be considered as a lot sizing problem with bounded inventory. We consider two managerial policies. Under one policy, a part of the demand of a period can be backlogged; under the other, a part of the demand of a period can be outsourced. We define actuated inventory bounds and show that any bounded inventory lot sizing model can be transformed into an equivalent model with actuated inventory bounds. The concept of actuated inventory bounds contributes significantly to the complexity reduction. In the studied models, the production capacity is assumed to be unlimited and the production cost functions to be linear with fixed charges. The results can be easily extended to piecewise linear concave production cost functions. The goal is to minimize the total cost of production, inventory holding, and backlogging or outsourcing. We show that the backlogging model can be solved in O(T²) time with general concave inventory holding and backlogging cost functions, where T is the number of periods in the planning horizon. The complexity is reduced to O(T) when the inventory/backlogging cost functions are linear and there is no speculative motive to hold either inventory or backlogs. When the outsourcing levels are unbounded, we show that the outsourcing model can be transformed into an inventory/backlogging model. As a consequence, the problem can be solved in O(T²) time if the outsourcing cost functions are linear with fixed charges, even when the inventory holding cost functions are general concave functions. When the outsourcing level of a period is bounded from above by the demand of the period, which is the case in many application areas, we show that the outsourcing model can be solved in O(T² log T) time if the inventory holding and outsourcing cost functions are linear. Note to Practitioners-This paper considers dynamic lot-sizing models with bounded inventory and outsourcing or backlogging decisions. Based on the forecasted requirements of a given item for each period of the planning horizon, the problem consists of determining the quantity to be produced in-house or ordered from a supplier and the quantity to be outsourced in each period, so as to minimize the total cost over the planning horizon, composed of the production or purchasing cost, the inventory holding cost, and the backlogging or outsourcing cost. These problems originally arose in real-life crude oil procurement and occur in many companies. In this paper, we consider two models. In one model, backlogging is allowed with a backlogging penalty while there is no possibility of outsourcing. In the other model, all customer requirements are satisfied on time (i.e., without backlogging) but outsourcing is possible. For each model, we develop an algorithm to find an optimal solution. The computation time of these algorithms is bounded by a polynomial of degree one or two in the number of periods of the planning horizon, which means that the computation time required to find an optimal solution is very short.
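As background for the complexity claims, the sketch below shows the classic regeneration-interval dynamic program for uncapacitated lot sizing, which runs in O(T²) time. It deliberately omits the paper's distinguishing ingredients (inventory bounds, backlogging, and outsourcing) as well as unit production costs, so it should be read as the recursion skeleton that the paper's algorithms refine, not as the paper's method itself.

```python
def wagner_whitin(demand, setup, hold):
    """Classic O(T^2) regeneration-interval dynamic program for
    uncapacitated lot sizing with setup and linear holding costs.
    demand[t], setup[t], hold[t]: data for periods t = 0..T-1."""
    T = len(demand)
    # f[t] = minimum cost of covering the demands of periods 0..t-1.
    f = [0.0] + [float("inf")] * T
    for j in range(T):                      # j = period of the last lot
        inv_cost, cum_hold = 0.0, 0.0
        for t in range(j + 1, T + 1):       # the lot in j covers demands j..t-1
            if t > j + 1:
                cum_hold += hold[t - 2]     # per-unit cost of holding from j to t-2
                inv_cost += demand[t - 1] * cum_hold
            f[t] = min(f[t], f[j] + setup[j] + inv_cost)
    return f[T]

# Four periods: the DP trades one large lot (more holding) against
# several small lots (more setups).
print(wagner_whitin(demand=[20, 50, 10, 40],
                    setup=[100, 100, 100, 100],
                    hold=[1.0, 1.0, 1.0, 1.0]))   # 270.0
```

The inner loop accumulates holding costs incrementally as the interval grows, which is what keeps the recursion at O(T²) rather than O(T³); the paper's actuated inventory bounds serve an analogous purpose of keeping the richer bounded-inventory models within low-order polynomial time.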