
Showing papers in "Systems Engineering in 2003"


Journal ArticleDOI
TL;DR: In this paper, the authors discuss how value is added in product development through work on activities and the production of deliverables, and integrate findings from several streams of research and provide bases upon which to build improved value models.
Abstract: In an effort to improve company operations and their results, more firms are applying the principles of “Lean”—not only to manufacturing but also to systems engineering processes. Too often, however, this is done with a shallow understanding of Lean and/or without a systems view, in which case Lean creates new problems and tensions and may not deliver expected results. Lean is not about just minimizing cost, cycle time, or waste. Lean is about maximizing value. In systems engineering or product development (PD), maximizing value may require doing more activities, not fewer. Since a process is a kind of system, a systems view and systems engineering principles are helpful. As the value of a system is more than the value of its individual components, the value of a process is more than the value of its individual activities. Value is driven not only by the presence of necessary (value-adding) activities in the PD process but also by the way those activities work together to ensure that they use and produce the right work products, services, and information at the right time. This paper discusses how value is added in PD through work on activities and the production of deliverables. It integrates findings from several streams of research and provides bases upon which to build improved value models. It shows how the concept of Lean can broaden from asking “What wasteful activities can we stop doing?” to include insights from asking “What helpful activities can we start doing, and when?” © 2003 Wiley Periodicals, Inc. Syst Eng 6: 49–61, 2003. DOI 10.1002/sys.10034

156 citations


Journal ArticleDOI
TL;DR: An understanding of SoS challenges to the application of Systems Engineering (SE) in organizational evolutionary development is presented, along with a new approach to SE process organization and management intended to help an organization cope with the high complexity of SoS evolutions and improve its architecture practice.
Abstract: Engineering activities in future organization development, including various information-based systems, vary, but all result in evolutions of an organization, its capabilities and systems. These evolutions occur in a context of Systems-of-Systems (SoS) where the organization must maintain a sustained, sustainable, and controlled SoS evolution as a whole. This paper presents an understanding of SoS challenges to the application of Systems Engineering (SE) in organizational evolutionary development and discusses the difference between “developing a SoS” and “developing systems in a SoS context” from an SE management perspective. A new approach to SE process organization and management is presented in order to help an organization cope with the high complexity of SoS evolutions and improve its architecture practice. Philosophically different from many SoS SE studies that consider mainly how to develop a SoS, the new approach is to add a dimension or components of SE practice at the organization level that is aimed at creating a better engineering environment to enable effective applications of traditional SE practice in implementing SoS evolutions. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 170–183, 2003

81 citations


Journal ArticleDOI
TL;DR: The U.S. Department of Defense (DoD) has mandated the development of architectures to support the acquisition of systems that are interoperable and will meet the needs of military coalitions.
Abstract: The U.S. Department of Defense (DoD) has mandated the development of architectures to support the acquisition of systems that are interoperable and will meet the needs of military coalitions. This paper provides a general description of an architecting process based on object orientation and UML. It then provides a rationale for style constraints on the use of UML artifacts for representing DoD Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) architectures. Finally the paper describes a mapping between the UML artifacts and an executable model based on colored Petri nets that can be used for logical, behavioral, and performance evaluation of the architecture. A procedure for the conversion is also provided. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 266–300, 2003

66 citations


Journal ArticleDOI
TL;DR: The results of the analysis show that the modularization methods are useful for products with simple product architecture, but concerning higher degrees of product complexity, several functions allocated to several physical modules, or large physical variation of the variants, the methods seem insufficient.
Abstract: In recent years, product modularization has attracted increasing interest among both practitioners and academics. To facilitate the decision-making and management of modularization in the manufacturing industry, tools and methods are needed. The purpose of this paper is to identify existing modularization methods and analyze them regarding their ability to deal with different degrees of product complexity, where complexity has several aspects: many variants, a mix of different technologies, and having different solutions for one and the same function. A literature survey results in six different methods for modularization. The results of the analysis show that the modularization methods are useful for products with simple product architecture, i.e., one function is allocated to one physical module. However, for higher degrees of product complexity (several functions allocated to several physical modules, or large physical variation of the variants), the methods seem insufficient. Therefore, to make the methods suitable for complex products, areas for further development are suggested and discussed. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 195–209, 2003

50 citations


Journal ArticleDOI
TL;DR: An overview of the methodologies for risk and cost monitoring for VVT is provided, a novel approach for modeling VVT strategies as decision problems is introduced, and a quantitative VVT process and risk model is proposed.
Abstract: The cost of large systems' Verification, Validation, and Testing (VVT) is in the neighborhood of 40% of the total life cycle cost. The cost associated with systems' failures is even more dramatic, often exceeding 10% of industrial organizations' turnover. There is a great potential benefit in streamlining and optimizing the VVT process. The first step in accomplishing this aim is to define a VVT strategy and then to quantify the cost and risk associated with carrying it out. This paper provides an overview of the methodologies for risk and cost monitoring for VVT and proposes a novel approach for modeling VVT strategies as decision problems. A quantitative VVT process and risk model is proposed. Due to the nondeterministic nature of risk, simulation is used to generate distributions of possible costs, schedules, and risk outcomes. These distributions represent a probabilistic approach and are analyzed in relation to impact events. The model provides means to explore different VVT strategies for optimizing relevant decision parameters. To demonstrate the proposed procedure the paper describes a case study depicting a planned avionics suite upgrade program for a fighter aircraft. Some simplified partial quantitative results are also presented. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 135–151, 2003

46 citations
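The abstract does not reproduce the quantitative model, but its core idea, using Monte Carlo simulation to turn per-activity risk parameters into a cost distribution for a VVT strategy, can be sketched as follows. All activity costs, probabilities, and impacts below are hypothetical, and the percentile reported is an illustrative decision parameter, not the paper's model:

```python
import random

def simulate_vvt_cost(activities, n_runs=10000, seed=42):
    """Monte Carlo sketch: each VVT activity has a fixed cost plus a
    risk event (probability p_fail, cost impact) that may materialize.
    Returns summary statistics of the total-cost distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        total = 0.0
        for base_cost, p_fail, impact in activities:
            total += base_cost
            if rng.random() < p_fail:  # risk event materializes this run
                total += impact
        totals.append(total)
    totals.sort()
    return {
        "mean": sum(totals) / n_runs,
        "p90": totals[int(0.9 * n_runs)],  # 90th-percentile total cost
    }

# Hypothetical strategy: (base cost, failure probability, failure impact)
strategy = [(100, 0.1, 50), (80, 0.3, 200), (120, 0.05, 400)]
stats = simulate_vvt_cost(strategy)
```

Comparing the mean and upper-percentile costs of alternative activity lists is one way such a model can support the strategy-selection decisions the paper describes.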


Journal ArticleDOI
TL;DR: This paper argues for the adoption of object-oriented design and UML tools for nonsoftware designs, i.e., systems, hardware, and algorithms (a controversial position), and presents a case study, the design of a heating, ventilation, and air conditioning system, using UML tools.
Abstract: This paper argues for the adoption of object-oriented design and UML tools for nonsoftware designs, i.e., systems, hardware and algorithms: This is a controversial position. It presents a case study, the design of a heating, ventilation, and air conditioning system, using UML tools. This case study also shows the incremental elaboration used to progress from the requirements model, to the analysis model, to the design model, etc. The paper finally discusses some difficulties that must be overcome in order to apply UML tools to system designs. © 2002 Wiley Periodicals, Inc. Syst Eng 6: 28–48, 2003.

34 citations


Journal ArticleDOI
TL;DR: The methodology presented in this paper can serve as a vehicle with which to enable the intelligence community to better assess the intent and capabilities of terrorist groups, develop and compare terrorist scenarios from different sources and aggregate the set that should guide decisions on intelligence collection.
Abstract: Disruption of a terrorist attack depends on having information facilitating the identification and location of those involved in supporting, planning, and carrying out the attack. Such information arises from myriad sources, such as human or instrument surveillance by intelligence or law enforcement agencies, a variety of documents concerning transactions, and tips from a wide range of occasional observers. Given the enormous amount of information available, a method is needed to cull and analyze only that which is relevant to the task, confirm its validity, and eliminate the rest. The risk-based methodology for scenario tracking, intelligence gathering, and analysis for countering terrorism builds on the premise that, in planning, supporting, and carrying out a terrorist plot, those involved will conduct a series of related activities for which there may be some observables and other acquirable evidence. Those activities taken together constitute a threat scenario. Information consistent with a realistic threat scenario may be useful in thwarting an impending attack. Information not consistent with any such scenario is irrelevant. Thus, the methodology requires a comprehensive set of realistic threat scenarios that would form a systemic process for collecting and analyzing information. It also requires a process for judging the validity and usefulness of such information. The key questions for intelligence gathering and analysis are: how to produce a comprehensive set of threat scenarios, how to winnow that set to a subset of most likely scenarios, what supplementary intelligence is worth pursuing, how to judge the relevance of available information, and how to validate and analyze the information. The methodology presented in this paper can serve as a vehicle with which to enable the intelligence community to better: (a) assess the intent and capabilities of terrorist groups, (b) develop and compare terrorist scenarios from different sources and aggregate the set that should guide decisions on intelligence collection, (c) assess the possible distributions of responsibility for intelligence gathering and analysis across various homeland security agencies at the federal, state, and local levels, and (d) establish effective collection priorities to meet the demands of counterterrorism. Some of the critical issues addressed in this paper include: (1) how to create a reasonably complete set of scenarios and filter it down to a more manageable set to establish intelligence collection priorities, (2) how to integrate the wide variety of intelligence sources associated with monitoring for terrorism and analytically account for the corresponding disparities in information reliability, and (3) how to incorporate these new methodologies into existing information management efforts related to protecting our nation's critical infrastructures. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 152–169, 2003

31 citations


Journal ArticleDOI
TL;DR: This article compares Activity models of the Unified Modeling Language, version 2 (UML 2) to a widely‐used systems engineering (SE) flow diagram, the Enhanced Functional Flow Block Diagram (EFFBD) and issues are identified in applying UML 2 Activities to EFFBD and to satisfying UML‐SE functional flow requirements.
Abstract: This article compares Activity models of the Unified Modeling Language, version 2 (UML 2) [OMG (Object Management Group), UML 2.0 superstructure specification, August 2003, http://www.omg.org/cgi-bin/doc?ptc/03-08-02], to a widely-used systems engineering (SE) flow diagram, the Enhanced Functional Flow Block Diagram (EFFBD) [J. Long, Relationships between common graphical representations in system engineering, ViTech Corporation, 2002], and to the requirements for functional flow modeling in a systems engineering extension for UML (UML-SE) [OMG Systems Engineering Domain Special Interest Group (SE-DSIG), UML for systems engineering RFP, March 2003a, http://www.omg.org/cgi-bin/doc?ad/03-03-41]. Issues are identified in applying UML 2 Activities to EFFBD and to satisfying UML-SE functional flow requirements. Solutions are suggested to these issues that can be used to translate between the languages and to develop standards such as revisions to UML 2 or extensions in UML-SE. *© 2003 Wiley Periodicals, Inc. Syst Eng 6, 249–265, 2003

19 citations


Journal Article
TL;DR: Some properties of fuzzy judgment matrices with multiplicative consistency are proved, and a method for identifying the degree of additive consistency of a fuzzy judgment matrix is presented.

Abstract: Some properties of fuzzy judgment matrices with multiplicative consistency are proved. A method for identifying the degree of additive consistency of a fuzzy judgment matrix is presented. An approach for improving the additive consistency of a fuzzy judgment matrix is given. This approach is illustrated through a numerical example.

18 citations
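The abstract gives no formulas, but the additive-consistency condition commonly used for fuzzy judgment matrices (r_ij = r_ik - r_jk + 0.5 for all i, j, k) suggests a simple numeric check. The mean-absolute-deviation index below is an illustrative measure, not necessarily the one the paper defines:

```python
def additive_consistency_deviation(R):
    """For a fuzzy (reciprocal) judgment matrix R with r_ij + r_ji = 1,
    additive consistency requires r_ij = r_ik - r_jk + 0.5 for all i, j, k.
    Returns the mean absolute deviation from that condition
    (0 = fully consistent)."""
    n = len(R)
    dev = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                dev += abs(R[i][j] - (R[i][k] - R[j][k] + 0.5))
    return dev / n ** 3

# A perfectly additively consistent 3x3 fuzzy judgment matrix
R = [[0.5, 0.6, 0.7],
     [0.4, 0.5, 0.6],
     [0.3, 0.4, 0.5]]
```

A nonzero deviation flags where a decision maker's pairwise comparisons contradict each other, which is the starting point for the improvement approach the abstract mentions.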


Journal ArticleDOI
TL;DR: This property, together with the synthesis technique, enables both formal and simulation methods to be used together, when each one utilizes a different self‐contained commercial‐off‐the‐shelf software application.
Abstract: An approach is presented for generating a performance prediction model so that both qualitative (logical correctness) and quantitative (timeliness) properties of a real-time system can be evaluated. The architecture of a system is layered into a functional layer and a physical one. Both architectural layers are developed as executable models: the executable functional model is a Petri net and the executable physical model is a queuing net. The two-layered executable models are then connected to develop a performance prediction model. A message-passing pattern is generated from the Petri net using a state space analysis technique. Then, the queuing net model processes these messages preserving the pattern. Once the network delays are obtained from the queuing model, their values are inserted back into the Petri net model. Since the communication service demands are isolated from the executable functional model, the communications network can be specified independently at any preferred level of detail. This enables the executable functional model to be invariant with respect to the executable physical model resulting in additional flexibility in designing a large-scale information system. This property, together with the synthesis technique, enables both formal and simulation methods to be used together, when each one utilizes a different self-contained commercial-off-the-shelf software application. © 2002 Wiley Periodicals, Inc.

17 citations


Journal ArticleDOI
TL;DR: Using the classification framework for distinction among projects, it is shown that an early and careful analysis of project characteristics during the project conceptual and planning phases could have led management to a different style from inception, and might have helped avoid many of the difficulties encountered later.

Abstract: The premise of this paper is that systems engineering, and project management as well, are anything but universal and that different projects should employ different management and organizational styles. While this idea is not new and many managers use their own ways of identifying project differences, this paper offers a more formal, research-based framework for distinguishing among projects and adapting management style. Using a recently developed conceptual framework for project classification, this study demonstrates that proper identification of project characteristics and adoption of a suitable style is critical for success. To illustrate this concept, we analyzed the evolution of, and lessons learned from, a complex high-tech defense system development project. The project, seen a priori as an extension of previous experience, turned out to be a completely new kind of effort, particularly in terms of complexity and the use of new technology. Management was not ready for this type of task, and it had initially chosen a traditional style that had been used successfully by the company in previous projects. It turned out that what worked in the past does not necessarily apply to all projects. When the project ran into serious trouble, it had to be “saved” by significant reconstruction and a change of management style. The result was extensive budget overrun and substantial delay in delivery. Using the classification framework, we show that an early and careful analysis of project characteristics during the conceptual and planning phases could have led management to a different style from inception, and might have helped avoid many of the difficulties encountered later. We conclude with a set of implications and recommendations for management at large, and systems house organizations in particular. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 123–134, 2003 DOI 10.1002/sys.10041

Journal Article
TL;DR: In this paper, a logistic equation from biology is applied to describe the process of industry cluster formation through firm's output variable ratio, and two kinds of cluster model, concentration of subcontractors around dominant firm and concentration of simple competitors, are analyzed respectively.
Abstract: Some similarities exist between species coexisting in nature and enterprise clusters in economic life. In this paper the logistic equation, from biology, is applied to describe dynamically the process of industry cluster formation through the firm's output variable ratio. Two kinds of cluster model, concentration of subcontractors around a dominant firm and concentration of simple competitors, are analyzed respectively. Moreover, equilibrium conditions of the two models are given and explained. The main conclusion is that fierce competition is a crucial factor in the formation of enterprise clusters.
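The abstract does not give the equations, but the "concentration of simple competitors" case is commonly modeled with coupled logistic (Lotka-Volterra competition) dynamics. The sketch below integrates such a pair numerically; all growth rates, capacities, and competition coefficients are hypothetical, not taken from the paper:

```python
def simulate_two_firms(r=(0.4, 0.3), K=(100.0, 80.0), a=(0.5, 0.6),
                       x0=(5.0, 5.0), dt=0.01, steps=5000):
    """Euler integration of coupled logistic growth for two competing
    firms' outputs x1, x2. K[i] is firm i's carrying capacity and a[i]
    the competition coefficient its rival exerts on it."""
    x1, x2 = x0
    for _ in range(steps):
        dx1 = r[0] * x1 * (1 - (x1 + a[0] * x2) / K[0])
        dx2 = r[1] * x2 * (1 - (x2 + a[1] * x1) / K[1])
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

x1, x2 = simulate_two_firms()
```

With competition coefficients whose product is below 1, both outputs settle at a positive equilibrium, i.e. the two firms coexist in the cluster rather than one excluding the other.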

Journal ArticleDOI
TL;DR: A Bayesian Belief Network (BBN) approach for socio‐technical system reliability assessment quantifies error influences arising from user knowledge, ability, and task environment, combined with factors describing the complexity of user action and user interface quality.
Abstract: This article presents a Bayesian Belief Network (BBN) approach for socio-technical system reliability assessment. A human error BBN model quantifies error influences arising from user knowledge, ability, and task environment, combined with factors describing the complexity of user action and user interface quality. System reliability evaluation is achieved by the System Reliability Analyser tool, which enables the iterative manipulation of the human error model according to high-level scenarios. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 210–223, 2003

Journal Article
TL;DR: A knowledge relative-reduction algorithm based on the decision attribute support degree in a knowledge expression system is proposed, and practical results show that the approach is effective in solving knowledge reduction.

Abstract: A kind of knowledge relative-reduction algorithm is proposed. With the decision attribute support degree applied in the knowledge expression system, the support degree of the knowledge supplied by a condition attribute for the whole decision is described, the relative importance degree and relative core are obtained, and the relative core is used as the initial population in a genetic algorithm (GA) in order to accelerate convergence. A punishing function is used in the fitness function to ensure that reductions have fewer attributes and stronger support, and the search performs well. Practical results showed that the approach is effective in solving knowledge reduction.

Journal ArticleDOI
TL;DR: In this paper, the authors present the concept of System Operational Effectiveness (SOE) as a generic framework for a wholistic system assessment by balancing factors pertaining to system performance, availability, process efficiency, and cost.
Abstract: An assessment framework to make explicit the “cause and effect” relationship between design decisions and their impact on system operations, maintenance, and support is essential to influence new and upgrade program development from the longer-term life-cycle perspective. This becomes even more urgent with increasingly greater utilization of commercial-off-the-shelf (COTS) elements within information and knowledge intensive systems in the commercial (IT, Telecommunication, Banking, Finance) and aerospace domains. These architectures are often characterized by an evolving physical baseline (technology refreshment) driven by obsolescence and end-of-life risk considerations. The first objective of this paper is to present the concept of System Operational Effectiveness (SOE). System Operational Effectiveness serves as a generic framework for a wholistic system assessment by balancing factors pertaining to system performance, availability, process efficiency, and cost. Then, given the significance of system training costs, the results of an industry survey on system training metrics and methods are presented. This survey was conducted to help understand training metrics currently utilized within industry with a particular focus on information and knowledge intensive systems. A subsequent objective is to delineate architectural attributes that can be used to assess architectural goodness with respect to training requirements and cost. This is an ongoing research initiative and initial results from this initiative are also presented. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 238–248, 2003

Journal ArticleDOI
TL;DR: Revised definitions and a new approach for appraising and using GAs have been developed for the Federal Aviation Administration Integrated Capability Maturity Model v2.0 (FAA‐iCMM®).
Abstract: Generic Attributes (GAs) are measures of process performance introduced by the systems engineering capability model, EIA/IS 731. The systems model defines two GAs—Effectiveness and Value. While the concept of GAs is generally accepted as valid, they have been used very little, due to difficulties in interpretation and appraisal. Revised definitions and a new approach for appraising and using GAs have been developed for the Federal Aviation Administration Integrated Capability Maturity Model® v2.0 (FAA-iCMM®). The improved approach to Generic Attributes provides the needed clarity and measurement objectivity. Definitions of the iCMM® GAs and their relationship to process model concepts are described in this paper, along with a practical approach to appraising GAs. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 301–308, 2003

Journal ArticleDOI
TL;DR: A “fractal” method of design is developed that embraces the entire space of candidate system solutions and provides for the direct synthesis of optimal system design solutions of complex system implementations with multiple components, embodying different technologies, and with multiple levels of design hierarchy.
Abstract: By building on earlier established work, a nonrecursive approach to design optimization is proposed. A “fractal” method of design is developed that embraces the entire space of candidate system solutions. It provides for the direct synthesis of optimal system design solutions of complex system implementations with multiple components, embodying different technologies, and with multiple levels of design hierarchy. The derivation and decomposition of comparative evaluation criteria and tradeoff functions, using “mathematical orders” over the space of candidate systems, is explored in detail. An extended form of the Subsystem Tradeoff Functional Equation (i.e., nk-STFE for n components and k evaluation criteria) is developed, and its application to optimal design of complex systems architectures is presented. In particular, the effect of overall constraints on system implementation—in terms of combinatorial constraints on component choice, and the manner in which this is incorporated into a systems-theoretic approach to complex system design—is discussed. The development of formal (systems-theoretic) constructs, theorems, and theorem proofs, are provided where necessary. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 92–105, 2003

Journal ArticleDOI
TL;DR: Insights gained from this case study enable proposing a general model of value creation in S&T, and the phenomena embodied in this model suggest several central hypotheses.
Abstract: Science and technology (S&T) involves a broad community of investors, sponsors, investigators, adopters, and end-users. This wide range of stakeholders adds value in a variety of ways, resulting in what are termed value streams. This article focuses on elaborating and formalizing S&T value streams. This formulation is evaluated in the context of a case study of computer-based intelligent tutoring systems. The value streams identified in this context span several decades of S&T investments, R&D in numerous organizations, and deployments in a variety of school settings. Insights gained from this case study enable proposing a general model of value creation in S&T. The phenomena embodied in this model suggest several central hypotheses. Approaches to evaluating these hypotheses are discussed. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 76–91, 2003

Journal Article
TL;DR: The modified TOPSIS is a new multicriteria approach that departs from the classical theory of the ideal solution (TOPSIS) in the evaluation of each feasible scheme.

Abstract: The modified TOPSIS is a new multicriteria approach that does not follow the classical theory of the ideal solution (TOPSIS) in the evaluation of each feasible scheme. This paper improves the ideal-solution criterion so that a scheme is judged not only by its closeness to the positive ideal point but also by its distance from the negative ideal point.
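The classical criterion the paper modifies can be stated concretely. A standard TOPSIS closeness coefficient already combines distance to the positive ideal with distance from the negative ideal; the sketch below implements that baseline (the paper's specific modification is not reproduced here):

```python
import math

def topsis_scores(matrix, weights, benefit):
    """Classic TOPSIS closeness coefficient. Rows of `matrix` are
    alternatives, columns are criteria; benefit[j] is True for
    criteria to maximize, False for criteria to minimize. Alternatives
    are ranked by D- / (D+ + D-), i.e. simultaneously close to the
    positive ideal and far from the negative ideal."""
    n_alt, n_cr = len(matrix), len(weights)
    # Vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_cr)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_cr)]
         for i in range(n_alt)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((v - p) ** 2 for v, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((v - q) ** 2 for v, q in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two alternatives, one benefit criterion and one cost criterion
scores = topsis_scores([[9, 1], [1, 9]], [0.5, 0.5], [True, False])
```

In this two-alternative example the first alternative is best on both criteria, so it coincides with the positive ideal and receives a closeness score of 1.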

Journal Article
TL;DR: Short-term traffic forecasting is defined, the principles of short-term traffic flow forecasting and the characteristics of the models are introduced, and the theoretical foundation, feasibility, and validity of several models are reviewed and appraised.

Abstract: In this paper, short-term traffic forecasting is defined, and the principles of short-term traffic flow forecasting and the characteristics of the models are introduced. The methods for short-term traffic flow forecasting are summarized. Then, the theoretical foundation, feasibility, and validity of several models are reviewed and appraised.

Journal Article
TL;DR: The experimental results show that the proposed Pareto multi-objective genetic algorithm for the multi-objective programming problem is efficient, and it can provide satisfactory solutions for decision making.

Abstract: In view of the limitations of traditional methods for solving multi-objective optimization problems, a Pareto multi-objective genetic algorithm for the multi-objective programming problem is proposed. The experimental results show that the proposed method is efficient and can provide satisfactory solutions for decision making.
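The defining step of a Pareto multi-objective GA is ranking candidates by dominance instead of collapsing objectives into one number. A minimal sketch of that ranking step, using the minimization convention (the GA's selection, crossover, and mutation operators are omitted):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the nondominated points: the core ranking step inside
    a Pareto multi-objective GA."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (4, 4) is dominated by (2, 2); the other three trade off against each other
front = pareto_front([(1, 5), (2, 2), (5, 1), (4, 4)])
```

The GA then breeds preferentially from the front, so the population approaches the whole trade-off surface rather than a single compromise solution.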

Journal Article
TL;DR: This paper proposes a formal model of ontology using Description Logic, and then analyzes the checking problems of terminology and instantiation.

Abstract: Ontology was originally used in Philosophy, where it indicated the systematic explanation of Existence. The term has since been used in Artificial Intelligence as an explicit specification of a conceptualization in various areas, such as conceptual modeling, information integration, agent-based system design, and the semantic web. Ontology models and ontology checking are still under active discussion. In this paper, we propose a formal model of ontology using Description Logic, and then analyze the checking problems of terminology and instantiation.

Journal Article
TL;DR: In this article, an order relation between interval numbers is proposed, by which inequality constraints of interval linear programming can be transformed into constraints with exact coefficients, so that an IvLP is transformed into an exact linear programming problem and can be solved, as shown by an example.

Abstract: A standard form of interval linear programming (IvLP) was first defined. Then an order relation between interval numbers was proposed, by which inequality constraints of IvLP could be transformed into constraints with exact coefficients. In the following, equality constraints of IvLP were studied and converted into inequality constraints with exact coefficients. As a result, an IvLP was transformed into an exact linear programming problem and could be solved. Finally, an example was given.
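The paper's own order relation is not given in the abstract, but one illustrative way to turn an interval inequality constraint into an exact one (for nonnegative variables) is to require it in the worst case. The endpoint check below confirms that the transformed constraint is sufficient for every realization of the coefficients; the specific order relation used here is an assumption, not the paper's definition:

```python
from itertools import product

def pessimistic_constraint(coeff_intervals, rhs_interval):
    """For an interval inequality sum [a_j^-, a_j^+] x_j <= [b^-, b^+]
    with x_j >= 0, one exact-coefficient sufficient condition is the
    worst case: sum a_j^+ x_j <= b^-. Returns the exact coefficients."""
    a_upper = [hi for (_, hi) in coeff_intervals]
    b_lower = rhs_interval[0]
    return a_upper, b_lower

def satisfies_all_realizations(x, coeff_intervals, rhs_interval):
    """Brute-force check over interval endpoints: for x >= 0 the
    pessimistic form implies the inequality holds for every
    realization of the coefficients and right-hand side."""
    for a in product(*coeff_intervals):
        for b in rhs_interval:
            if sum(ai * xi for ai, xi in zip(a, x)) > b:
                return False
    return True

coeffs, rhs = pessimistic_constraint([(1.0, 2.0), (0.5, 1.5)], (8.0, 10.0))
```

This conservative transform is only one of several possible order relations; less pessimistic ones trade feasibility guarantees for a larger feasible region.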

Journal ArticleDOI
TL;DR: In this article, a new class of complex systems engineering tools, along with new analytical techniques, are envisioned, which can further support the continuing efforts of scientists in unraveling the mysteries of highly complex self-forming and self-organizing systems.
Abstract: The accomplishments of scientists conducting complex systems research could significantly expand the role of systems engineers currently wrestling with more narrowly defined complex systems. The rapid growth of problems extending from newly formed complex systems in society, economic activities, and geopolitical dynamics argues further for the immediate fusion of scientific insight with practical engineering approaches for problem solving. In supporting this new role for systems engineers, a new class of complex systems engineering tools, along with new analytical techniques, are envisioned. Such tools and techniques can further support the continuing efforts of scientists in unraveling the mysteries of highly complex self-forming and self-organizing systems.

Journal Article
Fu Zhuo1
TL;DR: An improved GA for the VRPSTW is described, in which a new coding method, the adaptive mechanism of crossover and mutation, and penalty function are introduced.
Abstract: The vehicle routing problem with soft time windows (VRPSTW) is a variation of the vehicle routing problem (VRP), which is a typical NP-hard problem. In this paper, we describe an improved GA for the VRPSTW, in which a new coding method, an adaptive mechanism of crossover and mutation, and a penalty function are introduced. Computational results on a set of benchmark problems show that the procedure is efficient.
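The penalty-function idea for soft time windows can be made concrete: early and late service are allowed but charged against the GA fitness. The linear rates below are hypothetical, and the paper's coding scheme and adaptive operators are not reproduced:

```python
def soft_window_penalty(arrival_times, windows, early_rate=1.0, late_rate=2.0):
    """Penalty term added to a route's GA fitness under soft time
    windows: each customer has a window (earliest, latest); arriving
    outside it incurs a linear cost rather than infeasibility."""
    penalty = 0.0
    for t, (earliest, latest) in zip(arrival_times, windows):
        if t < earliest:
            penalty += early_rate * (earliest - t)   # waiting/early service
        elif t > latest:
            penalty += late_rate * (t - latest)      # late service
    return penalty
```

Because violations are penalized rather than forbidden, crossover and mutation can explore routes that a hard-window formulation would discard outright.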

Journal ArticleDOI
TL;DR: The key concept underlying the control strategy at the regional or global level is that the two attributes of the Complex Adaptive System in a dynamic environment, accessibility to many states and sensitivity to small perturbation, present us with an opportunity to manipulate the system's dynamics.
Abstract: Information infrastructures, such as the Internet and Computational Grids, have enabled a networked computing and communication environment. Although many organizations depend on network-centric information operations to support critical missions, existing information infrastructures provide little guarantee of the dependability of network-centric computing and communication. This paper discusses some problems with the dependability of existing information infrastructures, such as stateless or centralized resource management. A Complex Adaptive Systems approach to dependability of futuristic information infrastructures is then presented with emphasis on the detection of emergent states at the regional and global levels of these infrastructures, and the self-synchronized control of such infrastructures in response to emergent states. The key concept underlying our control strategy at the regional or global level is that the two attributes of the Complex Adaptive System in a dynamic environment, accessibility to many states and sensitivity to small perturbation, present us with an opportunity to manipulate the system’s dynamics. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 225–237, 2003

Journal ArticleDOI
TL;DR: The results of an experiment are shown that was designed to investigate the difficulty in distinguishing between the product and the process.
Abstract: When engineers design a system, they must design both the product and the process that will create it. Accordingly, systems engineers must write requirements for the product and the process. Stating these requirements in separate documents might make it easier to get the requirements right and manage the requirements when either the product or the process requirements change. But, of course, these two sets of documents must be intricately interrelated, integrated, and produced with extensive feedback loops. This paper shows the results of an experiment that was designed to investigate the difficulty in distinguishing between the product and the process. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 106–115, 2003 DOI 10.1002/sys.10035

Journal Article
TL;DR: In this article, an error propagation method for determining interval entropy weights is presented for uncertain multiple attribute decision making in which the attribute values are interval numbers, and a numerical example shows the feasibility and effectiveness of the proposed method.
Abstract: This paper investigates the problem of determining attribute weights in uncertain multiple attribute decision making, in which the attribute values take the form of interval numbers. Based on the traditional concept of entropy weights, an error propagation method for determining interval entropy weights is presented. Finally, a numerical example is given to show the feasibility and effectiveness of the proposed method.
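For reference, the classical (crisp) entropy-weight computation that the paper extends to interval data might look like the sketch below. Reducing each interval to a representative point (e.g., its midpoint) before applying it is an illustrative simplification and not the paper's error propagation method:

```python
import math

def entropy_weights(matrix):
    """Classical entropy weights for a crisp decision matrix
    (m alternatives x n attributes, all values positive).
    Attributes with more dispersion across alternatives get more weight."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    degrees = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [x / total for x in col]
        # Shannon entropy of the normalized column, scaled to [0, 1]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        degrees.append(1.0 - e)          # divergence degree
    s = sum(degrees)
    return [d / s for d in degrees]      # normalize to weights summing to 1
```

Note that an attribute taking the same value for every alternative has maximal entropy and therefore zero weight, since it cannot discriminate between alternatives.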

Journal ArticleDOI
TL;DR: In this paper, a simulation model was developed to illustrate the impact of a shared component build strategy on the manufacturing system that would construct a component, and the results from this simulation showed that selecting an alternative manufacturing system architecture could result in a $3.04 million cost avoidance.
Abstract: The push for significantly more affordable Department of Defense programs in a limited budget environment has created a need to better understand the cost impacts of manufacturing decisions made during the early phases of system development. To reduce costs, Department of Defense programs are investigating the benefits of implementing a shared component build strategy. To implement this strategy, program managers need tools that will evaluate how their decision will affect the cost of the systems they are developing. This paper reports on the implementation of a shared component build strategy on two major missile system programs. A simulation model was developed to illustrate the impact of a shared build on the manufacturing system that would construct a component. The simulation models the production line of a common component for two major missile programs. The combination of the simulation and the corresponding user interface demonstrates the ease of experimenting with various manufacturing system architectures and highlights the benefits achieved by utilizing modeling and simulation. The results from this simulation showed that selecting an alternative manufacturing system architecture could result in a $3.04 million cost avoidance. These results support the theory that modeling and simulation is an invaluable decision-making tool that can support evaluation of a shared production build strategy. © 2003 Wiley Periodicals, Inc. Syst Eng 6: 63–75, 2003
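The economics behind such a shared-build decision can be illustrated with a toy calculation (the cost structure and all figures below are hypothetical assumptions, not the paper's simulation model): a shared line pays the fixed tooling and facility cost once and pools both programs' volumes.

```python
def line_cost(fixed_cost, unit_cost, volume):
    """Total cost of one production line: fixed setup plus per-unit cost."""
    return fixed_cost + unit_cost * volume

def cost_avoidance(fixed, unit, vol_a, vol_b):
    """Savings from building a common component on one shared line
    instead of two separate program-specific lines."""
    separate = line_cost(fixed, unit, vol_a) + line_cost(fixed, unit, vol_b)
    shared = line_cost(fixed, unit, vol_a + vol_b)  # one fixed cost, pooled volume
    return separate - shared
```

In this simplified linear model the avoidance equals one fixed line cost; the paper's discrete-event simulation captures effects (queuing, line utilization, architecture alternatives) that a closed-form expression like this cannot.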

Journal Article
TL;DR: Analysis of the changing processes of space headway and their Poincaré sections shows that chaos indeed exists in the traffic flow, and useful conclusions for the study and application of traffic flow theory are presented.
Abstract: The chaos of traffic flow based on a car-following model is studied. The traffic flow is generated with the Bierley car-following model, programmed in MATLAB. The changing processes of the space headway between the front vehicle and the following vehicle in the traffic flow are obtained by simulation. Analysis of these changing processes and their Poincaré sections shows that chaos indeed exists in the traffic flow. The influence of the model parameters and the simulation parameters on the movement of the traffic flow is discussed, and the relevant simulation results are given. Finally, useful conclusions for the study and application of traffic flow theory are presented.
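A minimal sketch of the kind of car-following simulation the abstract describes is shown below, using a generic linear stimulus-response rule of the GM family; the exact Bierley formulation and the paper's MATLAB parameters are not reproduced here, and all parameter values are assumptions:

```python
def simulate_headway(v_lead, steps=200, dt=0.1, sensitivity=0.8,
                     v0_follow=10.0, headway0=20.0):
    """Follower accelerates in proportion to the leader-follower speed
    difference; returns the space-headway time series between the two."""
    v = v0_follow
    headway = headway0
    series = [headway]
    for k in range(steps):
        vl = v_lead(k * dt)               # leader speed at current time
        a = sensitivity * (vl - v)        # stimulus-response acceleration
        v += a * dt
        headway += (vl - v) * dt          # gap grows when leader is faster
        series.append(headway)
    return series
```

Driving `v_lead` with a periodic or perturbed speed profile and sampling the resulting headway series once per forcing period gives the kind of Poincaré section the paper analyzes for evidence of chaos.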