
Showing papers in "AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing" in 2014


Journal ArticleDOI
TL;DR: It is argued that interdisciplinary interest in design fixation has led to increasingly broad definitions of the phenomenon that may be undermining empirical research efforts, educational efforts to minimize fixation, and the acquisition and dissemination of transdisciplinary knowledge about fixation effects.
Abstract: The term design fixation is often used interchangeably to refer to situations where designers limit their creative output because of an overreliance on features of preexisting designs, or more generally, an overreliance on a specific body of knowledge directly associated with a problem. In this paper, we argue that interdisciplinary interest in design fixation has led to increasingly broad definitions of the phenomenon that may be undermining empirical research efforts, educational efforts to minimize fixation, and the acquisition and dissemination of transdisciplinary knowledge about fixation effects. To address these issues, we recommend that researchers consider categorizing fixation phenomena into one of three classifications: unconscious adherence to the influence of prior designs, conscious blocks to change, and intentional resistance to new ideas. Next, we distinguish between concept-based design fixation, fixation to a specific class of known design concepts, and knowledge-based design fixation, fixation to a problem-specific knowledge base. With these distinctions in place, we propose a system of orders of design fixation, recommend methods for reducing fixation in inventive design, and recommend areas that are in need of further research within the field of design science.

74 citations


Journal ArticleDOI
TL;DR: A literature review of the main machine learning based scheduling approaches from the last decade is presented; the knowledge obtained can be used to decide which dispatching rule is most appropriate at each moment in time.
Abstract: A common way of dynamically scheduling jobs in a manufacturing system is by implementing dispatching rules. The issues with this method are that the performance of these rules depends on the state the system is in at each moment and also that no “ideal” single rule exists for all the possible states that the system may be in. Therefore, it would be interesting to use the most appropriate dispatching rule for each instance. To achieve this goal, a scheduling approach that uses machine learning can be used. Analyzing the previous performance of the system (training examples) by means of this technique, knowledge is obtained that can be used to decide which is the most appropriate dispatching rule at each moment in time. In this paper, a literature review of the main machine learning based scheduling approaches from the last decade is presented.
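
The rule-selection idea the review surveys can be made concrete with a small sketch. The snippet below is a minimal illustration assuming scikit-learn; the state features, rule labels, and data are hypothetical stand-ins, not taken from any of the reviewed papers.

```python
# Minimal sketch: learn which dispatching rule fits the current system state.
# Features, labels, and data are hypothetical; scikit-learn is assumed.
from sklearn.tree import DecisionTreeClassifier

# Training examples: shop-floor snapshots labeled with the rule that performed
# best from that state (determined offline, e.g., by simulation).
# Features: [queue_length, mean_processing_time, due_date_tightness]
X = [
    [12, 4.0, 0.2],   # congested, tight due dates
    [3, 2.5, 0.9],    # light load, loose due dates
    [15, 5.5, 0.1],
    [2, 1.0, 0.8],
]
y = ["EDD", "SPT", "EDD", "SPT"]  # earliest due date / shortest processing time

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At run time, apply the rule predicted for the state the system is in now.
current_state = [10, 4.2, 0.3]
print(clf.predict([current_state])[0])
```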

58 citations


Journal ArticleDOI
TL;DR: This method extends conventional DSM representation schema by incorporating multilevel test coverage data as vectors into the off-diagonal cells to provide a richer description of potential interactions between product architecture and SE-V integration test tasks than conventional domain mapping matrices.
Abstract: The systems engineering V (SE-V) is an established process model to guide the development of complex engineering projects (INCOSE, 2011). The SE-V process involves decomposition and integration of system elements through a sequence of tasks that produce both a system design and its testing specifications, followed by successive levels of build, integration, and test activities. This paper presents a method to improve SE-V implementation by mapping multilevel data into design structure matrix (DSM) models. DSM is a representation methodology for identifying interactions between either components or tasks associated with a complex engineering project (Eppinger & Browning, 2012). Multilevel refers to SE-V data on complex interactions that are germane either at multiple levels of analysis (e.g., component versus subsystem) conducted either within a single phase or across multiple time phases (e.g., early or late in the SE-V process). This method extends conventional DSM representation schema by incorporating multilevel test coverage data as vectors into the off-diagonal cells. These vectors provide a richer description of potential interactions between product architecture and SE-V integration test tasks than conventional domain mapping matrices. We illustrate this method with data from a complex engineering project in the offshore oil industry. Data analysis identifies potential for unanticipated outcomes based on incomplete coverage of SE-V interactions during integration tests. In addition, assessment of multilevel features using maximum and minimum function queries isolates all the interfaces that are associated with either early or late revelations of integration risks based on the planned suite of SE-V integration tests.
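
A minimal sketch may help picture the extended DSM schema: off-diagonal cells hold test-coverage vectors rather than binary marks, and max/min queries over those vectors isolate interfaces whose integration risks are revealed late or not at all. The component names, the two-phase [early, late] vector, and the use of NumPy are assumptions for illustration only.

```python
# Sketch: a DSM whose off-diagonal cells hold multilevel test-coverage vectors
# [early-phase, late-phase]; names and numbers are invented.
import numpy as np

components = ["riser", "pump", "controller"]
n = len(components)

dsm = np.zeros((n, n, 2))       # dsm[i, j] = coverage vector for interface i-j
dsm[0, 1] = [0.8, 0.2]          # riser-pump: mostly covered by early tests
dsm[1, 2] = [0.0, 0.9]          # pump-controller: covered only late
dsm[0, 2] = [0.0, 0.0]          # riser-controller: not covered at all

# Max/min function queries over the vectors isolate risky interfaces.
for i in range(n):
    for j in range(i + 1, n):
        early, late = dsm[i, j]
        if max(early, late) == 0.0:
            print("no planned coverage:", components[i], "-", components[j])
        elif early == 0.0:
            print("risk revealed late:", components[i], "-", components[j])
```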

28 citations


Journal ArticleDOI
TL;DR: It is suggested that some level of presence can be linked with better design, and it implies that level of presence might serve as an indicator of performance and learning in similar design-and-build activities.
Abstract: This paper explores the role of a designer's sense of engagement in early stage design. In the field of virtual reality, presence and immersion are standard measures of an individual's sense of engagement and involvement in an activity. High levels of presence might indicate that the designer is highly focused on the work. The central research question is the following: do designers who are more engaged in design activity, as measured by presence and immersive tendency questionnaires, produce better designs? An experiment was conducted to assess presence and immersive tendencies within the context of a hands-on, open-ended design-and-build activity. The results indicated that the designers' sense of immersion and presence, as well as their sense of frustration and calmness, ranged widely while performing the design activity. It was found that higher levels of presence correlated with either high design performance or low design performance. Lower levels of presence correlated with average design performance. No correlations were found between immersive tendency and design performance. This study suggests that some level of presence can be linked with better design, and it implies that level of presence might serve as an indicator of performance and learning in similar design-and-build activities.

26 citations


Journal ArticleDOI
TL;DR: It is concluded that neither of the methods is superior but that each of them is suitable for use in distinct application scenarios because of its different properties.
Abstract: We present and compare two evolutionary algorithm based methods for rectangular architectural layout generation: dense packing and subdivision algorithms. We analyze the characteristics of the two methods on the basis of three floor plan scenarios. Our analyses include the speed with which solutions are generated, the reliability with which optimal solutions can be found, and the number of different solutions that can be found overall. In a following step, we discuss the methods with respect to their different user interaction capabilities. In addition, we show that each method has the capability to generate more complex L-shaped layouts. Finally, we conclude that neither of the methods is superior but that each of them is suitable for use in distinct application scenarios because of its different properties.

26 citations


Journal ArticleDOI
TL;DR: The evaluations have shown that Mechanix is as effective as paper-and-pencil-based homework for teaching method of joints truss analysis; focus groups with students who used the program have revealed that they believe Mechanix enhances their learning and that they are highly engaged while using it.
Abstract: Massive open online courses, online tutoring systems, and other computer homework systems are rapidly changing engineering education by providing increased student feedback and capitalizing upon online systems' scalability. While online homework systems provide great benefits, a growing concern among engineering educators is that students are losing both the critical art of sketching and the ability to take a real system and reduce it to an accurate but simplified free-body diagram (FBD). For example, some online systems allow the drag and drop of forces onto FBDs, but they do not allow the user to sketch the FBDs, which is a vital part of the learning process. In this paper, we discuss Mechanix, a sketch recognition tool that provides an efficient means for engineering students to learn how to draw truss FBDs and solve truss problems. The system allows students to sketch FBDs into a tablet computer or by using a mouse and a standard computer monitor. Using artificial intelligence, Mechanix can determine not only the component shapes and features of the diagram but also the relationships between those shapes and features. Because Mechanix is domain specific, it can use those relationships to determine not only whether a student's work is correct but also why it is incorrect. Mechanix is then able to provide immediate, constructive feedback to students without providing final answers. Within this manuscript, we document the inner workings of Mechanix, including the artificial intelligence behind the scenes, and present studies of the effects on student learning. The evaluations have shown that Mechanix is as effective as paper-and-pencil-based homework for teaching method of joints truss analysis; focus groups with students who used the program have revealed that they believe Mechanix enhances their learning and that they are highly engaged while using it.

23 citations


Journal ArticleDOI
TL;DR: The outcome of this work is a recursive CBR process suitable for engineering design and compatible with standards, which is designed to give flexibility within the CBR process as well as to provide guidelines to the designer.
Abstract: This paper addresses the fulfillment of requirements related to case-based reasoning (CBR) processes for system design. Considering that CBR processes are well suited for problem solving, the proposed method concerns the definition of an integrated CBR process in line with system engineering principles. After the definition of the requirements that the approach has to fulfill, an ontology is defined to capitalize knowledge about the design within concepts. Based on the ontology, models are provided for requirements and solutions representation. Next, a recursive CBR process, suitable for system design, is provided. Uncertainty and designer preferences as well as ontological guidelines are considered during the requirements definition, the compatible cases retrieval, and the solution definition steps. This approach is designed to give flexibility within the CBR process as well as to provide guidelines to the designer. Such questions as the following are conjointly treated: how to guide the designer to be sure that the requirements are correctly defined and suitable for the retrieval step, how to retrieve cases when there are no available similarity measures, and how to enlarge the research scope during the retrieval step to obtain a sufficient panel of solutions. Finally, an example of system engineering in the aeronautic domain illustrates the proposed method. A testbed has been developed and carried out to evaluate the performance of the retrieval algorithm, and a software prototype has been developed in order to test the approach. The outcome of this work is a recursive CBR process suitable for engineering design and compatible with standards. Requirements are modeled by means of flexible constraints, where the designer preferences are used to express the flexibility. Similar solutions can be retrieved even if similarity measures between features are not available. Simultaneously, ontological guidelines are used to guide the process and to aid the designer to express her/his preferences.
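
The flexible-constraint retrieval step admits a compact sketch: each requirement carries a trapezoidal preference profile, and cases are ranked by their degree of satisfaction instead of by feature similarity measures. The trapezoid parameters, the case base, and the Python representation below are invented for illustration.

```python
# Sketch: rank stored design cases by how well they satisfy a flexible
# (trapezoidal) requirement; all parameters and cases are invented.

def trapezoid(x, a, b, c, d):
    """Satisfaction degree: fully preferred on [b, c], tolerated on [a, b]
    and [c, d] with linearly decreasing preference, rejected outside."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

# Requirement: mass ideally 40-50 kg, tolerable between 30 and 60 kg.
requirement = dict(a=30, b=40, c=50, d=60)
cases = {"bracket-A": 45.0, "bracket-B": 58.0, "bracket-C": 75.0}

# Retrieval without feature similarity measures: sort by satisfaction degree;
# widening [a, d] enlarges the research scope if too few cases survive.
for name, mass in sorted(cases.items(),
                         key=lambda kv: trapezoid(kv[1], **requirement),
                         reverse=True):
    print(name, trapezoid(mass, **requirement))
```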

22 citations


Journal ArticleDOI
TL;DR: This research presents a grammar rule analysis method to provide a more systematic development process for grammar rules and aims to improve the quality of the rules and in turn have a major impact on the quality of the designs generated.
Abstract: The use of generative design grammars for computational design synthesis has been shown to be successful in many application areas. The development of advanced search and optimization strategies to guide the computational synthesis process is an active research area with great improvements in the last decades. The development of the grammar rules, however, often resembles an art rather than a science. Poor grammars drive the need for problem specific and sophisticated search and optimization algorithms that guide the synthesis process toward valid and optimized designs in a reasonable amount of time. Instead of tuning search algorithms for inferior grammars, this research focuses on designing better grammars to not unnecessarily burden the search process. It presents a grammar rule analysis method to provide a more systematic development process for grammar rules. The goal of the grammar rule analysis method is to improve the quality of the rules and in turn have a major impact on the quality of the designs generated. Four different grammars for automated gearbox synthesis are used as a case study to validate the developed method and show its potential.

20 citations


Journal ArticleDOI
TL;DR: In the first paper, “Evaluating FuncSION: A Software for Automated Synthesis of Design Solutions for Stimulating Ideation During Mechanical Conceptual Design,” Ujjwal Pal, Ying Chieh Liu, and Amaresh Chakrabarti address this problem directly by synthesizing transformations to the problem of converting energy from one distinct location to another.
Abstract: Nearly every human society has been fundamentally altered by computers in the last 30 years. We rely on servers, desktops, and handheld devices to facilitate duties of both our personal and professional lives. Most of this is just managing data: input, store, transfer, output. There is also analysis: given a set of inputs, solve for x. For engineers, x is a prediction of how hot, how much stress, how much efficiency, or how much cost. However, other than the use of computers to manage data or perform analysis, there is a third use of computation that is rarely perceived by the average computer user or engineer, and that is the use of computers in design synthesis. As our engineering artifacts grow in complexity, we need to offload some design decisions to the computer. We need the computer to help us synthesize many of the minute details in our engineering devices as well as ensure high performance by searching among a myriad of alternatives for the optimal combination of building blocks and parameter values. Computational design synthesis (CDS) is a research area focused on approaches to automating synthesis activities in design. Resulting methods may be fully automated or interactive with the goals of automatically generating a range of alternatives, sparking creativity and innovation, automating tedious or time-consuming engineering tasks, and simply exploring the creative abilities of computational systems. There is a fuzzy line between CDS and applied optimization. We have intended CDS to be more ambitious than the typical use of optimization to “solve for x.” It is intended to mimic what humans consider in design, not only parameters, like in a fixed vector, x, but also material choices, discrete component choices, and the basic architecture of building blocks. Such research is typically ambitious in scope, demanding in terms of developmental and computational resources, and extensive in terms of related work. The work is based on artificial intelligence, mathematical programming, computational geometry, graph theory, engineering design theory, and cognitive science. When applied later in the design process, meaningful results are only achievable by interfacing with the computational analysis tools that govern our engineering world, such as those for solving partial-differential field equations (e.g., finite element analysis or computational fluid dynamics) or those for solving ordinary-differential equations (e.g., three-dimensional dynamics). In addition to the technical challenge, it can be difficult to interface with these tools because many are expensive, deal with proprietary file formats, and are sometimes operable only within the tool’s graphical user interface. Combine this with highly iterative optimization methods (many design synthesis techniques require the use of large stochastic optimization methods), and researchers may occasionally find themselves up against practical limits in computational time and memory. Early in the design process, CDS has faced issues of how to represent and reason with nebulous notions of function and feasibility as well as methods of predicting performance when various parameters are undefined. 
In the first paper, “Evaluating FuncSION: A Software for Automated Synthesis of Design Solutions for Stimulating Ideation During Mechanical Conceptual Design,” Ujjwal Pal, Ying Chieh Liu, and Amaresh Chakrabarti address this problem directly by synthesizing transformations (without the need for detailed part definition) to the problem of converting energy from one distinct location to another. It is a classic example of a body of work that has grown steadily over the last three decades. It is our opinion, and likely the opinion of the paper’s authors, that this work must someday positively affect the design of electromechanical systems. CDS has also championed the use of generative design grammars as a means to simultaneously provide structure and design freedom during synthesis. A generative grammar is composed of rules that, unlike the traditional definition of rules, focus on defining the actions or design transformations and modifications that can be performed. Just like human languages (e.g., English, German, or Chinese), formal grammars require a vocabulary of terms that can take the form of strings, parameters, graphs, or shapes, which can also be represented by graphs. Unlike an expert system, the grammar rules are more about capturing design logic concisely than about …

15 citations


Journal ArticleDOI
TL;DR: The proposed method combines the generative abilities of graph grammars with the simulation and analysis power of conventional computational fluid dynamics methods to solve shape and topology optimization of fluid channels using generative design synthesis methods.
Abstract: This paper presents a new technique for shape and topology optimization of fluid channels using generative design synthesis methods. The proposed method combines the generative abilities of graph grammars with the simulation and analysis power of conventional computational fluid dynamics methods. The graph grammar interpreter GraphSynth is used to carry out graph transformations, which define different topologies for a given multiple-inlet multiple-outlet problem. After evaluation and optimization, the generated graphs are transformed into meaningful three-dimensional shapes. These solutions are then analyzed by a computational fluid dynamics solver for final evaluation of the possible solutions. The effectiveness of the proposed method is checked by solving a variety of available test problems and comparing the results with those found in the literature. Furthermore, by solving very complex large-scale problems, the robustness and effectiveness of the method are tested. To extend the work, future research directions are presented.

13 citations


Journal ArticleDOI
TL;DR: It is found that teams were less likely to apply overall biological analogies if they tended to recall existing solutions that could be easily associated with specific superficial or functional characteristics of biological phenomena.
Abstract: Biomimetic design applies biological analogies to solve design problems and has been known to produce innovative solutions. However, when designers are asked to perform biomimetic design, they often have difficulty recognizing analogies between design problems and biological phenomena. Therefore, this research aims to investigate designer behaviors that either hinder or promote the use of analogies in biomimetic design. A verbal protocol study was conducted on 30 engineering students working in small teams while participating in biomimetic design sessions. A coding scheme was developed to analyze cognitive processes involved in biomimetic design. We observed that teams were less likely to apply overall biological analogies if they tended to recall existing solutions that could be easily associated with specific superficial or functional characteristics of biological phenomena. We also found that the tendency to evaluate ideas, which reflects critical thinking, correlates with the likelihood of identifying overall biological analogies. Insights from this paper may contribute toward developing generalized methods to facilitate biomimetic design.

Journal ArticleDOI
TL;DR: These measures, based on examining product architecture characteristics that facilitate change (modularity, hierarchy, interfaces, performance sensitivity, and design margins), were incorporated in a software tool for exploring alternative configurations of fractionated space satellite systems.
Abstract: Adaptability can have many different definitions: reliability, robustness, survivability, and changeability (adaptability to requirements change). In this research, we focused entirely on the last type. We discuss two alternative approaches to requirements change adaptability. One is the valuation approach that is based on utility and cost of design changes in response to modified requirements. The valuation approach is theoretically sound because it is based on utility and decision theory, but it may be difficult to use in the real world. The second approach is based on examining product architecture characteristics that facilitate changes that include modularity, hierarchy, interfaces, performance sensitivity, and design margins. This approach is heuristic in nature but more practical to use. If calibrated, it could serve as a surrogate for real adaptability. These measures were incorporated in a software tool for exploring alternative configurations of fractionated space satellite systems.

Journal ArticleDOI
TL;DR: The goal of this work is to provide complex system designers with a means of using early design simulation data to identify and mitigate potential emergent failure behavior.
Abstract: This paper presents the use of data clustering methods applied to the analysis results of a design-stage, functional failure reasoning tool. A system simulation using qualitative descriptions of component behaviors and a functional reasoning tool are used to identify the functional impact of a large set of potential single and multiple fault scenarios. The impact of each scenario is collected as the set of categorical function “health” states for each component-level function in the system. This data represents the space of potential system states. The clustering and statistical tools presented in this paper are used to identify patterns in this system state space. These patterns reflect the underlying emergent failure behavior of the system. Specifically, two data analysis tools are presented and compared. First, a modified k-means clustering algorithm is used with a distance metric of functional effect similarity. Second, a statistical approach known as latent class analysis is used to find an underlying probability model of potential system failure states. These tools are used to reason about how the system responds to complex fault scenarios and assists in identifying potential design changes for fault mitigation. As computational power increases, the ability to reason with large sets of data becomes as critical as the analysis methods used to collect that data. The goal of this work is to provide complex system designers with a means of using early design simulation data to identify and mitigate potential emergent failure behavior.
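
A rough sketch of the first tool follows: a k-means-style loop adapted to categorical function-health data (essentially k-modes), where distance is the number of mismatched function states and cluster centers are per-function majority states. The scenario data and the simple mismatch metric are illustrative stand-ins for the paper's functional-effect similarity.

```python
# Sketch: cluster fault scenarios by the categorical "health" outcomes of
# three system functions; data and metric are invented for illustration.
from collections import Counter

scenarios = [
    ("ok", "degraded", "lost"),
    ("ok", "degraded", "lost"),
    ("lost", "lost", "lost"),
    ("ok", "ok", "degraded"),
    ("lost", "lost", "degraded"),
]

def distance(a, b):                    # mismatch count across functions
    return sum(x != y for x, y in zip(a, b))

def mode_center(cluster):              # per-function majority state
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def kmodes(data, k, iters=10):
    centers = []
    for row in data:                   # seed with the first k distinct rows
        if row not in centers:
            centers.append(row)
        if len(centers) == k:
            break
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in data:
            nearest = min(range(k), key=lambda i: distance(row, centers[i]))
            clusters[nearest].append(row)
        centers = [mode_center(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmodes(scenarios, k=2)
print(centers)   # cluster centers read as emergent failure patterns
```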

Journal ArticleDOI
TL;DR: A new discrete method for particle swarm optimization is presented, which can be widely applied in MSS to find the set of Pareto-optimal solutions for multiobjective optimization.
Abstract: The goal of machining scheme selection (MSS) is to select the most appropriate machining scheme for a previously designed part, for which the decision maker must take several aspects into consideration. Because many of these aspects may be conflicting, such as time, cost, quality, profit, resource utilization, and so on, the problem is rendered as a multiobjective one. Consequently, we consider a multiobjective optimization problem of MSS in this study, where production profit and machining quality are to be maximized while production cost and production time must be minimized, simultaneously. This paper presents a new discrete method for particle swarm optimization, which can be widely applied in MSS to find the set of Pareto-optimal solutions for multiobjective optimization. To deal with multiple objectives and enable the decision maker to make decisions according to different demands on each evaluation index, an analytic hierarchy process is implemented to determine the weight value of evaluation indices. A case study is included to demonstrate the feasibility and robustness of the hybrid algorithm. It is shown from the case study that the multiobjective optimization model can simply, effectively, and objectively select the optimal machining scheme according to the different demands on evaluation indices.
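
The AHP weighting step admits a compact sketch: weights for the evaluation indices are the normalized principal eigenvector of a pairwise comparison matrix, with a consistency index guarding against contradictory judgments. The judgments below are invented and NumPy is assumed.

```python
# Sketch: AHP weights as the normalized principal eigenvector of a pairwise
# comparison matrix; the judgments are invented for illustration.
import numpy as np

# A[i, j] = relative importance of index i over index j for
# (time, cost, quality, profit), in Saaty's 1-9 style.
A = np.array([
    [1.0, 2.0, 1/2, 1.0],
    [1/2, 1.0, 1/3, 1/2],
    [2.0, 3.0, 1.0, 2.0],
    [1.0, 2.0, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()
print("weights:", weights.round(3))      # quality gets the largest weight here

# Consistency index; small values mean the judgments are coherent.
n = A.shape[0]
CI = (eigvals.real.max() - n) / (n - 1)
print("consistency index:", round(CI, 4))
```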

Journal ArticleDOI
TL;DR: The proposed two-field model was able to capture important features of self-organizing systems, and the genetic algorithm was able to generate self-organizing mechanisms by which agents could form task-based structures to fulfill functional requirements.
Abstract: A computational approach for the design of self-organizing systems is proposed that employs a genetic algorithm to efficiently explore the vast space of possible configurations of a given system description. To generate the description of the system, a two-field based model is proposed in which agents are assigned parameterized responses to two “fields,” a task field encompassing environmental features and task objects, and a social field arising from agent interactions. The aggregate effect of these two fields, sensed by agents individually, governs the behavior of each agent, while the system-level behavior emerges from the actions of and interactions among the agents. Task requirements together with performance preferences are used to compose system fitness functions for evolving functional and efficient self-organizing mechanisms. Case studies on the evolutionary synthesis of self-organizing systems are presented and discussed. These case studies focus on achieving system-level behavior with minimal explicit coordination among agents. Agents were able to collectively display flocking, exploration, and foraging through self-organization. The proposed two-field model was able to capture important features of self-organizing systems, and the genetic algorithm was able to generate self-organizing mechanisms by which agents could form task-based structures to fulfill functional requirements.
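
One update step of the two-field model can be sketched as follows, assuming NumPy: each agent sums an attraction toward a task object (task field) and a separation from nearby agents (social field), with the weights standing in for the parameterized responses a genetic algorithm would evolve. All values are illustrative.

```python
# Sketch: one step of a two-field agent update; parameters are invented and
# would be the kind of values the paper's genetic algorithm evolves.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 10, size=(5, 2))   # five agents in the plane
task_object = np.array([8.0, 8.0])

w_task, w_social, radius = 0.6, 0.4, 2.0      # evolvable response parameters

def step(positions):
    new = positions.copy()
    for i, p in enumerate(positions):
        task_field = task_object - p          # pull toward the task object
        social_field = np.zeros(2)            # push away from close neighbors
        for j, q in enumerate(positions):
            d = np.linalg.norm(p - q)
            if i != j and d < radius:
                social_field += (p - q) / (d + 1e-9)
        force = w_task * task_field + w_social * social_field
        new[i] = p + 0.1 * force              # aggregate field drives motion
    return new

for _ in range(50):
    positions = step(positions)
print(positions.round(2))   # agents gather near the task while keeping spacing
```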

Journal ArticleDOI
TL;DR: A reference framework for configuration is proposed that permits a more precise understanding of a configuration task, a definition of the basic concepts in product configuration, and a total configuration system view that describes how operators come together to perform the configuration task in the configuration process.
Abstract: This paper presents a reference framework for the configuration process. The reference framework is established through an extensive review of existing literature, and as such consolidates an extensive theoretical base. The review of literature shows a broadening of the understanding of the configuration task. The definition of the configuration task is somewhat ambiguous because different research groups define configuration tasks differently. This paper proposes a reference framework for configuration that permits a more precise understanding of a configuration task, a definition of the basic concepts in product configuration, and a total configuration system view that describes how operators come together to perform the configuration task in the configuration process. We define the product, the product model, the configuration task, and the configuration system, and place the whole in perspective with the theory of technical systems, where we describe the configuration process and the different abstraction levels of configurations. We also use the resulting framework to describe sales configuration, technical configuration, and reconfiguration. We do this to synthesize previous work, to clarify and make coherent definitions of relevant terms, to extend the definition of product configuration to include “softer” products like information and service, and finally, to give a comparative framework to analyze work done in the field of product configuration. The total configuration system, together with the definition of key concepts, comprises a strong reference framework when working with, developing, and analyzing configuration systems.

Journal ArticleDOI
TL;DR: An overview of FuncSION in terms of representation of design problems, representation of building blocks, and rules with which building blocks are combined to generate concepts at three levels of abstraction: topological, spatial, and physical is provided.
Abstract: The goal of the work reported in this paper is to use automated, combinatorial synthesis to generate alternative solutions to be used as stimuli by designers for ideation. FuncSION, a computational synthesis tool that can automatically synthesize solution concepts for mechanical devices by combining building blocks from a library, is used for this purpose. The objectives of FuncSION are to help generate a variety of functional requirements for a given problem and a variety of concepts to fulfill these functions. A distinctive feature of FuncSION is its focus on automated generation of spatial configurations, an aspect rarely addressed by other computational synthesis programs. This paper provides an overview of FuncSION in terms of representation of design problems, representation of building blocks, and rules with which building blocks are combined to generate concepts at three levels of abstraction: topological, spatial, and physical. The paper then provides a detailed account of evaluating FuncSION for its effectiveness in providing stimuli for enhanced ideation.

Journal ArticleDOI
TL;DR: This paper presents an approach based on a generative multiperformance framework, configured for generating and optimizing architectural designs based on a precedent design; the system is constructed using a parametric modeling environment enabling the capture of precedent designs, extraction of spatial analytics, and demonstration of how populations can be used to drive the generation and optimization of alternate spatial solutions.
Abstract: Architectural spatial design is a wicked problem that can have a multitude of solutions for any given brief. The information needed to resolve architectural design problems is often not readily available during the early conceptual stages, requiring proposals to be evaluated only after an initial solution is reached. This “solution-driven” design approach focuses on the generation of designs as a means to explore the solution space. Generative design can be achieved computationally through parametric and algorithmic processes. However, utilizing a large repertoire of organizational patterns and design precedent knowledge together with the precise criteria of spatial evaluation can present design challenges even to an experienced architect. In the implementation of a parametric design process lies an opportunity to supplement the designer's knowledge with computational decision support that provides real-time spatial feedback during conceptual design. This paper presents an approach based on a generative multiperformance framework, configured for generating and optimizing architectural designs based on a precedent design. The system is constructed using a parametric modeling environment enabling the capture of precedent designs, extraction of spatial analytics, and demonstration of how populations can be used to drive the generation and optimization of alternate spatial solutions. A pilot study implementing the complete workflow of the system is used to illustrate the benefits of coupling parametric modeling with structured precedent analysis and design generation.

Journal ArticleDOI
TL;DR: The process used to extract the architect's knowledge and to incorporate it into the transformation grammar enables us to abstract the designer's actions and to define a sequence of actions that constitutes a possible design strategy.
Abstract: This article focuses on a shape grammar for the rabo-de-bacalhau housing style, developed to enable the adaptation of existing houses to new requirements, and most particularly on the process of inferring the grammar. In the article we describe the process undertaken to develop the grammar and what the transformation grammar achieves regarding the possibilities of mass customization of a dwelling's rehabilitation work. The goal of this article is to describe and discuss how the designer's knowledge was encoded into shape rules. The process used to extract the architect's knowledge and to incorporate it into the transformation grammar enables us to abstract the designer's actions and to define a sequence of actions that constitutes a possible design strategy. The proposed design methodology generates dwelling layouts that are legal, because they follow the grammar language, and adequate, because they meet the a priori user and design requirements.

Journal ArticleDOI
TL;DR: This Special Issue includes a subset of the research in these communities, focusing specifically on the early design stages and on reliability and uncertainty modeling, analysis, and simulation in complex systems.
Abstract: As engineering systems grow in size and complexity, they pose increasingly significant challenges. In particular, the cost and time required for design and development are growing at an unsustainable rate, with delays and cost overruns at major institutions such as Boeing, Airbus, NASA’s Constellation program, General Motors, and Chrysler. Furthermore, failures introduced during the development process result in serious consequences for leading industries developing large complex systems such as aircraft, space launch systems, submarines, and military vehicles. Over the last decade, these industries have acknowledged that the challenges warrant new theories and methodologies. As a result, multiple research communities have risen to the challenge of exploring the fundamental issues leading to the growing problems. This Special Issue includes a subset of the research in these communities, focusing specifically on the early design stages. The Special Issue begins with some very interesting work from Steven Eppinger, Nitin Joglekar, Alison Olechowski, and Terence Teo, who, in their paper, “Improving the Systems Engineering Process with Multilevel Analysis of Interactions,” present some advances in multilevel design structure matrix (DSM) representations of complex engineering systems. The method accounts for multilevel data in the analysis of dependencies using DSM models, extending representation schema that incorporate multilevel and multiple timescale test coverage data as vectors into the off-diagonal DSM cells. A particularly inventive aspect of the method integrates the product architecture, as modeled by the DSM, and the traditional systems engineering V-model of tasks in a systems engineering development process. Readers of this paper will recognize that the multilevel analysis of DSMs contributes a data collection and mapping methodology, providing engineering managers with insights on improving the systems integration process. It also contributes a theoretical basis and a method for data aggregation and query that accounts for differing scales, in terms of both level and timing, to explore if different types of integration risks may be evident at different time scales. Reliability and uncertainty are concepts of paramount importance in designing complex engineering systems. Appropriately, there is a set of papers that focus on unique elements of reliability and uncertainty modeling, analysis, and simulation in complex systems. In their paper, “A Robust System Reliability Analysis Using Partitioning and Parallel Processing of Markov Chain,” Po Ting Lin, Yu-Cheng Chou, Yung Ting, Shian-Shing Shyu, and Chang-Kuo Chen present a robust method to analyze the reliability of complex redundant systems. Using an approach to partition and reorder a Markov chain’s transition probability matrix, parallel computing techniques are leveraged to facilitate the execution of the method because submatrix calculations are independent of each other. The coupled submatrices can represent subsystems, modules, or controllers of a larger complex system, allowing the model to be used across a diverse set of applications. Simulation results demonstrate that compared with the sequential method applied to an intact Markov chain, the proposed method can improve the performance and produce allowable accuracy for the reliability analysis on large-scale systems.
In their paper, “Managing Uncertainty in Potential Supplier Identification,” Yun Ye, Marija Jankovic, Gül Kremer, and Jean-Claude Bocquet focus on the problem of identifying a network of suppliers to provide a large number of compatible modules for a complex system integration and assembly. Because of the use of modular design in complex systems, suppliers are typically more involved in the innovative design of the system. However, using novel architectures and suppliers with potentially better performance often comes with higher levels of uncertainty, because new suppliers usually add uncertainties to the system development. This makes the technical ability of suppliers more important to better satisfy system requirements. To address this challenge, in this paper, an Architecture and Supplier Identification Tool is presented, which generates all possible product architectures and the corresponding suppliers based on new requirements through matrix mapping and propagation. The Architecture and Supplier Identification Tool allows for the overall uncertainty and requirements satisfaction of generated architectures to be estimated and controlled.

Journal ArticleDOI
TL;DR: A multistudy approach is presented that allows design thinking of complex systems to be studied by triangulating causal controlled lab findings with coded data from more complex products, making them more powerful while studying complex engineering systems.
Abstract: A multistudy approach is presented that allows design thinking of complex systems to be studied by triangulating causal controlled lab findings with coded data from more complex products. A case study illustration of this approach is provided. During the conceptual design of engineering systems, designers face many cognitive challenges, including design fixation, errors in their mental models, and the sunk cost effect. These factors need to be mitigated for the generation of effective ideas. Understanding the effects of these challenges in a realistic and complex engineering system is especially difficult due to a variety of factors influencing the results. Studying the design of such systems in a controlled environment is extremely challenging because of the scale and complexity of such systems and the time needed to design the systems. Considering these challenges, a mixed-method approach is presented for studying the design thinking of complex engineering systems. This approach includes a controlled experiment with a simple system and a qualitative cognitive-artifacts study on more complex engineering systems followed by the triangulation of results. The triangulated results provide more generalizable information for complex system design thinking. This method combines the advantages of quantitative and qualitative study methods, making them more powerful while studying complex engineering systems. The proposed method is illustrated further using an illustrative study on the cognitive effects of physical models during the design of engineering systems.

Journal ArticleDOI
TL;DR: The proposed fuzzy regression based approach can successfully optimize fuzzy multiple responses in a wide range of manufacturing applications based on Taguchi's method, and it has the distinct advantage of being able to generate models using only a small number of experimental data sets and minimizing inherent variations.
Abstract: In reality, the behavior of processes is sometimes vague and the observed data are irregular. This research proposes an approach for optimizing fuzzy multiple responses using fuzzy regression. In this approach, each response repetition is transformed into a signal-to-noise ratio and then modeled using statistical multiple regression. A trapezoidal fuzzy regression model is formulated for each response utilizing the statistical regression coefficients. The most desirable response values and the deviation function are determined for each response. Finally, four optimization models are formulated for the trapezoidal membership fuzzy number to obtain the optimal factor level at each number. Two case studies are adopted for illustration, where excluding response fuzziness would result in misleading optimal factor settings if solved by the traditional optimization techniques. In conclusion, the proposed fuzzy regression based approach can successfully optimize fuzzy multiple responses in a wide range of manufacturing applications based on Taguchi's method. Moreover, compared to other approaches, such as data envelopment analysis and grey relational analysis, the proposed approach has the distinct advantage of being able to generate models using only a small number of experimental data sets and minimizing inherent variations.
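
The first two steps of the approach, converting response repetitions into signal-to-noise ratios and fitting a statistical regression on them, can be sketched as below (assuming NumPy, with invented data and a larger-the-better response); the trapezoidal fuzzy model built on the resulting coefficients is beyond this fragment.

```python
# Sketch: Taguchi-style S/N transformation of repetitions, then multiple
# regression of S/N on factor levels; data are invented for illustration.
import numpy as np

# Four experimental runs (two factors), three repetitions of one response.
factors = np.array([[1, 1], [1, 2], [2, 1], [2, 2]], dtype=float)
reps = np.array([
    [21.0, 22.5, 20.8],
    [25.1, 24.7, 26.0],
    [19.2, 18.8, 19.5],
    [27.3, 28.0, 27.1],
])

# Larger-the-better S/N ratio: -10 log10(mean(1 / y^2)) per run.
sn = -10 * np.log10(np.mean(1.0 / reps**2, axis=1))

# Statistical multiple regression of S/N on the factor levels.
X = np.column_stack([np.ones(len(factors)), factors])
coef, *_ = np.linalg.lstsq(X, sn, rcond=None)
print("S/N ratios:", sn.round(2))
print("regression coefficients:", coef.round(3))
```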

Journal ArticleDOI
TL;DR: The Architecture & Supplier Identification Tool is proposed, which generates all possible architectures and corresponding suppliers based on new requirements through matrix mapping and propagation and aims at providing decision support for early design of complex systems.
Abstract: As a benefit of modularization of complex systems, original equipment manufacturers (OEMs) can choose suppliers in a less constricted way when faced with new or evolving requirements. However, new suppliers usually add uncertainties to the system development. Because suppliers are tightly integrated into the design process in modular design and therefore greatly influence the outcome of the OEM's products, the uncertainty along with requirements satisfaction of the suppliers and their modules should be controlled starting from potential supplier identification. In addition, to better satisfy new requirements, the potential supplier identification should be combined with architecture generation to enable the new technology integration. In this paper, we propose the Architecture & Supplier Identification Tool, which generates all possible architectures and corresponding suppliers based on new requirements through matrix mapping and propagation. Using the Architecture & Supplier Identification Tool, the overall uncertainty and requirements satisfaction of generated architectures can be estimated and controlled. The proposed method aims at providing decision support for early design of complex systems, thereby helping OEMs have an integrated view of suppliers and system architectures in requirements satisfaction and overall uncertainty.
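
The matrix mapping and propagation idea reduces to a sketch like the following: binary matrices link requirements to candidate modules and modules to suppliers, and their product marks feasible requirement-supplier pairs. The matrices and names are invented; the actual tool also propagates uncertainty and requirements-satisfaction estimates along these links.

```python
# Sketch: matrix mapping and propagation from requirements to suppliers;
# matrices and names are invented for illustration.
import numpy as np

requirements = ["range", "payload"]
modules = ["engine-A", "engine-B", "frame-X"]
suppliers = ["S1", "S2"]

# req_to_mod[r, m] = 1 if module m can help satisfy requirement r.
req_to_mod = np.array([[1, 1, 0],
                       [0, 0, 1]])
# mod_to_sup[m, s] = 1 if supplier s offers module m.
mod_to_sup = np.array([[1, 0],
                       [0, 1],
                       [1, 0]])

# Propagation: nonzero entries mark feasible requirement-supplier pairs.
req_to_sup = req_to_mod @ mod_to_sup
for r, row in zip(requirements, req_to_sup):
    print(r, "->", [s for s, v in zip(suppliers, row) if v])
```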

Journal ArticleDOI
TL;DR: When PIBSO is applied to solve the printed circuit board assembly optimization problem (PCBAOP), it outperforms existing genetic algorithm and adaptive particle swarm optimization approaches in tour length and CPU running time, respectively.
Abstract: A novel swarm intelligence approach for combinatorial optimization is proposed, which we call probability increment based swarm optimization (PIBSO). The population evolution mechanism of PIBSO is depicted. Each state in the search space has a probability to be chosen. The rule for increasing the probabilities of states is established. An incremental factor is proposed to update the probability of a state, and its value is determined by the fitness of the state. It lets the states with better fitness have higher probabilities. Usual roulette wheel selection is employed to select states. Population evolution is impelled by roulette wheel selection and state probability updating. The most distinctive feature of PIBSO is that roulette wheel selection and probability updating produce a trade-off between global and local search. When PIBSO is applied to solve the printed circuit board assembly optimization problem (PCBAOP), it outperforms existing genetic algorithm and adaptive particle swarm optimization approaches in tour length and CPU running time, respectively. The reason for having such advantages is analyzed in detail. The success of the PCBAOP application verifies the effectiveness and efficiency of PIBSO and shows that it is a good method for combinatorial optimization in engineering.
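
PIBSO's two core mechanisms, fitness-scaled probability increments and roulette-wheel selection, can be sketched on a toy one-dimensional problem; the increment schedule and fitness function below are illustrative, not the paper's exact rules.

```python
# Sketch of PIBSO's core loop on a toy problem; increment schedule and
# fitness function are invented for illustration.
import random

states = list(range(10))            # discrete search space
prob = [1.0] * len(states)          # unnormalized probability of each state

def fitness(s):
    return 10 - abs(s - 7)          # maximize: best state is s = 7

def roulette(prob):
    r = random.uniform(0, sum(prob))
    acc = 0.0
    for s, p in zip(states, prob):
        acc += p
        if r <= acc:
            return s
    return states[-1]

random.seed(0)
for _ in range(200):
    s = roulette(prob)              # roulette wheel selection of a state
    prob[s] += 0.1 * fitness(s)     # incremental factor scaled by fitness

print(max(states, key=lambda s: prob[s]))   # probability mass gathers near 7
```

States with better fitness receive larger increments and are in turn sampled more often, which is exactly the trade-off between global and local search the abstract describes.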

Journal ArticleDOI
TL;DR: FBR is a real-time, dynamical, distributed mechanism that regulates the emergence process for CSO systems to self-organize and self-reconfigure in complex operation environments and extends the system flexibility and robustness without imposing global control over local cells or agents.
Abstract: Multiagent systems have been considered as a potential solution for developing adaptive systems. In this research, a cellular self-organizing (CSO) approach is proposed for developing such multiagent adaptive systems. The design of CSO systems, however, is difficult because the global effect emerges from local actions and interactions that are often hard to specify and control. In order to achieve high-level flexibility and robustness of CSO systems and retain the capability of specifying desired global effects, we propose a field-based regulative control mechanism, called field-based behavior regulation (FBR). FBR is a real-time, dynamical, distributed mechanism that regulates the emergence process for CSO systems to self-organize and self-reconfigure in complex operation environments. FBR characterizes the task environment in terms of “fields” and extends the system flexibility and robustness without imposing global control over local cells or agents. This paper describes the model of CSO systems and FBR, and demonstrates their effectiveness through simulation-based case studies.

Journal ArticleDOI
TL;DR: The articles in this Special Issue were submitted by authors present at the conference, and they were peer reviewed through two rounds of reviews.
Abstract: The Fifth International Conference on Design Computing and Cognition was held at Texas A&M University in College Station, Texas, on June 7–9, 2012 (Gero, in press). The main conference was preceded by six workshops: NSF Bio-Inspired Design Workshop: Charting a Course for ComputerAided Design; Analogies and Metaphors in Design Cognition: Theory and Tools for Design Practice; Evaluation Methods for Creativity Support Environments; Functional Descriptions in Engineering; Studying the Design Process: Quantitative and Qualitative Approaches; and Design Creativity Workshop 2012. The articles in this Special Issue were submitted by authors present at the conference, and they were peer reviewed through two rounds of reviews. The Design Computing and Cognition Conference engages a diverse audience with shared and complementary interests. A selection of topics includes the following:

Journal ArticleDOI
TL;DR: This paper proposes a criterion based on similarity among fractal cells, developed and implemented in a tabu search heuristic, in order to allocate the cells on the shop floor in a feasible computational time; the procedure proves quite promising.
Abstract: Recently, new types of layouts have been proposed in the literature in order to handle a large number of products. Among these is the fractal layout, which aims to minimize routing distances. Researchers have already focused on its design; however, we have noticed that the current approach usually executes the allocation of fractal cells on the shop floor several times until the best allocation is found, which may present a significant disadvantage when applied to a large number of fractal cells owing to combinatorial features. This paper proposes a criterion, based on similarity among fractal cells, developed and implemented in a tabu search heuristic, in order to allocate the cells on the shop floor in a feasible computational time. Once our proposed procedure is modeled, the operations of each workpiece are separated into n subsets and submitted to simulation. The results (traveling distance and makespan) are compared to a distributed layout and to a functional layout. The results show, in general, a trade-off behavior; that is, when the total routing distance decreases, the makespan increases. Based on our proposed method, depending on the value of segregated fractal cell similarity, it is possible to reduce both performance parameters. Finally, we conclude that the proposed procedure is quite promising because the allocation of fractal cells demands reduced central processing unit time.
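
A minimal tabu search over cell-to-site assignments conveys the flavor of the proposed procedure. The objective used here (keeping highly similar fractal cells apart so each floor region offers similar capabilities) is only one plausible reading of the similarity criterion; the cells, sites, similarity values, and swap neighborhood are invented, and no aspiration criterion is included.

```python
# Sketch: tabu search assigning fractal cells to floor sites; objective and
# data are invented for illustration.
import itertools

sites = [(0, 0), (0, 1), (1, 0), (1, 1)]     # grid positions on the floor
cells = ["C1", "C2", "C3", "C4"]
sim = {("C1", "C2"): 0.9, ("C1", "C3"): 0.1, ("C1", "C4"): 0.2,
       ("C2", "C3"): 0.2, ("C2", "C4"): 0.1, ("C3", "C4"): 0.8}

def dist(a, b):                              # rectilinear travel distance
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def cost(assign):                            # similar cells should sit apart
    return sum(s / (1 + dist(assign[u], assign[v]))
               for (u, v), s in sim.items())

assign = dict(zip(cells, sites))
best, tabu = dict(assign), []
for _ in range(30):
    def after(move):                         # neighborhood: swap two cells
        a = dict(assign)
        a[move[0]], a[move[1]] = a[move[1]], a[move[0]]
        return a
    candidates = [m for m in itertools.combinations(cells, 2)
                  if m not in tabu]
    move = min(candidates, key=lambda m: cost(after(m)))
    assign = after(move)
    tabu = (tabu + [move])[-3:]              # short-term memory of recent swaps
    if cost(assign) < cost(best):
        best = dict(assign)
print(best, round(cost(best), 3))
```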

Journal ArticleDOI
TL;DR: It is concluded that mechanically transformable products support and enable ECBs, especially when existing infrastructure presents obstacles; such products may increase the rate of participation in ECB, which in turn justifies improvements to the infrastructure shortfalls for which they compensate.
Abstract: The aim of this work is to explore the relationship between products that mechanically transform and individual environmentally conscious behavior (ECB). Our qualitative study led to observations on how each of the three transformation principles, expand/collapse, expose/cover, and fuse/divide, specifically supports ECBs. As expected, expand/collapse enables better portability of products. Increased portability of reusable products (e.g., travel mugs and shopping bags) reduces reliance on their disposable counterparts. A less expected observation is that increased portability also increases the spontaneity by which ECBs could be carried out. While there are fewer ECB supporting products that incorporate the expose/cover principle, we believe that it enables one to include, yet hide potentially unaesthetic, features that support ECB in often-used or worn items. Finally, we found fuse/divide to enable portability beyond what is possible with expand/collapse alone. Fuse/divide may also make possible other product transport and reuse strategies. We conclude that mechanically transformable products support and enable ECBs, especially when existing infrastructure presents obstacles. Such products may increase the rate of participation in ECB, which then justifies improvements to the shortfalls in infrastructure for which they compensate.

Journal ArticleDOI
TL;DR: It is found that local impulses tend to slow convergence, but systems also subjected to dissolution or division impulses still favor parallel arrangements, and statistically uphold the conclusion that the strategy to mitigate combination impulses is unaffected by the presence of local impulses.
Abstract: During the design of complex systems, a design process may be subjected to stochastic disruptions, interruptions, and changes, which can be described broadly as “design impulses.” These impulses can have a significant impact on the transient response and converged equilibrium for the design system. We distinguish this research by focusing on the interactions between local and architectural impulses in the form of designer mistakes and dissolution, division, and combination impulses, respectively, for a distributed design case study. We provide statistical support for the “parallel character hypothesis,” which asserts that parallel arrangements generally best mitigate dissolution and division impulses. We find that local impulses tend to slow convergence, but systems also subjected to dissolution or division impulses still favor parallel arrangements. We statistically uphold the conclusion that the strategy to mitigate combination impulses is unaffected by the presence of local impulses.

Journal ArticleDOI
TL;DR: The simulation results show that, compared with the sequential method applied to an intact Markov chain, the proposed PPMC can improve the performance and produce allowable accuracy for the reliability analysis on large-scale systems of MMR controllers.
Abstract: This paper presents a robust reliability analysis method for systems of multimodular redundant (MMR) controllers using the method of partitioning and parallel processing of a Markov chain (PPMC). A Markov chain is formulated to represent the N distinct states of the MMR controllers. Such a Markov chain has N² directed edges, and each edge corresponds to a transition probability between a pair of start and end states. Because N can easily increase substantially, the system reliability analysis may require large computational resources, such as central processing unit usage and memory occupation. By the PPMC, a Markov chain's transition probability matrix can be partitioned and reordered, such that the system reliability can be evaluated through only the diagonal submatrices of the transition probability matrix. In addition, calculations regarding the submatrices are independent of each other and thus can be conducted in parallel to assure efficiency. The simulation results show that, compared with the sequential method applied to an intact Markov chain, the proposed PPMC can improve the performance and produce allowable accuracy for the reliability analysis on large-scale systems of MMR controllers.
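
The partitioning idea can be sketched as follows: once the transition probability matrix is reordered into blocks, each diagonal submatrix can be processed independently, so the per-block computations map directly onto parallel workers (threads stand in for processes here). The four-state chain and the per-block metric are illustrative, not the paper's exact reliability computation.

```python
# Sketch of the PPMC idea: independent per-block computations on the diagonal
# submatrices of a reordered transition matrix; data are invented.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Transition matrix already reordered so states {0,1} and {2,3} form blocks.
P = np.array([
    [0.90, 0.08, 0.01, 0.01],
    [0.05, 0.90, 0.03, 0.02],
    [0.01, 0.01, 0.95, 0.03],
    [0.02, 0.02, 0.06, 0.90],
])
blocks = [slice(0, 2), slice(2, 4)]          # partition of the state space

def block_metric(b, k=10):
    """Probability mass staying within block b over k steps, a crude
    per-module 'survival' figure from the diagonal submatrix alone."""
    sub = P[b, b]
    return np.linalg.matrix_power(sub, k).sum(axis=1)

# Diagonal submatrices are independent, so they map cleanly onto workers.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(block_metric, blocks))
print(results)
```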