Journal ArticleDOI

Family learning: A process modeling method for cyber-additive manufacturing network

09 Feb 2021-Vol. 54, pp 1-16
TL;DR: A data-driven model called family learning is proposed to jointly model similar-but-non-identical products as family members in the CAMNet, quantifying the information shared among these products by optimizing a similarity generation model based on design factors.
Abstract: A Cyber-Additive Manufacturing Network (CAMNet) integrates connected additive manufacturing processes with advanced data analytics as computation services to support personalized product realization...
Citations

Journal ArticleDOI
TL;DR: Data-driven design (D3), a new design paradigm benefiting from advanced data analytics and computational intelligence, has gradually promoted research on data-driven product design (DDPD) since the 2000s, as discussed by the authors.

11 citations

Proceedings ArticleDOI
01 May 2019
TL;DR: A method to decompose a group of existing advanced data analytics models into their distributed variants via the alternating direction method of multipliers (ADMM) is proposed, which improves the computation services in a Fog-Cloud computation network.
Abstract: Cyber-manufacturing systems (CMS) interconnect manufacturing facilities via sensing and actuation networks to provide reliable computation and communication services in smart manufacturing. In CMS, various advanced data analytics have been proposed to support effective decision-making. However, most of them were formulated in a centralized manner to be executed on single workstations, or on Cloud computation units as the data size dramatically increases. Therefore, the computation or communication service may not be responsive enough to support online decision-making in CMS. In this research, a method to decompose a group of existing advanced data analytics models (i.e., family learning for CMS modeling) into their distributed variants is proposed via the alternating direction method of multipliers (ADMM). It improves the computation services in a Fog-Cloud computation network. A simulation study is conducted to validate the advantages of the proposed distributed method on a Fog-Cloud computation network over a Cloud computation system. In addition, six performance evaluation metrics are adopted from the literature to assess the performance of computation and communication. The evaluation results also indicate the relationship between Fog-Cloud architectures and computation performances, which can contribute to the efficient design of Fog-Cloud architectures in the future.

10 citations
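The decomposition idea above can be sketched in a few lines: a lasso-style regression split across several nodes and solved by consensus ADMM, where each node updates a local estimate in parallel and a soft-thresholded averaging step enforces agreement. This is a minimal illustrative sketch on synthetic data, not the paper's implementation; the function names and data are assumptions.

```python
# Consensus ADMM for a lasso split across N "fog nodes" (illustrative sketch).
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def consensus_admm_lasso(A_parts, b_parts, lam=0.1, rho=1.0, iters=200):
    N = len(A_parts)
    p = A_parts[0].shape[1]
    x = [np.zeros(p) for _ in range(N)]   # local estimates (one per node)
    u = [np.zeros(p) for _ in range(N)]   # scaled dual variables
    z = np.zeros(p)                       # global consensus variable
    # Pre-factor each node's local system (A_i^T A_i + rho I)
    facs = [np.linalg.inv(A.T @ A + rho * np.eye(p)) for A in A_parts]
    for _ in range(iters):
        for i in range(N):                # local updates (parallel across nodes)
            x[i] = facs[i] @ (A_parts[i].T @ b_parts[i] + rho * (z - u[i]))
        z = soft_threshold(np.mean([x[i] + u[i] for i in range(N)], axis=0),
                           lam / (rho * N))
        for i in range(N):
            u[i] += x[i] - z              # dual ascent toward consensus
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(120, 8))
coef = np.zeros(8); coef[:2] = [2.0, -1.0]
b = A @ coef + 0.05 * rng.normal(size=120)
z = consensus_admm_lasso(np.split(A, 3), np.split(b, 3))
print(z)  # sparse consensus estimate close to the true coefficients
```

Each node only ever touches its own data slice, which is the property that lets the computation move from a single Cloud workstation to a Fog-Cloud network.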

References
Journal ArticleDOI
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
Abstract: SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.

40,785 citations
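The behavior described in the abstract can be seen directly with a small sketch, assuming scikit-learn's Lasso and synthetic data: the L1 constraint drives the coefficients of irrelevant predictors exactly to zero, yielding an interpretable model.

```python
# Minimal lasso sketch: only 3 of 10 predictors matter; the rest are zeroed out.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_coef = np.zeros(10)
true_coef[:3] = [3.0, -2.0, 1.5]        # only the first 3 predictors are active
y = X @ true_coef + 0.1 * rng.normal(size=100)

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)                      # irrelevant coefficients shrink to zero
```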


"Family learning: A process modeling..." refers background or methods or result in this paper

  • ...Moreover, if we model the individual product as one model for each, even with an advanced variable selection method, such as Lasso regression (Tibshirani, 1996), if the sample size is too small, then there will not be enough degrees of freedom to support model estimation with an accurate result....


  • ...Both prediction performance and variable selection results outperform three benchmark methods, i.e., Lasso regression (Tibshirani, 1996), data-shared Lasso (Gross and Tibshirani, 2016) and MTL (Evgeniou and Pontil, 2004)....


  • ...The results showed that the proposed family learning model outperforms Lasso regression (Tibshirani, 1996), data-shared Lasso (Gross and Tibshirani, 2016), and MTL (Evgeniou and Pontil, 2004), especially when the sample size is limited....


  • ...…loss for model estimation; q1 is the tuning parameter which controls the sparsity of the model; the first penalty ‖B‖₁ = Σᵢ |βᵢ| is a Lasso regularization (Tibshirani, 1996) term that forces the coefficients of insignificant variables to be zeros; q2 is the tuning parameter that…...


  • ...The family learning is compared with three benchmark models to evaluate its prediction performance: (i) the Lasso regression (Tibshirani, 1996), which should have similar performance with the proposed method when the sample size is large enough; (ii) the data-shared Lasso (Gross and Tibshirani,…...


Book
Christopher M. Bishop
17 Aug 2006
TL;DR: Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data are studied.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

22,840 citations

Journal ArticleDOI
TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.

18,616 citations


"Family learning: A process modeling..." refers background in this paper

  • ...However, adequate samples from source and target domains are required to yield an accurate model for the target domain (Pan and Yang, 2010)....


Journal ArticleDOI
TL;DR: It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS‐EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.
Abstract: Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together.The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the

16,538 citations
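The grouping effect described above can be illustrated with a short sketch (scikit-learn and synthetic data assumed): two strongly correlated predictors receive similar coefficients under the elastic net, where the lasso would tend to pick one of the pair and drop the other.

```python
# Elastic net grouping-effect sketch on a pair of nearly identical predictors.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # x2 ~ x1: strongly correlated pair
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + 0.5 * x3 + 0.1 * rng.normal(size=n)

enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print(enet.coef_)    # x1 and x2 get similar weights (grouping effect)
print(lasso.coef_)   # the lasso tends to concentrate weight within the pair
```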


"Family learning: A process modeling..." refers background in this paper

  • ...This can lead to unstable variable selection results with Lasso regularization terms (Zou and Hastie, 2005)....


Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for \"experimenters\") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the \"why,\" and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations


"Family learning: A process modeling..." refers methods in this paper

  • ...As a physical experiment to physically quantify the quality performance of an SLM product is usually both time-consuming and economically expensive, a simulation study is employed (Montgomery, 2017)....


  • ...A fractional factorial design (Montgomery, 2017) with three levels of settings and two kinds of designs is conducted to test the modeling performance....

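The fractional factorial design cited in the excerpts can be sketched in its simplest two-level form: a 2^(3−1) half fraction generated by the defining relation C = AB, which runs 4 of the full 8 treatment combinations. This is an illustrative assumption for the general idea, not the paper's actual three-level design.

```python
# 2^(3-1) half-fraction sketch: factors A and B vary freely; C is aliased via C = A*B.
import itertools

runs = []
for a, b in itertools.product([-1, 1], repeat=2):
    runs.append((a, b, a * b))   # generator: C = A*B, so the full 8 runs drop to 4
for run in runs:
    print(run)
```

Every run satisfies A·B·C = +1, which is exactly the confounding that trades resolution for fewer (expensive) experimental runs.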