Journal ArticleDOI

Anomaly Detection and Classification in a Laser Powder Bed Additive Manufacturing Process using a Trained Computer Vision Algorithm

01 Jan 2018 · Additive Manufacturing (Elsevier) · Vol. 19, pp. 114-126
TL;DR: A computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process; the approach has the potential to become a component of a real-time control system in an LPBF machine.
Abstract: Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.
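The abstract outlines a general recipe: divide each powder-spreading image into patches, describe each patch with features, and let an unsupervised learner separate nominal from anomalous appearances. The sketch below illustrates that idea with simple patch statistics and k-means clustering from scikit-learn; the patch size, feature choice, cluster count, and rarity threshold are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): unsupervised anomaly
# detection on powder-bed image patches using k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch=32):
    """Tile a grayscale powder-bed image into non-overlapping square patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(image[y:y + patch, x:x + patch])
    return np.array(patches)

def patch_features(patches):
    """Simple per-patch intensity statistics as stand-in texture features."""
    flat = patches.reshape(len(patches), -1).astype(float)
    return np.stack([flat.mean(axis=1),
                     flat.std(axis=1),
                     np.percentile(flat, 95, axis=1)], axis=1)

def fit_patch_clusters(images, n_clusters=8):
    """images: 2D arrays captured after each powder-spreading pass."""
    feats = np.vstack([patch_features(extract_patches(im)) for im in images])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)

def flag_anomalies(model, image, rare_fraction=0.05):
    """Flag patches assigned to clusters that were rare in the training data."""
    feats = patch_features(extract_patches(image))
    labels = model.predict(feats)
    counts = np.bincount(model.labels_, minlength=model.n_clusters)
    rare = counts < rare_fraction * counts.sum() / model.n_clusters
    return rare[labels]  # boolean anomaly mask, one entry per patch
```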
Citations
Posted Content
TL;DR: From smart grids to disaster management, the authors identify high-impact problems where existing gaps can be filled by ML, in collaboration with other fields, as part of the global effort against climate change.
Abstract: Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.

441 citations


Cites methods from "Anomaly Detection and Classificatio..."

  • ...ML is applied to improve those processes, for example through failure detection [184, 185] or material design [186]....


Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of the state of the art of ML applications across a variety of additive manufacturing domains, summarizes the main findings from the literature, and offers perspectives on selected applications.
Abstract: Additive manufacturing (AM) has emerged as a disruptive digital manufacturing technology. However, its broad adoption in industry is still hindered by high entry barriers of design for additive manufacturing (DfAM), limited materials library, various processing defects, and inconsistent product quality. In recent years, machine learning (ML) has gained increasing attention in AM due to its unprecedented performance in data tasks such as classification, regression and clustering. This article provides a comprehensive review on the state-of-the-art of ML applications in a variety of AM domains. In the DfAM, ML can be leveraged to output new high-performance metamaterials and optimized topological designs. In AM processing, contemporary ML algorithms can help to optimize process parameters, and conduct examination of powder spreading and in-process defect monitoring. On the production of AM, ML is able to assist practitioners in pre-manufacturing planning, and product quality assessment and control. Moreover, there has been an increasing concern about data security in AM as data breaches could occur with the aid of ML techniques. Lastly, it concludes with a section summarizing the main findings from the literature and providing perspectives on some selected interesting applications of ML in research and development of AM.

274 citations

Journal ArticleDOI
TL;DR: In this article, the authors focus on the available mechanistic models of additive manufacturing (AM) that have been adequately validated and evaluate the usefulness of these models in understanding the printability of commonly used AM alloys and the fabrication of functionally graded alloys.

238 citations

Journal ArticleDOI
TL;DR: In the authors’ perspective, in situ monitoring of AM processes will significantly benefit from the object detection ability of ML, and data sharing in AM would enable faster adoption of ML in AM.
Abstract: Additive manufacturing (AM) or 3D printing is growing rapidly in the manufacturing industry and has gained a lot of attention from various fields owing to its ability to fabricate parts with complex features. The reliability of the 3D printed parts has been the focus of the researchers to realize AM as an end-part production tool. Machine learning (ML) has been applied in various aspects of AM to improve the whole design and manufacturing workflow especially in the era of industry 4.0. In this review article, various types of ML techniques are first introduced. It is then followed by the discussion on their use in various aspects of AM such as design for 3D printing, material tuning, process optimization, in situ monitoring, cloud service, and cybersecurity. Potential applications in the biomedical, tissue engineering and building and construction will be highlighted. The challenges faced by ML in AM such as computational cost, standards for qualification and data acquisition techniques will also be discussed. In the authors’ perspective, in situ monitoring of AM processes will significantly benefit from the object detection ability of ML. As a large data set is crucial for ML, data sharing of AM would enable faster adoption of ML in AM. Standards for the shared data are needed to facilitate easy sharing of data. The use of ML in AM will become more mature and widely adopted as better data acquisition techniques and more powerful computer chips for ML are developed.

229 citations


Cites background or methods from "Anomaly Detection and Classificatio..."

  • ...(Scime and Beuth 2018a, b)....


  • ...Scime and Beuth compared the computational time of BoW and MsCNN for anomaly detection and found that the computational time for each layer for the BoW technique was 4 s, which is shorter than that of the MsCNN (7 s) (Scime and Beuth 2018a, b)....


  • ...Some examples of unsupervised learning algorithms used in AM field are K means clustering (Scime and Beuth 2018a, b, 2019; Snell et al....


  • ...They compared the BoW technique with multiscale CNN (MsCNN) and found that MsCNN can achieve higher classification accuracies but it is more computationally expensive (75% slower) (Scime and Beuth 2018a, b)....


  • ...2018; Scime and Beuth 2018a, b; Shevchik et  al....


Journal ArticleDOI
TL;DR: In this article, a visible-light high speed camera with a fixed field of view is used to study the morphology of L-PBF melt pools in the Inconel 718 material system.
Abstract: Because many of the most important defects in Laser Powder Bed Fusion (L-PBF) occur at the size and timescales of the melt pool itself, the development of methodologies for monitoring the melt pool is critical. This work examines the possibility of in-situ detection of keyholing porosity and balling instabilities. Specifically, a visible-light high speed camera with a fixed field of view is used to study the morphology of L-PBF melt pools in the Inconel 718 material system. A scale-invariant description of melt pool morphology is constructed using Computer Vision techniques and unsupervised Machine Learning is used to differentiate between observed melt pools. By observing melt pools produced across process space, in-situ signatures are identified which may indicate flaws such as those observed ex-situ. This linkage of ex-situ and in-situ morphology enabled the use of supervised Machine Learning to classify melt pools observed (with the high speed camera) during fusion of non-bulk geometries such as overhangs.
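As a rough illustration of combining a scale-invariant melt-pool morphology descriptor with unsupervised learning, the sketch below segments the bright melt pool in each high-speed frame, computes Hu moments (invariant to translation, scale, and rotation), and clusters them with k-means. The threshold, descriptor, and cluster count are assumptions for illustration, not the paper's actual method.

```python
# Illustrative only: a scale-invariant melt-pool shape descriptor (Hu moments)
# plus k-means clustering; not the paper's exact descriptor or settings.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def melt_pool_descriptor(frame, thresh=200):
    """Segment the bright melt pool in a high-speed frame and return
    log-scaled Hu moments (invariant to translation, scale, and rotation)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pool = max(contours, key=cv2.contourArea)            # largest bright blob
    hu = cv2.HuMoments(cv2.moments(pool)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range

def cluster_melt_pools(frames, n_clusters=5):
    """frames: images from the high-speed camera, one per observed melt pool."""
    descriptors = [d for d in (melt_pool_descriptor(f) for f in frames) if d is not None]
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(np.array(descriptors))
```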

210 citations

References
Journal ArticleDOI
28 May 2015 · Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
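As a minimal illustration of the training loop this abstract alludes to, in which backpropagation indicates how a machine should change the internal parameters of each layer, the PyTorch sketch below runs one forward pass, one backward pass, and one parameter update on a small convolutional network; the architecture, data shapes, and hyperparameters are placeholders.

```python
# Minimal PyTorch sketch of backpropagation training a small convolutional net;
# architecture, data shapes, and hyperparameters are illustrative placeholders.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 10),           # assumes 32x32 grayscale inputs, 10 classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 1, 32, 32)        # dummy batch of images
labels = torch.randint(0, 10, (64,))       # dummy class labels

logits = model(images)                     # forward pass, layer by layer
loss = loss_fn(logits, labels)
loss.backward()                            # backpropagation: gradients for every parameter
optimizer.step()                           # update the internal parameters
optimizer.zero_grad()
```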

46,982 citations

Book
18 Nov 2016
TL;DR: Deep learning, as mentioned in this paper, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
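For readers applying ROC analysis to a binary anomaly-vs-nominal classifier such as the one evaluated in the citing work, the short example below computes an ROC curve and its AUC with scikit-learn; the scores are synthetic and purely illustrative.

```python
# Hedged illustration: ROC curve and AUC for a binary anomaly-vs-nominal
# classifier using scikit-learn; the scores below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.zeros(100), np.ones(100)])    # 0 = nominal, 1 = anomaly
y_score = np.concatenate([rng.normal(0.3, 0.2, 100),      # classifier confidence scores
                          rng.normal(0.7, 0.2, 100)])

fpr, tpr, thresholds = roc_curve(y_true, y_score)          # one point per threshold
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```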

17,017 citations

Proceedings ArticleDOI
20 Sep 1999
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
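The staged keypoint detection and descriptor matching described above is available off the shelf; the hedged example below extracts SIFT keypoints and applies the nearest-neighbour ratio test with OpenCV (opencv-python 4.4 or later). The image paths are placeholders.

```python
# Hedged example: SIFT keypoints, descriptors, and the nearest-neighbour ratio
# test via OpenCV (opencv-python >= 4.4); image paths are placeholders.
import cv2

img1 = cv2.imread("layer_001.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
img2 = cv2.imread("layer_002.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)             # 128-D descriptor per keypoint
kp2, desc2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(desc1, desc2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])                               # Lowe's ratio test
print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} good matches")
```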

16,989 citations

Proceedings Article
01 Jan 2004
TL;DR: This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches and shows that it is simple, computationally efficient and intrinsically invariant.
Abstract: We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.
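The bag-of-keypoints pipeline described above (local descriptors, vector quantization into a visual vocabulary, per-image histograms, then a classifier) can be sketched as follows. SIFT descriptors stand in for the paper's affine-invariant descriptors and a linear SVM for its classifiers; the vocabulary size and all other settings are illustrative assumptions.

```python
# Sketch of the bag-of-keypoints idea under stated assumptions: SIFT descriptors
# stand in for affine-invariant descriptors, and a linear SVM is the classifier.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def sift_descriptors(image):
    _, desc = cv2.SIFT_create().detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_vocabulary(images, k=200):
    """Vector-quantize all training descriptors into a k-word visual vocabulary."""
    all_desc = np.vstack([sift_descriptors(im) for im in images])
    return MiniBatchKMeans(n_clusters=k, random_state=0).fit(all_desc)

def bow_histogram(image, vocab):
    """L1-normalized histogram of visual-word occurrences for one image."""
    desc = sift_descriptors(image)
    if len(desc) == 0:
        return np.zeros(vocab.n_clusters)
    hist = np.bincount(vocab.predict(desc), minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

def train_bow_classifier(images, labels, vocab):
    """images/labels: grayscale training images and their category labels."""
    X = np.array([bow_histogram(im, vocab) for im in images])
    return SVC(kernel="linear").fit(X, labels)
```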

5,046 citations