Open Access · Posted Content
Challenges in Deploying Machine Learning: a Survey of Case Studies
TL;DR
By mapping the challenges found to the steps of the machine learning deployment workflow, the survey shows that practitioners face issues at each stage of the deployment process.

Abstract
In recent years, machine learning has received increased interest both as an academic research field and as a solution for real-world business problems. However, the deployment of machine learning models in production systems can present a number of issues and concerns. This survey reviews published reports of deploying machine learning solutions in a variety of use cases, industries and applications, and extracts practical considerations corresponding to stages of the machine learning deployment workflow. Our survey shows that practitioners face challenges at each stage of the deployment. The goal of this paper is to lay out a research agenda to explore approaches addressing these challenges.
Citations
Proceedings Article · DOI
Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation
Zaid Khan, Yun Fu +1 more
TL;DR: This paper proposes a two-stream model that translates images into the input space of a language model using an object-aware transformer, followed by a single-pass non-autoregressive text generation step; this increases the amount of text available to the language model and distills object-level information from complex images.
Posted Content · DOI
Technology Readiness Levels for Machine Learning Systems.
Alexander Lavin, Ciarán M. Gilligan-Lee, Alessya Visnjic, Siddha Ganju, Dava J. Newman, Sujoy Ganguly, Danny Lange, Atılım Güneş Baydin, Amit Sharma, Adam Gibson, Yarin Gal, Eric P. Xing, Chris A. Mattmann, James Parr +13 more
TL;DR: The Machine Learning Technology Readiness Levels (MLTRL) framework defines a principled process to ensure robust, reliable, and responsible systems while being streamlined for ML workflows, including key distinctions from traditional software engineering.
Journal Article · DOI
On Predictive Maintenance in Industry 4.0: Overview, Models, and Challenges
Mounia Achouch, Mariya Dimitrova, Khaled Ziane, Sasan Sattarpanah Karganroudi, Rizck Dhouib, Hussein Ibrahim, Mehdi Adda +6 more
TL;DR: An exhaustive literature review of methods and applied tools for intelligent predictive maintenance in Industry 4.0, which identifies and categorizes the life cycle of maintenance projects, the challenges encountered, and the models associated with this type of maintenance.
References
Proceedings Article · DOI
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pre-trained model can then be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Proceedings Article
Practical Bayesian Optimization of Machine Learning Algorithms
TL;DR: This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation; the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Proceedings Article · DOI
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
TL;DR: A new class of model inversion attack is developed that exploits confidence values revealed alongside predictions; the attack can estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other, and can recover recognizable images of people's faces given only their name.
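The confidence-exploiting idea can be illustrated on a toy model: gradient ascent on the input drives the model's reported confidence for a target class upward, recovering a representative input. The two-feature logistic model, its weights, and all hyperparameters below are assumptions for the sketch, not values from the paper (which also covers settings where gradients must be estimated from confidence queries alone).

```python
import math

# Hypothetical two-feature "victim" model; a real attack targets a
# deployed classifier, and these weights are made up for the example.
w = [1.0, -2.0]
b = 0.0

def confidence(x):
    # Confidence reported for the positive class (sigmoid of the logit).
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def invert(target_conf=0.99, steps=200, lr=0.5):
    # Gradient ascent on the *input*: the gradient of log-confidence
    # with respect to x for this model is (1 - p) * w.
    x = [0.0, 0.0]
    for _ in range(steps):
        p = confidence(x)
        if p >= target_conf:
            break
        x = [xi + lr * (1.0 - p) * wi for xi, wi in zip(x, w)]
    return x

x_rec = invert()
print(x_rec, confidence(x_rec))
```

The recovered input is the direction the model associates most strongly with the target class, which is why revealed confidence scores can leak information about training data.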
Proceedings Article
Adversarial Machine Learning at Scale
TL;DR: This work shows that adversarial training confers robustness to single-step attack methods, that multi-step attacks are somewhat less transferable than single-step attacks, and that single-step attacks are therefore the best choice for mounting black-box attacks.
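The single-step attack family referred to here can be sketched with the Fast Gradient Sign Method (FGSM) applied to a hand-wired logistic regression; the weights, input, and epsilon below are assumptions chosen purely for the illustration, not values from the paper.

```python
import math

# Hand-wired logistic regression standing in for a trained model.
w = [2.0, -1.5]
b = 0.1

def predict(x):
    # Probability of the positive class.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(x, y):
    # d(cross-entropy)/dx for logistic regression is (p - y) * w.
    p = predict(x)
    return [(p - y) * wi for wi in w]

def fgsm(x, y, eps=0.25):
    # One step of size eps per coordinate in the sign of the loss
    # gradient -- the canonical single-step attack.
    g = input_gradient(x, y)
    return [xi + eps * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]

x, y = [1.0, 1.0], 1
print(predict(x), predict(fgsm(x, y)))  # confidence drops below 0.5
```

Adversarial training, as evaluated at scale in the paper, mixes such perturbed examples back into the training set so the model learns to resist them.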