Recent Trends in Deep Learning with Applications
01 Jan 2018, pp. 201–222
TL;DR: The main motivations for using deep learning algorithms include faster processing, low-cost hardware, and recent advances in machine learning techniques.
Abstract: Deep learning methods play a vital role in Internet of Things analytics. Deep learning is one of the main subfields of machine learning. Raw data is collected from devices, but collecting data from all situations and pre-processing it is complex, and continuously monitoring data through sensors is likewise complex and expensive. Deep learning algorithms help address these issues. A deep learning method builds representations at multiple levels, from low-level features up to very high-level features of the data; the higher-level features provide more abstract views of the information than the lower levels, which contain the raw data. It is a developing methodology and has been widely applied in art, image captioning, machine translation, natural language processing, object detection, robotics, and visual tracking. The main motivations for using deep learning algorithms include faster processing, low-cost hardware, and recent advances in machine learning techniques. This review paper gives an understanding of deep learning methods and their recent advances in the Internet of Things.
Citations
TL;DR: A survey on the use of ML for enhancing IoT applications, with an in-depth overview of the various IoT applications that can be enhanced using ML, such as smart cities, smart homes, and smart healthcare.
Abstract: The number of Internet of Things (IoT) devices connected via the Internet is growing rapidly. The heterogeneity and complexity of the IoT in terms of dynamism and uncertainty complicate this landscape dramatically and introduce vulnerabilities. Intelligent management of the IoT is required to maintain connectivity, improve Quality of Service (QoS), and reduce energy consumption in real time within dynamic environments. Machine Learning (ML) plays a pivotal role in QoS enhancement, connectivity, and the provisioning of smart applications. Therefore, this survey focuses on the use of ML for enhancing IoT applications. We also provide an in-depth overview of the variety of IoT applications that can be enhanced using ML, such as smart cities, smart homes, and smart healthcare. For each application, we introduce the advantages of using ML. Finally, we shed light on ML challenges for future IoT research, and we review the current literature based on existing works.
26 citations
01 Jan 2019
TL;DR: The essentials of deep learning methods with convolutional neural networks are presented and their achievements in medical image analysis, such as in deep feature representation, detection, segmentation, classification, and prediction are analyzed.
Abstract: Deep learning is an essential method of machine learning. It is rapidly becoming the most sophisticated stage of the technology, leading to enriched performance in numerous medical applications. The latest growth in machine learning, specifically in deep learning, aids in the recognition, classification, and computation of patterns in medical images. The main aim of these improvements is the ability to derive feature representations from learned data rather than designing those features by hand from domain-specific knowledge. In deep learning, the bottom-level network represents a low-level feature representation while the top-level network represents the output feature information. Deep learning networks can be computed quickly on inexpensive hardware. In this chapter, we present the essentials of deep learning methods with convolutional neural networks and analyze their achievements in medical image analysis, such as deep feature representation, detection, segmentation, classification, and prediction. Finally, we conclude with a discussion of research challenges and indicate future directions for further enhancements.
14 citations
TL;DR: An attempt to automate the human-likeliness evaluation of output text samples from natural language generation methods across several tasks, using a discrimination procedure based on large pretrained language models and their probability distributions.
Abstract: Automatic evaluation of various text quality criteria produced by data-driven intelligent methods is common and useful because it is cheap, fast, and usually yields repeatable results. In this paper, we present an attempt to automate the human-likeliness evaluation of the output text samples coming from natural language generation methods used to solve several tasks. We propose a human-likeliness score that shows the percentage of a method's output samples that look as if they were written by a human. Instead of having human participants label or rate those samples, we completely automate the process by using a discrimination procedure based on large pretrained language models and their probability distributions. As a follow-up, we plan to perform an empirical analysis of human-written and machine-generated texts to find the optimal setup of this evaluation approach. A validation procedure involving human participants will also check how well the automatic evaluation correlates with human judgments.
5 citations
Cites background from "Recent Trends in Deep Learning with..."
...Same as other related disciplines such as MT (Machine Translation) or TS (Text Summarization), it has surged in the last decade, greatly pushed by the significant advances in text applications of deep neural networks [27,3] as well as the creation of large datasets [16,8,17]....
[...]
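The discrimination procedure described in the entry above can be sketched minimally: score each generated sample with a language model's average per-token log-probability and report the fraction of samples scoring above a threshold. In this sketch the scoring function is a toy stand-in (the paper would use a real pretrained LM), and the threshold value is a hypothetical parameter:

```python
def avg_log_prob(sample):
    # Toy stand-in for a pretrained language model's average
    # per-token log-probability; a real implementation would query
    # a large pretrained LM here.
    tokens = sample.split()
    return sum(-abs(len(t) - 5) for t in tokens) / max(len(tokens), 1)

def human_likeliness_score(samples, threshold=-2.0):
    # Fraction of samples the discriminator deems human-like,
    # i.e. scoring above the (hypothetical) threshold.
    human_like = sum(1 for s in samples if avg_log_prob(s) > threshold)
    return human_like / len(samples)
```

The score is the paper's headline quantity: the percentage of a method's outputs that pass the LM-based discriminator.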
01 Dec 2020
TL;DR: A hybridized approach to classifying lung nodules as benign or malignant, supporting early detection of lung cancer and improving the life expectancy of lung cancer patients, thereby reducing the mortality rate of this deadly disease.
Abstract: Deep learning techniques have become very popular among Artificial Intelligence (AI) techniques in many areas of life. Among the many types of deep learning techniques, Convolutional Neural Networks (CNNs) are useful in image classification applications. In this work, a hybridized approach has been followed to classify lung nodules as benign or malignant. This will help in the early detection of lung cancer and improve the life expectancy of lung cancer patients, thereby reducing the mortality rate of this deadly disease scourging the world. The hybridization has been carried out between handcrafted features and deep features. Machine learning algorithms such as SVM and Logistic Regression have been used to classify the nodules based on the features. The dimensionality reduction technique Principal Component Analysis (PCA) has been introduced to improve the performance of hybridized features with SVM. The experiments have been carried out with 14 different methods. It has been found that GLCM + VGG19 + PCA + SVM outperformed all other models, with an accuracy of 94.93%, sensitivity of 90.9%, specificity of 97.36%, and precision of 95.44%. The F1 score was 0.93 and the AUC was 0.9843. The False Positive Rate was 2.637% and the False Negative Rate was 9.09%.
3 citations
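The fusion-and-reduction stage of the hybrid pipeline above (handcrafted GLCM features concatenated with deep VGG19 features, then projected by PCA) can be sketched as follows. This is a minimal sketch with random stand-in feature matrices; the real pipeline would extract GLCM and VGG19 features from CT images and feed the reduced matrix to an SVM:

```python
import numpy as np

def fuse_and_reduce(glcm_feats, deep_feats, n_components):
    # Concatenate handcrafted (GLCM) and deep (VGG19) features,
    # then project onto the top principal components (PCA via SVD).
    X = np.hstack([glcm_feats, deep_feats])
    X = X - X.mean(axis=0)              # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T      # reduced feature matrix

# Toy stand-ins: 10 nodules, 6 GLCM features, 32 deep features.
rng = np.random.default_rng(0)
glcm = rng.normal(size=(10, 6))
deep = rng.normal(size=(10, 32))
reduced = fuse_and_reduce(glcm, deep, n_components=8)
# `reduced` (10 x 8) would then be fed to an SVM classifier.
```

The feature counts and component number here are illustrative, not the paper's settings.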
References
14 Jun 2009
TL;DR: The convolutional deep belief network is presented, a hierarchical generative model that scales to realistic image sizes, is translation-invariant, and supports efficient bottom-up and top-down probabilistic inference.
Abstract: There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.
2,668 citations
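The probabilistic max-pooling step at the heart of the convolutional deep belief network can be illustrated for a single pooling block: the block behaves like a softmax over its detector units plus an extra "off" state (logit 0), so at most one unit is on and the block can also be entirely inactive. A minimal numerical sketch:

```python
import numpy as np

def probabilistic_max_pool(inputs):
    # Softmax over the detector inputs of one pooling block plus an
    # extra 'off' state with logit 0, so at most one unit in the
    # block is on and the block may also be entirely inactive.
    logits = np.concatenate(([0.0], np.asarray(inputs, dtype=float)))
    logits -= logits.max()            # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return p[1:], p[0]                # (per-unit on-probs, off-prob)

on, off = probabilistic_max_pool([2.0, 0.5, -1.0])
# The on-probabilities and the off-probability sum to exactly 1,
# which is what makes the shrinkage "probabilistically sound".
```

With equal inputs the block is indifferent: each unit, and the off state, is equally likely.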
14 Jun 2011
TL;DR: A novel convolutional auto-encoder (CAE) for unsupervised feature learning; initializing a CNN with the filters of a trained CAE stack yields superior performance on digit and object recognition benchmarks.
Abstract: We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained using conventional on-line gradient descent without additional regularization terms. A max-pooling layer is essential to learn biologically plausible features consistent with those found by previous approaches. Initializing a CNN with filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark.
1,832 citations
28 Jun 2011
TL;DR: It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
Abstract: We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize an MLP, we achieve state-of-the-art classification error on a range of datasets, surpassing other methods of pretraining.
1,347 citations
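The penalty term in the entry above, the squared Frobenius norm of the encoder Jacobian, has a closed form for a sigmoid encoder h = sigmoid(Wx + b): each Jacobian row factors as h_j(1 - h_j) times the corresponding row of W. A minimal sketch of that computation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    # Squared Frobenius norm of the Jacobian dh/dx for a sigmoid
    # encoder h = sigmoid(W @ x + b). Each Jacobian row factors as
    # h_j * (1 - h_j) * W[j, :], so the norm has a closed form.
    h = sigmoid(W @ x + b)
    return np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

# Toy check: at x = 0 with identity weights, h = 0.5 everywhere,
# so each of the two rows contributes (0.25)**2 * 1, giving 0.125.
penalty = contractive_penalty(np.eye(2), np.zeros(2), np.zeros(2))
```

In training, this quantity (averaged over the data) would be added to the reconstruction cost with a weighting coefficient.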
03 Jan 1986
TL;DR: This chapter contains sections titled: Relaxation Searches, Easy and Hard Learning, The Boltzmann Machine Learning Algorithm, An Example of Hard Learning, Achieving Reliable Computation with Unreliable Hardware, and An Example of the Effects of Damage.
Abstract: This chapter contains sections titled: Relaxation Searches, Easy and Hard Learning, The Boltzmann Machine Learning Algorithm, An Example of Hard Learning, Achieving Reliable Computation with Unreliable Hardware, An Example of the Effects of Damage, Conclusion, Acknowledgments, Appendix: Derivation of the Learning Algorithm, References
1,271 citations
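The Boltzmann machine learning algorithm named in the section list above nudges each weight toward the difference between data and model co-activation statistics, dW_ij proportional to <s_i s_j>_data minus <s_i s_j>_model. For a network small enough to enumerate all states, the model expectation can be computed exactly rather than by Gibbs sampling. The sketch below makes two simplifying assumptions not in the original chapter: a fully visible network and 0/1 units (so diagonal terms act like biases, since s_i squared equals s_i):

```python
import numpy as np
from itertools import product

def model_correlations(W, units):
    # Exact <s_i s_j> under P(s) proportional to exp(0.5 * s.T @ W @ s),
    # enumerated over all 2**units binary states (tractable only for
    # tiny networks; the chapter's algorithm samples instead).
    states = np.array(list(product([0, 1], repeat=units)), dtype=float)
    energy = -0.5 * np.einsum('bi,ij,bj->b', states, W, states)
    p = np.exp(-energy)
    p /= p.sum()
    return np.einsum('b,bi,bj->ij', p, states, states)

def boltzmann_update(W, data, lr=0.1):
    # One step of the Boltzmann learning rule:
    # dW_ij = lr * (<s_i s_j>_data - <s_i s_j>_model).
    data_corr = data.T @ data / len(data)
    return W + lr * (data_corr - model_correlations(W, W.shape[0]))

# Toy usage: two units, data where both units are always on.
W = boltzmann_update(np.zeros((2, 2)), np.ones((4, 2)))
# The off-diagonal weight grows, encouraging co-activation.
```

Starting from zero weights, the model correlation between the two units is 0.25 (uniform over four states), while the data correlation is 1, so the connecting weight increases.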
21 Jun 2010
TL;DR: It is shown that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted.
Abstract: Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.
1,239 citations
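The two pooling operators compared in the entry above can be sketched for non-overlapping windows; the contrast between keeping the strongest response in each window and keeping the mean is exactly what the paper's theoretical analysis formalizes. A minimal sketch (window size and input are illustrative):

```python
import numpy as np

def pool2d(x, size, mode="max"):
    # Non-overlapping 2D pooling: 'max' keeps the strongest response
    # in each size-by-size window, 'avg' keeps the window mean.
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]   # drop ragged edges
    blocks = x.reshape(x.shape[0] // size, size, x.shape[1] // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(x, 2, "max"))   # [[ 5.  7.] [13. 15.]]
print(pool2d(x, 2, "avg"))   # [[ 2.5  4.5] [10.5 12.5]]
```

Max pooling preserves the single largest activation per window (invariance to where it occurs inside the window), while average pooling dilutes it across the whole window, which is the trade-off the paper analyzes.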