Recent Trends in Deep Learning with Applications
01 Jan 2018, pp. 201–222
TL;DR: The main reasons for using deep learning algorithms include faster processing, low-cost hardware, and recent advances in machine learning techniques.
Abstract: Deep learning methods play a vital role in Internet of Things analytics. Deep learning is one of the main subfields of machine learning. Raw data is collected from devices, but collecting data from all situations and pre-processing it is complex, and continuously monitoring data through sensors is also complex and expensive. Deep learning algorithms help address these issues. A deep learning method learns representations at multiple levels, from low-level features up to very high-level features of the data; the higher levels provide more abstract views of the information than the lower levels, which contain the raw data. It is a developing methodology and has been widely applied in art, image captioning, machine translation, natural language processing, object detection, robotics, and visual tracking. The main reasons for using deep learning algorithms include faster processing, low-cost hardware, and recent advances in machine learning techniques. This review paper gives an understanding of deep learning methods and their recent advances in the Internet of Things.
Citations
TL;DR: In this paper, a survey on the use of ML for enhancing IoT applications is presented, along with an in-depth overview of the various IoT applications that can be enhanced using ML, such as smart cities, smart homes, and smart healthcare.
Abstract: The number of Internet of Things (IoT) devices to be connected via the Internet is growing rapidly. The heterogeneity and complexity of the IoT in terms of dynamism and uncertainty complicate this landscape dramatically and introduce vulnerabilities. Intelligent management of IoT is required to maintain connectivity, improve Quality of Service (QoS), and reduce energy consumption in real time within dynamic environments. Machine Learning (ML) plays a pivotal role in QoS enhancement, connectivity, and provisioning of smart applications. Therefore, this survey focuses on the use of ML for enhancing IoT applications. We also provide an in-depth overview of the variety of IoT applications that can be enhanced using ML, such as smart cities, smart homes, and smart healthcare. For each application, we introduce the advantages of using ML. Finally, we shed light on ML challenges for future IoT research, and we review the current literature based on existing works.
26 citations
01 Jan 2019
TL;DR: The essentials of deep learning methods with convolutional neural networks are presented and their achievements in medical image analysis, such as in deep feature representation, detection, segmentation, classification, and prediction are analyzed.
Abstract: Deep learning is an essential method of machine learning. Deep learning is rapidly becoming the most sophisticated stage of the technology, leading to enriched performance in numerous medical applications. The latest growth in machine learning, specifically with respect to deep learning, aids in the recognition, classification, and computation of patterns in medical images. The main aim of these improvements is the ability to derive feature representations from learned data rather than designing those features by hand from domain-specific knowledge. In deep learning, the bottom-level network represents a low-level feature representation while the top-level network represents the output feature information. The computation of a deep learning network is fast even on cheap hardware. In this chapter, we present the essentials of deep learning methods with convolutional neural networks and analyze their achievements in medical image analysis, such as in deep feature representation, detection, segmentation, classification, and prediction. Finally, we conclude with a discussion of research challenges and indicate future directions for further enhancements.
14 citations
TL;DR: An attempt to automate the human likeliness evaluation of output text samples coming from natural language generation methods used to solve several tasks, using a discrimination procedure based on large pretrained language models and their probability distributions.
Abstract: Automatic evaluation of various text quality criteria produced by data-driven intelligent methods is very common and useful because it is cheap, fast, and usually yields repeatable results. In this paper, we present an attempt to automate the human likeliness evaluation of the output text samples coming from natural language generation methods used to solve several tasks. We propose to use a human likeliness score that shows the percentage of the output samples from a method that look as if they were written by a human. Instead of having human participants label or rate those samples, we completely automate the process by using a discrimination procedure based on large pretrained language models and their probability distributions. As a follow-up, we plan to perform an empirical analysis of human-written and machine-generated texts to find the optimal setup of this evaluation approach. A validation procedure involving human participants will also check how the automatic evaluation correlates with human judgments.
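The scoring procedure described in this abstract can be sketched in outline. This is a minimal sketch, not the authors' code: the function names are assumptions, and `token_logprob` is a toy stand-in for the conditional distribution of a large pretrained language model, which the paper actually uses.

```python
import math

def avg_log_prob(text, token_logprob):
    """Average per-token log-probability of `text` under a language model.
    `token_logprob(context, token)` stands in for a pretrained LM's
    conditional distribution over the next token."""
    tokens = text.split()
    scores = [token_logprob(tokens[:i], tokens[i]) for i in range(len(tokens))]
    return sum(scores) / len(scores)

def human_likeness_score(samples, token_logprob, threshold):
    """Percentage of generated samples whose average log-probability is at
    least `threshold`, i.e. samples the discriminator accepts as human-like."""
    passed = sum(1 for s in samples
                 if avg_log_prob(s, token_logprob) >= threshold)
    return 100.0 * passed / len(samples)
```

In practice the threshold would be calibrated on human-written reference texts, which is what the planned empirical analysis in the abstract would determine.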
5 citations
Cites background from "Recent Trends in Deep Learning with..."
...Same as other related disciplines such as MT (Machine Translation) or TS (Text Summarization), it has surged in the last decade, greatly pushed by the significant advances in text applications of deep neural networks [27,3] as well as the creation of large datasets [16,8,17]....
[...]
01 Dec 2020
TL;DR: A hybridized approach is followed to classify lung nodules as benign or malignant, aiding early detection of lung cancer and improving the life expectancy of lung cancer patients, thereby reducing the mortality rate of this deadly disease.
Abstract: Deep learning techniques have become very popular among Artificial Intelligence (AI) techniques in many areas of life. Among many types of deep learning techniques, Convolutional Neural Networks (CNN) can be useful in image classification applications. In this work, a hybridized approach has been followed to classify lung nodules as benign or malignant. This will help in the early detection of lung cancer and improve the life expectancy of lung cancer patients, thereby reducing the mortality rate of this deadly disease. The hybridization has been carried out between handcrafted features and deep features. Machine learning algorithms such as SVM and Logistic Regression have been used to classify the nodules based on these features. The dimensionality reduction technique Principal Component Analysis (PCA) has been introduced to improve the performance of the hybridized features with SVM. The experiments have been carried out with 14 different methods. It has been found that GLCM + VGG19 + PCA + SVM outperformed all other models, with an accuracy of 94.93%, sensitivity of 90.9%, specificity of 97.36%, and precision of 95.44%. The F1 score was found to be 0.93 and the AUC was 0.9843. The False Positive Rate was found to be 2.637% and the False Negative Rate was 9.09%.
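The feature-fusion and dimensionality-reduction steps of the pipeline above can be sketched as follows. This is a minimal sketch under assumptions, not the authors' implementation: the function names are hypothetical, the handcrafted (e.g. GLCM) and deep (e.g. VGG19) features are taken as precomputed vectors, and the final SVM classification step is only noted.

```python
import numpy as np

def hybrid_features(handcrafted, deep):
    """Fuse handcrafted and deep feature matrices (one row per image)
    by simple concatenation along the feature axis."""
    return np.hstack([handcrafted, deep])

def pca_reduce(X, k):
    """Project the fused features onto the top-k principal components,
    computed via SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The PCA-reduced vectors would then be passed to an SVM classifier (for example, `sklearn.svm.SVC`) to label each nodule as benign or malignant.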
3 citations
References
13 Jun 2010
TL;DR: This paper proposes to use a histogram-intersection-based kNN method to construct a Laplacian matrix, which can well characterize the similarity of local features, and incorporates it into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features.
Abstract: Sparse coding, which encodes the original signal in a sparse signal space, has shown state-of-the-art performance in the visual codebook generation and feature quantization process of BoW-based image representation. However, in the feature quantization process of sparse coding, some similar local features may be quantized into different visual words of the codebook due to the sensitiveness of quantization. In this paper, to alleviate the impact of this problem, we propose a Laplacian sparse coding method, which exploits the dependence among the local features. Specifically, we propose to use a histogram-intersection-based kNN method to construct a Laplacian matrix, which can well characterize the similarity of local features. In addition, we incorporate this Laplacian matrix into the objective function of sparse coding to preserve the consistency of the sparse representations of similar local features. Comprehensive experimental results show that our method achieves or outperforms existing state-of-the-art results, and exhibits excellent performance on the Scene 15 data set.
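The objective described in this abstract can be written compactly. This is a hedged sketch of the standard Laplacian sparse coding formulation, with notation assumed here: $X$ is the matrix of local features, $B$ the codebook, $S$ the sparse codes with columns $s_i$, $\lambda$ and $\beta$ trade-off weights, and $L = D - W$ the graph Laplacian built from the histogram-intersection kNN affinity matrix $W$.

```latex
\min_{B,\,S}\;
  \| X - B S \|_F^2
  \;+\; \lambda \sum_i \| s_i \|_1
  \;+\; \beta \, \operatorname{tr}\!\left( S L S^{\top} \right)
```

The trace term penalizes differences between the sparse codes of features that the kNN graph marks as similar, which is how the method preserves consistency across similar local features.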
483 citations
TL;DR: A training method that encodes each word into a different vector in semantic space and its relation to low-entropy coding is presented and applied to the stylistic analyses of two Chinese novels.
390 citations
TL;DR: This work applies the Laplacian sparse coding to feature quantization in Bag-of-Words image representation, and it outperforms sparse coding and achieves good performance in solving the image classification problem and is successfully used to solve the semi-auto image tagging problem.
Abstract: Sparse coding exhibits good performance in many computer vision applications. However, due to the overcomplete codebook and the independent coding process, the locality and the similarity among the instances to be encoded are lost. To preserve such locality and similarity information, we propose a Laplacian sparse coding (LSc) framework. By incorporating a similarity-preserving term into the objective of sparse coding, our proposed Laplacian sparse coding can alleviate the instability of sparse codes. Furthermore, we propose a Hypergraph Laplacian sparse coding (HLSc), which extends our Laplacian sparse coding to the case where the similarity among the instances is defined by a hypergraph. Specifically, HLSc captures the similarity among the instances within the same hyperedge simultaneously, and also makes their sparse codes similar to each other. Both Laplacian sparse coding and Hypergraph Laplacian sparse coding enhance the robustness of sparse coding. We apply Laplacian sparse coding to feature quantization in Bag-of-Words image representation, where it outperforms sparse coding and achieves good performance in solving the image classification problem. Hypergraph Laplacian sparse coding is also successfully used to solve the semi-auto image tagging problem. The good performance of these applications demonstrates the effectiveness of our proposed formulations in locality and similarity preservation.
366 citations
07 Dec 2009
TL;DR: A new type of top-level model for Deep Belief Nets is introduced: a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients, which substantially outperforms shallow models such as SVMs.
Abstract: We introduce a new type of top-level model for Deep Belief Nets and evaluate it on a 3D object recognition task. The top-level model is a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients. Performance is evaluated on the NORB database (normalized-uniform version), which contains stereo-pair images of objects under different lighting conditions and viewpoints. Our model achieves 6.5% error on the test set, which is close to the best published result for NORB (5.9%) using a convolutional neural net that has built-in knowledge of translation invariance. It substantially outperforms shallow models such as SVMs (11.6%). DBNs are especially suited for semi-supervised learning, and to demonstrate this we consider a modified version of the NORB recognition task in which additional unlabeled images are created by applying small translations to the images in the database. With the extra unlabeled data (and the same amount of labeled data as before), our model achieves 5.2% error.
344 citations
05 Sep 2011
TL;DR: A novel regularizer when training an autoencoder for unsupervised feature extraction yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
Abstract: We propose a novel regularizer when training an autoencoder for unsupervised feature extraction. We explicitly encourage the latent representation to contract the input space by regularizing the norm of the Jacobian (analytically) and the Hessian (stochastically) of the encoder's output with respect to its input, at the training points. While the penalty on the Jacobian's norm ensures robustness to tiny corruption of samples in the input space, constraining the norm of the Hessian extends this robustness when moving further away from the sample. From a manifold learning perspective, balancing this regularization with the autoencoder's reconstruction objective yields a representation that varies most when moving along the data manifold in input space, and is most insensitive in directions orthogonal to the manifold. The second-order regularization, using the Hessian, penalizes curvature and thus favors a smooth manifold. We show that our proposed technique, while remaining computationally efficient, yields representations that are significantly better suited for initializing deep architectures than previously proposed approaches, beating state-of-the-art performance on a number of datasets.
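The regularized objective described in this abstract can be sketched as follows. The notation is assumed here: $f$ is the encoder, $g$ the decoder, $J_f(x)$ the Jacobian of the encoder at input $x$, and $\lambda, \gamma$ the penalty weights; per the abstract, the Jacobian norm is computed analytically while the Hessian (curvature) penalty is approximated stochastically, here sketched via small perturbations $\epsilon$.

```latex
\mathcal{J}
  = \sum_{x \in \mathcal{D}}
    \Big(
      \| x - g(f(x)) \|^2
      \;+\; \lambda \, \| J_f(x) \|_F^2
      \;+\; \gamma \, \mathbb{E}_{\epsilon}\!\left[
        \| J_f(x) - J_f(x + \epsilon) \|_F^2
      \right]
    \Big)
```

The first term is the reconstruction objective; the second contracts the representation around each training point; the third penalizes how quickly the Jacobian itself changes, which is the second-order term that favors a smooth manifold.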
313 citations