
Book Chapter (DOI)

Recent Trends in Deep Learning with Applications

K. Balaji, K. Lavanya
01 Jan 2018 - pp. 201-222



Citations
Book Chapter (DOI)
01 Jan 2019
TL;DR: The essentials of deep learning methods with convolutional neural networks are presented, and their achievements in medical image analysis, such as deep feature representation, detection, segmentation, classification, and prediction, are analyzed.
Abstract: Deep learning is an essential branch of machine learning. It is rapidly becoming the state of the art, leading to enhanced performance in numerous medical applications. Recent progress in machine learning, specifically deep learning, aids the recognition, classification, and quantification of patterns in medical images. The core of these improvements is the ability to learn feature representations from data rather than designing features by hand using domain-specific knowledge. In a deep network, the bottom layers capture low-level feature representations while the top layers represent the task-level output information. Deep networks can also be computed quickly on inexpensive hardware. In this chapter, we present the essentials of deep learning with convolutional neural networks and analyze their achievements in medical image analysis, including deep feature representation, detection, segmentation, classification, and prediction. Finally, we conclude with a discussion of research challenges and indicate directions for further enhancement.
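To make the feature-hierarchy idea concrete, here is a minimal sketch of such a convolutional pipeline in PyTorch; this is not the chapter's own code, and the layer sizes and two-class output are illustrative assumptions.

```python
# A minimal sketch (assumed architecture, not the chapter's code) of the
# feature hierarchy the abstract describes: early convolutions capture
# low-level features, deeper layers build the task-level representation.
import torch
import torch.nn as nn

class SimpleMedicalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):  # 2 classes is an illustrative assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # bottom level: edges, blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper level: textures, parts
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # top level: output information

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of four 64x64 single-channel scans.
print(SimpleMedicalCNN()(torch.randn(4, 1, 64, 64)).shape)  # torch.Size([4, 2])
```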

4 citations

Posted Content
TL;DR: An attempt to automate the human-likeliness evaluation of text samples produced by natural language generation methods across several tasks, using a discrimination procedure based on large pretrained language models and their probability distributions.
Abstract: Automatic evaluation of various text quality criteria produced by data-driven intelligent methods is very common and useful because it is cheap, fast, and usually yields repeatable results. In this paper, we present an attempt to automate the human likeliness evaluation of the output text samples coming from natural language generation methods used to solve several tasks. We propose to use a human likeliness score that shows the percentage of the output samples from a method that look as if they were written by a human. Instead of having human participants label or rate those samples, we completely automate the process by using a discrimination procedure based on large pretrained language models and their probability distributions. As a follow-up, we plan to perform an empirical analysis of human-written and machine-generated texts to find the optimal setup of this evaluation approach. A validation procedure involving human participants will also check how the automatic evaluation correlates with human judgments.
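The paper gives no reference implementation, so the following is only a hedged sketch of such a discrimination procedure: GPT-2 perplexity (via the Hugging Face transformers library) stands in for the LM-probability-based discriminator, and the threshold value is purely an assumption.

```python
# Hedged sketch of an LM-probability-based discriminator: GPT-2
# perplexity with an arbitrary THRESHOLD is a stand-in for the
# procedure the paper proposes, not its actual implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
THRESHOLD = 50.0  # illustrative cutoff, not taken from the paper

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return float(torch.exp(loss))

def human_likeliness_score(samples: list[str]) -> float:
    """Fraction of samples the proxy discriminator deems human-like."""
    return sum(perplexity(s) < THRESHOLD for s in samples) / len(samples)

print(human_likeliness_score(["The cat sat quietly on the mat.",
                              "mat mat the on sat quietly cat"]))
```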

4 citations


Cites background from "Recent Trends in Deep Learning with..."

  • ...Same as other related disciplines such as MT (Machine Translation) or TS (Text Summarization), it has surged in the last decade, greatly pushed by the significant advances in text applications of deep neural networks [27,3] as well as the creation of large datasets [16,8,17]....


Journal Article (DOI)
Abstract: The number of Internet of Things (IoT) devices connected via the Internet is growing rapidly. The heterogeneity and complexity of the IoT, in terms of dynamism and uncertainty, complicate this landscape dramatically and introduce vulnerabilities. Intelligent management of the IoT is required to maintain connectivity, improve Quality of Service (QoS), and reduce energy consumption in real time within dynamic environments. Machine Learning (ML) plays a pivotal role in QoS enhancement, connectivity, and the provisioning of smart applications. Therefore, this survey focuses on the use of ML for enhancing IoT applications. We also provide an in-depth overview of the variety of IoT applications that can be enhanced using ML, such as smart cities, smart homes, and smart healthcare. For each application, we introduce the advantages of using ML. Finally, we shed light on ML challenges for future IoT research and review the current literature.

4 citations

DOI
01 Dec 2020
TL;DR: A hybridized approach is followed to classify lung nodules as benign or malignant, supporting early detection of lung cancer, improving the life expectancy of lung cancer patients, and thereby reducing the mortality rate of this deadly disease.
Abstract: Deep learning techniques have become very popular among Artificial Intelligence (AI) techniques in many areas of life. Among the many types of deep learning techniques, Convolutional Neural Networks (CNNs) are useful in image classification applications. In this work, a hybridized approach is followed to classify lung nodules as benign or malignant. This supports early detection of lung cancer, improving patients' life expectancy and thereby reducing the mortality rate of this deadly disease. The hybridization combines handcrafted features with deep features. Machine learning algorithms such as SVM and Logistic Regression are used to classify the nodules based on these features. The dimensionality reduction technique Principal Component Analysis (PCA) is introduced to improve the performance of the hybridized features with SVM. Experiments were carried out with 14 different methods. GLCM + VGG19 + PCA + SVM outperformed all other models with an accuracy of 94.93%, sensitivity of 90.9%, specificity of 97.36%, and precision of 95.44%. The F1 score was 0.93 and the AUC was 0.9843. The False Positive Rate was 2.637% and the False Negative Rate was 9.09%.
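As a hedged illustration of the best-reported pipeline (GLCM + VGG19 + PCA + SVM), the sketch below combines scikit-image texture features with torchvision VGG19 deep features; the GLCM settings, the pooling of the VGG19 feature maps, the 50 PCA components, and the RBF kernel are assumptions, since the abstract does not specify them.

```python
# Hedged sketch of a GLCM + VGG19 + PCA + SVM pipeline; all
# hyperparameters here are assumptions, not the paper's values.
import numpy as np
import torch
import torchvision.models as models
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.svm import SVC

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Handcrafted texture features from an 8-bit grayscale nodule patch."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def deep_features(rgb_u8: np.ndarray) -> np.ndarray:
    """Deep features: VGG19 conv maps, globally average-pooled to 512-d.
    (ImageNet mean/std normalization is omitted for brevity.)"""
    x = torch.from_numpy(rgb_u8).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        fmap = vgg.features(x)                    # (1, 512, H', W')
    return fmap.mean(dim=(2, 3)).numpy().ravel()  # 512 values

def hybrid_features(gray_u8, rgb_u8):
    return np.hstack([glcm_features(gray_u8), deep_features(rgb_u8)])

# With a feature matrix X and labels y (benign=0, malignant=1):
#   X_red = PCA(n_components=50).fit_transform(X)
#   clf = SVC(kernel="rbf").fit(X_red, y)
```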

References
Proceedings Article
03 Dec 2012
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state of the art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
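As a brief aside on the dropout regularization the abstract credits, here is a minimal PyTorch illustration (not the paper's code) of how dropout behaves differently in training and evaluation.

```python
# Minimal PyTorch illustration of dropout: in training, each unit is
# zeroed with probability p and the survivors rescaled; in evaluation,
# the layer passes activations through unchanged.
import torch
import torch.nn as nn

layer = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Dropout(p=0.5))

layer.train()                  # dropout active: random units zeroed
print(layer(torch.ones(1, 10)))
layer.eval()                   # dropout disabled for inference
print(layer(torch.ones(1, 10)))
```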

73,871 citations

Proceedings Article
01 Jan 2015
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
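The paper's central design choice, stacking very small 3x3 filters, can be sketched as follows; this is an illustrative PyTorch block, not the authors' code. Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters and an extra non-linearity.

```python
# Illustrative rendition of the paper's core design: stacks of 3x3
# convolutions followed by max-pooling.
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, convs: int = 2) -> nn.Sequential:
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions see a 5x5 receptive field with
# 2 * 9 * C * C weights instead of 25 * C * C for a single 5x5 conv,
# plus an extra non-linearity between them.
block = vgg_block(64, 64)
```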

49,857 citations

Journal Article (DOI)
01 Jan 1998
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
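For reference, a LeNet-style digit classifier of the kind the paper evaluates can be sketched as below; the layer sizes follow the classic LeNet-5 layout for 32x32 inputs, though this PyTorch rendition is an approximation rather than the paper's exact system.

```python
# Illustrative LeNet-style convolutional digit classifier.
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32 -> 28 -> 14
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14 -> 10 -> 5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                            # ten digit classes
)

print(lenet(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```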

34,930 citations

Proceedings Article (DOI)
07 Jun 2015
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
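The Inception module the abstract describes runs several filter sizes in parallel and concatenates the results; the following PyTorch sketch is illustrative only (branch widths are assumptions; GoogLeNet tunes them per stage).

```python
# Illustrative Inception module: parallel 1x1, 3x3, and 5x5
# convolutions plus a pooled branch, concatenated along channels.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(  # 1x1 reduction keeps the 3x3 cheap
            nn.Conv2d(in_ch, 16, kernel_size=1),
            nn.Conv2d(16, 32, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=1),
            nn.Conv2d(8, 16, kernel_size=5, padding=2))
        self.bp = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, 16, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

out = InceptionModule(64)(torch.randn(1, 64, 28, 28))
print(out.shape)  # torch.Size([1, 80, 28, 28]) -- 16+32+16+16 channels
```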

29,453 citations

Journal Article (DOI)
01 Jan 1988 - Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
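The update the abstract describes is the standard gradient-descent rule; written in notation of our own choosing (not the paper's):

```latex
% Error measure and weight update for back-propagation (our notation).
E = \tfrac{1}{2} \sum_{j} \left( y_j - d_j \right)^2,
\qquad
\Delta w_{ij} = -\,\eta \, \frac{\partial E}{\partial w_{ij}}
```

Here y_j is the actual output, d_j the desired output, and eta the learning rate; the partial derivatives are obtained by propagating the error backwards through the network via the chain rule, which is what lets the hidden units develop the useful internal features the abstract mentions.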

19,542 citations