Proceedings ArticleDOI

Image retrieval using latent feature learning by deep architecture

01 Dec 2014 - pp. 1-4
TL;DR: This paper introduces a novel approach to the image retrieval task that jointly exploits natural language processing techniques and a deep architecture.
Abstract: The explosive growth of data and images on the World Wide Web makes information retrieval increasingly critical. Image retrieval is recognized as an elementary problem among retrieval tasks and has received wide attention owing to the characteristics of the underlying domains; social media data, for instance, is noisy, diverse, heterogeneous, and interconnected. To confront these characteristics, we employ the widely accepted deep architecture concept for image retrieval, aided by latent features of natural language queries. In this paper, we introduce a novel approach to the image retrieval task that jointly exploits natural language processing techniques and a deep architecture.
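As a purely hypothetical illustration of the kind of pipeline the abstract describes, the sketch below ranks images by matching latent features of a text query against text associated with each image. TF-IDF is a simple stand-in for the NLP-derived latent query features, and the captions, names, and dimensions are invented for the example; the paper's deep architecture would learn far richer features.

```python
# Hypothetical sketch only: rank images by latent features of a text query.
# TF-IDF stands in for the paper's NLP-derived latent features; the toy
# captions and function names below are not from the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Assume each indexed image carries a short caption or tag string.
captions = ["sunset over snowy mountains", "dog playing in a park",
            "city skyline at night", "mountain lake at dawn"]
vectorizer = TfidfVectorizer()
caption_feats = vectorizer.fit_transform(captions).toarray()

def retrieve(query, top_k=2):
    """Return indices of the images whose captions best match the query."""
    q = vectorizer.transform([query]).toarray()[0]
    norms = np.linalg.norm(caption_feats, axis=1) * (np.linalg.norm(q) + 1e-9)
    sims = caption_feats @ q / (norms + 1e-9)       # cosine similarity
    return np.argsort(-sims)[:top_k].tolist()

print(retrieve("mountains at sunset"))  # caption 0 ranks first
```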
Citations
Book ChapterDOI
01 Jan 2022
TL;DR: In this article, the authors discuss the architecture and working of deep neural networks and focus on their application in the detection and treatment of diseases such as cancer, diabetes, Alzheimer's, and Parkinson's disease.
Abstract: Machine learning is quickly becoming an important tool for the diagnosis and prognosis of various medical conditions. Deep learning, which builds on machine learning, handles complex input-output mappings. Owing to their efficiency and similarity to the working of the human brain, deep neural networks have become a preferred method for processing and analysing medical data. Beyond diagnosis, deep learning is used to study disease progression, develop personalised treatment plans, and support overall patient management. This chapter discusses the architecture and working of deep neural networks and focuses on their application in the detection and treatment of diseases such as cancer, diabetes, Alzheimer's, and Parkinson's disease.

17 citations

Book ChapterDOI
Anna Corrias
01 Jan 2022

2 citations

Book ChapterDOI
George Hickman
01 Jan 2022
TL;DR: In this article, the authors present deep learning models and analysis algorithms that have been employed in one form or another for studying gene characteristics and gene development, or that have the potential to form the basis for groundbreaking research in the area.
Abstract: DNA sequencing deals with determining the order of the bases in DNA. These bases are the building blocks of DNA molecules, and their arrangement largely determines the genetic information carried within a DNA segment, so sequencing is a central concern in genomics. Optimizing the processes of sequencing and analysis is increasingly important, and the field of deep learning has much to offer here. Autoencoders are artificial neural networks trained in an unsupervised manner to obtain feature representations or dimensionality reduction. Because clustering is difficult for high-dimensional data, autoencoders can be used to reduce the dimensionality, associating each gene cluster with an autoencoder. Genetic algorithms, inspired by Darwin's theory of evolution, provide an alternative to traditional clustering algorithms, which have been found to have various drawbacks when applied to genetic data. Drug repositioning is the evaluation of existing drugs against new disease targets, while pharmacogenomics seeks to predict a target's response to a drug; deep learning acts as a powerful tool for repositioning drugs by enabling robust predictions and deep insights into drug-disease combinations. This chapter aims to provide the reader with the deep learning models and analysis algorithms that have been employed in one form or another for studying gene characteristics and gene development, or that have the potential to form the basis for groundbreaking research in the area.
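As a minimal sketch of the autoencoder-based dimensionality reduction the abstract describes, the snippet below compresses a toy, randomly generated gene-expression matrix to a small latent code and clusters in that space. Layer sizes, epochs, and the cluster count are illustrative assumptions, not values from the chapter.

```python
# Minimal sketch: compress high-dimensional expression vectors with an
# autoencoder, then cluster in the learned low-dimensional space.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

n_genes = 2000                                       # illustrative dimension
x = np.random.rand(500, n_genes).astype("float32")   # toy expression matrix

# Encoder squeezes the input to a 32-dim code; decoder reconstructs it.
inputs = tf.keras.Input(shape=(n_genes,))
code = tf.keras.layers.Dense(32, activation="relu")(inputs)
recon = tf.keras.layers.Dense(n_genes, activation="sigmoid")(code)
autoencoder = tf.keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)  # unsupervised

# Cluster the latent codes instead of the raw gene dimensions.
encoder = tf.keras.Model(inputs, code)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(encoder.predict(x))
```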
Book ChapterDOI
hjhgfrn
01 Jan 2022
TL;DR: Deep learning and convolutional neural networks (CNNs) have brought about a revolution in the analysis of gene expression images, as discussed in this paper; the technique resolves some of the setbacks faced by traditional machine learning approaches, while advances in technology have enabled the capture of gene sequence images.
Abstract: Throughout this chapter the objective is to present deep learning techniques and algorithms, specifically CNNs, that save a researcher time and resources. The concepts are explained as an overview to build an intuition for the techniques, which can be explored further, with the mathematics, in the references. Computational biology involves examining how proteins interact with each other through the simulation of protein folding, motion, and interaction. Current computational biology research can be divided into a number of broad areas, mainly based on the type of experimental data that is analyzed or modeled. Deep learning, and in particular convolutional neural networks (CNNs), has brought about a revolution in the analysis of gene expression images. The technique resolves some of the setbacks faced by traditional machine learning approaches, while advances in technology have enabled the capture of gene sequence images; in some cases, non-image data can be converted to an image for analysis.
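For illustration, a small CNN of the kind the chapter surveys might look like the following sketch; the input resolution and the number of output classes are assumptions made for the example.

```python
# Illustrative sketch: a compact CNN classifier for expression-pattern
# images. Input size (64x64 grayscale) and 5 classes are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),               # grayscale image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 pattern classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would then call model.fit(images, labels, ...).
```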
References
Journal ArticleDOI
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
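The greedy recipe rests on training one restricted Boltzmann machine (RBM) per layer. Below is a minimal NumPy sketch of a single RBM trained with one-step contrastive divergence (CD-1) on toy binary data; a full DBN would train such a layer, feed its hidden activities to the next RBM, and repeat before fine-tuning. Sizes and hyperparameters are illustrative.

```python
# Sketch of the layer-at-a-time building block: one RBM trained with CD-1.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=10):
    n_visible = data.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)                    # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hiddens
        p_v1 = sigmoid(h0 @ W.T + b_v)                  # reconstruct visibles
        p_h1 = sigmoid(p_v1 @ W + b_h)                  # negative phase
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_h

# Toy binary data; the first DBN layer would be trained like this.
X = (rng.random((100, 20)) < 0.5).astype(float)
W1, bh1 = train_rbm(X, n_hidden=8)
```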

15,055 citations

Journal ArticleDOI
TL;DR: An efficient algorithm is proposed that allows the computation of the ICA of a data matrix in polynomial time and may be seen as an extension of principal component analysis (PCA).
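For a concrete picture of ICA in practice, the sketch below unmixes two synthetic sources with scikit-learn's FastICA (a related polynomial-time algorithm, not necessarily the one proposed in this reference) and contrasts it with PCA, which only decorrelates.

```python
# Illustration: ICA recovers independent sources from linear mixtures;
# PCA merely decorrelates them. Signals and mixing matrix are toy data.
import numpy as np
from sklearn.decomposition import FastICA, PCA

t = np.linspace(0, 8, 2000)
s = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
X = s @ np.array([[1.0, 0.5], [0.5, 1.0]])         # observed mixtures

sources = FastICA(n_components=2, random_state=0).fit_transform(X)  # unmix
components = PCA(n_components=2).fit_transform(X)  # decorrelated axes only
```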

8,522 citations

Book
01 Jan 2009
TL;DR: The motivations and principles of learning algorithms for deep architectures are discussed, in particular those exploiting unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks to construct deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.

7,767 citations


"Image retrieval using latent featur..." refers background in this paper

  • ...The difficulty of coming up with appropriate visual and semantic categories can be solved by the “DEEP ARCHITECTURES” [10] which is the main focus of this paper....


Journal ArticleDOI
TL;DR: An algorithm for suffix stripping is described, which has been implemented as a short, fast program in BCPL, and performs slightly better than a much more elaborate system with which it has been compared.
Abstract: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. An algorithm for suffix stripping is described, which has been implemented as a short, fast program in BCPL. Although simple, it performs slightly better than a much more elaborate system with which it has been compared. It effectively works by treating complex suffixes as compounds made up of simple suffixes, and removing the simple suffixes in a number of steps. In each step the removal of the suffix is made to depend upon the form of the remaining stem, which usually involves a measure of its syllable length.
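The algorithm survives today in standard ports; a usage sketch with NLTK's Python implementation of the Porter stemmer follows (the original was a short BCPL program).

```python
# Usage sketch: the Porter suffix-stripping algorithm via NLTK's port.
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ["caresses", "ponies", "running", "hopefulness"]:
    print(word, "->", stemmer.stem(word))
# caresses -> caress, ponies -> poni, running -> run, hopefulness -> hope
```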

7,572 citations

Proceedings Article
04 Dec 2006
TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Abstract: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
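A compact sketch of the greedy layer-wise strategy, here using autoencoder layers rather than RBMs: each layer is pretrained unsupervised on the previous layer's activations, and the pretrained stack then initializes a supervised network for fine-tuning. The data, layer sizes, and class count are illustrative assumptions.

```python
# Greedy layer-wise pretraining sketch with autoencoder layers: train each
# layer to reconstruct its input, then stack the encoders as initialization.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 100).astype("float32")   # toy unlabeled data
pretrained = []
h = x
for n_units in [64, 32]:                         # two layers, trained in turn
    inp = tf.keras.Input(shape=(h.shape[1],))
    enc = tf.keras.layers.Dense(n_units, activation="relu")
    dec = tf.keras.layers.Dense(h.shape[1])
    ae = tf.keras.Model(inp, dec(enc(inp)))
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(h, h, epochs=3, verbose=0)            # unsupervised pretraining
    pretrained.append(enc)
    h = enc(h).numpy()                           # feed activations upward

# The pretrained encoders initialize a classifier for supervised fine-tuning.
clf = tf.keras.Sequential([tf.keras.Input(shape=(100,)), *pretrained,
                           tf.keras.layers.Dense(10, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```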

4,385 citations


"Image retrieval using latent featur..." refers background in this paper

  • ...Deep neural network [13] is an ANN which has many hidden layers between the input and the output layer....
