Proceedings ArticleDOI

TensorFlow Based Website Click through Rate (CTR) Prediction Using Heat maps

01 Sep 2018, pp. 97-102
TL;DR: A framework using TensorFlow is proposed to identify and detect users' click activity in real time and to take automated decisions, such as the placement of suitable products and advertisements, based on the highest click counts recorded by users.
Abstract: Web heat maps are used to identify the click patterns and activities of a website's users. Using heat maps, one can make manual decisions based on users' click activity. This paper proposes a framework using TensorFlow to identify and detect users' click activity in real time. TensorFlow is also used to suggest or take business decisions predicted from users' clicks. This paper uses TensorFlow's machine learning library to take automated decisions, such as the placement of suitable products and advertisements, based on the highest click counts recorded by users. The results suggest that businesses such as e-commerce, fashion, and retail can benefit if this framework is deployed in their applications.
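The abstract does not show the framework's implementation, so the following is only a rough sketch of the underlying idea: learning to predict click probability (CTR) from features of a page region. The feature names and synthetic data are hypothetical; a plain logistic-regression model stands in for the paper's TensorFlow pipeline.

```python
import numpy as np

# Hypothetical features per page region: (x position, y position,
# historical click rate). Label: 1 if the user clicked the region.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 2] > 0.5).astype(float)  # clicks driven by historical click rate

# Logistic regression trained by batch gradient descent.
w = np.zeros(3)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted CTR per region
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Final predictions: regions with p > 0.5 would be candidates for
# product or advertisement placement.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
```

In a deployed system, the decision step would rank regions by predicted CTR rather than thresholding, and the model would be retrained as new click data streams in.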
Citations
DOI
08 Sep 2021
TL;DR: In this article, the authors conducted a systematic literature review of research articles and identified research gaps in the application of AI in fashion e-commerce, which can serve as future lines of research for academics.
Abstract: Artificial Intelligence (AI) has already strongly transformed many industries, such as healthcare, finance, automotive, education, and retail. In recent years, AI implementation in Business-to-Customer (B2C) e-commerce has increased significantly. The aim of this research is to study the impact and significance of AI in fashion e-commerce. For this purpose, we conducted a systematic review of the literature, in which 79 articles related to the topic were retrieved from the Web of Science database. First, the articles were categorized according to the AI methods used. Second, they were classified according to their purpose in the fashion e-commerce area. As a result of these categorizations, research gaps in the application of AI were identified. These gaps can be beneficial for researchers in academia as future lines of research.

1 citation

Journal ArticleDOI
TL;DR: In this article, a systematic review of the literature is carried out, in which data from the Web Of Science and Scopus databases were used to analyze 219 publications on the subject.
Abstract: Many industries, including healthcare, banking, the auto industry, education, and retail, have already undergone significant changes because of artificial intelligence (AI). Business-to-Customer (B2C) e-commerce has considerably increased its use of AI in recent years. The purpose of this research is to examine the significance and impact of AI in the realm of fashion e-commerce. To that end, a systematic review of the literature was carried out, in which data from the Web of Science and Scopus databases were used to analyze 219 publications on the subject. The articles were first categorized by the AI techniques used and then classified by their purpose in the fashion e-commerce domain. These categorizations allowed for the identification of research gaps in the use of AI, which offer potential and possibilities for further research.
References
Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
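The abstract's central design choice, replacing large convolution filters with stacks of very small (3x3) ones, can be illustrated with a parameter count: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution but use fewer parameters (and add an extra non-linearity). The sketch below assumes equal input/output channel counts and ignores biases.

```python
def conv_params(kernel_size: int, c_in: int, c_out: int) -> int:
    """Weights in one 2D convolution layer, ignoring biases."""
    return kernel_size * kernel_size * c_in * c_out

C = 64  # illustrative channel count
stacked_3x3 = 2 * conv_params(3, C, C)  # two 3x3 layers: 5x5 receptive field
single_5x5 = conv_params(5, C, C)       # one 5x5 layer: same receptive field

# 2 * (3*3) * C^2 = 18 * C^2  versus  (5*5) * C^2 = 25 * C^2
```

The same arithmetic extends to three stacked 3x3 layers versus one 7x7 layer (27·C² versus 49·C²), which is how the paper pushes depth to 16-19 weight layers without a parameter explosion.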

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations

Book ChapterDOI
06 Sep 2014
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models, used in a diagnostic role to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark.
Abstract: Large convolutional network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show that our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.

12,783 citations

Journal ArticleDOI
TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
Abstract: Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
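The alternative the abstract describes, a feed-forward network that maps a window of acoustic frames to posterior probabilities over HMM states, can be sketched as below. The layer sizes, the 11-frame window of 13 coefficients, and the 3-state output are illustrative assumptions (real systems use many hidden layers and thousands of tied HMM states), and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical input: an 11-frame window of 13 coefficients per frame,
# flattened into a single feature vector.
window = rng.random(11 * 13)

# One hidden layer stands in for a DNN's "many hidden layers";
# 3 output units stand in for the inventory of HMM states.
W1 = rng.standard_normal((128, 11 * 13)) * 0.1
W2 = rng.standard_normal((3, 128)) * 0.1

h = np.maximum(0.0, W1 @ window)        # hidden activations (ReLU)
logits = W2 @ h
posteriors = np.exp(logits - logits.max())
posteriors /= posteriors.sum()          # softmax over HMM states
```

In the hybrid DNN-HMM systems the article surveys, these posteriors (divided by state priors) replace the GMM likelihoods during Viterbi decoding.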

9,091 citations