Author

J. Divya Udayan

Bio: J. Divya Udayan is an academic researcher from VIT University. The author has contributed to research in the topics of deep learning and rendering (computer graphics), has an h-index of 3, and has co-authored 19 publications receiving 28 citations. Previous affiliations of J. Divya Udayan include Amrita Vishwa Vidyapeetham and the Gandhi Institute of Technology and Management.

Papers
Journal ArticleDOI
TL;DR: A novel method is proposed for emotion classification using a deep learning network with transfer learning; it achieves a significant improvement in emotion classification, with good accuracy and PDA value compared with other state-of-the-art methods.
Abstract: Emotion is subjective: an image conveys rich semantics and can induce different emotions in different individuals. A novel method is proposed for emotion classification using a deep learning network with transfer learning. Transfer learning techniques reuse a model trained on a related predictive problem. The purpose of the proposed work is to classify the emotion perceived from images based on visual features. Image augmentation and segmentation are performed to build a powerful classifier. The performance of a deep convolutional neural network (CNN) is improved with transfer learning techniques on a large-scale Image-Emotion dataset. Experiments conducted on this dataset show that the proposed method achieves a significant improvement in emotion classification, with good accuracy and PDA value compared with other state-of-the-art methods.
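The core idea of the paper, reusing learned features and training only a new classification head, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the frozen random projection stands in for a pretrained CNN backbone, and the toy data, dimensions, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: in the paper's setting this would be
# a CNN trained on a related task; here a fixed random projection stands in.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # ReLU features; these weights are never updated (the transfer step).
    return np.maximum(x @ W_frozen, 0.0)

def train_head(feats, labels, n_classes=4, lr=0.1, epochs=200):
    # Trainable softmax classification head: only this part is fit on the
    # new (emotion) data, via plain gradient descent.
    W = np.zeros((feats.shape[1], n_classes))
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(n_classes)[labels]
        W -= lr * feats.T @ (p - onehot) / len(labels)
    return W

# Toy "image" feature vectors with class-dependent means (4 emotion classes).
X = rng.normal(size=(200, 64)) + np.repeat(np.arange(4), 50)[:, None] * 0.5
y = np.repeat(np.arange(4), 50)
feats = extract_features(X)
W_head = train_head(feats, y)
acc = (np.argmax(feats @ W_head, axis=1) == y).mean()
```

Only the head's weights are learned; the frozen extractor is what makes this "transfer" rather than training from scratch.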

13 citations

Proceedings ArticleDOI
01 Feb 2020
TL;DR: This paper investigates the effectiveness of augmented reality in brand building and marketing, alongside traditional modes of advertisement (digital and print), in one of the world's biggest and most complex industries.
Abstract: Digital media and the internet have provided us with infinite opportunities for brand building and marketing. Whenever a product is advertised, it must add value in terms of product and brand knowledge. A good product, design, content, and advertisement tend to create a better image in the mind of the client, so there is a greater chance that the product will be chosen. Augmented reality (AR) is an innovative technology that adds virtual objects to reality to create lifelike experiences. Interactive design helps the client feel and experience a product without physically going anywhere or purchasing it. This creates a strong imprint of the product in the customer's mind, making the product easier to recall. This paper investigates the effectiveness of augmented reality in brand building and marketing alongside traditional modes of advertisement (digital and print). One of the world's biggest and most complex industries, the valve industry, is targeted for the study. An augmented reality application is developed for customers as well as employees, and a survey is then carried out to compare results when only traditional methods of advertising are used and when augmented reality is added to them.

13 citations

Journal ArticleDOI
TL;DR: This paper presents a systematic literature review of the current state of the art in real-time 3D reconstruction of non-rigid objects, articulated motion, and human performance, and discusses the limitations of current methods.
Abstract: Background: Recent developments in capture devices such as the Kinect and the Intel RealSense camera have impelled research in 3D reconstruction, especially of dynamic scenes; performance in terms of both reconstruction quality and speed has increased, supporting applications such as teleportation, gaming, free-viewpoint video, and CG films. This paper provides a systematic literature review of 3D reconstruction techniques applied to dynamic scenes, detailing technical progress in the field and identifying the research gap. Purpose: This paper presents a systematic literature review of the current state of the art in real-time 3D reconstruction of non-rigid objects, articulated motion, and human performance. We further discuss the limitations of current methods and emphasize promising technologies for future development. Methods: A search was conducted on five databases for 3D reconstruction techniques for dynamic scenes. Reconstruction of a dynamic scene can be categorized as rigid-object or non-rigid-object reconstruction, depending on the object being reconstructed; we searched both categories and concentrated on dynamic scenes in which the object moves while the camera is static. Results: 281 papers were initially retrieved; after abstract screening, 100 were selected, and after detailed study, 46 were included in the systematic literature review and are presented in the table.

9 citations

Journal ArticleDOI
TL;DR: This paper looks into the advantages that deep learning approaches can bring by developing a framework that enhances the prediction of heart-related diseases using ECG.
Abstract: Cardiovascular diseases can be controlled through early detection, risk evaluation, and prediction. This paper addresses the application of deep learning methods to CVD diagnosis using ECG and also discusses deep learning with Python. A detailed analysis of related articles has been conducted. The results indicate that convolutional neural networks are the most widely used deep learning technique in CVD diagnosis. This paper looks into the advantages that deep learning approaches can bring by developing a framework that enhances the prediction of heart-related diseases using ECG.

7 citations

Proceedings ArticleDOI
J. Divya Udayan, Hyung-Seok Kim, Jun Lee, Jee-In Kim, Keetae Kim
10 Jul 2013
TL;DR: A new data representation mechanism and transmission scheme renders 3D models with less content and lower computational cost, together with a technique for rendering with selective LODs that emphasizes regions of interest (ROIs) depending on the importance of a building to a specific user.
Abstract: Streaming of 3D data sets is a key technology for remote rendering and visualization of huge, complex geometric models such as large-scale city models. Even with a high-speed network, it is still difficult to share the 3D information of a city scene among different users. In our work, we have devised a new data representation mechanism and transmission scheme to render 3D models with less content and lower computational cost. Our approach uses lightweight building geometry and a multi-level textured LOD representation to transmit data via streaming. In addition, we suggest a technique for rendering with selective LODs that emphasizes regions of interest (ROIs) depending on the importance of a building to a specific user. The LOD distribution is updated over the ROI as the viewer's gaze point moves over the model surfaces. Preliminary tests and evaluations show the feasibility of extending our method to large-scale mobile applications.
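The selective-LOD idea above, finer detail near the gaze point and for important buildings, can be sketched as a simple selection policy. This is a hypothetical illustration, not the paper's algorithm: the distance thresholds, importance cutoff, and function name are all assumptions.

```python
import math

def select_lod(building_pos, gaze_point, importance, max_lod=3):
    """Return an LOD index: 0 = coarsest geometry, max_lod = finest.

    Buildings near the viewer's gaze point get finer levels of detail,
    in the spirit of the paper's ROI-driven selective-LOD rendering.
    """
    dist = math.dist(building_pos, gaze_point)
    # Distance-based base level (illustrative thresholds in scene units).
    if dist < 50.0:
        lod = max_lod
    elif dist < 150.0:
        lod = max_lod - 1
    else:
        lod = 0
    # Important landmarks keep at least mid-level detail anywhere in view.
    if importance > 0.8:
        lod = max(lod, max_lod - 1)
    return lod
```

Re-running this per building as the gaze point moves reproduces the paper's behavior of the LOD distribution updating over the ROI.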

5 citations


Cited by
Journal ArticleDOI
TL;DR: This work presents a survey of recent developments in analyzing multimodal sentiments (involving text, audio, and video/image) in human–machine interaction, and the challenges involved in analyzing them.
Abstract: The analysis of sentiments is essential in identifying and classifying opinions regarding a source material, that is, a product or service. Sentiment analysis finds a variety of applications, such as product reviews, opinion polls, movie reviews on YouTube, news video analysis, and health-care applications including stress and depression analysis. The traditional, text-based approach to sentiment analysis involves collecting large amounts of textual data and applying different algorithms to extract sentiment information from it. Multimodal sentiment analysis instead provides methods for opinion analysis based on the combination of video, audio, and text, going well beyond conventional text-based analysis in understanding human behavior. The remarkable increase in the use of social media provides a large collection of multimodal data that reflects users' sentiments on certain aspects. The multimodal approach helps classify the polarity (positive, negative, or neutral) of individual sentiments. Our work presents a survey of recent developments in analyzing multimodal sentiments (involving text, audio, and video/image) in human–machine interaction and the challenges involved in analyzing them. A detailed survey of sentiment datasets, feature extraction algorithms, data fusion methods, and the efficiency of different classification techniques is presented.
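One of the data fusion methods the survey covers, late (decision-level) fusion, can be sketched as a weighted average of per-modality polarity scores. This is a generic illustration, not any surveyed system: the scores, weights, and function name are hypothetical.

```python
LABELS = ("negative", "neutral", "positive")

def fuse_polarity(scores_by_modality, weights=None):
    """Late fusion: each modality supplies (p_neg, p_neu, p_pos) scores;
    a weighted average combines them into one polarity label."""
    weights = weights or {m: 1.0 for m in scores_by_modality}
    total = sum(weights[m] for m in scores_by_modality)
    fused = [
        sum(weights[m] * scores_by_modality[m][i] for m in scores_by_modality) / total
        for i in range(3)
    ]
    return LABELS[max(range(3), key=fused.__getitem__)]

# Text and video lean positive while audio is neutral; fusion yields "positive".
label = fuse_polarity({
    "text": (0.1, 0.2, 0.7),
    "audio": (0.2, 0.6, 0.2),
    "video": (0.1, 0.3, 0.6),
})
```

Early-fusion approaches instead concatenate per-modality features before classification; late fusion as above keeps the per-modality classifiers independent.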

47 citations

Journal ArticleDOI
TL;DR: This study develops a process that translates data without any information loss, managing data and metadata so that they do not increase complexity while keeping a strong linkage between them.
Abstract: In big data, data originates in real time from many distributed and heterogeneous sources in the form of audio, video, text, and sound, which makes it massive and complex for traditional systems to handle. Data therefore needs a semantically enriched yet simple representation for better utilization. Such a representation is possible using the Resource Description Framework (RDF) introduced by the World Wide Web Consortium (W3C). Bringing and transforming rapidly growing data from different sources and formats into RDF form is still an open issue: transitions of information among applications must be covered while keeping stored data simple rather than complex. We transform big data into Extensible Markup Language (XML) and then into RDF triples linked in real time, making the transformation more data-friendly. In this study we develop a process that translates data without any information loss, managing data and metadata so that they do not increase complexity while keeping a strong linkage between them. The metadata is kept generalized so that it remains more useful than metadata dedicated to specific types of data source. The study includes a model explaining the functionality of the process and the corresponding algorithms showing how it is implemented. A case study demonstrates the transformation of relational textual data into RDF, and the results are discussed at the end.
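The relational-to-RDF step in the case study can be sketched minimally: each row becomes a subject URI, and each column/value pair becomes one RDF triple, serialized as N-Triples. This is a generic sketch of the mapping pattern, not the paper's process; the base URI, table name, and function name are assumptions.

```python
# Hypothetical base namespace for minted subject and predicate URIs.
BASE = "http://example.org"

def row_to_ntriples(table, pk, row):
    """Map one relational row (a dict) to a list of N-Triples lines.

    The primary-key value identifies the subject; every other column
    becomes a predicate with its value as a plain literal.
    """
    subject = f"<{BASE}/{table}/{row[pk]}>"
    triples = []
    for col, val in row.items():
        if col == pk:
            continue  # the key is encoded in the subject URI, not repeated
        predicate = f"<{BASE}/schema/{table}#{col}>"
        triples.append(f'{subject} {predicate} "{val}" .')
    return triples

# One "person" row yields one triple per non-key column.
triples = row_to_ntriples("person", "id", {"id": 7, "name": "Ada", "field": "computing"})
```

A fuller pipeline would also type literals, escape strings per the N-Triples grammar, and emit linked metadata triples, which is where the paper's generalized-metadata design comes in.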

44 citations

Journal ArticleDOI
TL;DR: The review shows that initial hand-operated shape grammars (SGs) gave way to automatic generation, which in turn developed into automated SG extraction through increasing levels of computational capability; as an overall result, example-based research perspectives raise important possibilities for intelligent design systems.
Abstract: Recent Artificial Intelligence studies have achieved substantial improvements in practical tasks by using extensive amounts of data. We assume that a substantial part of the data to guide artificial design technologies resides in existing design examples. Developing ways to use this data may enable improvements in intelligent design tools, with the hope that these may provide more effective design workflows and more productive design practices. Such improvements may result in more in-depth evaluations of potentials and alternatives for design situations; hence better planning for the spatial environment. Various approaches have been developed to use representations of architectural examples for artificially tackling architectural design tasks. This study presents a review of the historical development of these approaches, with an overall aim to investigate where and how design examples have been used for practical computational design applications. The review encompasses traditional and recent Shape Grammar and Procedural Modeling studies, Case-Based Design, Similarity-Based Evaluation and Design, and recent studies on the architectural uses of Machine Vision, Semantic Modeling, Machine Learning, and Classification. The emphasis of the review is on the studies that aim at designing or generating new design examples, particularly for building layouts, facades, envelopes, and massing. For a comparative evaluation of the current capabilities of the examined lineages of studies, we propose a minimum set of design capabilities, and assess each study through this framework. This reveals the overall patterns of already covered requirements. The review shows that initial hand-operated SGs gave way to automatic generation, which in turn developed into automated SG extraction, through increasing levels of computational capabilities. Case-Based Design has been neglected; however, it can be reinvigorated through novel AI techniques. 
On the other hand, Similarity-Based Evaluation may complement and balance the orientation towards technical performance. Machine Learning and Computer Vision appear as potential intermediaries for connecting these threads. There are example-based studies towards almost all aspects of artificial design; yet, these have not been tackled adequately or definitively. In particular, dynamic process control is still only a future potential. On the other hand, the examined research lineages have the potential to assume complementary roles for more capable and multifaceted design systems. As an overall result, example-based research perspectives raise important possibilities for intelligent design systems.

26 citations

Patent
13 Mar 2015
TL;DR: This patent presents a system for processing and/or transmitting 3D data, in which a partitioning component receives captured data associated with a 3D model of an interior environment and partitions it into at least one data chunk associated with at least a first and a second level of detail.
Abstract: Systems and techniques for processing and/or transmitting three-dimensional (3D) data are presented. A partitioning component receives captured 3D data associated with a 3D model of an interior environment and partitions the captured 3D data into at least one data chunk associated with at least a first level of detail and a second level of detail. A data component stores 3D data including at least the first level of detail and the second level of detail for the at least one data chunk. An output component transmits a portion of data from the at least one data chunk that is associated with the first level of detail or the second level of detail to a remote client device based on information associated with the first level of detail and the second level of detail.

22 citations

Journal ArticleDOI
TL;DR: A multi-feature fusion transfer learning (MFTL) method is proposed that applies knowledge and skills obtained from the Baijiabao landslide scenario, where sufficient monitoring data exist, to improve prediction capacity for other landslides, such as the Bazimen and Baishuihe landslides.
Abstract: Rainfall- and reservoir-induced landslides in the Zigui Basin of the China Three Gorges Reservoir (CTGR) area exhibit typical step-like deformation characteristics with mutation and creep states. Previous landslide-displacement forecasting models yielded low prediction accuracy, especially for mutational displacements. Coupled with the lack of monitoring sites and data limitations, it is extremely difficult to obtain accurate and reliable early warnings for landslides. The multi-feature fusion transfer learning (MFTL) method proposed in this paper applies the knowledge and skills obtained from the Baijiabao landslide scenario, where sufficient monitoring data exist, to improve the prediction capacity for other landslides, such as the Bazimen and Baishuihe landslides. The model barely relies on a long, continuous monitoring process; it can not only fill gaps in the data when monitoring is interrupted but also provide real-time displacement predictions based on accurate weather forecasting and periodic reservoir scheduling. In addition, a non-uniform weight error (NWE) evaluation method is proposed to focus on prediction accuracy in the mutation state, because landslide instability is most likely to occur in this stage. Compared with other intelligent algorithms, the results indicate that the MFTL method shows low prediction error, high reliability, and positive generalization ability in landslide prediction. This study paves the way toward real-time, whole-process, accurate landslide forecasting.

20 citations