Author

Ankita Gupta

Other affiliations: Amity University
Bio: Ankita Gupta is an academic researcher from Guru Gobind Singh Indraprastha University. The author has contributed to research in topics: Graph (abstract data type) & Tree traversal. The author has an h-index of 1 and has co-authored 6 publications receiving 12 citations. Previous affiliations of Ankita Gupta include Amity University.

Papers
Proceedings ArticleDOI
01 Feb 2015
TL;DR: An algorithm in two parts: detecting repeated frames by processing image pixels to produce a frame-by-frame motion energy time series, and determining the tampering attack and its location with the help of a Support Vector Machine, which helps predict whether a given video has been tampered with.
Abstract: A large amount of video content is transmitted over the Internet and other channels. With the help of existing multimedia editing tools, one can easily change the content of the data, which leads to a loss of authenticity of the information. Thus, it becomes necessary to develop methods by which the authenticity of videos can be confirmed. In the past, researchers have proposed several methods for the authentication of videos. This paper presents an algorithm divided into two parts: detecting repeated frames by processing image pixels to produce a frame-by-frame motion energy time series, and determining the tampering attack and its location with the help of a Support Vector Machine. This helps to predict whether the given video has been tampered with or not.
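
The paper does not give its exact features or SVM parameters, so the following is only a minimal sketch of the two-part idea: compute a frame-by-frame motion energy signal from pixel differences, then feed simple statistics of that signal to an SVM. It assumes OpenCV and scikit-learn; the video paths, labels, and summary features are illustrative placeholders.

```python
# Hedged sketch of motion-energy + SVM tamper classification (not the authors' code).
import cv2
import numpy as np
from sklearn.svm import SVC

def motion_energy(video_path):
    """Mean absolute pixel difference between consecutive frames, one value per frame pair."""
    cap = cv2.VideoCapture(video_path)
    energies, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            energies.append(np.mean(np.abs(gray - prev)))
        prev = gray
    cap.release()
    return np.array(energies)

def features(energy):
    """Illustrative summary features of the motion-energy series."""
    return [energy.mean(), energy.std(), energy.min(), int((energy < 1e-3).sum())]

# Training on labelled videos (1 = tampered, 0 = authentic); file names are placeholders.
X = [features(motion_energy(p)) for p in ["authentic1.mp4", "tampered1.mp4"]]
y = [0, 1]
clf = SVC(kernel="rbf").fit(X, y)

# Predict for a new, unseen video.
print(clf.predict([features(motion_energy("query.mp4"))]))
```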

13 citations

Proceedings ArticleDOI
01 Aug 2018
TL;DR: The research work focuses on implementing a data-driven approach to understanding water utility management, finding loopholes in demand-supply patterns, and analyzing the customer base using data analytics methods and a geospatial approach.
Abstract: Over the years, the utility-customer relationship has rapidly transformed, with a vast number of factors affecting consumption, including environmental factors such as geography, weather, population, and migration. The research work focuses on implementing a data-driven approach to understanding water utility management, finding loopholes in demand-supply patterns, and analyzing the customer base using data analytics methods and a geospatial approach. The first half of the work aims at understanding water utility consumption in parts of the metropolitan city of Delhi, using data analytics to study per-household consumption and to segment customers based on customer profiling.
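
As a rough illustration of the customer-segmentation step described above (not the paper's actual pipeline or data), one common approach is to cluster households by consumption features. The column names, sample values, and number of clusters below are assumptions for the example.

```python
# Hedged sketch: segmenting water-utility customers by per-household consumption with k-means.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical billing data: one row per household.
df = pd.DataFrame({
    "monthly_kl": [8.2, 15.4, 6.1, 22.0, 9.7, 30.5],   # consumption in kilolitres
    "household_size": [3, 5, 2, 6, 4, 7],
})

X = StandardScaler().fit_transform(df[["monthly_kl", "household_size"]])
df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Average consumption per segment gives a simple customer profile.
print(df.groupby("segment")["monthly_kl"].mean())
```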

2 citations

Book ChapterDOI
01 Jan 2018
TL;DR: This paper focuses mainly on formulating the idea of IoT into practical concepts, architectural modelling techniques for IoT, understanding the enabling elements and a method for the practical setup of embedded object communication, and the future implications of IoT from the perspective of India.
Abstract: The Internet of Things (IoT) presents "smart objects" as the core tool for ubiquitous computing and cyber-physical systems. Strong connectivity and an evolved generation of embedded systems provide the common ground for intelligent object-based computing, eliminating human-to-human or human-to-machine interactions. The basic objective of IoT is to enable seamless self-generation of data and information transfer between objects, knitting physical objects into the virtual world. This paper focuses mainly on formulating the idea of IoT into practical concepts, architectural modelling techniques for IoT, understanding the enabling elements, and a method for the practical setup of embedded object communication. We have also evaluated the popularity of IoT applications as compared to others. Furthermore, the future implications of IoT from the perspective of India are discussed.

2 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper aims at understanding the Vardha Cyclone, the most intense tropical cyclone of 2016 in the North Indian Ocean, using PingER data.
Abstract: The PingER project was initiated by the SLAC National Accelerator Laboratory, Stanford, California, with the goal of observing end-to-end network performance. In the last decade, PingER has produced a gigantic amount of information that has been stored in CSV documents. However, because of the need to retrieve information efficiently, it has been suggested that the information be placed into RDF triples. Translating and investigating such large volumes of information becomes an essential concern. Using clustering algorithms, new and interesting patterns can be found in the data sets. Outlier analysis can be performed, offering insight into the exceptions occurring in the dataset and analysing their likely causes. This paper aims at understanding the phenomenon of the Vardha Cyclone, the most intense tropical cyclone of 2016 in the North Indian Ocean, using PingER data.
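
The sketch below shows the general shape of the analysis described above: clustering plus simple outlier flagging on ping round-trip times. It does not use real PingER data or the RDF representation; the synthetic latencies and thresholds are invented for illustration.

```python
# Hedged sketch: clustering and outlier detection on (synthetic) ping latency data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical daily round-trip times (ms) to a monitored host; the spikes
# stand in for the kind of disruption a cyclone might cause.
rtt = np.concatenate([rng.normal(220, 15, 60), rng.normal(900, 50, 5)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rtt.reshape(-1, 1))

# Simple z-score based outlier flagging on top of the clustering.
z = (rtt - rtt.mean()) / rtt.std()
outliers = np.where(np.abs(z) > 2)[0]
print("cluster sizes:", np.bincount(labels), "outlier days:", outliers)
```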

1 citation

Proceedings ArticleDOI
TL;DR: A Pareto analysis performed on the significant drivers showed that the Duration of Work, Complexity of the Building, Plinth Area and Built-up Area, Height and Specifications were the most important cost drivers in a construction project.
Abstract: In construction management, it is difficult to predict the cost estimate during the preliminary stage of a project because of limited information and unknown factors. Artificial Neural Networks (ANNs) can help in predicting the estimate because of their simplicity and adaptability to non-linear problems. Due to their self-organizing nature, they can be used to solve such problems even with low-level programming, which makes them useful in interpreting and generalizing inadequate input information. ANNs are crude derivatives of the biological neural network, with single-layered or multi-layered neurons arranged in an input layer, hidden layer, and output layer. The neural network first has to be trained on historical data in order to make predictions. The size of the data set, the number of hidden neurons, and the network architecture determine the success of the results, so selecting the right data set is imperative. For the purpose of cost estimation, the cost drivers were taken as inputs and their estimated costs as the target value. The cost drivers were selected carefully through a literature review and a survey to provide more accurate estimates. The main drivers identified were: type of building, location, seismic zone, project complexity, ground condition, soil condition, plot area, plinth area, built-up area, number of stories, number of basements, principal structural material, type of foundation, level of design complexity, modular design, market conditions, construction conditions, risk factor, impact of risk, estimated duration of work, specification, quality of work, and detailed cost estimate of the project. A Pareto analysis performed on the significant drivers showed that the duration of work, complexity of the building, plinth area and built-up area, height, and specifications were the most important cost drivers in a construction project. A problem was formulated based on these drivers with numerical and categorical data. The data set was trained with a feed-forward backpropagation neural network in MATLAB. Training was carried out until the greatest correlation and the least mean squared error were obtained after multiple iterations. The trained network was then used to predict the cost for a new project. The output of the testing was 87% accurate despite the small data set used.
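
The paper trains its feed-forward backpropagation network in MATLAB; the sketch below is a rough Python/scikit-learn stand-in that shows the same shape of problem (cost drivers in, estimated cost out). The driver values, cost figures, and network size are invented for illustration and are not the paper's data.

```python
# Hedged sketch: a small feed-forward network regressing project cost from cost drivers.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Columns: duration (months), plinth area (m^2), built-up area (m^2),
# height (m), complexity score (1-5). Target: cost in millions (made-up values).
X = np.array([
    [12,  400,  900, 12, 2],
    [24, 1200, 3000, 30, 4],
    [18,  800, 1800, 21, 3],
    [30, 1500, 4200, 45, 5],
    [10,  350,  700,  9, 1],
])
y = np.array([35.0, 160.0, 90.0, 240.0, 25.0])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict the cost of a hypothetical new project.
new_project = [[20, 1000, 2500, 27, 3]]
print(model.predict(scaler.transform(new_project)))
```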

1 citation


Cited by
Journal ArticleDOI
TL;DR: A survey on passive video tampering detection methods is presented; the preliminaries of video files required for understanding video tampering forgery are covered; and some open issues are identified that point to new research areas in passive video tampering detection.

91 citations

Journal ArticleDOI
TL;DR: This paper presents a comprehensive and scrutinizing bibliography addressing the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti- Forensics and counter anti-forensics.
Abstract: In this digital day and age, we are becoming increasingly dependent on multimedia content, especially digital images and videos, to provide a reliable proof of occurrence of events. However, the availability of several sophisticated yet easy-to-use content editing software has led to great concern regarding the trustworthiness of such content. Consequently, over the past few years, visual media forensics has emerged as an indispensable research field, which basically deals with development of tools and techniques that help determine whether or not the digital content under consideration is authentic, i.e., an actual, unaltered representation of reality. Over the last two decades, this research field has demonstrated tremendous growth and innovation. This paper presents a comprehensive and scrutinizing bibliography addressing the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti-forensics and counter anti-forensics. Moreover, the paper intimately analyzes the research gaps found in the literature, provides worthy insight into the areas, where the contemporary research is lacking, and suggests certain courses of action that could assist developers and future researchers explore new avenues in the domain of video forensics. Our objective is to provide an overview suitable for both the researchers and practitioners already working in the field of digital video forensics, and for those researchers and general enthusiasts who are new to this field and are not yet completely equipped to assimilate the detailed and complicated technical aspects of video forensics.

81 citations

Proceedings ArticleDOI
22 Jun 2018
TL;DR: The idea behind this project was to develop a home automation system that is a cheaper alternative to commercial home automation options and at the same time can be seamlessly integrated with commercial products.
Abstract: Smart home is a term commonly used to refer to homes where the appliances, lighting, air-conditioning, TVs, etc. are capable of communicating with each other and can be controlled remotely according to a predefined schedule or via some kind of interface. This project presents a home automation system using Wireless Fidelity (Wi-Fi) as the communication interface. The idea behind the project was to develop a home automation system that is a cheaper alternative to commercial home automation options and at the same time can be seamlessly integrated with commercial products. The interface of the system should also be as simple and easy to learn as possible so that even elderly and disabled people are able to use it. The system can be controlled using an Android application or Google Assistant. The Android application communicates with the Firebase database and updates its values, which in turn enables control of the various sensors and electrical appliances in the home. The project uses the NodeMCU ESP8266 12E as the controller and as the module for wireless communication. Various sensors such as the MQ6, MQ135, and DHT11 are used to take readings of the environment around the house and to keep watch on it. Firebase is used as the database to keep a record of the readings of the various sensors.
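
In the described system the NodeMCU firmware and the Android app talk to Firebase directly; the Python sketch below only illustrates the same pattern through Firebase's Realtime Database REST API (writing sensor readings and switch states to database paths). The database URL, paths, and field names are placeholders, not the project's actual schema, and a secured database would additionally require an auth token.

```python
# Hedged sketch: pushing sensor readings and appliance states to a Firebase
# Realtime Database over its REST API.
import requests

FIREBASE_URL = "https://example-home-automation.firebaseio.com"  # placeholder project

def push_reading(room, temperature_c, gas_ppm):
    """Write the latest sensor readings for a room to the database."""
    payload = {"temperature": temperature_c, "gas": gas_ppm}
    resp = requests.patch(f"{FIREBASE_URL}/sensors/{room}.json", json=payload, timeout=5)
    resp.raise_for_status()

def set_appliance(room, appliance, on):
    """Flip a switch value that the controller would poll to drive a relay."""
    resp = requests.put(f"{FIREBASE_URL}/switches/{room}/{appliance}.json",
                        json=bool(on), timeout=5)
    resp.raise_for_status()

push_reading("living_room", 28.5, 120)
set_appliance("living_room", "fan", True)
```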

30 citations

Journal ArticleDOI
TL;DR: A two-step algorithm is proposed, in which suspicious frames are identified and their features are extracted and compared with other frames of the test video, in order to identify frame duplication attacks in MPEG-4 video.
Abstract: This paper presents passive blind forgery detection to identify frame duplication attacks in Moving Picture Experts Group-4 (MPEG-4) video. In this attack, one or more frames are copied and pasted at another location in the same video to hide or highlight a particular activity. Since the tampered frames come from the same video, their statistical properties are uniform, which makes it challenging to identify duplicate frames. In this paper, a two-step algorithm is proposed, in which suspicious frames are identified and their features are extracted and compared with other frames of the test video to take the decision. Scale Invariant Feature Transform (SIFT) key-points are used as features for comparison. Finally, the Random Sample Consensus (RANSAC) algorithm is used to locate duplicate frames. The proposed method is tested on compressed and uncompressed videos with variable compression rates. The simulation results show that the proposed scheme is able to detect the tampered frames with 99.8% average accuracy. A comparative analysis is made between the proposed method and existing methods with respect to Precision Rate (PR), Recall Rate (RR), and Detection Accuracy (DA). The average values of PR, RR, and DA for the proposed method are 99.9%, 99.7%, and 99.8% respectively, which are better than those of other methods. The proposed method needs 33 seconds of simulation time on average, which is less than that of other methods.
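
The core primitive of the comparison step, matching SIFT key-points between two frames and verifying the match geometrically with RANSAC, can be sketched with OpenCV as below. This is not the authors' full two-step algorithm; the ratio-test threshold, inlier count, and frame file names are assumptions.

```python
# Hedged sketch: SIFT matching + RANSAC verification between two frames.
# Requires opencv-python >= 4.4 (SIFT is included in the main package).
import cv2
import numpy as np

def frames_look_duplicated(frame_a, frame_b, min_inliers=50):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame_a, None)
    kp2, des2 = sift.detectAndCompute(frame_b, None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_inliers:
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return mask is not None and int(mask.sum()) >= min_inliers

# Usage: load two grayscale frames (placeholder file names) and compare them.
f1 = cv2.imread("frame_0010.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("frame_0450.png", cv2.IMREAD_GRAYSCALE)
print(frames_look_duplicated(f1, f2))
```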

17 citations

Journal ArticleDOI
TL;DR: The advisor recommendation system can recommend to students final-assignment advisors who have conducted research matching the topic of the student's final assignment, written in Indonesian.
Abstract: In higher education institutions such as universities, final projects are supervised by one or more supervisors with a similar research interest or topic. The choice of final project supervisor is an important factor in the student's final project work. However, a lack of information about supervisors can hamper students in making this choice. Thus, a system is needed that helps students determine final project or thesis advisors in accordance with their research topic; this problem is the basis of this research. The study is conducted by developing a web-based system that applies TF-IDF word weighting and the cosine similarity method. TF-IDF is a way to weight the relationship of a word to a document. Cosine similarity is a method for calculating the similarity between two objects expressed as two vectors, using keywords from a document as the measure. The advisor recommendation system can recommend to students final-assignment advisors who have conducted research in accordance with the topic of the student's final assignment, written in Indonesian. In 20 tests, comparing the system's recommendations with the actual assigned supervisors yielded an average accuracy of 75%.
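
A small sketch of the TF-IDF plus cosine-similarity matching described above is given below, using scikit-learn rather than the paper's web system. The advisor names and profile texts are made up, and the real system operates on Indonesian-language titles.

```python
# Hedged sketch: rank advisors by cosine similarity between TF-IDF vectors of
# their research profiles and the student's topic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

advisor_profiles = {
    "Dr. A": "machine learning image classification neural networks",
    "Dr. B": "database systems query optimization data warehousing",
    "Dr. C": "natural language processing text mining sentiment analysis",
}
student_topic = "sentiment analysis of product reviews using text mining"

vectorizer = TfidfVectorizer()
advisor_matrix = vectorizer.fit_transform(advisor_profiles.values())
student_vec = vectorizer.transform([student_topic])

# Similarity of the student's topic to each advisor profile, highest first.
scores = cosine_similarity(student_vec, advisor_matrix).ravel()
ranked = sorted(zip(advisor_profiles, scores), key=lambda x: -x[1])
print(ranked)  # the top-scoring advisor is the recommendation
```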

13 citations