Author

Devendra K. Tayal

Bio: Devendra K. Tayal is an academic researcher from Indira Gandhi Institute of Technology. The author has contributed to research in the topics of fuzzy logic and computer science, has an h-index of 11, and has co-authored 67 publications receiving 471 citations. Previous affiliations of Devendra K. Tayal include Guru Gobind Singh Indraprastha University and the University of Delhi.


Papers
Journal ArticleDOI
TL;DR: The proposed approach contributes to the betterment of society by helping investigating agencies with crime detection and criminal identification, thereby reducing crime rates.
Abstract: In this paper, we propose an approach for the design and implementation of crime detection and criminal identification for Indian cities using data mining techniques. Our approach is divided into six modules: data extraction (DE), data preprocessing (DP), clustering, Google Maps representation, classification and WEKA® implementation. The first module, DE, extracts unstructured crime data from various crime Web sources covering the period 2000–2012. The second module, DP, cleans, integrates and reduces the extracted crime data into 5,038 structured crime instances, which we represent using 35 predefined crime attributes. Safeguard measures are taken for crime-database accessibility. The remaining four modules serve crime detection, criminal identification and prediction, and crime verification, respectively. Crime detection is analyzed using k-means clustering, which iteratively generates two crime clusters based on similar crime attributes. The Google Maps representation improves the visualization of the k-means results. Criminal identification and prediction are analyzed using KNN classification. Crime verification of our results is done using WEKA®, which verifies an accuracy of 93.62 and 93.99 % in the formation of the two crime clusters using the selected crime attributes. Our approach contributes to the betterment of society by helping investigating agencies with crime detection and criminal identification, thereby reducing crime rates.
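As a rough illustration of the clustering and classification steps described above, the following Python sketch pairs k-means (k = 2) with KNN classification using scikit-learn. The file name and column names are hypothetical placeholders, not the paper's actual 35-attribute schema.

```python
# A minimal sketch of the clustering/classification pipeline, using
# scikit-learn. The CSV path and column names are hypothetical
# stand-ins for the paper's 35 predefined crime attributes.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

crimes = pd.read_csv("crime_instances.csv")      # 5,038 structured instances (assumed file)
features = crimes.drop(columns=["criminal_id"])  # numeric crime attributes (assumed label column)

# Crime detection: k-means with k=2, mirroring the paper's two crime clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
crimes["cluster"] = kmeans.fit_predict(features)

# Criminal identification: KNN classification on the labelled instances.
X_train, X_test, y_train, y_test = train_test_split(
    features, crimes["criminal_id"], test_size=0.2, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("KNN accuracy:", knn.score(X_test, y_test))
```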

96 citations

Book ChapterDOI
01 Jan 2019
TL;DR: This study shows that AI is the backbone of all NLP-enabled intelligent tutoring systems, which help develop qualities such as self-reflection, answering deep questions, resolving conflicting statements, generating creative questions, and choice-making skills.
Abstract: The contribution of Artificial Intelligence (AI) to the field of education has always been significant. From robotic teaching to automated answer-sheet evaluation, AI has always helped both teachers and students. In this paper we present an in-depth analysis of the research developments carried out across the globe in applying artificial intelligence techniques to the education sector, so as to summarize and highlight the role of AI in teaching and student evaluation. Our study shows that AI is the backbone of all NLP-enabled intelligent tutoring systems. These systems help develop qualities such as self-reflection, answering deep questions, resolving conflicting statements, generating creative questions, and choice-making skills.

35 citations

Proceedings ArticleDOI
05 Mar 2014
TL;DR: Two algorithms are presented, one to identify sarcastic tweets and the other to perform polarity detection on political sarcastic tweets, which can be used to gauge people's dissatisfaction with a particular party, candidate or government.
Abstract: Sarcasm is the activity of saying or writing in such a way that the literal meaning of what is said is the opposite of what is meant. Generally, sarcastic sentences are used to express negative feelings. Thus, in political polarity detection, sarcasm can be used to gauge people's dissatisfaction with a particular party, candidate or government and hence, considered alongside other aspects, can help determine poll results. This paper presents two algorithms, one to identify a sarcastic tweet and the other to perform polarity detection on political sarcastic tweets. It also describes our current approach to identifying sarcastic political tweets and future directions, and it emphasizes using social networking sites, mainly Twitter, as a source for predicting political results.
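The abstract does not reproduce the paper's algorithms; as a loose illustration of the core idea, the sketch below flags a tweet as sarcastic when positive sentiment words co-occur with a negative situation (or an explicit sarcasm hashtag) and then inverts the surface polarity. The word lists are toy placeholders, not the paper's method.

```python
# A naive, illustrative sarcasm flagger: a tweet pairing positive
# sentiment words with a negative situation (or carrying an explicit
# sarcasm hashtag) is marked sarcastic, and its surface polarity is
# inverted. The word lists are toy placeholders, not the paper's lexicon.
import re

POSITIVE = {"love", "great", "awesome", "fantastic", "brilliant"}
NEGATIVE_SITUATIONS = {"traffic", "scam", "corruption", "blackout", "delay"}

def is_sarcastic(tweet: str) -> bool:
    tokens = set(re.findall(r"[a-z#]+", tweet.lower()))
    if "#sarcasm" in tokens or "#not" in tokens:
        return True
    return bool(tokens & POSITIVE) and bool(tokens & NEGATIVE_SITUATIONS)

def polarity(tweet: str) -> str:
    words = set(re.findall(r"[a-z]+", tweet.lower()))
    surface = "positive" if words & POSITIVE else "negative"
    if is_sarcastic(tweet):  # sarcasm flips the surface polarity
        return "negative" if surface == "positive" else "positive"
    return surface

print(polarity("Great, another power blackout. I just love this government!"))
```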

34 citations

Journal ArticleDOI
TL;DR: A novel approach for assigning reviewers to proposals based on the strong mathematical foundation of fuzzy logic is proposed, comprising all the different aspects of expertise modeling and reviewer assignment.
Abstract: The Reviewer Assignment Problem (RAP) is one of the cardinal problems in government funding agencies, where the expertise level of the referee reviewing a proposal needs to be optimised to guarantee the selection of good R&D projects. Although many solutions have been proposed for RAP in the past, none of them deals with the inherent imprecision associated with the problem. For instance, it is not possible to determine the "exact expertise level" of a particular reviewer in a particular domain. In this paper, we propose a novel approach for assigning reviewers to proposals. To calculate the expertise of a reviewer in a particular domain, we create a type-2 fuzzy set by assigning relevant weights to the various factors that affect the expertise of the reviewer in that domain. We also create a fuzzy set of the proposal by selecting three keywords that best represent the proposal. We then use a fuzzy-functions-based equality operator to compute the equality of the type-2 fuzzy set of experts and the fuzzy set of proposal keywords, which is then subjected to a set of relevant constraints to optimize the solution. We consider four important aspects: workload balancing of reviewers, avoiding conflicts of interest, considering individual preferences by incorporating bidding, and mapping multiple keywords of a proposal. As an extension to this approach, we further consider the relative importance of each keyword with respect to the submitted proposal by using representative percentage weights to create the fuzzy sets that represent the keywords. Hence, we propose an integrated solution based on the strong mathematical foundation of fuzzy logic, comprising all the different aspects of expertise modeling and reviewer assignment. An expert system has also been developed for the same.
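As a simplified (type-1) illustration of the matching idea, the sketch below aggregates weighted expertise factors into membership grades and compares the reviewer's fuzzy set with the proposal's keyword set using the standard equality index 1 - max|A(x) - B(x)|. The factor names, weights, and equality operator are assumptions; the paper's actual type-2 construction is richer.

```python
# A minimal type-1 sketch of fuzzy keyword matching for reviewer
# assignment. Expertise is a weighted aggregate of factors (factor
# names and weights are illustrative, not the paper's); set equality
# uses the standard index 1 - max|A(x) - B(x)|.

def expertise(factors: dict, weights: dict) -> float:
    """Weighted membership grade of a reviewer in one keyword/domain."""
    total = sum(weights.values())
    return sum(weights[f] * factors[f] for f in factors) / total

def fuzzy_equality(a: dict, b: dict) -> float:
    """Degree to which fuzzy sets a and b (keyword -> grade) are equal."""
    keys = set(a) | set(b)
    return 1.0 - max(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)

weights = {"publications": 0.5, "experience": 0.3, "self_rating": 0.2}
reviewer = {
    "fuzzy logic":  expertise({"publications": 0.9, "experience": 0.8, "self_rating": 0.7}, weights),
    "data mining":  expertise({"publications": 0.4, "experience": 0.5, "self_rating": 0.6}, weights),
    "optimization": expertise({"publications": 0.2, "experience": 0.3, "self_rating": 0.4}, weights),
}
proposal = {"fuzzy logic": 0.9, "data mining": 0.5, "optimization": 0.3}  # three keywords

print(f"match degree: {fuzzy_equality(reviewer, proposal):.2f}")
```

In a full solution this match degree would feed an assignment optimizer subject to the constraints the abstract lists (workload balancing, conflicts of interest, bidding).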

33 citations

Journal ArticleDOI
TL;DR: The proposed ClusFuDE method uses an improved automatic clustering approach to cluster the historical numerical data, and it provides the lowest MSE and MAPE of all other methods available in the literature.
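The two error measures used to compare ClusFuDE with prior methods are standard; a minimal NumPy sketch (with illustrative data, not the paper's series):

```python
# The two forecasting error measures referenced above, in plain NumPy.
import numpy as np

def mse(actual, forecast):
    """Mean squared error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) ** 2)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actual must be nonzero)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual   = [13055, 13563, 13867, 14696]   # illustrative series, not the paper's data
forecast = [13100, 13500, 13900, 14650]
print(f"MSE = {mse(actual, forecast):.1f}, MAPE = {mape(actual, forecast):.3f}%")
```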

33 citations


Cited by

Book ChapterDOI
01 Jan 2016
TL;DR: Sentiment analysis is the task of automatically determining from text the attitude, emotion, or some other affectual state of the author, which is difficult due to the complexity and subtlety of language use.
Abstract: Sentiment analysis is the task of automatically determining from text the attitude, emotion, or some other affectual state of the author. This chapter summarizes the diverse landscape of tasks and applications associated with sentiment analysis. We outline key challenges stemming from the complexity and subtlety of language use, the prevalence of creative and non-standard language, and the lack of paralinguistic information, such as tone and stress markers. We describe automatic systems and datasets commonly used in sentiment analysis. We summarize several manual and automatic approaches to creating valence- and emotion-association lexicons. We also discuss preliminary approaches for sentiment composition (how smaller units of text combine to express sentiment) and approaches for detecting sentiment in figurative and metaphoric language—these are the areas where we expect to see significant work in the near future.
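As a small illustration of the lexicon-based systems the chapter surveys, the sketch below sums word valences from an association lexicon with crude negation handling (one simple flavor of sentiment composition). The lexicon here is a toy placeholder, not a published resource.

```python
# A minimal lexicon-based valence scorer: sum word valences from an
# association lexicon, flipping the sign of a word preceded by a
# negator. The tiny lexicon is a toy placeholder.
import re

VALENCE = {"good": 0.7, "happy": 0.8, "bad": -0.7, "terrible": -0.9}
NEGATORS = {"not", "never", "no"}

def score(text: str) -> float:
    total, negate = 0.0, False
    for w in re.findall(r"[a-z']+", text.lower()):
        if w in NEGATORS:            # remember a negation for the next content word
            negate = True
            continue
        if w in VALENCE:
            total += -VALENCE[w] if negate else VALENCE[w]
            negate = False
    return total

print("positive" if score("The plot was not bad, quite good actually") > 0 else "negative")
```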

315 citations

Journal Article
TL;DR: The reasons why Facebook chose Hadoop and HBase over other systems such as Apache Cassandra and Voldemort are described, and the application's requirements for consistency, availability, partition tolerance, data model and scalability are discussed.
Abstract: Facebook recently deployed Facebook Messages, its first ever user-facing application built on the Apache Hadoop platform. Apache HBase is a database-like layer built on Hadoop designed to support billions of messages per day. This paper describes the reasons why Facebook chose Hadoop and HBase over other systems such as Apache Cassandra and Voldemort and discusses the application's requirements for consistency, availability, partition tolerance, data model and scalability. We explore the enhancements made to Hadoop to make it a more effective realtime system, the tradeoffs we made while configuring the system, and how this solution has significant advantages over the sharded MySQL database scheme used in other applications at Facebook and many other web-scale companies. We discuss the motivations behind our design choices, the challenges that we face in day-to-day operations, and future capabilities and improvements still under development. We offer these observations on the deployment as a model for other companies that are contemplating a Hadoop-based solution over traditional sharded RDBMS deployments.

279 citations

Journal ArticleDOI
TL;DR: Despite the field remaining nascent, AI-driven health interventions could lead to improved health outcomes in LMICs, but the global health community will need to work quickly to establish guidelines for development, testing, and use, and to develop a user-driven research agenda to facilitate equitable and ethical use.

241 citations