Author

Ritu Chauhan

Bio: Ritu Chauhan is an academic researcher from Amity University. The author has contributed to research in topics including Big data and Knowledge extraction. The author has an h-index of 11 and has co-authored 50 publications receiving 349 citations. Previous affiliations of Ritu Chauhan include Hamdard University and the Amity Institute of Biotechnology.


Papers
Journal ArticleDOI
TL;DR: This paper discusses data analytical tools and data mining techniques for analyzing medical and spatial data, generating efficient clusters on discrete and continuous spatial medical databases.
Abstract: The vast amount of hidden data in huge databases has created tremendous interest in the field of data mining. This paper discusses data analytical tools and data mining techniques for analyzing medical as well as spatial data. Spatial data mining includes the discovery of interesting and useful patterns from spatial databases by grouping objects into clusters. This study focuses on discrete and continuous spatial medical databases to which clustering techniques are applied to form efficient clusters; clusters of arbitrary shape are formed when the data is continuous in nature. Furthermore, the application investigates data mining techniques such as classical clustering and hierarchical clustering on the spatial data set to generate efficient clusters. The experimental results show that certain facts emerge from the clusters that cannot be superficially retrieved from the raw data.
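As an illustration of the clustering the abstract describes, a minimal sketch (assuming scikit-learn, which the paper does not name) applying hierarchical and density-based clustering to synthetic 2-D spatial points:

```python
# Hedged sketch: hierarchical (agglomerative) and density-based clustering on
# synthetic 2-D "spatial" points. The dataset and parameters are illustrative
# assumptions, not the paper's actual medical/spatial data or settings.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import AgglomerativeClustering, DBSCAN

# Synthetic continuous spatial data with arbitrary-shaped groups.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)

# Classical hierarchical clustering (Ward linkage).
hier = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(X)

# Density-based clustering recovers arbitrary-shaped clusters in continuous data.
dens = DBSCAN(eps=0.2, min_samples=5).fit(X)

print("hierarchical cluster sizes:", np.bincount(hier.labels_))
print("DBSCAN clusters found:", len(set(dens.labels_)) - (1 if -1 in dens.labels_ else 0))
```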

54 citations

Journal ArticleDOI
01 Sep 2019
TL;DR: This research focuses on the diagnosis of diabetes, declared by the World Health Organization in 2014 to be one of the fastest growing chronic diseases worldwide, and compares techniques such as Gradient Boosting, Logistic Regression and Naive Bayes in terms of the accuracy they attain.
Abstract: Machine learning, a subset of Artificial Intelligence, plays a promising role in prediction when combined with Data Mining techniques. We live in an era where data generation grows exponentially with time, but if the generated data is not put to work or converted into knowledge, its generation is of no use. In healthcare, likewise, data availability is high, and so is the need to extract information from it for better prognosis, diagnosis, treatment, drug development, and overall care. In this research, we focus on the diagnosis of diabetes, which the World Health Organization declared in 2014 to be one of the fastest growing chronic diseases in the world. We also compare techniques such as Gradient Boosting, Logistic Regression and Naive Bayes for the diagnosis of diabetes, attaining accuracies of 86% for Gradient Boosting, 79% for Logistic Regression and 77% for Naive Bayes.
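A minimal sketch of the comparison the abstract describes, assuming scikit-learn and the public Pima Indians diabetes CSV layout (the file name and label column are assumptions, not the paper's stated setup):

```python
# Hedged sketch: comparing the three classifiers named in the abstract on a
# diabetes dataset. "diabetes.csv" and the "Outcome" label column follow the
# public Pima Indians layout and are assumptions; the paper's exact data,
# preprocessing and hyperparameters are not specified here.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")                     # assumed file
X, y = df.drop(columns=["Outcome"]), df["Outcome"]   # assumed label column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")
```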

48 citations

Journal ArticleDOI
TL;DR: This study explores potential data mining applications in the Casemix context, which are expected to yield effective and efficient health care services by determining hidden relevant patterns that cannot be found by human effort alone.
Abstract: Background This study explores potential data mining applications in the Casemix context, which are expected to yield effective and efficient health care services. The work focuses on determining hidden relevant patterns that cannot be found by human effort alone. The California Drug and Alcohol Treatment Assessment (CALDATA), an administrative database covering alcohol and drug use by patients admitted to and discharged from hospital, serves as a relevant study for discovering knowledge about the recovery process.
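The abstract does not name a specific mining technique; as one hedged illustration of surfacing hidden patterns in categorical administrative records, a small pandas sketch over hypothetical discharge fields:

```python
# Hedged sketch: surfacing simple co-occurrence patterns in categorical
# administrative records with pandas. The columns are hypothetical and do not
# reflect the actual CALDATA schema or the paper's chosen mining method.
import pandas as pd

records = pd.DataFrame({
    "treatment_type": ["residential", "outpatient", "outpatient", "residential", "methadone", "outpatient"],
    "completed":      [True, False, False, True, True, False],
    "readmitted":     [False, True, True, False, False, True],
})

# Readmission rate per (treatment_type, completed) combination: a crude "rule"
# table that a fuller data-mining step could refine into association rules.
pattern = (records
           .groupby(["treatment_type", "completed"])["readmitted"]
           .agg(support="size", readmission_rate="mean")
           .reset_index())
print(pattern)
```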

43 citations

Book ChapterDOI
01 Jan 2020
TL;DR: This paper reviews and examines IoHT applications aimed at better healthcare: real-time tracking of patient medical conditions, prevention of critical situations, and improved comfort in the smart IoT environment.
Abstract: In current times, there is a demand for a system whose associated devices, individuals, times, places, and networks are fully integrated in the IoT, and health tracking devices have become one of the building blocks of the Internet of Things. The aim of an effective IoT healthcare system is to provide real-time tracking of patient medical conditions, to prevent critical situations, and to improve the comfort of the smart IoT environment. The fields of technological expertise and electronics have fused into the Internet of Healthcare Things (IoHT), one of the most notable technological advances, although IoHT's impact on health care is still in its early stages. In order to attain better healthcare, this paper reviews and examines IoHT applications.
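As a hedged illustration of the real-time tracking the chapter surveys (not a design the chapter prescribes), a minimal Python monitoring loop over simulated vital signs:

```python
# Hedged sketch: a minimal real-time patient-monitoring loop of the kind an
# IoHT system might run. The sensor readings, thresholds and alerting are all
# illustrative assumptions.
import random
import time

THRESHOLDS = {"heart_rate": (50, 120), "spo2": (92, 100), "temp_c": (35.0, 38.5)}

def read_sensors():
    """Stand-in for real wearable/bedside sensors (simulated values)."""
    return {
        "heart_rate": random.randint(45, 130),
        "spo2": random.randint(88, 100),
        "temp_c": round(random.uniform(35.5, 39.5), 1),
    }

def check_vitals(vitals):
    """Return the names of any vitals outside their safe range."""
    return [k for k, (lo, hi) in THRESHOLDS.items() if not lo <= vitals[k] <= hi]

for _ in range(5):          # short demo loop instead of continuous monitoring
    vitals = read_sensors()
    alerts = check_vitals(vitals)
    status = f"ALERT: {', '.join(alerts)}" if alerts else "OK"
    print(vitals, "->", status)
    time.sleep(1)
```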

27 citations

Journal ArticleDOI
TL;DR: The aim of the study was to apply a CNN model to an air quality dataset to detect patterns for future prediction modelling, and to identify the highest CO, SO2 and NO2 levels recorded across different cities of India over the five years from 2015 to 2020.
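A minimal sketch of applying a 1-D CNN to windows of pollutant readings, assuming TensorFlow/Keras and synthetic data (the paper's actual architecture and Indian air-quality dataset are not reproduced here):

```python
# Hedged sketch: a 1-D CNN over sliding windows of pollutant readings, as one
# plausible way to apply a CNN to an air-quality time series. The window size,
# architecture and synthetic data are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 24, 3          # 24 time steps of CO, SO2, NO2 (assumed)

# Synthetic stand-in data: predict the next CO reading from the previous window.
series = np.random.rand(1000, N_FEATURES).astype("float32")
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:, 0]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("next-step CO prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```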

27 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
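As a hedged illustration of the mail-filtering example in this abstract, a minimal scikit-learn sketch that learns which messages a user rejects from a tiny invented corpus:

```python
# Hedged sketch: a personalized mail filter learned from examples, per the
# abstract's fourth category. The toy corpus is invented for illustration; a
# real filter would train on the user's actual accepted/rejected mail.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "limited offer buy now cheap meds",      # rejected by the user
    "win a free prize claim today",          # rejected by the user
    "meeting moved to 3pm see agenda",       # kept by the user
    "quarterly report draft attached",       # kept by the user
]
labels = ["reject", "reject", "keep", "keep"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free prize offer, claim now",
                            "agenda for tomorrow's meeting"]))
```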

13,246 citations

Journal ArticleDOI
TL;DR: Examining the postponement of primary curative care among transgender and gender nonconforming people, a marginalized group, by drawing on the National Transgender Discrimination Survey suggests that experience, identity, state of transition, and disclosure of transgender or gender nonconforming status are associated with postponement due to discrimination.

255 citations

Journal ArticleDOI
TL;DR: An innovative neural network approach to better stock market prediction: an embedded layer and an automatic encoder, respectively, vectorize multi-stock historical data, which is then fed to a long short-term memory (LSTM) neural network for forecasting.
Abstract: This paper aims to develop an innovative neural network approach to achieve better stock market predictions. Data were obtained from the live stock market for real-time and off-line analysis, and the resulting visualizations and analytics demonstrate the Internet of Multimedia of Things (IMMT) for stock analysis. When studying the influence of market characteristics on stock prices, traditional neural network algorithms may predict the stock market incorrectly, since random selection of the initial weights easily leads to incorrect predictions. Building on the development of word vectors in deep learning, we introduce the concept of a "stock vector": the input is no longer a single index or a single stock, but multi-stock high-dimensional historical data. We propose a deep long short-term memory neural network (LSTM) with an embedded layer and an LSTM with an automatic encoder to predict the stock market. In these two models, the embedded layer and the automatic encoder, respectively, vectorize the data before the stock is forecast via the LSTM. The experimental results show that the deep LSTM with embedded layer performs better: the accuracies of the two models are 57.2% and 56.9%, respectively, for the Shanghai A-shares composite index, and 52.4% and 52.5%, respectively, for individual stocks. We thereby demonstrate research contributions in IMMT for neural-network-based financial analysis.
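A minimal sketch of the core idea, assuming TensorFlow/Keras and synthetic multi-stock data: windows of "stock vectors" fed to an LSTM that predicts next-day direction (the paper's embedded-layer and autoencoder variants are not reproduced):

```python
# Hedged sketch: an LSTM over windows of multi-stock historical data predicting
# next-day direction of one stock. The synthetic data, window length and layer
# sizes are assumptions, not the paper's actual configuration.
import numpy as np
import tensorflow as tf

N_STOCKS, WINDOW = 10, 30
prices = np.cumsum(np.random.randn(2000, N_STOCKS), axis=0).astype("float32")
returns = np.diff(prices, axis=0)

# Each sample is a (WINDOW, N_STOCKS) block of returns: a crude "stock vector" window.
X = np.stack([returns[i:i + WINDOW] for i in range(len(returns) - WINDOW)])
y = (returns[WINDOW:, 0] > 0).astype("float32")   # direction of the first stock

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_STOCKS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("directional accuracy:", model.evaluate(X, y, verbose=0)[1])
```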

251 citations

Journal ArticleDOI
TL;DR: A Blockchain-based platform is proposed for storing and managing electronic medical records in a Cloud environment; it simplifies the work, safeguards the security and accuracy of the data, and reduces maintenance costs.
Abstract: Healthcare data is an important asset and a rich source of healthcare intelligence. Medical databases, if created properly, are large, complex, heterogeneous and time varying. The main challenge nowadays is to store and process this data efficiently so that it can benefit humans. Heterogeneity of medical data in the healthcare sector is also considered one of the biggest challenges for researchers; sometimes this data is referred to as large-scale data or big data. Blockchain technology and the Cloud environment have each proved their usability separately, and the two technologies can be combined to enhance exciting applications in the healthcare industry. Blockchain is a highly secure and decentralized networking platform of multiple computers called nodes. It is changing the way medical information is stored and shared: it simplifies the work, safeguards the security and accuracy of the data, and reduces maintenance costs. A Blockchain-based platform is proposed that can be used for storing and managing electronic medical records in a Cloud environment.
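As a hedged illustration of the chaining idea behind such a platform (not the paper's actual design), a toy Python ledger that stores references to cloud-hosted medical records:

```python
# Hedged sketch: a toy hash-chained ledger storing references to electronic
# medical records (e.g., pointers to encrypted files in cloud storage). This
# illustrates only the chaining idea, not the paper's platform, consensus
# mechanism or access-control design.
import hashlib
import json
import time

def make_block(record_ref: str, prev_hash: str) -> dict:
    """Create a block whose hash covers the record reference and the previous hash."""
    block = {"timestamp": time.time(), "record_ref": record_ref, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for ref in ["cloud://emr/patient42/visit1.enc", "cloud://emr/patient42/visit2.enc"]:
    chain.append(make_block(ref, chain[-1]["hash"]))

def is_valid(chain: list) -> bool:
    """Verify that each block links to its predecessor's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))

print("chain valid:", is_valid(chain))
```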

197 citations