Author

Ishita Das

Bio: Ishita Das is an academic researcher from the University of Calcutta. The author has contributed to research in the topics of Ecoregion & Loan, has an h-index of 1, and has co-authored 2 publications receiving 6 citations.

Papers
Book Chapter (DOI)
01 Jan 2019
TL;DR: In this paper, the authors propose a data warehouse model which integrates the existing parameters of loan-disbursement decisions and also incorporates newly identified concepts to give priority to customers who don't have any old credit history.
Abstract: Disbursement of loans is an important decision-making process for corporates such as banks and NBFCs (Non-Banking Finance Corporations) that offer loans. The business involves several parameters, and the data associated with these parameters are generated from heterogeneous data sources and belong to different business verticals. Hence, decision-making in loan scenarios is critical, and the outcome involves resolving issues such as whether to grant the loan and, if sanctioned, what the highest amount should be. In this paper we consider the traditional parameters of the loan sanction process, and along with these we identify one special case of the Indian credit-lending scenario, in which people having old loans with a good repayment history get priority. This limits the business opportunities for banks, NBFCs, and other loan disbursement organizations, as potentially good customers having no loan history are treated with less priority. In this research work we propose a data warehouse model which integrates the existing parameters of loan-disbursement decisions and also incorporates the newly identified concepts to give priority to customers who don't have any old credit history.
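The paper does not give the model in code; as a rough illustration of the core idea (not penalizing applicants who lack a credit history), a hypothetical scoring function might look like the sketch below. All field names and weights here are invented for illustration.

```python
# Hypothetical sketch of a loan-priority score that does not penalize
# applicants lacking a credit history; all weights and field names are
# invented, not taken from the paper.

def loan_priority_score(income, existing_debt, repayment_history=None):
    """Return a score in [0, 1]; higher means higher disbursement priority."""
    affordability = max(0.0, 1.0 - existing_debt / max(income, 1.0))
    if repayment_history is None:
        # No old loans: fall back on affordability alone instead of
        # treating the applicant as low priority.
        history_component = affordability
    else:
        # Fraction of past installments repaid on time.
        history_component = sum(repayment_history) / len(repayment_history)
    return 0.5 * affordability + 0.5 * history_component

# A first-time borrower with healthy finances is not automatically ranked
# below a borrower with a mixed repayment record.
new_customer = loan_priority_score(income=60000, existing_debt=10000)
old_customer = loan_priority_score(income=60000, existing_debt=10000,
                                   repayment_history=[1, 1, 0, 0])
```

In a full data warehouse model these inputs would come from fact and dimension tables fed by the heterogeneous sources the abstract mentions; the sketch only shows the prioritization idea.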

5 citations

Journal Article (DOI)
TL;DR: A multichambered textulariid foraminifer from the world's largest mangrove ecosystem, the Sundarbans, India, is described; it has an agglutinated wall structure, a planispirally coiled test, and a single high-arched equatorial aperture.
Abstract: We describe Srinivasania sundarbanensis n. gen. et sp. nov., a multichambered textulariid foraminifer from the world's largest mangrove ecosystem, the Sundarbans, India. The new genus has an agglutinated wall structure, planispirally coiled test, and a single high-arched equatorial aperture located at the base of the final chamber with a narrow, agglutinated lip and with morphological similarity to the genera Gobbettia Dhillon, 1968, and Haplophragmoides Cushman, 1910. Phylogenetic analyses, using partial small subunit rRNA gene, partial large subunit rRNA gene, and concatenated (LSU+SSU) sequence data clearly show the placement of this new taxon among other textulariid foraminifers, distant from all other genera in a strongly supported clade. In the new genus and species, the test is discoidal, measuring 100 to 350 µm in diameter with six to seven chambers in the final whorl. Elemental characterization (SEM-EDS) of the agglutinated test wall reveals a preference for quartz grains (SiO2) to construct its test. It is a common species and is presently known only from the northern marsh environments of Indian Sundarbans.

1 citation


Cited by
Journal Article (DOI)
TL;DR: This paper describes a framework in the form of an extension to the Unified Modelling Language (UML), which focuses on accurate representation of the properties of MD systems based on domain-specific information.
Abstract: Data Warehouse (DW) applications provide historical detail to support decision processes in companies. It is acknowledged that these systems depend on Multidimensional (MD) modelling, which differs from traditional database modelling. MD modelling keeps data in the form of facts and dimensions. Some proposals have been presented to achieve the modelling of these systems, but none of them covers MD modelling completely; no existing approach considers all the major components of MD systems. Some proposals provide their own proprietary visual notations, which force architects to learn a new, specific model. This paper describes a framework in the form of an extension to the Unified Modelling Language (UML). UML is known worldwide for designing a variety of perspectives of software systems; therefore, any method using UML reduces the effort designers spend understanding novel notations. Another notable characteristic of UML is that it can be extended to introduce new elements for different domains. In addition, the proposed UML profile focuses on accurate representation of the properties of MD systems based on domain-specific information. The proposed framework is validated using a specific case study. Moreover, an evaluation and comparative analysis of the proposed framework is also provided to show the efficiency of the proposed work.
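The paper's UML profile itself is graphical, but the multidimensional concepts its stereotypes capture (facts, dimensions, measures, hierarchies) can be sketched in plain code for readers unfamiliar with MD modelling. The classes below are an illustrative analogue, not the paper's actual profile or notation.

```python
# Minimal sketch of the multidimensional concepts a UML profile for DW
# design typically stereotypes; names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Dimension:          # analogue of a <<Dimension>> stereotype
    name: str
    levels: list          # aggregation hierarchy, e.g. Day -> Month -> Year

@dataclass
class Fact:               # analogue of a <<Fact>> stereotype
    name: str
    measures: list        # numeric attributes to be aggregated
    dimensions: list = field(default_factory=list)

# A Sales fact analyzed along a Time hierarchy.
time_dim = Dimension("Time", ["Day", "Month", "Year"])
sales = Fact("Sales", ["amount", "quantity"], [time_dim])
```

A profile adds such stereotypes on top of standard UML class diagrams, which is why designers need not learn an entirely new notation.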

3 citations

Book Chapter (DOI)
01 Jan 2021
TL;DR: In this paper, a data warehouse for bank data relating to consumers, goods, services, etc. is presented; the implementation steps of the Kimball lifecycle are described, followed by the ETL process for bank customers' data.
Abstract: In today’s world, the banking sector plays a key role in the financial development of a country. Generally, the banking sector holds many types of historical data in multiple heterogeneous databases, and posing queries across these heterogeneous databases is a very complex process. Since banks run digitally and generate enormous amounts of data, finding a better way to use that data is a natural step. The increasing competition of market changes has therefore created a demand for bank intelligence to analyze those enormous data sets. In this paper, we construct a data warehouse and present its applicability in the investigation of banking data relating to consumers, goods, services, etc. First, the implementation steps of the Kimball lifecycle are presented, followed by the ETL process for bank customers' data. Afterward, an OLAP cube is developed using Microsoft Visual Studio 2019. Finally, OLAP analysis is performed using Microsoft Power BI. The experimental results demonstrate the uniformity and strength of OLAP-based solutions for expansible bank intelligence.
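The kind of question an OLAP cube answers can be illustrated with a toy roll-up over invented bank transactions; this is only a sketch of the aggregation idea, not the paper's actual Kimball/Visual Studio/Power BI pipeline.

```python
# Illustrative OLAP-style roll-up over toy bank transactions (data invented
# for the example): aggregate a measure over chosen dimension columns.
from collections import defaultdict

transactions = [
    # (branch, product, year, amount)
    ("Kolkata", "Savings", 2020, 500),
    ("Kolkata", "Loan",    2020, 900),
    ("Delhi",   "Savings", 2021, 300),
    ("Kolkata", "Savings", 2021, 200),
]

def roll_up(rows, dims):
    """Sum the amount measure grouped by the chosen dimensions."""
    index = {"branch": 0, "product": 1, "year": 2}
    cube = defaultdict(int)
    for row in rows:
        key = tuple(row[index[d]] for d in dims)
        cube[key] += row[3]
    return dict(cube)

by_branch = roll_up(transactions, ["branch"])
by_branch_year = roll_up(transactions, ["branch", "year"])
```

A cube precomputes and stores such aggregates for many dimension combinations, so queries like "total amount per branch per year" return without rescanning the source rows.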

3 citations

Posted Content
TL;DR: Techniques to exploit the advantages of multicore architectures are studied to address solving graph problems arising from Big Data and the Internet of Things.
Abstract: With the advent of the era of Big Data and the Internet of Things, there has been an exponential increase in the availability of large data sets. These data sets require in-depth analysis that provides intelligence for methodological improvements in academia and industry. The majority of these data sets are represented and available in the form of graphs. Therefore, the problem at hand is to solve graph problems efficiently. Since the data sets are large, the time it takes to analyze the data is significant. Hence, in this paper, we explore techniques that can exploit existing multicore architectures to address the issue. Currently, most Central Processing Units have incorporated a multicore design; in addition, co-processors such as Graphics Processing Units have a large number of cores that can be used to gain significant speedup. Therefore, in this paper, techniques to exploit the advantages of multicore architectures are studied.
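The specific techniques are not reproduced here; as a generic sketch of the idea of splitting graph work across cores, the example below computes vertex degrees from an edge list in parallel chunks. The worker pool and chunking scheme are illustrative, not the paper's method.

```python
# Sketch of parallelizing a simple graph metric (vertex degrees) by
# splitting the edge list into chunks processed by a thread pool; the
# partitioning scheme is illustrative, not taken from the paper.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]

def degree_chunk(chunk):
    """Count endpoint occurrences in one slice of the edge list."""
    c = Counter()
    for u, v in chunk:
        c[u] += 1
        c[v] += 1
    return c

def parallel_degrees(edge_list, workers=4):
    size = max(1, len(edge_list) // workers)
    chunks = [edge_list[i:i + size] for i in range(0, len(edge_list), size)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(degree_chunk, chunks):
            total.update(partial)   # merge per-chunk partial counts
    return total

degrees = parallel_degrees(edges)
```

For CPU-bound work on large graphs one would use processes or native threads rather than Python threads, but the partition-then-merge structure is the same.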

2 citations

Posted Content
TL;DR: This paper proposes techniques to compress the adjacency matrix representation of a graph and shows that large graphs can be efficiently stored in smaller memory, exploit the parallel processing power of compute nodes, and transfer data between resources efficiently.
Abstract: Graphs can be used to represent a wide variety of data belonging to different domains. Graphs can capture the relationships among data in an efficient way, and have been widely used. In recent times, with the advent of Big Data, there has been a need to store and compute on large data sets efficiently. However, considering the size of the data sets in question, finding optimal methods to store and process the data has been a challenge. Therefore, in this paper, we study different graph compression techniques and propose novel algorithms for the task. Specifically, given a graph G = (V, E), where V is the set of vertices and E is the set of edges, and |V| = n, we propose techniques to compress the adjacency matrix representation of the graph. Our algorithms are based on finding patterns within the adjacency matrix data, and replacing the common patterns with specific markers. All the techniques proposed here are lossless compressions of graphs. Based on the experimental results, it is observed that our proposed techniques achieve almost 70% compression compared to the adjacency matrix representation. The results show that large graphs can be efficiently stored in smaller memory, exploit the parallel processing power of compute nodes, and transfer data between resources efficiently.
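The paper's specific pattern markers are not described here; the sketch below uses plain run-length encoding as a minimal lossless instance of the pattern-replacement idea for adjacency-matrix rows, which are mostly zeros in sparse graphs.

```python
# Minimal lossless sketch in the spirit of the described approach: runs of
# repeated bits in an adjacency-matrix row are replaced by (bit, length)
# markers. The paper's actual markers are unspecified; this is plain RLE.

def compress_row(row):
    """Run-length encode one adjacency-matrix row of 0/1 ints."""
    runs, prev, count = [], row[0], 1
    for bit in row[1:]:
        if bit == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = bit, 1
    runs.append((prev, count))
    return runs

def decompress_row(runs):
    """Invert compress_row exactly (lossless)."""
    return [bit for bit, count in runs for _ in range(count)]

row = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]
packed = compress_row(row)             # [(0, 4), (1, 2), (0, 6)]
assert decompress_row(packed) == row   # lossless round trip
```

Sparse rows with long zero runs compress well under any such scheme, which is consistent with the large reductions the abstract reports for real graphs.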

1 citation

Posted Content
TL;DR: This paper proposes techniques to compress graphs by finding specific patterns and replacing them with variable-length identifiers, an idea inspired by Huffman coding, to reduce the space requirements of a graph by compressing its adjacency representation.
Abstract: Graphs have been extensively used to represent data from various domains. In the era of Big Data, information is being generated at a fast pace, and analyzing the same is a challenge. Various methods have been proposed to speed up the analysis of the data and also mining it for information. All of this often involves using a massive array of compute nodes, and transmitting the data over the network. Of course, with the huge quantity of data, this poses a major issue to the task of gathering intelligence from data. Therefore, in order to address such issues with Big Data, using data compression techniques is a viable option. Since graphs represent most real world data, methods to compress graphs have been in the forefront of such endeavors. In this paper we propose techniques to compress graphs by finding specific patterns and replacing those with identifiers that are of variable length, an idea inspired by Huffman Coding. Specifically, given a graph G = (V, E), where V is the set of vertices and E is the set of edges, and |V| = n, we propose methods to reduce the space requirements of the graph by compressing the adjacency representation of the same. The proposed methods show up to 80% reduction is the space required to store the graphs as compared to using the adjacency matrix. The methods can also be applied to other representations as well. The proposed techniques help solve the issues related to computing on the graphs on resources limited compute nodes, as well as reduce the latency for transfer of data over the network in case of distributed computing.