Bio: Amlan Chatterjee is an academic researcher at California State University, Dominguez Hills. The author has contributed to research in the topics of Commercial aviation & Shared memory, has an h-index of 9, and has co-authored 32 publications receiving 194 citations. Previous affiliations of Amlan Chatterjee include the University of Oklahoma and California State University.
01 Sep 2016
TL;DR: This paper surveys wearable computing devices, classifies them by the form of assistance delivered to the wearer, and introduces a framework for futuristic devices that can operate in harsh environments.
Abstract: In the past decade there have been significant advancements in computer technology that have reduced the hardware form factor and increased the energy efficiency of computing. Using network protocols for near-field communication, such as Body Area Networks (BANs), smaller and lighter computing units with attached sensors have transformed into wearable devices. These devices serve a plethora of purposes, including assisting people with disabilities, gathering data, acting as sensors, and enhancing human capabilities, among other things. Depending on usage and infrastructure, the devices can be classified into respective domains. In this paper we survey wearable computing devices and classify them by the form of assistance delivered to the person wearing the device. We also introduce a framework for futuristic devices that can operate in harsh environments.
01 Nov 2018
TL;DR: An automated, computer-assisted framework is proposed to analyze and detect the type of disease from the current condition of the breast, and three models are compared in terms of accuracy.
Abstract: Breast cancer is one of the major threats to human health. Early identification can prevent some premature deaths. Manual methods are often tedious and time-consuming; moreover, manual diagnosis can be prone to error. Automated analysis can reduce the overhead of manual diagnosis and reduce the error. In this work, an automated, computer-assisted framework has been proposed to analyze and detect the type of disease from the current condition of the breast. Histological slides have been used for automated diagnosis. A SIFT-based feature selection and extraction method has been used, followed by a Bag-of-Features method. The extracted features are classified by a metaheuristic-supported Artificial Neural Network. Three models have been compared in terms of accuracy, and the obtained results are reported in a comprehensive manner.
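The Bag-of-Features step mentioned in the abstract can be sketched as follows: each local descriptor (e.g., a SIFT vector) is assigned to its nearest codeword in a learned codebook, and an image is then represented by the histogram of codeword counts fed to the classifier. This is a minimal illustration of the general technique, not the paper's specific implementation; the toy 2-D descriptors and codebook below stand in for real 128-D SIFT descriptors and a clustered vocabulary.

```python
def nearest_codeword(desc, codebook):
    """Index of the codeword closest to `desc` (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((d - c) ** 2 for d, c in zip(desc, codebook[i])))

def bof_histogram(descriptors, codebook):
    """Histogram of codeword assignments -- the image's Bag-of-Features vector."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest_codeword(desc, codebook)] += 1
    return hist

# Hypothetical 2-D codebook and descriptors for illustration only.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
descriptors = [(0.1, 0.2), (0.9, 1.1), (5.2, 4.8), (0.0, 0.1)]
print(bof_histogram(descriptors, codebook))   # -> [2, 1, 1]
```

In a full pipeline, such histograms (one per histological slide) would be the inputs to the neural-network classifier.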
29 Oct 2015
TL;DR: This work proposes a simple algorithm to compress the graph without reading the entire graph into main memory, and develops algorithms to solve edge queries and node queries that operate directly on the compressed graph.
Abstract: Social networks are constantly changing as new members join, existing members leave, and ‘followers’ or ‘friends’ are formed and disappear. The model that captures this constantly changing graph is the streaming graph model. Given a massive graph data stream wherein the number of nodes is in the order of millions and the number of edges is in the tens of millions, we propose a simple algorithm to compress this graph without reading the entire graph into main memory. Our algorithm uses a quadtree data structure that is implicitly constructed to produce the compressed graph output. As a result of this implicit construction, our algorithm allows for node and edge additions/deletions that directly modify the output compressed graph. We further develop algorithms to solve edge queries (is there an edge between two nodes?) and node queries (for a given node, list all its neighbors) that operate directly on the compressed graph. We have performed extensive empirical evaluations of our algorithms using publicly available, large social networks such as LiveJournal, Pokec, Twitter, and others. Our empirical evaluation is based on several parameters, including time to compress, memory required by the compression algorithm, size of the compressed graph, and time and memory required to execute queries. We have also presented extensions to the compression algorithm that we have developed.
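The quadtree idea behind such a compression scheme can be sketched briefly: the n × n adjacency matrix is recursively split into four quadrants, entirely empty quadrants are pruned, and an edge query (u, v) descends only the quadrants containing that cell. This is a minimal illustration of the principle, not the paper's exact encoding (which is implicit and supports streaming updates).

```python
def build(edges, size):
    """Quadtree over a size x size adjacency matrix; edges are (row, col)
    pairs relative to the current quadrant's origin."""
    if not edges:
        return None                      # empty quadrant: pruned entirely
    if size == 1:
        return True                      # leaf: a single present edge
    h = size // 2
    quads = [[], [], [], []]             # NW, NE, SW, SE
    for (u, v) in edges:
        q = (u >= h) * 2 + (v >= h)
        quads[q].append((u % h, v % h))  # re-base to the sub-quadrant
    return [build(q, h) for q in quads]

def has_edge(node, u, v, size):
    """Edge query: is (u, v) present in the compressed graph?"""
    if node is None:
        return False
    if size == 1:
        return True
    h = size // 2
    return has_edge(node[(u >= h) * 2 + (v >= h)], u % h, v % h, h)

tree = build([(0, 1), (2, 3), (3, 0)], 4)   # toy 4-node directed graph
print(has_edge(tree, 2, 3, 4))   # -> True
print(has_edge(tree, 1, 2, 4))   # -> False
```

Because empty quadrants collapse to a single marker, sparse graphs compress well, and an edge query touches only O(log n) quadtree levels.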
21 May 2012
TL;DR: This work proposes novel memory storage and retrieval techniques that enable parallel graph computations to overcome the main impediments to fully exploiting the potential performance of architectures having a massive number of GPUs.
Abstract: The availability and utility of large numbers of Graphical Processing Units (GPUs) have enabled parallel computations using extensive multi-threading. Sequential access to global memory and contention at the size-limited shared memory have been the main impediments to fully exploiting potential performance in architectures having a massive number of GPUs. We propose novel memory storage and retrieval techniques that enable parallel graph computations to overcome the above issues. More specifically, given a graph G = (V, E) and an integer k
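The abstract above is truncated before it details the storage technique, so as general background only: a common flat layout for graph adjacency in GPU global memory is compressed sparse row (CSR), one offsets array plus one packed neighbors array, so each thread locates a node's neighbor range with two contiguous reads. This is a standard layout offered as a hedged sketch, not necessarily the paper's scheme.

```python
def to_csr(n, edges):
    """Build a CSR (offsets, neighbors) pair for a directed graph on n nodes."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    offsets, neighbors = [0], []
    for u in range(n):
        neighbors.extend(sorted(adj[u]))  # pack each node's list contiguously
        offsets.append(len(neighbors))    # offsets[u+1] marks where u's list ends
    return offsets, neighbors

def neighbors_of(offsets, neighbors, u):
    """Neighbor list of u: a single contiguous slice of the flat array."""
    return neighbors[offsets[u]:offsets[u + 1]]

offsets, neighbors = to_csr(4, [(0, 1), (0, 2), (2, 3), (3, 0)])
print(offsets)                               # -> [0, 2, 2, 3, 4]
print(neighbors_of(offsets, neighbors, 0))   # -> [1, 2]
```

Contiguous per-node slices are what make coalesced reads possible when many GPU threads scan adjacent neighbor lists.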
20 May 2013
TL;DR: This paper proposes and evaluates techniques to efficiently utilize the different levels of the GPU memory hierarchy, with a focus on the larger global memory, for triangle counting and combinatorial counting on graphs.
Abstract: Studying properties of graphs is essential to various applications, and the recent growth of online social networks has spurred interest in analyzing their structures using Graphical Processing Units (GPUs). Utilizing the faster shared memory available on GPUs has provided tremendous speed-ups for solving many general-purpose problems. However, when the data required for processing is large and must be stored in global memory instead of shared memory, simultaneous memory accesses by executing threads become the bottleneck for achieving higher throughput. In this paper, for storing large graphs, we propose and evaluate techniques to efficiently utilize the different levels of the GPU memory hierarchy, with a focus on the larger global memory. Given a graph G = (V, E), we provide an algorithm to count the number of triangles in G while storing the adjacency information in global memory. Our computation techniques and data structure for retrieving the adjacency information are derived from processing the breadth-first search tree of the input graph. Techniques to generate combinations of nodes for testing the properties of the graphs they induce are also discussed in detail. Our methods can be extended to solve other combinatorial counting problems on graphs, such as finding the number of connected subgraphs of size k, the number of cliques (resp. independent sets) of size k, and related problems for large data sets. In the context of the triangle counting algorithm, we analyze and utilize primitives such as memory access coalescing and avoiding partition camping, which offset the increased access latency of using the slower but larger global memory. Our experimental results for the GPU implementation show at least a 10x speedup for triangle counting over the CPU counterpart. A further 6-8% increase in performance is obtained by utilizing the above-mentioned primitives, compared to a naive GPU implementation of the program.
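The enumerate-and-test idea behind the combinatorial counting described above can be sketched sequentially: generate k-node combinations and check the subgraph each one induces, e.g. to count k-cliques. This is only an illustration of the counting logic; the paper's contribution lies in generating and testing such combinations in parallel on the GPU with the memory-access optimizations it describes.

```python
from itertools import combinations

def count_k_cliques(n, edges, k):
    """Count node subsets of size k whose induced subgraph is complete."""
    adj = {frozenset(e) for e in edges}   # undirected edge set for O(1) lookup
    return sum(
        all(frozenset(pair) in adj for pair in combinations(group, 2))
        for group in combinations(range(n), k)
    )

# Toy graph: a 4-cycle 0-1-2-3 with one chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(count_k_cliques(4, edges, 3))   # -> 2  (triangles 0-1-2 and 0-2-3)
```

Swapping the `all(... in adj ...)` test for `not any(... in adj ...)` would count independent sets of size k instead, which is why the abstract groups these problems together.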
TL;DR: It is shown that introducing wearable device technology to mining sites can enhance the safety of mining operations, and that wearable devices should therefore see wider use in the mining industry.
Abstract: This paper reviews current trends in wearable device technology, and provides an overview of its prevalent and potential deployments in the mining industry. This review includes the classification of wearable devices with some examples of their utilization in various industrial fields as well as the features of sensors used in wearable devices. Existing applications of wearable device technology to the mining industry are reviewed. In addition, a wearable safety management system for miners and other possible applications are proposed. The findings of this review show that by introducing wearable device technology to mining sites, the safety of mining operations can be enhanced. Therefore, wearable devices should be further used in the mining industry.
TL;DR: A classification of some of the most important challenges in handling big data is presented, and solutions that could address the identified challenges are recommended.
Abstract: Today, an enormous amount of data is being continuously generated in all walks of life by all kinds of devices and systems every day. A significant portion of such data is being captured, stored, aggregated, and analyzed in a systematic way without losing its "4V" (i.e., volume, velocity, variety, and veracity) characteristics. We review the major drivers of big data today, as well as the recent trends and established platforms that offer valuable perspectives on the information stored in large and heterogeneous data sets. Then, we present a classification of some of the most important challenges in handling big data. Based on this classification, we recommend solutions that could address the identified challenges, and in addition we highlight cross-disciplinary research directions that need further investigation in the future.
TL;DR: This paper presents a survey and taxonomy of lossless graph compression that exhaustively analyzes the domain and categorizes existing compression schemes.
Abstract: Various graphs such as web or social networks may contain up to trillions of edges. Compressing such datasets can accelerate graph processing by reducing the amount of I/O accesses and the pressure on the memory subsystem. Yet, selecting a proper compression method is challenging as there exist a plethora of techniques, algorithms, domains, and approaches in compressing graphs. To facilitate this, we present a survey and taxonomy on lossless graph compression that is the first, to the best of our knowledge, to exhaustively analyze this domain. Moreover, our survey does not only categorize existing schemes, but also explains key ideas, discusses formal underpinning in selected works, and describes the space of the existing compression schemes using three dimensions: areas of research (e.g., compressing web graphs), techniques (e.g., gap encoding), and features (e.g., whether or not a given scheme targets dynamic graphs). Our survey can be used as a guide to select the best lossless compression scheme in a given setting.
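Gap encoding, named above as an example technique, can be sketched concretely: a sorted adjacency list is stored as its first neighbor followed by the successive differences, which are small integers and therefore compress well under variable-length codes. This is a generic illustration of the technique, not a scheme from any specific work surveyed.

```python
def gap_encode(sorted_neighbors):
    """Sorted neighbor list -> first value plus successive gaps."""
    if not sorted_neighbors:
        return []
    return [sorted_neighbors[0]] + [
        b - a for a, b in zip(sorted_neighbors, sorted_neighbors[1:])
    ]

def gap_decode(gaps):
    """Invert gap_encode via a running prefix sum."""
    out, total = [], 0
    for g in gaps:
        total += g
        out.append(total)
    return out

adj = [3, 7, 8, 15, 16]
enc = gap_encode(adj)
print(enc)               # -> [3, 4, 1, 7, 1]
print(gap_decode(enc))   # -> [3, 7, 8, 15, 16]
```

In web and social graphs, neighbor IDs cluster locally, so most gaps are tiny; pairing gap encoding with a variable-length integer code is what yields the actual space savings.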
TL;DR: The insights provided by this work can help smartwatch stakeholders mitigate existing drawbacks, formulate better growth strategies, and identify directions for further development, with the aim of increasing customers' continuous usage of smartwatches.
Abstract: This work aims to examine the underlying factors associated with the continuous usage of smartwatches and to propose a relevant theoretical framework. In order to understand and correlate users' exact motivations and expectations before and after using a smartwatch, a dual approach is taken, comprising a detailed literature review and a thematic analysis of data obtained from an ethnographic study involving 42 participants. Nine key determinants of continuous usage of smartwatches are identified, four of which are introduced for the first time (perceived comfort, self-socio motivation, battery-life concern, and perceived accuracy and functional limitations) in the wearable context. Thereafter, a research model is developed based upon the expectation-confirmation model (ECM) and empirically tested using a partial least squares structural equation modeling (PLS-SEM) approach on data obtained from 312 long-term smartwatch users across four Asian countries. Perceived usefulness, hedonic motivation, perceived comfort, and self-socio motivation have a positive impact on continuous usage. However, perceived privacy, battery-life concern, and perceived accuracy and functional limitations have a negative impact on the continuous usage of smartwatches, the last being the greatest predictor. The effect of hedonic motivation on perceived usefulness is non-significant. The model explains 64.8% of the variance in the final dependent construct, i.e., continuous usage. The insights provided by this work can help smartwatch stakeholders mitigate existing drawbacks, formulate better growth strategies, and identify directions for further development, with the aim of increasing customers' continuous usage of smartwatches.
23 May 2016
TL;DR: In this paper, a parallel triangle counting algorithm for CUDA GPUs is proposed, which can find 8.8 billion triangles in a graph with 180 million edges in 12 seconds on the Nvidia GeForce GTX 980 GPU.
Abstract: The clustering coefficient and the transitivity ratio are concepts often used in network analysis, which creates a need for fast practical algorithms for counting triangles in large graphs. Previous research in this area focused on sequential algorithms, MapReduce parallelization, and fast approximations. In this paper we propose a parallel triangle counting algorithm for CUDA GPUs. We describe the implementation details necessary to achieve high performance and present an experimental evaluation of our approach. The algorithm achieves a 15 to 35 times speedup over our CPU implementation, and is capable of finding 8.8 billion triangles in a graph with 180 million edges in 12 seconds on the Nvidia GeForce GTX 980 GPU.
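The core counting logic of edge-iterator triangle counting can be sketched sequentially: for each edge (u, v), the triangles through that edge correspond to the common neighbors of u and v. This sketch illustrates only that logic; the paper's contribution is executing the per-edge work in parallel CUDA threads, and the exact per-thread decomposition here is an assumption, not the paper's implementation.

```python
def count_triangles(n, edges):
    """Count triangles in an undirected simple graph on n nodes."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Common neighbors of u and v close a triangle over edge (u, v).
    # Each triangle is found once per each of its three edges, hence // 3.
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

# Toy graph with two triangles: 0-1-2 and 0-2-3.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
print(count_triangles(4, edges))   # -> 2
```

On a GPU, each edge's intersection becomes an independent unit of work, which is what makes the per-edge formulation attractive for massive thread counts.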