Author
Atanendu Shekhar Mandal
Other affiliations: Academy of Scientific and Innovative Research
Bio: Atanendu Shekhar Mandal is an academic researcher from the Central Electronics Engineering Research Institute. The author has contributed to research in the topics of video tracking and frame rate, has an h-index of 3, and has co-authored 5 publications receiving 23 citations. Previous affiliations of Atanendu Shekhar Mandal include the Academy of Scientific and Innovative Research.
Topics: Video tracking, Frame rate, Smart camera, Change detection, Compass
Papers
TL;DR: A new FPGA resource-optimized hardware architecture for real-time edge detection using the Sobel compass operator uses a single processing element to compute the gradient for all directions while maintaining real-time video frame rates.
Abstract: This paper presents a new FPGA resource-optimized hardware architecture for real-time edge detection using the Sobel compass operator. The architecture uses a single processing element to compute the gradient for all directions. This greatly economizes on FPGA resource usage (more than 40% reduction) while maintaining real-time video frame rates. The measured performance of the architecture is 50 fps for standard PAL-size video and 200 fps for CIF-size video. The use of pipelining further improved the performance (185 fps for PAL-size video and 740 fps for CIF-size video) without a significant increase in FPGA resources.
15 citations
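The compass formulation above can be sketched in software: each of the eight directional kernels is a 45° rotation of the outer ring of the base Sobel kernel, and the edge magnitude at a pixel is the maximum absolute response over all directions. This is an illustrative NumPy sketch of the operator itself, not the paper's hardware architecture; the paper's single-PE design evaluates these directional responses with one shared processing element in hardware.

```python
import numpy as np

def compass_kernels():
    """Generate the 8 Sobel compass kernels by rotating the outer
    ring of the base (east-responding) kernel in 45-degree steps."""
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring_val = [-1, 0, 1, 2, 1, 0, -1, -2]   # base Sobel Gx values on the ring
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for i, (r, c) in enumerate(ring_idx):
            k[r, c] = ring_val[(i - shift) % 8]
        kernels.append(k)
    return kernels

def compass_edges(img):
    """Edge magnitude = max absolute gradient response over all
    8 compass directions (valid region only, no border padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for k in compass_kernels():
        # 3x3 sliding-window response computed as a sum of shifted slices
        resp = sum(k[r, c] * img[r:h - 2 + r, c:w - 2 + c]
                   for r in range(3) for c in range(3))
        out = np.maximum(out, np.abs(resp))
    return out
```

A hardware design can evaluate the eight responses sequentially through one processing element, as the paper's architecture does, trading a small latency increase for a large resource saving.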
TL;DR: The proposed VLSI architecture robustly detects changes in a video stream in real time at 25 frames per second in grayscale CIF-size video; its implementation on a Virtex-II Pro FPGA platform is presented.
Abstract: Change detection is one of several important problems in the design of any automated video surveillance system. Appropriate selection of frames of significant change can minimize the communication and processing overheads for such systems. This research presents the design of a VLSI architecture for change detection in a video sequence and its implementation on a Virtex-II Pro FPGA platform. A clustering-based scheme is used for change detection. The proposed system is designed to meet the real-time requirements of video surveillance applications. It robustly detects changes in a video stream in real time at 25 frames per second (fps) in grayscale CIF-size video.
6 citations
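As an illustration of the general idea, a per-pixel clustering background model can be sketched in software. This is a simplified, hypothetical sketch, not the paper's actual VLSI data path; the class name and the `k`, `thresh`, `alpha`, and `min_hits` parameters are assumptions for illustration.

```python
import numpy as np

class ClusterChangeDetector:
    """Illustrative per-pixel clustering background model. Each pixel
    keeps k intensity centroids with hit counts; a pixel is flagged as
    changed when its intensity matches no well-established cluster."""

    def __init__(self, shape, k=3, thresh=15.0, alpha=0.05, min_hits=3):
        self.thresh, self.alpha, self.min_hits = thresh, alpha, min_hits
        self.centroids = np.zeros(shape + (k,))
        self.hits = np.zeros(shape + (k,))

    def apply(self, frame):
        f = frame.astype(float)
        dist = np.abs(self.centroids - f[..., None])
        best = dist.argmin(-1)[..., None]             # nearest centroid per pixel
        bd = np.take_along_axis(dist, best, -1)[..., 0]
        matched = bd < self.thresh
        bh = np.take_along_axis(self.hits, best, -1)[..., 0]
        # changed = intensity not explained by an established cluster
        changed = ~(matched & (bh >= self.min_hits))

        # matched pixels: pull the centroid toward the new value, count the hit
        bc = np.take_along_axis(self.centroids, best, -1)[..., 0]
        bc = np.where(matched, (1 - self.alpha) * bc + self.alpha * f, bc)
        np.put_along_axis(self.centroids, best, bc[..., None], -1)
        np.put_along_axis(self.hits, best,
                          np.where(matched, bh + 1, bh)[..., None], -1)

        # unmatched pixels: recycle the weakest cluster for the new value
        weak = self.hits.argmin(-1)[..., None]
        wc = np.take_along_axis(self.centroids, weak, -1)[..., 0]
        wh = np.take_along_axis(self.hits, weak, -1)[..., 0]
        np.put_along_axis(self.centroids, weak,
                          np.where(matched, wc, f)[..., None], -1)
        np.put_along_axis(self.hits, weak,
                          np.where(matched, wh, 1.0)[..., None], -1)
        return changed
```

Because each pixel's update touches only a few small per-pixel registers, this style of scheme maps naturally onto a streaming hardware pipeline, which is what makes clustering-based change detection attractive for FPGA implementation.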
29 Jun 2017
TL;DR: The design and implementation of an FPGA-based smart camera system for automated video surveillance that meets the real-time requirements of video surveillance applications while aiming at FPGA resource reduction is presented.
Abstract: Automated video surveillance is a rapidly evolving area and has been gaining importance in the research community in recent years due to its capabilities of performing more efficient and effective surveillance by employing smart cameras. In this article, we present the design and implementation of an FPGA-based smart camera system for automated video surveillance. The complete system is prototyped on Xilinx ML510 FPGA platform and meets the real-time requirements of video surveillance applications while aiming at FPGA resource reduction. The implemented smart camera system is capable of automatically performing real-time motion detection, real-time video history generation, real-time focused region extraction, real-time filtering of frames of interest, and real-time object tracking of identified target with automatic purposive camera movement. The system is designed to work in real-time for live color video streams of standard PAL (720 × 576) resolution, which is the most commonly used video resolution for current generation surveillance systems. The implemented smart camera system is also capable of processing HD resolution video streams in real-time.
4 citations
TL;DR: CovBaseAI is an explainable tool that uses an ensemble of three deep learning models and an expert decision system (EDS) for COVID-Pneumonia diagnosis, trained entirely on pre-COVID-19 datasets.
Abstract: The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence based medical imaging systems. Earlier in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 based on X-ray data. Here, addressing the lacunae in the existing literature and the paucity of initial training data, we describe CovBaseAI, an explainable tool using an ensemble of three DL models and an expert decision system (EDS) for COVID-Pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were primarily validated on two independent datasets. First, 1401 randomly selected CxR from an Indian quarantine center were used to assess effectiveness in excluding radiological COVID-Pneumonia requiring higher care. Second, a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans was used to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data. However, sensitivity ranged from 0.66 to 0.90 depending on whether RT-PCR or radiologist opinion was taken as ground truth. This work provides new insights into the usage of EDS with DL methods and the ability of algorithms to confidently predict COVID-Pneumonia, while reinforcing the established understanding that benchmarking based on RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the path for multi-modal, high-throughput detection of COVID-Pneumonia in screening and referral.
4 citations
TL;DR: CovBaseAI is described, an explainable tool which uses an ensemble of three DL models and an expert decision system (EDS) for Cov-Pneum diagnosis, trained entirely on datasets from the pre-COVID-19 period, and has better performance than publicly available algorithms trained on CO VID-19 data but needs further improvement.
Abstract: The coronavirus disease of 2019 (COVID-19) pandemic exposed a limitation of artificial intelligence (AI) based medical image interpretation systems. Early in the pandemic, when need was greatest, the absence of sufficient training data prevented effective deep learning (DL) solutions. Even now, there is a need for chest X-ray (CxR) screening tools in low- and middle-income countries (LMIC), when RT-PCR is delayed, to exclude COVID-19 pneumonia (Cov-Pneum) requiring transfer to higher care. In the absence of local LMIC data and given the poor portability of CxR DL algorithms, a new approach is needed. Axiomatically, it is faster to repurpose existing data than to generate new datasets. Here, we describe CovBaseAI, an explainable tool which uses an ensemble of three DL models and an expert decision system (EDS) for Cov-Pneum diagnosis, trained entirely on datasets from the pre-COVID-19 period. Portability, performance, and explainability of CovBaseAI were primarily validated on two independent datasets. First, 1401 randomly selected CxR from an Indian quarantine center were used to assess effectiveness in excluding radiologic Cov-Pneum that may require higher care. Second, a curated dataset with 434 RT-PCR positive cases of varying levels of severity and 471 historical scans containing normal studies and non-COVID pathologies was used to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% in the quarantine-center data for Cov-Pneum. However, sensitivity varied from 0.66 to 0.90 depending on whether RT-PCR or radiologist opinion was set as ground truth. This tool with explainability features has better performance than publicly available algorithms trained on COVID-19 data but needs further improvement.
2 citations
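The figures quoted above (87% accuracy, 98% negative predictive value, sensitivity 0.66-0.90) all follow from standard confusion-matrix definitions; a small sketch makes the relationships explicit. The counts in the usage example are hypothetical, not taken from the CovBaseAI study.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard binary screening metrics from confusion-matrix counts.
    NPV (negative predictive value) is the key figure for a rule-out
    tool: of all cases the model calls negative, the fraction that
    truly are negative."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),
        "npv":         tn / (tn + fn),
        "ppv":         tp / (tp + fp),   # precision
    }

# hypothetical counts, NOT from the study
m = screening_metrics(tp=80, fp=30, tn=300, fn=10)
```

Note that sensitivity depends only on how the positive class is defined, which is why the reported value shifts when RT-PCR versus radiologist opinion is taken as ground truth: changing the ground truth reassigns cases between tp and fn.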
Cited by
TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, and others.
Abstract: Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in the model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. Multiple journal databases were thoroughly searched using the PRISMA 2020 guidelines. Studies that did not appear in Q1 journals, which are highly credible, were excluded. In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, rule-based systems, and others. We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
80 citations
16 Mar 2020
TL;DR: The proposed algorithm synchronizes the videos and properly aligns them using motion detection and contour filtering algorithms, and is developed in Java using an open-source library.
Abstract: Monitoring of traffic and unprecedented violence has become very necessary in urban as well as rural areas, so this paper attempts to develop a CCTV surveillance system for unprecedented violence and traffic monitoring. The proposed method synchronizes the videos and properly aligns them using motion detection and contour filtering algorithms. The motion detection step identifies the movement of objects such as vehicles and unprecedented activities, whereas filtering is used to identify each object by its color. The synchronization and alignment process provides details of each object in the scene. The proposed algorithm is developed in Java using an open-source library. The proposed model was validated using a dataset acquired in real time. The results were compared with algorithms created in earlier work; the comparison showed that the proposed model obtained results faster than the existing methods by a factor of 12.3912, for test video with a resolution of 240.01 x 320.01 at 40 frames per second from high-definition cameras. The results were further computed for running the application on embedded CPU and GPU processors.
33 citations
TL;DR: The working prototype of a complete standalone automated video surveillance system, including input camera interface, designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on Xilinx ML510 (Virtex-5 FX130T) FPGA platform.
Abstract: Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
17 citations
TL;DR: This paper presents a comprehensive review and comparative study of various hardware/FPGA implementations of the Sobel edge detector and explores different architectures for the Sobel gradient computation unit in order to show the trade-offs involved in choosing one over another.
Abstract: This paper presents a comprehensive review and comparative study of various hardware/FPGA implementations of the Sobel edge detector and explores different architectures for the Sobel gradient computation unit in order to show the various trade-offs involved in choosing one over another. Different architectures using pipelining and/or parallelism (key methodologies for improving performance/frame rates) are explored for the gradient computation unit in the Sobel edge detector. We demonstrate how the different architectures affect performance (in terms of video frame rate and image size) and area (in terms of FPGA resource usage). By exploiting the trade-offs among video frame rate, image size, and FPGA resources, a designer should be able to find an optimal architecture for a given application.
14 citations
TL;DR: In this article, a multi-scale attention model, MA-DenseNet201, is proposed for the classification of Coronavirus Disease (COVID-19) cases; it outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with a lung localization network.
Abstract: The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by early and correct diagnosis of infection cases. Initial research findings reported that radiological examinations using CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19 disease. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with the Covid-MANet network, an automated end-to-end multi-task attention network that works for 5 classes in three stages of COVID-19 infection screening. The first stage of the Covid-MANet network localizes the attention of the model to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases. To improve interpretation and explainability, three experiments were conducted to explore the most coherent and appropriate classification approach. Moreover, the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage of the Covid-MANet network quantifies the proportion of infection and the severity of COVID-19 in the lungs. The COVID-19 cases are graded into more specific severity levels such as mild, moderate, severe, and critical as per the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network.
The COVID-19 infection segmentation by UNet with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming UNet, UNet++, AttentionUNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble network of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model, externally validated on an unseen dataset, yields 98.17% COVID-19 sensitivity.
12 citations
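The Dice score used to evaluate the segmentation above has a simple definition over binary masks, 2|A ∩ B| / (|A| + |B|); a score of 86.15% means the predicted and ground-truth infection masks share most of their area. A minimal NumPy sketch (the epsilon term, a common smoothing convention, is an assumption, not stated in the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|). eps guards against division
    by zero when both masks are empty (illustrative convention)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0, disjoint masks score near 0, and partial overlap falls in between, which makes Dice a natural single-number summary for comparing segmentation encoders as the abstract does.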