scispace - formally typeset
Author

Ramanathan Muthuganapathy

Bio: Ramanathan Muthuganapathy is an academic researcher at the Indian Institute of Technology Madras. His research focuses on Delaunay triangulation and Voronoi diagrams. He has an h-index of 12 and has co-authored 48 publications receiving 554 citations. His previous affiliations include Technion – Israel Institute of Technology and Purdue University.


Papers
Journal ArticleDOI
TL;DR: A quantitative evaluation is presented that compares the consistency of the two raters and explores the performance of the eleven submitted results alongside three other lesion segmentation algorithms.

259 citations

Journal ArticleDOI
TL;DR: The proposed algorithm showed performance comparable to existing unimodal and multimodal methods for GTCS detection and demonstrates the potential to build an ambulatory convulsive seizure monitoring and detection system.
Abstract: Epileptic seizure detection requires specialized approaches such as video/electroencephalography (EEG) monitoring. However, these approaches are restricted mainly to hospital settings and require video/EEG analysis by experts, which makes them resource- and labor-intensive. In contrast, we aim to develop a wireless remote monitoring system based on a single wrist-worn accelerometer device, which is sensitive to multiple types of convulsive seizures and is capable of detecting seizures of short duration. Simple time-domain features, including a new set of Poincaré plot based features, were extracted from the active movement events recorded using a wrist-worn accelerometer device. The best features were then selected using area under the ROC curve analysis. Kernelized support vector data description was then used to classify nonseizure and seizure events. The proposed algorithm was evaluated on 5576 h of recordings from 79 patients and detected 40 (86.95%) of 46 convulsive seizures (generalized tonic-clonic (GTCS), psychogenic nonepileptic, and complex partial seizures) from 20 patients, with a total of 270 false alarms (1.16/24 h). Furthermore, the algorithm showed performance comparable (sensitivity 95.23% and false alarm rate 0.64/24 h) to existing unimodal and multimodal methods for GTCS detection. These promising results show the potential to build an ambulatory convulsive seizure monitoring and detection system. A wearable accelerometer based seizure detection system would aid in continuous assessment of convulsive seizures in a timely and non-invasive manner.
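Poincaré plot features quantify the sample-to-sample dynamics of a signal by scattering each value against its successor; the standard SD1/SD2 descriptors summarize the spread of that scatter perpendicular to and along the identity line. A minimal sketch of those two descriptors (the paper's full feature set is not reproduced here):

```python
import math

def poincare_features(x):
    """SD1/SD2 descriptors of the Poincare plot of a 1-D signal.

    The Poincare plot scatters each sample x[n] against its successor
    x[n+1]; SD1 captures short-term variability (spread perpendicular
    to the identity line), SD2 longer-term variability (spread along it).
    """
    diffs = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    sums = [x[i + 1] + x[i] for i in range(len(x) - 1)]

    def _var(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / len(v)

    sd1 = math.sqrt(_var(diffs) / 2.0)  # perpendicular spread
    sd2 = math.sqrt(_var(sums) / 2.0)   # spread along the identity line
    return sd1, sd2
```

An alternating signal yields a large SD1 and zero SD2, while a constant signal yields zero for both, which matches the geometric reading of the plot.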

55 citations

Proceedings ArticleDOI
13 Jun 2005
TL;DR: An algorithm for generating the Voronoi cells for a set of rational C1-continuous planar closed curves, which is precise up to machine precision is presented.
Abstract: We present an algorithm for generating the Voronoi cells for a set of rational C1-continuous planar closed curves, which is precise up to machine precision. Initially, bisectors for pairs of curves, (C(t), Ci(r)), are generated symbolically and represented as implicit forms in the tr-parameter space. Then, the bisectors are properly trimmed after being split into monotone pieces. The trimming procedure uses the orientation of the original curves as well as their curvature fields, resulting in a set of trimmed-bisector segments represented as implicit curves in a parameter space. A lower-envelope algorithm is then used in the parameter space of the curve whose Voronoi cell is sought. The lower envelope represents the exact boundary of the Voronoi cell.
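The defining condition behind these bisectors is equidistance: a point lies on the bisector of two curves exactly when its closest distances to the two curves agree. The paper evaluates this symbolically to machine precision; as a crude numeric illustration only (dense point sampling in place of symbolic implicit bisectors), the condition can be tested like this:

```python
import math

def dist_to_curve(p, samples):
    """Distance from point p to a curve given as a dense list of sample points."""
    return min(math.hypot(p[0] - x, p[1] - y) for x, y in samples)

def near_bisector(p, curve_a, curve_b, tol=1e-6):
    """True when p approximately satisfies the bisector's defining
    equidistance condition d(p, A) == d(p, B)."""
    return abs(dist_to_curve(p, curve_a) - dist_to_curve(p, curve_b)) < tol
```

For two parallel vertical lines x = 0 and x = 2, points on the midline x = 1 satisfy the condition, while off-center points do not.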

36 citations

Journal ArticleDOI
TL;DR: This paper presents a fully automatic Delaunay-based sculpting algorithm for approximating the shape of a finite set of points S in R^2 and introduces the notion of directed boundary samples, which characterizes two-dimensional objects based on the alignment of their boundaries in the cavities.
Abstract: In this paper, we present a fully automatic Delaunay-based sculpting algorithm for approximating the shape of a finite set of points S in R^2. The algorithm generates a relaxed Gabriel graph (RGG) that consists of most of the Gabriel edges and a few non-Gabriel edges induced by the Delaunay triangulation. Holes are characterized through a structural pattern called body-arm, formed by the Delaunay triangles in the void regions. The RGG is constructed through an iterative removal of Delaunay triangles subject to circumcenter (of triangle) and topological regularity constraints, in O(n log n) time using O(n) space. We introduce the notion of directed boundary samples, which characterizes two-dimensional objects based on the alignment of their boundaries in the cavities. Theoretically, we justify our algorithm by showing that, under the given sampling conditions, the boundary of the RGG captures the topological properties of objects having directed boundary samples. Unlike many other approaches, our algorithm does not require tuning of any external parameter to approximate the geometric shape of the point set, so human intervention is completely eliminated. Experimental evaluations of the proposed technique are done using the L2 error norm measure, which is the symmetric difference between the boundaries of the reconstructed shape and the original shape. We demonstrate the efficacy of our automatic shape reconstruction technique through several examples and experiments with varying point set densities and distributions.
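The triangle-removal step at the heart of Delaunay sculpting can be illustrated in isolation. The sketch below shows only an alpha-shape-style circumradius test on an already-computed triangulation; the paper's actual RGG construction uses circumcenter and topological regularity constraints, which are not reproduced here:

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle abc: R = (|ab| * |bc| * |ca|) / (4 * area)."""
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ca = math.dist(c, a)
    # Twice the signed area via the cross product, taken absolute.
    area = abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    return float("inf") if area == 0 else ab * bc * ca / (4.0 * area)

def sculpt(triangles, threshold):
    """Keep only triangles whose circumradius is below the threshold.

    Long, thin triangles spanning cavities of the point set have large
    circumradii and are removed, carving the shape out of the triangulation.
    """
    return [t for t in triangles if circumradius(*t) < threshold]
```

A full pipeline would feed this the simplices of a Delaunay triangulation (e.g. from scipy.spatial.Delaunay) and, unlike the parameter-free method of the paper, would still need the threshold chosen by hand.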

28 citations

Journal ArticleDOI
TL;DR: A Delaunay-based, unified reconstruction method that works irrespective of the type of the input point set, handling boundary samples as well as dot patterns, and shown to perform well independently of the sampling model.

26 citations


Cited by
Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
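The "fixed parameters, interdependent rules and empirical decisions" style of configuration can be sketched with a toy rule. The heuristic below (function name, rule, and voxel budget are all hypothetical, not nnU-Net's published heuristics) derives a training patch size from a dataset's median image shape:

```python
def configure_patch_size(median_shape, max_voxels=128 ** 3):
    """Toy rule-based configuration step in the spirit of self-configuring
    segmentation pipelines: start from the dataset's median image shape and
    halve the largest axis until the patch fits a fixed voxel budget.

    The rule and the budget are illustrative assumptions only.
    """
    patch = list(median_shape)
    while patch[0] * patch[1] * patch[2] > max_voxels:
        axis = patch.index(max(patch))       # shrink the dominant axis first
        patch[axis] = max(1, patch[axis] // 2)
    return tuple(patch)
```

Chaining many such deterministic rules over a dataset "fingerprint" (shapes, spacings, intensity statistics) is what removes the manual tuning step described in the abstract.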

2,040 citations

01 Jan 2013

801 citations

Journal ArticleDOI
J. Michael Wilson, Cecil Piya, Yung C. Shin, Fu Zhao, Karthik Ramani
TL;DR: In this paper, the authors demonstrate the successful repair of defective voids in turbine airfoils based on a new semi-automated geometric reconstruction algorithm and a laser direct deposition process.

330 citations

Journal ArticleDOI
TL;DR: Without manual tuning, nnU-Net surpasses most specialised deep learning pipelines in 19 public international competitions and sets a new state of the art in the majority of the 49 tasks, demonstrating a vast hidden potential in the systematic adaptation of deep learning methods to different datasets.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care, currently stimulated by the field of deep learning. While semantic segmentation algorithms enable 3D image analysis and quantification in many applications, the design of respective specialised solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We propose nnU-Net, a deep learning framework that condenses the current domain knowledge and autonomously takes the key decisions required to transfer a basic architecture to different datasets and segmentation tasks. Without manual tuning, nnU-Net surpasses most specialised deep learning pipelines in 19 public international competitions and sets a new state of the art in the majority of the 49 tasks. The results demonstrate a vast hidden potential in the systematic adaptation of deep learning methods to different datasets. We make nnU-Net publicly available as an open-source tool that can effectively be used out-of-the-box, rendering state-of-the-art segmentation accessible to non-experts and catalyzing scientific progress as a framework for automated method design.

314 citations