Author

Edgar Roman Rangel

Bio: Edgar Roman Rangel is an academic researcher. The author has contributed to research in the topics of Computer science & Medicine. The author has an h-index of 1 and has co-authored 1 publication, which has received 7 citations.

Papers
DOI
01 Jan 2013
TL;DR: This thesis addresses content-based image retrieval of Maya hieroglyphs, combining a histogram-of-orientations shape descriptor with clustering and sparse coding for cultural-heritage analysis.
Abstract: Keywords: content-based image retrieval; shape descriptor; histogram of orientations; clustering; sparse coding; image detection; cultural heritage; Maya civilization; hieroglyphs. Thesis, École polytechnique fédérale de Lausanne (EPFL), no. 5616 (2013). Doctoral program in Electrical Engineering, Faculté des sciences et techniques de l'ingénieur, Institut de génie électrique et électronique, Laboratoire de l'IDIAP. Jury: S. Süsstrunk (president), S. Marchand-Maillet, J.-Ph. Thiran, C. Wang. Public defense: 2013-02-27. Reference: doi:10.5075/epfl-thesis-5616. Print copy in library catalog. Record created on 2013-02-20, modified on 2017-05-10.

7 citations
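The thesis text is not reproduced on this page; as a rough, hypothetical illustration of the kind of histogram-of-orientations shape descriptor named in the keywords above, the sketch below bins gradient orientations of a glyph image into a normalized histogram (the function name and parameters are placeholders, not the thesis implementation).

import numpy as np

def orientation_histogram(image, n_bins=8):
    """Toy histogram-of-orientations descriptor (illustrative only).

    image: 2D numpy array holding a grayscale glyph image.
    Returns an L1-normalized, length-n_bins histogram of gradient orientations.
    """
    # Image gradients via finite differences.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)  # fold directions into [0, pi)

    # Each orientation bin is weighted by gradient magnitude, so strong strokes dominate.
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0.0, np.pi), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Usage: descriptors from two glyph images can be compared with, e.g., a chi-square
# or cosine distance to rank retrieval candidates.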

Proceedings ArticleDOI
01 Jul 2022
TL;DR: This work introduces a self-supervised generative representation that discovers gait-motion-related patterns through a video-reconstruction pretext task and an anomaly-detection framework, allowing PD differences with respect to a control population to be discovered.
Abstract: Parkinson's Disease (PD), the second most common neurodegenerative disorder, is associated with voluntary movement disorders caused by progressive dopamine deficiency. Gait motor alterations constitute a main tool to diagnose, characterize and personalize treatments. Nonetheless, such evaluation is biased by expert observation, with false positive diagnoses reported in up to 24% of cases. Computational learning tools have recently emerged as potential alternatives to support diagnosis and to quantify kinematic patterns during locomotion. Nonetheless, such learning schemes require a large amount of balanced and stratified data, which may be unrealistic in clinical scenarios. This work introduces a self-supervised generative representation to discover gait-motion-related patterns, under a video-reconstruction pretext task and an anomaly-detection framework. From the learned scheme, a hidden gait embedding descriptor is recovered that constitutes a digital biomarker, allowing differences between PD patients and a control population to be discovered. The proposed approach was validated with 11 PD patients (H&Y scale between 2.5 and 3.0) and 11 control subjects, was trained with only the control population, and achieved an AUC of 99.4% in the classification task. Clinical Relevance - A digital biomarker that helps in the diagnosis of PD by using videos of a patient's gait to capture important and relevant motion patterns, avoiding the subjectivity involved when an expert makes a diagnosis.
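The paper's architecture is only summarized above; as a minimal sketch of the general idea (a generative model trained to reconstruct control-subject gait videos, with reconstruction error reused as an anomaly score), assuming frames arrive as (N, 1, H, W) tensors with H and W divisible by 4. All names below are illustrative, not the authors' code.

import torch
import torch.nn as nn

class GaitAutoencoder(nn.Module):
    """Minimal convolutional autoencoder over single gait frames (illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, frames):
    """Mean per-frame reconstruction error. Trained on controls only, so higher
    scores flag gait patterns the model has not seen, e.g. PD-related alterations."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        return torch.mean((frames - recon) ** 2, dim=(1, 2, 3))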
Journal ArticleDOI
TL;DR: In this article, a self-supervised generative representation for Parkinson's disease is introduced, built on a video-reconstruction pretext task and an anomaly-detection framework and trained following a one-class weakly supervised scheme to avoid inter-class variance and to capture the multiple relationships that characterize locomotion.
Abstract: Parkinson's Disease is associated with gait movement disorders, such as postural instability, stiffness, and tremors. Today, some approaches implement learned representations to quantify kinematic patterns during locomotion, supporting clinical procedures such as diagnosis and treatment planning. These approaches assume a large amount of stratified and labeled data to optimize discriminative representations. Nonetheless, these considerations may restrict their operability in real scenarios during clinical practice. This work introduces a self-supervised generative representation, under a video-reconstruction pretext task and an anomaly-detection framework. This architecture is trained following a one-class weakly supervised learning scheme to avoid inter-class variance and to capture the multiple relationships that characterize locomotion. For validation, 14 PD patients and 23 control subjects were recorded; the model was trained with the control population only, achieving an AUC of 86.9%, a homoscedasticity level of 80%, and a shapeness level of 70% in the classification task, considering its generalization.
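For reference, the AUC reported in this kind of one-class setting is typically computed by pooling anomaly scores from held-out controls and patients; a short scikit-learn sketch follows, with made-up placeholder scores rather than the study's data.

import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder anomaly scores (NOT the study's data): the model is trained on controls
# only, then scored on held-out controls (label 0) and PD patients (label 1).
control_scores = np.array([0.10, 0.12, 0.08, 0.15])
patient_scores = np.array([0.30, 0.25, 0.40, 0.22])

y_true = np.concatenate([np.zeros_like(control_scores), np.ones_like(patient_scores)])
y_score = np.concatenate([control_scores, patient_scores])

print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")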
Journal ArticleDOI
TL;DR: In this article, a cross-attention deep autoencoder is proposed for the localization and delineation of brain lesions from MRI images; it better supports the discrimination between healthy and lesion regions, which in turn leads to a more favorable prognosis and follow-up of patients.
Abstract: The key component of stroke diagnosis is the localization and delineation of brain lesions, especially from MRI studies. Nonetheless, this manual delineation is time-consuming and biased by expert opinion. The main purpose of this study is to introduce an autoencoder architecture that effectively integrates cross-attention mechanisms, together with hierarchical deep supervision, to delineate lesions under scenarios of marked class imbalance between tissues, challenging lesion geometry, and variable textural appearance. This work introduces a cross-attention deep autoencoder that focuses on the lesion shape through a set of convolutional saliency maps, forcing skip connections to preserve the morphology of the affected tissue. Moreover, a deep supervision training scheme was adapted to induce the learning of hierarchical lesion details. In addition, a specially weighted loss function emphasizes lesion tissue, alleviating the negative impact of class imbalance. The proposed approach was validated on the public ISLES2017 dataset, outperforming state-of-the-art results with a Dice score of 0.36 and a precision of 0.42. Deeply supervised cross-attention autoencoders, trained to pay more attention to lesion tissue, are better at estimating ischemic lesions in MRI studies. The best architectural configuration was achieved by integrating ADC, TTP, and Tmax sequences. Deeply supervised cross-attention autoencoders better support the discrimination between healthy and lesion regions, which in consequence results in a more favorable prognosis and follow-up of patients.
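The abstract does not spell out the weighted loss; one common way to emphasize the lesion class under heavy imbalance is a soft Dice term, optionally combined with a pixel-wise loss that up-weights lesion pixels. The PyTorch sketch below is a generic example of that pattern, not the paper's implementation (the 10x weighting ratio is hypothetical).

import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on the lesion (foreground) channel.

    pred:   (N, H, W) sigmoid probabilities for the lesion class.
    target: (N, H, W) binary lesion masks.
    Because Dice is a ratio of overlap to total mass, small lesions are not
    drowned out by the overwhelming number of healthy-tissue pixels.
    """
    intersection = (pred * target).sum(dim=(1, 2))
    denom = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    dice = (2 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()

# Illustrative combination with a lesion-weighted pixel-wise term:
# loss = soft_dice_loss(p, y) + torch.nn.functional.binary_cross_entropy(
#     p, y, weight=1.0 + 9.0 * y)  # lesion pixels weighted 10x (hypothetical ratio)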

Cited by
Journal Article
TL;DR: In this paper, a strong Boundary-Fragment-Model (BFM) detector is proposed that detects object classes using only the object's boundary, making it possible to detect objects principally defined by their shape rather than their appearance.
Abstract: The objective of this work is the detection of object classes, such as airplanes or horses. Instead of using a model based on salient image fragments, we show that object class detection is also possible using only the object's boundary. To this end, we develop a novel learning technique to extract class-discriminative boundary fragments. In addition to their shape, these "codebook" entries also determine the object's centroid (in the manner of Leibe et al. [19]). Boosting is used to select discriminative combinations of boundary fragments (weak detectors) to form a strong Boundary-Fragment-Model (BFM) detector. The generative aspect of the model is used to determine an approximate segmentation. We demonstrate the following results: (i) the BFM detector is able to represent and detect object classes principally defined by their shape, rather than their appearance; and (ii) in comparison with other published results on several object classes (airplanes, cars-rear, cows) the BFM detector is able to exceed previous performances, and to achieve this with less supervision (such as the number of training images).

34 citations
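The original learning procedure is more involved than the summary above; as a minimal illustration of the boosting step alone — selecting and weighting weak detectors from their precomputed +/-1 outputs — here is a discrete AdaBoost sketch (the data layout and number of rounds are illustrative, not taken from the paper).

import numpy as np

def adaboost_select(weak_preds, labels, n_rounds=10):
    """Discrete AdaBoost over precomputed weak-detector outputs (illustrative).

    weak_preds: (n_weak, n_samples) array of +/-1 predictions, one row per
                candidate weak detector (e.g. one per boundary fragment).
    labels:     (n_samples,) array of +/-1 ground-truth labels.
    Returns the indices of the selected weak detectors and their vote weights.
    """
    n_weak, n_samples = weak_preds.shape
    w = np.full(n_samples, 1.0 / n_samples)  # sample weights, re-normalized each round
    chosen, alphas = [], []
    for _ in range(n_rounds):
        errors = np.array([(w * (weak_preds[j] != labels)).sum() for j in range(n_weak)])
        j_best = int(np.argmin(errors))
        err = max(errors[j_best], 1e-12)
        if err >= 0.5:
            break  # no remaining weak detector beats chance
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(-alpha * labels * weak_preds[j_best])  # up-weight misclassified samples
        w /= w.sum()
        chosen.append(j_best)
        alphas.append(alpha)
    return chosen, alphas

# The strong detector then scores a sample as sign(sum_t alpha_t * weak_preds[j_t]).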

Journal ArticleDOI
TL;DR: From the experiments, the data-driven representation performs overall on par with the hand-designed representation for similar locality sizes on which the descriptor is computed, and a larger number of hidden units, the use of average pooling, and a larger training data size in the SA representation all improved the descriptor performance.
Abstract: Shape representations are critical for visual analysis of cultural heritage materials. This article studies two types of shape representations in a bag-of-words-based pipeline to recognize Maya glyphs. The first is a knowledge-driven Histogram of Orientation Shape Context (HOOSC) representation, and the second is a data-driven representation obtained by applying an unsupervised Sparse Autoencoder (SA). In addition to the glyph data, the generalization ability of the descriptors is investigated on a larger-scale sketch dataset. The contributions of this article are four-fold: (1) the evaluation of the performance of a data-driven auto-encoder approach for shape representation; (2) a comparative study of hand-designed HOOSC and data-driven SA; (3) an experimental protocol to assess the effect of the different parameters of both representations; and (4) bridging humanities and computer vision/machine learning for Maya studies, specifically for visual analysis of glyphs. From our experiments, the data-driven representation performs overall on par with the hand-designed representation for similar locality sizes on which the descriptor is computed. We also observe that a larger number of hidden units, the use of average pooling, and a larger training data size in the SA representation all improved the descriptor performance. Additionally, the characteristics of the data and stroke size play an important role in the learned representation.

15 citations
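The article evaluates a specific Sparse Autoencoder configuration; the sketch below only illustrates the generic idea — a single hidden layer whose activations act as the data-driven descriptor, with an L1 penalty standing in for the sparsity constraint (a simplification of the usual KL-divergence formulation). Dimensions and the penalty weight are hypothetical.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder; hidden activations serve as the descriptor."""
    def __init__(self, input_dim=256, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))  # descriptor for a flattened shape patch
        return self.decoder(h), h

def sparse_loss(x, recon, hidden, sparsity_weight=1e-3):
    # Reconstruction error plus an L1 penalty that pushes activations toward sparsity.
    return nn.functional.mse_loss(recon, x) + sparsity_weight * hidden.abs().mean()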

Journal ArticleDOI
01 Nov 1962 - Americas
TL;DR: Thompson's Catalog represented just what it said: a catalogue of most of the glyphs known up to the time of its publication, as discussed by the authors. It was a critical tool, for in that period few signs could be read with any certainty, and it was easier to refer to a sign as "T110" than to something like "that squished sign with the ends marked off and parallel lines along the middle".
Abstract: The year 1962 saw the publication of a major new book in Maya studies from the University of Oklahoma Press: J. Eric S. Thompson's A Catalog of Maya Hieroglyphs. Thompson's Catalog represented just what it said: it was a catalogue of most of the glyphs known up to the time of its publication. Especially over the couple of decades after its publication it was a critical tool, for in that period few signs could be read with any certainty. With Thompson's Catalog it was easier to refer to a sign as "T110" rather than to something like "that squished sign with the ends marked off and parallel lines along the middle".

7 citations

Journal ArticleDOI
TL;DR: The Histogram of Orientation Shape Context (HOOSC) shape descriptor is introduced to the Digital Humanities community and a graph-based glyph visualization interface is developed to facilitate efficient exploration and analysis of hieroglyphs.
Abstract: Maya hieroglyphic analysis requires epigraphers to spend a significant amount of time browsing existing catalogs to identify individual glyphs. Automatic Maya glyph analysis provides an efficient way to assist scholars' daily work. We introduce the Histogram of Orientation Shape Context (HOOSC) shape descriptor to the Digital Humanities community. We discuss key issues for practitioners and study the effect that certain parameters have on the performance of the descriptor. Different HOOSC parameters are tested in an automatic ancient Maya hieroglyph retrieval system with two different settings, namely, when shape alone is considered and when glyph co-occurrence information is incorporated. Additionally, we developed a graph-based glyph visualization interface to facilitate efficient exploration and analysis of hieroglyphs. Specifically, a force-directed graph prototype is applied to visualize Maya glyphs based on their visual similarity. Each node in the graph represents a glyph image; the width of an edge indicates the visual similarity between the two corresponding glyphs. The HOOSC descriptor is used to represent glyph shape, based on which pairwise glyph similarity scores are computed. To evaluate our tool, we designed evaluation tasks and questionnaires for two separate user groups, namely, a general public user group and an epigrapher scholar group. Evaluation results and feedback from both groups show that our tool provides intuitive access for exploring and discovering Maya hieroglyphic writing, and could potentially facilitate epigraphy work. The positive evaluation results and feedback further hint at the practical value of the HOOSC descriptor.

6 citations
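The visualization interface itself is not shown on this page; the sketch below only illustrates the underlying structure — a graph whose edge weights are pairwise descriptor similarities, which a force-directed layout can then arrange — assuming HOOSC descriptors are already available as vectors (the cosine-similarity threshold is a hypothetical cutoff).

import numpy as np
import networkx as nx

def build_similarity_graph(descriptors, glyph_ids, threshold=0.8):
    """Build a glyph graph whose edge weights are cosine similarities of descriptors.

    descriptors: (n_glyphs, d) array of shape descriptors (e.g. HOOSC vectors).
    glyph_ids:   list of n_glyphs identifiers (node labels).
    Only pairs whose similarity exceeds `threshold` receive an edge.
    """
    norms = np.linalg.norm(descriptors, axis=1, keepdims=True)
    unit = descriptors / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T

    g = nx.Graph()
    g.add_nodes_from(glyph_ids)
    for i in range(len(glyph_ids)):
        for j in range(i + 1, len(glyph_ids)):
            if sim[i, j] >= threshold:
                g.add_edge(glyph_ids[i], glyph_ids[j], weight=float(sim[i, j]))
    return g

# A force-directed layout, e.g. nx.spring_layout(g, weight="weight"), then places
# visually similar glyphs close together, as in the article's interface.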