
M. Marcinkiewicz

Researcher at University of Montpellier

Publications: 34
Citations: 1788

M. Marcinkiewicz is an academic researcher from the University of Montpellier. The author has contributed to research in the topics of quantum wells and terahertz radiation, has an h-index of 12, and has co-authored 34 publications receiving 1112 citations. Previous affiliations of M. Marcinkiewicz include the University of Warsaw and the Centre national de la recherche scientifique.

Papers
Posted Content

Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

Spyridon Bakas, +438 more
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain-tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Journal Article

Data Augmentation for Brain-Tumor Segmentation: A Review.

TL;DR: This paper reviews the current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors, and highlights the most promising research directions for synthesizing high-quality artificial brain-tumor examples that can boost the generalization abilities of deep models.
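The simplest family of augmentations surveyed in such reviews is geometric: flips and right-angle rotations of each image slice, which enlarge the training set without altering tumor labels. A minimal sketch, assuming 2-D MRI slices stored as NumPy arrays (the `augment` helper is hypothetical, not the paper's API):

```python
import numpy as np

def augment(slice2d):
    """Return simple geometric augmentations of a 2-D MRI slice:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    return [
        slice2d,
        np.fliplr(slice2d),   # mirror left-right
        np.flipud(slice2d),   # mirror top-bottom
        np.rot90(slice2d, 1),
        np.rot90(slice2d, 2),
        np.rot90(slice2d, 3),
    ]

img = np.arange(9).reshape(3, 3)
batch = augment(img)
print(len(batch))  # 6
```

Generative approaches (e.g., GAN-synthesized tumor examples) build on the same idea but produce genuinely new samples rather than transformed copies.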
Journal Article

Hyperspectral Band Selection Using Attention-Based Convolutional Neural Networks

TL;DR: A novel algorithm for hyperspectral band selection is introduced that couples attention-based convolutional neural networks, which weight the bands according to their importance, with an anomaly-detection technique that selects the most important bands.
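The core mechanism — scoring each spectral band with learned attention weights and keeping the top-scoring ones — can be sketched in a few lines. This is a toy illustration under assumed names (`band_attention_scores`, `select_bands`) with a fixed affine scorer standing in for the trained attention network:

```python
import numpy as np

def band_attention_scores(cube, w, b):
    """Score each band of a hyperspectral cube (H, W, B):
    global-average-pool per band, apply a per-band affine scorer
    (stand-in for the learned attention module), softmax over bands."""
    pooled = cube.mean(axis=(0, 1))    # (B,) mean response per band
    logits = w * pooled + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

def select_bands(cube, w, b, k):
    """Keep the k bands with the highest attention weight."""
    scores = band_attention_scores(cube, w, b)
    return np.sort(np.argsort(scores)[-k:])

rng = np.random.default_rng(0)
cube = rng.random((4, 4, 8))          # tiny 8-band cube
print(select_bands(cube, np.ones(8), np.zeros(8), 3))
```

In the paper the scoring module is a trained CNN and the cut-off is driven by anomaly detection rather than a fixed `k`; the sketch only shows the weight-then-select structure.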
Journal Article

Towards resource-frugal deep convolutional neural networks for hyperspectral image segmentation

TL;DR: This paper introduces resource-frugal quantized convolutional neural networks whose size is greatly reduced without adversely affecting their classification capability, yielding much smaller and still well-generalizing deep models.