Yuri Boykov
Researcher at University of Waterloo
Publications - 124
Citations - 32510
Yuri Boykov is an academic researcher at the University of Waterloo. His research focuses on topics including image segmentation and graph cuts. He has an h-index of 44 and has co-authored 124 publications receiving 29,588 citations. Previous affiliations of Yuri Boykov include Carnegie Mellon University and the University of Western Ontario.
Papers
Journal ArticleDOI
Constrained-CNN losses for weakly supervised segmentation.
TL;DR: A differentiable penalty is proposed that enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation; the approach has the potential to close the gap between weakly and fully supervised learning in semantic medical image segmentation.
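The constraint-penalty idea summarized above can be sketched as follows. This is an illustrative toy only, not the paper's exact loss: a quadratic penalty on a soft size constraint over foreground probabilities, with all names and bounds hypothetical.

```python
def size_penalty(probs, lower, upper):
    """Quadratic penalty enforcing a soft size constraint.

    `probs` are per-pixel foreground probabilities. The penalty is zero
    when the predicted region size lies in [lower, upper] and grows
    quadratically outside the bounds, so it can be added directly to a
    training loss. (Illustrative sketch: the paper's losses act on
    network outputs inside an automatic-differentiation framework.)
    """
    size = sum(probs)  # expected foreground area under the soft labels
    if size < lower:
        return (lower - size) ** 2
    if size > upper:
        return (size - upper) ** 2
    return 0.0
```

Because the penalty is a smooth function of the network's soft predictions, gradient descent can enforce the constraint without any Lagrangian dual updates or proposal-generation step.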
Book ChapterDOI
Graph Cuts in Vision and Graphics: Theories and Applications
Yuri Boykov,Olga Veksler +1 more
TL;DR: Graph cuts serve as a powerful energy-minimization tool for a fairly wide class of binary and non-binary energies that frequently occur in early vision, and in some cases graph cuts produce globally optimal solutions.
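To make the min-cut connection concrete, here is a self-contained toy: a two-pixel binary labeling solved as an s-t max-flow with Edmonds-Karp. All capacities (unary and pairwise terms) are made up for illustration; real graph-cut implementations use specialized max-flow algorithms.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on an adjacency-matrix graph.

    Returns the max-flow value (= min-cut cost) and the set of nodes
    reachable from s in the final residual graph (one side of the cut).
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Find the bottleneck capacity along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # Nodes still reachable from s form the source side of the min cut
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and capacity[u][v] - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return total, seen

# Toy energy: nodes 0=source, 1=pixel p, 2=pixel q, 3=sink.
# Source/sink edges encode unary terms; the p-q edge is the pairwise term.
cap = [
    [0, 9, 1, 0],  # source: p strongly prefers foreground
    [0, 0, 2, 1],  # p: pairwise 2 to q, weak tie to sink
    [0, 2, 0, 9],  # q: strongly prefers background
    [0, 0, 0, 0],
]
cut_value, source_side = max_flow(cap, 0, 3)
```

Here the optimal labeling puts pixel p with the source (foreground) and pixel q with the sink, cutting the two weak unary edges plus the pairwise edge for a total cost of 4, cheaper than forcing both pixels to either label.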
Proceedings ArticleDOI
What metrics can be approximated by geo-cuts, or global optimization of length/area and flux
Vladimir Kolmogorov,Yuri Boykov +1 more
TL;DR: This work addresses the "shrinking" problem of graph cuts, improves the segmentation of long thin objects, and introduces useful shape constraints into the global optimization framework of graph-cut methods in vision.
Book ChapterDOI
A continuous max-flow approach to potts model
TL;DR: This work proposes a novel convex formulation of the Potts model with a continuous "max-flow" functional, which avoids the extra computational load of enforcing the simplex constraints and naturally allows parallel computation over different labels.
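For context on the simplex constraints mentioned above: multi-label relaxations typically require projecting each pixel's label variables back onto the probability simplex at every iteration, which is the step the continuous max-flow formulation sidesteps. A minimal sketch of that standard Euclidean projection (function name hypothetical):

```python
def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex.

    Uses the classic sort-and-threshold algorithm: find the shift theta
    such that clipping v - theta at zero yields entries summing to 1.
    (Illustrative sketch of the per-pixel step a discrete multi-label
    relaxation would run repeatedly, not code from the paper.)
    """
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:  # ui is still above the candidate threshold
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

Running this projection per pixel per iteration is a nontrivial cost in large images, which is one motivation for formulations whose updates keep the variables feasible by construction.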
Book ChapterDOI
On Regularized Losses for Weakly-supervised CNN Segmentation
Meng Tang,Federico Perazzi,Abdelaziz Djelouah,Ismail Ben Ayed,Christopher Schroers,Yuri Boykov +5 more
TL;DR: The authors integrate standard regularizers directly into the loss function over partial inputs, which simplifies weakly supervised training by avoiding extra MRF/CRF inference steps or layers that explicitly generate full masks, while improving both the quality and efficiency of training.