
Michael Stark

Researcher at University of Adelaide

Publications: 207
Citations: 9737

Michael Stark is an academic researcher from the University of Adelaide. The author has contributed to research in topics such as Medicine and Pregnancy. The author has an h-index of 42 and has co-authored 181 publications receiving 7617 citations. Previous affiliations of Michael Stark include the Max Planck Society and Boston Children's Hospital.

Papers
Proceedings Article

3D Object Representations for Fine-Grained Categorization

TL;DR: This paper lifts two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location, and shows their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction.
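As a loose illustration of why lifting local features to shared 3D parts can help ultra-wide baseline matching, the toy sketch below only allows correspondences between features that carry the same 3D-part label before comparing descriptors. All part names, descriptors, and sizes are invented for illustration; this is not the paper's representation or matching pipeline.

```python
# Toy part-aware matching across two very different views.
# Each feature carries a descriptor and the label of the 3D part it lies on;
# matches are only allowed within the same part.  All data is invented.
import numpy as np

rng = np.random.default_rng(1)

def make_view(n):
    return [{"part": rng.choice(["wheel", "door", "roof"]),
             "desc": rng.normal(size=32)} for _ in range(n)]

view_a, view_b = make_view(20), make_view(25)

matches = []
for i, fa in enumerate(view_a):
    # Candidates in the other view are restricted to the same 3D part.
    cands = [(j, fb) for j, fb in enumerate(view_b) if fb["part"] == fa["part"]]
    if not cands:
        continue
    j, _ = min(cands, key=lambda jb: np.linalg.norm(fa["desc"] - jb[1]["desc"]))
    matches.append((i, j))

print(f"{len(matches)} part-consistent correspondences")
```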
Proceedings Article

Image retrieval using scene graphs

TL;DR: Proposes a conditional random field model that reasons about possible groundings of scene graphs to test images, and shows that the full model improves object localization over baseline methods and outperforms retrieval methods that use only objects or low-level image features.
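As a rough sketch of the grounding idea only (not the paper's CRF, features, or inference), the toy code below scores candidate assignments of scene-graph objects to image boxes by combining per-object appearance scores with pairwise relationship compatibilities, then picks the best grounding by exhaustive search. All object names, box names, and scores are made up.

```python
# Toy sketch of grounding a tiny scene graph to candidate boxes.
# Hypothetical hand-set scores; a real system uses learned CRF potentials.
from itertools import product

# Unary: appearance score for assigning each graph object to each box.
unary = {
    "man":   {"box0": 0.9, "box1": 0.2, "box2": 0.1},
    "horse": {"box0": 0.1, "box1": 0.8, "box2": 0.3},
}

# Pairwise: compatibility of a relationship given the two chosen boxes
# (e.g. derived from relative box geometry in a real system).
def relation_score(rel, box_a, box_b):
    scores = {("riding", "box0", "box1"): 0.9}
    return scores.get((rel, box_a, box_b), 0.1)

graph_relations = [("man", "riding", "horse")]
objects = list(unary)
boxes = list(unary["man"])

def grounding_score(assignment):
    s = sum(unary[o][assignment[o]] for o in objects)
    s += sum(relation_score(r, assignment[a], assignment[b])
             for a, r, b in graph_relations)
    return s

# Exhaustive search over groundings (a CRF would use approximate inference).
best = max((dict(zip(objects, combo))
            for combo in product(boxes, repeat=len(objects))),
           key=grounding_score)
print(best, grounding_score(best))
```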
Proceedings Article

Evaluating knowledge transfer and zero-shot learning in a large-scale setting

TL;DR: An extensive evaluation of three popular approaches to knowledge transfer (KT) on the recently proposed large-scale ImageNet Large Scale Visual Recognition Competition 2010 dataset, finding that none of the KT methods improves over one-vs-all classification, but that they prove valuable for zero-shot learning, especially hierarchical and direct-similarity-based KT.
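A minimal sketch of the direct-similarity flavour of knowledge transfer, under assumed toy data (not the paper's code or features): an unseen class is scored by a similarity-weighted combination of classifiers trained for semantically related known classes, which is what makes zero-shot prediction possible without any training images of the new class.

```python
# Toy zero-shot scoring by direct similarity-based knowledge transfer.
# Classifier scores and similarities are invented for illustration.
import numpy as np

known_classes = ["leopard", "tiger", "zebra"]

# Outputs of pre-trained one-vs-all classifiers for the known classes
# on a single test image (higher = more confident).
known_scores = np.array([1.4, 1.1, -0.5])

# Semantic similarity of the unseen class "cheetah" to each known class,
# e.g. obtained from a linguistic knowledge base or attribute overlap.
similarity = np.array([0.9, 0.6, 0.1])

# Transfer: similarity-weighted average of the known-class classifier outputs.
weights = similarity / similarity.sum()
cheetah_score = float(weights @ known_scores)
print(f"zero-shot score for 'cheetah': {cheetah_score:.3f}")
```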
Proceedings Article

What helps where – and why? Semantic relatedness for knowledge transfer

TL;DR: This work addresses the question of how to automatically decide which information to transfer between classes without the need for any human intervention, tapping into linguistic knowledge bases to provide the semantic link between sources (what) and targets (where) of knowledge transfer.
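To make the "semantic link" concrete, here is a small, assumed sketch that ranks candidate source classes for a target class by cosine similarity of class-name vectors. The vectors below are invented stand-ins for whatever relatedness measure a linguistic knowledge base provides; the ranking step is the only point being illustrated.

```python
# Toy semantic relatedness between class names, used to rank which
# source classes to transfer from.  Vectors are invented placeholders
# for a relatedness measure from a linguistic knowledge base.
import numpy as np

embeddings = {
    "zebra":   np.array([0.9, 0.1, 0.3]),
    "horse":   np.array([0.8, 0.2, 0.4]),
    "donkey":  np.array([0.7, 0.3, 0.4]),
    "toaster": np.array([0.0, 0.9, 0.1]),
}

def relatedness(a, b):
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

target = "zebra"
sources = [c for c in embeddings if c != target]
ranked = sorted(sources, key=lambda c: relatedness(target, c), reverse=True)
for c in ranked:
    print(f"{c:8s} relatedness to {target}: {relatedness(target, c):.2f}")
```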
Proceedings Article

Teaching 3D geometry to deformable part models

TL;DR: This paper extends the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints, and experimentally verifies that adding 3D geometric information comes at minimal performance loss w.r.t.
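As a rough, assumed illustration of the viewpoint idea only (nowhere near a full deformable part model), the sketch below treats each viewpoint bin as a separate linear template and reports both a detection score and the winning viewpoint as the maximum over templates. The viewpoint bins, feature size, and weights are random placeholders, not a trained model.

```python
# Toy mixture-of-viewpoints scoring: one linear template per viewpoint bin;
# the detection score and viewpoint estimate come from the best template.
import numpy as np

rng = np.random.default_rng(0)
viewpoints = ["front", "left", "rear", "right"]
templates = {v: rng.normal(size=128) for v in viewpoints}  # per-view weights

def score_window(feature):
    scores = {v: float(w @ feature) for v, w in templates.items()}
    best_view = max(scores, key=scores.get)
    return scores[best_view], best_view

feature = rng.normal(size=128)  # HOG-like feature of one candidate window
score, view = score_window(feature)
print(f"detection score {score:.2f}, estimated viewpoint: {view}")
```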