Jamie Shotton
Researcher at Microsoft
Publications: 180
Citations: 37,983
Jamie Shotton is an academic researcher at Microsoft. He has contributed to research on topics including pose estimation and random forests. He has an h-index of 66 and has co-authored 178 publications receiving 33,842 citations. His previous affiliations include the University of Cambridge and Toshiba.
Papers
Book Chapter
High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images
TL;DR: In this article, the authors propose an algorithm that matches a non-photorealistic, synthetically generated image to a latent vector of a pretrained StyleGAN2 model which, in turn, maps the vector to a photorealistic image of a person with the same pose, expression, hair, and lighting.
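The core idea above, fitting a latent vector so a fixed generator's output matches a target image, can be sketched with gradient descent. This is a minimal illustration only: the stand-in linear "generator" below is an assumption for self-containment, whereas the paper uses a pretrained StyleGAN2 generator and a perceptual loss.

```python
def generate(z, W):
    """Stand-in generator G(z): a fixed linear map (NOT StyleGAN2)."""
    return [sum(w * zi for w, zi in zip(row, z)) for row in W]

def match_latent(target, W, steps=500, lr=0.05):
    """Gradient-descend a latent z so that generate(z, W) approaches target,
    minimizing the squared error 0.5 * ||G(z) - target||^2."""
    z = [0.0] * len(W[0])
    for _ in range(steps):
        out = generate(z, W)
        err = [o - t for o, t in zip(out, target)]
        # gradient of the squared error w.r.t. z is W^T err for a linear G
        grad = [sum(W[r][j] * err[r] for r in range(len(W)))
                for j in range(len(z))]
        z = [zi - lr * g for zi, g in zip(z, grad)]
    return z

# recover the latent that reproduces a "target image" [1.0, 4.0]
z = match_latent([1.0, 4.0], [[1.0, 0.0], [0.0, 2.0]])
```

With a real StyleGAN2 generator the same loop shape applies, but the gradient is obtained by backpropagation through the network rather than in closed form.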
Patent
Hand tracking for user interface operation at-a-distance
Jamie Shotton,Andrew Fitzgibbon,Jonathan Taylor,Richard M. Banks,David Sweeney,Robert Corish,Abigail Sellen,Eduardo Alberto Soto,Arran Haig Topalian,Benjamin Luff +9 more
TL;DR: This patent describes a user interface comprising a display controller configured to render graphical data on a display, and a memory configured to receive captured sensor data depicting at least one hand of a user operating the user interface without touching it.
Patent
Multi-centroid compression for probability distribution cloud
TL;DR: This patent proposes a multi-centroid compression for probability distribution clouds, where a centroid is generated for each cluster of samples, which typically yields a plurality of centroids for each distinguished object.
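The summary above, replacing each cluster of samples with a single centroid, can be illustrated with plain k-means. This is a hedged sketch of the general technique, not the patent's actual method; the deterministic initialization and the toy 2-D "cloud" below are assumptions for reproducibility.

```python
from statistics import mean

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; returns k centroids.

    Initialization picks k points spread evenly through the input
    (deterministic, purely for this sketch's reproducibility)."""
    step = max(1, len(points) // k)
    centroids = points[::step][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each sample to its nearest centroid
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        # each cluster of samples is compressed to a single centroid
        centroids = [(mean(q[0] for q in c), mean(q[1] for q in c)) if c
                     else centroids[i] for i, c in enumerate(clusters)]
    return centroids

# a sample cloud with two modes is compressed to two centroids
cloud = ([((0.1 * i) % 1, 0.0) for i in range(50)]
         + [(5 + (0.1 * i) % 1, 5.0) for i in range(50)])
centroids = kmeans(cloud, 2)
```

The 100-point cloud is reduced to two representative centroids, one per mode, which is the compression the summary describes.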
Patent
Using high-level attributes to guide image processing
TL;DR: This patent describes using high-level attributes to guide image processing: one or more random decision forests are trained on images for which global variable values, such as player height, are known in addition to the ground-truth data appropriate for the image-processing task concerned.
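The idea of letting a known global attribute guide per-pixel decisions can be sketched very simply: append the attribute to each local feature vector so tree splits can condition on it. Everything below is a hypothetical illustration; `player_height`, the thresholds, and the single decision node are assumptions, not the patent's trained forests.

```python
def make_features(pixel_depths, player_height):
    """Append the known global attribute (hypothetical player_height)
    to each per-pixel feature vector."""
    return [[d, player_height] for d in pixel_depths]

def stump_predict(feature, depth_threshold, height_threshold):
    """One toy decision node: route on the global attribute first, then on
    the local depth feature, mimicking how a forest trained with global
    variables can pick different local thresholds per attribute value."""
    depth, height = feature
    # illustrative rule: taller players loosen the depth threshold
    t = depth_threshold * (1.2 if height > height_threshold else 1.0)
    return "body" if depth < t else "background"
```

A trained forest learns such attribute-conditioned splits automatically; this stump just makes the mechanism visible.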
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
Bjoern H. Menze,Andras Jakab,Stefan Bauer,Jayashree Kalpathy-Cramer,Keyvan Farahani,Justin Kirby,Yuliya Burren,N Porz,Johannes Slotboom,Roland Wiest,Levente Lanczi,Elizabeth R. Gerstner,Marc-André Weber,Tal Arbel,Brian B. Avants,Nicholas Ayache,Patricia Buendia,D. Louis Collins,Nicolas Cordier,Jason J. Corso,Antonio Criminisi,Tilak Das,Hervé Delingette,Çağatay Demiralp,Christopher R. Durst,Michel Dojat,Senan Doyle,Joana Festa,Florence Forbes,Ezequiel Geremia,Ben Glocker,Polina Golland,Xiaotao Guo,Andac Hamamci,Khan M. Iftekharuddin,Raj Jena,Nigel M. John,Ender Konukoglu,Danial Lashkari,José Mariz,Raphael Meier,Sérgio Pereira,Doina Precup,Stephen J. Price,Tammy Riklin Raviv,Syed M. S. Reza,Michael Ryan,Duygu Sarikaya,Lawrence H. Schwartz,Hoo-Chang Shin,Jamie Shotton,Carlos A. Silva,Nuno Sousa,Nagesh K. Subbanna,Gábor Székely,Thomas J. Taylor,Owen M. Thomas,Nicholas J. Tustison,Gozde Unal,Flor Vasseur,Max Wintermark,Dong Hye Ye,Liang Zhao,Binsheng Zhao,Darko Zikic,Marcel Prastawa,Mauricio Reyes,Koen Van Leemput +67 more
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.