scispace - formally typeset

Philip Andrew Mansfield

Researcher at Google

Publications - 41
Citations - 1396

Philip Andrew Mansfield is an academic researcher from Google. The author has contributed to research topics including Structured document and Set (abstract data type). The author has an h-index of 16, has co-authored 37 publications receiving 1055 citations. Previous affiliations of Philip Andrew Mansfield include Apple Inc. and Mansfield University of Pennsylvania.

Papers
Proceedings ArticleDOI

Speaker Diarization with LSTM

TL;DR: In this paper, the authors combine LSTM-based d-vector audio embeddings with recent work in nonparametric clustering to obtain a state-of-the-art speaker diarization system.
Patent

Method, system, and graphical user interface for text entry with partial word display

TL;DR: A computer-implemented method for text entry includes receiving entered text from a user, selecting a set of candidate sequences for completing or continuing the entered text, and presenting the candidate sequences to the user, wherein the candidate sequences include partial words.
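The candidate-selection idea can be sketched as follows. This is an illustrative stand-in, not the patented method: the function name, the frequency-ranked vocabulary, and the two-character partial-word cutoff are all assumptions for demonstration.

```python
def candidate_sequences(prefix, vocabulary, max_candidates=3):
    """Return up to max_candidates completions for the entered prefix.

    vocabulary: dict mapping word -> frequency (hypothetical scoring).
    Candidates may be partial words rather than full completions.
    """
    # Keep vocabulary words that extend the prefix, ranked by frequency.
    matches = [(w, f) for w, f in vocabulary.items()
               if w.startswith(prefix) and w != prefix]
    matches.sort(key=lambda wf: -wf[1])
    candidates = []
    for word, _freq in matches[:max_candidates]:
        # Display only the next couple of characters as a partial word when
        # the completion is longer, instead of committing to the whole word.
        candidates.append(word[: len(prefix) + 2])
    return candidates

vocab = {"there": 50, "their": 40, "the": 100, "theory": 10}
print(candidate_sequences("th", vocab))  # → ['the', 'ther', 'thei']
```

Presenting partial words lets the user narrow the candidate set one keystroke-sized step at a time rather than choosing among full words immediately.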
Posted Content

Speaker Diarization with LSTM

TL;DR: This work combines LSTM-based d-vector audio embeddings with recent work in nonparametric clustering to obtain a state-of-the-art speaker diarization system that achieves a 12.0% diarization error rate on NIST SRE 2000 CALLHOME, while the model is trained with out-of-domain data from voice search logs.
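The embed-then-cluster pipeline can be sketched as below, assuming pre-computed d-vectors (the paper derives these from an LSTM; here they are toy stand-in vectors). The greedy cosine-similarity threshold clustering is a simplified stand-in for the paper's nonparametric clustering step, and the threshold value is an assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def diarize(embeddings, threshold=0.8):
    """Assign each segment embedding a speaker label by greedy clustering."""
    centroids, labels = [], []
    for emb in embeddings:
        sims = [cosine(emb, c) for c in centroids]
        if sims and max(sims) >= threshold:
            labels.append(sims.index(max(sims)))  # reuse existing speaker
        else:
            centroids.append(emb)                 # open a new speaker cluster
            labels.append(len(centroids) - 1)
    return labels

# Toy 2-D "d-vectors" for five speech segments from two speakers.
segments = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.95, 0.15), (0.0, 0.9)]
print(diarize(segments))  # → [0, 0, 1, 0, 1]
```

Because the number of speakers is not known in advance, clustering here is nonparametric: new clusters appear whenever a segment is insufficiently similar to every existing one.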
Journal ArticleDOI

Large Language Models Encode Clinical Knowledge

TL;DR: The authors proposed a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible harm and bias, and showed that comprehension, knowledge recall and reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine.
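A multi-axis human evaluation of this kind might be recorded and aggregated as in the sketch below. The axis names follow the summary above, but the scoring scale and aggregation are assumptions; the paper's actual rubric may differ.

```python
# Evaluation axes named in the summary; 1-5 scores are a hypothetical scale.
AXES = ("factuality", "comprehension", "reasoning", "possible_harm", "bias")

def mean_per_axis(ratings):
    """ratings: list of dicts mapping axis -> score; returns per-axis means."""
    return {axis: sum(r[axis] for r in ratings) / len(ratings) for axis in AXES}

ratings = [
    {"factuality": 4, "comprehension": 5, "reasoning": 4, "possible_harm": 1, "bias": 1},
    {"factuality": 5, "comprehension": 4, "reasoning": 5, "possible_harm": 1, "bias": 2},
]
print(mean_per_axis(ratings)["factuality"])  # → 4.5
```

Keeping the axes separate, rather than collapsing to one score, is what lets such a framework show that some qualities (e.g. knowledge recall) improve with scale while others (e.g. harm) need separate attention.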
Posted Content

Contrastive Learning for Label-Efficient Semantic Segmentation

TL;DR: A simple and effective contrastive learning-based training strategy in which the network is first pretrained using a pixel-wise, label-based contrastive loss and then fine-tuned using the cross-entropy loss; this increases intra-class compactness and inter-class separability, resulting in a better pixel classifier.
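The pixel-wise, label-based contrastive loss can be sketched as an InfoNCE-style supervised loss in which positives are pixels sharing the anchor's class label. This is a minimal illustration with toy per-pixel embeddings; the paper's exact loss formulation, sampling, and temperature may differ.

```python
import math

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Average InfoNCE loss where positives share the anchor's class label."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    total, count = 0.0, 0
    for i, (ei, li) in enumerate(zip(embeddings, labels)):
        pos = [j for j in range(len(labels)) if j != i and labels[j] == li]
        if not pos:
            continue  # anchors without positives contribute nothing
        others = [j for j in range(len(labels)) if j != i]
        denom = sum(math.exp(cos(ei, embeddings[j]) / temperature) for j in others)
        # Pull same-class pixels together, push different classes apart.
        for j in pos:
            total += -math.log(math.exp(cos(ei, embeddings[j]) / temperature) / denom)
            count += 1
    return total / count

# Toy per-pixel embeddings: two pixels of class 0, two of class 1.
embs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.95)]
labs = [0, 0, 1, 1]
print(pixel_contrastive_loss(embs, labs))
```

When same-class embeddings are already close (as in this toy data), the loss is low; scrambling the labels raises it, which is the pressure that produces the intra-class compactness and inter-class separability described above.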