
Derek Howard

Researcher at University of Toronto

Publications -  4
Citations -  112

Derek Howard is an academic researcher from the University of Toronto. His research focuses on bipolar disorder and autism spectrum disorder. He has an h-index of 2 and has co-authored 4 publications receiving 46 citations.

Papers
Journal ArticleDOI

Virtual Histology of Cortical Thickness and Shared Neurobiology in 6 Psychiatric Disorders

Yash Patel, +303 more
01 Jan 2021
TL;DR: In this article, the authors used T1-weighted magnetic resonance images to determine neurobiologic correlates of group differences in cortical thickness between cases and controls in 6 disorders: attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), bipolar disorder (BD), major depressive disorder (MDD), obsessive-compulsive disorder (OCD), and schizophrenia.
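The virtual-histology approach summarized above can be illustrated with a toy sketch: a case-control cortical thickness difference map is correlated, region by region, with a cell-type expression profile. This is a minimal illustration under assumed inputs (random stand-in data, a hypothetical 34-region parcellation), not the authors' pipeline.

```python
import numpy as np

# Illustrative sketch of the virtual-histology idea: correlate a
# case-control cortical thickness difference map with regional gene
# expression for a cell-type marker set. All inputs here are simulated.
rng = np.random.default_rng(0)
n_regions = 34  # hypothetical number of cortical regions in a parcellation

# Stand-in data: thickness difference per region (cases minus controls)
# and mean marker-gene expression per region, loosely coupled by design.
thickness_diff = rng.normal(size=n_regions)
cell_type_expression = 0.5 * thickness_diff + rng.normal(size=n_regions)

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

r = pearson_r(thickness_diff, cell_type_expression)
print(f"thickness-expression correlation: r = {r:.2f}")
```

In the actual studies, such correlations are computed against expression profiles of multiple cell types to identify which neurobiologic features track the shared case-control differences across disorders.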
Journal ArticleDOI

Virtual Histology of Cortical Thickness Reveals Shared Neurobiology Across Six Psychiatric Disorders

TL;DR: In this paper, the authors determined neurobiologic correlates of group differences in cortical thickness between cases and controls in 6 disorders: attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), bipolar disorder (BD), major depressive disorder (MDD), obsessive-compulsive disorder (OCD), and schizophrenia.
Posted Content

Application of Transfer Learning for Automatic Triage of Social Media Posts

TL;DR: It is shown that transfer learning is an effective strategy for predicting risk with relatively little labeled data, and that fine-tuning pretrained language models provides further gains when large amounts of unlabeled text are available.
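The core transfer-learning setup described in the TL;DR can be sketched in miniature: posts are mapped to fixed "pretrained" representations, and only a small classifier head is trained on the limited labeled data. This is a hedged illustration with simulated embeddings and hypothetical risk labels; the paper itself uses pretrained language models, not the random features shown here.

```python
import numpy as np

# Minimal sketch of the transfer-learning idea: freeze a pretrained
# encoder (simulated here by fixed random embeddings) and train only a
# small logistic-regression head on a few labeled examples.
rng = np.random.default_rng(1)
dim, n_labeled = 16, 40

# Stand-in for frozen pretrained embeddings of labeled posts, with
# hypothetical binary risk labels generated from a hidden direction.
true_w = rng.normal(size=dim)
X = rng.normal(size=(n_labeled, dim))
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train the logistic-regression head with plain gradient descent.
w = np.zeros(dim)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / n_labeled
    w -= 0.5 * grad

accuracy = float(((sigmoid(X @ w) > 0.5) == y).mean())
print(f"training accuracy with a small labeled set: {accuracy:.2f}")
```

Because the heavy lifting is done by the frozen representation, the trainable head has few parameters, which is why relatively little labeled data suffices; fine-tuning the encoder itself on unlabeled in-domain text is the further step the paper reports gains from.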