Satrajit S. Ghosh

Researcher at Harvard University

Publications: 189
Citations: 14,959

Satrajit S. Ghosh is an academic researcher from Harvard University. He has contributed to research in the areas of computer science and neuroimaging, has an h-index of 44, and has co-authored 175 publications receiving 10,291 citations. Previous affiliations of Satrajit S. Ghosh include Boston University and the Massachusetts Institute of Technology.

Papers
Journal Article

Situating the default-mode network along a principal gradient of macroscale cortical organization

TL;DR: An overarching organization of large-scale connectivity is described that situates the default-mode network (DMN) at the opposite end of a spectrum from primary sensory and motor regions, suggesting that the role of the DMN in cognition might arise from its position at one extreme of a hierarchy, allowing it to process transmodal information that is unrelated to immediate sensory input.
Journal Article

Nipype: A Flexible, Lightweight and Extensible Neuroimaging Data Processing Framework in Python

TL;DR: Nipype addresses the heterogeneity of neuroimaging analysis software by providing Interfaces to existing packages with uniform usage semantics and by facilitating interaction between these packages using Workflows. It provides an environment that encourages interactive exploration of algorithms, eases the design of workflows within and between packages, and reduces the learning curve.
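
A minimal sketch of the Interface/Workflow pattern described above, not taken from the paper itself: it assumes Nipype and FSL are installed, and the input file name is a placeholder.

from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, IsotropicSmooth

# An Interface wraps an external tool (here FSL's BET skull stripper)
# behind uniform Python semantics; Node makes it usable in a graph.
skullstrip = Node(BET(in_file='sub-01_T1w.nii.gz', mask=True), name='skullstrip')
smooth = Node(IsotropicSmooth(fwhm=4), name='smooth')

# A Workflow routes the output of one node into the input of another
# and executes the resulting processing graph.
wf = Workflow(name='preproc', base_dir='work')
wf.connect(skullstrip, 'out_file', smooth, 'in_file')
wf.run()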
Journal Article

The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

TL;DR: The Brain Imaging Data Structure (BIDS) is introduced: a standard for organizing and describing MRI datasets that uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
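
A sketch of how a BIDS-organized dataset can be queried with the pybids library; pybids is a companion tool rather than part of this paper, and the dataset path and file names below are hypothetical.

# Hypothetical BIDS layout being queried:
#   my_dataset/
#     dataset_description.json
#     sub-01/anat/sub-01_T1w.nii.gz
#     sub-01/func/sub-01_task-rest_bold.nii.gz
from bids import BIDSLayout

layout = BIDSLayout('my_dataset')        # index the dataset
print(layout.get_subjects())             # e.g. ['01']
print(layout.get(subject='01', suffix='T1w',
                 extension='.nii.gz', return_type='filename'))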
Journal Article

Neural modeling and imaging of the cortical interactions underlying syllable production.

TL;DR: The proposed model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas; its ability to account for compensation for lip and jaw perturbations during speech is verified.