
Showing papers by "Paul Sajda published in 2022"


Journal ArticleDOI
TL;DR: In this article, a deep learning approach was used to detect non-AMD vs. non-neovascular (NNV) AMD vs. neovascular (NV) AMD from a combination of OCTA, OCT structure, 2D b-scan flow images, and high-definition (HD) 5-line b-scan cubes; the DL approach also detects ocular biomarkers indicative of AMD risk.
Abstract: Within the next 1.5 decades, 1 in 7 U.S. adults is anticipated to suffer from age-related macular degeneration (AMD), a degenerative retinal disease which leads to blindness if untreated. Optical coherence tomography angiography (OCTA) has become a prime technique for AMD diagnosis, specifically for late-stage neovascular (NV) AMD. Such technologies generate massive amounts of data, challenging to parse by experts alone, transforming artificial intelligence into a valuable partner. We describe a deep learning (DL) approach which achieves multi-class detection of non-AMD vs. non-neovascular (NNV) AMD vs. NV AMD from a combination of OCTA, OCT structure, 2D b-scan flow images, and high definition (HD) 5-line b-scan cubes; DL also detects ocular biomarkers indicative of AMD risk. Multimodal data were used as input to 2D-3D Convolutional Neural Networks (CNNs). Both for CNNs and experts, choroidal neovascularization and geographic atrophy were found to be important biomarkers for AMD. CNNs predict biomarkers with accuracy up to 90.2% (positive-predictive-value up to 75.8%). Just as experts rely on multimodal data to diagnose AMD, CNNs also performed best when trained on multiple inputs combined. Detection of AMD and its biomarkers from OCTA data via CNNs has tremendous potential to expedite screening of early and late-stage AMD patients.
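The multimodal fusion described (several modality-specific inputs feeding a shared multi-class head) can be sketched in miniature; the feature vectors, dimensions, and untrained linear head below are illustrative stand-ins, not the paper's 2D-3D CNN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors (stand-ins for CNN embeddings
# of the OCTA, OCT structure, and b-scan flow inputs).
octa_feat = rng.standard_normal(16)
oct_feat = rng.standard_normal(16)
flow_feat = rng.standard_normal(16)

def late_fusion_logits(features, weights, bias):
    """Concatenate modality features and apply a linear 3-class head."""
    x = np.concatenate(features)             # (48,)
    return x @ weights + bias                # (3,) logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative (untrained) head: non-AMD vs. NNV AMD vs. NV AMD.
W = rng.standard_normal((48, 3)) * 0.1
b = np.zeros(3)

probs = softmax(late_fusion_logits([octa_feat, oct_feat, flow_feat], W, b))
print(probs)
```

As the abstract notes, the networks performed best when trained on multiple inputs combined; in this sketch that corresponds to concatenating all three feature vectors rather than passing any one alone.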

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a closed-loop system that delivers personalized EEG-triggered repetitive TMS (rTMS) to patients undergoing treatment for major depressive disorder; patients were randomly assigned to either a synchronized or an unsynchronized treatment group, with synchronization of rTMS to their prefrontal EEG quasi-alpha rhythm.

8 citations


Journal ArticleDOI
TL;DR: It is shown that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements, which leads to faster and more accurate multisensory decisions.
Abstract: Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance. 
SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
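The drift-diffusion account above can be illustrated with a toy simulation in which a larger drift rate, standing in for the multisensory enhancement, yields faster and more accurate decisions; the drift values, bound, and trial counts are arbitrary choices for the sketch, not parameters fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.005, n_trials=500):
    """Simulate a two-boundary drift-diffusion model; return mean
    accuracy and mean response time (s)."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= bound)
    return float(np.mean(correct)), float(np.mean(rts))

# Hypothetical drift rates: the multisensory (VH) condition is given a
# larger drift than a unisensory one, mimicking the reported enhancement.
acc_uni, rt_uni = simulate_ddm(drift=0.8)
acc_multi, rt_multi = simulate_ddm(drift=1.6)
print(acc_uni, rt_uni, acc_multi, rt_multi)
```

In the paper the drift is informed by the neural representations of active sensing; here it is simply set by hand to show the qualitative effect on speed and accuracy.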

5 citations


Posted ContentDOI
23 Aug 2022-bioRxiv
TL;DR: The results provide the first demonstration in humans of how phasic pupil-linked arousal relates to the reduction of response inhibition, an inference which otherwise would remain hidden without the help of simultaneous multi-modal acquisitions.
Abstract: Attention reorienting is a critical cognitive function which drives how we respond to novel and unexpected stimuli. In recent years, arousal has been linked to attention reorienting. The timing and spatial organization of the interactions between the arousal and reorienting systems, however, remain only partially revealed. Here, we investigate the dynamics between the two systems through simultaneous recordings of pupillometry, EEG, and fMRI of healthy human subjects while they performed an auditory target detection task. We used pupil diameter and activity in the noradrenergic locus coeruleus to infer arousal, and found these measures linked to distinct cortical activity at various temporal stages of the reorienting response. Specifically, our results provide the first demonstration in humans of how phasic pupil-linked arousal relates to the reduction of response inhibition, an inference which otherwise would remain hidden without the help of simultaneous multi-modal acquisitions.

1 citation


Posted ContentDOI
01 Feb 2022-bioRxiv
TL;DR: In this paper, interactions among the salience network (SN), the dorsal attention network (DAN), and the locus coeruleus-norepinephrine (LC-NE) neuromodulatory system in salience processing were investigated.
Abstract: The processing of salient stimuli involves a wide range of both bottom-up and top-down processes. Previous neuroimaging studies have identified multiple brain areas and networks for salience processing, including the salience network (SN), dorsal attention network (DAN), and the locus coeruleus-norepinephrine (LC-NE) neuromodulatory system. However, interactions among these networks and the cortico-subcortical systems in salience processing remain unclear. Here, we simultaneously recorded pupillometry, electroencephalogram (EEG), and functional magnetic resonance imaging (fMRI) during an auditory oddball paradigm. Using EEG-informed fMRI analysis, we temporally dissociated the target stimulus evoked activation, allowing us to identify the cascades of cortical areas associated with salience processing. Furthermore, functional connectivity analysis uncovered spatiotemporal functional network organizations of these salience processing neural correlates. Using pupillometry as a psychophysiological marker of LC-NE activity, we also assessed brain-pupil relationships. With state-space modeling of target modulated effective connectivity, we found that the target evoked pupillary response is associated with the network causal couplings from late to early subsystems, as well as the network switching initiated by the SN. These findings indicate that the SN might cooperate with pupil-indexed brainstem neuromodulatory systems, such as the LC-NE system, in the reorganization and dynamic switching of cortical networks, and shed light on the implications of their integrative framework in various cognitive processes and neurological diseases.
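EEG-informed fMRI analysis of the kind described typically builds a BOLD regressor by placing trial-wise EEG-derived amplitudes at stimulus onsets and convolving them with a hemodynamic response function. A minimal sketch, assuming a common double-gamma HRF parameterization (peak ~5-6 s, undershoot ~15-16 s) rather than the study's exact pipeline, with hypothetical onsets and trial weights:

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t):
    """Double-gamma HRF; parameters follow common defaults, not
    necessarily those used in the paper."""
    t = np.asarray(t, dtype=float)
    pos = t ** 5 * np.exp(-t) / gamma(6)     # peak component
    neg = t ** 15 * np.exp(-t) / gamma(16)   # undershoot component
    h = pos - neg / 6.0
    return h / h.max()

dt = 0.1                                     # seconds per sample
t = np.arange(0, 32, dt)
hrf = double_gamma_hrf(t)

# Single-trial EEG component amplitudes placed at (hypothetical)
# stimulus onsets, then convolved with the HRF to predict BOLD.
eeg_amplitude = np.zeros(600)                # 60 s at 10 Hz
eeg_amplitude[[50, 200, 350]] = [1.0, 0.5, 1.5]
bold_regressor = np.convolve(eeg_amplitude, hrf)[: len(eeg_amplitude)]
print(bold_regressor.argmax() * dt)          # peak ~5 s after largest trial
```

The resulting regressor would then enter a GLM against the fMRI time series, which is the general idea behind temporally dissociating the target-evoked activation.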

1 citation


Journal ArticleDOI
01 Mar 2022
TL;DR: In this paper, the Apollo Distributed Control Task (ADCT) is described: a team-based VR task in which individuals, each with a single independent degree-of-freedom control and a limited view of the environment, must work together to guide a virtual spacecraft back to Earth.
Abstract: Assessing and tracking physiological and cognitive states of multiple individuals interacting in virtual environments is of increasing interest to the virtual reality (VR) community. In this paper, we describe a team-based VR task termed the Apollo Distributed Control Task (ADCT), where individuals, each via a single independent degree-of-freedom control and limited environmental views, must work together to guide a virtual spacecraft back to Earth. Novel to the experiment is that 1) we simultaneously collect multiple physiological measures, including electroencephalography (EEG), pupillometry, speech signals, and individuals' actions, and 2) we regulate the difficulty of the task and the type of communication between teammates. Focusing on the analysis of pupil dynamics, which have been linked to a number of cognitive and physiological processes such as arousal, cognitive control, and working memory, we find that pupil diameter changes are predictive of multiple task-related dimensions, including the difficulty of the task, the role of the team member, and the type of communication.
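Trial-level pupil features of the sort that could feed such predictions are straightforward to extract; the baseline window, feature set, and synthetic trace below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

def pupil_features(trace, fs=60.0):
    """Simple trial-level pupil features: baseline-corrected peak
    dilation and its latency (illustrative, not the paper's exact set)."""
    baseline = trace[: int(0.5 * fs)].mean()   # first 500 ms as baseline
    corrected = trace - baseline
    peak = corrected.max()
    latency = corrected.argmax() / fs          # seconds from trace start
    return peak, latency

fs = 60.0                                      # assumed eye-tracker rate
t = np.arange(0, 4, 1 / fs)
# Synthetic task-evoked dilation peaking ~1.5 s after onset (toy data).
trace = 3.0 + 0.4 * np.exp(-((t - 1.5) ** 2) / 0.5)

peak, latency = pupil_features(trace, fs)
print(round(peak, 2), round(latency, 2))
```

Features like these, computed per trial and per teammate, could then be regressed against task difficulty, team role, or communication type.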

1 citation


Posted ContentDOI
16 Feb 2022-bioRxiv
TL;DR: Results demonstrate that TMS-evoked top-down influences vary as a function of the prefrontal alpha rhythm, and suggest clinical applications whereby TMS is synchronized to the brain’s internal rhythms in order to more efficiently engage deep therapeutic targets.
Abstract: BACKGROUND The communication through coherence model posits that brain rhythms are synchronized across different frequency bands and that effective connectivity strength between interacting regions depends on their phase relation. Evidence to support the model comes mostly from electrophysiological recordings in animals while evidence from human data is limited. METHODS Here, an fMRI-EEG-TMS (fET) instrument capable of acquiring simultaneous fMRI and EEG during noninvasive single pulse TMS applied to dorsolateral prefrontal cortex (DLPFC) was used to test whether prefrontal EEG alpha phase moderates TMS-evoked top-down influences on subgenual, rostral and dorsal anterior cingulate cortex (ACC). Results in healthy volunteers (n=11) were compared to those from patients with major depressive disorder (MDD) (n=17) collected as part of an ongoing clinical trial investigation. RESULTS In both groups, TMS-evoked functional connectivity between DLPFC and subgenual ACC (sgACC) depended on the EEG alpha phase. TMS-evoked DLPFC to sgACC effective connectivity (EC) was moderated by EEG alpha phase in healthy volunteers, but not in the MDD patients. Top-down EC was inhibitory for TMS onsets during the upward slope of the alpha wave relative to TMS timed to the downward slope of the alpha wave. Prefrontal EEG alpha phase dependent effects on TMS-evoked fMRI BOLD activation of the rostral anterior cingulate cortex were detected in the MDD patient group, but not in the healthy volunteer group. DISCUSSION Results demonstrate that TMS-evoked top-down influences vary as a function of the prefrontal alpha rhythm, and suggest clinical applications whereby TMS is synchronized to the brain’s internal rhythms in order to more efficiently engage deep therapeutic targets.
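Phase-dependent stimulation of this kind hinges on estimating the instantaneous alpha phase, commonly via the Hilbert transform. A minimal sketch on an idealized 10 Hz rhythm: the FFT construction below mirrors `scipy.signal.hilbert`, and under its convention phase 0 falls at the signal peak, so the rising (upward-slope) flank spans negative phases:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform (same construction as
    scipy.signal.hilbert): zero negative frequencies, double positives."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 500.0                                   # Hz (assumed sampling rate)
t = np.arange(0, 2.0, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)           # idealized 10 Hz alpha rhythm

phase = np.angle(analytic_signal(alpha))     # instantaneous phase, rad

# Rising flank = negative phase (approaching the peak at phase 0); a
# phase-triggered TMS pulse could be gated on this criterion.
upward = phase < 0
print(round(upward.mean(), 2))               # → 0.5
```

In practice the EEG would be band-pass filtered around the individual's quasi-alpha peak before phase estimation, and real-time systems must additionally forecast phase to compensate for processing latency.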

1 citation


Journal ArticleDOI
TL;DR: In this paper, a cyclic convolutional transcoder was used to transcode EEG to fMRI and vice versa, without any prior knowledge of either the hemodynamic response function or the lead-field matrix.
Abstract: Simultaneous EEG-fMRI is a multi-modal neuroimaging technique that provides complementary spatial and temporal resolution. A persistent challenge has been developing principled and interpretable approaches for fusing the modalities, specifically approaches enabling inference of latent source spaces representative of neural activity. In this paper, we address this inference problem within the framework of transcoding – mapping from a specific encoding (modality) to a decoding (the latent source space) and then encoding the latent source space to the other modality. Specifically, we develop a symmetric method consisting of a cyclic convolutional transcoder that transcodes EEG to fMRI and vice versa. Without any prior knowledge of either the hemodynamic response function or the lead-field matrix, the completely data-driven method exploits the temporal and spatial relationships between the modalities and latent source spaces to learn these mappings. We quantify, for both simulated and real EEG-fMRI data, how well the modalities can be transcoded from one to another, as well as the source spaces that are recovered, all evaluated on unseen data. In addition to enabling a new way to symmetrically infer a latent source space, the
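The cyclic structure (transcode into the latent source space, then back, and penalize reconstruction error) can be illustrated with linear maps standing in for the paper's convolutional transcoders; the dimensions and the pseudoinverse pairing are assumptions made purely for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear stand-ins for the learned transcoders (illustration only).
n_eeg, n_src = 8, 4
eeg_to_src = rng.standard_normal((n_src, n_eeg)) * 0.3   # "decode"
src_to_eeg = np.linalg.pinv(eeg_to_src)                  # "encode" back

def cycle_loss(x, fwd, bwd):
    """Cycle-consistency: map through the latent source space and back,
    then measure mean squared reconstruction error."""
    recon = bwd @ (fwd @ x)
    return float(np.mean((x - recon) ** 2))

eeg = rng.standard_normal((n_eeg, 100))                  # channels x time
src = eeg_to_src @ eeg                                   # latent sources

loss_src = cycle_loss(src, src_to_eeg, eeg_to_src)  # source->EEG->source
loss_eeg = cycle_loss(eeg, eeg_to_src, src_to_eeg)  # EEG->source->EEG
print(loss_src, loss_eeg)
```

Here `loss_src` is essentially zero because the 4-dimensional source survives the round trip, while `loss_eeg` stays positive because 8-channel data cannot be fully reconstructed from 4 latent sources; training the real transcoders amounts to minimizing such cycle losses over data.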

Book ChapterDOI
01 Jan 2022
TL;DR: In this paper, images from a low-cost, portable OCT (p-OCT) device were enhanced with generative adversarial networks, improving downstream AI-based detection of age-related macular degeneration.
Abstract: Optical coherence tomography (OCT) is widely used for detection of ophthalmic diseases, such as glaucoma, age-related macular degeneration (AMD), and diabetic retinopathy. Using a low-coherence-length light source, OCT is able to achieve high axial resolution in biological samples; this depth information is used by ophthalmologists to assess retinal structures and characterize disease states. However, OCT systems are often bulky and expensive, costing tens of thousands of dollars and weighing on the order of 50 pounds or more. Such constraints make it difficult for OCT to be accessible in low-resource settings. In the U.S. alone, only 15.3% of diabetic patients meet the recommendation of obtaining annual eye exams; the situation is even worse for minority/under-served populations. In this study, we focus on data acquired with a low-cost, portable OCT (p-OCT) device, characterized by lower resolution, scanning rate, and imaging depth than a commercial OCT system. We use generative adversarial networks (GANs) to enhance the quality of this p-OCT data and then assess the impact of this enhancement on downstream performance of artificial intelligence (AI) algorithms for AMD detection. Using GANs trained on simulated p-OCT data generated from paired commercial OCT data degraded with the point spread function (PSF) of the p-OCT device, we observe improved AI performance on p-OCT data after single-image super-resolution. We also achieve denoising after image-to-image translation. By exhibiting proof-of-principle AI-based AMD detection even on low-quality p-OCT data, this study stimulates future work toward low-cost, portable imaging+AI systems for eye disease detection.
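Simulated p-OCT training pairs of the kind described can be produced by degrading commercial-OCT images with the device PSF plus noise; the Gaussian PSF and noise level below are illustrative stand-ins for the measured point spread function:

```python
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    """Hypothetical Gaussian stand-in for the measured p-OCT PSF."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2 * sigma ** 2))
    psf = np.outer(k, k)
    return psf / psf.sum()

def degrade(image, psf, noise_sigma=0.05, rng=None):
    """Blur a commercial-OCT b-scan with the PSF and add noise to
    simulate a paired low-quality p-OCT image."""
    rng = rng or np.random.default_rng(0)
    pad = psf.shape[0] // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(
                padded[i:i + psf.shape[0], j:j + psf.shape[1]] * psf)
    return out + noise_sigma * rng.standard_normal(image.shape)

hi_res = np.zeros((32, 32))
hi_res[12:20, 12:20] = 1.0                   # toy "retinal layer" patch
lo_res = degrade(hi_res, gaussian_psf())
print(lo_res.shape)
```

Pairs of `hi_res` and `lo_res` are then what a super-resolution GAN would train on, with the commercial image as the target and the degraded image as the input.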

Proceedings ArticleDOI
01 Jul 2022
TL;DR: The experimental results indicate that deep metric learning can be used as an additional refinement step to learn representations of fMRI data that significantly improve performance on downstream modeling tasks.
Abstract: With the growing size of resting-state fMRI datasets and advances in deep learning methods, there are ever-increasing opportunities to leverage progress in deep learning to solve challenging tasks in neuroimaging. In this work, we build upon recent advances in deep metric learning to learn embeddings of rs-fMRI data, which can then potentially be used for several downstream tasks. We propose an efficient training method for our model and compare our method with other widely used models. Our experimental results indicate that deep metric learning can be used as an additional refinement step to learn representations of fMRI data that significantly improve performance on downstream modeling tasks.
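Deep metric learning of embeddings is commonly driven by a triplet margin loss; a minimal numeric sketch (the loss form below is the standard one, not necessarily the paper's exact objective, and the embeddings are hypothetical encoder outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors: pull the
    anchor toward a same-class positive, push it from a different-class
    negative, up to a margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical rs-fMRI embeddings (outputs of some encoder network).
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])    # same subject/condition
negative = np.array([0.0, 1.0, 0.0])    # different subject/condition

print(triplet_loss(anchor, positive, negative))   # well separated → 0.0
print(triplet_loss(anchor, negative, positive))   # violated → positive
```

Minimizing this loss over many triplets is what shapes the embedding space so that downstream models (e.g., simple classifiers) perform better on it.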

Proceedings ArticleDOI
01 Jul 2022
TL;DR: Qualitative analysis suggests that the Multimodal Neurophysiological Transformer (MNT) is able to model neural influences on autonomic activity in predicting arousal, and that it has the potential to be fine-tuned to a variety of downstream tasks, including brain-computer interface (BCI) systems.
Abstract: Understanding neural function often requires multiple modalities of data, including electrophysiological data, imaging techniques, and demographic surveys. In this paper, we introduce a novel neurophysiological model to tackle major challenges in modeling multimodal data. First, we avoid non-alignment issues between raw signals and extracted, frequency-domain features by addressing the issue of variable sampling rates. Second, we encode modalities through “cross-attention” with other modalities. Lastly, we utilize properties of our parent transformer architecture to model long-range dependencies between segments across modalities and assess intermediary weights to better understand how source signals affect prediction. We apply our Multimodal Neurophysiological Transformer (MNT) to predict valence and arousal in an existing open-source dataset. Experiments on non-aligned multimodal time-series show that our model performs similarly to and, in some cases, outperforms existing methods in classification tasks. In addition, qualitative analysis suggests that MNT is able to model neural influences on autonomic activity in predicting arousal. Our architecture has the potential to be fine-tuned to a variety of downstream tasks, including brain-computer interface (BCI) systems.
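The cross-attention referenced above (one modality's tokens querying another's) can be sketched with plain scaled dot-product attention; the shapes, token sources, and single-head form are assumptions for illustration, not the MNT implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(tokens_a, tokens_b, wq, wk, wv):
    """Scaled dot-product cross-attention: modality A's tokens attend to
    modality B's (generic single-head sketch)."""
    q = tokens_a @ wq                        # (Ta, d) queries from A
    k = tokens_b @ wk                        # (Tb, d) keys from B
    v = tokens_b @ wv                        # (Tb, d) values from B
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (Ta, Tb)
    attn = softmax(scores, axis=-1)          # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(3)
d = 8
eeg_tokens = rng.standard_normal((10, d))    # e.g., EEG segment embeddings
ecg_tokens = rng.standard_normal((6, d))     # e.g., autonomic-signal segments

wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out, attn = cross_attention(eeg_tokens, ecg_tokens, wq, wk, wv)
print(out.shape, attn.shape)
```

The attention matrix `attn` is also the kind of intermediary weight one could inspect to ask how one source signal influences predictions from another, as the abstract describes.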