
Showing papers by "Florida Atlantic University published in 2020"


Journal ArticleDOI
TL;DR: This Review provides an overview of the microbial ecology of the plastisphere in the context of its diversity and function, as well as suggesting areas for further research.
Abstract: The plastisphere, which comprises the microbial community on plastic debris, rivals that of the built environment in spanning multiple biomes on Earth. Although human-derived debris has been entering the ocean for thousands of years, microplastics now numerically dominate marine debris and are primarily colonized by microbial and other microscopic life. The realization that this novel substrate in the marine environment can facilitate microbial dispersal and affect all aquatic ecosystems has intensified interest in the microbial ecology and evolution of this biotope. Whether a ‘core’ plastisphere community exists that is specific to plastic is currently a topic of intense investigation. This Review provides an overview of the microbial ecology of the plastisphere in the context of its diversity and function, as well as suggesting areas for further research.

532 citations


Journal ArticleDOI
21 Mar 2020-Cureus
TL;DR: A case of a 74-year-old patient who traveled from Europe to the United States and presented with encephalopathy and COVID-19 is reported, the first description of encephalopathy as a presenting symptom of the disease.
Abstract: Coronavirus disease 2019 (COVID-19) is a pandemic. Neurological complications of COVID-19 have not been reported. Encephalopathy has not been described as a presenting symptom or complication of COVID-19. We report a case of a 74-year-old patient who traveled from Europe to the United States and presented with encephalopathy and COVID-19.

496 citations


Journal ArticleDOI
TL;DR: Network representation learning as discussed by the authors is a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information.
Abstract: With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.

494 citations
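The core idea the survey above describes, mapping vertices to low-dimensional vectors that preserve network structure, can be sketched with a plain matrix factorization. This is a toy illustration under stated assumptions: the graph, embedding dimension, and SVD-based method are invented here, not taken from the paper.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Embed each vertex by factorizing the adjacency matrix and keeping the
# top-d singular directions; row k of Z is the embedding of vertex k.
d = 2
U, S, _ = np.linalg.svd(A)
Z = U[:, :d] * np.sqrt(S[:d])

# Vertices 0 and 1 play interchangeable structural roles in this graph,
# so their embeddings coincide; vertex 2 (a bridge endpoint) does not.
print(np.linalg.norm(Z[0] - Z[1]) < np.linalg.norm(Z[0] - Z[2]))  # True
```

Real network-representation-learning methods (random-walk, deep, or attributed variants surveyed in the paper) replace this factorization with richer objectives, but the output is the same kind of per-vertex vector, ready for downstream analysis in the new space.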


Journal ArticleDOI
TL;DR: This work uses qualitative visualizations of emulated coughs and sneezes to examine how material- and design-choices impact the extent to which droplet-laden respiratory jets are blocked, and outlines the procedure for setting up simple visualization experiments using easily available materials.
Abstract: The use of face masks in public settings has been widely recommended by public health officials during the current COVID-19 pandemic. The masks help mitigate the risk of cross-infection via respiratory droplets; however, there are no specific guidelines on mask materials and designs that are most effective in minimizing droplet dispersal. While there have been prior studies on the performance of medical-grade masks, there are insufficient data on cloth-based coverings, which are being used by a vast majority of the general public. We use qualitative visualizations of emulated coughs and sneezes to examine how material- and design-choices impact the extent to which droplet-laden respiratory jets are blocked. Loosely folded face masks and bandana-style coverings provide minimal stopping-capability for the smallest aerosolized respiratory droplets. Well-fitted homemade masks with multiple layers of quilting fabric, and off-the-shelf cone style masks, proved to be the most effective in reducing droplet dispersal. These masks were able to curtail the speed and range of the respiratory jets significantly, albeit with some leakage through the mask material and from small gaps along the edges. Importantly, uncovered emulated coughs were able to travel notably farther than the currently recommended 6-ft distancing guideline. We outline the procedure for setting up simple visualization experiments using easily available materials, which may help healthcare professionals, medical researchers, and manufacturers in assessing the effectiveness of face masks and other personal protective equipment qualitatively.

276 citations


Journal ArticleDOI
TL;DR: Information, insights, and recommended approaches to COVID‐19 in the long‐term care facility setting are provided; the situation remains fluid and is changing rapidly.
Abstract: The pandemic of coronavirus disease of 2019 (COVID-19) is having a global impact unseen since the 1918 worldwide influenza epidemic. All aspects of life have changed dramatically for now. The group most susceptible to COVID-19 are older adults and those with chronic underlying medical disorders. The population residing in long-term care facilities generally are those who are both old and have multiple comorbidities. In this article we provide information, insights, and recommended approaches to COVID-19 in the long-term facility setting. Because the situation is fluid and changing rapidly, readers are encouraged to access frequently the resources cited in this article. J Am Geriatr Soc 68:912-917, 2020.

271 citations


Journal ArticleDOI
TL;DR: This survey takes an interdisciplinary approach to cover studies related to CatBoost in a single work, providing researchers an in-depth understanding to help clarify proper application of CatBoost in solving problems.
Abstract: Gradient Boosted Decision Trees (GBDT’s) are a powerful tool for classification and regression tasks in Big Data. Researchers should be familiar with the strengths and weaknesses of current implementations of GBDT’s in order to use them effectively and make successful contributions. CatBoost is a member of the family of GBDT machine learning ensemble techniques. Since its debut in late 2018, researchers have successfully used CatBoost for machine learning studies involving Big Data. We take this opportunity to review recent research on CatBoost as it relates to Big Data, and learn best practices from studies that cast CatBoost in a positive light, as well as studies where CatBoost does not outshine other techniques, since we can learn lessons from both types of scenarios. Furthermore, as a Decision Tree based algorithm, CatBoost is well-suited to machine learning tasks involving categorical, heterogeneous data. Recent work across multiple disciplines illustrates CatBoost’s effectiveness and shortcomings in classification and regression tasks. Another important issue we expose in literature on CatBoost is its sensitivity to hyper-parameters and the importance of hyper-parameter tuning. One contribution we make is to take an interdisciplinary approach to cover studies related to CatBoost in a single work. This provides researchers an in-depth understanding to help clarify proper application of CatBoost in solving problems. To the best of our knowledge, this is the first survey that studies all works related to CatBoost in a single publication.

247 citations
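The hyper-parameter sensitivity the survey highlights is easy to demonstrate with any GBDT implementation. CatBoost itself is a separate package, so this sketch substitutes scikit-learn's GradientBoostingClassifier on synthetic data; the grid values and dataset are arbitrary choices for illustration, not recommendations from the survey.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic classification task standing in for a Big Data problem.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Sweep two of the hyper-parameters GBDT studies flag as influential.
grid = {"learning_rate": [0.01, 0.1, 0.3], "max_depth": [2, 4]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      grid, cv=3, scoring="accuracy")
search.fit(X, y)

# The spread between the best and worst grid points is a rough measure
# of how much tuning matters on this task.
scores = search.cv_results_["mean_test_score"]
print(search.best_params_, round(scores.max() - scores.min(), 3))
```

With the actual CatBoost library, the analogous sweep would cover parameters such as its learning rate and tree depth; the survey's point is that skipping this step can make CatBoost look artificially weak in comparisons.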



Journal ArticleDOI
TL;DR: The pandemic of viral infection with the severe acute respiratory syndrome coronavirus‐2 that causes COVID‐19 disease has put the nursing home industry in crisis and there is an opportunity to improve nursing homes to protect residents and their caregivers ahead of the next storm.
Abstract: The pandemic of viral infection with the severe acute respiratory syndrome coronavirus-2 that causes COVID-19 disease has put the nursing home industry in crisis. The combination of a vulnerable population that manifests nonspecific and atypical presentations of COVID-19, staffing shortages due to viral infection, inadequate resources for and availability of rapid, accurate testing and personal protective equipment, and lack of effective treatments for COVID-19 among nursing home residents have created a "perfect storm" in our country's nursing homes. This perfect storm will continue as society begins to reopen, resulting in more infections among nursing home staff and clinicians who acquire the virus outside of work, remain asymptomatic, and unknowingly perpetuate the spread of the virus in their workplaces. Because of the elements of the perfect storm, nursing homes are like a tinderbox, and it only takes one person to start a fire that could cause many deaths in a single facility. Several public health interventions and health policy strategies, adequate resources, and focused clinical quality improvement initiatives can help calm the storm. The saddest part of this perfect storm is that many years of inaction on the part of policy makers contributed to its impact. We now have an opportunity to improve nursing homes to protect residents and their caregivers ahead of the next storm. It is time to reimagine how we pay for and regulate nursing home care to achieve this goal. J Am Geriatr Soc 68:2153-2162, 2020.

228 citations


Journal ArticleDOI
TL;DR: This study provides a starting point for research in determining which techniques for preparing qualitative data for use with neural networks are best, and is the first in-depth look at techniques for working with categorical data in neural networks.
Abstract: This survey investigates current techniques for representing qualitative data for use as input to neural networks. Techniques for using qualitative data in neural networks are well known. However, researchers continue to discover new variations or entirely new methods for working with categorical data in neural networks. Our primary contribution is to cover these representation techniques in a single work. Practitioners working with big data often have a need to encode categorical values in their datasets in order to leverage machine learning algorithms. Moreover, the size of data sets we consider as big data may cause one to reject some encoding techniques as impractical, due to their running time complexity. Neural networks take vectors of real numbers as inputs. One must use a technique to map qualitative values to numerical values before using them as input to a neural network. These techniques are known as embeddings, encodings, representations, or distributed representations. Another contribution this work makes is to provide references for the source code of various techniques, where we are able to verify the authenticity of the source code. We cover recent research in several domains where researchers use categorical data in neural networks. Some of these domains are natural language processing, fraud detection, and clinical document automation. This study provides a starting point for research in determining which techniques for preparing qualitative data for use with neural networks are best. It is our intention that the reader should use these implementations as a starting point to design experiments to evaluate various techniques for working with qualitative data in neural networks. The third contribution we make in this work is a new perspective on techniques for using categorical data in neural networks. We organize techniques for using categorical data in neural networks into three categories, finding three distinct patterns that identify a technique as determined, algorithmic, or automated. The fourth contribution we make is to identify several opportunities for future research. The form of the data that one uses as an input to a neural network is crucial for using neural networks effectively. This work is a tool for researchers to find the most effective technique for working with categorical data in neural networks, in big data settings. To the best of our knowledge, this is the first in-depth look at techniques for working with categorical data in neural networks.

217 citations
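The contrast between the survey's categories can be sketched in a few lines: one-hot encoding is fully determined by the category set alone, while an embedding table is a representation the network would learn automatically during training. The feature names, levels, and sizes below are invented for illustration.

```python
import numpy as np

# Hypothetical categorical feature with four levels.
levels = ["red", "green", "blue", "yellow"]
index = {v: i for i, v in enumerate(levels)}
data = ["red", "blue", "blue", "yellow"]

# 1) Determined technique: one-hot encoding, fixed by the category alone.
one_hot = np.zeros((len(data), len(levels)))
for row, value in enumerate(data):
    one_hot[row, index[value]] = 1.0

# 2) Automated technique: a trainable embedding table mapping each level
#    to a dense vector; a network would update E during training.
rng = np.random.default_rng(0)
E = rng.normal(size=(len(levels), 2))            # 4 levels -> 2-d vectors
embedded = E[[index[v] for v in data]]

print(one_hot.shape, embedded.shape)             # (4, 4) (4, 2)
```

Note the size difference: one-hot width grows with the number of levels (a real concern for big-data cardinalities), whereas the embedding width is a free parameter chosen independently of the level count.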


Journal ArticleDOI
TL;DR: The empirical study quantified the increase in training time when dropout and batch normalization are used, as well as the increase in prediction time (important for constrained environments, such as smartphones and low-powered IoT devices), and showed that a non-adaptive optimizer can outperform adaptive optimizers, but only at the cost of a significant amount of training time spent on hyperparameter tuning.
Abstract: Overfitting and long training time are two fundamental challenges in multilayered neural network learning and deep learning in particular. Dropout and batch normalization are two well-recognized approaches to tackle these challenges. While both approaches share overlapping design principles, numerous research results have shown that they have unique strengths to improve deep learning. Many tools simplify these two approaches as a simple function call, allowing flexible stacking to form deep learning architectures. Although usage guidelines for each are available, unfortunately there is no well-defined set of rules or comprehensive study investigating them with respect to data input, network configurations, learning efficiency, and accuracy. It is not clear when users should consider using dropout and/or batch normalization, and how they should be combined (or used alternatively) to achieve optimized deep learning outcomes. In this paper we conduct an empirical study to investigate the effect of dropout and batch normalization on training deep learning models. We use multilayered dense neural networks and convolutional neural networks (CNN) as the deep learning models, and mix dropout and batch normalization to design different architectures and subsequently observe their performance in terms of training and test CPU time, number of parameters in the model (as a proxy for model size), and classification accuracy. The interplay between network structures, dropout, and batch normalization allows us to conclude when and how dropout and batch normalization should be considered in deep learning. The empirical study quantified the increase in training time when dropout and batch normalization are used, as well as the increase in prediction time (important for constrained environments, such as smartphones and low-powered IoT devices). It showed that a non-adaptive optimizer (e.g. SGD) can outperform adaptive optimizers, but only at the cost of a significant amount of training time to perform hyperparameter tuning, while an adaptive optimizer (e.g. RMSProp) performs well without much tuning. Finally, it showed that dropout and batch normalization should be used in CNNs only with caution and experimentation (when in doubt and short on time to experiment, use only batch normalization).

207 citations
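For readers unfamiliar with the two operations the study compares, here is a minimal NumPy sketch of their training-mode forward passes. It is deliberately simplified: no running statistics, no backward pass, and default gamma/beta, so it illustrates the mechanics rather than any framework's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 10))   # a batch of activations

# Batch normalization (training-mode forward pass): standardize each
# feature over the batch, then apply a learnable scale/shift (gamma, beta).
def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# Inverted dropout (training-mode forward pass): zero each unit with
# probability p and rescale survivors so the expected activation is unchanged.
def dropout(x, p=0.5, rng=rng):
    mask = rng.random(x.shape) >= p      # each unit survives with prob 1-p
    return x * mask / (1.0 - p)

h = batch_norm(x)
print(np.allclose(h.mean(axis=0), 0.0, atol=1e-7))  # True: zero mean per feature
```

The study's architectures stack calls like these between dense or convolutional layers; at inference time, batch normalization switches to running statistics and dropout is disabled, which is where the prediction-time overhead measured in the paper comes from.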


01 Jan 2020
TL;DR: In this article, the authors outline some key methodological issues for the uses of MLM in IB, including criteria, sample size, and measure equivalence issues, and examine promising directions for future multilevel IB research considering comparative opportunities at nation, multiple-nation cluster, and within-nation region levels.
Abstract: Multiple-level (or mixed linear) modeling (MLM) can simultaneously test hypotheses at several levels of analysis (usually two or three), or control for confounding effects at one level while testing hypotheses at others. Advances in multi-level modeling allow increased precision in quantitative international business (IB) research, and open up new methodological and conceptual possibilities. However, they create new challenges, and they are still not frequently used in IB research. In this editorial we outline some key methodological issues for the uses of MLM in IB, including criteria, sample size, and measure equivalence issues. We then examine promising directions for future multilevel IB research considering comparative opportunities at nation, multiple-nation cluster, and within-nation region levels, including large multilevel databases. We also consider its promise for MNE research about semi-globalization, interorganizational effects across nations, clusters within nations, and teams and subsidiaries within MNEs.

Journal ArticleDOI
TL;DR: The meta-analysis reported in this article integrates findings from 231 samples and more than 75,000 consumers to extend understanding of the relationship between impulse buying and its determinants, associated with several internal and external factors.
Abstract: Impulse buying by consumers has received considerable attention in consumer research. The phenomenon is interesting because it is not only prompted by a variety of internal psychological factors but also influenced by external, market-related stimuli. The meta-analysis reported in this article integrates findings from 231 samples and more than 75,000 consumers to extend understanding of the relationship between impulse buying and its determinants, associated with several internal and external factors. Traits (e.g., sensation-seeking, impulse buying tendency), motives (e.g., utilitarian, hedonic), consumer resources (e.g., time, money), and marketing stimuli emerge as key triggers of impulse buying. Consumers’ self-control and mood states mediate and explain the affective and cognitive psychological processes associated with impulse buying. By establishing these pathways and processes, this study helps clarify factors contributing to impulse buying and the role of factors in resisting such impulses. It also explains the inconsistent findings in prior research by highlighting the context-dependency of various determinants. Specifically, the results of a moderator analysis indicate that the impacts of many determinants depend on the consumption context (e.g., product’s identity expression, price level in the industry).

Journal ArticleDOI
TL;DR: In patients with refractory hypercholesterolemia, the use of evinacumab significantly reduced the LDL cholesterol level, by more than 50% at the maximum dose.
Abstract: Background Patients with refractory hypercholesterolemia, who have high low-density lipoprotein (LDL) cholesterol levels despite treatment with lipid-lowering therapies at maximum tolerate...

Journal ArticleDOI
TL;DR: In this paper, the authors hypothesize that AON forces the entrepreneur to bear greater risk and encourages crowdfunders to pledge more capital, enabling entrepreneurs to set larger goals, and further hypothesize that AON is a costly signal of commitment for entrepreneurs, yielding a separating equilibrium with higher-quality and more innovative projects with greater success rates.
Abstract: Reward‐based crowdfunding campaigns are commonly offered in one of two models via fundraising goals set by an entrepreneur: “Keep‐It‐All” (KIA), where the entrepreneur keeps the entire amount raised regardless of achieving the goal, and “All‐Or‐Nothing” (AON), where the entrepreneur keeps nothing unless the goal is achieved. We hypothesize that AON forces the entrepreneur to bear greater risk and encourages crowdfunders to pledge more capital, enabling entrepreneurs to set larger goals. We further hypothesize that AON is a costly signal of commitment for entrepreneurs, yielding a separating equilibrium with higher-quality and more innovative projects with greater success rates. Empirical tests support both hypotheses.

Journal ArticleDOI
TL;DR: Aerosols from singing, speaking and breathing are measured in a zero-background environment, allowing unequivocal attribution of aerosol production to specific vocalisations; guidelines should base recommendations on the volume and duration of the vocalisation, the number of participants and the environment in which the activity occurs.
Abstract: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has resulted in an unprecedented shutdown in social and economic activity, with the cultural sector particularly severely affected. Restrictions on performance have arisen from a perception that there is a significantly higher risk of aerosol production from singing than speaking, based upon high-profile examples of clusters of COVID-19 following choral rehearsals. However, no direct comparison of aerosol generation from singing and speaking has been reported. Here, we measure aerosols from singing, speaking and breathing in a zero-background environment, allowing unequivocal attribution of aerosol production to specific vocalisations. Speaking and singing show steep increases in mass concentration with increase in volume (spanning a factor of 20-30 across the dynamic range measured, p < 10^-5). At the quietest volume (50 to 60 dB), neither singing (p = 0.19) nor speaking (p = 0.20) was significantly different from breathing. At the loudest volume (90 to 100 dB), a statistically significant difference (p < 10^-5) is observed between singing and speaking, but with singing only generating a factor of between 1.5 and 3.4 more aerosol mass. Guidelines should create recommendations based on the volume and duration of the vocalisation, the number of participants and the environment in which the activity occurs, rather than the type of vocalisation. Mitigations such as the use of amplification and increased attention to ventilation should be employed where practicable.

Journal ArticleDOI
TL;DR: The similarities between Ercc1−/∆ and aged WT mice support the conclusion that the DNA repair‐deficient mice accurately model the age‐related accumulation of senescent cells, albeit six‐times faster.
Abstract: Senescent cells accumulate with age in vertebrates and promote aging largely through their senescence-associated secretory phenotype (SASP). Many types of stress induce senescence, including genotoxic stress. ERCC1-XPF is a DNA repair endonuclease required for multiple DNA repair mechanisms that protect the nuclear genome. Humans or mice with reduced expression of this enzyme age rapidly due to increased levels of spontaneous, genotoxic stress. Here, we asked whether this corresponds to an increased level of senescent cells. p16Ink4a and p21Cip1 mRNA were increased ~15-fold in peripheral lymphocytes from 4- to 5-month-old Ercc1-/∆ and 2.5-year-old wild-type (WT) mice, suggesting that these animals exhibit a similar biological age. p16Ink4a and p21Cip1 mRNA were elevated in 10 of 13 tissues analyzed from 4- to 5-month-old Ercc1-/∆ mice, indicating where endogenous DNA damage drives senescence in vivo. Aged WT mice had similar increases of p16Ink4a and p21Cip1 mRNA in the same 10 tissues as the mutant mice. Senescence-associated β-galactosidase activity and p21Cip1 protein also were increased in tissues of the progeroid and aged mice, while Lamin B1 mRNA and protein levels were diminished. In Ercc1-/Δ mice with a p16Ink4a luciferase reporter, bioluminescence rose steadily with age, particularly in lung, thymus, and pancreas. These data illustrate where senescence occurs with natural and accelerated aging in mice and the relative extent of senescence among tissues. Interestingly, senescence was greater in male mice until the end of life. The similarities between Ercc1-/∆ and aged WT mice support the conclusion that the DNA repair-deficient mice accurately model the age-related accumulation of senescent cells, albeit six-times faster.

Journal ArticleDOI
TL;DR: In this article, the performance of face shields and exhalation valves in impeding the spread of aerosol-sized droplets is examined. The authors suggest that, to minimize the community spread of COVID-19, it may be preferable to use high-quality cloth or surgical masks of a plain design instead of face shields and masks equipped with exhale valves.
Abstract: Several places across the world are experiencing a steep surge in COVID-19 infections. Face masks have become increasingly accepted as one of the most effective means for combating the spread of the disease when used in combination with social-distancing and frequent hand-washing. However, there is an increasing trend of people substituting regular cloth or surgical masks with clear plastic face shields and with masks equipped with exhalation valves. One of the factors driving this increased adoption is improved comfort compared to regular masks. However, there is a possibility that widespread public use of these alternatives to regular masks could have an adverse effect on mitigation efforts. To help increase public awareness regarding the effectiveness of these alternative options, we use qualitative visualizations to examine the performance of face shields and exhalation valves in impeding the spread of aerosol-sized droplets. The visualizations indicate that although face shields block the initial forward motion of the jet, the expelled droplets can move around the visor with relative ease and spread out over a large area depending on light ambient disturbances. Visualizations for a mask equipped with an exhalation port indicate that a large number of droplets pass through the exhale valve unfiltered, which significantly reduces its effectiveness as a means of source control. Our observations suggest that to minimize the community spread of COVID-19, it may be preferable to use high quality cloth or surgical masks that are of a plain design, instead of face shields and masks equipped with exhale valves.

Journal ArticleDOI
Samantha Joel1, Paul W. Eastwick2, Colleen J. Allison3, Ximena B. Arriaga4, Zachary G. Baker5, Eran Bar-Kalifa6, Sophie Bergeron7, Gurit E. Birnbaum8, Rebecca L. Brock9, Claudia Chloe Brumbaugh10, Cheryl L. Carmichael10, Serena Chen11, Jennifer Clarke12, Rebecca J. Cobb13, Michael K. Coolsen14, Jody L. Davis15, David C. de Jong16, Anik Debrot17, Eva C. DeHaas3, Jaye L. Derrick5, Jami Eller18, Marie Joelle Estrada19, Ruddy Faure20, Eli J. Finkel21, R. Chris Fraley22, Shelly L. Gable23, Reuma Gadassi-Polack24, Yuthika U. Girme3, Amie M. Gordon25, Courtney L. Gosnell26, Matthew D. Hammond27, Peggy A. Hannon28, Cheryl Harasymchuk29, Wilhelm Hofmann30, Andrea B. Horn31, Emily A. Impett32, Jeremy P. Jamieson19, Dacher Keltner10, James J. Kim32, Jeffrey L. Kirchner33, Esther S. Kluwer34, Esther S. Kluwer35, Madoka Kumashiro36, Grace M. Larson37, Gal Lazarus38, Jill M. Logan3, Laura B. Luchies39, Geoff MacDonald32, Laura V. Machia40, Michael R. Maniaci41, Jessica A. Maxwell42, Moran Mizrahi43, Amy Muise44, Sylvia Niehuis13, Brian G. Ogolsky22, C. Rebecca Oldham13, Nickola C. Overall42, Meinrad Perrez45, Brett J. Peters46, Paula R. Pietromonaco47, Sally I. Powers47, Thery Prok23, Rony Pshedetzky-Shochat38, Eshkol Rafaeli48, Eshkol Rafaeli38, Erin L. Ramsdell9, Maija Reblin49, Michael Reicherts45, Alan Reifman13, Harry T. Reis19, Galena K. Rhoades50, William S. Rholes51, Francesca Righetti20, Lindsey M. Rodriguez49, Ron Rogge19, Natalie O. Rosen52, Darby E. Saxbe53, Haran Sened38, Jeffry A. Simpson18, Erica B. Slotter54, Scott M. Stanley50, Shevaun L. Stocker55, Cathy Surra56, Hagar Ter Kuile35, Allison A. Vaughn57, Amanda M. Vicary58, Mariko L. Visserman44, Mariko L. Visserman32, Scott T. Wolf33 
University of Western Ontario1, University of California, Davis2, Simon Fraser University3, Purdue University4, University of Houston5, Ben-Gurion University of the Negev6, Université de Montréal7, Interdisciplinary Center Herzliya8, University of Nebraska–Lincoln9, City University of New York10, University of California, Berkeley11, University of Colorado Colorado Springs12, Texas Tech University13, Shippensburg University of Pennsylvania14, Virginia Commonwealth University15, Western Carolina University16, University of Lausanne17, University of Minnesota18, University of Rochester19, VU University Amsterdam20, Northwestern University21, University of Illinois at Urbana–Champaign22, University of California, Santa Barbara23, Yale University24, University of Michigan25, Pace University26, Victoria University of Wellington27, University of Washington28, Carleton University29, Ruhr University Bochum30, University of Zurich31, University of Toronto32, University of North Carolina at Chapel Hill33, Radboud University Nijmegen34, Utrecht University35, Goldsmiths, University of London36, University of Cologne37, Bar-Ilan University38, Calvin University39, Syracuse University40, Florida Atlantic University41, University of Auckland42, Ariel University43, York University44, University of Fribourg45, Ohio University46, University of Massachusetts Amherst47, Barnard College48, University of South Florida49, University of Denver50, Texas A&M University51, Dalhousie University52, University of Southern California53, Villanova University54, University of Wisconsin–Superior55, University of Texas at Austin56, San Diego State University57, Illinois Wesleyan University58
TL;DR: The findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and that effects due to moderation by individual differences and moderation by partner reports may be quite small.
Abstract: Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner's ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person's own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
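The modeling approach above (Random Forests used to quantify how much variance in an outcome a set of self-report predictors explains) can be sketched on synthetic data. Everything below is hypothetical: the predictor names, sample size, and effect sizes are invented, and scikit-learn's RandomForestRegressor stands in for the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for self-report predictors (hypothetical; the real
# study used measures such as perceived-partner commitment and appreciation).
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))
# A made-up "relationship quality" outcome driven by two predictors plus noise.
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Cross-validated R^2: the out-of-sample proportion of variance explained,
# analogous to the 45% / 18% figures reported in the abstract.
model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(round(r2, 2))
```

Reporting cross-validated rather than in-sample R^2 is what lets a study like this claim that quality is "predictable" at a stated level: the forest is always scored on couples it never saw during fitting.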

Journal ArticleDOI
TL;DR: Two cerebellum–thalamus–mPFC pathways in mice that regulate social and repetitive behavior are described, raising the possibility that these circuits might provide neuromodulatory targets for the treatment of ASD.
Abstract: Cerebellar dysfunction has been demonstrated in autism spectrum disorders (ASDs); however, the circuits underlying cerebellar contributions to ASD-relevant behaviors remain unknown. In this study, we demonstrated functional connectivity between the cerebellum and the medial prefrontal cortex (mPFC) in mice; showed that the mPFC mediates cerebellum-regulated social and repetitive/inflexible behaviors; and showed disruptions in connectivity between these regions in multiple mouse models of ASD-linked genes and in individuals with ASD. We delineated a circuit from cerebellar cortical areas Right crus 1 (Rcrus1) and posterior vermis through the cerebellar nuclei and ventromedial thalamus and culminating in the mPFC. Modulation of this circuit induced social deficits and repetitive behaviors, whereas activation of Purkinje cells (PCs) in Rcrus1 and posterior vermis improved social preference impairments and repetitive/inflexible behaviors, respectively, in male PC-Tsc1 mutant mice. These data raise the possibility that these circuits might provide neuromodulatory targets for the treatment of ASD.

Journal ArticleDOI
TL;DR: The search strategy used in systematic reviews is an important consideration, as the comprehensiveness and representativeness of the studies identified influence the quality of conclusions derived from the review. Despite the importance of this step, little in the way of best-practice recommendations exists.

Journal ArticleDOI
01 Feb 2020
TL;DR: A deep FCN, called U-Net, was developed to segment the ICH regions from the CT scans in a fully automated manner and achieved a Dice coefficient of 0.31 for the ICH segmentation based on 5-fold cross-validation.
Abstract: Traumatic brain injuries may cause intracranial hemorrhages (ICH). ICH could lead to disability or death if it is not accurately diagnosed and treated in a time-sensitive procedure. The current clinical protocol to diagnose ICH is examining Computerized Tomography (CT) scans by radiologists to detect ICH and localize its regions. However, this process relies heavily on the availability of an experienced radiologist. In this paper, we designed a study protocol to collect a dataset of 82 CT scans of subjects with a traumatic brain injury. Next, the ICH regions were manually delineated in each slice by a consensus decision of two radiologists. The dataset is publicly available online at the PhysioNet repository for future analysis and comparisons. In addition to publishing the dataset, which is the main purpose of this manuscript, we implemented a deep Fully Convolutional Networks (FCNs), known as U-Net, to segment the ICH regions from the CT scans in a fully-automated manner. The method as a proof of concept achieved a Dice coefficient of 0.31 for the ICH segmentation based on 5-fold cross-validation.
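The Dice coefficient reported above is the standard overlap score for segmentation masks. A minimal sketch of the metric (the U-Net itself is omitted; the masks here are toy binary arrays, not CT data):

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Overlap score between a predicted and a ground-truth binary mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1    # toy predicted lesion (4 px)
truth = np.zeros((4, 4)); truth[0:2, 1:3] = 1  # toy ground-truth lesion (4 px)
print(round(dice(pred, truth), 2))  # 2 px overlap out of 4+4 -> 0.5
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which puts the paper's proof-of-concept score of 0.31 in context.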

Journal ArticleDOI
TL;DR: The COVID-19 pandemic resulted in significant social and economic impacts throughout the world as discussed by the authors, in addition to the health consequences, the impacts on travel behavior have also been sudden...
Abstract: The COVID-19 pandemic resulted in significant social and economic impacts throughout the world. In addition to the health consequences, the impacts on travel behavior have also been sudden ...

Journal ArticleDOI
TL;DR: It is determined that the best performance scores for each study were unexpectedly high overall, which may be due to overfitting, and that information on the data cleaning of CSE-CIC-IDS2018 was inadequate across the board, a finding that may indicate problems with reproducibility of experiments.
Abstract: The exponential growth in computer networks and network applications worldwide has been matched by a surge in cyberattacks. For this reason, datasets such as CSE-CIC-IDS2018 were created to train predictive models on network-based intrusion detection. These datasets are not meant to serve as repositories for signature-based detection systems, but rather to promote research on anomaly-based detection through various machine learning approaches. CSE-CIC-IDS2018 contains about 16,000,000 instances collected over the course of ten days. It is the most recent intrusion detection dataset that is big data, publicly available, and covers a wide range of attack types. This multi-class dataset has a class imbalance, with roughly 17% of the instances comprising attack (anomalous) traffic. Our survey work contributes several key findings. We determined that the best performance scores for each study, where available, were unexpectedly high overall, which may be due to overfitting. We also found that most of the works did not address class imbalance, the effects of which can bias results in a big data study. Lastly, we discovered that information on the data cleaning of CSE-CIC-IDS2018 was inadequate across the board, a finding that may indicate problems with reproducibility of experiments. In our survey, major research gaps have also been identified.
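One common remedy for the class imbalance the survey highlights (roughly 17% attack traffic) is class weighting. A hedged sketch, assuming scikit-learn and synthetic stand-in data rather than CSE-CIC-IDS2018 itself:

```python
# Sketch (synthetic data, not the surveyed studies' code): handling a
# ~17% attack / ~83% benign imbalance with class weighting, so the
# minority (attack) class is not drowned out during training.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
n = 2000
y = (rng.random(n) < 0.17).astype(int)          # 1 = attack, ~17% of traffic
X = rng.normal(size=(n, 4)) + y[:, None] * 1.5  # attacks shifted in feature space

# "balanced" weights are inversely proportional to class frequency.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], np.round(weights, 2))))  # minority class upweighted

clf = LogisticRegression(class_weight="balanced").fit(X, y)
```

Without such weighting (or resampling), overall accuracy can look deceptively high while attack detection is poor, which is one way the "unexpectedly high" scores the survey flags can arise.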

Journal ArticleDOI
TL;DR: A general and fast strategy to design small molecules from sequence that bind an RNA and subsequently cause its destruction was proven to destroy a cancer-causing RNA in a mouse model, thereby inhibiting metastasis and demonstrating the potential of small-molecule therapeutics targeting RNAs.
Abstract: As the area of small molecules interacting with RNA advances, general routes to provide bioactive compounds are needed as ligands can bind RNA avidly to sites that will not affect function. Small-molecule targeted RNA degradation will thus provide a general route to affect RNA biology. A non-oligonucleotide-containing compound was designed from sequence to target the precursor to oncogenic microRNA-21 (pre-miR-21) for enzymatic destruction with selectivity that can exceed that for protein-targeted medicines. The compound specifically binds the target and contains a heterocycle that recruits and activates a ribonuclease to pre-miR-21 to substoichiometrically effect its cleavage and subsequently impede metastasis of breast cancer to lung in a mouse model. Transcriptomic and proteomic analyses demonstrate that the compound is potent and selective, specifically modulating oncogenic pathways. Thus, small molecules can be designed from sequence to have all of the functional repertoire of oligonucleotides, including inducing enzymatic degradation, and to selectively and potently modulate RNA function in vivo.

Proceedings ArticleDOI
20 Apr 2020
TL;DR: A novel approach, unsupervised domain adaptive graph convolutional networks (UDA-GCN), for domain adaptation learning for graphs, which jointly exploits local and global consistency for feature aggregation and facilitates knowledge transfer between graphs.
Abstract: Graph convolutional networks (GCNs) have achieved impressive success in many graph related analytics tasks. However, most GCNs only work in a single domain (graph) incapable of transferring knowledge from/to other domains (graphs), due to the challenges in both graph representation learning and domain adaptation over graph structures. In this paper, we present a novel approach, unsupervised domain adaptive graph convolutional networks (UDA-GCN), for domain adaptation learning for graphs. To enable effective graph representation learning, we first develop a dual graph convolutional network component, which jointly exploits local and global consistency for feature aggregation. An attention mechanism is further used to produce a unified representation for each node in different graphs. To facilitate knowledge transfer between graphs, we propose a domain adaptive learning module to optimize three different loss functions, namely source classifier loss, domain classifier loss, and target classifier loss as a whole, thus our model can differentiate class labels in the source domain, samples from different domains, the class labels from the target domain, respectively. Experimental results on real-world datasets in the node classification task validate the performance of our method, compared to state-of-the-art graph neural network algorithms.
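The "dual" aggregation idea in the abstract (local plus global consistency, combined per node) can be sketched in a few lines of numpy. Everything here is illustrative: the toy graph, the use of 2-hop reachability as the "global" structure, and the fixed 50/50 mix standing in for the paper's learned attention are all our assumptions, not the UDA-GCN implementation.

```python
# Toy sketch of dual feature aggregation: one GCN-style pass over the local
# adjacency and one over a global (here: 2-hop) structure, then a fixed
# combination standing in for the paper's attention mechanism.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # 4-node path graph
X = np.eye(4)                              # one-hot node features

def normalize(adj):
    """Symmetric GCN normalization: D^-1/2 (A + I) D^-1/2."""
    a = adj + np.eye(len(adj))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

local_h = normalize(A) @ X                        # local consistency
global_h = normalize(np.clip(A @ A, 0, 1)) @ X    # global (2-hop) consistency
h = 0.5 * local_h + 0.5 * global_h                # attention-like combination
print(h.shape)
```

The actual model learns the combination weights per node and adds the three adversarial/classification losses on top; the sketch only shows the aggregation skeleton.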

Journal ArticleDOI
TL;DR: It is shown that overexpression of translation initiation factor eIF4E in microglia results in autism-like behaviour in male, but not female, mice, and it is proposed that functional perturbation of male microglia is an important cause for sex-biased ASD.
Abstract: Mutations that inactivate negative translation regulators cause autism spectrum disorders (ASD), which predominantly affect males and exhibit social interaction and communication deficits and repetitive behaviors. However, the cells that cause ASD through elevated protein synthesis resulting from these mutations remain unknown. Here we employ conditional overexpression of translation initiation factor eIF4E to increase protein synthesis in specific brain cells. We show that exaggerated translation in microglia, but not neurons or astrocytes, leads to autism-like behaviors in male mice. Although microglial eIF4E overexpression elevates translation in both sexes, it only increases microglial density and size in males, accompanied by microglial shift from homeostatic to a functional state with enhanced phagocytic capacity but reduced motility and synapse engulfment. Consequently, cortical neurons in the mice have higher synapse density, neuroligins, and excitation-to-inhibition ratio compared to control mice. We propose that functional perturbation of male microglia is an important cause for sex-biased ASD.

Journal ArticleDOI
TL;DR: The positive power and potential of AI must be harnessed in the fight to slow the spread of COVID-19 in order to save lives and limit the economic havoc due to this horrific disease.
Abstract: The COVID-19 pandemic is taking a colossal toll in human suffering and lives. A significant amount of new scientific research and data sharing is underway due to the pandemic, which is still rapidly spreading. There is now a growing amount of coronavirus-related datasets as well as published papers that must be leveraged along with artificial intelligence (AI) to fight this pandemic by driving new approaches to drug discovery, vaccine development, and public awareness. AI can be used to mine this avalanche of new data and papers to extract new insights by cross-referencing papers and searching for patterns; such algorithms could help discover possible new treatments or aid in vaccine development. Drug discovery is not a trivial task, and AI technologies like deep learning can help accelerate this process by helping predict which existing drugs, or brand-new drug-like molecules, could treat COVID-19. AI techniques can also help disseminate vital information across the globe and reduce the spread of false information about COVID-19. The positive power and potential of AI must be harnessed in the fight to slow the spread of COVID-19 in order to save lives and limit the economic havoc due to this horrific disease.

Proceedings ArticleDOI
09 Jul 2020
TL;DR: Dual-Regularized Graph Convolutional Networks (DR-GCN) is proposed to handle multi-class imbalanced graphs, where two types of regularization are imposed to tackle class imbalanced representation learning.
Abstract: Networked data often demonstrate the Pareto principle (i.e., 80/20 rule) with skewed class distributions, where most vertices belong to a few majority classes and minority classes only contain a handful of instances. When presented with imbalanced class distributions, existing graph embedding learning tends to bias to nodes from majority classes, leaving nodes from minority classes under-trained. In this paper, we propose Dual-Regularized Graph Convolutional Networks (DR-GCN) to handle multi-class imbalanced graphs, where two types of regularization are imposed to tackle class imbalanced representation learning. To ensure that all classes are equally represented, we propose a class-conditioned adversarial training process to facilitate the separation of labeled nodes. Meanwhile, to maintain training equilibrium (i.e., retaining quality of fit across all classes), we force unlabeled nodes to follow a similar latent distribution to the labeled nodes by minimizing their difference in the embedding space. Experiments on real-world imbalanced graphs demonstrate that DR-GCN outperforms the state-of-the-art methods in node classification, graph clustering, and visualization.
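The second DR-GCN regularizer, pulling the unlabeled nodes' latent distribution toward the labeled nodes', can be illustrated with a deliberately simplified penalty. The mean-gap function below is our stand-in for the paper's actual distribution-difference term, and the embeddings are random toys:

```python
# Toy sketch of the distribution-matching idea: penalize the difference
# between labeled and unlabeled node embeddings in the latent space.
# (A simple squared mean gap stands in for the paper's regularizer.)
import numpy as np

rng = np.random.default_rng(0)
labeled = rng.normal(loc=0.0, size=(50, 8))     # embeddings of labeled nodes
unlabeled = rng.normal(loc=1.0, size=(200, 8))  # embeddings of unlabeled nodes

def mean_gap(a: np.ndarray, b: np.ndarray) -> float:
    """Squared distance between embedding means; smaller = better matched."""
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

penalty = mean_gap(labeled, unlabeled)
print(round(penalty, 3))  # nonzero: the two groups are offset by design
```

During training, such a penalty is minimized alongside the class-conditioned adversarial loss, keeping under-represented classes from drifting in the embedding space.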

Journal ArticleDOI
TL;DR: To identify county and facility factors associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreaks in skilled nursing facilities (SNFs), data were collected on patients, staff, and facilities involved in SARS-CoV-2 outbreaks in SNFs.
Abstract: OBJECTIVE To identify county and facility factors associated with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreaks in skilled nursing facilities (SNFs). DESIGN Cross-sectional study linking county SARS-CoV-2 prevalence data, administrative data, state reports of SNF outbreaks, and data from Genesis HealthCare, a large multistate provider of post-acute and long-term care. State data are reported as of April 21, 2020; Genesis data are reported as of May 4, 2020. SETTING AND PARTICIPANTS The Genesis sample consisted of 341 SNFs in 25 states, including a subset of 64 SNFs that underwent universal testing of all residents. The non-Genesis sample included all other SNFs (n = 3,016) in the 12 states where Genesis operates that released the names of SNFs with outbreaks. MEASUREMENTS For Genesis and non-Genesis SNFs: any outbreak (one or more residents testing positive for SARS-CoV-2). For Genesis SNFs only: number of confirmed cases, SNF case fatality rate, and prevalence after universal testing. RESULTS One hundred eighteen (34.6%) Genesis SNFs and 640 (21.2%) non-Genesis SNFs had outbreaks. A difference in county prevalence of 1,000 cases per 100,000 (1%) was associated with a 33.6 percentage point (95% confidence interval (CI) = 9.6-57.7 percentage point; P = .008) difference in the probability of an outbreak for Genesis and non-Genesis SNFs combined, and a difference of 12.5 cases per facility (95% CI = 4.4-20.8 cases; P = .003) for Genesis SNFs. A 10-bed difference in facility size was associated with a 0.9 percentage point (95% CI = 0.6-1.2 percentage point; P < .001) difference in the probability of outbreak. We found no consistent relationship between Nursing Home Compare Five-Star ratings or past infection control deficiency citations and probability or severity of outbreak. 
CONCLUSIONS Larger SNFs and SNFs in areas of high SARS-CoV-2 prevalence are at high risk for outbreaks and must have access to universal testing to detect cases, implement mitigation strategies, and prevent further potentially avoidable cases and related complications. J Am Geriatr Soc 68:2167-2173, 2020.

Journal ArticleDOI
TL;DR: This study explored the prevalence of sextortion behaviors among a nationally representative sample of 5,568 U.S. middle and high school students and found that males and nonheterosexual youth were more likely to be targeted, and males were morelikely to target others.
Abstract: Sextortion is the threatened dissemination of explicit, intimate, or embarrassing images of a sexual nature without consent, usually for the purpose of procuring additional images, sexual acts, money, or something else. Despite increased public interest in this behavior, it has yet to be empirically examined among adolescents. The current study fills this gap by exploring the prevalence of sextortion behaviors among a nationally representative sample of 5,568 U.S. middle and high school students. Approximately 5% of students reported that they had been the victim of sextortion, while about 3% admitted to threatening others who had shared an image with them in confidence. Males and nonheterosexual youth were more likely to be targeted, and males were more likely to target others. Moreover, youth who threatened others with sextortion were more likely to have been victims themselves. Implications for future research, as well as the preventive role that youth-serving professionals can play, are discussed.