
Showing papers by "SRI International" published in 2018


Journal ArticleDOI
TL;DR: This article provides an update on the developments in MetaCyc during the past two years, including the expansion of data and addition of new features.
Abstract: MetaCyc (https://MetaCyc.org) is a comprehensive reference database of metabolic pathways and enzymes from all domains of life. It contains more than 2570 pathways derived from >54 000 publications, making it the largest curated collection of metabolic pathways. The data in MetaCyc is strictly evidence-based and richly curated, resulting in an encyclopedic reference tool for metabolism. MetaCyc is also used as a knowledge base for generating thousands of organism-specific Pathway/Genome Databases (PGDBs), which are available in the BioCyc (https://BioCyc.org) and other PGDB collections. This article provides an update on the developments in MetaCyc during the past two years, including the expansion of data and addition of new features.

657 citations


Posted Content
TL;DR: The ChauffeurNet model augments an imitation loss with additional losses that penalize undesirable events and encourage progress; synthesized perturbations of the expert's driving provide an important signal for these losses and lead to robustness of the learned model, which can handle complex situations in simulation.
Abstract: Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.

452 citations
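
To make the loss-augmentation idea concrete, below is a minimal PyTorch-style sketch, assuming hypothetical helpers (env.collision_penalty, env.offroad_penalty, env.progress_along_route) and illustrative weights; it is not ChauffeurNet's actual loss implementation.

    import torch

    def total_loss(pred_traj, expert_traj, env,
                   w_imit=1.0, w_collision=10.0, w_offroad=10.0, w_progress=1.0):
        # Standard behavior-cloning term: match the expert trajectory.
        imitation = torch.mean((pred_traj - expert_traj) ** 2)
        # Environment losses: penalize overlap with other objects and leaving the road
        # (env.* are assumed differentiable penalty helpers, not a real API).
        collision = env.collision_penalty(pred_traj).mean()
        offroad = env.offroad_penalty(pred_traj).mean()
        # Encourage forward progress along the route.
        progress = -env.progress_along_route(pred_traj).mean()
        return (w_imit * imitation + w_collision * collision
                + w_offroad * offroad + w_progress * progress)

    # Synthesized perturbations (e.g., the start pose nudged off the lane) pass through the
    # same loss; for them the penalty terms, not the imitation term, carry the training signal.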


Proceedings ArticleDOI
24 Apr 2018
TL;DR: This paper introduces Kyber, a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM) based on hardness assumptions over module lattices; starting from a CPA-secure public-key encryption scheme, it constructs, in a black-box manner, CCA-secure encryption, key-exchange, and authenticated-key-exchange schemes.
Abstract: Rapid advances in quantum computing, together with the announcement by the National Institute of Standards and Technology (NIST) to define new standards for digital-signature, encryption, and key-establishment protocols, have created significant interest in post-quantum cryptographic schemes. This paper introduces Kyber (part of CRYSTALS – Cryptographic Suite for Algebraic Lattices – a package submitted to the NIST post-quantum standardization effort in November 2017), a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM), based on hardness assumptions over module lattices. Our KEM is most naturally seen as a successor to the NEWHOPE KEM (Usenix 2016). In particular, the key and ciphertext sizes of our new construction are about half the size, the KEM offers CCA instead of only passive security, the security is based on a more general (and flexible) lattice problem, and our optimized implementation results in essentially the same running time as the aforementioned scheme. We first introduce a CPA-secure public-key encryption scheme, apply a variant of the Fujisaki–Okamoto transform to create a CCA-secure KEM, and eventually construct, in a black-box manner, CCA-secure encryption, key exchange, and authenticated-key-exchange schemes. The security of our primitives is based on the hardness of Module-LWE in the classical and quantum random oracle models, and our concrete parameters conservatively target more than 128 bits of post-quantum security.

370 citations
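
The CPA-to-CCA step mentioned above follows the Fujisaki-Okamoto pattern; the sketch below shows that pattern in simplified form, with cpa_encrypt/cpa_decrypt as caller-supplied stand-ins for the underlying scheme and a toy hash H. It deliberately omits Kyber's exact hash choices, domain separation, and encoding details.

    import hashlib, os

    def H(*parts):
        # Toy hash-based key/randomness derivation (not Kyber's actual domain separation).
        return hashlib.sha3_256(b"".join(parts)).digest()

    def encaps(pk, cpa_encrypt):
        m = os.urandom(32)                    # random message that seeds the shared key
        coins = H(b"coins", pk, m)            # derive encryption randomness from m (deterministic re-encryption)
        ct = cpa_encrypt(pk, m, coins)
        return ct, H(b"key", m, ct)           # shared key bound to the ciphertext

    def decaps(sk, pk, ct, cpa_encrypt, cpa_decrypt, z):
        m = cpa_decrypt(sk, ct)
        coins = H(b"coins", pk, m)
        if cpa_encrypt(pk, m, coins) != ct:   # re-encryption check rejects malformed ciphertexts
            return H(b"reject", z, ct)        # implicit rejection using a secret seed z
        return H(b"key", m, ct)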


Book ChapterDOI
17 Apr 2018
TL;DR: An efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN is presented.
Abstract: Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed-integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.

289 citations
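
The alternation between cheap local optimization and the global MILP query can be summarized by the schematic loop below; nn_output, local_maximize, and milp_exceeds are assumed stand-ins for the network evaluation, the gradient-based local step, and the MILP feasibility query (this is the shape of the algorithm, not the authors' code).

    def estimate_upper_bound(nn_output, local_maximize, milp_exceeds, x0, tol=1e-3):
        # Over-approximate the maximum of the NN output over a polyhedral input set.
        best = nn_output(x0)                      # value at some feasible input
        while True:
            # Cheap local phase: climb to a local maximum starting from x0.
            x_loc = local_maximize(x0)
            best = max(best, nn_output(x_loc))
            # Expensive global phase: does any admissible input exceed best + tol?
            feasible, witness = milp_exceeds(best + tol)
            if not feasible:
                return best + tol                 # certified over-approximation of the true maximum
            x0 = witness                          # warm-start the next local phase from the MILP witness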


Journal ArticleDOI
TL;DR: It is shown that replacing truck delivery with drones can reduce greenhouse gas emissions and energy use when drone size and additional warehousing requirements are limited; if carefully deployed, drone-based delivery could reduce greenhouse gas emissions and energy use in the freight sector.
Abstract: The use of automated, unmanned aerial vehicles (drones) to deliver commercial packages is poised to become a new industry, significantly shifting energy use in the freight sector. Here we find the current practical range of multi-copters to be about 4 km with current battery technology, requiring a new network of urban warehouses or waystations as support. We show that, although drones consume less energy per package-km than delivery trucks, the additional warehouse energy required and the longer distances traveled by drones per package greatly increase the life-cycle impacts. Still, in most cases examined, the impacts of package delivery by small drone are lower than ground-based delivery. Results suggest that, if carefully deployed, drone-based delivery could reduce greenhouse gas emissions and energy use in the freight sector. To realize the environmental benefits of drone delivery, regulators and firms should focus on minimizing extra warehousing and limiting the size of drones.

288 citations
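
A back-of-the-envelope way to write the trade-off discussed above (illustrative notation, not the paper's life-cycle model):

    E^{\mathrm{drone}}_{\mathrm{pkg}} \approx e_{\mathrm{drone}}\, d_{\mathrm{drone}} + \frac{E_{\mathrm{warehouse}}}{N_{\mathrm{pkg}}}
    \qquad \text{vs.} \qquad
    E^{\mathrm{truck}}_{\mathrm{pkg}} \approx \frac{e_{\mathrm{truck}}\, d_{\mathrm{route}}}{N_{\mathrm{stops}}}

where e is energy per vehicle-km, d the distance flown or driven, E_warehouse the extra warehousing energy allocated to drone operations, and N the packages served; the drone option wins only when the warehouse term and d_drone are kept small, which is the condition the abstract identifies.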


Journal ArticleDOI
14 Feb 2018
TL;DR: This paper presents the lattice-based signature scheme Dilithium, which is a component of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) suite that was submitted to NIST’s call for post-quantum cryptographic standards.
Abstract: In this paper, we present the lattice-based signature scheme Dilithium, which is a component of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) suite that was submitted to NIST’s call for post-quantum cryptographic standards. The design of the scheme avoids all uses of discrete Gaussian sampling and is easily implementable in constant time. For the same security levels, our scheme has a public key that is 2.5X smaller than the previously most efficient lattice-based schemes that did not use Gaussians, while having essentially the same signature size. In addition to the new design, we significantly improve the running time of the main component of many lattice-based constructions – the number theoretic transform. Our AVX2-based implementation results in a speed-up of roughly a factor of 2 over the best previous algorithms that appear in the literature. The techniques for obtaining this speed-up also have applications to other lattice-based schemes.

279 citations
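
Since the number theoretic transform (NTT) is named above as the main performance component, here is a textbook O(n^2) reference NTT for orientation; the AVX2 code described in the abstract computes the same map with O(n log n) butterflies, and Dilithium actually uses a negacyclic variant with its own modulus and roots of unity.

    def ntt_naive(a, omega, q):
        # Reference number theoretic transform: a_hat[j] = sum_i a[i] * omega**(i*j) mod q.
        n = len(a)
        return [sum(a[i] * pow(omega, i * j, q) for i in range(n)) % q for j in range(n)]

    # Toy parameters for illustration only: omega = 4 is a primitive 4th root of unity mod 17
    # (4**2 = 16 != 1 and 4**4 = 256 = 1 mod 17), so this transforms length-4 vectors.
    print(ntt_naive([1, 2, 3, 4], omega=4, q=17))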


Posted ContentDOI
Donald J. Hagler1, Sean N. Hatton1, Carolina Makowski2, M. Daniela Cornejo3, Damien A. Fair3, Anthony Steven Dick4, Matthew T. Sutherland4, B. J. Casey5, M Deanna6, Michael P. Harms6, Richard Watts5, James M. Bjork7, Hugh Garavan8, Laura Hilmer1, Christopher J. Pung1, Chelsea S. Sicat1, Joshua M. Kuperman1, Hauke Bartsch1, Feng Xue1, Mary M. Heitzeg9, Angela R. Laird4, Thanh T. Trinh1, Raul Gonzalez4, Susan F. Tapert1, Michael C. Riedel4, Lindsay M. Squeglia10, Luke W. Hyde9, Monica D. Rosenberg5, Eric Earl3, Katia D. Howlett11, Fiona C. Baker12, Mary E. Soules9, Jazmin Diaz1, Octavio Ruiz de Leon1, Wesley K. Thompson1, Michael C. Neale7, Megan M. Herting13, Elizabeth R. Sowell13, Ruben P. Alvarez14, Samuel W. Hawes4, Mariana Sanchez4, Jerzy Bodurka15, Florence J. Breslin15, Amanda Sheffield Morris15, Martin P. Paulus15, W. Kyle Simmons15, Jonathan R. Polimeni16, Andre van der Kouwe16, Andrew S. Nencka17, Kevin M. Gray10, Carlo Pierpaoli14, John A. Matochik14, Antonio Noronha14, Will M. Aklin11, Kevin P. Conway11, Meyer D. Glantz11, Elizabeth Hoffman11, Roger Little11, Marsha F. Lopez11, Vani Pariyadath11, Susan R.B. Weiss11, Dana L. Wolff-Hughes, Rebecca DelCarmen-Wiggins, Sarah W. Feldstein Ewing3, Oscar Miranda-Dominguez3, Bonnie J. Nagel3, Anders Perrone3, Darrick Sturgeon3, Aimee Goldstone12, Adolf Pfefferbaum12, Kilian M. Pohl12, Devin Prouty12, Kristina A. Uban1, Susan Y. Bookheimer1, Mirella Dapretto1, Adriana Galván1, Kara Bagot1, Jay N. Giedd1, M. Alejandra Infante1, Joanna Jacobus1, Kevin Patrick1, Paul D. Shilling1, Rahul S. Desikan1, Yi Li1, Leo P. Sugrue1, Marie T. Banich18, Naomi P. Friedman18, John K. Hewitt18, Christian J. Hopfer18, Joseph T. Sakai18, Jody Tanabe18, Linda B. Cottler19, Sara Jo Nixon19, Linda Chang20, Christine C. Cloak20, Thomas Ernst20, Gloria Reeves20, David N. Kennedy21, Steve Heeringa9, Scott Peltier9, John E. Schulenberg9, Chandra Sripada9, Robert A. Zucker9, William G. Iacono22, Monica Luciana22, Finnegan J. Calabro23, Duncan B. Clark23, David A. Lewis23, Beatriz Luna23, Claudiu Schirda23, Tufikameni Brima24, John J. Foxe24, Edward G. Freedman24, Daniel W. Mruzek24, Michael J. Mason25, Rebekah S. Huber26, Erin McGlade26, Andrew P. Prescot26, Perry F. Renshaw26, Deborah A. Yurgelun-Todd26, Nicholas Allgaier8, Julie A. Dumas8, Masha Y. Ivanova8, Alexandra Potter8, Paul Florsheim27, Christine L. Larson27, Krista M. Lisdahl27, Michael E. Charness28, Bernard F. Fuemmeler7, John M. Hettema7, Joel L. Steinberg7, Andrey P. Anokhin6, Paul E.A. Glaser6, Andrew C. Heath6, Pamela A. F. Madden6, Arielle R. Baskin-Sommers5, R. Todd Constable5, Steven Grant11, Gayathri J. Dowling11, Sandra A. Brown1, Terry L. Jernigan1, Anders M. Dale1 
04 Nov 2018 - bioRxiv
TL;DR: The baseline neuroimaging processing and subject-level analysis methods used by the ABCD DAIC in the centralized processing and extraction of neuroanatomical and functional imaging phenotypes are described.
Abstract: The Adolescent Brain Cognitive Development (ABCD) Study is an ongoing, nationwide study of the effects of environmental influences on behavioral and brain development in adolescents. The ABCD Study is a collaborative effort, including a Coordinating Center, 21 data acquisition sites across the United States, and a Data Analysis and Informatics Center (DAIC). The main objective of the study is to recruit and assess over eleven thousand 9-10-year-olds and follow them over the course of 10 years to characterize normative brain and cognitive development, the many factors that influence brain development, and the effects of those factors on mental health and other outcomes. The study employs state-of-the-art multimodal brain imaging, cognitive and clinical assessments, bioassays, and careful assessment of substance use, environment, psychopathological symptoms, and social functioning. The data will provide a resource of unprecedented scale and depth for studying typical and atypical development. Here, we describe the baseline neuroimaging processing and subject-level analysis methods used by the ABCD DAIC in the centralized processing and extraction of neuroanatomical and functional imaging phenotypes. Neuroimaging processing and analyses include modality-specific corrections for distortions and motion, brain segmentation and cortical surface reconstruction derived from structural magnetic resonance imaging (sMRI), analysis of brain microstructure using diffusion MRI (dMRI), task-related analysis of functional MRI (fMRI), and functional connectivity analysis of resting-state fMRI.

276 citations


Book ChapterDOI
08 Sep 2018
TL;DR: The problem of zero-shot object detection (ZSD), which aims to detect object classes that are not observed during training, is introduced; the problems associated with selecting a background class are discussed, motivating two background-aware approaches for learning robust detectors.
Abstract: We introduce and tackle the problem of zero-shot object detection (ZSD), which aims to detect object classes which are not observed during training. We work with a challenging set of object classes, not restricting ourselves to similar and/or fine-grained categories as in prior works on zero-shot classification. We present a principled approach by first adapting visual-semantic embeddings for ZSD. We then discuss the problems associated with selecting a background class and motivate two background-aware approaches for learning robust detectors. One of these models uses a fixed background class and the other is based on iterative latent assignments. We also outline the challenge associated with using a limited number of training classes and propose a solution based on dense sampling of the semantic label space using auxiliary data with a large number of categories. We propose novel splits of two standard detection datasets – MSCOCO and VisualGenome, and present extensive empirical results in both the traditional and generalized zero-shot settings to highlight the benefits of the proposed methods. We provide useful insights into the algorithm and conclude by posing some open questions to encourage further research.

178 citations
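
A minimal sketch of the visual-semantic scoring underlying ZSD, assuming NumPy and illustrative shapes; W, class_embs, and bg_emb are placeholders for the learned projection, the class word embeddings, and the optional fixed background embedding (not the authors' exact model).

    import numpy as np

    def zsd_scores(region_feats, W, class_embs, bg_emb=None):
        # region_feats: (R, d_v) visual features of region proposals
        # W:            (d_v, d_s) learned projection into the semantic space
        # class_embs:   (C, d_s) word embeddings of (possibly unseen) class names
        # bg_emb:       optional (d_s,) background embedding (the fixed-background variant)
        projected = region_feats @ W                                   # (R, d_s)
        projected = projected / np.linalg.norm(projected, axis=1, keepdims=True)
        embs = class_embs if bg_emb is None else np.vstack([class_embs, bg_emb])
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        return projected @ embs.T                                      # cosine score per region and class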


Posted Content
TL;DR: This preprint adapts visual-semantic embeddings for zero-shot object detection (ZSD), which aims to detect object classes that are not observed during training, and proposes two background-aware approaches for learning robust detectors, one using a fixed background class and the other based on iterative latent assignments.
Abstract: We introduce and tackle the problem of zero-shot object detection (ZSD), which aims to detect object classes which are not observed during training. We work with a challenging set of object classes, not restricting ourselves to similar and/or fine-grained categories as in prior works on zero-shot classification. We present a principled approach by first adapting visual-semantic embeddings for ZSD. We then discuss the problems associated with selecting a background class and motivate two background-aware approaches for learning robust detectors. One of these models uses a fixed background class and the other is based on iterative latent assignments. We also outline the challenge associated with using a limited number of training classes and propose a solution based on dense sampling of the semantic label space using auxiliary data with a large number of categories. We propose novel splits of two standard detection datasets - MSCOCO and VisualGenome, and present extensive empirical results in both the traditional and generalized zero-shot settings to highlight the benefits of the proposed methods. We provide useful insights into the algorithm and conclude by posing some open questions to encourage further research.

110 citations


Journal ArticleDOI
TL;DR: It is shown that Trivium, whose security has been firmly established for over a decade, and the new variant Kreyvium both offer excellent performance; a second construction, based on exponentiation in binary fields, is impractical but sets the lowest depth record to 8 for 128-bit security.
Abstract: In typical applications of homomorphic encryption, the first step consists for Alice of encrypting some plaintext m under Bob's public key pk and of sending the ciphertext c = HE_pk(m) to some third-party evaluator Charlie. This paper specifically considers that first step, i.e. the problem of transmitting c as efficiently as possible from Alice to Charlie. As others suggested before, a form of compression is achieved using hybrid encryption. Given a symmetric encryption scheme E, Alice picks a random key k and sends a much smaller ciphertext c' = (HE_pk(k), E_k(m)) that Charlie decompresses homomorphically into the original c using a decryption circuit C_{E^{-1}}. In this paper, we revisit that paradigm in light of its concrete implementation constraints; in particular E is chosen to be an additive IV-based stream cipher. We investigate the performances offered in this context by Trivium, which belongs to the eSTREAM portfolio, and we also propose a variant with 128-bit security: Kreyvium. We show that Trivium, whose security has been firmly established for over a decade, and the new variant Kreyvium have excellent performance. We also describe a second construction, based on exponentiation in binary fields, which is impractical but sets the lowest depth record to 8 for 128-bit security.

95 citations
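
The hybrid pattern in the abstract, c' = (HE_pk(k), E_k(m)) later decompressed homomorphically by evaluating the stream cipher's decryption circuit, can be sketched as follows; he, stream_cipher, and eval_circuit are hypothetical interfaces, not a real homomorphic-encryption API.

    def hybrid_encrypt(he, pk, stream_cipher, m):
        # Alice: one small HE ciphertext of the symmetric key, plus a cheap symmetric
        # ciphertext of the (possibly large) payload.
        k, iv = stream_cipher.keygen(), stream_cipher.fresh_iv()
        return (he.encrypt(pk, k), iv, stream_cipher.encrypt(k, iv, m))

    def decompress_homomorphically(he, evk, c):
        he_k, iv, sym_ct = c
        # Charlie: evaluate the stream cipher's decryption circuit under HE, turning
        # E_k(m) into HE_pk(m) without ever learning k or m.
        return he.eval_circuit(evk, circuit="stream_cipher_decrypt", inputs=[he_k, iv, sym_ct])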


Posted Content
TL;DR: Voices Obscured In Complex Environmental Settings (VOICES) is a large-scale corpus of speech recorded by far-field microphones in noisy room conditions: audio was recorded in furnished rooms with background noise played in conjunction with foreground speech selected from the LibriSpeech corpus.
Abstract: This paper introduces the Voices Obscured In Complex Environmental Settings (VOICES) corpus, a freely available dataset under Creative Commons BY 4.0. This dataset will promote speech and signal processing research on speech recorded by far-field microphones in noisy room conditions. Publicly available speech corpora are mostly composed of isolated speech recorded with close-range microphones. A typical approach to better represent realistic scenarios is to convolve clean speech with noise and a simulated room response for model training. Despite these efforts, model performance degrades when tested against uncurated speech in natural conditions. For this corpus, audio was recorded in furnished rooms with background noise played in conjunction with foreground speech selected from the LibriSpeech corpus. Multiple sessions were recorded in each room to accommodate all foreground speech-background noise combinations. Audio was recorded using twelve microphones placed throughout the room, resulting in 120 hours of audio per microphone. This work is a multi-organizational effort led by SRI International and Lab41 with the intent to push forward state-of-the-art distant microphone approaches in signal processing and speech recognition.
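
The "typical approach" the abstract contrasts with, convolving clean speech with a room response and adding noise, looks roughly like the sketch below (assuming NumPy/SciPy and caller-supplied clean, rir, and noise arrays); VOICES instead records such mixtures acoustically.

    import numpy as np
    from scipy.signal import fftconvolve

    def simulate_far_field(clean, rir, noise, snr_db=10.0):
        # Reverberate the clean speech with a room impulse response, then add noise at a target SNR.
        reverberant = fftconvolve(clean, rir)[: len(clean)]
        noise = noise[: len(reverberant)]
        sig_pow = np.mean(reverberant ** 2)
        noise_pow = np.mean(noise ** 2) + 1e-12
        noise_scaled = noise * np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
        return reverberant + noise_scaled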

Journal ArticleDOI
TL;DR: The authors show that, in mouse models, activated macrophages create an "eat me" signal via calreticulin secretion onto neutrophils during peritonitis and onto cancer cells, in both cases leading to clearance by PrCR.
Abstract: Macrophage-mediated programmed cell removal (PrCR) is a process essential for the clearance of unwanted (damaged, dysfunctional, aged, or harmful) cells. The detection and recognition of appropriate target cells by macrophages is a critical step for successful PrCR, but its molecular mechanisms have not been delineated. Here using the models of tissue turnover, cancer immunosurveillance, and hematopoietic stem cells, we show that unwanted cells such as aging neutrophils and living cancer cells are susceptible to "labeling" by secreted calreticulin (CRT) from macrophages, enabling their clearance through PrCR. Importantly, we identified asialoglycans on the target cells to which CRT binds to regulate PrCR, and the availability of such CRT-binding sites on cancer cells correlated with the prognosis of patients in various malignancies. Our study reveals a general mechanism of target cell recognition by macrophages, which is the key for the removal of unwanted cells by PrCR in physiological and pathophysiological processes.

Journal ArticleDOI
TL;DR: Hormone therapy alleviates subjective sleep disturbances, particularly if vasomotor symptoms are present; however, because of contraindications, other options should be considered.

Proceedings ArticleDOI
03 Sep 2018
TL;DR: This work develops Trimmer, an application specialization tool that leverages user-provided configuration data to specialize an application to its deployment context, and demonstrates that Trimmer can effectively reduce code bloat.
Abstract: With the proliferation of new hardware architectures and ever-evolving user requirements, the software stack is becoming increasingly bloated. In practice, only a limited subset of the supported functionality is utilized in a particular usage context, thereby presenting an opportunity to eliminate unused features. In the past, program specialization has been proposed as a mechanism for enabling automatic software debloating. In this work, we show how existing program specialization techniques lack the analyses required for providing code simplification for real-world programs. We present an approach that uses stronger analysis techniques to take advantage of constant configuration data, thereby enabling more effective debloating. We developed Trimmer, an application specialization tool that leverages user-provided configuration data to specialize an application to its deployment context. The specialization process attempts to eliminate the application functionality that is unused in the user-defined context. Our evaluation demonstrates Trimmer can effectively reduce code bloat. For 13 applications spanning various domains, we observe a mean binary size reduction of 21% and a maximum reduction of 75%. We also show specialization reduces the surface for code-reuse attacks by reducing the number of exploitable gadgets. For the evaluated programs, we observe a 20% mean reduction in the total gadget count and a maximum reduction of 87%.
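
Trimmer itself works on compiled C/C++ programs (LLVM IR); the Python sketch below only illustrates the underlying idea of folding constant configuration data so that unused feature code becomes removable. Function names and the config layout are hypothetical.

    def serve_tls(request):    # placeholder for the TLS feature code
        raise NotImplementedError

    def serve_plain(request):  # placeholder for the plain-HTTP path
        return b"OK"

    # Generic program: every configurable feature ships in the binary.
    def handle_request(request, config):
        if config["tls"]:
            return serve_tls(request)
        return serve_plain(request)

    # Specialized for a deployment where config = {"tls": False} is a known constant:
    # the branch folds away, and serve_tls (plus everything reachable only from it)
    # becomes dead code that can be removed entirely.
    def handle_request_specialized(request):
        return serve_plain(request)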

Journal ArticleDOI
09 Feb 2018
TL;DR: The state-of-the-art and future developments of CubeSat radar missions for Earth remote sensing and the implications for NASA’s current and future Earth Science program are reviewed.
Abstract: Space-based radar observations have transformed our understanding of Earth over the last several decades. Driven by increasingly complex science questions, space radar missions have grown ever more sophisticated with costs rising often to hundreds of millions of dollars. At the other end of the cost and complexity spectrum, CubeSats have emerged in recent years as a disruptive innovation in the satellite sector and are now considered a means to address targeted science questions in a rapid and affordable manner. CubeSats enable new kinds of constellation-based Earth science observations not previously affordable with traditional spacecraft. Constellations of low-cost sensors provide both global spatial and high temporal coverage. As such, CubeSats are not only viable platforms to address current Earth science goals, but they also open a new realm of possibilities for science advancement and unique applications. Radar instruments have often been regarded as unsuitable for small satellite platforms due to their traditionally large size, weight, and power (SWaP). Burgeoning missions such as Radar in a CubeSat (RainCube) and CubeSat Imaging Radar for Earth Science (CIRES), being developed by Jet Propulsion Laboratory and SRI International, respectively, and funded by NASA’s Earth Science Technology Office (ESTO), are slated to dispel this notion. The key to the simplification and miniaturization of the radar subsystems in a manner that still offers compelling science and applications is 1) component technological advancement; and 2) an integrated instrument architecture and mission design that exploits the capabilities offered by CubeSat platforms. This paper reviews the state-of-the-art and future developments of CubeSat radar missions for Earth remote sensing and the implications for NASA’s current and future Earth Science program. The key enabling technologies for radio frequency (RF), digital, and antennas are surveyed, as well as the evolution of the CubeSat avionics, in the aspects that mostly impact radar development, namely power, volume, and attitude control and knowledge and precision orbit determination (POD). We investigate various radar applications that could benefit from low-cost CubeSat platforms, such as altimetry, sounding, precipitation profiling, scatterometry, synthetic aperture radar (SAR), and interferometric SAR (InSAR). We also explore the science motivation and impact of future missions that are based on these technological advancements.

Journal ArticleDOI
TL;DR: Menstrual cycle phase and menstrual-related disorders should be considered when assessing women's sleep complaints.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: A model-based causality inference technique for audit logging that does not require any application instrumentation or kernel modification is developed; applying it to attack investigation shows that the resulting system-wide attack causal graphs are highly precise and concise, with better quality than the state-of-the-art.
Abstract: In this paper, we develop a model-based causality inference technique for audit logging that does not require any application instrumentation or kernel modification. It leverages a recent dynamic analysis, dual execution (LDX), that can infer precise causality between system calls but unfortunately requires doubling the resource consumption, such as CPU time and memory consumption. For each application, we use LDX to acquire precise causal models for a set of primitive operations. Each model is a sequence of system calls that have inter-dependences, some of them caused by memory operations and hence implicit at the system call level. These models are described by a language that supports various complexity classes such as regular, context-free, and even context-sensitive. In production runs, a novel parser is deployed to parse audit logs (without any enhancement) into model instances and hence derive causality. Our evaluation on a set of real-world programs shows that the technique is highly effective. The generated models can recover causality with 0% false positives (FP) and false negatives (FN) for most programs and only 8.3% FP and 5.2% FN in the worst cases. The models also feature excellent composability, meaning that models derived from primitive operations can be composed together to describe causality for large and complex real-world missions. Applying our technique to attack investigation shows that the system-wide attack causal graphs are highly precise and concise, having better quality than the state-of-the-art.
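
As a flavor of the model-parsing idea, the sketch below encodes a hypothetical causal model for a "download" primitive as a regular pattern over system-call names and matches it against a trace; the paper's models are richer (up to context-sensitive) and are learned with LDX rather than written by hand.

    import re

    # Hypothetical causal model for a "download" primitive, expressed as a regular
    # pattern over a ';'-joined audit-log system-call trace.
    DOWNLOAD_MODEL = re.compile(r"socket;connect;(recvfrom;)+open;(write;)+close;", re.ASCII)

    def match_models(syscall_seq):
        # Return the spans of causal-model instances found in the trace.
        trace = ";".join(syscall_seq) + ";"
        return [m.span() for m in DOWNLOAD_MODEL.finditer(trace)]

    # The matched span groups the socket/connect/recvfrom calls with the subsequent
    # open/write calls, recovering the implicit network -> file causality.
    print(match_models(["socket", "connect", "recvfrom", "recvfrom", "open", "write", "close"]))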


Book ChapterDOI
16 Sep 2018
TL;DR: A simple 3D Convolutional Neural Network is proposed, and its model parameters are exploited to tailor the end-to-end architecture for the diagnosis of Alzheimer's disease (AD); the model diagnoses AD with an accuracy of 94.1% on the popular ADNI dataset using only MRI data, outperforming the previous state-of-the-art.
Abstract: As shown in computer vision, the power of deep learning lies in automatically learning relevant and powerful features for any prediction task, which is made possible through end-to-end architectures. However, deep learning approaches applied to classifying medical images do not adhere to this architecture, as they rely on several pre- and post-processing steps. This shortcoming can be explained by the relatively small number of available labeled subjects, the high dimensionality of neuroimaging data, and difficulties in interpreting the results of deep learning methods. In this paper, we propose a simple 3D Convolutional Neural Network and exploit its model parameters to tailor the end-to-end architecture for the diagnosis of Alzheimer’s disease (AD). Our model can diagnose AD with an accuracy of 94.1% on the popular ADNI dataset using only MRI data, which outperforms the previous state-of-the-art. Based on the learned model, we identify the disease biomarkers, the results of which were in accordance with the literature. We further transfer the learned model to diagnose mild cognitive impairment (MCI), the prodromal stage of AD, which yields better results compared to other methods.
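
A minimal 3D CNN in PyTorch in the spirit of the architecture described above; layer sizes and the input resolution are illustrative, not the authors' configuration.

    import torch
    import torch.nn as nn

    class Simple3DCNN(nn.Module):
        # Volumetric classifier: 3D convolutions over an MRI volume, global pooling, linear head.
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                 # x: (batch, 1, D, H, W) MRI volume
            h = self.features(x).flatten(1)
            return self.classifier(h)

    # Example: logits = Simple3DCNN()(torch.randn(2, 1, 96, 96, 96))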

Journal ArticleDOI
TL;DR: The proposed learning approach uses a receding-horizon formulation that samples from the initial states and disturbances to enforce properties such as reachability, safety, and stability, together with an over-approximate reachability analysis over the system, supported by range analysis for feedforward neural networks.

Journal ArticleDOI
TL;DR: This paper proposes a probabilistic extension of temporal logic, named Chance Constrained Temporal Logic (C2TL), that can be used to specify correctness requirements in presence of uncertainty, and presents a novel automated synthesis technique that compiles C2TL specification into mixed integer constraints.
Abstract: Autonomous vehicles have found wide-ranging adoption in aerospace, terrestrial, as well as marine use. These systems often operate in uncertain environments and in the presence of noisy sensors, and use machine learning and statistical sensor fusion algorithms to form an internal model of the world that is inherently probabilistic. Autonomous vehicles need to operate using this uncertain world-model, and hence, their correctness cannot be deterministically specified. Even once probabilistic correctness is specified, proving that an autonomous vehicle will operate correctly is a challenging problem. In this paper, we address these challenges by proposing a correct-by-synthesis approach to autonomous vehicle control. We propose a probabilistic extension of temporal logic, named Chance Constrained Temporal Logic (C2TL), that can be used to specify correctness requirements in the presence of uncertainty. C2TL extends temporal logic by including chance constraints as predicates in the formula, which allows modeling of perception uncertainty while retaining its ease of reasoning. We present a novel automated synthesis technique that compiles a C2TL specification into mixed integer constraints, and uses second-order (quadratic) cone programming to synthesize optimal control of autonomous vehicles subject to the C2TL specification. We also present a risk distribution approach that enables synthesis of plans with lower cost without increasing the overall risk. We demonstrate the effectiveness of the proposed approach on a diverse set of illustrative examples.
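
For orientation, one standard way a chance-constrained predicate becomes a deterministic second-order cone constraint (under Gaussian uncertainty; the paper's C2TL compilation may differ in its details): if the predicate is a linear inequality with uncertain coefficients a ~ N(\bar{a}, \Sigma), then for \epsilon \le 1/2,

    \Pr(a^{\top}x \le b) \ge 1 - \epsilon
    \;\Longleftrightarrow\;
    \bar{a}^{\top}x + \Phi^{-1}(1-\epsilon)\,\lVert \Sigma^{1/2}x \rVert_2 \le b,

where \Phi^{-1} is the standard normal quantile; constraints of this form are exactly what second-order (quadratic) cone programming handles.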

BookDOI
01 Jan 2018
TL;DR: In this article, Linn et al. argue that autonomous inquiry capabilities empower all citizens to take charge of their lives and demonstrate how these technologies can capture class performance and inform teachers of student progress.
Author(s): Linn, M.C.; McElhaney, K.W.; Gerard, L.; Matuk, C.
Abstract: To synthesize research on inquiry learning, we integrate advances in theory, instructional design, and technology. We illustrate how inquiry instruction can exploit the multiple, often conflicting ideas that students have about personal, societal, and environmental dilemmas and promote coherent arguments about economic disparity or health decision-making. We show how technologies such as natural language processing, interactive simulations, games, collaborative tools, and personalized guidance can support students to become autonomous learners. We discuss how these technologies can capture class performance and inform teachers of student progress. We highlight autonomous learning from (a) student-initiated investigations of thorny, contemporary problems using modeling and visualization tools, (b) design projects featuring analysis of alternatives, testing prototypes, and iteratively refining solutions in complex disciplines, and (c) personalized guidance that encourages gathering evidence from multiple sources and refining ideas. We argue that autonomous inquiry capabilities empower all citizens to take charge of their lives.

Proceedings ArticleDOI
21 Feb 2018
TL;DR: The findings will help educators of introductory computing be more cognizant of how best to leverage the programming environments they are using, and what aspects they need to focus on as they attempt to address the learning needs of all in "CS For All."
Abstract: Block-based programming environments such as Scratch, App Inventor, and Alice are a key part of introductory K-12 computer science (CS) experiences. Free-choice, open-ended projects are encouraged to promote learner agency and leverage the affordances of these novice-programming environments that also support creative engagement in CS. This mixed methods research examines what we can learn about student learning from such programming artifacts. Using an extensive rubric created to evaluate these projects along several dimensions, we coded a sample of ~80 Scratch and App Inventor projects randomly selected from 20 middle school classrooms in a diverse urban school district in the US. We present key elements of our rubric, and report on noteworthy trends including the types of artifacts created and which key programming constructs are or are not commonly used. We also report on how factors such as students' gender, grade, and teachers' teaching experience influenced students' projects. We discuss differences between programming environments in terms of artifacts created, use of computing constructs, complexity of projects, and use of features of the environment for creativity, interactivity, and engagement. Our findings will help educators of introductory computing be more cognizant of how best to leverage the programming environments they are using, and what aspects they need to focus on as they attempt to address the learning needs of all in "CS For All."

Journal ArticleDOI
TL;DR: These data reveal new functions for microglia and suggest a novel risk factor for AD, and are the first, to the authors' knowledge, to show regional differences in human microglia.

Journal ArticleDOI
TL;DR: Reductions in GM volume and cortical thickness likely indicate synaptic pruning and myelination, and suggest that diminished SWA in older, more mature adolescents may be driven by such processes within a number of frontal and parietal brain regions.
Abstract: During the course of adolescence, reductions occur in cortical thickness and gray matter (GM) volume, along with a 65% reduction in slow-wave (delta) activity during sleep (SWA), but empirical data linking these structural brain and functional sleep differences are lacking. Here, we investigated specifically whether age-related differences in cortical thickness and GM volume accounted for the typical age-related difference in SWA during sleep. 132 healthy participants (age 12–21 years) from the National Consortium on Alcohol and NeuroDevelopment in Adolescence study were included in this cross-sectional analysis of baseline polysomnographic, electroencephalographic, and magnetic resonance imaging data. By applying mediation models, we identified a large, direct effect of age on SWA in adolescents, which explained 45% of the variance in ultra-SWA (0.3–1 Hz) and 52% of the variance in delta-SWA (1 to <4 Hz), where SWA was lower in older adolescents, as has been reported previously. In addition, we provide evidence that the structure of several, predominantly frontal and parietal, brain regions partially mediated this direct age effect: models including measures of brain structure explained an additional 3–9% of the variance in ultra-SWA and 4–5% of the variance in delta-SWA, with no differences between sexes. Replacing age with pubertal status in the models produced similar results. As reductions in GM volume and cortical thickness likely indicate synaptic pruning and myelination, these results suggest that diminished SWA in older, more mature adolescents may largely be driven by such processes within a number of frontal and parietal brain regions.
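
The mediation models referred to above follow the standard two-equation form (schematic notation: X = age or pubertal status, M = regional GM volume or cortical thickness, Y = SWA):

    M = aX + e_M, \qquad Y = c'X + bM + e_Y,

so the total age effect c = c' + ab splits into a direct effect c' and an indirect, brain-structure-mediated effect ab; the additional 3-9% (ultra-SWA) and 4-5% (delta-SWA) of variance quoted above correspond to adding M to the model.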

Journal ArticleDOI
TL;DR: A combination of preclinical and translationally driven studies has solidified TAAR1 as a key node in the regulation of dopaminergic signaling and holds great promise as a therapeutic target for mental illness, addiction, and sleep disorders.
Abstract: Introduction: The trace amines, endogenous amines closely related to the biogenic amine neurotransmitters, have been known to exert physiological and neurological effects for decades. The recent id...

Journal ArticleDOI
TL;DR: In this paper, the remaining kinetic uncertainties of FFCM-1 are examined using a perfectly stirred reactor (PSR) as the relevant model platform for which reliable experiments under the conditions tested are unavailable.

Proceedings Article
01 Jan 2018
TL;DR: In this paper, the problem of inferring Boolean non-Markovian rewards (also known as logical trace properties or specifications) from demonstrations provided by an agent operating in an uncertain, stochastic environment is formulated as a maximum a posteriori (MAP) probability inference problem, and the principle of maximum entropy is applied to derive an analytic demonstration likelihood model.
Abstract: Real-world applications often naturally decompose into several sub-tasks. In many settings (e.g., robotics) demonstrations provide a natural way to specify the sub-tasks. However, most methods for learning from demonstrations either do not provide guarantees that the artifacts learned for the sub-tasks can be safely recombined or limit the types of composition available. Motivated by this deficit, we consider the problem of inferring Boolean non-Markovian rewards (also known as logical trace properties or specifications) from demonstrations provided by an agent operating in an uncertain, stochastic environment. Crucially, specifications admit well-defined composition rules that are typically easy to interpret. In this paper, we formulate the specification inference task as a maximum a posteriori (MAP) probability inference problem, apply the principle of maximum entropy to derive an analytic demonstration likelihood model and give an efficient approach to search for the most likely specification in a large candidate pool of specifications. In our experiments, we demonstrate how learning specifications can help avoid common problems that often arise due to ad-hoc reward composition.
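
Schematically (notation illustrative, not necessarily the paper's), the inference task is

    \varphi^{*} \;=\; \arg\max_{\varphi \in \Phi} \Pr(\varphi \mid \xi_{1:n})
    \;=\; \arg\max_{\varphi \in \Phi} \Pr(\varphi)\prod_{i=1}^{n} \Pr(\xi_i \mid \varphi),

where \Phi is the candidate pool of specifications and \xi_{1:n} are the demonstrations; the likelihood \Pr(\xi_i \mid \varphi) is not assumed but derived from the principle of maximum entropy over behaviors in the stochastic environment.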

Journal ArticleDOI
TL;DR: The Rényi divergence is a measure of closeness of two probability distributions; it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography, and may also be used in the case of distinguishing problems.
Abstract: The Rényi divergence is a measure of closeness of two probability distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited for security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., semantic security of encryption schemes), when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters, and sometimes to simpler security proofs than the existing ones.
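
For reference, in the convention common in this line of work, the (exponentiated) Rényi divergence of order a in (1, \infty) for discrete distributions P, Q with \mathrm{Supp}(P) \subseteq \mathrm{Supp}(Q) is

    R_a(P \,\Vert\, Q) \;=\; \Bigl(\sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}}\Bigr)^{\frac{1}{a-1}},

i.e., the exponential of the more familiar logarithmic Rényi divergence; its multiplicativity over independent components and its probability-preservation property are what substitute for the additive properties of the statistical distance in the security proofs.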

Journal ArticleDOI
TL;DR: The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to develop its fundamental enablers.
Abstract: The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to develop its fundamental enablers. A challenge that distinguishes the IoBT from its civilian counterparts is resilience to a much larger spectrum of threats.