
Showing papers by "Mitre Corporation" published in 2018


Journal ArticleDOI
TL;DR: Synthea, an open-source software package that simulates the lifespans of synthetic patients, is developed; it models the 10 most frequent reasons for primary care encounters and the 10 chronic conditions with the highest morbidity in the United States.

191 citations


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A brief introduction to Named Data Networking's basic concepts and operations is offered, together with an extensive reference list for the design and development of NDN for readers interested in further exploration of the subject.
Abstract: As a proposed Internet architecture, Named Data Networking (NDN) is designed to network the world of computing devices by naming data instead of naming data containers as IP does today. With this change, NDN brings a number of benefits to network communication, including built-in multicast, in-network caching, multipath forwarding, and securing data directly. NDN also enables resilient communication in intermittently connected and mobile ad hoc environments, which is difficult to achieve by today's TCP/IP architecture. This paper offers a brief introduction to NDN's basic concepts and operations, together with an extensive reference list for the design and development of NDN for readers interested in further exploration of the subject.
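
To make the name-based retrieval and in-network caching concrete, here is a minimal Python sketch; the Router and Producer classes and the example names are invented for illustration and are not the actual NDN codebase (e.g., the NFD forwarder).

```python
# Minimal sketch of NDN-style retrieval: consumers request hierarchical
# names; a router satisfies requests from its content store (cache) or
# forwards them toward the producer, caching data on the way back.
class Router:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream      # next hop toward the producer
        self.content_store = {}      # in-network cache: name -> data

    def on_interest(self, name):
        if name in self.content_store:          # cache hit
            return self.content_store[name]
        assert self.upstream, f"no route for {name}"
        data = self.upstream.on_interest(name)  # forward upstream
        self.content_store[name] = data         # cache for later consumers
        return data

class Producer:
    def on_interest(self, name):
        return f"data({name}), signed by /example/producer"

edge = Router("edge", upstream=Router("core", upstream=Producer()))
print(edge.on_interest("/example/video/seg0"))  # fetched end to end
print(edge.on_interest("/example/video/seg0"))  # served from the edge cache
```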

90 citations


Journal ArticleDOI
TL;DR: In this paper, an agent-based model for analyzing the vulnerability of the financial system to asset- and funding-based fire sales is presented, which can illuminate the pathways for the propagation of key crisis dynamics such as fire sales and funding runs.
Abstract: This study addresses a critical regulatory shortfall by developing a platform to extend stress testing from a microprudential approach to a dynamic, macroprudential approach. This paper describes the ensuing agent-based model for analyzing the vulnerability of the financial system to asset- and funding-based fire sales. The model captures the dynamic interactions of agents in the financial system extending from the suppliers of funding through the intermediation and transformation functions of the bank/dealers to the financial institutions that use the funds to trade in the asset markets. The model replicates the key finding that it is the reaction to initial losses, rather than the losses themselves, that determine the extent of a crisis. By building on a detailed mapping of the transformations and dynamics of the financial system, the agent-based model provides an avenue toward risk management that can illuminate the pathways for the propagation of key crisis dynamics such as fire sales and funding runs.
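
The fire-sale feedback loop at the heart of the model can be sketched in a few lines: leveraged holders respond to an initial loss by selling to restore a target leverage, and the sales depress prices, triggering further losses. The parameters and the linear price-impact rule below are illustrative assumptions, not the paper's calibrated agent-based model.

```python
# Toy fire-sale dynamic: the reaction to the initial loss, not the loss
# itself, drives the spiral. All numbers are made up for illustration.
def fire_sale(initial_shock=0.05, target_leverage=10.0, impact=0.2,
              equity=10.0, assets=100.0, rounds=20):
    price = 1.0 * (1 - initial_shock)
    equity -= assets * initial_shock
    for _ in range(rounds):
        leverage = (assets * price) / max(equity, 1e-9)
        if leverage <= target_leverage or equity <= 0:
            break
        # sell enough (at the current price) to move back toward target
        sale = assets - target_leverage * equity / price
        assets -= sale
        dp = impact * sale / 100.0      # assumed linear price impact
        equity -= assets * dp           # mark-to-market loss on the rest
        price -= dp
    return price, equity

print(fire_sale())  # price and equity after the deleveraging spiral
```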

52 citations


Journal ArticleDOI
TL;DR: In this article, a mechanically reconfigurable patch antenna with polarization diversity is presented, which is composed of a fixed L-probe feed on a bottom substrate layer and a patch with truncated corners on an upper substrate layer that can rotate in azimuth.
Abstract: A mechanically reconfigurable patch antenna with polarization diversity is presented. The antenna is composed of a fixed L-probe feed on a bottom substrate layer and a patch with truncated corners on an upper substrate layer that can rotate in azimuth. The layers are held together with a thin bolt going through the center of both layers. The handedness of the circular polarization (right-hand circular polarization or left-hand circular polarization) may be changed by simply rotating the upper substrate by 90° with respect to the fixed capacitively coupled feed. Additionally, since the upper layer is not permanently attached, the resonant frequency may be modified by swapping out the patch with one having different dimensions. The L-probe presented in this letter is impedance-matched to patch sizes resonating in the range of 1.0–1.8 GHz. The antenna design is presented along with measurement results, which are shown to be in excellent agreement with simulations. This simple and low-cost reconfigurable design is ideal for applications or systems where the antenna requirements may be variable.
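
For a sense of scale, the classic half-wavelength approximation f_r ≈ c / (2L√ε_r) relates patch length to resonant frequency. The snippet below evaluates it over the matched band from the abstract, with an assumed substrate permittivity (the paper's substrate is not specified here).

```python
# Back-of-envelope patch length for a target resonance using the
# standard half-wavelength microstrip patch approximation.
c = 3e8
eps_r = 2.2                    # assumed substrate permittivity
for f in (1.0e9, 1.8e9):       # the matched band from the abstract
    L = c / (2 * f * eps_r ** 0.5)
    print(f"{f/1e9:.1f} GHz -> patch length ~ {L*100:.1f} cm")
```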

37 citations


Journal ArticleDOI
TL;DR: This paper proposes an algorithm that builds the citation graph from a document and automatically labels each edge according to its purpose, and analyzes the effectiveness of different clustering methods such as K-means and support vector machines to automatically label each citation with the corresponding label.
Abstract: A large number of cross-references to various bodies of text are used in legal texts, each serving a different purpose. It is often necessary for authorities and companies to look into certain types of these citations. Yet, there is a lack of automatic tools to aid in this process. Recently, citation graphs have been used to improve the intelligibility of complex rule frameworks. We propose an algorithm that builds the citation graph from a document and automatically labels each edge according to its purpose. Our method uses the citing text only and thus works only on citations whose purpose can be uniquely identified by their surrounding text. This framework is then applied to the US Code. This paper includes defining and evaluating a standard gold set of labels that covers the vast majority of citation types appearing in the US Code while remaining short enough for practical use. We also propose a novel linear-chain conditional random field model that extracts the features required for labeling the citations from the surrounding text. We then analyze the effectiveness of different clustering methods, such as K-means and support vector machines, in automatically labeling each citation with the corresponding label. Finally, we discuss the practical difficulties of this task and compare human accuracy with that of our end-to-end algorithm.
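
The shape of the pipeline, reduced to a toy: find citations, inspect the surrounding text, and assign a purpose label. The regex, cue lexicon, and labels below are invented stand-ins for the paper's gold label set and its CRF feature extraction and classifiers.

```python
# Label each citation edge from cues in the citing text (illustrative).
import re

LABEL_CUES = {
    "definition": ("as defined in", "within the meaning of"),
    "authority":  ("pursuant to", "under the authority of"),
    "exception":  ("notwithstanding", "except as provided in"),
}

def label_citations(text):
    edges = []
    for m in re.finditer(r"section \d+(?:\([a-z]\))?", text, re.I):
        context = text[max(0, m.start() - 40):m.start()].lower()
        label = next((lab for lab, cues in LABEL_CUES.items()
                      if any(c in context for c in cues)), "other")
        edges.append((m.group(0), label))
    return edges

doc = ("Except as provided in section 12(a), a permit is required. "
       "Fees are assessed pursuant to section 31.")
print(label_citations(doc))  # [('section 12(a)', 'exception'), ...]
```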

33 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of band-limited white Gaussian noise (BLWGN) on electromagnetically induced transparency (EIT) and Autler-Townes (AT) splitting, when performing atom-based continuous-wave (CW) radio-frequency (RF) electric (E) field strength measurements with Rydberg atoms in an atomic vapor was investigated.
Abstract: We investigate the effect of band-limited white Gaussian noise (BLWGN) on electromagnetically induced transparency (EIT) and Autler-Townes (AT) splitting, when performing atom-based continuous-wave (CW) radio-frequency (RF) electric (E) field strength measurements with Rydberg atoms in an atomic vapor. This EIT/AT-based E-field measurement approach is currently being investigated by several groups around the world as a means to develop a new International System of Units traceable RF E-field measurement technique. For this to be a useful technique, it is important to understand the influence of BLWGN. We perform EIT/AT based E-field experiments with BLWGN centered on the RF transition frequency and for the BLWGN blue-shifted and red-shifted relative to the RF transition frequency. The EIT signal can be severely distorted for certain noise conditions (bandwidth, center frequency, and noise power), hence altering the ability to accurately measure a CW RF E-field strength. We present a model to predict the line shifts and broadenings in the EIT signal in the presence of noise. This model includes AC Stark shifts and on resonance transitions associated with the noise source. The results of this model are compared to the experimental data, and we find very good agreement between the two.
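
The measurement principle reduces to one relation: the measured AT splitting is proportional to the applied field, |E| = 2πħΔf/℘, with ℘ the RF transition dipole moment. The snippet below evaluates it for assumed values of the splitting and dipole moment; neither number is taken from the paper.

```python
# Convert an observed Autler-Townes splitting into a field strength.
import math

hbar = 1.054571817e-34     # reduced Planck constant, J s
ea0 = 8.478353625e-30      # atomic unit of dipole moment (e*a0), C m
delta_f = 10e6             # assumed AT splitting, Hz
p = 1000 * ea0             # assumed Rydberg dipole moment, ~1000 e*a0

E = 2 * math.pi * hbar * delta_f / p
print(f"|E| ~ {E:.2f} V/m")  # ~0.78 V/m for these assumed values
```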

32 citations


Journal ArticleDOI
TL;DR: The methods and models that compose the Cyber Security Game, a software-implemented method that quantitatively identifies cyber security risks and uses this metric to determine the optimal employment of security methods for any given investment level, are discussed.
Abstract: This paper describes the Cyber Security Game (CSG). Cyber Security Game is a method that has been implemented in software that quantitatively identifies cyber security risks and uses this metric to determine the optimal employment of security methods for any given investment level. Cyber Security Game maximizes a system’s ability to operate in today’s contested cyber environment by minimizing its mission risk. The risk score is calculated by using a mission impact model to compute the consequences of cyber incidents and combining that with the likelihood that attacks will succeed. The likelihood of attacks succeeding is computed by applying a threat model to a system topology model and defender model. Cyber Security Game takes into account the widespread interconnectedness of cyber systems, where defenders must defend all multi-step attack paths and an attacker only needs one to succeed. It employs a game theoretic solution using a game formulation that identifies defense strategies to minimize the maximum...
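
A miniature version of that game formulation: under a fixed budget, pick the defense set that minimizes the worst-case residual mission risk over all attack paths. The paths, defenses, and risk numbers below are invented; CSG's actual mission-impact, threat, and topology models are far richer.

```python
# Minimax defense selection over a toy set of attack paths.
import itertools

attack_paths = ["phish->pivot->db", "vpn->db", "usb->scada"]
defenses = ["harden_vpn", "egress_filter", "usb_policy"]

def residual_risk(defense_set, path):
    base = {"phish->pivot->db": 0.6, "vpn->db": 0.8, "usb->scada": 0.5}[path]
    if "harden_vpn" in defense_set and path == "vpn->db":      base *= 0.2
    if "egress_filter" in defense_set and "pivot" in path:     base *= 0.5
    if "usb_policy" in defense_set and path == "usb->scada":   base *= 0.3
    return base

budget = 2  # the investment level: can afford two defenses
best = min((max(residual_risk(d, p) for p in attack_paths), d)
           for d in itertools.combinations(defenses, budget))
print("worst-case risk %.2f with defenses %s" % best)
```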

31 citations


Journal ArticleDOI
20 Feb 2018-JAMA
TL;DR: Major themes in the framework principles as revised by the LAN are reviewed, illustrating how they are applied in the LAN classification scheme, along with the LAN national goals for payment reform.
Abstract: The way physicians, hospitals, and other health care professionals are paid influences patient care because payment methods affect business models that clinicians and health care facilities use to prioritize investments, establish infrastructure, and design care processes. Fee-for-service medicine and its volume-based financial incentives can lead to overuse of low-value services and suboptimal care. A consensus is emerging among patients, health care professionals, payers, and purchasers that transitioning to alternative payment models (APMs) that better incentivize value for patients is essential for improving the quality and affordability of health care. This is evidenced by bipartisan support for the Medicare Access and CHIP Reauthorization Act (MACRA); numerous models from the CMS Innovation Center; the activities of the Physician-Focused Payment Model Technical Advisory Committee; and the achievements of states, private plans, and hospitals and physicians nationally. As health care organizations prioritize the transition to APMs, it is increasingly important to use consistent terminology about stages along the APM pathway and common methods for measuring progress. Frameworks that classify APMs serve both of these purposes. In 2014, the CMS published a system for classifying APMs.1 In the interim, the Health Care Payment Learning and Action Network (LAN), a public-private partnership driving multistakeholder consensus and coordinated action to accelerate the transition to APMs, built on, expanded, and refined the original system. The LAN was created through the CMS Alliance to Modernize Healthcare Federally Funded Research and Development Center, which is operated by MITRE Corp. The LAN incorporated diverse viewpoints by convening 2 multistakeholder groups consisting of patient and consumer representatives, health care professionals, payers, and purchasers to develop an original framework2 in 2016 and to update it in 2017. The revised framework,3 as developed by the LAN, reflects consensus on principles for APM design and classification along with recent, practical insights about implementing APMs (Box). Recent revisions have also brought the framework into alignment with MACRA (which repealed the Sustainable Growth Rate and streamlined multiple quality programs), insofar as all “advanced” APMs under MACRA fall into category 3 or higher. This Viewpoint reviews major themes in the framework principles as revised by the LAN, illustrating how they are applied in the LAN classification scheme,4 and the LAN national goals for payment reform.

28 citations


Journal ArticleDOI
TL;DR: The motivation, concept of operations, and structure of the LORELEI Program are described to provide some background for the design decisions made within the LoReHLT evaluation.
Abstract: The initial impetus for the establishment of the Low Resource Human Language Technology (LoReHLT) evaluation was for LoReHLT to serve as a program evaluation for the Defense Advanced Research Projects Agency (DARPA) Low Resource Languages for Emergent Incidents (LORELEI) research program. As evaluation planning developed, however, NIST and DARPA decided to open the program evaluation to the rest of the national and international research communities, in order to encourage research progress in the LORELEI-related problem space. This article describes the motivation, concept of operations, and structure of the LORELEI Program to provide some background for the design decisions made within the LoReHLT evaluation.

25 citations


Proceedings ArticleDOI
01 Jun 2018
TL;DR: The techniques explored range from simple bag-of-ngrams classifiers to neural architectures with varied attention and alignment mechanisms; logistic regression ties the systems together into an ensemble that answers reading comprehension questions with 82.27% accuracy.
Abstract: This paper describes MITRE’s participation in SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge. The techniques explored range from simple bag-of-ngrams classifiers to neural architectures with varied attention and alignment mechanisms. Logistic regression ties the systems together into an ensemble submitted for evaluation. The resulting system answers reading comprehension questions with 82.27% accuracy.
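
The ensembling step, sketched: each component system emits a score for a candidate answer, and a logistic regression over those scores makes the final call. The data below is synthetic stand-in for the component-model outputs; requires scikit-learn.

```python
# Logistic-regression ensemble over component-system scores (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# columns: ngram-classifier score, two neural-model scores (made up)
X = rng.random((n, 3))
y = (0.2 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] > 0.5).astype(int)

ens = LogisticRegression().fit(X[:150], y[:150])   # learn blend weights
print("held-out accuracy:", ens.score(X[150:], y[150:]))
```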

22 citations


Journal ArticleDOI
TL;DR: The Falcon Telescope Network (FTN) as mentioned in this paper is composed of 12 observatories in the United States, Chile, Germany, and Australia, with a potential site in South Africa, with the main objective that once in operation, the telescopes will be capable of working together to perform simultaneous and/or continuous observations of a single object in the sky.
Abstract: We present a new global array of small aperture optical telescopes designed to study artificial satellites and the nearby universe: the Falcon Telescope Network (FTN). Developed by the Center for Space Situational Awareness Research in the Department of Physics at the United States Air Force Academy (USAFA), the FTN is composed of 12 observatories in the United States, Chile, Germany, and Australia, with a potential site in South Africa. The observatory sites were strategically selected with the main objective that once in operation, the telescopes will be capable of working together to perform simultaneous and/or continuous observations of a single object in the sky. This capability allows the observation of artificial satellites from different baselines in a wide range of orbits, continuous data acquisition of variable astronomical sources, and rapid response observations of transient phenomena that require almost immediate follow-up (gamma-ray bursts, novae, or supernovae, etc.). Consisting of commercially available equipment, each observatory is equipped with a 0.5 m primary mirror telescope, a CCD camera, photometric filters, including a special filter to detect exoplanets, and a diffraction grating. The FTN is designed for remote and robotic operation with a host of automation software and services housed on the site computers and at USAFA. FTN partners will have access to a web-based interface where both the observation application as well as the raw data obtained by any of the Falcon nodes will be available. The FTN is a collaborative effort between the USAFA and educational or research institutions on four continents, demonstrating that, through the cooperation of multiple institutions of different levels and capabilities, high-level scientific and educational programs can be carried out, regardless of the geographic location of the various network members.

Journal ArticleDOI
01 Sep 2018
TL;DR: Three pediatric intensive care units adapted a novel 5-part improvement framework and successfully reduced blood culture use in critically ill children, demonstrating that different providers and practice environments can adapt diagnostic stewardship programs.
Abstract: Introduction Single center work demonstrated a safe reduction in unnecessary blood culture use in critically ill children. Our objective was to develop and implement a customizable quality improvement framework to reduce unnecessary blood culture testing in critically ill children across diverse clinical settings and various institutions. Methods Three pediatric intensive care units (14 bed medical/cardiac; 28 bed medical; 22 bed cardiac) in 2 institutions adapted and implemented a 5-part Blood Culture Improvement Framework, supported by a coordinating multidisciplinary team. Blood culture rates were compared for 24 months preimplementation to 24 months postimplementation. Results Blood culture rates decreased from 13.3, 13.5, and 11.5 cultures per 100 patient-days preimplementation to 6.4, 9.1, and 8.3 cultures per 100 patient-days postimplementation for Units A, B, and C, respectively; a decrease of 32% (95% confidence interval, 25-43%; P < 0.001) for the 3 units combined. Postimplementation, the proportion of total blood cultures drawn from central venous catheters decreased by 51% for the 3 units combined (95% confidence interval, 29-66%; P < 0.001). Notable differences between units included the identity and involvement of the project champion, adaptations of the clinical tools, and staff monitoring and communication of project progress. Qualitative data also revealed a core set of barriers and facilitators to behavior change around pediatric intensive care unit blood culture practices. Conclusions Three pediatric intensive care units adapted a novel 5-part improvement framework and successfully reduced blood culture use in critically ill children, demonstrating that different providers and practice environments can adapt diagnostic stewardship programs.

Book ChapterDOI
Steven Noel1
01 Jan 2018
TL;DR: A number of key developments in graph-based methods for assessing and improving the security of operational computer networks, maintaining situational awareness, and assuring organizational missions are reviewed, and are placed within the context of a number of complementary dimensions.
Abstract: There is a line of research extending over the last 20+ years applying graph-based methods for assessing and improving the security of operational computer networks, maintaining situational awareness, and assuring organizational missions. This chapter reviews a number of key developments in these areas, and places them within the context of a number of complementary dimensions. These dimensions are oriented to the requirements of operational security, to help guide practitioners towards matching their use cases with existing technical approaches. One dimension we consider is the phase of security operations (prevention, detection, and reaction) to which an approach applies. Another dimension is the operational layer (network infrastructure, security posture, cyberspace threats, mission dependencies) that an approach spans. We also examine the mathematical underpinnings of the various approaches as they apply to security requirements. Finally, we describe architectural aspects of various approaches, especially as they contribute to scalability and performance.
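
The defender/attacker asymmetry the chapter highlights is easy to see on a toy attack graph: the defender must cut every path below, while the attacker needs only one. The hosts and edges are hypothetical.

```python
# Enumerate all multi-step attack paths in a toy network graph.
GRAPH = {
    "internet":    ["web_srv", "vpn"],
    "web_srv":     ["app_srv"],
    "vpn":         ["workstation"],
    "workstation": ["app_srv", "db"],
    "app_srv":     ["db"],
    "db":          [],
}

def attack_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:               # avoid cycles
            yield from attack_paths(graph, nxt, dst, path)

for p in attack_paths(GRAPH, "internet", "db"):
    print(" -> ".join(p))                 # every path must be defended
```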

Journal ArticleDOI
TL;DR: This paper model this problem as a two-stage stochastic program and solve it with a column-generation-based heuristic, showing this method only needs 3 min to produce solutions within 6% of a true lower bound of the optimal for 99% of over 150 test cases.
Abstract: Motivated by a cybersecurity workforce optimization problem, this paper investigates optimizing staffing and shift scheduling decisions given unknown demand and multiple on-call staffing options at a 24/7 firm with three shifts per day, three analyst types, and several staffing and scheduling constraints. We model this problem as a two-stage stochastic program and solve it with a column-generation-based heuristic. Our computational study shows this method only needs 3 min to produce solutions within 6% of a true lower bound of the optimal for 99% of over 150 test cases.
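
A minimal two-stage analogue of that staffing problem, with brute-force enumeration standing in for the paper's column-generation heuristic: choose regular staff per shift now (first stage), then pay a premium for on-call analysts once a demand scenario is realized (second-stage recourse). Costs, scenarios, and probabilities are invented.

```python
# Two-stage stochastic staffing in miniature: minimize expected cost.
import itertools

SCENARIOS = [((4, 6, 3), 0.5), ((6, 8, 5), 0.3), ((9, 10, 7), 0.2)]
STAFF_COST, ONCALL_COST = 10.0, 16.0   # per analyst per shift (assumed)

def expected_cost(staff):
    # second stage: on-call analysts cover any shortfall per scenario
    recourse = sum(prob * sum(max(d - s, 0) for d, s in zip(demand, staff))
                   for demand, prob in SCENARIOS)
    return STAFF_COST * sum(staff) + ONCALL_COST * recourse

best = min(itertools.product(range(11), repeat=3), key=expected_cost)
print(best, round(expected_cost(best), 1))  # cheapest staffing plan
```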

Journal ArticleDOI
TL;DR: In this paper, a single mechanically sheared Marcellus shale fracture was investigated computationally and experimentally using computed tomography (CT) scans with a resolution of 26.8 μm.
Abstract: Flow through a single mechanically sheared Marcellus shale fracture was investigated computationally and experimentally. To provide a better understanding of the variation of hydraulic and geometrical characteristics of a fracture subjected to shearing, coupled shear flow tests on the fracture for four shearing displacement steps under constant normal stress were performed. At the end of each shearing step, computed tomography (CT) scans with resolution 26.8 μm were obtained and the corresponding fracture geometries were evaluated. The CT images were used to generate full aperture maps of the fracture configuration. In addition, average aperture maps were also created by averaging the full-resolution data over 10 × 10 pixels, smoothing out fine structural details. Computational modeling of water flow through the fractures at different shearing steps was performed using a modified local cubic law approach and the 3D full Navier–Stokes equations with the use of the ANSYS-Fluent software. Both the average aperture maps and full maps were used in these simulations. The experimental pressure drops of the fracture at shearing step 1, which has very small apertures, poorly matched the numerical results, quite likely because the fracture structure was inadequately captured by the scanning resolution. Shearing typically increased the aperture height of the fracture, whose features were then better captured by the CT scan. Good agreement between the experimental data and the numerical results of the full map for shearing step 2 was observed. The simulations were performed for both full and average aperture maps, and the effects of scan resolution and surface roughness on the accuracy of the results were studied. The modified local cubic law and full Navier–Stokes simulations of the averaged map fracture were found to be in good agreement. It was conjectured that this was because the nonlinear losses were insignificant for the smoothed out averaged map fracture. Similar comparisons with those of the full map showed agreement in trends, but there were some quantitative differences. The averaged fracture map simulations also predicted lower pressure drops compared to the full map, particularly for high flow rates. These differences were due to the fine-scale geometrical complexity (surface roughness) of fracture geometry that affects the fluid flow in the fracture. An improved cubic law model was also proposed, and its accuracy was verified by comparing its predictions with those of the Navier–Stokes simulations.
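
The baseline model in this comparison is the cubic law, Q = (w h³ / 12μ)(ΔP/L), whose strong sensitivity to aperture h is why shear-induced dilation matters so much. The values below are illustrative, not the experiment's.

```python
# Cubic-law flow through a parallel-plate fracture: Q scales as h**3.
mu = 1.0e-3        # water viscosity, Pa s
w, L = 0.05, 0.10  # fracture width and length, m (assumed)
dP = 5.0e3         # pressure drop, Pa (assumed)
for h in (50e-6, 100e-6, 200e-6):   # aperture, m
    Q = (w * h**3 / (12 * mu)) * dP / L
    print(f"h = {h*1e6:.0f} um -> Q = {Q*1e9:.1f} mm^3/s")
```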

Proceedings ArticleDOI
03 Mar 2018
TL;DR: A MBSE approach for technical measurement that begins with a set of mission objectives derived from stakeholder concerns is defined and examples are provided which illustrate the application of this approach to a CubeSat.
Abstract: While much has been written about technical measurement and Model-Based Systems Engineering (MBSE), very little literature exists that ties the two together. What does exist treats the topic in a general manner and is devoid of details. Given the vital role that technical measurement plays in the systems engineering process, and the ever-increasing adoption of the MBSE approach, there is a growing need to define how technical measurement would be implemented as part of an MBSE approach. The purpose of this paper is to address that need. Technical measurement is defined as the set of measurement activities used to provide insight into the progress made in the definition and development of the technical solution and the associated risks and issues [1]. Technical measures are used to: determine if the technical solution will meet stakeholder needs, provide early indications if the development effort is not progressing as needed to meet key milestones, predict the likelihood of the delivered solution to meet performance requirements, monitor high risk items, and assess the effectiveness of risk mitigation actions. MBSE is defined as the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases [2]. The benefits of using an MBSE approach over a traditional document-based systems engineering approach are: enhanced communications, reduced development risk, improved quality, and enhanced knowledge transfer. This paper defines a MBSE approach for technical measurement that begins with a set of mission objectives derived from stakeholder concerns. The objectives and concerns are represented as elements captured in the system model. Next, Measures of Effectiveness (MOEs) are derived from the mission objectives. Initially, these MOEs are captured in a special model element that allows for the MOEs to be described in a natural language format that stakeholders will understand. Those initial MOEs are then quantified and captured as value properties of the Enterprise block. The MOEs are traced back to their originating source in the mission objectives. Next, Measures of Performance (MOPs) are derived from the enterprise-level MOEs and captured as value properties of the System block. The derivation of the MOPs is captured through the development of constraint blocks and parametric diagrams. This provides for traceability between MOPs and MOEs and supports performance analysis of the MOPs to predict if the MOEs will be met. MOPs are also traced to system requirements captured in the system model. Next, the process steps at the system-level are repeated at the subsystem-level to derive Technical Performance Measures (TPMs). These TPMs are traced back to MOPs and subsystem requirements in the same manner as described for MOPs. Examples are provided throughout the paper which illustrate the application of this approach to a CubeSat. Using a CubeSat example is appropriate given the historically high failure rate and rapid growth of these missions and the role technical measurement could play in increasing their success.
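
The traceability chain the paper builds (mission objectives → MOEs → MOPs → TPMs) can be sketched with a plain data structure standing in for SysML blocks and parametric diagrams; the CubeSat measures below are invented examples, not the paper's.

```python
# Walk a subsystem TPM back up to its originating mission objective.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    target: float
    unit: str
    derived_from: list = field(default_factory=list)   # upward trace

objective = Measure("Image 90% of targets per week", 0.9, "fraction")
moe = Measure("Weekly target coverage", 0.9, "fraction", [objective])
mop = Measure("Downlink throughput", 2.0, "Mbit/s", [moe])
tpm = Measure("Radio transmit power", 1.5, "W", [mop])

def trace(m):
    while m.derived_from:
        print(f"{m.name} -> {m.derived_from[0].name}")
        m = m.derived_from[0]

trace(tpm)
```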

Journal ArticleDOI
TL;DR: This research presents the first examination of national eCQM data to identify physician and practice-level characteristics associated with performance, and suggests patterns that may inform steps to improve performance.

Book ChapterDOI
01 Jan 2018
TL;DR: The goal of this chapter is to highlight the current state of password cracking techniques, as well as discuss some of the cutting edge approaches that may become more prevalent in the near future.
Abstract: At its heart, a password cracking attack is a modeling problem. An attacker makes guesses about a user’s password until they guess correctly or they give up. While the defender may limit the number of guesses an attacker is allowed, a password’s strength often depends on how hard it is for an attacker to model and reproduce the way in which a user created their password. If humans were effective at practicing unique habits or generating and remembering random values, cracking passwords would be a near impossible task. That is not the case, though. A vast majority of people still follow common patterns, from capitalizing the first letter of their password to putting numbers at the end. While people have remained mostly the same, the password security field has undergone major changes in an ongoing arms race between the attackers and defenders. The goal of this chapter is to highlight the current state of password cracking techniques, as well as discuss some of the cutting edge approaches that may become more prevalent in the near future.
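
The "common patterns" the chapter describes map directly onto rule-based candidate generation. Here is a toy version: capitalize the first letter, append digits, apply simple leetspeak. Real crackers (e.g., Hashcat rule engines or PCFG-based guessers) are far richer; this only shows the idea.

```python
# Toy mangling-rule generator modeling common user habits.
def candidates(word):
    seen = set()
    bases = {word,
             word.capitalize(),                              # First-letter cap
             word.translate(str.maketrans("aeo", "430"))}    # basic leetspeak
    for base in bases:
        for suffix in ("", "1", "123", "!", "2018"):         # digits at the end
            guess = base + suffix
            if guess not in seen:
                seen.add(guess)
                yield guess

print(list(candidates("autumn"))[:10])
```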

DOI
15 Apr 2018
TL;DR: This paper collects position papers from the participating experts supporting the viewpoints they represented in a discussion of how the combination of various simulation paradigms and methods - so-called hybrid simulation - can be utilized to address the complexity, intelligence, and adaptability of cyber physical systems.
Abstract: During the Spring Simulation Multi-Conference 2017, a group of invited experts discussed challenges in M&S of cyber physical systems. This 2018 panel is a follow-on activity, asking how the combination of various simulation paradigms, methods - so-called hybrid simulation - can be utilized regarding complexity, intelligence, and adaptability of cyber physical systems. This paper is a collection of position papers of the participating experts, supporting their viewpoints represented in the discussion.


Journal ArticleDOI
29 Apr 2018
TL;DR: This paper proposes an approach for generating a small number of diverse, feasible solutions for further evaluation by traffic managers, using a variation on Dijkstra's shortest-path algorithm to design reroutes for one or more flights.
Abstract: Decision support capabilities that provide flight-specific reroutes around constraints can enable more flexible and agile management of the airspace. For this benefit to be realized, automation must reliably provide operationally acceptable alternatives to traffic managers. This paper proposes an approach for generating a small number of diverse, feasible solutions for further evaluation by traffic managers. Using a variation on Dijkstra’s shortest-path algorithm, reroutes are designed for one or more flights, in which multiflight problems promote the active design of reroute flows. A multi-objective genetic algorithm is employed to evaluate tradeoffs between multiple criteria of operational acceptability, removing the need to predefine relative metric weightings. Finally, a combination of principal components analysis and spectral clustering is used to identify distinct groups of solutions and representative reroutes that capture different tradeoffs between metrics of operational acceptability. Results a...
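
The reroute-design step can be sketched as Dijkstra over a waypoint graph in which edges crossing a constrained region carry a penalty, so the "shortest" path trades distance against constraint avoidance. The graph, weights, and penalty scheme below are illustrative assumptions, not the paper's implementation.

```python
# Penalty-weighted shortest path as a stand-in for the reroute designer.
import heapq

def dijkstra(graph, src, dst):
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr, path + [nbr]))
    return float("inf"), []

PENALTY = 50.0  # cost of crossing the constrained airspace (assumed)
graph = {
    "DEP": [("A", 10), ("B", 12)],
    "A":   [("ARR", 15 + PENALTY)],   # direct edge crosses the constraint
    "B":   [("C", 8)],
    "C":   [("ARR", 9)],
    "ARR": [],
}
print(dijkstra(graph, "DEP", "ARR"))  # picks DEP -> B -> C -> ARR
```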

Journal ArticleDOI
09 Aug 2018
TL;DR: In this article, the authors make predictions of airspace and airport capacities several hours into the future, particularly in response to forecasted constraints; yet forecasted flight capacities are not yet available.
Abstract: Strategic air traffic flow management requires predictions of airspace and airport capacities several hours into the future and particularly in response to forecasted constraints. Yet, forecasted w...

Journal ArticleDOI
TL;DR: In this paper, the authors used constrained density-functional theory to investigate the mechanism of charge state conversion from the bright neutral charge state of the divacancy defect to the positive and negative charge states including corresponding recovery of the neutral charge states.
Abstract: Optical charge state switching was previously observed in photoluminescence experiments for the divacancy defect in $4H$-SiC. The participating dark charge state could not be identified with certainty. We use constrained density-functional theory to investigate the mechanism of charge state conversion from the bright neutral charge state of the divacancy defect to the positive and negative charge states including corresponding recovery of the neutral charge state. While we can confirm that the positive charge state is dark, we do not find evidence that the negative charge state is dark. We compute similar absorption energies required for conversion of the neutral defect to both charge states. However, the formation of the positive charge state requires a series of excitations involving a 2-photon excitation, while the creation of the negative charge state is achieved through a single 2-photon process. Calculated absorption energies for the recovery of the neutral defect from the positive charge state fit the experimental value better than those from the negative charge state. Defect formation energies as a function of the Fermi energy show a very small Fermi energy range in which the negative charge state is most stable, while the positive charge state exhibits a wide stability range. Overall, our computational results give more support to the identification of the dark charge state as the positive over the negative charge state in the mechanism of optical charge state switching.
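
The charge-state stability analysis rests on the standard defect formation-energy expression: whichever charge state q minimizes it at a given Fermi level is the stable one. The notation below is the generic convention, not quoted from the paper.

```latex
% E_tot[X^q] : supercell total energy with defect X in charge state q
% n_i, mu_i  : atoms added/removed and their chemical potentials
% E_F        : Fermi level, referenced to the valence-band maximum E_VBM
% E_corr     : finite-size electrostatic correction
\begin{equation}
  E^{f}[X^{q}] = E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}]
    - \sum_i n_i \mu_i + q\,\big(E_F + E_{\mathrm{VBM}}\big) + E_{\mathrm{corr}}
\end{equation}
```

Because the q-dependent term is linear in E_F, the most stable charge state changes with the Fermi level, which is exactly what produces the narrow stability window found for the negative state and the wide one for the positive state.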

Proceedings ArticleDOI
01 Oct 2018
TL;DR: This paper describes how NDN enables software design patterns that have been successful in a variety of Internet applications to be used in battlefield scenarios that involve highly dynamic and disrupted network conditions where implementations over the TCP/IP architecture have struggled.
Abstract: Named Data Networking (NDN) is a network architecture that forwards data directly based on application-defined names. This paper describes how NDN enables software design patterns that have been successful in a variety of Internet applications to be used in battlefield scenarios. These scenarios involve highly dynamic and disrupted network conditions where implementations over the TCP/IP architecture have struggled. The six patterns include: host-independent abstractions, multicast communication, pervasive network-accessible storage, opportunistic communication, namespace synchronization as transport, and data-centric security. The patterns are motivated by previous research into applications using NDN, which is briefly summarized.

Journal ArticleDOI
TL;DR: The peer‐reviewed literature on second‐order split‐plot designs is scarce, limited in examples, and often provides limited or too general guidelines.
Abstract: The fundamental principles of experiment design are factorization, replication, randomization, and local control of error. In many industrial experiments, however, departure from these principles is commonplace. Often in our experiments, complete randomization is not feasible because factor level settings are hard, impractical, or inconvenient to change, or the resources available to execute under homogeneous conditions are limited. These restrictions in randomization result in split-plot experiments. Also, we are often interested in fitting second-order models, which lead to second-order split-plot experiments. Although response surface methodology has experienced a phenomenal growth since its inception, second-order split-plot design has received only modest attention relative to other topics during the same period. Many graduate textbooks either ignore or only provide a relatively basic treatise of this subject. The peer-reviewed literature on second-order split-plot designs, especially with blocking, is scarce, limited in examples, and often provides limited or too general guidelines. This deficit of information leaves practitioners ill-prepared to face the many challenges associated with these types of designs. This article seeks to provide an overview of recent literature on response surface split-plot designs to help practitioners in dealing with these types of designs.

Proceedings ArticleDOI
01 Aug 2018
TL;DR: This paper proposes an automated, device-level solution that can be deployed on a single-board computer to effectively detect attacks and provide response strategies that deflect malicious signals and remediate infected devices when network-based cyber-attacks are successful.

Abstract: The communications infrastructure for building automation systems was not originally designed to be resilient and is susceptible to network attacks. Adversaries can exploit out-of-date legacy systems, insecure open protocols, exposure to the public internet, and outdated firmware to cause harm. To improve the defense strategies, significant efforts to provide defense through network detection have been conducted. However, the existing solutions require human intervention, such as an analyst or incident responder to investigate breaches and mitigate possible damages or data loss. Instead, this paper proposes an automated, device-level solution that can be deployed on a single-board computer to effectively detect attacks and provide response strategies that deflect malicious signals and remediate infected devices when network-based cyber-attacks are successful. The solution monitors critical control networks, analyzes packet data, and actively detects and responds to attacks using an unsupervised artificial neural network.
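
The detection idea in miniature: train an autoencoder on normal control-network traffic features and flag packets whose reconstruction error is anomalously high. This is a numpy-only sketch on synthetic features, not the paper's network or dataset.

```python
# Unsupervised anomaly detection via autoencoder reconstruction error.
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, (500, 8))            # benign feature vectors
W1 = rng.normal(0, 0.1, (8, 3))                    # 8 -> 3 bottleneck
W2 = rng.normal(0, 0.1, (3, 8))                    # 3 -> 8 decoder

for _ in range(2000):                              # plain gradient descent
    H = np.tanh(normal @ W1)
    err = H @ W2 - normal
    W2 -= 1e-3 * H.T @ err / len(normal)
    dH = err @ W2.T * (1 - H ** 2)
    W1 -= 1e-3 * normal.T @ dH / len(normal)

def score(x):                                      # reconstruction error
    return float(np.sum((np.tanh(x @ W1) @ W2 - x) ** 2))

threshold = np.percentile([score(x) for x in normal], 99)
attack = rng.normal(4.0, 1.0, 8)                   # out-of-distribution packet
print(score(attack) > threshold)                   # True -> trigger response
```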

Journal ArticleDOI
TL;DR: In this paper, a spectral shape algorithm applied to the derivative was found to be most successful in classifying these species in microscope scenes, and further work is required to determine if the spectral characterization of cyanobacterial genera can be scaled up to remote sensing applications.
Abstract: Cyanobacterial blooms are a nuisance and a potential hazard in freshwater systems worldwide. Remote sensing has been used to detect cyanobacterial blooms, but few studies have distinguished among genera of cyanobacteria. Because some genera are more likely to be toxic than others, this is a useful distinction. Hyperspectral imaging reflectance microscopy was used to examine cyanobacteria from Upper Klamath Lake, Oregon, at high spatial and spectral resolution to determine if two species found commonly in the lake, Aphanizomenon flos-aquae and Microcystis aeruginosa, can be separated spectrally. Of the analytical methods applied, a spectral shape algorithm applied to the derivative was found to be most successful in classifying these species in microscope scenes. Further work is required to determine if the spectral characterization of cyanobacterial genera can be scaled up to remote sensing applications.
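
The derivative spectral-shape idea in miniature: differentiate each reflectance spectrum, then assign the genus whose reference derivative shape correlates best. The spectra below are synthetic Gaussians standing in for pigment features; they are not the study's measured spectra or its exact algorithm.

```python
# Classify a spectrum by correlating derivative shapes against references.
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)           # wavelength grid, nm
gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)
refs = {                                           # assumed reference spectra
    "Aphanizomenon": gauss(620, 15) + 0.8 * gauss(675, 12),
    "Microcystis":   0.6 * gauss(620, 15) + gauss(675, 12),
}
d = lambda s: np.gradient(s, wl)                   # spectral first derivative

def classify(spectrum):
    ds = d(spectrum)
    return max(refs, key=lambda k: np.corrcoef(ds, d(refs[k]))[0, 1])

sample = 0.62 * gauss(620, 15) + 1.05 * gauss(675, 12) \
         + np.random.default_rng(2).normal(0, 0.01, wl.size)
print(classify(sample))   # expected: Microcystis
```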

Proceedings ArticleDOI
15 Apr 2018
TL;DR: The traditional definition for hybrid simulation is expanded to include recent research on multi-paradigm and other multi-faceted modeling approaches, and the focus of simulation support lies first in providing a virtual environment supporting development and testing, and second on utilizing simulation as part of the cyber physical system.
Abstract: This paper evaluates the state of the art of hybrid simulation support for cyber physical systems. The traditional definition for hybrid simulation is expanded to include recent research on multi-paradigm and other multi-faceted modeling approaches. Based on the review of current approaches, the focus of simulation support lies first in providing a virtual environment supporting development and testing, and second on utilizing simulation as part of the cyber physical system. A literature research on support of cyber physical systems within the simulation community shows common trends towards a common formalism, but an aligned research agenda has not been established.

Journal ArticleDOI
TL;DR: The results support the claim that agent-based modeling is particularly well-suited for representing the complex organizational and spatial structures inherent to military operations, and are urged to incorporate the key elements of the framework into existing modeling tools when performing studies of cyber attacks on mobile tactical networks and corresponding cybersecurity measures.
Abstract: Mobile tactical networks facilitate communication, coordination, and information dissemination between soldiers in the field. Their increasing use provides important benefits, yet also makes them a...