
Showing papers by "Mitre Corporation" published in 2015


Journal Article
TL;DR: Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, this work presents evidence of a deep imprint of human sociality in language, observing that the words of natural human language possess a universal positivity bias.
Abstract: Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, we present evidence of a deep imprint of human sociality in language, observing that (i) the words of natural human language possess a universal positivity bias, (ii) the estimated emotional content of words is consistent between languages under translation, and (iii) this positivity bias is strongly independent of frequency of word use. Alongside these general regularities, we describe interlanguage variations in the emotional spectrum of languages that allow us to rank corpora. We also show how our word evaluations can be used to construct physical-like instruments for both real-time and offline measurement of the emotional content of large-scale texts.
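
As a rough illustration of the "physical-like instruments" the abstract mentions, the sketch below scores a text by the frequency-weighted average of human word-happiness ratings; the ratings and words here are invented, not the study's data.

```python
# Minimal sketch of a word-happiness "instrument": score a text by the
# frequency-weighted average of human happiness ratings of its words.
# Ratings below are invented for illustration.
from collections import Counter

happiness = {"love": 8.4, "laughter": 8.5, "war": 1.8, "rain": 5.1, "the": 5.0}

def text_happiness(text: str) -> float:
    counts = Counter(w for w in text.lower().split() if w in happiness)
    total = sum(counts.values())
    return sum(happiness[w] * n for w, n in counts.items()) / total if total else float("nan")

print(round(text_happiness("the rain and the laughter after the war"), 2))
```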

291 citations


Journal Article
TL;DR: A baseline multiple-fault, multiconstellation advanced receiver autonomous integrity monitoring user algorithm for vertical guidance is presented, along with a list of possible algorithm improvements and simplifications.
Abstract: We present a baseline multiple fault and multiconstellation advanced receiver autonomous integrity monitoring user algorithm for vertical guidance. After reviewing the navigation requirements for localizer performance with vertical guidance down to 200 feet, we describe in detail how to process the pseudorange measurements, the nominal error models, and the prior fault probabilities to obtain the protection levels and other figures of merit. In particular, we show how to determine which fault modes must be monitored and a method for performing fault exclusion. Finally, we present a list of possible algorithm improvements and simplifications.
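
As a hedged sketch of one step the abstract describes (deciding which fault modes must be monitored), the snippet below enumerates satellite fault subsets whose prior probability exceeds a budget threshold; the probabilities, threshold, and subset cap are illustrative values, not the paper's.

```python
# Simplified fault-mode selection in the spirit of baseline ARAIM: monitor
# every fault subset whose prior probability exceeds the integrity budget
# allocated to unmonitored faults. All numbers here are illustrative.
from itertools import combinations

def modes_to_monitor(p_fault, p_threshold=1e-7, max_simultaneous=2):
    sats = list(p_fault)
    monitored = []
    for k in range(1, max_simultaneous + 1):
        for subset in combinations(sats, k):
            p = 1.0
            for s in sats:
                p *= p_fault[s] if s in subset else (1.0 - p_fault[s])
            if p > p_threshold:
                monitored.append((subset, p))
    return monitored

p_fault = {f"G{i:02d}": 1e-5 for i in range(1, 9)}   # 8 satellites
print(len(modes_to_monitor(p_fault)))                # 8: single faults only
```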

152 citations


Journal Article
TL;DR: A general introduction to MouseMine is presented, examples of its use are given, and the potential for further integration into the MGI interface is discussed.
Abstract: MouseMine (www.mousemine.org) is a new data warehouse for accessing mouse data from Mouse Genome Informatics (MGI). Based on the InterMine software framework, MouseMine supports powerful query, reporting, and analysis capabilities, the ability to save and combine results from different queries, easy integration into larger workflows, and a comprehensive Web Services layer. Through MouseMine, users can access a significant portion of MGI data in new and useful ways. Importantly, MouseMine is also a member of a growing community of online data resources based on InterMine, including those established by other model organism databases. Adopting common interfaces and collaborating on data representation standards are critical to fostering cross-species data analysis. This paper presents a general introduction to MouseMine, presents examples of its use, and discusses the potential for further integration into the MGI interface.
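
For readers who want to try the Web Services layer, the sketch below queries MouseMine's InterMine REST endpoint with Python's requests; the endpoint layout and query XML follow general InterMine conventions, so treat the exact paths and field names as assumptions to verify against the live service.

```python
# Sketch of a query against MouseMine's InterMine web-services layer.
# Path and field names follow standard InterMine REST conventions and
# should be verified against the live service.
import requests

BASE = "https://www.mousemine.org/mousemine/service"
query_xml = (
    '<query model="genomic" view="Gene.symbol Gene.name">'
    '<constraint path="Gene.symbol" op="=" value="Pax6"/>'
    '</query>'
)

resp = requests.get(f"{BASE}/query/results",
                    params={"query": query_xml, "format": "tab"})
print(resp.text)
```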

116 citations


Proceedings Article
05 Jan 2015
TL;DR: The Sense and Avoid Science and Research Panel (SARP) brought together key experts, determined guiding principles for UAS Well Clear, and aligned research efforts from NASA, Massachusetts Institute of Technology (MIT) Lincoln Laboratory, and the U.S. Air Force Research Laboratory to evaluate three UAS Well Clear candidates.
Abstract: A key challenge associated with integration of Unmanned Aircraft Systems (UAS) is developing a means to sense and avoid (SAA) other aircraft. One of the main functions of SAA is to remain “well clear” of other aircraft. While human pilots determine “well clear” subjectively, SAA systems need a clear quantitative definition of UAS Well Clear. The Sense and Avoid Science and Research Panel (SARP) brought together key experts, determined guiding principles for UAS Well Clear, and aligned research efforts from NASA, Massachusetts Institute of Technology (MIT) Lincoln Laboratory, and the U.S. Air Force Research Laboratory to evaluate three UAS Well Clear candidates in four modelling and simulation environments. The three Well Clear candidates were evaluated against eight agreed evaluation metrics. The result of the SARP evaluation process led to a recommended quantitative definition for UAS Well Clear, promising to close one of the most urgent research gaps for UAS SAA.
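
As a hedged illustration of what a quantitative Well Clear definition looks like in practice, the sketch below tests an intruder against horizontal, vertical, and time-based thresholds; the threshold values are placeholders, not the SARP's recommended numbers.

```python
# Illustrative Well Clear check: a violation requires being inside both a
# vertical threshold and a horizontal threshold (directly or within a
# crude tau-like time-to-threshold). Thresholds here are placeholders.
from dataclasses import dataclass

@dataclass
class RelativeState:
    horizontal_range_ft: float
    vertical_sep_ft: float
    range_rate_fps: float          # negative when closing

def well_clear_violated(s, hmd_ft=4000.0, h_ft=450.0, tau_s=35.0):
    horizontal = s.horizontal_range_ft <= hmd_ft
    if not horizontal and s.range_rate_fps < 0:
        time_to_threshold = (s.horizontal_range_ft - hmd_ft) / -s.range_rate_fps
        horizontal = time_to_threshold <= tau_s
    return horizontal and abs(s.vertical_sep_ft) <= h_ft

print(well_clear_violated(RelativeState(6000.0, 200.0, -100.0)))  # True
```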

97 citations


Journal Article
TL;DR: The proposed framework offers a novel approach for comprehensively studying the elements of cyber-physical system attacks, including the attacker objectives, cyber exploitation, control-theoretic properties and physical system properties.

74 citations


Journal Article
TL;DR: This work summarizes the strengths and weaknesses of existing schemas, and proposes the open-source CybOX schema as a foundation for storing and sharing digital forensic information and introduces and leverages initial steps of a Unified Cyber Ontology (UCO) effort to abstract and express concepts/constructs that are common across the cyber domain.

64 citations


Proceedings Article
06 Dec 2015
TL;DR: This panel brings together leading researchers to identify and discuss their views on conceptual modeling and to debate its definition, purpose, and benefits for the field of simulation.
Abstract: Over the last decade there has been a growing interest in ‘conceptual modeling’ for simulation. This is signified by a greater intensity of research and volume of papers on the topic. What is becoming apparent, however, is that when it comes to conceptual modeling there are quite different views and opinions. These differences may be beneficial for creating a debate that takes the field forward, but they can also lead to confusion. The purpose of this panel is for leading researchers to identify and discuss their views on conceptual modeling. In particular we will debate the definition, purpose and benefits of conceptual modeling for the field of simulation. Through the discussion we hope to highlight common ground and key areas of difference.

64 citations


Journal Article
TL;DR: This work examines evidence for climate adaptation and its role in colonization of novel environments in the model species Arabidopsis thaliana; a genomewide association study and genotypic mean correlations of fitness across plantings suggest the genetic basis of fitness in Rhode Island differs between spring and autumn cohorts, and from previous fitness measurements in European field sites.
Abstract: Understanding the genetic mechanisms that contribute to range expansion and colonization success within novel environments is important for both invasion biology and predicting species-level responses to changing environments. If populations are adapted to local climates across a species' native range, then climate matching may predict which genotypes will successfully establish in novel environments. We examine evidence for climate adaptation and its role in colonization of novel environments in the model species, Arabidopsis thaliana. We review phenotypic and genomic evidence for climate adaptation within the native range and describe new analyses of fitness data from European accessions introduced to Rhode Island, USA, in spring and fall plantings. Accessions from climates similar to the Rhode Island site had higher fitness, indicating a potential role for climate pre-adaptation in colonization success. A genomewide association study (GWAS) and genotypic mean correlations of fitness across plantings suggest the genetic basis of fitness in Rhode Island differs between spring and autumn cohorts, and from previous fitness measurements in European field sites. In general, these observations suggest a scenario of conditional neutrality for loci contributing to colonization success, although there was evidence of a fitness trade-off between fall plantings in Norwich, UK, and Rhode Island. GWAS suggested that antagonistic pleiotropy at a few specific loci may contribute to this trade-off, but this conclusion depended upon the accessions included in the analysis. Increased genomic information and phenotypic information make A. thaliana a model system to test for the genetic basis of colonization success in novel environments.

48 citations


Journal Article
Steven R. Best
TL;DR: In this article, a lower bound on the quality factor (Q) of an electrically small antenna that fully occupies a spherical volume is defined by the well-known Chu limit.
Abstract: The fundamental limitation (lower bound) on the quality factor (Q) of an electrically small antenna that fully occupies a spherical volume is defined by the well-known Chu limit. More recently, the subject of lower bounds for small antennas of arbitrarily shaped volumes has been given considerable attention in the literature. Gustafsson et al. were the first to present lower bounds for the Q of antennas of arbitrary volume. More significantly, they presented details on the optimum aspect ratios for minimizing the Q of electrically small antennas that fully occupy cylindrical volumes and planar areas. In previous works, we have described several electrically small-wire antenna designs that fully occupy spherical and cylindrical volumes and that are impedance matched and efficient and exhibit Qs that most closely approach the Chu limit and the Gustafsson limit, respectively.
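
For concreteness, the Chu limit the abstract benchmarks against is commonly written as Q ≥ 1/(ka)³ + 1/(ka), where k is the free-space wavenumber and a the radius of the smallest enclosing sphere; the snippet below evaluates it.

```python
# Chu lower bound on Q for an antenna enclosed by a sphere of radius a:
# Q_chu = 1/(ka)^3 + 1/(ka), with k = 2*pi*f/c.
import math

def chu_q(radius_m: float, freq_hz: float) -> float:
    ka = 2 * math.pi * freq_hz / 299_792_458.0 * radius_m
    return 1.0 / ka**3 + 1.0 / ka

# Electrically small example: a = 5 cm sphere at 300 MHz (ka ~ 0.31)
print(round(chu_q(0.05, 300e6), 1))
```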

44 citations


Journal Article
01 Jan 2015 - Database
TL;DR: This study aims to investigate the feasibility of scaling drug indication annotation through a crowdsourcing technique where an unknown network of workers can be recruited through the technical environment of Amazon Mechanical Turk, and concludes that the crowdsourcing approach not only results in significant cost and time saving, but also leads to accuracy comparable to that of domain experts.
Abstract: Motivated by the high cost of human curation of biological databases, there is an increasing interest in using computational approaches to assist human curators and accelerate the manual curation process. Towards the goal of cataloging drug indications from FDA drug labels, we recently developed LabeledIn, a human-curated drug indication resource for 250 clinical drugs. Its development required over 40 h of human effort across 20 weeks, despite using well-defined annotation guidelines. In this study, we aim to investigate the feasibility of scaling drug indication annotation through a crowdsourcing technique where an unknown network of workers can be recruited through the technical environment of Amazon Mechanical Turk (MTurk). To translate the expert-curation task of cataloging indications into human intelligence tasks (HITs) suitable for the average workers on MTurk, we first simplify the complex task such that each HIT only involves a worker making a binary judgment of whether a highlighted disease, in context of a given drug label, is an indication. In addition, this study is novel in the crowdsourcing interface design where the annotation guidelines are encoded into user options. For evaluation, we assess the ability of our proposed method to achieve high-quality annotations in a time-efficient and cost-effective manner. We posted over 3000 HITs drawn from 706 drug labels on MTurk. Within 8 h of posting, we collected 18 775 judgments from 74 workers, and achieved an aggregated accuracy of 96% on 450 control HITs (where gold-standard answers are known), at a cost of $1.75 per drug label. On the basis of these results, we conclude that our crowdsourcing approach not only results in significant cost and time saving, but also leads to accuracy comparable to that of domain experts. Database URL: ftp://ftp.ncbi.nlm.nih.gov/pub/lu/LabeledIn/Crowdsourcing/.
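
The aggregation behind numbers like the 96% control-HIT accuracy is, at its simplest, majority voting over redundant judgments; the sketch below shows that step on invented data (the paper's actual aggregation may be more elaborate).

```python
# Majority-vote aggregation of workers' binary judgments per HIT, scored
# against gold answers on control HITs. Data here is invented.
from collections import defaultdict

judgments = [  # (hit_id, worker_id, is_indication)
    ("h1", "w1", True), ("h1", "w2", True), ("h1", "w3", False),
    ("h2", "w1", False), ("h2", "w2", False), ("h2", "w3", True),
]
gold = {"h1": True, "h2": False}   # control HITs with known answers

votes = defaultdict(list)
for hit, _, label in judgments:
    votes[hit].append(label)
aggregated = {hit: sum(v) > len(v) / 2 for hit, v in votes.items()}

accuracy = sum(aggregated[h] == gold[h] for h in gold) / len(gold)
print(aggregated, accuracy)        # {'h1': True, 'h2': False} 1.0
```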

37 citations


Journal Article
TL;DR: The results of the Track 3 evaluation show that the adoptability of the five participating clinical NLP systems has considerable room for improvement.

Journal Article
TL;DR: This video describes how in January 2012, MITRE performed a real-time, red team/blue team cyber-wargame experiment that presented the opportunity to blend cyber-warfare with traditional mission planning and execution, including denial and deception tradecraft.
Abstract: As attack techniques evolve, cybersystems must also evolve to provide the best continuous defense. Leveraging classical denial and deception techniques to understand the specifics of adversary attacks enables an organization to build an active, threat-based cyber defense. The Web extra at https://youtu.be/9g_HLNXiLto is a video that describes how in January 2012, MITRE performed a real-time, red team/blue team cyber-wargame experiment that presented the opportunity to blend cyber-warfare with traditional mission planning and execution, including denial and deception tradecraft.

Journal Article
TL;DR: A literature search was conducted using multiple terms directly and indirectly associated with fatigue as part of an effort to identify the gaps in research on fatigue and performance in air traffic control as mentioned in this paper.
Abstract: Fatigue has been on the National Transportation Safety Board’s (NTSB) “most wanted list” since 1990 and remains a topic of active investigation. The focus of this article is summarizing the effect of fatigue on air traffic controllers. A literature search was conducted using multiple terms directly and indirectly associated with fatigue as part of an effort to identify the gaps in research on fatigue and performance in air traffic control. Additionally, direct outreach was conducted to identify current research that would not yet be reflected in the literature. This article describes the approach used, discusses the identified research, and synthesizes the body of knowledge on air traffic controller fatigue. This can be used to guide future research as well as develop fatigue risk management systems for air traffic controllers.

Journal Article
Steven Estes
TL;DR: Good initial evidence for the existence of a workload curve is provided and the results support further study in applied settings and other facets of workload (e.g., temporal workload).
Abstract: OBJECTIVE: In this paper I begin looking for evidence of a subjective workload curve. BACKGROUND: Results from subjective mental workload assessments are often interpreted linearly. However, I hypothesized that ratings of subjective mental workload increase nonlinearly with unitary increases in working memory load. METHOD: Two studies were conducted. In the first, the participant provided ratings of the mental difficulty of a series of digit span recall tasks. In the second study, participants provided ratings of mental difficulty associated with recall of visual patterns. The results of the second study were then examined using a mathematical model of working memory. RESULTS: An S curve, predicted a priori, was found in the results of both the digit span and visual pattern studies. A mathematical model showed a tight fit between workload ratings and levels of working memory activation. CONCLUSION: This effort provides good initial evidence for the existence of a workload curve. The results support further study in applied settings and other facets of workload (e.g., temporal workload). APPLICATION: Measures of subjective workload are used across a wide variety of domains and applications. These results bear on their interpretation, particularly as they relate to workload thresholds.
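
A minimal version of the paper's central analysis is fitting an S-shaped (logistic) curve to workload ratings as memory load grows; the sketch below does this on synthetic ratings, not the study's data.

```python
# Fit a logistic S curve to subjective workload ratings versus working
# memory load. Ratings are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def s_curve(x, lower, upper, midpoint, slope):
    return lower + (upper - lower) / (1 + np.exp(-slope * (x - midpoint)))

load = np.arange(1, 10)            # e.g., digits to recall
rating = np.array([1.1, 1.3, 1.8, 3.0, 5.0, 7.0, 8.2, 8.7, 8.9])

params, _ = curve_fit(s_curve, load, rating, p0=[1, 9, 5, 1])
print(params)                      # lower/upper asymptotes, midpoint, slope
```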

Proceedings Article
29 Oct 2015
TL;DR: The "truth about Big Data" is there are no fundamentally new DQ issues in Big Data analytics projects, and the key findings of this study reinforce that the primary factors affecting Big Data reside in the limitations and complexities involved with handling Big Data while maintaining its integrity.
Abstract: A USAF sponsored MITRE research team undertook four separate, domain-specific case studies about Big Data applications. Those case studies were initial investigations into the question of whether or not data quality issues encountered in Big Data collections are substantially different in cause, manifestation, or detection than those data quality issues encountered in more traditionally sized data collections. The study addresses several factors affecting Big Data Quality at multiple levels, including collection, processing, and storage. Though not unexpected, the key findings of this study reinforce that the primary factors affecting Big Data reside in the limitations and complexities involved with handling Big Data while maintaining its integrity. These concerns are of a higher magnitude than the provenance of the data, the processing, and the tools used to prepare, manipulate, and store the data. Data quality is extremely important for all data analytics problems. From the study's findings, the "truth about Big Data" is there are no fundamentally new DQ issues in Big Data analytics projects. Some DQ issues exhibit return-s-to-scale effects, and become more or less pronounced in Big Data analytics, though. Big Data Quality varies from one type of Big Data to another and from one Big Data technology to another.

Proceedings Article
06 Dec 2015
TL;DR: These are questions the authors find themselves asking frequently, and this panel paper provides a good opportunity to stimulate a discussion along these lines and to open it up to the M&S community.
Abstract: Hybrid Simulation (HS) is not new. However, there is contention in academic discourse as to what qualifies as HS. Is there a distinction between multi-method, multi-paradigm and HS? How do we integrate methods from disciplines like OR and computer science that contribute to the success of an M&S study? How do we validate a hybrid model when the whole (the combined model) is greater than the sum of its parts (the individual models)? Most dynamic simulations have a notion of time; how do we realize a unified representation of simulation time across methodologies, techniques and packages, and how do we prevent causality violations during inter-model message exchange? These are but some of the questions we find ourselves asking frequently, and this panel paper provides a good opportunity to stimulate a discussion along these lines and to open it up to the M&S community.
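
One concrete answer to the time and causality questions is conservative lockstep coupling, where models exchange messages only at shared time boundaries; the toy sketch below illustrates that idea (a simplification for illustration, not a position taken by the panel).

```python
# Toy conservative lockstep coupling: two models advance in a shared time
# step and exchange messages only at step boundaries, so neither model can
# receive a message stamped in its past.
def run_coupled(model_a, model_b, step, end_time):
    t, inbox_a, inbox_b = 0.0, [], []
    while t < end_time:
        out_a = model_a(t, inbox_a)        # consume messages for [t, t+step)
        out_b = model_b(t, inbox_b)
        inbox_a, inbox_b = out_b, out_a    # deliver at the next boundary
        t += step

run_coupled(lambda t, m: [f"a@{t}"], lambda t, m: [f"b@{t}"], 1.0, 3.0)
```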

Patent
03 Apr 2015
TL;DR: A system for machine learning model parameters for image compression, including partitioning image files into a first set of regions, determining a first set of machine learned model parameters based on the regions, and constructing a representation of each of the regions based on each set of machine-learned model parameters as discussed by the authors.
Abstract: A system for machine learning model parameters for image compression, including partitioning image files into a first set of regions, determining a first set of machine learned model parameters based on the regions, the first set of machine learned model parameters representing a first level of patterns in the image files, constructing a representation of each of the regions based on the first set of machine learned model parameters, constructing representations of the image files by combining the representations of the regions in the first set of regions, partitioning the representations of the image files into a second set of regions, and determining a second set of machine learned model parameters based on the second set of regions, the second set of machine learned model parameters representing a second level of patterns in the image files.
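
One plausible reading of the claim, sketched below with k-means standing in for whatever learner the patent actually uses: learn first-level parameters from image regions, rebuild the images from them, then learn second-level parameters from regions of the rebuilt images.

```python
# Two-level patch learning sketch: level-1 codebook from image patches,
# level-2 codebook from patches of the level-1 reconstructions. KMeans is
# an illustrative stand-in for the patent's learner.
import numpy as np
from sklearn.cluster import KMeans

def patches(img, size=4):
    h, w = img.shape
    return np.array([img[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(0)
images = [rng.random((16, 16)) for _ in range(4)]

level1 = KMeans(n_clusters=8, n_init=10, random_state=0).fit(
    np.vstack([patches(im) for im in images]))
rebuilt = [level1.cluster_centers_[level1.predict(patches(im))] for im in images]
level2 = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack(rebuilt))
print(level1.cluster_centers_.shape, level2.cluster_centers_.shape)
```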

Book Chapter
21 Sep 2015
TL;DR: A real-time host-based intrusion detection system that can be used to passively detect malfeasance against applications within Linux containers running in a standalone or in a cloud multi-tenancy environment is presented in this article.
Abstract: Linux containers are gaining increasing traction in both individual and industrial use, and as these containers get integrated into mission-critical systems, real-time detection of malicious cyber attacks becomes a critical operational requirement. This paper introduces a real-time host-based intrusion detection system that can be used to passively detect malfeasance against applications within Linux containers running in a standalone or in a cloud multi-tenancy environment. The demonstrated intrusion detection system uses bags of system calls monitored from the host kernel for learning the behavior of an application running within a Linux container and determining anomalous container behavior. Performance of the approach using a database application was measured and results are discussed.
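
A stripped-down version of the bag-of-system-calls idea: learn the normal syscall frequency profile of the containerized application, then score new windows by their distance from it. The traces and threshold below are invented; the paper's detector and features are more sophisticated.

```python
# Bag-of-system-calls sketch: compare a window's syscall frequency profile
# against a learned "normal" profile. Traces and threshold are invented.
from collections import Counter
import math

def bag(window):
    total = len(window)
    return {s: n / total for s, n in Counter(window).items()}

def distance(p, q):
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0) - q.get(k, 0)) ** 2 for k in keys))

normal = bag(["read", "write", "read", "futex", "read", "write"])
window = ["execve", "open", "read", "execve", "execve", "ptrace"]
score = distance(normal, bag(window))
print("anomalous" if score > 0.5 else "normal", round(score, 2))
```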

Proceedings Article
01 Jun 2015
TL;DR: This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1) and details the approaches explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features.
Abstract: This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1). This effort placed first in Semantic Similarity and second in Paraphrase Identification with scores of Pearson’s r of 61.9%, F1 of 66.7%, and maxF1 of 72.4%. We detail the approaches we explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features. Logistic regression is used to tie the systems together into the ensembles submitted for evaluation.
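
To make one component family concrete, the sketch below feeds two toy string-matching features into a logistic regression, the same model class the paper uses to tie its ensemble together; the features and data are simplified stand-ins.

```python
# Toy paraphrase scorer: simple string-matching features between a tweet
# pair, classified with logistic regression. The real system adds word
# embeddings, alignments, and recurrent networks.
from sklearn.linear_model import LogisticRegression

def features(a: str, b: str):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    jaccard = len(ta & tb) / max(len(ta | tb), 1)
    len_ratio = min(len(a), len(b)) / max(len(a), len(b), 1)
    return [jaccard, len_ratio]

pairs = [("the game was great", "great game tonight", 1),
         ("the game was great", "traffic on i95 is awful", 0)]
X = [features(a, b) for a, b, _ in pairs]
y = [label for *_, label in pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features("love this game", "what a great game")]))
```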

Journal Article
Kevin Burns
TL;DR: The empirical results show that humans do judge creativeness as a product of surprise and meaning, consistent with the computational model of arousal and appraisal, and with respect to advancing artificial intelligence in the arts as well as improving the computational evaluation of creativity in engineering and design.
Abstract: How do humans judge the creativeness of an artwork or other artifact? This article suggests that such judgments are based on the pleasures of an aesthetic experience, which can be modeled as a mathematical product of psychological arousal and appraisal. The arousal stems from surprise, and is computed as a marginal entropy using information theory. The appraisal assigns meaning, by which the surprise is resolved, and is computed as a posterior probability using Bayesian theory. This model is tested by obtaining human ratings of surprise, meaning, and creativeness for artifacts in a domain of advertising design. The empirical results show that humans do judge creativeness as a product of surprise and meaning, consistent with the computational model of arousal and appraisal. Implications of the model are discussed with respect to advancing artificial intelligence in the arts as well as improving the computational evaluation of creativity in engineering and design.
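
A back-of-the-envelope version of the product model: arousal as the information (surprise) of an observed feature, appraisal as the Bayesian posterior that the intended meaning explains it, and creativeness as their product. The numbers are invented, and the article's exact formulation may differ in detail.

```python
# Surprise x meaning sketch: arousal = -log2 of the feature's marginal
# probability; appraisal = posterior of the intended meaning via Bayes;
# creativeness tracks their product. All probabilities are invented.
import math

p_meaning = 0.4                      # prior of the intended meaning
p_feat_given_meaning = 0.10          # feature likelihood under that meaning
p_feat_given_other = 0.02            # feature likelihood otherwise

p_feature = (p_feat_given_meaning * p_meaning
             + p_feat_given_other * (1 - p_meaning))      # marginal
surprise = -math.log2(p_feature)                          # arousal (bits)
posterior = p_feat_given_meaning * p_meaning / p_feature  # appraisal
creativeness = surprise * posterior
print(round(surprise, 2), round(posterior, 2), round(creativeness, 2))
```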

Patent
12 Feb 2015
TL;DR: In this article, a plurality of anti-spoofing techniques are presented to detect interference with data provided by one or more navigation devices for different types of threat situations.
Abstract: Disclosed herein are system, method, and computer program product embodiments for detecting spoofing of a navigation device. A plurality of anti-spoofing techniques are provided. The plurality of anti-spoofing techniques detect interference with data provided by one or more navigation devices for a plurality of threat situations. Positioning, timing and frequency characteristics associated with the one or more navigation devices are analyzed in order to identify a threat situation among the plurality of threat situations. Based on the identified threat situation one or more of the anti-spoofing techniques are executed. The one or more anti-spoofing techniques can be executed in parallel in order to provide various anti-spoofing detection techniques at the same time.
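
The parallel-execution claim can be pictured as a small dispatcher running independent checks concurrently; the check functions below are hypothetical stand-ins for the positioning, timing, and frequency analyses the patent describes.

```python
# Run hypothetical anti-spoofing checks in parallel and report any alarms.
from concurrent.futures import ThreadPoolExecutor

def position_jump_check(d):  return abs(d["pos_delta_m"]) > 100
def clock_drift_check(d):    return abs(d["clock_drift_ns"]) > 50
def doppler_check(d):        return abs(d["doppler_resid_hz"]) > 10

checks = [position_jump_check, clock_drift_check, doppler_check]
data = {"pos_delta_m": 250, "clock_drift_ns": 12, "doppler_resid_hz": 3}

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: (c.__name__, c(data)), checks))
print([name for name, tripped in results if tripped])  # ['position_jump_check']
```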

Proceedings Article
14 Apr 2015
TL;DR: The authors developed their own Cyber Mission Impact Business Process Modeling tool, which implements only a functional subset of the business process modeling notation (BPMN) and, unlike more generic COTS tools, has been specifically designed for the representation of cyber processes, resources, and cyber incident effects.
Abstract: The promise of practicing mission assurance is to be able to leverage an understanding of how mission objectives and outcomes are dependent on supporting cyber resources. This makes it possible to analyze, monitor, and manage your cyber resources in a mission context. In previous work, we demonstrated how process modeling tools can simulate mission systems to allow us to dynamically compute the mission impacts of cyber events. We demonstrated the value of using this approach, but unfortunately practical deployment of our work was hampered by limitations of existing commercial off-the-shelf (COTS) tools for process modeling. To address this deficiency, we have developed our own Cyber Mission Impact Business Process Modeling tool. Although it implements only a functional subset of the business process modeling notation (BPMN), it has, unlike the more generic COTS tools, been specifically designed for the representation of cyber processes, resources, and cyber incident effects. The method and tool are described in this paper.

Journal Article
TL;DR: The ongoing efforts to standardize spectrum consumption models to ensure that they can effectively capture the boundaries of spectrum use in all of its dimensions by devices and systems of devices and provide a stable definition on how to arbitrate the compatibility of SCMs are described.
Abstract: Spectrum consumption models attempt to capture spectral, spatial, and temporal consumption of spectrum of any specific RF transmitter, receiver, system, or collection of systems. The information contained in the models enables better spectrum management practices and allows for the identification of spectrum reuse opportunities. The characteristics and structure of spectrum consumption models (SCMs) are being standardized within the IEEE DySPAN-SC P1900.5.2 group. This paper presents and discusses how SCMs can be used to enable spectrum sharing and new spectrum management interactions. We describe the ongoing efforts to standardize SCMs to ensure that they can: 1) effectively capture the boundaries of spectrum use in all of its dimensions by devices and systems of devices and 2) provide a stable definition on how to arbitrate the compatibility of SCMs. By achieving these objectives, SCMs will enable innovation in spectrum sharing and in the development of dynamic spectrum access systems.
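
In the simplest possible terms, arbitrating compatibility means checking whether two models' spectral, temporal, and spatial footprints all overlap; the sketch below reduces each dimension to an interval or label, which is far cruder than the actual P1900.5.2 constructs.

```python
# Crude SCM compatibility check: two uses conflict only if frequency,
# time, and space all overlap. Real SCMs capture much richer structure
# (power maps, protection thresholds, antenna models).
from dataclasses import dataclass

@dataclass
class SCM:
    f_mhz: tuple      # (low, high)
    hours: tuple      # (start, end), 0-24
    region: str       # placeholder spatial footprint

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def compatible(m1, m2):
    clash = (overlaps(m1.f_mhz, m2.f_mhz)
             and overlaps(m1.hours, m2.hours)
             and m1.region == m2.region)
    return not clash

tx = SCM((1755, 1760), (8, 18), "sector-7")
radar = SCM((1758, 1762), (20, 24), "sector-7")
print(compatible(tx, radar))  # True: bands overlap, hours do not
```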

Journal Article
TL;DR: The research investigated whether military officers experience a fit between the simulations with which they were trained and their tasks, as reflected by their perception of whether their performance improved, and whether the simulation supported their individual activities.
Abstract: The purpose of this research is to determine if there is task-technology fit for the use of simulation in training. In the context of military training, the research investigated whether military officers experience a fit between the simulations with which they were trained and their tasks (e.g., strategy or operational procedures), as reflected by their perception of whether their performance improved, and whether the simulation supported their individual activities. The research method followed the qualitative research tradition of transcendental phenomenology to query the experience of those who were trained with simulation. This research utilized interview questions that elicit respondents' experience in the broad areas of task-technology fit, and through phenomenological analysis, identified the salient variables within each of these constructs. The analysis resulted in the identification of themes of meaning which were compared with the existing task-technology fit variables. The research identified...

Journal Article
TL;DR: In this paper, the authors developed a prototype evolutionary algorithm designed to generate potential evasion schemes of the "inflated basis" type, taking as inputs a collection of asset types and tax entities, together with a rule set governing asset exchanges between these entities.
Abstract: The U.S. tax gap is estimated to exceed $450 billion, most of which arises from non-compliance on the part of individual taxpayers (GAO 2012; IRS 2006). Much is hidden in innovative tax shelters combining multiple business structures such as partnerships, trusts, and S-corporations into complex transaction networks designed to reduce and obscure the true tax liabilities of their individual shareholders. One known gambit employed by these shelters is to offset real gains in one part of a portfolio by creating artificial capital losses elsewhere through the mechanism of “inflated basis” (TaxAnalysts 2005), a process made easier by the relatively flexible set of rules surrounding “pass-through” entities such as partnerships (IRS 2009). The ability to anticipate the likely forms of emerging evasion schemes would help auditors develop more efficient methods of reducing the tax gap. To this end, we have developed a prototype evolutionary algorithm designed to generate potential schemes of the inflated basis type described above. The algorithm takes as inputs a collection of asset types and tax entities, together with a rule-set governing asset exchanges between these entities. The schemes produced by the algorithm consist of sequences of transactions within an ownership network of tax entities. Schemes are ranked according to a “fitness function” (Goldberg in Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Boston, 1989); the very best schemes are those that afford the highest reduction in tax liability while incurring the lowest expected penalty.
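
To make the search loop concrete, here is a toy generate-rank-recombine cycle with fitness defined, as in the abstract, as tax reduction minus expected penalty; the transaction encoding and payoffs are invented for illustration.

```python
# Toy evolutionary search over transaction sequences, ranked by
# fitness = tax reduction - expected penalty. All numbers are invented.
import random

random.seed(1)
TRANSACTIONS = ["contribute", "distribute", "sell", "swap", "loan"]

def random_scheme(n=5):
    return [random.choice(TRANSACTIONS) for _ in range(n)]

def fitness(scheme):
    reduction = 10 * scheme.count("swap") + 5 * scheme.count("distribute")
    penalty = 8 * scheme.count("sell")       # audit-prone step
    return reduction - penalty

pop = [random_scheme() for _ in range(20)]
for _ in range(30):                          # generations
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]            # crossover
        if random.random() < 0.2:            # mutation
            child[random.randrange(len(child))] = random.choice(TRANSACTIONS)
        children.append(child)
    pop = parents + children
print(pop[0], fitness(pop[0]))
```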

Book Chapter
12 Dec 2015
TL;DR: Several approaches to using predictive models to identify salient phrases in the predictive texts are explored and a design for incorporating this information into a decision-support tool is proposed.
Abstract: Administrative adjudications are the most common form of legal decisions in many countries, so improving the efficiency, accuracy, and consistency of administrative processes could significantly benefit agencies and citizens alike. We explore the hypothesis that predictive models induced from previous administrative decisions can improve subsequent decision-making processes. This paper describes three datasets for exploring this hypothesis: motion-rulings, Board of Veterans Appeals (BVA) decisions; and World Intellectual Property Organization (WIPO) domain name dispute decisions. Three different approaches for prediction in these domains were tested: maximum entropy over token n-grams; SVM over token n-grams; and a Hierarchical Attention Network (HAN) applied to the full text. Each approach was capable of predicting outcomes, with the simpler WIPO cases appearing to be much more predictable than BVA or motion-ruling cases. We explore several approaches to using predictive models to identify salient phrases in the predictive texts (i.e., motion or contentions and factual background) and propose a design for incorporating this information into a decision-support tool.
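
As a minimal sketch of one of the three approaches (a linear SVM over token n-grams), the snippet below predicts an outcome label from decision text; the two "decisions" are toy stand-ins, not BVA or WIPO data.

```python
# Linear SVM over token n-grams for outcome prediction, one of the three
# approaches the paper tests. Texts and labels here are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["the veteran's contentions are supported by the medical record",
         "the evidence of record does not support the claimed condition"]
outcomes = ["granted", "denied"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, outcomes)
print(model.predict(["the medical record supports the contentions"]))
```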

Book Chapter
Gaurav Thakur
TL;DR: An overview of the theory and stability properties of Synchrosqueezing, as well as applications of the technique to topics in cardiology, climate science, and economics are presented.
Abstract: The Synchrosqueezing transform is a time-frequency analysis method that can decompose complex signals into time-varying oscillatory components. It is a form of time-frequency reassignment that is both sparse and invertible, allowing for the recovery of the signal. This article presents an overview of the theory and stability properties of Synchrosqueezing, as well as applications of the technique to topics in cardiology, climate science, and economics.
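
The reassignment step at the heart of Synchrosqueezing relies on the derivative of instantaneous phase recovering a component's time-varying frequency; the sketch below demonstrates that principle on a single chirp via the analytic signal (Synchrosqueezing applies it per time-frequency bin of a wavelet transform).

```python
# Phase-derivative demonstration: recover the time-varying frequency of a
# chirp from the analytic signal, the estimate Synchrosqueezing uses to
# reassign wavelet coefficients.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (5 * t + 2 * t**2))   # instantaneous freq: 5 + 4t Hz

phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(round(inst_freq[100], 1), round(inst_freq[-100], 1))  # ~5.4 and ~12.6
```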

Proceedings Article
14 Apr 2015
TL;DR: The Department of Homeland Security, Science and Technology Directorate (DHS/S&T) is exploring the feasibility of geolocating pollen grains found on goods or people for compliance with U.S. import laws and criminal forensics.
Abstract: The Department of Homeland Security, Science and Technology Directorate (DHS/S&T) is exploring the feasibility of geolocating pollen grains found on goods or people for compliance with U.S. import laws and criminal forensics. A multi-disciplinary team built the Pollen Identification and Geolocation Technology (PIGLT) system to help users identify pollen samples and perform geolocation. Identification is performed using either traditional family, genus and species information, or a morphological ID system based on an existing database of herbaria samples. As the user makes morphological decisions, visual aids help exclude pollen taxa that lack given attributes. The user systematically lowers the number of matches until the number is small enough for visual identification. PIGLT has ∼5 images per sample, but experiments with Z-stack imagery may positively affect human identification. Given grain identities, geolocation proceeds using distributions developed with Maxent. The database is implemented in PostgreSQL and the user interface uses Django, a high-level Python Web framework.
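
The exclusion workflow the abstract describes is easy to picture as successive attribute filters over a taxa database; the sketch below uses invented taxa and traits purely for illustration.

```python
# Morphological exclusion sketch: each observed attribute removes taxa
# that lack it, shrinking the candidate set. Taxa and traits are invented.
taxa = {
    "Pinus":    {"shape": "saccate",    "aperture": "none"},
    "Quercus":  {"shape": "spheroidal", "aperture": "tricolpate"},
    "Ambrosia": {"shape": "spheroidal", "aperture": "triporate"},
}

def narrow(candidates, attribute, observed):
    return {n: t for n, t in candidates.items() if t[attribute] == observed}

candidates = dict(taxa)
for attribute, observed in [("shape", "spheroidal"), ("aperture", "triporate")]:
    candidates = narrow(candidates, attribute, observed)
    print(attribute, "->", sorted(candidates))
# shape -> ['Ambrosia', 'Quercus']; aperture -> ['Ambrosia']
```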

Proceedings Article
17 May 2015
TL;DR: This paper discusses use of the Object Management Group (OMG) Unified Profile for DoDAF and MODAF (UPDM) for architecture modeling to support a standards-based, layered “model of models” (MOM) approach.
Abstract: Organizations are changing their emphasis from “We need a new system” to “We need to achieve a specific outcome.” As these outcomes become more difficult to define and the associated systems more complex, the management, modeling and simulation of these SoS becomes equally challenging. Often, the SoS is modeled in all its complexity, at a single level of abstraction or level of detail. Instead of a “mega-model” approach, a standards-based, layered “model of models” (MOM) approach is what is necessary. This paper discusses use of the Object Management Group (OMG) Unified Profile for DoDAF and MODAF (UPDM) for architecture modeling. UPDM supports a MOM approach by enabling the development of integrated model layers such as an outcomes model layer and a component layer. An integrated, layered MOM is in keeping with the Model-Based Systems Engineering (MBSE) approach. The model layers can be referenced when detailed analysis is required, or hidden when a SoS viewpoint is required.

Journal Article
TL;DR: Significant aspects of this work include close integration of CTA and design thinking efforts, designing for an “envisioned world” of interaction with highly autonomous helicopter systems, and the importance of knowledge elicitation early in system design.
Abstract: Ensuring that unmanned aerial systems’ (UAS) control stations include a tight coupling of systems engineering with human factors, cognitive analysis, and design is key to their success. We describe...