
Showing papers by "Mitre Corporation" published in 2012


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the classification of human, bot, and cyborg accounts on Twitter and conduct a set of large-scale measurements with a collection of over 500,000 accounts.
Abstract: Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious content. More interestingly, in the middle ground between human and bot, there has emerged the cyborg, referring to either a bot-assisted human or a human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the differences among human, bot, and cyborg accounts in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.
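The intuition behind an entropy-based component can be illustrated with a toy sketch (the function, bin width, and thresholds below are illustrative assumptions, not the authors' exact formulation): highly regular posting intervals, typical of bots, yield low entropy, while irregular human posting yields higher entropy.

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_width=60):
    """Shannon entropy (bits) of inter-tweet intervals, binned into
    bin_width-second buckets. Regular, bot-like posting concentrates
    all intervals in one bucket and gives low entropy; irregular
    human posting spreads intervals across buckets."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(iv // bin_width for iv in intervals)
    n = sum(bins.values())
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A bot posting exactly every 5 minutes has zero interval entropy:
bot = [i * 300 for i in range(50)]
human = [0, 40, 700, 1500, 1560, 4000, 4100, 9000, 9050, 12000]
print(interval_entropy(bot) == 0.0)   # True
print(interval_entropy(human) > 1.0)  # True
```

A real detector would combine this signal with the spam and account-property components before the decision maker produces a label.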

600 citations


Journal ArticleDOI
TL;DR: It is shown that an alternative to dynamic face matcher selection is to train face recognition algorithms on datasets that are evenly distributed across demographics, as this approach offers consistently high accuracy across all cohorts.
Abstract: This paper studies the influence of demographics on the performance of face recognition algorithms. The recognition accuracies of six different face recognition algorithms (three commercial, two nontrainable, and one trainable) are computed on a large scale gallery that is partitioned so that each partition consists entirely of specific demographic cohorts. Eight total cohorts are isolated based on gender (male and female), race/ethnicity (Black, White, and Hispanic), and age group (18-30, 30-50, and 50-70 years old). Experimental results demonstrate that both commercial and the nontrainable algorithms consistently have lower matching accuracies on the same cohorts (females, Blacks, and age group 18-30) than the remaining cohorts within their demographic. Additional experiments investigate the impact of the demographic distribution in the training set on the performance of a trainable face recognition algorithm. We show that the matching accuracy for race/ethnicity and age cohorts can be improved by training exclusively on that specific cohort. Operationally, this leads to a scenario, called dynamic face matcher selection, where multiple face recognition algorithms (each trained on different demographic cohorts) are available for a biometric system operator to select based on the demographic information extracted from a probe image. This procedure should lead to improved face recognition accuracy in many intelligence and law enforcement face recognition scenarios. Finally, we show that an alternative to dynamic face matcher selection is to train face recognition algorithms on datasets that are evenly distributed across demographics, as this approach offers consistently high accuracy across all cohorts.

426 citations


Journal ArticleDOI
TL;DR: This paper describes why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology, and illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security.

190 citations


Journal ArticleDOI
01 Jan 2012-Database
TL;DR: Analysis of the interviews and survey provides a set of requirements for the integration of text mining into the biocuration workflow that can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community.
Abstract: Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on ‘Text Mining for the BioCuration Workflow’ at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provides a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community.

163 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of past and current work on advance reservation for optical networks is provided, along with a broad classification of the many proposed variations of the advance reservation concept.
Abstract: Traditionally, research on routing and wavelength assignment over wavelength-routed WDM networks is concerned with immediate reservation (IR) demands. An IR demand typically does not specify a holding time for data transmission and the start time of the data transmission is assumed to be immediate (i.e. when the connection request arrives). The concept of advance reservation (AR) has recently been gaining attention for optical networks. An AR demand typically specifies information about the start of the data transmission or a deadline, as well as the holding time of the transmission. AR has several important applications for both wide-area networks and Grid networks. For example, AR can be used for adjusting virtual topologies to adapt to predictable peak hour traffic usage. It can be used to provide high-bandwidth services such as video conferencing and in Grid applications requiring the scheduled distribution of large files and for co-allocation of network and grid resources. AR can also be beneficial to the network by allowing the network operator to better plan resource usage and therefore increase utilization. Knowledge of the holding time can lead to more optimal decisions for resource allocation. This translates to better quality of service for users. In this paper we provide a comprehensive survey of the past and current work on advance reservation for optical networks. There have been many variations of the advance reservation concept proposed, so we will also provide a broad classification. In addition to the survey, we will discuss what we believe are important areas of future work and open challenges for advance reservation on optical networks.
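The planning advantage of knowing the holding time up front can be seen in a minimal sketch (the function names and single-wavelength model are illustrative assumptions, not part of the surveyed protocols):

```python
def can_reserve(bookings, start, holding_time):
    """Return True if a single wavelength, already holding the given
    (start, end) bookings, can accept an advance-reservation demand
    with the specified start time and holding time. Because an AR
    demand declares its holding time in advance, the operator can
    test future intervals for conflicts instead of provisioning
    blindly at connection-arrival time, as with an IR demand."""
    end = start + holding_time
    # The new interval conflicts unless it ends before a booking
    # starts or begins after it ends.
    return all(end <= s or e <= start for s, e in bookings)

bookings = [(0, 5), (10, 15)]
print(can_reserve(bookings, 5, 5))   # True  (fits exactly in the gap)
print(can_reserve(bookings, 4, 3))   # False (overlaps the first booking)
```

Real AR scheduling generalizes this check across wavelengths and routes, which is where the surveyed algorithms differ.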

133 citations


Journal ArticleDOI
TL;DR: Optically detected magnetic resonance in a nitrogen-vacancy center within an individual diamond nanocrystal is used to investigate the composition and spin dynamics of the particle-hosted spin bath.
Abstract: Semiconductor nanoparticles host a number of paramagnetic point defects and impurities, many of them adjacent to the surface, whose response to external stimuli could help probe the complex dynamics of the particle and its local, nanoscale environment. Here, we use optically detected magnetic resonance in a nitrogen-vacancy (NV) center within an individual diamond nanocrystal to investigate the composition and spin dynamics of the particle-hosted spin bath. For the present sample, a ∼45 nm diamond crystal, NV-assisted dark-spin spectroscopy reveals the presence of nitrogen donors and a second, yet-unidentified class of paramagnetic centers. Both groups share a common spin lifetime considerably shorter than that observed for the NV spin, suggesting some form of spatial clustering, possibly on the nanoparticle surface. Using double spin resonance and dynamical decoupling, we also demonstrate control of the combined NV center–spin bath dynamics and attain NV coherence lifetimes comparable to those reported f...

127 citations


01 Jan 2012
TL;DR: A description of the potential ontologies and standards that could be utilized to extend the Cyber ontology from its initially constrained malware focus, along with proposed next steps in the iterative evolution of the ontology development methodology.
Abstract: This paper reports on a trade study we performed to support the development of a Cyber ontology from an initial malware ontology. The goals of the Cyber ontology effort are first described, followed by a discussion of the ontology development methodology used. The main body of the paper then follows, which is a description of the potential ontologies and standards that could be utilized to extend the Cyber ontology from its initially constrained malware focus. These resources include, in particular, Cyber and malware standards, schemas, and terminologies that directly contributed to the initial malware ontology effort. Other resources are upper (sometimes called 'foundational') ontologies. Core concepts that any Cyber ontology will extend have already been identified and rigorously defined in these foundational ontologies. However, for lack of space, this section is profoundly reduced. In addition, utility ontologies that are focused on time, geospatial, person, events, and network operations are briefly described. These utility ontologies can be viewed as specialized super-domain or even mid-level ontologies, since they span many, if not most, ontologies -including any Cyber ontology. An overall view of the ontological architecture used by the trade study is also given. The report on the trade study concludes with some proposed next steps in the iterative evolution of the

125 citations


Proceedings ArticleDOI
20 May 2012
TL;DR: This system, similar to Pioneer but built with relaxed assumptions, successfully detects attacks on code integrity over 10 links of an enterprise network, despite an average of just 1.7% time overhead for the attacker.
Abstract: In this paper we present a comprehensive timing-based attestation system suitable for typical enterprise use, and evidence of that system's performance. This system, similar to Pioneer [20] but built with relaxed assumptions, successfully detects attacks on code integrity over 10 links of an enterprise network, despite an average of just 1.7% time overhead for the attacker. We also present the first implementation and evaluation of a Trusted Platform Module (TPM) hardware timing-based attestation protocol. We describe the design and results of a set of experiments showing the effectiveness of our timing-based system, thereby providing further evidence of the practicality of timing-based attestation in real-world settings. While system measurement itself is a worthwhile goal, and timing-based attestation systems can provide measurements that are equally as trustworthy as hardware-based attestation systems, we feel that Time Of Check, Time Of Use (TOCTOU) attacks have not received appropriate attention in the literature. To address this topic, we present the three conditions required to execute such an attack, and how past attacks and defenses relate to these conditions.
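The core idea of timing-based attestation can be sketched in a deliberately simplified toy (this is not the paper's protocol; the 1.7% slack figure is borrowed from the reported average attacker overhead, and everything else is an illustrative assumption):

```python
import hashlib
import time

def timed_attest(read_memory, nonce, baseline_seconds, slack=0.017):
    """Toy verifier check for timing-based attestation: time a
    nonce-keyed checksum over the attested memory and accept only if
    the response arrives within baseline * (1 + slack). An attacker
    who must forge the memory image pays extra computation, pushing
    the response time past the threshold."""
    start = time.perf_counter()
    digest = hashlib.sha256(nonce + read_memory()).hexdigest()
    elapsed = time.perf_counter() - start
    return digest, elapsed <= baseline_seconds * (1 + slack)

# With a generous baseline the honest case passes:
digest, ok = timed_attest(lambda: b"firmware-image", b"nonce-123", 1.0)
print(ok)   # True
```

The fresh nonce prevents replaying a precomputed checksum, which is also why TOCTOU attacks, where memory is restored between the check and its use, need the separate conditions the paper enumerates.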

113 citations


Journal ArticleDOI
TL;DR: This work shows that the algorithm can be interpreted as an iterative latent semantic analysis process, which allows for extensions to handle networks with actor attributes and within-mode interactions, and suggests its generality in capturing evolving groups in networks with heterogeneous entities and complex relationships.
Abstract: A multimode network consists of heterogeneous types of actors with various interactions occurring between them. Identifying communities in a multimode network can help understand the structural properties of the network, address the data shortage and unbalanced problems, and assist tasks like targeted marketing and finding influential actors within or between groups. In general, a network and its group structure often evolve unevenly. In a dynamic multimode network, both group membership and interactions can evolve, posing a challenging problem of identifying these evolving communities. In this work, we try to address this problem by employing the temporal information to analyze a multimode network. A temporally regularized framework and its convergence property are carefully studied. We show that the algorithm can be interpreted as an iterative latent semantic analysis process, which allows for extensions to handle networks with actor attributes and within-mode interactions. Experiments on both synthetic data and real-world networks demonstrate the efficacy of our approach and suggest its generality in capturing evolving groups in networks with heterogeneous entities and complex relationships.

96 citations


Journal ArticleDOI
05 Jan 2012-Virology
TL;DR: The diversity of subtypes and genetic lineages in SOIV cases highlights the importance of continued surveillance at the animal-human interface.

76 citations


Journal ArticleDOI
TL;DR: This paper presents a complete solution for dynamically changing system membership in a large-scale Byzantine-fault-tolerant system, including a service that tracks system membership and periodically notifies other system nodes of membership changes and implements a novel distributed hash table called dBQS that provides atomic semantics even across changes in replica sets.
Abstract: Byzantine-fault-tolerant replication enhances the availability and reliability of Internet services that store critical state and preserve it despite attacks or software errors. However, existing Byzantine-fault-tolerant storage systems either assume a static set of replicas, or have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the solutions or the consistency levels they provide). This can be problematic in long-lived, large-scale systems where system membership is likely to change during the system lifetime. In this paper, we present a complete solution for dynamically changing system membership in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system membership and periodically notifies other system nodes of membership changes. The membership service runs mostly automatically, to avoid human configuration errors; is itself Byzantine-fault-tolerant and reconfigurable; and provides applications with a sequence of consistent views of the system membership. We demonstrate the utility of this membership service by using it in a novel distributed hash table called dBQS that provides atomic semantics even across changes in replica sets. dBQS is interesting in its own right because its storage algorithms extend existing Byzantine quorum protocols to handle changes in the replica set, and because it differs from previous DHTs by providing Byzantine fault tolerance and offering strong semantics. We implemented the membership service and dBQS. Our results show that the approach works well, in practice: the membership service is able to manage a large system and the cost to change the system membership is low.
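The quorum-read rule that protocols like dBQS build on can be sketched as follows (a simplified textbook form; the paper's protocols additionally handle changes in the replica set, which this sketch does not attempt):

```python
from collections import Counter

def quorum_read(responses, f):
    """Byzantine-quorum read sketch: given (timestamp, value) pairs
    from at least 2f + 1 replicas, return the highest-timestamped
    value vouched for by at least f + 1 replicas, so that at least
    one correct replica reported it. Up to f Byzantine replicas may
    lie about having a newer value, but cannot muster f + 1 votes."""
    if len(responses) < 2 * f + 1:
        raise ValueError("need at least 2f + 1 responses")
    counts = Counter(responses)
    vouched = [pair for pair, c in counts.items() if c >= f + 1]
    if not vouched:
        raise ValueError("no value reported by f + 1 replicas")
    return max(vouched)[1]

# A single Byzantine replica (f = 1) claiming a bogus newer timestamp
# cannot outvote three correct replicas:
responses = [(3, "a"), (3, "a"), (9, "bogus"), (3, "a"), (2, "b")]
print(quorum_read(responses, f=1))   # a
```

Reconfiguration is exactly what breaks this simple rule: once the replica set changes, "f + 1 matching responses" must be evaluated against a consistent membership view, which is the service the paper provides.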

Proceedings ArticleDOI
01 Nov 2012
TL;DR: The SPAN project is an open source implementation of a generalized Mobile Ad-Hoc Network framework to bring dynamic mesh networking to smart phones and to explore the concepts of Off-Grid communications.
Abstract: The SPAN project is an open source implementation of a generalized Mobile Ad-Hoc Network framework. The project's goals are to bring dynamic mesh networking to smart phones and to explore the concepts of Off-Grid communications.

Journal ArticleDOI
TL;DR: This article analyzed a set of 18,520 ultrafast black swan events that were uncovered in stock-price movements between 2006 and 2011, and provided empirical evidence for, and an accompanying theory of, an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan events with ultrafast durations.
Abstract: Society’s drive toward ever faster socio-technical systems means that there is an urgent need to understand the threat from ‘black swan’ extreme events that might emerge. On 6 May 2010, it took just five minutes for a spontaneous mix of human and machine interactions in the global trading cyberspace to generate an unprecedented system-wide Flash Crash. However, little is known about what lies ahead in the crucial sub-second regime where humans become unable to respond or intervene sufficiently quickly. Here we analyze a set of 18,520 ultrafast black swan events that we have uncovered in stock-price movements between 2006 and 2011. We provide empirical evidence for, and an accompanying theory of, an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan events with ultrafast durations (
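One plausible reading of such an event detector, a monotone tick run whose total move exceeds a threshold within a short window, can be sketched as follows (the thresholds and the exact tick-count rules are assumptions, not the paper's precise definition):

```python
def ultrafast_events(ticks, min_change=0.008, max_duration=1.5):
    """Scan (timestamp_seconds, price) ticks for monotone runs whose
    total fractional move is at least min_change and whose duration
    is at most max_duration seconds: a rough proxy for the
    sub-second spikes and crashes studied in the paper."""
    events = []
    i = 0
    while i < len(ticks) - 1:
        j, direction = i, None
        # Extend the run while price moves strictly in one direction.
        while j + 1 < len(ticks):
            step = ticks[j + 1][1] - ticks[j][1]
            if step == 0 or (direction is not None and (step > 0) != direction):
                break
            direction = step > 0
            j += 1
        if j > i:
            (t0, p0), (t1, p1) = ticks[i], ticks[j]
            if abs(p1 - p0) / p0 >= min_change and t1 - t0 <= max_duration:
                events.append((t0, t1, p1 - p0))
        i = max(j, i + 1)
    return events

# A 1.1% drop over 0.3 s registers as one ultrafast event:
crash = [(0.0, 100.0), (0.1, 99.6), (0.2, 99.1), (0.3, 98.9), (0.4, 99.0)]
print(len(ultrafast_events(crash)))   # 1
```

A slow drift of the same magnitude over many seconds would fail the duration test, which is what separates these events from ordinary volatility.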

Book
08 Oct 2012
TL;DR: Engineering Risk Management Introduction Objectives and Practices New Challenges Perspectives on Theories of Systems and Risk
Abstract: Engineering Risk Management Introduction Objectives and Practices New Challenges Perspectives on Theories of Systems and Risk Introduction General Systems Theory Risk and Decision Theory Engineering Risk Management Foundations of Risk and Decision Theory Introduction Elements of Probability Theory The Value Function Risk and Utility Functions Multiattribute Utility-The Power Additive Utility Function Applications to Engineering Risk Management A Concluding Thought A Risk Analysis Framework in Engineering Enterprise Systems Introduction Perspectives on Engineering Enterprise Systems A Framework for Measuring Enterprise Capability Risk A Risk Analysis Algebra Information Needs for Portfolio Risk Analysis The "Cutting Edge" An Index to Measure Risk Co-Relationships Introduction RCR Postulates, Definitions, and Theory Computing the RCR Index Applying the RCR Index: A Resource Allocation Example Summary Functional Dependency Network Analysis Introduction FDNA Fundamentals Weakest Link Formulations FDNA (alpha, ss) Weakest Link Rule Network Operability and Tolerance Analyses Special Topics Summary A Decision-Theoretic Algorithm for Ranking Risk Criticality Introduction A Prioritization Algorithm A Model for Measuring Risk in Engineering Enterprise Systems A Unifying Risk Analytic Framework and Process Summary Random Processes and Queuing Theory Introduction Deterministic Process Random Process Markov Process Queuing Theory Basic Queuing Models Applications to Engineering Systems Summary Extreme Event Theory Introduction to Extreme and Rare Events Extreme and Rare Events and Engineering Systems Traditional Data Analysis Extreme Value Analysis Extreme Event Probability Distributions Limit Distributions Determining Domain of Attraction Using Inverse Function Determining Domain of Attraction Using Graphical Method Complex Systems and Extreme and Rare Events Summary Prioritization Systems in Highly Networked Environments Introduction Priority Systems Types of Priority Systems Summary Risks of Extreme Events in Complex Queuing Systems Introduction Risk of Extreme Latency Conditions for Unbounded Latency Conditions for Bounded Latency Derived Performance Measures Optimization of PS Summary Appendix: Bernoulli Utility and the St. Petersburg Paradox References Index Questions and Exercises appear at the end of each chapter.

Proceedings ArticleDOI
21 May 2012
TL;DR: The characteristics of each of the GNSS signals are described, and the performance benefits that will be provided to the precise time and frequency community as the GNSS evolves are highlighted.
Abstract: This paper provides an overview of Global Navigation Satellite System (GNSS) signals. Today, GNSS comprises two major constellations: (1) the United States' Global Positioning System (GPS), and (2) the Russian Federation's Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS). Two other major constellations are being deployed. Additionally, regional systems have been deployed or are planned. Whereas most GNSS timing receivers today rely only upon the legacy GPS signals, it is anticipated in the near future that multi-system receivers will become the norm. This paper describes the characteristics of each of the GNSS signals, and highlights the performance benefits that will be provided to the precise time and frequency community as the GNSS evolves.

Proceedings Article
05 Jun 2012
TL;DR: Four aspects of cyber defense collaboration are explored to identify approaches for improving cyber defense information sharing and risk management approaches that have built-in mechanisms for sharing and receiving information, increasing transparency, and improving entity peering relationships.
Abstract: Information and Communication Technologies are increasingly intertwined across the economies and societies of developed countries. Protecting these technologies from cyber-threats requires collaborative relationships for exchanging cyber defense data and an ability to establish trusted relationships. The fact that Communication and Information Systems (CIS) security is an international issue increases the complexity of these relationships. Cyber defense collaboration presents specific challenges since most entities would like to share cyber-related data but lack a successful model to do so.

Proceedings ArticleDOI
19 Mar 2012
TL;DR: The U.S. Department of Defense state-of-practice of integrating security into systems engineering (SE) in order to implement system security engineering (SSE) through the program protection process is discussed.
Abstract: This paper discusses the U.S. Department of Defense (DoD) state-of-practice of integrating security into systems engineering (SE) in order to implement system security engineering (SSE) through the program protection process. The discussion includes a description of new policies and the application of methods and techniques to implement SSE. Although SSE is normally viewed as a specialty engineering area, this paper emphasizes the need to more tightly integrate SSE with the overall systems engineering process.

Journal ArticleDOI
TL;DR: This paper argued that the two categories of preverbal nouns cannot receive the same analysis since they display distinct syntactic and semantic behavior: the preverbal nominals, unlike the bare object nouns, cannot be questioned, are modified differently, have different interpretations, give rise to distinct case-assignment contexts, and can co-occur with a non-specific object.
Abstract: The nature of preverbal nominals and their relation to the verb have been the focus of much debate in languages with a productive complex predication process. For Persian, certain analyses have argued that the bare nominals in complex predicate constructions are distinct from bare objects, while others have treated the two types of bare nominals uniformly. This paper argues that the two categories of preverbal nouns cannot receive the same analysis since they display distinct syntactic and semantic behavior: the preverbal nominals, unlike the bare object nouns, cannot be questioned, are modified differently, have different interpretations, give rise to distinct case-assignment contexts, and can co-occur with a non-specific object. The distinct properties of the two nominal categories are captured by positing distinct structural positions for these nouns. Non-specific bare nouns are internal arguments of the thematic verb, while the nominal element of the complex predicate construction is part of the verbal domain with which it combines through a process of conflation, as defined in Hale and Keyser (2002), to form a single predicate.

Journal ArticleDOI
TL;DR: In this paper, the emergence of a novel influenza virus and its spread to the United States were simulated for February 2009 from 55 international metropolitan areas using three basic reproduction numbers (R(0)): 1.53, 1.70, and 1.90.

Proceedings ArticleDOI
09 Dec 2012
TL;DR: This Panel reflects on progress made since the Workshop, new Grand Challenges that have emerged over the past ten years and key M&S milestones for the next decade.
Abstract: It has been a decade since the Workshop on Grand Challenge for Modeling & Simulation (M&S) was held at Dagstuhl in Germany (www.dagstuhl.de/02351). Grand challenges provide a critical focal point for research and development and can potentially create the critical mass needed to bring substantial transformation and benefit to a community. The Workshop addressed a wide variety of M&S theoretical, methodological and technological issues across many application areas. This Panel reflects on progress made since the Workshop, new Grand Challenges that have emerged over the past ten years and key M&S milestones for the next decade.

Patent
09 Jan 2012
TL;DR: Secure Remote Peripheral Encryption Tunnel (SeRPEnT) as discussed by the authors can be implemented in a portable embedded device for the Universal Serial Bus (USB) with a much more restricted attack surface than a general purpose client computer.
Abstract: A Secure Remote Peripheral Encryption Tunnel (SeRPEnT) can be implemented in a portable embedded device for the Universal Serial Bus (USB) with a much more restricted attack surface than a general purpose client computer. The SeRPEnT device can comprise a small, low-power “cryptographic switchboard” that can operate in a trusted path mode and a pass-through mode. In the trusted path mode, the SeRPEnT device can tunnel connected peripherals through the client to a server with Virtual Machine (VM)-hosted applications. In the pass-through mode, the SeRPEnT device can pass-through the connected peripherals to the client system, allowing normal use of the local system by the user. SeRPEnT can also enable secure transactions between the user and server applications by only allowing input to the VMs to originate from the SeRPEnT device.

Proceedings ArticleDOI
19 Mar 2012
TL;DR: This paper reviews the techniques used during agile projects to manage the sprint cycle, including templates for user story management, and describes a repository on an internal website where corporate IT processes are documented and agile templates and samples are shared.
Abstract: Agile teams need the Business Analyst (BA) to clearly define and communicate the detailed user stories to ensure a successful product. MITRE's Corporate IT Systems Engineering department supports software development activities which recently adopted an agile methodology. Unlike the detailed requirements documentation of more traditional, waterfall-based projects, we have found the streamlined “user stories” inadequate for developers or testers. Our BA experiences with eliciting user story details and maintaining the backlog for sprint planning are a critical component to agile development. BA activities include grooming the backlog, documenting user stories with detailed contracts, and performing user story verification through testing. This paper will review the techniques we have used during agile projects to manage the sprint cycle, including templates for user story management. Capturing artifacts from other agile projects and documenting recommended agile process guidelines can help projects be successful through reuse and collaboration. Many agile projects generate artifacts that are lost or are created for the benefit of only their project and discarded when finished. Agile encourages lean documentation in order to maximize agility. We established a repository using an internal website where we document corporate IT processes and share agile templates and samples. This repository includes samples of sprint schedules, backlog lists, burn-down charts, retrospective items, and user stories. The corporate IT process recommends agile but also includes traditional waterfall guidance and correlates the two different approaches. All projects, whether they are agile or otherwise, need similar deliverables, including project schedules and project plans. In correlating agile considerations to a waterfall approach, we hope to ease the transition to agile. The process guidance and repository site promote collaboration, reuse, and review among agile projects within the organization.

Proceedings ArticleDOI
13 Aug 2012
TL;DR: In this article, a prototype capability for Flow Contingency Management, a component of strategic traffic flow management decision making in the Next Generation Air Transportation System, is described, where decision makers can simulate and evaluate proposed congestion-mitigation strategies prior to implementation and quantitatively compare different options before enacting a given plan.
Abstract: This paper describes a prototype capability for Flow Contingency Management, a component of strategic Traffic Flow Management decision making in the Next Generation Air Transportation System. The Flow Contingency Management concept and associated capabilities described in this paper aim to address current shortfalls in today’s strategic planning process, namely the lack of integrated information, simulation and evaluation capabilities provided to decision makers. Specifically, the proposed prototype integrates the traffic and weather forecasts and further translates these predictions into forecasts of system impact, addressing a gap in today’s operating environment. Viewing the integrated forecast, decision makers can simulate and evaluate proposed congestion-mitigation strategies prior to implementation and quantitatively compare different options before enacting a given plan. As such, the prototype provides an integrated problem identification and quantitative what-if analysis capability for strategic traffic flow management. The paper reviews the overall concept and associated modeling framework, highlighting aspects of the model that address difficulties inherent to traffic flow management planning in the strategic timeframe. To illustrate the proposed decision making process, an example weather and traffic situation, taken from historic data, is simulated and the results highlight the envisioned operational benefits for strategic traffic flow management decision making.

Journal ArticleDOI
TL;DR: In this evaluation, the vertex selection policy that most accurately identified vertex-partition community structure in a given graph depended on how closely the graph’s degree distribution approximated a power-law distribution, which indicates that local community detection should be context-sensitive in the sense of basing vertex selection on the graph's degree distribution and the target community structure.
Abstract: Local methods for detecting community structure are necessary when a graph’s size or node-expansion cost make global community detection methods infeasible. Various algorithms for local community detection have been proposed, but there has been little analysis of the circumstances under which one approach is preferable to another. This paper describes an evaluation comparing the accuracy of five alternative vertex selection policies in detecting two distinct types of community structures—vertex partitions that maximize modularity, and link partitions that maximize partition density—in a variety of graphs. In this evaluation, the vertex selection policy that most accurately identified vertex-partition community structure in a given graph depended on how closely the graph’s degree distribution approximated a power-law distribution. When the target community structure was partition-density maximization, however, an algorithm based on spreading activation generally performed best, regardless of degree distribution. These results indicate that local community detection should be context-sensitive in the sense of basing vertex selection on the graph’s degree distribution and the target community structure.
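The abstract contrasts several vertex selection policies for growing a community outward from a seed. The paper's own five policies are not specified here, so the following is a hypothetical minimal sketch of the general idea: greedily add the boundary vertex that most improves a local quality score (here, the fraction of the community's edges that are internal), stopping when no candidate improves it. The `community_score` function and the stopping rule are illustrative assumptions, not the paper's algorithms.

```python
def community_score(adj, members):
    """Fraction of edges touching `members` that are internal to it."""
    internal = sum(1 for v in members for n in adj[v] if n in members) // 2
    external = sum(1 for v in members for n in adj[v] if n not in members)
    return internal / (internal + external) if internal + external else 0.0

def local_community(adj, seed):
    """Greedily grow a community around `seed`, one vertex at a time."""
    community = {seed}
    while True:
        # Boundary = vertices adjacent to the community but not yet in it
        boundary = {n for v in community for n in adj[v]} - community
        if not boundary:
            break
        best = max(boundary, key=lambda u: community_score(adj, community | {u}))
        if community_score(adj, community | {best}) <= community_score(adj, community):
            break  # no boundary vertex improves the score: stop expanding
        community.add(best)
    return community
```

On a toy graph of two triangles joined by a bridge edge, seeding at either side recovers that side's triangle; a policy like this only ever inspects the seed's neighborhood, which is what makes local methods viable when the full graph is too large to load.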

Proceedings ArticleDOI
05 May 2012
TL;DR: An integrated social software platform, called Handshake, is evaluated to determine individuals' usage patterns and characterize Handshake's business value, finding that both the level and type of participation affects whether users experience value.
Abstract: We evaluated an integrated social software platform, called Handshake, to determine individuals' usage patterns and characterize Handshake's business value. Our multi-step investigation included conducting 63 in-depth interviews, analyzing log data from 4600+ users, and administering an online survey. We found that both the level and type of participation affects whether users experience value. Active participants, for example, say that Handshake supports collaboration, strengthens social connections, fosters awareness of connections' activities, and facilitates knowledge management. This case study captures an early snapshot of behavior that we anticipate will change and grow over time.

Journal ArticleDOI
TL;DR: Results show that SA indeed provides competitive alternatives to the k-shortest path approach and improved alternatives over the heuristic search procedure, and represents a desirable method for generating alternate flight paths.
Abstract: This paper presents a simulated annealing (SA) methodology for defining operationally acceptable route alternatives for flights impacted by weather. By dynamically generating route alternatives that inherently possess traits amenable to traffic managers and users, more efficient use of the airspace can be realized. This paper explores the use of SA to provide quality solutions quickly and to capture additional route alternative options, such as ground delay. For comparison, a k-shortest path approach and an ad hoc heuristic search approach have also been employed to generate reroutes, and the results show that SA indeed provides competitive alternatives to the k-shortest path approach and improved alternatives over the heuristic search procedure. Furthermore, SA can potentially generate these alternatives with less computational effort than the k-shortest path approach and, therefore, represents a desirable method for generating alternate flight paths.
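The SA scheme above can be illustrated with a deliberately simplified sketch: a route is a sequence of lateral offsets from the nominal path at fixed along-track steps, the cost combines path length with a penalty for entering weather cells, and annealing perturbs one offset at a time. The cost weights, cooling schedule, and grid encoding are all assumptions for illustration; the paper's actual route representation and acceptance criteria are not reproduced here.

```python
import math
import random

def route_cost(offsets, weather):
    """Path length plus a penalty for each weather cell the route enters.
    `offsets[x]` is the lateral offset at along-track step x;
    `weather` is a set of (x, y) cells to avoid."""
    cost, prev = 0.0, 0
    for x, y in enumerate(offsets):
        cost += math.hypot(1.0, y - prev)   # leg length to this waypoint
        if (x, y) in weather:
            cost += 50.0                    # weather-incursion penalty (assumed weight)
        prev = y
    return cost + math.hypot(1.0, 0 - prev)  # final leg back to the centerline

def anneal_route(weather, n=8, t0=5.0, cooling=0.95, iters=500, seed=1):
    """Simulated annealing over lateral offsets, starting from the nominal route."""
    random.seed(seed)
    current = [0] * n
    best, best_cost = list(current), route_cost(current, weather)
    cost, t = best_cost, t0
    for _ in range(iters):
        cand = list(current)
        cand[random.randrange(n)] += random.choice((-1, 1))  # perturb one waypoint
        c = route_cost(cand, weather)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if c < cost or random.random() < math.exp((cost - c) / t):
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
        t *= cooling
    return best, best_cost
```

Because worse moves are occasionally accepted early on (high temperature) and rarely later, the search can escape locally attractive but weather-penetrating routes, which is the property the abstract credits for producing quality alternatives quickly.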

Proceedings ArticleDOI
20 May 2012
TL;DR: The efficiency of this model enables the design and analysis of logic circuits composed of multiple graphene devices, and simulations demonstrate the potential for graphene-based circuit speeds five times those of circuits based on 32-nm silicon technology.
Abstract: This paper presents a compact device model for graphene field-effect transistors. This model extends prior iterative models (due to Meric et al. and Thiele et al.) in two ways. First, the model is given as a closed-form expression that is more computationally efficient. Second, it is valid for devices based upon either monolayer graphene or bilayer graphene. Simulations demonstrate that this model agrees closely with experimental data. Furthermore, the efficiency of this model enables the design and analysis of logic circuits composed of multiple graphene devices. Example simulation results are provided that demonstrate the potential for graphene-based circuit speeds five times that of circuits based upon 32-nm silicon technology.

Journal ArticleDOI
TL;DR: A dynamic gap-out approach that uses individual vehicular information is presented, with its benefits quantified via simulation; the proposed approach reduced vehicular delays by 12.5% over the existing regular gap-out.
Abstract: Traffic signal timing optimization and control is one of the most cost-effective ways of alleviating congestion on urban arterial networks. Among various control modes, actuated traffic signal control is designed to provide green time where it is needed, using a pre-specified gap-out time to determine early termination of the current phase's green time. However, the effectiveness of its signal state decisions is limited by its dependence on vehicular information from fixed-point sensors. With the emerging wireless communication technology based on cooperative vehicle-infrastructure systems, known as IntelliDrive, individual vehicular information (e.g., speed, acceleration, and location) can be fully utilized for traffic signal control applications. This paper presents a dynamic gap-out approach that uses individual vehicular information and quantifies the benefits of the proposed approach via simulation. The timing plans were optimized by a genetic algorithm for both the traditional and the proposed gap-out cases. The results, based on a four-leg intersection, indicate that the dynamic gap-out reduced vehicular delays by 12.5% over the existing regular gap-out.
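The contrast between fixed-point detection and per-vehicle data can be sketched as follows. With IntelliDrive-style position and speed reports, the controller can project each approaching vehicle's arrival time at the stop bar and gap out when the next arrival exceeds a critical time gap, rather than waiting for a detector loop to time out. This is a hypothetical illustration of the idea, not the paper's algorithm; the `critical_gap` value and the decision rule are assumptions.

```python
def should_gap_out(vehicles, critical_gap=3.0):
    """Decide whether to terminate the current green phase early.

    `vehicles` is a list of (distance_to_stop_bar_m, speed_mps) reports
    for vehicles on the green approach, as might arrive over a
    vehicle-to-infrastructure link. Returns True if the next projected
    arrival is farther away in time than `critical_gap` seconds.
    """
    # Project each moving vehicle's arrival time at the stop bar
    arrivals = sorted(dist / spd for dist, spd in vehicles if spd > 0)
    if not arrivals:
        return True  # no moving vehicles approaching: end the green
    return arrivals[0] > critical_gap
```

Compared with a fixed detector at one location, this rule uses the same "gap" concept but measures it directly in the time domain from individual vehicle states, which is the kind of refinement the genetic-algorithm timing optimization in the paper would then tune.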

Proceedings ArticleDOI
17 Sep 2012
TL;DR: The Tactical Rerouting concept presented here takes advantage of electronic means, and adds decision support capabilities that together increase throughput near a weather constraint and thereby reduce the need for larger scale, strategic Traffic Management Initiatives that frequently produce unnecessary delays.
Abstract: A concept is described that enables traffic managers to efficiently develop and coordinate tactical reroutes around convective weather, facilitating incremental decision making. Tactical reroutes are more precise and efficient than strategic ones, since weather predictions improve as look-ahead time decreases. Currently, tactical rerouting is prohibitively labor intensive as there is little automation assistance to identify flights projected to be affected by weather, to choose appropriate reroutes, and to coordinate them. In the United States, electronic means will soon be available for coordinating reroutes between traffic managers and en route controllers, making tactical rerouting a much more viable option for traffic managers. The Tactical Rerouting concept presented here takes advantage of this technology, and adds decision support capabilities that together increase throughput near a weather constraint and thereby reduce the need for larger scale, strategic Traffic Management Initiatives that frequently produce unnecessary delays.

Proceedings ArticleDOI
Judith Dahmann1
19 Mar 2012
TL;DR: This paper focuses on how to approach T&E for SoS, given the challenges of large-scale SoS development, as a continuous improvement process that provides information on capabilities and limitations for end users and feedback to the SoS and system SE teams toward SoS evolution.
Abstract: This paper presents an approach to integrated systems engineering (SE) and test and evaluation (T&E) for SoS based on work underway by the National Defense Industry Association Systems Engineering Division Systems of Systems and Developmental Test and Evaluation Committees [1]. The paper focuses on how to approach T&E for SoS given the challenges of large scale SoS development as a continuous improvement process that provides information on capabilities and limitations for end users and feedback to the SoS and system SE teams toward SoS evolution.