
Showing papers on "Concept of operations published in 2013"


Journal ArticleDOI
TL;DR: In this paper, the authors developed from the Concept of Operations (CONOPS) framework a methodology to help structure the review and development of modelling capabilities and usage scenarios, which is applied to the review of existing airport terminal passenger models.
Abstract: Airports represent the epitome of complex systems with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport are a testament to this. However, these existing models do not systematically consider modelling requirements or how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops from the Concept of Operations (CONOPS) framework a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to the review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link those to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to be able to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and literature findings, guidance is provided for the development of future airport terminal models.

67 citations


ReportDOI
01 Jul 2013
TL;DR: This document describes a microgrid cyber security reference architecture and the cyber actors that can help mitigate potential vulnerabilities, as well as the performance benefits and vulnerability mitigation that may be realized using this reference architecture.
Abstract: This document describes a microgrid cyber security reference architecture. First, we present a high-level concept of operations for a microgrid, including operational modes, necessary power actors, and the communication protocols typically employed. We then describe our motivation for designing a secure microgrid; in particular, we provide general network and industrial control system (ICS)-specific vulnerabilities, a threat model, information assurance compliance concerns, and design criteria for a microgrid control system network. Our design approach addresses these concerns by segmenting the microgrid control system network into enclaves, grouping enclaves into functional domains, and describing actor communication using data exchange attributes. We describe cyber actors that can help mitigate potential vulnerabilities, in addition to performance benefits and vulnerability mitigation that may be realized using this reference architecture. To illustrate our design approach, we present a notional microgrid control system network implementation, including types of communication occurring on that network, example data exchange attributes for actors in the network, an example of how the network can be segmented to create enclaves and functional domains, and how cyber actors can be used to enforce network segmentation and provide the necessary level of security. Finally, we describe areas of focus for the further development of the reference architecture.
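The enclave-and-attribute segmentation described in the abstract can be illustrated with a small sketch. Everything below (actor names, enclave groupings, and the attribute vocabulary) is hypothetical, not taken from the report; it only shows the shape of the rule that cross-enclave traffic must match an explicitly allowed, attribute-tagged flow.

```python
# Hypothetical sketch of enclave segmentation: actors are grouped into
# enclaves, and cross-enclave traffic must match an allowed flow.
# All names and attributes here are illustrative, not from the report.

ENCLAVES = {
    "control": {"microgrid_controller", "scada_hmi"},
    "field":   {"pv_inverter", "battery_bms", "diesel_genset"},
    "corp":    {"historian", "engineering_ws"},
}

# Allowed cross-enclave flows: (src enclave, dst enclave) -> required attribute
ALLOWED_FLOWS = {
    ("field", "control"): "telemetry",
    ("control", "field"): "setpoint",
    ("control", "corp"):  "archive",
}

def enclave_of(actor):
    for name, members in ENCLAVES.items():
        if actor in members:
            return name
    raise KeyError(actor)

def permitted(src, dst, attribute):
    """Intra-enclave traffic is allowed; cross-enclave traffic must match
    an explicitly allowed flow carrying the required data exchange attribute."""
    src_enc, dst_enc = enclave_of(src), enclave_of(dst)
    if src_enc == dst_enc:
        return True
    return ALLOWED_FLOWS.get((src_enc, dst_enc)) == attribute
```

Under this toy policy, inverter telemetry to the controller passes, while a setpoint command from the corporate historian to a field device is rejected because no (corp, field) flow is defined.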

40 citations


Book
31 Jul 2013
TL;DR: An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented and several safety properties of the system are proven using formal techniques.
Abstract: An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that a satellite equipped with a standard chemical propulsion thruster in a 60-degree inclined, 500-km altitude orbit can move the ground over-flight target anywhere on the globe (within 60 degrees north and south latitudes) in as little as ten hours from the time of the maneuver with a fuel expenditure, measured in change in velocity (ΔV), of 100 meters-per-second.
Abstract: Traditional space operations are characterized by large, highly-technical, long-standing satellite systems that cost billions of dollars and take decades to develop. Many branches of the US government have recognized the problem of sustaining current space operations and have responded by heavily supporting research and development in a field known as Operationally Responsive Space (ORS). This ORS research focuses on hardware, interfaces, and rapid launch and deployment, with the overall goal of reducing per-mission cost down to $20 million. However, there are few studies on the feasibility of maneuvering satellites in low-Earth orbit (LEO) from a ground-track perspective once an asset is launched. We can achieve operational responsiveness by changing the ground track of a given satellite, and thereby a geographical target location, by applying existing thruster technology. It is therefore not so much a new technology as a change in the Concept of Operations (CONOPS) of today's space systems. The existing paradigm on maneuvering is that it is cost-prohibitive, especially in performing orbital plane changes; thus orbit-changing maneuvers are only done at the beginning-of-life to establish the service orbit, at end-of-life for disposal, and when absolutely necessary for the safety of the system (collision avoidance). This paradigm, along with traditional space programs, has to change, and a transition to responsive and maneuverable systems must take place to meet the needs of space users in a timely manner. The analysis we present here shows that a satellite equipped with a standard chemical propulsion thruster in a 60-degree inclined, 500-km altitude orbit can move the ground over-flight target anywhere on the globe (within 60 degrees north and south latitudes) in as little as ten hours from the time of the maneuver with a fuel expenditure, measured in change in velocity (ΔV), of 100 meters-per-second.
Similarly, a more efficient electric propulsion thruster can provide the same maneuverability in 27 hours from the start of the maneuver while expending the same amount of ΔV. This research demonstrates that existing technology can maneuver a satellite significantly to change its ground track to overfly a desired target on Earth in a relatively short period of time and well within standard fuel budgets.
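As a rough plausibility check on the kind of maneuver the authors describe (not a reproduction of their analysis), a first-order estimate for a circular orbit shows how a small tangential ΔV translates into a ground-track shift. The constants are standard values; the formulas are the usual first-order variations of the energy and period relations.

```python
import math

MU = 398600.4418        # Earth's gravitational parameter, km^3/s^2
R_E = 6378.137          # Earth equatorial radius, km
OMEGA_E = 7.2921159e-5  # Earth rotation rate, rad/s

def ground_track_shift(alt_km, dv_kms):
    """First-order effect of a small tangential burn on a circular orbit:
    change in semi-major axis, change in orbital period, and the resulting
    equatorial ground-track drift per orbit."""
    a = R_E + alt_km
    v = math.sqrt(MU / a)                        # circular orbital speed
    da = 2.0 * a * a * v * dv_kms / MU           # energy relation: v*dv = (mu / (2 a^2)) da
    dT = 3.0 * math.pi * math.sqrt(a / MU) * da  # dT/da of T = 2*pi*sqrt(a^3/mu)
    shift = OMEGA_E * dT * R_E                   # km of equatorial drift per orbit
    return da, dT, shift

# 500-km orbit, 100 m/s tangential burn (the chemical-thruster case above)
da, dT, shift = ground_track_shift(500.0, 0.1)
```

With these numbers the burn raises the semi-major axis by roughly 180 km, changes the period by a few minutes, and drifts the ground track by on the order of 100 km per orbit, which compounds over successive revolutions; this only illustrates the mechanism behind the paper's ten-hour figure.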

20 citations


Book
26 Jun 2013
TL;DR: The document describes a Concept of Operations for Flight Deck Display and Decision Support technologies which may help enable emerging Next Generation Air Transportation System capabilities while also maintaining, or improving upon, flight safety.
Abstract: The document describes a Concept of Operations for Flight Deck Display and Decision Support technologies which may help enable emerging Next Generation Air Transportation System capabilities while also maintaining, or improving upon, flight safety. This concept of operations is used as the driving function within a spiral program of research, development, test, and evaluation for the Integrated Intelligent Flight Deck (IIFD) project. As such, the concept will be updated at each cycle within the spiral to reflect the latest research results and emerging developments.

15 citations


01 Apr 2013
TL;DR: In this article, human performance models (HPMs) are used for predicting and evaluating operator performance in systems, such as next-generation air traffic control (ATC) systems.
Abstract: NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities.
This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.

15 citations


10 Jun 2013
TL;DR: In this article, the integration of trajectory-based arrival-management automation, controller tools, and Flight-Deck Interval Management avionics to enable Continuous Descent Operations (CDO) during periods of sustained high traffic demand is discussed.
Abstract: Air traffic management simulations conducted in the Airspace Operations Laboratory at NASA Ames Research Center have addressed the integration of trajectory-based arrival-management automation, controller tools, and Flight-Deck Interval Management avionics to enable Continuous Descent Operations (CDOs) during periods of sustained high traffic demand. The simulations are devoted to maturing the integrated system for field demonstration, and refining the controller tools, clearance phraseology, and procedures specified in the associated concept of operations. The results indicate a variety of factors impact the concept's safety and viability from a controller's perspective, including en-route preconditioning of arrival flows, useable clearance phraseology, and the characteristics of airspace, routes, and traffic-management methods in use at a particular site. Clear understanding of automation behavior and required shifts in roles and responsibilities is important for controller acceptance and realizing potential benefits. This paper discusses the simulations, drawing parallels with results from related European efforts. The most recent study found en-route controllers can effectively precondition arrival flows, which significantly improved route conformance during CDOs. Controllers found the tools acceptable, in line with previous studies.

15 citations


01 Jun 2013
TL;DR: In this paper, the authors developed the Precision Departure Release Capability (PDRC) concept that uses this technology to improve tactical departure scheduling by automatically communicating surface trajectory-based ready time predictions to the Center scheduling tool.
Abstract: After takeoff, aircraft must merge into en route (Center) airspace traffic flows which may be subject to constraints that create localized demand-capacity imbalances. When demand exceeds capacity, Traffic Management Coordinators (TMCs) often use tactical departure scheduling to manage the flow of departures into the constrained Center traffic flow. Tactical departure scheduling usually involves use of a Call for Release (CFR) procedure wherein the Tower must call the Center TMC to coordinate a release time prior to allowing the flight to depart. In present-day operations, release times are computed by the Center Traffic Management Advisor (TMA) decision support tool based upon manual estimates of aircraft ready time verbally communicated from the Tower to the Center. The TMA-computed release time is verbally communicated from the Center back to the Tower where it is relayed to the Local controller as a release window that is typically three minutes wide. The Local controller will manage the departure to meet the coordinated release time window. Manual ready time prediction and verbal release time coordination are labor intensive and prone to inaccuracy. Also, use of release time windows adds uncertainty to the tactical departure process. Analysis of more than one million flights from January 2011 indicates that a significant number of tactically scheduled aircraft missed their en route slot due to ready time prediction uncertainty. Uncertainty in ready time estimates may result in missed opportunities to merge into constrained en route flows and lead to lost throughput. Next Generation Air Transportation System (NextGen) plans call for development of Tower automation systems capable of computing surface trajectory-based ready time estimates.
NASA has developed the Precision Departure Release Capability (PDRC) concept that uses this technology to improve tactical departure scheduling by automatically communicating surface trajectory-based ready time predictions to the Center scheduling tool. The PDRC concept also incorporates earlier NASA and FAA research into automation-assisted CFR coordination. The PDRC concept helps reduce uncertainty by automatically communicating coordinated release times with seconds-level precision, enabling TMCs to work with target times rather than windows. NASA has developed a PDRC prototype system that integrates the Center's TMA system with a research prototype Tower decision support tool. A two-phase field evaluation was conducted at NASA's North Texas Research Station (NTX) in Dallas-Fort Worth. The field evaluation validated the PDRC concept and demonstrated reduced release time uncertainty while being used for tactical departure scheduling of more than 230 operational flights over 29 weeks of operations. This paper presents the Concept of Operations. Companion papers include the Final Report and a Technology Description.
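The abstract's point that ready-time uncertainty causes missed en route slots can be illustrated with a toy Monte Carlo. The error magnitudes below are invented for illustration; only the three-minute window width comes from the abstract, and this is in no way the paper's analysis of the January 2011 flight data.

```python
import random

def missed_slot_rate(sigma_s, window_s, trials=100_000, seed=0):
    """Monte Carlo sketch: a flight's actual ready time differs from the
    manual estimate by a zero-mean Gaussian error (std dev sigma_s seconds).
    The flight holds its en route slot only if the error keeps it inside the
    coordinated release window (+/- window_s/2 around the target time)."""
    rng = random.Random(seed)
    half = window_s / 2.0
    missed = sum(1 for _ in range(trials)
                 if abs(rng.gauss(0.0, sigma_s)) > half)
    return missed / trials

# Illustrative comparison: a 2-minute ready-time error against a 3-minute
# window, versus seconds-level prediction precision against the same window.
coarse = missed_slot_rate(sigma_s=120.0, window_s=180.0)
precise = missed_slot_rate(sigma_s=15.0, window_s=180.0)
```

With these invented parameters, roughly 45% of flights fall outside the window in the coarse case, while seconds-level precision drives the miss rate to essentially zero, which is the qualitative argument for target times over windows.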

12 citations


Book
28 Jun 2013
TL;DR: An integrated system concept for vehicle health assurance that integrates ground-based inspection and repair information with in-flight measurement data for airframe, propulsion, and avionics subsystems is described.
Abstract: This document describes a Concept of Operations (ConOps) for an Integrated Vehicle Health Assurance System (IVHAS). This ConOps is associated with the Maintain Vehicle Safety (MVS) between Major Inspections Technical Challenge in the Vehicle Systems Safety Technologies (VSST) Project within NASA's Aviation Safety Program. In particular, this document seeks to describe an integrated system concept for vehicle health assurance that integrates ground-based inspection and repair information with in-flight measurement data for airframe, propulsion, and avionics subsystems. The MVS Technical Challenge intends to maintain vehicle safety between major inspections by developing and demonstrating new integrated health management and failure prevention technologies to assure the integrity of vehicle systems between major inspection intervals and maintain vehicle state awareness during flight. The approach provided by this ConOps is intended to help optimize technology selection and development, as well as allow the initial integration and demonstration of these subsystem technologies over the 5-year span of the VSST program, and serve as a guideline for developing IVHAS technologies under the Aviation Safety Program within the next 5 to 15 years. A long-term vision of IVHAS is provided to describe a basic roadmap for more intelligent and autonomous vehicle systems.

11 citations


Proceedings ArticleDOI
02 Mar 2013
TL;DR: The concept of operations for the CYGNSS constellation as planned for implementation at the CY GNSS MOC in conjunction with the selected ground network operator is focused on.
Abstract: Hurricane track forecasts have improved in accuracy by ∼50% since 1990, while in that same period there has been essentially no improvement in the accuracy of intensity prediction. One of the main problems in addressing intensity occurs because the rapidly evolving stages of the tropical cyclone (TC) life cycle are poorly sampled in time by conventional polar-orbiting, wide-swath surface wind imagers. NASA's most recently awarded Earth science mission, the NASA EV-2 Cyclone Global Navigation Satellite System (CYGNSS) has been designed to address this deficiency by using a constellation of micro-satellite-class Observatories designed to provide improved sampling of the TC during its life cycle. Managing a constellation of Observatories has classically resulted in an increased load on the ground operations team as they work to create and maintain schedules and command loads for multiple Observatories. Using modern tools and technologies at the Mission Operations Center (MOC) in conjunction with key components implemented in the flight system and an innovative strategy for pass execution coordinated with the ground network operator, the CYGNSS mission reduces the burden of constellation operations to a level commensurate with the low-cost mission concept. This paper focuses on the concept of operations for the CYGNSS constellation as planned for implementation at the CYGNSS MOC in conjunction with the selected ground network operator.

10 citations


Journal ArticleDOI
TL;DR: The authors propose a concept of operation document that incorporates existing regulations and ensures an acceptable level of performance based on experience with a Personal Remote Sensing (PRS) Unmanned Aerial System (UAS).
Abstract: Unmanned Aerial Systems (UASs) have rapidly grown into a significant part of the world-wide aviation budget. However, regulations and official standards have lagged significantly. Within the U.S., there has been significant pressure to develop the regulations to allow commercial and governmental agencies to utilize UASs within the National Airspace System (NAS). The authors propose a concept of operation document that incorporates existing regulations and ensures an acceptable level of performance based on experience with a Personal Remote Sensing (PRS) Unmanned Aerial System (UAS).

01 Sep 2013
TL;DR: This document focuses on the arrival scenarios and procedures to be used during the ATD-1 operational evaluation of TMA-TM and CMS (planned for 2015-2016), and flight test of FIM avionics with TMA
Abstract: This document is an update to the operations and procedures envisioned for NASA's Air Traffic Management (ATM) Technology Demonstration #1 (ATD-1). The ATD-1 Concept of Operations (ConOps) integrates three NASA technologies to achieve high throughput, fuel-efficient arrival operations into busy terminal airspace. They are Traffic Management Advisor with Terminal Metering (TMA-TM) for precise time-based schedules to the runway and points within the terminal area, Controller-Managed Spacing (CMS) decision support tools for terminal controllers to better manage aircraft delay using speed control, and Flight deck Interval Management (FIM) avionics and flight crew procedures to conduct airborne spacing operations. The ATD-1 concept provides de-conflicted and efficient operations of multiple arrival streams of aircraft, passing through multiple merge points, from top-of-descent (TOD) to the Final Approach Fix. These arrival streams are Optimized Profile Descents (OPDs) from en route altitude to the runway, using primarily speed control to maintain separation and schedule. The ATD-1 project is currently addressing the challenges of integrating the three technologies, and their implementation into an operational environment. The ATD-1 goals include increasing the throughput of high-density airports, reducing controller workload, increasing efficiency of arrival operations and the frequency of trajectory-based operations, and promoting aircraft ADS-B equipage.

Journal Article
TL;DR: In this paper, the authors define what current doctrine requires for production of effective mission orders, focusing on retired Lt. Gen. L.D. Holder's 1990 argument that the Army was drifting away from the standard field order and that leader focus had shifted away from what was required to win a combined arms fight.
Abstract: In 1990, retired Lt. Gen. (then Col.) L.D. Holder wrote an article for Military Review titled "Concept of the Operation--See Ops Overlay." In the article, Holder voiced his concerns that the Army was drifting away from the standard field order and that leader focus had shifted away from what was required to win a combined arms fight. Holder argued that an overreliance on a rigid, methodical planning process and the relatively new doctrinal addition of commander's intent had left many orders without an appropriate concept of operations paragraph and subsequently left subordinates without a clear understanding of the operation. In essence, leaders were losing the balance between the "art" and the "science" of writing effective mission orders. Over the past decade of persistent conflict, many Army leaders have again distanced themselves from the "art" of effective orders production. Officers have learned to create expert multi-page concepts of operations (CONOPs) in electronic media as a tool to provide situational awareness to higher echelons and to assist in the allocation of resources. These CONOP slides rarely convey the actual concept of the operation and usually consist of poorly drawn intent symbols on satellite imagery and a task and purpose for each element. While the slides have some utility, they never were intended to be used as a briefing tool for company commanders and platoon leaders. Using these products, instead of doctrinally complete mission orders, could lead to a disjointed understanding of the concept of operations in a combined arms fight. The undesired effect of this process has created a generation of officers unfamiliar with the doctrinally correct way to write effective mission orders. Multiple changes to doctrine over the last decade have contributed to a lack of understanding.
Although current doctrine clearly defines the contents of the concept of operation paragraph, many leaders are guilty of relying on knowledge acquired during the Captain's Career Course or the Command and General Staff College (CGSC). Depending on how long ago the leader attended these courses, his or her doctrinal knowledge may be outdated. This article defines what current doctrine requires for production of effective mission orders, while focusing on what Holder argued in 1990 was the most important part of the order: the commander's intent and the concept of operation. To address this growing concern, we have to start with a common understanding of how our Army fights. Unified land operations are executed through decisive action by means of the Army's core competencies and guided by mission command. Army Doctrine Publication (ADP) 3-0 defines unified land operations as the ability to "seize, retain, and exploit the initiative to gain and maintain a position of relative advantage in sustained land operations through simultaneous offensive, defensive, and stability operations in order to prevent or deter conflict, prevail in war, and create the conditions for favorable conflict resolution." (1) Unified land operations are executed through decisive action.

Decisive Action

Decisive action is the "continuous, simultaneous combination of offensive, defensive, and stability or defense support of civil authorities tasks." (2) When conducting operations outside of the United States and its territories, the Army simultaneously combines three elements--offense, defense, and stability. Within the United States and its territories, decisive action combines the elements of defense support of civil authorities and, as required, offense and defense to support homeland security. Decisive action is conducted by means of the Army's core competencies. (3)

Army's Core Competencies

The Army has two core competencies: combined arms maneuver and wide area security.
Combined arms maneuver is "the application of the elements of combat power in unified action to defeat enemy ground forces; to seize, occupy, and defend land areas; and to achieve physical, temporal, and psychological advantages over the enemy to seize and exploit the initiative. …


Proceedings ArticleDOI
TL;DR: A concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions for sustainable data/information systems and knowledge discovery is presented.
Abstract: The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra- and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as the volume, velocity, variety and complexity of enterprise data keep increasing, the next generation of analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.

Journal Article
TL;DR: The Turnaround Integration in Trajectory and Network (TITAN) project as mentioned in this paper is a collaborative decision-making initiative that proposes an advanced concept of operations to identify opportunities for improved information flows.
Abstract: Aircraft turnaround has been identified in many studies as a major driver of departure delays that impact efficient airport and air traffic management (ATM) network operation. It is, therefore, necessary to have reliable information sharing among stakeholders and between stakeholders and network managers. The authors discuss the Turnaround Integration in Trajectory and Network (TITAN) project, a collaborative decision-making initiative that proposes an advanced concept of operations to identify opportunities for improved information flows. Following successful validation of the TITAN project, a cost-benefit analysis was conducted on a decision-support tool. The tool defined a service-oriented architecture that improves sharing of a more predictive common awareness of influences on aircraft turnaround in order to facilitate intelligent decision making. This would mitigate turnaround delays and improve airport performance. In addition to summarizing the project output, the authors introduce ways to integrate project output into the existing ATM system and manage the transition to a future TITAN environment.

Book
31 Jul 2013
TL;DR: This paper presents a formal method for modeling and verifying software systems using the PVS theorem proving system, demonstrated on a preliminary concept of operations for the Small Aircraft Transportation System (SATS) project at NASA Langley.
Abstract: New concepts for automating air traffic management functions at small non-towered airports raise serious safety issues associated with the software implementations and their underlying key algorithms. The criticality of such software systems necessitates that strong guarantees of safety be developed for them. In this paper we present a formal method for modeling and verifying such systems using the PVS theorem proving system. The method is demonstrated on a preliminary concept of operation for the Small Aircraft Transportation System (SATS) project at NASA Langley.
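The PVS development itself cannot be reproduced here. As a loose analogy only, the kind of safety invariant the abstract describes (aircraft remain safely separated under nominal procedures) can be checked on a toy model by exhaustive state-space exploration rather than theorem proving. The states, transition rules, and capacity limit below are an invented miniature, not the SATS operational concept.

```python
# Toy model: a self-separation zone with capacity for 2 aircraft and a
# single holding fix. Transitions: an aircraft may arrive at the holding
# fix, proceed from holding into the zone (only if the zone is below
# capacity), or land and free a slot. Everything here is illustrative.
CAPACITY = 2

def successors(state):
    holding, in_zone = state
    succs = []
    if holding < 3:                            # bounded arrivals keep the state space finite
        succs.append((holding + 1, in_zone))
    if holding > 0 and in_zone < CAPACITY:     # zone entry is guarded by capacity
        succs.append((holding - 1, in_zone + 1))
    if in_zone > 0:                            # landing frees a slot
        succs.append((holding, in_zone - 1))
    return succs

def check_invariant(initial=(0, 0)):
    """Exhaustively explore every reachable state; the safety property is
    that the zone never holds more aircraft than its capacity."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        assert state[1] <= CAPACITY, f"separation violated in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

n_states = check_invariant()   # completes without violation
```

A theorem prover like PVS establishes the same kind of invariant by induction over the transition relation, which scales to unbounded models where enumeration cannot.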


Journal ArticleDOI
01 Jul 2013 - Insight
TL;DR: The acquirer and the supplier must also engage, in a shared responsibility that recognizes and deals with an unpredictable future of security threats, one that cannot be effective until systems and security engineering engagement is achieved.
Abstract: Who is responsible for systems security? As shown in figure 1, the acquirer (Acq) thinks it is the supplier, the supplier (Sup) delegates that responsibility to systems engineering, who pass it on to system security engineering (SSE), who meet requirements originating with the acquirer. This arrangement results in a finger-pointing circle when security fails. New revisions to the INCOSE Systems Engineering Handbook are integrating responsibility for system security into the systems engineering processes. Placing responsibility on systems engineering is only a first step. A second step requires mutual engagement between systems engineering and security engineering, an engagement that can only be enabled by systems engineering. Systems engineers and program or project managers will be expected to engage effectively throughout the systems engineering processes and activities—beginning with requirements analysis and the concept of operations, and proceeding through the full lifecycle of development, operations, and disposal. The theme articles in this issue of INSIGHT focus on the nature and problems of effective security engineering engagement in critical systems engineering processes. In the end, the acquirer and the supplier must also engage, in a shared responsibility that recognizes and deals with an unpredictable future of security threats. But that is another story, one that cannot be effective until systems and security engineering engagement is achieved.

Proceedings ArticleDOI
07 Jan 2013
TL;DR: It is proposed that the combined activities of both human and automation required by a proposed design can best be captured by focusing on modeling the work inherent to a complex operation.
Abstract: Humans have always been the vital components of complex operations, notably including aviation. They remain so even as sophisticated automation systems are introduced, changing – but not eliminating – the role of the human relative to the collective work required to achieve mission performance. Automation designers and certification agencies are interested in methods to predict and model how complex operations can be performed by teams of humans and automated agents. This paper proposes that the combined activities of both human and automation required by a proposed design can best be captured by focusing on modeling the work inherent to a complex operation. As a fundamental first step, the overall concept of operations spanning all the work activities can be examined for its feasibility in nominal and off-nominal conditions. These activities can then also be examined to see whether the demands they place upon the human agents in the system are feasible and facilitate the human's ability to contribute, rather than assuming unreasonable situations such as excessive workload, boredom, incoherent task descriptions, excessive monitoring requirements, etc. Further, trade-offs in distributing these activities across agents (both human and automated) can be evaluated in terms of task-interleaving created by the distribution of activity and in terms of the 'interaction overhead' associated with communication and coordination between agents required for a given distribution. A description of a modeling and simulation framework capable of modeling work is provided along with an analysis framework to evaluate proposed complex operations.
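The feasibility analysis the abstract describes can be made concrete with a minimal sketch: given one proposed distribution of work activities across human and automated agents, compute each agent's utilization including an "interaction overhead" charge for every cross-agent handoff. The activity list, durations, handoff cost, and the 0.8 utilization ceiling below are illustrative assumptions, not values from the paper.

```python
WINDOW_S = 3600            # analysis window: one hour of operation
HANDOFF_OVERHEAD_S = 30    # assumed coordination cost per cross-agent handoff

# (activity, agent, duration_s) -- one assumed distribution of the work
activities = [
    ("monitor_traffic",  "automation", 1800),
    ("plan_route",       "pilot",       600),
    ("execute_maneuver", "pilot",       900),
    ("log_telemetry",    "automation",  400),
]

def evaluate(activities, max_utilization=0.8):
    """Return {agent: (utilization, feasible)} including interaction overhead."""
    busy = {}
    for _name, agent, dur in activities:
        busy[agent] = busy.get(agent, 0) + dur
    # charge each consecutive cross-agent handoff to both agents involved
    for (_, a1, _), (_, a2, _) in zip(activities, activities[1:]):
        if a1 != a2:
            busy[a1] += HANDOFF_OVERHEAD_S
            busy[a2] += HANDOFF_OVERHEAD_S
    return {agent: (t / WINDOW_S, t / WINDOW_S <= max_utilization)
            for agent, t in busy.items()}

for agent, (util, ok) in evaluate(activities).items():
    print(f"{agent}: {util:.0%} utilized, {'feasible' if ok else 'OVERLOADED'}")
```

Re-running the evaluation for alternative assignments of the same activities is one way to compare the task-interleaving trade-offs the paper discusses.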

ReportDOI
01 May 2013
TL;DR: In this article, the authors present a mission analysis to identify the roles and responsibilities for cyber operations within the Air and Space Operations Center (AOC), separating them from traditional J6/A6 responsibilities.
Abstract: The Air and Space Operations Center (AOC) is the United States Air Force's operational command and control (C2) platform for the planning and execution of Air, Space, and Cyber operations. Operational C2 of cyber forces is a significant challenge that impacts the planning and integration of cyber operations at the AOC. The Joint Staff's Transitional Cyberspace C2 Concept of Operations, released in March 2012, provides a cyber C2 framework at the Geographical and Functional Combatant Command level, but it is not yet clear how Air Force AOCs will work together to meet the requirements of the CONOPS or conduct cyber planning to support the needs of the Joint Force Air Component Commander. This paper summarizes the results of a mission analysis to identify the roles and responsibilities for cyber operations within the AOC, separating them from traditional J6/A6 responsibilities. Additionally, the Joint Staff CONOPS calls for significant reach back for planning, expertise, and potential execution of cyber capabilities; as such, the paper provides a discussion on how to facilitate globally linked, interoperable AOCs for cyber planning and execution.

29 May 2013
TL;DR: The ARAPAIMA project as discussed by the authors is a proximity operations mission sponsored by the US Air Force Office of Scientific Research and the Air Force Research Laboratory, to perform the in-orbit demonstration of proximity operations for visible, infrared, and three-dimensional imaging of resident space objects (RSOs) on a nanosat platform.
Abstract: ARAPAIMA is a proximity operations mission sponsored by the US Air Force Office of Scientific Research and the Air Force Research Laboratory, to perform the in-orbit demonstration of proximity operations for visible, infrared, and three-dimensional imaging of resident space objects (RSOs) on a nanosat platform. The nanosat is of the 6U CubeSat class, with overall dimensions of 11 x 26 x 34 cm and a mass of 9 kg. This paper details the goals and the concept of operations of the mission and presents the current status of the design.

Proceedings ArticleDOI
12 Aug 2013
TL;DR: It is possible to emulate future data link capabilities using the existing in-flight Internet and reduced-cost test equipment; the results indicate that the FIM ConOp, and therefore many other advanced ConOps with equal or lesser response characteristics and data requirements, can be evaluated in flight using the proposed concept.
Abstract: The National Airspace System (NAS) must be improved to increase capacity, reduce flight delays, and minimize environmental impacts of air travel. NASA has been tasked with aiding the Federal Aviation Administration (FAA) in NAS modernization. Automatic Dependent Surveillance-Broadcast (ADS-B) is an enabling technology that is fundamental to realization of the Next Generation Air Transportation System (NextGen). Despite the 2020 FAA mandate requiring ADS-B Out equipage, airspace users lack incentives to equip with the requisite ADS-B avionics. A need exists to validate in flight tests advanced concepts of operation (ConOps) that rely on ADS-B and other data links without requiring costly equipage. A potential solution is presented in this paper. It is possible to emulate future data link capabilities using the existing in-flight Internet and reduced-cost test equipment. To establish proof-of-concept, a high-fidelity traffic operations simulation was modified to include a module that simulated Internet transmission of ADS-B messages. An advanced NASA ConOp, Flight Deck Interval Management (FIM), was used to evaluate technical feasibility. A preliminary assessment of the effects of latency and dropout rate on FIM was performed. Flight hardware that would be used by the proposed test environment was connected to the simulation so that data transfer from aircraft systems to test equipment could be verified. The results indicate that the FIM ConOp, and therefore many other advanced ConOps with equal or lesser response characteristics and data requirements, can be evaluated in flight using the proposed concept.
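The latency-and-dropout assessment described above can be sketched in a few lines: inject random loss and delay into a simulated stream of surveillance messages and measure how many arrive fresh enough to use. The 1 Hz report assumption, exponential latency model, loss rate, and 2 s staleness deadline are illustrative choices, not parameters from the study.

```python
import random

def emulate_link(n_msgs, drop_rate, mean_latency_s, deadline_s, seed=1):
    """Fraction of simulated ADS-B reports delivered and still fresh."""
    rng = random.Random(seed)  # fixed seed for a repeatable experiment
    fresh = 0
    for _ in range(n_msgs):
        if rng.random() < drop_rate:
            continue                                  # report lost on the link
        latency = rng.expovariate(1.0 / mean_latency_s)
        if latency <= deadline_s:
            fresh += 1                                # arrived within the deadline
    return fresh / n_msgs

# e.g. 1000 one-second reports over a lossy in-flight Internet link
print(f"usable: {emulate_link(1000, drop_rate=0.05, mean_latency_s=0.5, deadline_s=2.0):.1%}")
```

Sweeping `drop_rate` and `mean_latency_s` over a grid gives the kind of sensitivity curve a ConOp feasibility assessment needs.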

Proceedings ArticleDOI
01 Nov 2013
TL;DR: WebPuff provides users at CSEPP sites with a suite of planning and response tools that are integrated with a unique chemical dispersion model that provides an advanced level of science on which decisions about public protection can be based.
Abstract: What do you do if there is an accident involving Sarin nerve gas and you are part of the team responsible for protecting thousands of people in the path of this deadly chemical plume? Emergency operations personnel at chemical weapons stockpile sites within the continental United States know exactly what to do. They rely on WebPuff, a state-of-the-art decision support system sponsored by the U.S. Army Chemical Materials Activity (CMA) and developed by IEM, a security consulting firm based in North Carolina's Research Triangle Park. WebPuff is used by military and civilian jurisdictions within the Chemical Stockpile Emergency Preparedness Program (CSEPP), which is jointly managed by the U.S. Army and the Federal Emergency Management Agency (FEMA). WebPuff provides users at CSEPP sites with a suite of planning and response tools that are integrated with a unique chemical dispersion model that provides an advanced level of science on which decisions about public protection can be based. Incorporating real-time and forecast weather, topography data, and current toxicity standards, WebPuff's dispersion model provides the most realistic plume prediction in less than two minutes - enabling military installations to meet a five-minute criterion for notifying civilian jurisdictions of an impending threat. WebPuff's chemical dispersion model has been independently tested and certified by scientists at Dugway Proving Ground [1, 2]. WebPuff incorporates a shared framework for risk management among independently managed military and civil jurisdictions at each site as well as a common understanding of how to plan for and, if necessary, respond to the threat that faces communities around chemical weapons stockpile sites every day. In developing the system, CMA and IEM gained consensus from both military and civilian users on operational requirements, business rules, and detailed designs for reports and computer displays that are foundational to the system.
WebPuff provides users with information that is organized - primarily through visual means - around a common understanding of the threat and a common concept of operations. Moreover, it is a cost-effective solution to emergency preparedness and response because it is built using 100% open-source technology - the customer pays no third-party license fees. The system meets Department of Defense security and interoperability requirements - military bases using the system can communicate securely and effectively with civilian emergency management organizations. As a result, WebPuff is Defense Information Assurance Certification and Accreditation Process (DIACAP) certified with a current Authority to Operate (ATO) on Army networks. To ensure interoperability with civilian jurisdictions, the system uses the Emergency Data eXchange Language (EDXL) Common Alerting Protocol (CAP) developed by the Organization for the Advancement of Structured Information Standards (OASIS). Nearly ten years after its fielding, WebPuff continues to evolve to meet emerging standards and operational concepts and CSEPP communities are still using the system. Its ability to provide trusted results quickly and to truly facilitate cooperation and collaboration among diverse organizations during disaster response has been, and continues to be, the key to its success. While WebPuff was originally designed to support preparedness for chemical weapons accidents, it represents a unique framework, models, and components that can be customized and extended for use with other hazards and other concepts of operations.
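WebPuff's certified dispersion model is not public; the sketch below shows only the textbook Gaussian-puff formulation that plume models of this class build on, for a single instantaneous ground-level release with full ground reflection. The release mass, wind speed, and dispersion coefficients are illustrative assumptions.

```python
import math

def puff_concentration(q_kg, x, y, z, u, t, sx, sy, sz):
    """Gaussian-puff concentration (kg/m^3) at (x, y, z), time t after release.

    q_kg: released mass; u: wind speed along x (m/s);
    sx, sy, sz: dispersion coefficients (m) at time t.
    """
    norm = q_kg / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    downwind = math.exp(-((x - u * t) ** 2) / (2 * sx ** 2))   # puff center advects at u*t
    crosswind = math.exp(-(y ** 2) / (2 * sy ** 2))
    vertical = 2 * math.exp(-(z ** 2) / (2 * sz ** 2))         # factor 2: ground reflection
    return norm * downwind * crosswind * vertical

# peak ground-level value 500 s after release, 5 m/s wind, at the puff center
c = puff_concentration(q_kg=10.0, x=2500, y=0, z=0, u=5.0, t=500,
                       sx=120.0, sy=120.0, sz=60.0)
print(f"{c:.3e} kg/m^3")
```

An operational system like WebPuff layers many puffs, time-varying winds, terrain, and toxicity thresholds on top of this kernel; the sketch only illustrates the basic physics.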

Proceedings ArticleDOI
15 Apr 2013
TL;DR: The challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes are identified, and insights gained by applying Model Based Systems Engineering are highlighted.
Abstract: Systems engineering practices for complex systems and networks now require that requirements, architecture, and concept of operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are its subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that captures a high level of interrelated detail. To date, the Program System Engineering (PSE) team has been able to model each network from their top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based systems engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We will demonstrate the practice of Model Based Systems Engineering applied to integrating space communication networks and summarize its results and impact. We will highlight the insights gained and provide recommendations for its applications and improvements.

01 Jan 2013
TL;DR: The research presented here has led to the development of the Integrated Concept Engineering Framework, which explores and demonstrates the effectiveness of virtual environments, gaming technologies and visualization in improving the CONOPS development process; the research results and findings are reported.
Abstract: Some believe that the weakest link in systems engineering is often between what the stakeholder desires and what the development team believes is needed. The CONOPS can be a means to bridge this understanding gap. The systems engineering community has identified a need to improve the CONOPS development process and increase the level of understanding between stakeholders and engineers. The research presented here has led to the development of the Integrated Concept Engineering Framework to explore and demonstrate the effectiveness of virtual environments, gaming technologies and visualization in improving the CONOPS development process. This work has shown that 3D visualization has the potential to improve how stakeholders reason about operational concepts. This paper will review the need for an improved CONOPS and report on the research results and findings. Finally, the authors will discuss future directions in which the research can further mature. INTRODUCTION At the onset of any engineering challenge, it is often the case that engineers do not fully understand the problems that need to be addressed and the operational environment in which the solution will be deployed. A system's end users typically have a better grasp of these considerations, and successful system development will often hinge on how well users and systems engineers are able to communicate and reach a shared mental model. As user communities grow in size and products are expected to operate in a variety of environments, building a shared mental model is becoming more important and more difficult. Traditional methods, processes and tools used by engineers during the early stages of development are no longer able to keep pace with the increasing complexity of systems, as evidenced by a growing number of systems that fail to meet the needs of their users.
This paper presents research targeted at evolving the way users and systems engineers work together to set the foundation for successful system development. CONCEPT OF OPERATIONS The Concept of Operations (CONOPS) is a document that describes the characteristics of a system from the point of view of its users. The Department of Defense (DOD) summarizes the purpose of a CONOPS as a method of "obtain[ing] consensus among the acquirer, developer, support and user agencies on the operational concept of a proposed system" [1]. The CONOPS effort should be initiated prior to any other system development activity and presents an opportunity for stakeholders to describe the current environment in which they are operating, potential areas for improvement, and needs from a future system or capability. The exact content of a CONOPS can vary based on industry and specific use; however, most defense and aerospace CONOPS tend to follow two prevailing standards, established by IEEE and AIAA [2, 3]. Based on these standards, the CONOPS should address both the current and proposed systems, present anticipated operational scenarios, and include the elements displayed in Figure 1 (Figure 1: Recommended CONOPS elements). (Proceedings of the 2013 Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), Using 3D Gaming Technologies to Improve the Concept of Operations (CONOPS) Process, Korfiatis & Cloutier.) It is important to distinguish between two common uses of the word CONOPS that may lead to confusion. As described in [4], variance in the usage of the term CONOPS can cause misperception of its purpose, value and audience. In the DoD, the higher-order CONOPS refers to the "conduct of military action at the operational level of war" [5]. When working with materiel solutions and the DoD engineering community, the system CONOPS is at a lower level and describes specific characteristics of a system or capability. This work will focus on the system-level CONOPS.
Edson and Frittman provide other common names, purposes and references for the term CONOPS [4]. When developed properly, a well-established CONOPS can provide the following benefits (Table 1: Benefits of a proper CONOPS) [4, 6-9]:
- Allows for consensus by ensuring that the path forward is agreed upon by all stakeholders
- Reduces risk by forcing the predetermination of aspects of the system before it is implemented
- Improves quality by revealing opportunities to leverage technology to increase system performance
- Documents system characteristics without being overly technical and verbose
- Fosters a collaborative environment where users can state their expectations qualitatively
- Records design constraints and rationale
- Enhances the design of legacy systems
- Maintains a living record of how the development of a system has changed
However, research has shown that CONOPS are often under-utilized and under-developed, which not only inhibits the benefits in Table 1, but also introduces negative effects at each stage of system development. Current CONOPS Shortcomings Based on studies of CONOPS and their development process, significant shortcomings exist that hinder the effectiveness of CONOPS. Typically, CONOPS are developed in textual form that requires multiple iterations of writing and editing. In the Department of Transportation's CONOPS guide, the importance of including each view of the system corresponding to every stakeholder is stressed [10]. However, using the current document-driven approach to CONOPS development, inclusion of all stakeholders is difficult to manage and time consuming. This often requires an organization to choose between excluding some stakeholders or commencing requirements elicitation before the CONOPS has been completed [11]. Two studies have been conducted investigating the current state of practice of CONOPS development.
Roberts and Edson administered a survey to 108 practicing systems engineers in the DoD ecosystem, and discovered some startling trends, which they presented at the NDIA Systems Engineering Conference [12]. A summary of some of their findings (Table 2: Results of CONOPS survey [12]) is recounted here. Of 108 survey respondents:
- 36% have never worked a program with a CONOPS
- 31% stated the CONOPS was completed by bid phase
- 27% stated the CONOPS was completed by program startup
- 50% witnessed CONOPS that were not maintained throughout the development lifecycle
- 74% of CONOPS creation involved customers during creation
- 70% of CONOPS creation involved users during creation
- 50% acknowledged use of a standard during the development of a CONOPS
Given the shortcomings identified by Roberts and Edson in the CONOPS development process, Cloutier et al. [11] conducted a state of practice study to examine the actual CONOPS document. Cloutier et al. examined sixty publicly available CONOPS documents and compared the information contained within to the recommendations of four dominant CONOPS standards. A full account of the results can be seen in [6, 11], with some highlights recounted below:
- Less than 75% of the CONOPS actually list or identify specific mission needs.
- Nearly a third had no description of the current system, situation, or context in which it was embedded.
- Little attention was paid to other stakeholders who do not directly interact with the system, including regulatory agencies and acquisitions and government personnel.
- Personnel-related issues (e.g., personnel needs, activities, types, profiles) were rarely discussed.
- Less than 20% of the CONOPS identified associated risks of the system and its development.
Since the CONOPS is an entry point for the future user into the system development process, it is critical that it be written as an accurate, unambiguous representation of user needs. A completed CONOPS document is often lengthy, dense, and static.
These characteristics make it difficult to update throughout system development, reduce the likelihood that engineers will read and understand the document, and do not allow for what-if analysis. Finally, today's textual CONOPS must be manually translated into artifacts that are useful for requirements engineers, system analysts and architects. A major goal of the model-based systems engineering (MBSE) initiative is the integration of MBSE methods, processes and tools across the full system development lifecycle [13]. While researchers and practitioners have seen success in interoperability of requirements, architecture, design and testing in a model-centric environment [14, 15], there has been little progress in linking the stakeholder directly to the MBSE process [16]. This research focuses on bridging the gap between stakeholder and system engineer during early systems engineering and conceptual design. Through use of 3D visualization, gaming technology and immersive environments, the authors have developed the Integrated Concept Engineering Framework as an intuitive, easy to use, and powerful method for developing and analyzing a graphical CONOPS. INTEGRATED CONCEPT ENGINEERING FRAMEWORK This work has had two primary sponsors. The first sponsor to participate was the DoD Intelligence Community (IC). Later the US Army Armament Research, Development and Engineering Center (ARDEC) joined the research effort. The work has been funded through the Systems Engineering Research Center (SERC), which is a University Affiliated Research Center. Both sponsors have been heavily involved in this CONOPS research.
The initial goals of this research were to understand the state of practice of CONOPS development and apply emerging technology to improve the way CONOPS are created [11]. At the conclusion of this initial assessment, follow on research was conducted to develop a proof of concept prototype to investigate the effectiveness of applying 3D visualization, gaming engines and immersive environments to CONOPS development. With inspiration fro

Proceedings ArticleDOI
01 Oct 2013
TL;DR: In this paper, a simulation model of the airport performance that can be achieved under departure metering, as described in the Surface CDM Concept of Operations, is presented, and results and insights gained from the model are discussed.
Abstract: Surface Collaborative Decision-Making (SCDM) is a process for data exchange to improve the efficient movement of arrivals and departures on and near the airport surface. The Surface CDM Concept of Operations (the ConOps) describes a vision for data exchange as well as a process for metering the flow of departures entering the movement area in order to reduce the need for physical departure queues [1]. This departure metering capability is known as Departure Reservoir Management (DRM). Under the DRM concept, flight operators provide and maintain an updated Earliest Off-Block Time (EOBT) for each flight indicating when the operator expects the flight to be ready to push back from the gate. The DRM assigns each flight a Target Movement Area entry Time (TMAT) when departure metering is in effect. The DRM selects the timing of the TMATs to maintain a queue at the end of each departure runway of the Target Queue Length (measured in aircraft) whenever there is sufficient demand. If the demand and capacity of each runway are as forecast, and taxi-out and related processes occur as predicted, a queue of the Target Queue Length will be maintained. The DRM capability is expected to be implemented at several airports in 2015 as part of the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen). When the information provided to DRM contains inaccuracies, maintenance of the target queue length may be compromised. In response to updates containing poor information, DRM may readjust TMATs in an attempt to maintain the desired queue. Updates to TMATs present challenges to the flight operators, who attempt to orchestrate aircraft loading, gate operations, passenger communications, crew times, and a variety of other factors in order to hit their assigned TMATs.
A variety of controls is envisioned in the ConOps to allow the Departure Reservoir Coordinator (DRC) to encourage TMAT stability while maintaining the desired queue lengths. In this paper we discuss the Surface CDM Simulation, a simulation model of airport performance that can be achieved under DRM as described in the Surface CDM Concept of Operations, as well as results and insights gained from the model. We show that the concept can work very well when provided with accurate, timely information from operators, but that inaccurate information can lead to undesirable outcomes. We also find that the different TMAT stability controls provided in the ConOps are of varying effectiveness. We expect these results to be useful to future operators of Surface CDM DRM capabilities, to the designers of tools enabling Surface CDM, and to developers focused on future refinements of the Surface CDM Concept of Operations.
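The queue-maintenance idea behind DRM can be sketched as a toy scheduler: space runway departures at the service rate and back each TMAT off by the taxi time plus the time a flight will spend in a queue of the target length. The taxi time, runway service interval, and target queue length below are assumed values, not figures from the ConOps, and real DRM logic handles far more (demand forecasts, TMAT stability controls, multiple runways).

```python
def assign_tmats(eobts_s, taxi_s=600, service_s=90, target_queue=4):
    """Return (tmat, takeoff) pairs in seconds after a common reference.

    Each flight reaches the runway after taxi_s, waits behind target_queue
    aircraft each served every service_s seconds, then departs.
    """
    schedule, prev_takeoff = [], None
    for eobt in sorted(eobts_s):
        # earliest feasible wheels-up: pushback + taxi + time spent queued
        earliest = eobt + taxi_s + target_queue * service_s
        takeoff = earliest if prev_takeoff is None else max(earliest,
                                                           prev_takeoff + service_s)
        # enter the movement area just early enough to hold the target queue
        tmat = max(eobt, takeoff - taxi_s - target_queue * service_s)
        schedule.append((tmat, takeoff))
        prev_takeoff = takeoff
    return schedule

for tmat, takeoff in assign_tmats([0, 30, 60, 90, 1200]):
    print(f"TMAT {tmat:5.0f} s  ->  wheels-up {takeoff:5.0f} s")
```

When demand exceeds runway capacity, the TMATs come out spaced at the service interval, which is exactly the metering behavior the ConOps is after; feeding the function inaccurate EOBTs shows how quickly the schedule degrades.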

Book
27 Jun 2013
TL;DR: In this paper, the authors discuss the innovative information technologies, human-machine interfaces, and simulation capabilities that must be created in order to develop, test, and validate deep-space mission operations.
Abstract: Historically, manned spacecraft missions have relied heavily on real-time communication links between crewmembers and ground control for generating crew activity schedules and working time-critical off-nominal situations. On crewed missions beyond the Earth-Moon system, speed-of-light limitations will render this ground-centered concept of operations obsolete. A new, more distributed concept of operations will have to be developed in which the crew takes on more responsibility for real-time anomaly diagnosis and resolution, activity planning and replanning, and flight operations. I will discuss the innovative information technologies, human-machine interfaces, and simulation capabilities that must be created in order to develop, test, and validate deep-space mission operations.
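The speed-of-light argument can be made concrete with a one-line calculation of one-way signal delay over deep-space distances; the Mars distance range used here is approximate.

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s (defined value)

def one_way_delay_s(distance_km):
    """One-way light-time over the given distance."""
    return distance_km * 1000 / C_M_PER_S

for label, d_km in [("Moon", 384_400),
                    ("Mars (closest)", 54_600_000),
                    ("Mars (farthest)", 401_000_000)]:
    print(f"{label}: {one_way_delay_s(d_km):7.1f} s one-way")
```

At Mars the round trip runs from roughly 6 to 44 minutes, which is why the ground-centered model of real-time anomaly resolution breaks down beyond the Earth-Moon system.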

01 Jan 2013
TL;DR: This paper presents a space program developed at the University of North Dakota that successfully combined the aforementioned elements; the formation, operations and results to date of the program are discussed.
Abstract: The University of North Dakota's OpenOrbiter program is providing an interdisciplinary learning experience for students from numerous STEM and non-STEM fields. OpenOrbiter allows student participants to experience not just the engineering and other technical aspects of the space program; it also involves students from diverse, non-STEM fields (including communications, entrepreneurship, management, visual arts, public policy and English). Traditional STEM fields such as mathematics, physics, electrical engineering, mechanical engineering, computer science and technology are also well represented. Students from specialty programs at the University of North Dakota, including atmospheric sciences, Earth System Sciences and Policy, aviation, Space Studies and Air Traffic Control, are also participating. Students began by developing mission concepts, refining them and down-selecting to a final concept that is being implemented. Students chose the OpenOrbiter name, created a logo, associated branding, a web site and a social media presence. In addition to technical objectives, program accomplishments include the creation of numerous technical papers, holding a public forum, operating a launch event and running a successful media campaign. Students involved in the program gained an appreciation for the needs of those working in disciplines outside their own, and learned valuable inter-field interaction skills. Introduction Multi-disciplinary efforts face a variety of challenges; these include melding divergent methodologies, coordinating efforts and solving interdepartmental and interpersonal challenges. Space mission design and implementation is an inherently multi-disciplinary effort. A spacecraft requires mechanical, electrical and computational systems. A space program augments the disciplines required with communications, public policy, management and disciplines to support mission design (physics, mathematics, etc.).
This paper presents a space program developed at the University of North Dakota which successfully combined the aforementioned elements. The formation, operations and results-to-date of the program are presented and discussed. This program, called OpenOrbiter, seeks to provide participating students and faculty members with an experience that can only be attained via an integrated, operating space program. In addition to the technical challenges (the typical consideration in small satellite programs), participants were involved with the policy and political dimensions of program operation, creating a mission brand, securing appropriate resources, disseminating mission information, mission planning and implementation. This innovative approach facilitated the analysis of key issues that are typically the crux of real space missions. In many cases, it is not the technology that is the problem, but the other considerations. The OpenOrbiter project exposed engineering and other STEM students to these considerations; it also provided students in other areas (that normally do not have the opportunity to participate in the technical program) with the opportunity and the associated learning benefits. Program Structure The OpenOrbiter program is structured around a multi-level organizational hierarchy. The program is led by a student program director and deputy program director. Reporting to these individuals are four associate directors (for electrical, software, architecture, and communications, outreach and policy) and three managers (for mechanical, operations, and ground station). Three of the associate directors have managers reporting to them; these managers include ground station software, operating software, payload software, sensors and bus, optical systems, power, electrical communications, group communications, outreach, and policy. Each associate director and manager is advised by a faculty mentor. 
These faculty mentors are from various departments spanning multiple colleges at the University of North Dakota. Program management is effected via weekly meetings between each manager and his or her group members, and a weekly meeting of all team leads. Associate directors and managers are required to send out a weekly e-mail to their team members and the communications team summarizing current tasks in progress. The communications team creates a summary version that is sent to all participants. Associate directors and managers also have frequent contact with their faculty mentors, on an as-needed basis. Approximately 300 students and 20 faculty members are involved directly or indirectly with the program. The organizational structure is shown in Figure 1. Student participation was solicited through in-class presentations across a variety of disciplines. These presentations were largely given by the associate directors to find managers and/or fill their respective teams. A number of intake sessions were held to introduce prospective participants to the various opportunities for participation. At the University of North Dakota, the aerospace college (housing the Computer Science Department and the Space Studies Department, among others) is located on the opposite end of campus from the other colleges. Because of this, significant effort was undertaken to promote the program in classes in numerous departments, and intake sessions were held on both ends of campus to facilitate attendance by students in those areas. Initial Outcomes The first semester of OpenOrbiter's operations has been very successful. The team has created a high-level spacecraft design and detailed implementation plans related to software, electrical and mechanical subsystems.
Several aspects of the project's technical work, the architecture, and the OPEN framework itself were presented in seven papers submitted to the IEEE Aerospace Conference and two papers submitted to the Reinventing Space Conference. OpenOrbiter also participated in the North Dakota Space Robotics Forum, presenting a number of posters. OpenOrbiter was featured on the front page of the Dakota Student (the University of North Dakota's student newspaper) and on Prairie Public Radio (the local NPR affiliate). This publicity is a testament to interest in the project concept and to the hard work and skill of the communications and outreach groups.

An innovative structural design has been created and prepared for 3-D printing. This structure includes an electrical stacking design that does not require physical stacking, an innovative board-locking mechanism (designed to prevent accidental seating problems with spacecraft electrical boards), and a large 5 cm x 5 cm x 10 cm payload area (with a substructure for attachment purposes). Software for operating the spacecraft has been designed and its implementation has commenced. Software for the payload (an image superresolution and mosaicking system) has also been designed and is currently under development. Design work on software to run the ground station and perform data analysis is ongoing. Electrical requirements have been identified. The various electrical teams are at different points in the design process, with the power system being the most advanced and the radio design requiring the most additional work. The mission concept of operations is still under development; it is being continuously updated based upon decisions and changes made throughout the project. The operations team is also liaising with the ground station software team to facilitate the ground station software design.

Ongoing Work

While the initial accomplishments have been notable, much work remains to be done.
The team must progress the designs into a working hardware system, conduct testing, and prepare the spacecraft for launch. It is anticipated that each semester will require, and thus begin with, a student recruiting process. The ability to complete the project, the quality of the product, and the ability to deliver the proposed national benefit all depend upon the number and quality of the students that are attracted. Previous work has examined the numerous considerations involved in a project utilizing student workers. It is clear that many challenges lie ahead; however, it is only through overcoming them that all three goals can be reached concurrently.

Conclusion

The foregoing has presented a program that is driving student and faculty intrapreneurship, entrepreneurship, and significant innovation at the University of North Dakota. It serves as a model to emulate nationwide.

[Figure 1: OpenOrbiter organizational chart, showing the principal investigator/co-investigator, the program director and deputy director, the associate directors and managers (mission architecture refinement, electrical, software, mechanical, operations, ground station, and communications, outreach and policy), and their respective teams.]

12 Aug 2013
TL;DR: A feasibility study of the Networked Air Traffic Infrastructure Validation Environment (NATIVE) concept, which aims to simulate future aircraft surveillance and communications equipage and employ an existing commercial data link to exchange data during dedicated flight tests, is presented.
Abstract: Next Generation Air Transportation System (NextGen) applications reliant upon aircraft data links such as Automatic Dependent Surveillance-Broadcast (ADS-B) offer a sweeping modernization of the National Airspace System (NAS), but the aviation stakeholder community has not yet established a positive business case for equipage, and message content standards remain in flux. It is necessary to transition promising Air Traffic Management (ATM) Concepts of Operations (ConOps) from simulation environments to full-scale flight tests in order to validate user benefits and solidify message standards. However, flight tests are prohibitively expensive, and message standards for Commercial-off-the-Shelf (COTS) systems cannot support many advanced ConOps. It is therefore proposed to simulate future aircraft surveillance and communications equipage and to employ an existing commercial data link to exchange data during dedicated flight tests. This capability, referred to as the Networked Air Traffic Infrastructure Validation Environment (NATIVE), would emulate aircraft data links such as ADS-B using in-flight Internet and easily installed test equipment. By utilizing low-cost equipment that is easy to install and certify for testing, advanced ATM ConOps can be validated, message content standards can be solidified, and new standards can be established through full-scale flight trials without unnecessary or expensive equipage or extensive flight test preparation. This paper presents the results of a feasibility study of the NATIVE concept. To determine requirements, six NATIVE design configurations were developed for two NASA ConOps that rely on ADS-B. The performance characteristics of three existing in-flight Internet services were investigated to determine whether their performance is adequate to support the concept. Next, a study of requisite hardware and software was conducted to examine whether and how the NATIVE concept might be realized.
Finally, to determine a business case, economic factors were evaluated and a preliminary cost-benefit analysis was performed.