
Showing papers by "Raytheon" published in 2020



Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed hierarchical MPC approach outperforms the baseline controller across a range of electrical loading in terms of both efficient energy management and constraint satisfaction.
Abstract: A hierarchical model predictive control (MPC) approach is developed for energy management of aircraft electro-thermal systems. High-power electrical systems on board modern and future aircraft perform a variety of mission- and flight-critical tasks, while thermal management systems actively cool these electronics to satisfy component-specific temperature constraints, ensuring safe and reliable operation. In this paper, coordination of these electrical and thermal systems is performed using a hierarchical control approach that decomposes the multi-energy domain, constrained optimization problem into smaller, more computationally efficient problems that can be solved in real-time. A hardware-in-the-loop (HIL) experimental testbed is used to evaluate the proposed hierarchical MPC in comparison to a baseline controller for a scaled, laboratory representation of an aircraft electro-thermal system. Experimental results demonstrate that the proposed approach outperforms the baseline controller across a range of electrical loading in terms of both efficient energy management and constraint satisfaction.
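As a rough, self-contained illustration of the kind of constrained optimization one layer of such a controller might solve at each step (a minimal sketch only; the thermal model, limits, horizon, and cost below are assumed values, not the paper's testbed model or its hierarchical decomposition):

```python
# Minimal single-layer MPC sketch for a cooled electronic component pair.
# All matrices and limits are illustrative assumptions, not the paper's model.
import cvxpy as cp
import numpy as np

n_steps = 30                            # prediction horizon (assumed)
T0 = np.array([25.0, 30.0])             # temperatures above ambient [K] (assumed)
T_max = np.array([40.0, 45.0])          # component temperature limits (assumed)
A = np.array([[0.95, 0.00],             # discrete-time thermal dynamics (assumed)
              [0.02, 0.94]])
B = np.array([[-2.0], [-1.5]])          # cooling effect of coolant command (assumed)
d = np.array([2.5, 2.0])                # heat load from the electrical system (assumed)

u = cp.Variable((1, n_steps))           # coolant command, 0..1
T = cp.Variable((2, n_steps + 1))       # predicted temperatures

constraints = [T[:, 0] == T0]
for k in range(n_steps):
    constraints += [T[:, k + 1] == A @ T[:, k] + B @ u[:, k] + d,
                    T[:, k + 1] <= T_max,
                    u[:, k] >= 0, u[:, k] <= 1]

cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints).solve()
print("first coolant command:", u.value[:, 0])   # apply, then re-solve at the next step
```

In the hierarchical scheme described above, an upper layer would coordinate references and load forecasts across several such lower-level problems so that each remains small enough to solve in real time.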

27 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed continuous variable quantum reservoir computing in a single nonlinear oscillator and demonstrated quantum-classical performance improvement, and identified its likely source: the nonlinearity of quantum measurement.
Abstract: Realizing the promise of quantum information processing remains a daunting task, given the omnipresence of noise and error. Adapting noise-resilient classical computing modalities to quantum mechanics may be a viable path towards near-term applications in the noisy intermediate-scale quantum era. Here, we propose continuous variable quantum reservoir computing in a single nonlinear oscillator. Through numerical simulation of our model we demonstrate quantum-classical performance improvement, and identify its likely source: the nonlinearity of quantum measurement. Beyond quantum reservoir computing, this result may impact the interpretation of results across quantum machine learning. We study how the performance of our quantum reservoir depends on Hilbert space dimension, how it is impacted by injected noise, and briefly comment on its experimental implementation. Our results show that quantum reservoir computing in a single nonlinear oscillator is an attractive modality for quantum computing on near-term hardware.

27 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the power of citizen scientists operating smaller observatories (≤1 m) to keep ephemerides "fresh," defined here as when the 1σ uncertainty in the mid-transit time is less than half the transit duration.
Abstract: Due to the efforts of numerous ground-based surveys and NASA's Kepler and Transiting Exoplanet Survey Satellite (TESS), there will be hundreds, if not thousands, of transiting exoplanets ideal for atmospheric characterization via spectroscopy with large platforms such as the James Webb Space Telescope and ARIEL. However, their next predicted mid-transit times become increasingly uncertain over time, so significant overhead would be required to ensure the detection of the entire transit. As a result, follow-up observations to characterize these exoplanetary atmospheres would make less-efficient use of an observatory's time, which is an issue for large platforms where minimizing observing overheads is a necessity. Here we demonstrate the power of citizen scientists operating smaller observatories (≤1 m) to keep ephemerides "fresh," defined here as when the 1σ uncertainty in the mid-transit time is less than half the transit duration. We advocate for the creation of a community-wide effort by citizen scientists to perform ephemeris maintenance on transiting exoplanets. Such observations can be conducted with even a 6 inch telescope, which has the potential to save up to ~10,000 days for a 1000-planet survey. Based on a preliminary analysis of 14 transits from a single 6 inch MicroObservatory telescope, we empirically estimate the ability of small telescopes to benefit the community. Observations with a small-telescope network operated by citizen scientists can resolve stellar blends to within 5''/pixel, follow up long-period transits in short-baseline TESS fields, monitor epoch-to-epoch stellar variability at a precision of 0.67% ± 0.12% for an 11.3 V-mag star, and search for new planets or constrain the masses of known planets with transit timing variations greater than two minutes.
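The "fresh" criterion can be made concrete with standard ephemeris error propagation (our notation, not taken from the paper):

```latex
t_N = t_0 + N\,P, \qquad
\sigma_{t_N} = \sqrt{\sigma_{t_0}^{2} + N^{2}\,\sigma_P^{2}}, \qquad
\text{fresh while } \sigma_{t_N} < \tfrac{1}{2}\,T_{\mathrm{dur}},
```

so the epoch at which an ephemeris goes stale is set largely by the period uncertainty, and a single new transit observation re-anchors the epoch and tightens the period.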

27 citations



Journal ArticleDOI
TL;DR: In this article, an efficient model to simulate the competitive growth of epitaxial columnar dendritic grains is proposed, which tracks the dynamic changes in the dendrites emanating from discrete points along the solid/liquid interface of a quasi-steady melt pool.
Abstract: Epitaxial columnar grain growth is a prevalent microstructural feature in the additive manufacturing (AM) of metal components such as Inconel with a cubic crystal lattice (face-centered cubic (FCC) or body-centered cubic (BCC)). These columnar grains evolve from the partly molten grains in the substrate or in the previously solidified metal. This work proposes an efficient model to simulate the competitive growth of epitaxial columnar dendritic grains. The proposed model tracks the dynamic changes in the dendrites emanating from discrete points along the solid/liquid interface of a quasi-steady melt pool (MP). These dynamic changes include the convergence and divergence of growing dendrites. The model is extended to predict the microstructure of large 3D parts and is validated experimentally against laser powder bed fusion (L-PBF) and wire-arc additive manufacturing (WAAM) builds. The microstructure and pole figures are predicted for Inconel 718 samples produced by L-PBF and Inconel 740H samples produced by WAAM. The model predictions compare well with the observed microstructures and pole figures for both processes.

23 citations



Posted Content
TL;DR: This paper explores the use of the popular bidirectional language model, BERT, to model and learn the relevance between English queries and foreign-language documents in the task of cross-lingual information retrieval, and introduces a deep relevance matching model based on BERT.
Abstract: Multiple neural language models have been developed recently, e.g., BERT and XLNet, and achieved impressive results in various NLP tasks including sentence classification, question answering and document ranking. In this paper, we explore the use of the popular bidirectional language model, BERT, to model and learn the relevance between English queries and foreign-language documents in the task of cross-lingual information retrieval. A deep relevance matching model based on BERT is introduced and trained by finetuning a pretrained multilingual BERT model with weak supervision, using home-made CLIR training data derived from parallel corpora. Experimental results of the retrieval of Lithuanian documents against short English queries show that our model is effective and outperforms the competitive baseline approaches.
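As a rough illustration of the relevance-matching setup (not the authors' exact architecture, data, or released model; the checkpoint name and label convention below are placeholders), a multilingual BERT cross-encoder can score an English query against a foreign-language passage as follows:

```python
# Hedged sketch: query-document relevance scoring with multilingual BERT.
# Fine-tuning with weak supervision (as described in the abstract) is not shown;
# the model name and the "label 1 = relevant" convention are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

query = "election results"                      # English query
doc = "Rinkimų rezultatai buvo paskelbti..."    # Lithuanian document snippet

inputs = tokenizer(query, doc, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
relevance = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"relevance score: {relevance:.3f}")
```

At training time, the same query-document pair encoding would be fine-tuned on relevance labels derived from parallel corpora, as the abstract describes.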

20 citations


Journal ArticleDOI
Fred Daum
TL;DR: The calculations show that the minimum cost quantum radar at X-band is many orders of magnitude more expensive than the corresponding classical radar, even assuming the most optimistic wideband-phased array radar architecture.
Abstract: We compute the minimum cost for an optimal quantum radar, and we compare it with the cost of actual real world classical radars as a function of range. Our calculations show that the minimum cost quantum radar at X-band is many orders of magnitude more expensive than the corresponding classical radar, even assuming the most optimistic wideband-phased array radar architecture. We also assume that the quantum radar is optimal; that is, the effective signal-to-noise ratio is 6 dB better than for a classical radar with the same transmit power and bandwidth at low photon flux per mode. Finally, we discuss many practical issues and potential solutions.
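To put the quoted 6 dB figure in perspective (our own back-of-the-envelope, not a calculation from the paper): under the standard radar range equation the received SNR scales as the inverse fourth power of range, so a fixed 6 dB (factor of 4) SNR advantage buys only a modest range extension,

```latex
\mathrm{SNR} \propto R^{-4}
\;\;\Longrightarrow\;\;
\frac{R_{\mathrm{quantum}}}{R_{\mathrm{classical}}} = 4^{1/4} \approx 1.41
\quad \text{(equal transmit power and bandwidth).}
```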

16 citations


Journal ArticleDOI
TL;DR: A Design for any X Manufacturing (DFXM) method is introduced to use at early design stages to identify the best process for a given product design in cases where comprehensive current process databases may not yet be available to a designer to screen process choices.
Abstract: Design for additive manufacturing (DFAM) calls for more complex designs that best exploit unique design freedoms to improve a product. Conversely, less complex designs are generally more suitable for conventional manufacturing processes because of the higher cost of producing more complex features. As additive manufacturing (AM) emerges as an increasingly viable option for producing products beyond initial prototyping, the choice between conventional and additive manufacturing must be made as early as possible in the design process, because this choice can substantially affect how the product is designed. Reaching the right decision too late in a design process leads to wasted design time, increased time to market, a functionally inferior design, and/or a costlier product. To address this critical decision, we introduce a Design for any X Manufacturing (DFXM) method for use at early design stages to identify the best process for a given product design in cases where comprehensive, current process databases may not yet be available to a designer for screening process choices. The DFXM method customises targeted questions to break concepts down into their key elements while capturing any known, disparate process choices within consistent formulations. Within these formulations, the method relates whatever measurable metrics are available for any criteria at the conceptual design stage so that candidate processes can be evaluated consistently. A technique is introduced to distill voluminous process-capability information down to what is needed for this specialised early-stage decision. After initial inputs from a designer, an algorithm automatically computes the best process choice as a function of expected order quantity. Three illustrative case studies demonstrate the practical application of the DFXM method in representative design scenarios.
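A common way to see why expected order quantity drives the conventional-versus-additive decision is a simple cost crossover; the sketch below uses made-up cost figures and is not the DFXM algorithm itself (which also weighs other criteria captured by its targeted questions):

```python
# Toy cost crossover between additive and conventional manufacturing.
# Numbers are illustrative only; the DFXM method works from designer inputs and
# process-capability data rather than this two-parameter cost model.
def additive_cost(qty, unit_cost=120.0):
    return unit_cost * qty                       # little fixed tooling, high unit cost

def conventional_cost(qty, tooling=25000.0, unit_cost=15.0):
    return tooling + unit_cost * qty             # high fixed tooling, low unit cost

for qty in (10, 100, 500, 1000):
    best = "additive" if additive_cost(qty) < conventional_cost(qty) else "conventional"
    print(f"qty={qty:5d}: AM={additive_cost(qty):9.0f}  conv={conventional_cost(qty):9.0f}  -> {best}")
```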

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used the color-magnitude diagram (CMD) fitting algorithm MATCH to derive the star formation history (SFH) and find that it is consistent with the typical dwarf irregular or transitional dwarf galaxy (dTrans) in the Local Group.
Abstract: A census of the satellite population around dwarf galaxy primary hosts in environments outside the Local Group is essential to understanding Λ cold dark matter galaxy formation and evolution on the smallest scales. We present deep optical Hubble Space Telescope imaging of the gas-rich, faint dwarf galaxy Antlia B (M_V = −9.4)—a likely satellite of NGC 3109 (D = 1.3 Mpc)—discovered as part of our ongoing survey of primary host galaxies similar to the Magellanic Clouds. We derive a new tip of the red giant branch distance of D = 1.35 ± 0.06 Mpc (m − M = 25.65 ± 0.10), consistent with membership in the nearby NGC 3109 dwarf association. The color–magnitude diagram (CMD) shows both a prominent old, metal-poor stellar component and confirms a small population of young, blue stars with ages ≾1 Gyr. We use the CMD fitting algorithm MATCH to derive the star formation history (SFH) and find that it is consistent with the typical dwarf irregular or transitional dwarf galaxy (dTrans) in the Local Group. Antlia B shows relatively constant stellar mass growth for the first ~10–11 Gyr and almost no growth in the last ~2–3 Gyr. Despite being gas-rich, Antlia B shows no evidence of active star formation (i.e., no Hα emission) and should therefore be classified as a dTrans dwarf. Both Antlia B and the Antlia dwarf (dTrans) are likely satellites of NGC 3109, suggesting that the cessation of ongoing star formation in these galaxies may be environmentally driven. Future work studying the gas kinematics and distribution in Antlia B will explore this scenario in greater detail. Our work highlights the fact that detailed studies of nearby dwarf galaxies in a variety of environments may continue to shed light on the processes that drive the SFH and evolution of dwarf galaxies more generally.
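For readers unfamiliar with the distance modulus quoted above, the conversion to distance is the standard one:

```latex
m - M = 25.65 \;\Rightarrow\; D = 10^{(m - M + 5)/5}\ \mathrm{pc}
      = 10^{6.13}\ \mathrm{pc} \approx 1.35\ \mathrm{Mpc},
```

consistent with the tip-of-the-red-giant-branch distance reported in the abstract.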

Journal ArticleDOI
01 Dec 2020-JOM
TL;DR: In this paper, a sequentially coupled thermomechanical analysis of wire arc additive manufacturing (WAAM) of B91 steel is conducted to quantify the residual stress variation across the component.
Abstract: Wire arc additive manufacturing (WAAM) is an energy-efficient manufacturing technique used for near-net-shape production of functional industrial components. However, heat accumulation during deposition and the associated mechanical and metallurgical changes result in complex residual stress profiles across the cross section of the fabricated components. These residual stresses are detrimental to the service life of the components. In this study, sequentially coupled thermomechanical analysis of WAAM B91 steel is conducted to quantify the residual stress variation across the component. The thermomechanical analysis includes a transient heat transfer model and a static stress model that incorporates the transformation-induced plasticity due to martensitic phase transformation. The experimentally calibrated heat transfer model mirrors the temperature variation of the system during the deposition. The results from the stress model are validated via x-ray diffraction measurements, and the numerical results are in good agreement with the experimental data.

Journal ArticleDOI
14 Jul 2020
TL;DR: This letter compares three deep rolling force control strategies: position-based rolling with open-loop force control, impedance control, and gradient-based iterative learning control (ILC).
Abstract: Large industrial robots offer an attractive option for deep rolling in terms of cost and flexibility. These robots are typically designed for fast and precise motion, but they may be commanded to perform force control by adjusting the position setpoint based on measurements from a wrist-mounted force/torque sensor. Contact forces during deep rolling may be as high as 2000 N. Force control performance is affected by robot dynamics, the robot joint servo controllers, and motion-induced inertial forces. In this letter, we compare three deep rolling force control strategies: position-based rolling with open-loop force control, impedance control, and gradient-based iterative learning control (ILC). Open-loop force control is easy to implement but does not correct for any force deviation. Impedance control uses force feedback but does not track non-constant force profiles well. The ILC augments the impedance control by updating the commanded motion and force profiles based on the motion and force error trajectories from the previous iteration. The update is based on the gradient of the motion and force trajectories with respect to the commanded motion and force. We show that this gradient may be generated experimentally without the need for an explicit model. This is possible because the mapping from the commanded joint motion to the actual joint motion is nearly identical for all joints in industrial robots. We have evaluated the approach on a physical testbed using an ABB robot and demonstrated the convergence of the ILC scheme. The final ILC tracking performance for a trapezoidal force profile improves by over 70% in terms of RMS error compared with the impedance controller.
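The iteration-to-iteration idea can be illustrated on a toy single-channel contact model (a much simplified P-type ILC; the paper's scheme updates both motion and force commands using experimentally obtained gradients on a real robot, which is not reproduced here, and the plant and gain below are assumptions):

```python
# Toy iterative learning control (ILC) for force tracking on a stand-in plant.
import numpy as np

T, dt = 200, 0.01
t = np.arange(T) * dt
f_ref = np.minimum(2000.0 * t / 0.5, 2000.0)         # ramp to 2000 N, then hold

def rollout(cmd):
    """Toy closed-loop contact response to a commanded force profile."""
    f = np.zeros(T)
    for k in range(1, T):
        f[k] = 0.5 * f[k - 1] + 0.5 * cmd[k - 1] - 5.0   # lag plus constant offset
    return f

cmd = f_ref.copy()                                    # iteration 0: feed the reference
gain = 1.0                                            # learning gain (assumed)
for it in range(8):
    err = f_ref - rollout(cmd)
    print(f"iter {it}: RMS force error = {np.sqrt(np.mean(err**2)):.1f} N")
    cmd[:-1] += gain * err[1:]                        # shift one sample for the plant delay
```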

Journal ArticleDOI
TL;DR: A data-driven simulation algorithm is proposed to model a complex, manual manufacturing process in a generic, reusable way. The results demonstrate that production managers can make more informed early decisions that help keep assembly schedules in check and limit wasted effort when disruptions occur in the supply chain of parts sourced for the assembly.

Journal ArticleDOI
TL;DR: In this article, the Johnson-Cook and Preston-Tonks-Wallace plasticity models were used to compare single-particle impact morphology outputs with experimental microstructures using scanning electron microscopy and optical microscopy.
Abstract: Prior work has demonstrated greater antipathogenic efficacy for nanostructured copper cold spray coatings than for conventional copper cold spray coatings, while both maintain greater contact killing/inactivation rates than other thermal spray deposition methods. Recent work has focused mainly on the nanostructured coatings' greater efficacy. However, the antimicrobial efficacy of conventional copper cold spray coatings may be improved by identifying processing parameters that yield microstructures with the greatest concentration of atomic copper ion diffusion pathways. Since ideal processing parameters for a given application can be computed in silico via finite element analysis, the fundamental computational frameworks for doing so are developed here using the Johnson–Cook and Preston–Tonks–Wallace plasticity models. Modeled single-particle impact morphologies were compared with experimental microstructures using scanning electron microscopy and optical microscopy. The computed von Mises flow stresses associated with the two plasticity models were compared with traditional static nanoindentation data as well as dynamic spherical nanoindentation stress–strain curves. Continued work with the finite element analysis framework developed herein will enable the best cold spray parameters to be identified for optimized antimicrobial properties as a function of deformation-mediated microstructures, while still maintaining the structural integrity of the deposited material. Subsequent work will extend the finite element analysis models to multi-particle impacts once spray-dried and gas-atomized copper powder particles have been appropriately meshed.
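The Johnson–Cook flow stress model named above has a standard closed form, shown below with commonly quoted OFHC copper constants as placeholders (the study's calibrated parameters, and the Preston–Tonks–Wallace model, are not reproduced here):

```python
# Johnson-Cook flow stress (standard form). Parameter values are the widely
# quoted OFHC copper constants, used here only as placeholders.
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, temp,
                        A=90e6, B=292e6, n=0.31, C=0.025, m=1.09,
                        eps_rate_ref=1.0, T_ref=298.0, T_melt=1356.0):
    """Flow stress [Pa] vs. plastic strain, strain rate [1/s], and temperature [K]."""
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * np.log(max(eps_rate / eps_rate_ref, 1e-12))
    T_star = np.clip((temp - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    return strain_term * rate_term * (1.0 - T_star**m)

# Strain rate and heating typical of a cold spray particle impact (illustrative values)
print(johnson_cook_stress(eps_p=0.5, eps_rate=1e7, temp=500.0) / 1e6, "MPa")
```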

Patent
28 Apr 2020
TL;DR: In this article, image data acquisition methods and systems that utilize selective temporal co-addition of detector integration samples to construct improved high-resolution output imagery for arrays with selectable line rates are presented.
Abstract: Disclosed are image data acquisition methods and systems that utilize selective temporal co-addition of detector integration samples to construct improved high-resolution output imagery for arrays with selectable line rates. Configurable time-delay integration (TDI) arrays are used to construct output imagery of various resolutions depending upon array commanding, the acquisition geometry, and temporal sampling. The image acquisition techniques may be applied to any optical sensor system and to optical systems with multiple sensors at various relative rotations, which enable simultaneous image acquisitions by two or more sensors. Acquired image data may be up-sampled onto a multitude of image grids of various resolutions.

Journal ArticleDOI
07 Jul 2020
TL;DR: In this article, it is shown that the additive constant $C$ in the linear mean-flow law on the pressure side of a spanwise rotating channel, whose dependence on the Reynolds number and the rotation speed was not entirely clear, is a logarithmic function of a rotation-induced length scale.
Abstract: While it is known that the mean flow in a spanwise rotating channel follows a linear law at the pressure side with an additive constant $C$, the exact dependence of this additive constant on the Reynolds number and the rotation speed was not entirely clear. It is shown that this additive constant $C$ is a logarithmic function of a rotation-induced length scale. After determining the mean-flow scaling, this knowledge is used for wall modeling and for relating the skin friction to the flow rate.
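Schematically, and in our own notation rather than the paper's, the result can be written as

```latex
U^{+} = 2\,\Omega^{+} y^{+} + C, \qquad C = a \,\ln\!\big(l_r^{+}\big) + b,
```

where the superscript + denotes wall units on the pressure side, $l_r$ is a rotation-induced length scale, and the constants $a$ and $b$ are left unspecified here.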

Journal ArticleDOI
TL;DR: This study proposes two different integer programming models, namely, timetabling and assignment based models, and then a scheduling based constraint programming model to solve the flight-gate assignment problem to optimality.
Abstract: Flight-gate assignment problems are complex real-world problems involving many different constraints. These constraints include plane-gate eligibility, assigning planes of the same airline and planes served by the same ground handling companies to adjacent gates, buffers for changes in flight schedules, night-stand flights, the priority of some gates over others, and so on. Numerous models have been proposed in the literature to solve this highly complicated problem and tackle its complexity. In this study, we first propose two different integer programming models, namely timetabling- and assignment-based models, and then a scheduling-based constraint programming model, to solve the problem to optimality. These models prove to be highly efficient in that the computational times are quite short. We also present results for one day of operation of an airport using real data. Finally, we present our conclusions along with possible directions for further research.
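As an illustration of the assignment-based flavour of such models (simplified; the paper's formulations also include adjacency, buffer, night-stand, and gate-priority constraints not shown here):

```latex
\min \sum_{f \in F} \sum_{g \in G_f} c_{fg}\, x_{fg}
\quad \text{s.t.} \quad
\sum_{g \in G_f} x_{fg} = 1 \;\; \forall f \in F, \qquad
x_{fg} + x_{f'g} \le 1 \;\; \forall g, \; \forall (f, f') \text{ overlapping in time}, \qquad
x_{fg} \in \{0, 1\},
```

where $G_f$ is the set of gates eligible for flight $f$ and $c_{fg}$ encodes the assignment preferences.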

Journal ArticleDOI
TL;DR: In this paper, the authors measured spatial variations in the stellar mass-to-light ratio (M_\star/L$) and their dependence on color, star formation history, and dust across the disk of M31, using a map of resolved stars in the Panchromatic Hubble Andromeda Treasury (PHAT) survey.
Abstract: A galaxy's stellar mass-to-light ratio ($M_\star/L$) is a useful tool for converting luminosity to stellar mass ($M_\star$). However, the practical utility of $M_\star/L$ inferred from stellar population synthesis (SPS) models is limited by mismatches between the real and assumed models for star formation history (SFH) and dust geometry, both of which vary within galaxies. Here, we measure spatial variations in $M_\star/L$ and their dependence on color, star formation history, and dust across the disk of M31, using a map of $M^\mathrm{CMD}_\star$ derived from color-magnitude diagrams of resolved stars in the Panchromatic Hubble Andromeda Treasury (PHAT) survey. First, we find comparable scatter in $M_\star/L$ for the optical and mid-IR, contrary to the common idea that $M_\star/L$ is less variable in the IR. Second, we confirm that $M_\star/L$ is correlated with color for both the optical and mid-IR and report color vs. $M_\star/L$ relations (CMLRs) in M31 for filters used in the Sloan Digital Sky Survey (SDSS) and Widefield Infrared Survey Explorer (WISE). Third, we show that the CMLR residuals correlate with recent SFH, such that quiescent regions are offset to higher $M_\star/L$ than star-forming regions at a fixed color. The mid-IR CMLR, however, is not linear due to the high scatter of $M_\star/L$ in star-forming regions. Finally, we find a flatter optical CMLR than any SPS-based CMLRs in the literature. We show this is an effect of dust geometry, which is typically neglected but should be accounted for when using optical data to map $M_\star/L$.
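A color versus $M_\star/L$ relation of the kind reported here is conventionally written as a linear fit in log space (generic form shown; the coefficients, the nonlinearity of the mid-IR relation, and the M31-specific residual trends are in the paper):

```latex
\log_{10}\!\left(M_\star/L_\lambda\right) = a_\lambda + b_\lambda\,\mathrm{(color)},
```

with the paper finding that residuals about such a fit correlate with recent star formation history, and that dust geometry flattens the optical relation.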

Proceedings ArticleDOI
01 Nov 2020
TL;DR: It is argued that temporal dependency graphs, built on previous research on narrative times and temporal anaphora, provide a representation scheme that achieves a good trade-off between completeness and practicality in temporal annotation.
Abstract: We present the construction of a corpus of 500 Wikinews articles annotated with temporal dependency graphs (TDGs) that can be used to train systems to understand temporal relations in text. We argue that temporal dependency graphs, built on previous research on narrative times and temporal anaphora, provide a representation scheme that achieves a good trade-off between completeness and practicality in temporal annotation. We also provide a crowdsourcing strategy to annotate TDGs, and demonstrate the feasibility of this approach with an evaluation of the quality of the annotation, and the utility of the resulting data set by training a machine learning model on this data set. The data set is publicly available.

Proceedings ArticleDOI
01 Feb 2020
TL;DR: This paper focuses on analyzing the delay spread of a directional mmW channel, and proposes a beam selection method that finds the best Rx beam direction that results in a low delay spread and high signal-to-noise ratio (SNR).
Abstract: The harsh propagation environment at millimeter-wave (mmW) frequencies can be countered by using large antenna arrays, which can be steered electronically to create directional beams. Knowledge of the key channel characteristics in this environment, including the delay spread, the coherence time, and the coherence bandwidth, plays a significant role in optimal adaptation of the transmission waveform. In this paper, we focus on analyzing the delay spread of a directional mmW channel. A high delay spread causes inter-symbol interference (ISI), which can be mitigated by concatenating cyclic prefixes (CPs) to data symbols at the expense of lower spectral efficiency. Considering a single mmW link, whose transmitter (Tx) and receiver (Rx) are equipped with uniform planar arrays (UPAs), we study the impact of various beamforming attributes (e.g., antenna-array size, beamwidth, beam direction, and beam misalignment) on the average and root-mean-square delay spread. We use detailed simulations with accurate 3GPP channel models and conduct extensive experiments using a $4\times 8$ UPA at 28 GHz to verify our analysis. Based on this analysis, we study the optimal beamforming configuration at the Rx for a given Tx beamformer so as to maximize the spectral efficiency. Our proposed beam selection method finds the best Rx beam direction that results in a low delay spread and high signal-to-noise ratio (SNR). Our extensive simulation and experimental results verify that this method significantly improves the spectral efficiency, almost doubling the data rate in some cases.
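As a concrete reference for the metric being analyzed, the RMS delay spread of a power delay profile has the standard definition implemented below; the candidate beams, their profiles, and the scoring rule are fabricated for illustration and are not the paper's measured data or selection metric:

```python
# RMS delay spread from a power delay profile, plus a toy beam-scoring step.
import numpy as np

def rms_delay_spread(delays_ns, powers_linear):
    """Root-mean-square delay spread of a power delay profile."""
    p = np.asarray(powers_linear, dtype=float)
    tau = np.asarray(delays_ns, dtype=float)
    mean_tau = np.sum(p * tau) / np.sum(p)
    return np.sqrt(np.sum(p * (tau - mean_tau) ** 2) / np.sum(p))

# Hypothetical per-beam measurements: (path delays [ns], path powers [linear], SNR [dB])
beams = {
    "boresight":  ([0, 40, 120], [1.0, 0.30, 0.10], 18.0),
    "offset_10d": ([0, 15],      [1.0, 0.05],       15.5),
    "offset_20d": ([0, 200],     [1.0, 0.50],       12.0),
}

for name, (tau, p, snr_db) in beams.items():
    ds = rms_delay_spread(tau, p)
    # crude proxy score: favor SNR, penalize delay spread beyond an assumed 25 ns CP budget
    score = snr_db - 0.2 * max(ds - 25.0, 0.0)
    print(f"{name:>11}: rms delay spread = {ds:6.1f} ns, SNR = {snr_db:4.1f} dB, score = {score:5.1f}")
```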

Book ChapterDOI
Tuhin Sahai
28 Sep 2020
TL;DR: This article surveys a range of examples that illustrate the use of dynamical systems theory in the context of computational complexity analysis and novel algorithm construction and summarizes a novel approach for clustering graphs using the wave equation partial differential equation.
Abstract: This article surveys the burgeoning area at the intersection of dynamical systems theory and algorithms for NP-hard problems. Traditionally, computational complexity and the analysis of non-deterministic polynomial-time (NP)-hard problems have fallen under the purview of computer science and discrete optimization. However, over the past few years, dynamical systems theory has increasingly been used to construct new algorithms and shed light on the hardness of problem instances. We survey a range of examples that illustrate the use of dynamical systems theory in the context of computational complexity analysis and novel algorithm construction. In particular, we summarize a) a novel approach for clustering graphs using the wave equation partial differential equation, b) invariant manifold computations for the traveling salesman problem, c) novel approaches for building quantum networks of Duffing oscillators to solve the MAX-CUT problem, d) applications of the Koopman operator for analyzing optimization algorithms, and e) the use of dynamical systems theory to analyze computational complexity.

Proceedings ArticleDOI
Fred Daum
28 Apr 2020
TL;DR: The calculations show that the minimum cost quantum radar at X-Band is many orders of magnitude more expensive than the corresponding classical radar, even assuming the most optimistic wideband phased array radar architecture.
Abstract: We compute the minimum cost for an optimal quantum radar, and we compare it with the cost of actual real world classical radars as a function of range. Our calculations show that the minimum cost quantum radar at X-Band is many orders of magnitude more expensive than the corresponding classical radar, even assuming the most optimistic wideband phased array radar architecture. We also assume that the quantum radar is optimal; that is, the effective signal-to-noise ratio is 6 dB better than for a classical radar with the same transmit power and bandwidth at low photon flux per mode.

Proceedings ArticleDOI
11 Oct 2020
TL;DR: In this paper, the authors proposed a voltage-based control of a 5-Level active-neutral-point-clamped (ANPC) II type WBG inverter for high frequency and high power density.
Abstract: In this paper, the detrimental effects of conventional dead-time-based control of multilevel power inverters targeting high switching frequency and high power density are analyzed. To mitigate these effects, a compensation technique based on voltage-based control is proposed. Apparent switching frequency doubling (ASFD) carrier-based pulse-width modulation (PWM) is considered for the control of a 5-level active-neutral-point-clamped (ANPC) II type WBG inverter, and both conventional dead time and the proposed dead-time compensation are applied to this topology. With the conventional dead-time method, a phase-voltage error is generated whose sign depends on the current direction; this reduces the system output and increases the current total harmonic distortion (iTHD). In addition, the three-phase output voltages exhibit a significant voltage error. The proposed dead-time compensation significantly improves the current quality by eliminating the error in each phase voltage. Phase voltages and current distortion are compared and analyzed for the 5-level ANPC II type WBG inverter under conventional dead time and with the proposed dead-time compensation method. The proposed technique and the results are verified through simulation and experiment.
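The basic idea of voltage-based dead-time compensation, shown generically below, is to add back the average volt-seconds lost (or gained) during the dead time according to the phase-current direction; the numbers are illustrative and the 5-level ANPC specifics (redundant states, ASFD carriers) are not modeled:

```python
# Generic dead-time compensation of a phase-voltage reference (illustrative only).
import math

def compensated_reference(v_ref, i_phase, v_step=600.0, t_dead=200e-9, f_sw=100e3):
    """Add back the average voltage error caused by dead time, based on current sign."""
    dv = t_dead * f_sw * v_step        # average voltage lost per switching period
    if i_phase > 0:
        return v_ref + dv              # positive current: dead time lowers the output
    if i_phase < 0:
        return v_ref - dv              # negative current: dead time raises the output
    return v_ref                       # near the zero crossing: compensation is ambiguous

# Example over one fundamental cycle (assumed 230 V peak reference, 10 A peak current)
for deg in (0, 45, 90, 135, 180, 225, 270, 315):
    i = 10.0 * math.sin(math.radians(deg))
    v = 230.0 * math.sin(math.radians(deg))
    print(deg, round(compensated_reference(v, i), 2))
```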

Posted Content
TL;DR: This work models resource sharing as a multi-objective optimization problem, presents a solution framework based on Cooperative Game Theory, and proves that for a monotonic, non-decreasing utility function, the game is canonical and convex.
Abstract: Mobile edge computing seeks to provide resources to different delay-sensitive applications. This is a challenging problem as an edge cloud-service provider may not have sufficient resources to satisfy all resource requests. Furthermore, allocating available resources optimally to different applications is also challenging. Resource sharing among different edge cloud-service providers can address the aforementioned limitation as certain service providers may have resources available that can be "rented" by other service providers. However, edge cloud service providers can have different objectives or utilities. Therefore, there is a need for an efficient and effective mechanism to share resources among service providers, while considering the different objectives of various providers. We model resource sharing as a multi-objective optimization problem and present a solution framework based on Cooperative Game Theory (CGT). We consider the strategy where each service provider allocates resources to its native applications first and shares the remaining resources with applications from other service providers. We prove that for a monotonic, non-decreasing utility function, the game is canonical and convex. Hence, the core is not empty and the grand coalition is stable. We propose two algorithms, Game-theoretic Pareto optimal allocation (GPOA) and Polyandrous-Polygamous Matching based Pareto Optimal Allocation (PPMPOA), that provide allocations from the core. Hence the obtained allocations are Pareto optimal and the grand coalition of all the service providers is stable. Experimental results confirm that our proposed resource sharing framework improves utilities of edge cloud-service providers and application request satisfaction.

Journal ArticleDOI
01 Jul 2020
TL;DR: The current effort focuses on a foundational extended taxonomy that uses a minimal set of terms to model system- and SoS-related concepts and the relations among them, streamlining collaboration among the SoS stakeholders involved, with a focus on safety.

Journal ArticleDOI
TL;DR: This paper explores the critical effects that changes in certain parameters can have on a reservoir computer's ability to express multifunctionality, and exposes the existence of several "untrained attractors": attractors that dwell within the prediction state space of the reservoir computer but were not part of the training.
Abstract: Multifunctionality is a well-observed phenomenological feature of biological neural networks and is considered to be of fundamental importance to the survival of certain species over time. These multifunctional neural networks are capable of performing more than one task without changing any network connections. In this paper we investigate how this neurological idiosyncrasy can be achieved in an artificial setting with a modern machine learning paradigm known as "Reservoir Computing". A training technique is designed to enable a Reservoir Computer to perform tasks of a multifunctional nature. We explore the critical effects that changes in certain parameters can have on the Reservoir Computer's ability to express multifunctionality. We also expose the existence of several "untrained attractors": attractors that dwell within the prediction state space of the Reservoir Computer but were not part of the training. We conduct a bifurcation analysis of these untrained attractors and discuss the implications of our results.
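For readers unfamiliar with the paradigm, the sketch below shows the generic echo-state-network training step that such studies build on (drive the reservoir with a signal, then fit a linear readout by ridge regression); the reservoir size, spectral radius, and the blending of multiple attractors used to obtain multifunctionality in the paper are not reproduced:

```python
# Generic echo state network (reservoir computer) with a ridge-regression readout.
# Hyperparameters and the driving signal are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T = 300, 3000                                   # reservoir size, training length (assumed)
u = np.sin(0.02 * np.arange(T + 1))                # toy scalar signal to be predicted

W_in = rng.uniform(-0.5, 0.5, size=N)              # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # rescale spectral radius to 0.9

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                                 # drive the reservoir with the signal
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

beta = 1e-6                                        # ridge regularization (assumed)
W_out = np.linalg.solve(states.T @ states + beta * np.eye(N), states.T @ u[1:])
print("one-step training RMSE:", np.sqrt(np.mean((states @ W_out - u[1:]) ** 2)))
```

For multifunctionality, the training target would combine more than one task (for example, more than one attractor) so that a single fixed reservoir and readout can reproduce each of them, which is the behaviour the paper's parameter study probes.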

Patent
29 Jan 2020
TL;DR: A combustor for a gas turbine engine includes a support shell; a first liner panel mounted to the support shell via a multiple of studs and including a first rail that extends from its cold side; a second liner panel mounted to the support shell and including a second rail that extends from its cold side adjacent to the first rail to form an interface passage; and at least one heat transfer feature within the interface passage.
Abstract: A combustor for a gas turbine engine includes a support shell; a first liner panel mounted to the support shell via a multiple of studs, the first liner panel including a first rail that extends from a cold side of the first liner panel; a second liner panel mounted to the support shell via a multiple of studs, the second liner panel including a second rail that extends from a cold side of the second liner panel adjacent to the first rail to form an interface passage; and at least one heat transfer feature within the interface passage.

Proceedings ArticleDOI
26 Aug 2020
TL;DR: Major technologies and design trades for various components and system architectures are presented to provide guidelines and a framework for addressing the grand challenge of electric drivetrain (EDT) designs that would significantly reduce fuel burn and improve design flexibility and operations in the next generation of aircraft.
Abstract: Development of electric, hybrid, and turboelectric propulsion technologies for electrified aircraft propulsion systems is essential for improving fuel consumption, reducing emissions and noise pollution, lowering maintenance costs, and improving the reliability of air transportation systems. The future needs and key benefits of aircraft electrification have made it a highly pursued technology trend across the aerospace industry, from very large airplanes to small aircraft alike. For a very high power (20 MW) propulsion system, given the inadequacies of current and near-future state-of-the-art electric energy storage technologies, an all-electric aircraft solution faces enormous technology gaps that need to be bridged. Advanced turboelectric technology offers potential solutions toward successfully realizing the benefits of aircraft electrification. However, this represents a grand challenge on many fronts: realizing electric drivetrain (EDT) designs that would significantly reduce fuel burn and improve design flexibility and operations in the next generation of aircraft. This work focuses on the underlying technological elements needed to enable such high-power turboelectric aircraft. A preliminary study finds that, to achieve the key benefits of electrification, the EDT system efficiency has to be > 93% and the specific power density of the system is required to be > 7.5 kW/kg. Furthermore, it is found that, to achieve such system-level performance, the EDT components are required to have efficiencies ≥ 99% and specific power densities > 40 kW/kg to meet the 7.5 kW/kg system target. This necessitates orders-of-magnitude improvements on all technological fronts and requires radical improvements in design and integration methodologies. Major technologies and design trades for various components and system architectures are presented to provide guidelines and a framework for addressing this grand challenge. Key results are provided to support the design study.
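To see why component efficiencies of roughly 99% are needed, a back-of-the-envelope product over an assumed chain of about seven series elements (generator, rectification, distribution, inversion, motor, and associated stages) already sits near the system target:

```latex
\eta_{\mathrm{EDT}} \approx \prod_{i=1}^{7} \eta_i = 0.99^{7} \approx 0.932 > 93\%,
\qquad \text{whereas} \qquad 0.98^{7} \approx 0.868 .
```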

Journal ArticleDOI
TL;DR: The TC-10 comprises an international group of electronics engineers, mathematicians, professors and physicists with representatives from national metrology laboratories, national science laboratories, component manufacturers, the test instrumentation industry, academia, and end users.
Abstract: Global trade relies on the ability to reproducibly and accurately communicate the performance of products and to support these attestations. This standardization is essential for accurate, reproducible, reliable, and communicable characterization of the performance of these devices, to support technology and product advancement, product comparison and performance tracking, and device calibration and traceability. Standard terms and definitions, reproducible test methods, and accurate computational procedures are necessary for this communication and facilitate economic growth and technology evolution through the common understanding of technology. The IEEE Technical Committee 10 (TC-10), the Waveform Generation, Measurement, and Analysis Committee of the IEEE Instrumentation and Measurement Society (IMS), fulfills the global need for standardized terms and test and computational methods for describing and/or measuring the parameters that describe the performance of signal generators and waveform recorders and analyzers. The TC-10 has developed and maintains the following documentary standards: IEEE Std 181-2011, “Standard on Transitions, Pulses, and Related Waveforms” [1]; IEEE Std 1057-2017, “Standard for Digitizing Waveform Recorders” [2]; IEEE Std 1241-2010, “Standard for Terminology and Test Methods for Analog-to-Digital Converters” [3]; IEEE Std 1658-2011, “Standard for Terminology and Test Methods for Digital-to-Analog Converters” [4]; and the IEEE Std 1696-2013, “Standard for Terminology and Test Methods for Circuit Probes” [5]. In development is the IEEE Draft Std P2414 “Draft Standard for Jitter and Phase Noise.” The TC-10 comprises an international group of electronics engineers, mathematicians, professors and physicists with representatives from national metrology laboratories, national science laboratories, component manufacturers, the test instrumentation industry, academia, and end users. The status of the TC-10 standards is described herein.