
Showing papers in "Theory of Computing Systems / Mathematical Systems Theory in 1990"


Journal ArticleDOI
TL;DR: It is proved that the complexity class ⊕P, parity polynomial time [PZ], contains the class of languages accepted by NP machines with few accepting paths, and that the class of nondeterministic path-restricted languages is closed under bounded truth-table reductions.
Abstract: This paper proves that the complexity class ⊕P, parity polynomial time [PZ], contains the class of languages accepted by NP machines with few accepting paths. Indeed, ⊕P contains a broad class of languages accepted by path-restricted nondeterministic machines. In particular, ⊕P contains the polynomial accepting path versions of NP, of the counting hierarchy, and of Mod_m NP for m > 1. We further prove that the class of nondeterministic path-restricted languages is closed under bounded truth-table reductions.

84 citations


Journal ArticleDOI
TL;DR: It is observed that these classes have complete problems, and a characterization of the classes using robust machines with bounded access to the oracle is given, along with some other results in this direction.
Abstract: We study certain language classes located between P and NP that are defined by polynomial-time machines with a bounded amount of nondeterminism. We observe that these classes have complete problems and find a characterization of the classes using robust machines with bounded access to the oracle, obtaining some other results in this direction. We also study questions related to the existence of complete tally sets in these classes and closure of the classes under different types of polynomial-time reducibilities.

60 citations


Journal Article
TL;DR: Signalling off-chip requires significant current that changes drastically during certain output-bus transitions; digital designers often go to great lengths to reduce the resulting "transmitted" noise, and while Cray's differential-signalling approach costs a factor of two in output pins and wires, coding achieves similar results at smaller cost.
Abstract: Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee a chip has constant output current. Transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at smaller costs.
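The abstract stops short of describing a specific code, so the Python sketch below is a hypothetical illustration of one such scheme, a constant-weight bus code: every codeword drives the same number of lines high, so the static supply current is the same for every transfer. The wire counts in the example are my arithmetic, not the paper's.

```python
from itertools import combinations

def constant_weight_codebook(n, k):
    """All n-bit words with exactly k ones; each draws the same
    static output current regardless of the data it carries."""
    book = []
    for ones in combinations(range(n), k):
        word = 0
        for b in ones:
            word |= 1 << b
        book.append(word)
    return book

# 8 data bits need 256 codewords: C(10,5) = 252 falls just short,
# so use 11 wires, where C(11,5) = 462 >= 256.
codebook = constant_weight_codebook(11, 5)
encode = {data: codebook[data] for data in range(256)}
decode = {cw: data for data, cw in encode.items()}

assert decode[encode[42]] == 42
assert all(bin(cw).count("1") == 5 for cw in encode.values())
```

Under these assumptions, 8 data bits ride on 11 wires instead of the 16 that differential signalling would require, consistent with the abstract's claim that coding is cheaper than Cray's factor-of-two overhead.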

58 citations


Journal ArticleDOI
TL;DR: The concepts of power index, satisfiability hypothesis (SH), and structure tree are introduced and used to make sharper hypotheses about a problem's complexity than "the problem is NP-complete."
Abstract: The concepts of power index, satisfiability hypothesis (SH), and structure tree are introduced and used to make sharper hypotheses about a problem's complexity than "the problem is NP-complete." These concepts are used to characterize the complexities of a number of basic NP-complete problems, including both CLIQUE and PARTITION, which are shown to have power indices at most 1/2. Also, the problem 3SAT is shown to be solvable deterministically in time exponential only in the square root of v + c, where v is the number of variables and c is the number of "crossovers" needed to lay out the formula in the plane.

41 citations


Journal ArticleDOI
TL;DR: Optimal embeddings of three genres of butterfly-like graphs in the (boolean) hypercube are presented; each embedding is specified via a linear-time algorithm and has optimal dilation, congestion, and expansion.
Abstract: We present optimal embeddings of three genres of butterfly-like graphs in the (boolean) hypercube; each embedding is specified via a linear-time algorithm. Our first embedding finds an instance of the FFT graph as a subgraph of the smallest hypercube that is big enough to hold it; thus, we embed the n-level FFT graph, which has (n+1)2^n vertices, in the (n+⌈log₂(n+1)⌉)-dimensional hypercube, with unit dilation. This embedding yields a mapping of the pipelined FFT algorithm onto the hypercube architecture, which is optimal in all resources (time, processor utilization, load balancing, etc.) and which is on-line in the sense that inputs can be added to the transform even during the computation. Second, we find optimal embeddings of the n-level butterfly graph and the n-level cube-connected cycles graph, each of which has n2^n vertices, in the (n+⌈log₂ n⌉)-dimensional hypercube. These embeddings, too, have optimal dilation, congestion, and expansion. The dilation is 1+(n mod 2), which is best possible. Our embeddings indicate that these two bounded-degree approximations to the hypercube do not have any communication power that is not already present in the hypercube.
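As a quick sanity check on the stated dimension (not the embedding itself, which is the paper's contribution), this snippet verifies that n + ⌈log₂(n+1)⌉ is the smallest hypercube dimension whose 2^d vertices can hold the (n+1)2^n vertices of the n-level FFT graph:

```python
import math

def smallest_dim(vertices):
    """Smallest d with 2**d >= vertices."""
    return math.ceil(math.log2(vertices))

for n in range(1, 25):
    fft_vertices = (n + 1) * 2**n
    assert smallest_dim(fft_vertices) == n + math.ceil(math.log2(n + 1))
```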

40 citations


Journal ArticleDOI
TL;DR: It is shown that ranking languages accepted by one-way unambiguous auxiliary pushdown automata operating in polynomial time is in NC^2, and that C is rankable in deterministic polynomial time iff P = P^#P, where C is any of six classes of simple languages, including languages accepted by logtime-bounded nondeterministic Turing machines, constant-depth unbounded fan-in circuit families, 2-way deterministic pushdown automata, and multihead deterministic finite automata.
Abstract: Ranking is the problem of computing for an input string its lexicographic index in a given (fixed) language. This paper concerns the complexity of ranking. We show that ranking languages accepted by 1-way unambiguous auxiliary pushdown automata operating in polynomial time is in NC^2. We also prove negative results about ranking for several classes of simple languages: C is rankable in deterministic polynomial time iff P = P^#P, where C is any of the following six classes of languages: (1) languages accepted by logtime-bounded nondeterministic Turing machines, (2) languages accepted by (uniform) families of unbounded fan-in circuits of constant depth and polynomial size, (3) languages accepted by 2-way deterministic pushdown automata, (4) languages accepted by multihead deterministic finite automata, (5) languages accepted by 1-way nondeterministic logspace-bounded Turing machines, and (6) finitely ambiguous linear context-free languages.
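For intuition about the ranking problem itself, the sketch below handles only the easiest case, a regular language given by a DFA, which is far simpler than the classes treated in the paper: dynamic programming over the automaton's states counts the accepted strings that precede the input. The example DFA and the shortlex (length-first) ordering convention are my assumptions.

```python
from functools import lru_cache

# Example DFA over {'0','1'} accepting strings with an even number of 1s
# (an arbitrary stand-in; any DFA works).
ALPHABET = "01"
DELTA = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
START, ACCEPT = 0, {0}

@lru_cache(maxsize=None)
def count(state, length):
    """Number of length-`length` strings that drive `state` into acceptance."""
    if length == 0:
        return 1 if state in ACCEPT else 0
    return sum(count(DELTA[(state, a)], length - 1) for a in ALPHABET)

def rank(w):
    """Number of accepted strings preceding w in shortlex order."""
    r = sum(count(START, l) for l in range(len(w)))      # all shorter strings
    state = START
    for i, ch in enumerate(w):                           # same length, smaller
        for a in ALPHABET:
            if a < ch:
                r += count(DELTA[(state, a)], len(w) - i - 1)
        state = DELTA[(state, ch)]
    return r

print(rank("1011"))   # accepted strings that precede "1011": 14
```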

30 citations


Journal Article
TL;DR: An algorithm is presented which transforms a given combinational circuit into a functionally equivalent pipelined circuit with minimum latency and clock period no greater than a given upper bound c, and an O(V lg V) algorithm is given for a mixed-integer optimization problem which arises in the linear programming framework of retiming.
Abstract: In this paper we investigate properties of retiming, a circuit transformation which preserves the behavior of the circuit as a whole. We present an algorithm which transforms a given combinational circuit into a functionally equivalent pipelined circuit with minimum latency and clock period no greater than a given upper bound c. The algorithm runs in O(E) steps, where E is the number of interconnections in the circuit, and is optimal within a constant factor. We give a novel and concise characterization of the minimum clock period of a circuit in terms of the maximum delay-to-register ratio cycle in the circuit. We show that this ratio does not exceed the minimum feasible clock period by more than the maximum delay D of the elements in the circuit. This characterization leads to an O(E lg D) algorithm for minimum clock-period pipelining of combinational circuitry with latency no greater than a given upper bound l, an O(min{VE lg(VD), VE}) algorithm for minimum clock-period retiming of unit-delay circuitry, an O(VE lg D) algorithm for minimum clock-period retiming of general circuitry, and an O(min{VE lg(VW) lg(VD), VE lg(VD)}) algorithm for approximately minimum clock-period retiming, where V is the number of processing elements in the circuit. We demonstrate the closed semiring structure of retiming on unit-delay circuits under a given clock-period constraint. Finally, we give an O(V lg V) algorithm for a mixed-integer optimization problem which arises in the linear programming framework of retiming.
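The delay-to-register ratio characterization lends itself to a short illustration. The sketch below is my construction, not the thesis's O(E lg D) algorithm: it approximates the maximum delay-to-register ratio cycle by binary search, using Bellman-Ford negative-cycle detection on edge costs r·w(e) − d(v), and assumes every cycle carries at least one register, as in a well-formed synchronous circuit.

```python
def max_delay_to_register_ratio(n, edges, delay, tol=1e-6):
    """Binary-search the maximum cycle ratio
        (sum of vertex delays on the cycle) / (sum of registers on the cycle).
    edges: (u, v, w) with w registers on wire u -> v; delay[v] is v's delay.
    A cycle has total cost sum(r*w - delay[v]) < 0 iff its ratio exceeds r,
    so Bellman-Ford negative-cycle detection decides each probe value r."""
    def has_negative_cycle(r):
        dist = [0.0] * n                   # implicit zero-cost source to all
        for _ in range(n):
            changed = False
            for u, v, w in edges:
                cand = dist[u] + r * w - delay[v]
                if cand < dist[v] - 1e-12:
                    dist[v], changed = cand, True
            if not changed:
                return False
        return True                        # still relaxing after n passes

    lo, hi = 0.0, sum(delay)               # any cycle's ratio <= total delay,
    while hi - lo > tol:                   # assuming >= 1 register per cycle
        mid = (lo + hi) / 2
        if has_negative_cycle(mid):
            lo = mid                       # some cycle beats mid
        else:
            hi = mid
    return hi

# Two gates with delays 3 and 4 on a loop holding 2 registers: ratio 7/2.
print(max_delay_to_register_ratio(2, [(0, 1, 1), (1, 0, 1)], [3.0, 4.0]))
```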

11 citations


Journal ArticleDOI
TL;DR: It is proved that in a completion procedure, once a general superposition is considered, all its instances are unnecessary inferences and, thus, do not have to be considered.
Abstract: Theoretical results for identifying unnecessary inferences are discussed in the context of the use of a completion-procedure-based approach toward automated reasoning. The notion of a general superposition is introduced and it is proved that in a completion procedure, once a general superposition is considered, all its instances are unnecessary inferences and, thus, do not have to be considered. It is also shown that this result can be combined with another criterion, called the prime superposition criterion, proposed by Kapur, Musser, and Narendran, thus implying that prime and general superpositions are sufficient. These results should be applicable to other approaches toward automated reasoning, too. These criteria can be effectively implemented, and their implementation has resulted in automatically proving instances of Jacobson's theorem (also known as the ring commutativity problems) using RRL (Rewrite Rule Laboratory), a theorem prover based on rewriting techniques and completion.

9 citations


Journal ArticleDOI
TL;DR: A method for constructing asymptotically optimal single-exception sorting networks is demonstrated, and it is shown that the verification of shallow sorting networks of depth 2D(n)+6⌈log n⌉+O(1) is co-NP-complete.
Abstract: A sorting network is a combinational circuit for sorting constructed from comparison-swap units. The depth of such a circuit is a measure of its running time. It is known that sorting-network verification is computationally intractable. However, it is reasonable to hypothesize that only the fastest (that is, the shallowest) networks are likely to be fabricated. It is shown that the verification of shallow sorting networks is also computationally intractable. Firstly, a method for constructing asymptotically optimal single-exception sorting networks is demonstrated. These are networks which sort all zero-one inputs except one. More specifically, their depth is D(n-1)+2⌈log(n-1)⌉+2, where D(n) is the minimum depth of an n-input sorting network. It follows that the verification problem for sorting networks of depth 2D(n)+6⌈log n⌉+O(1) is co-NP-complete. Given the current state of knowledge about D(n) for large n, this indicates that the complexity of verification for shallow sorting networks is as great as for deep networks.
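The intractability lies in the exponential number of cases: by the zero-one principle, a comparator network sorts all inputs iff it sorts every 0-1 input, so brute-force verification looks like the sketch below. The example network is mine, a 4-input sorter with its final comparator deliberately omitted, leaving the kind of small exception set the paper exploits.

```python
from itertools import product

def unsorted_zero_one_inputs(network, n):
    """All 0-1 inputs the comparator network fails to sort; by the
    zero-one principle an empty result certifies a sorting network,
    but the check takes 2**n trials."""
    failures = []
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for i, j in network:          # comparator: min to wire i, max to wire j
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        if v != sorted(v):
            failures.append(bits)
    return failures

# A 4-input sorting network with its final comparator (1, 2) removed.
net = [(0, 1), (2, 3), (0, 2), (1, 3)]
print(unsorted_zero_one_inputs(net, 4))   # the few 0-1 inputs it misses
```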

9 citations


Journal ArticleDOI
TL;DR: A new notion of reducibility is defined which characterizes the classes Σ_k^p, k > 0, of the polynomial-time hierarchy in this way, and it is shown that the levels of this hierarchy are distinct if and only if uniform witnesses to this separation exist.
Abstract: A series of recent results has characterized membership in certain complexity classes in terms of specific types of reductions: a set is in the class if and only if it is reducible to almost every set. We define a new notion of reducibility which characterizes the classes Σ_k^p, k > 0, of the polynomial-time hierarchy in this way. As an application, we show that the levels of the polynomial-time hierarchy are distinct if and only if uniform witnesses to this separation exist.

7 citations


Journal ArticleDOI
TL;DR: The logical formalism used is described, the early results of this work are outlined, and it is shown how the formalism is extended by new quantifiers and atomic predicates to express many of the fundamental classes of boolean circuit complexity.
Abstract: Two important measures of the computational complexity of a regular language are the type of finite automaton needed to recognize it and the type of logical expression needed to describe it. Important connections between these measures were studied by Büchi and McNaughton as early as 1960. In this survey we describe the logical formalism used, outline these early results, and describe modern extensions of this work. In particular, we show how the formalism is extended by the use of new quantifiers and atomic predicates to express many of the fundamental classes of boolean circuit complexity.

Journal Article
TL;DR: In this article, an initial attempt to demonstrate aspect-independent target identification of complex radar targets using annihilation filters based on the natural resonances of the targets is presented, and the Cadzow-Solomon signal processing algorithm is tested to determine its suitability for extracting the poles of complex targets to the accuracy required for a working annihilation-filtering target identification system.
Abstract: This thesis represents an initial attempt to demonstrate aspect-independent target identification of complex radar targets using annihilation filters based on the natural resonances of the targets. The Cadzow-Solomon signal processing algorithm is tested to determine its suitability for the task of extracting the poles from complex targets to a degree of accuracy required for successful implementation of an annihilation-filtering target identification system. This testing was conducted through the use of noise-polluted synthetic data as well as measured transient scattering data from thin-wire and silver-coated scale-model aircraft targets. The testing revealed that the Cadzow-Solomon algorithm can return pole clusters at false pole locations when processing the scattered returns from complex targets. Properties of annihilation filters which may affect their ability to discriminate complex targets are examined.
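For readers unfamiliar with annihilation filtering, the core mechanism (standard resonance-based discrimination, not the Cadzow-Solomon extraction step this thesis evaluates) is easy to sketch: a target's late-time return is a sum of damped exponentials at its natural resonances s_k, and an FIR filter with zeros at z_k = exp(s_k·Δt) drives samples of that sum to zero. The pole values and sampling interval below are hypothetical.

```python
import numpy as np

dt = 1e-9                                      # sample interval (hypothetical)
# Hypothetical natural resonances s = sigma + j*omega, a conjugate pair.
poles = [-2e8 + 6e9j, -2e8 - 6e9j]

# FIR annihilation filter: a polynomial with zeros at z_k = exp(s_k * dt).
h = np.poly([np.exp(s * dt) for s in poles]).real

# Late-time response of the *matching* target: damped exponentials at s_k.
t = np.arange(200) * dt
signal = sum(np.exp(s * t) for s in poles).real

# Full-overlap filter output is ~0, flagging a match; a mismatched target's
# resonances would leave a visible residue instead.
residue = np.convolve(h, signal)[len(h) - 1 : len(signal)]
print(np.max(np.abs(residue)))                 # ~0 up to floating-point error
```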

Journal ArticleDOI
TL;DR: The proof is constructive in the sense that it shows how to construct a k-head automaton recognizing L_f(k), solving a problem stated by Chrobak.
Abstract: Languages L_n = {1^x 2^(ix) : i, x ∈ ℕ, 1 ≤ i ≤ n} were used to show that, for each k, one-way non-sensing deterministic finite automata (1-MFA) with k+1 heads are more powerful than such automata with k heads, even if we consider only 2-bounded languages (Chrobak). For k ∈ ℕ let f(k) be the maximal number n such that the language L_n can be recognized by a 1-MFA with k heads. We present a precise inductive formula for f(k). It may be shown that, for k ≥ 3, $$\frac{(2k-5)!\,(k-2)(k-1)}{2^{k-3}} \leq f(k) \leq \frac{(2k-5)!\,(k-2)(k-1)\cdot 3k^2}{2^{k-3}},$$ that is, f(k) ≈ k^(2k). The proof is constructive in the sense that it shows how to construct a k-head automaton recognizing L_f(k). This solves the problem stated by Chrobak.

Journal Article
TL;DR: In this article, a state-of-the-art laser based acousto-ultrasonic system, incorporating a compact pulsed laser and a fiber-optic heterodyne interferometer, was delivered to the NASA Lewis Research Center.
Abstract: The acousto-ultrasonic method has proven to be a most interesting technique for nondestructive evaluation of the mechanical properties of a variety of materials. Use of the technique, or a modification thereof, has led to correlation of the associated stress wave factor with mechanical properties of both metals and composite materials. The method is applied to the nondestructive evaluation of selected fiber-reinforced structural composites. For the first time, conventional piezoelectric transducers were replaced with laser-beam ultrasonic generators and detectors. This modification permitted true non-contact acousto-ultrasonic measurements to be made, which yielded new information about the basic mechanisms involved and proved the feasibility of making such non-contact measurements on terrestrial and space structures and heat-engine components. A state-of-the-art laser-based acousto-ultrasonic system, incorporating a compact pulsed laser and a fiber-optic heterodyne interferometer, was delivered to the NASA Lewis Research Center.

Journal Article
TL;DR: In this article, the design and analysis of a preprocessing system to remove biases introduced by the measurement system are discussed; obtaining accurate data is complicated by influences of the satellite and the ground station.
Abstract: The Olympus Propagation Experiment (OPEX) has been set up to collect accurate data on propagation properties for the 10-30 GHz range. In July 1989 the Olympus satellite was launched, and in October 1989 the measurement campaign was started. At the ground station of the Eindhoven University of Technology (EUT), 12.5, 20, and 30 GHz beacons are received to investigate attenuation, depolarization, and scintillation properties of the propagation medium. Obtaining accurate data is complicated by influences of the satellite and the ground station. Biases introduced by this measurement system have to be removed from the acquired data. It is the design goal of a preprocessing system to obtain 'clean' data suitable for analysis of propagation parameters. This report deals with the design and analysis of a preprocessing system. Methods to perform bias removal are discussed and analyzed. EUT measurements have been used to evaluate the effectiveness of these bias removal methods. It may be concluded that for attenuation measurements, radiometer measurements are necessary to obtain copolar reference levels. For crosspolar measurements at 12.5 and 30 GHz, adaptive cancellation is recommended. A method has been developed to estimate the uncertainty of propagation data in practice. Subsequently, algorithms have been developed for the EUT preprocessing system, which will be part of the EUT processing system for event analysis. For statistical analysis, a standard processing system, 'DAPPER', will be used. A prototype of the DAPPER preprocessing was supplied to the EUT and has been tested. This system is under development by SIEMENS for the European Space Agency (ESA) and will be submitted to all OPEX members. Tests with real propagation data show several errors and practical problems in the software system. Some examples of the output of the preprocessing system are discussed.

Journal Article
TL;DR: This thesis is a tutorial for Command, Control, and Communications (C3) students that synthesizes the essence of electronic communications systems and their frequency spectrums into an integrated set of illustrations, helping the C3 student focus on the operations and transformations performed in order to acquire a better understanding of the processes involved.
Abstract: This thesis is a tutorial for Command, Control, and Communications (C3) students and synthesizes the essence of electronic communications systems and related frequency spectrums into an integrated set of illustrations. The objective is to help the C3 student focus his or her attention on the operations and transformations performed in order to acquire a better understanding of the processes involved. The first major illustration shows how the various components of a communication system are related. These components include analog and digital signals, multiplexing, modulation, and various transmission media. Central to any communications system are the limitations imposed by bandwidth and noise. The concept of bandwidth is developed through Fourier analysis. An integrated set of graphics shows the relationship between the time and frequency domains and illustrates how the bandwidth increases as the pulse width decreases. Transmitting information often requires higher data rates which, in turn, require higher frequencies. Radio wave propagation is frequency dependent, and a chart is developed showing the different categories of radio wave propagation as they relate to atmospheric layers and frequency. Finally, a chart relating transmission-medium attenuation, noise sources, and various radio wave terminology is given.
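The inverse relation between pulse width and bandwidth that the thesis illustrates graphically can also be seen numerically. The sketch below is my example, not taken from the thesis: it locates the first spectral null of a rectangular pulse, which sits at 1/T Hz for a pulse of width T, so halving the pulse width doubles the occupied bandwidth.

```python
import numpy as np

fs, n = 1000, 1000                        # 1 s of samples at 1 kHz
t = np.arange(n) / fs

def first_null_hz(width_s):
    """Frequency of the first spectral null of a width_s rectangular pulse."""
    x = (t < width_s).astype(float)
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    null_idx = np.argmax(mag[1:] < 1e-9) + 1   # first (near-)zero bin past DC
    return freqs[null_idx]

for width in (0.1, 0.05, 0.025):
    print(width, "s ->", first_null_hz(width), "Hz")   # 10, 20, 40 Hz
```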

Journal Article
TL;DR: In this article, the feasibility of tailoring a two-dimensional flow over a backward-facing step was investigated; with one exception, the results differed from those of the PHOENICS computational fluid dynamics program only in the underprediction of the length of the horizontal flow pattern.
Abstract: The Naval Postgraduate School (NPS) has undertaken a series of studies on ship air wake tailoring, through the use of deflectors, with the goal of reducing the level of turbulence and the size of the separated zones on the helicopter deck. This reduction will result in a larger engage/disengage envelope, which, in turn, will increase the percentage of time that the H-46 and other helicopters can safely operate. This study is the second in the NPS series to attempt to achieve this goal and considers only the feasibility of tailoring a two-dimensional flow over a backward-facing step. Nonporous flat and curved deflectors and porous flat deflectors were mounted in various positions near the edge of the step, and the flow patterns were recorded using both still and video photography. The deflectors were mounted at varying separations from the step and varying angles to the vertical. It was found that the porous deflectors produced the lowest velocities in the region behind the step, and the nonporous produced the greatest changes in the flow pattern. The results were also compared to the results of the first study, which used the PHOENICS computational fluid dynamics program. With one exception, the results differed only by the underprediction of the length of the horizontal flow pattern.

Journal ArticleDOI
TL;DR: It is shown that it is decidable whether a monadic Thue system is canonical over a regular set; i.e., there is an algorithm to determine whether every string in a regular set has a unique normal form modulo a monadic Thue system.
Abstract: A restricted confluence problem is investigated for string-rewriting systems (Thue systems). It is shown that it is decidable whether a monadic Thue system is canonical over a regular set; i.e., there is an algorithm to determine whether every string in a regular set has a unique normal form modulo a monadic Thue system.
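As a concrete illustration of the property being decided (my toy example, not the paper's decision procedure), the sketch below exhaustively explores every rewrite order under a small monadic Thue system; a string has a unique normal form exactly when the returned set is a singleton, and the sample string "abab" shows how uniqueness can fail.

```python
# A toy monadic Thue system: each right-hand side has length <= 1.
RULES = [("ab", "c"), ("ba", "c"), ("cc", "")]

def successors(s):
    """All strings reachable from s by one rule application (any position)."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normal_forms(s):
    """All irreducible descendants of s, exploring every rewrite order."""
    seen, stack, nfs = {s}, [s], set()
    while stack:
        cur = stack.pop()
        nxt = successors(cur)
        if not nxt:
            nfs.add(cur)
        for t in nxt - seen:
            seen.add(t)
            stack.append(t)
    return nfs

# "abab" reduces to both "" and "acb": its normal form is not unique,
# so this system is not canonical; a canonical one yields singletons.
print(normal_forms("abab"))
```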

Journal Article
TL;DR: In this paper, the authors examined the feasibility of performing bistatic radar cross section (RCS) measurements in the AFIT anechoic chamber, using a CW radar and utilizing background subtraction and software range gating.
Abstract: This research effort examined the feasibility of performing bistatic radar cross section (RCS) measurements in the AFIT anechoic chamber. The capability was established to measure the bistatic RCS of a target versus frequency and versus target azimuth angle. In either case, one of three bistatic angles (the angle between transmit and receive antennas) is available: 45 degrees, 90 degrees, and 135 degrees. Accurate bistatic RCS measurements were obtained using a CW radar and utilizing background subtraction, bistatic calibration, and software range gating. Simple targets were selected for validation purposes since their bistatic RCS could be predicted. These consisted of spheres and flat plates (square, triangle, and five-sided). Several computer codes were utilized for system validation. Two codes based on the Uniform Theory of Diffraction were used to predict the scattering from the flat plates. A program using a Mie series solution provided the exact scattering for the spheres, which were used for both RCS predictions and system calibrations.
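The measurement chain described (background subtraction followed by software range gating) is standard enough to sketch numerically. Everything below is synthetic and hypothetical: an ideal point target plus chamber clutter in complex frequency-sweep data, subtraction of the empty-chamber background, then an IFFT-gate-FFT pass that keeps only the window around the target's round-trip delay.

```python
import numpy as np

# Hypothetical stepped-CW sweep: complex S21 vs frequency, measured with
# and without the target in the chamber (all values synthetic).
freqs = np.linspace(8e9, 12e9, 401)            # X-band sweep, Hz
c = 3e8
r_target, r_clutter = 3.0, 7.0                 # one-way ranges, m

def echo(r, amp):                              # ideal point scatterer
    return amp * np.exp(-2j * np.pi * freqs * 2 * r / c)

measured = echo(r_target, 1.0) + echo(r_clutter, 0.5) + 0.2   # target + chamber
background = echo(r_clutter, 0.5) + 0.2                       # empty chamber

# 1) Background subtraction removes repeatable chamber clutter and leakage.
sig = measured - background

# 2) Software range gating: go to the time domain, keep only a window
#    around the target's round-trip delay, and transform back.
td = np.fft.ifft(sig)
dt = 1 / (len(freqs) * (freqs[1] - freqs[0]))  # time-bin spacing of the IDFT
times = np.arange(len(td)) * dt
gate = np.abs(times - 2 * r_target / c) < 5e-9
cleaned = np.fft.fft(td * gate)

print(abs(cleaned[len(freqs) // 2]))           # ~1.0: the target's return
```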

Journal Article
TL;DR: This effort discusses a structure for a candidate proximity operations controller and provides initial development of an intelligent trajectory planner in three degrees of freedom using the A* node search technique.
Abstract: As space operations become more complex and ambitious, there is a corresponding increase in the sophistication required of on-board algorithms for proximity operations. Unmanned missions such as planetary probes require sophisticated algorithms to deal with evolving mission requirements and contingencies, where man-in-the-loop control will be impractical. Other future missions that require autonomy for safety or security reasons will also require intelligent on-board controllers. This effort discusses a structure for a candidate proximity operations controller and provides initial development of an intelligent trajectory planner in three degrees of freedom using the A* node search technique.
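The planner itself is not described beyond its name, so the sketch below is a generic stand-in: A* node search on a 3-D occupancy grid, one axis per degree of freedom, with unit step costs and a Manhattan-distance heuristic. The obstacle set and bounds are hypothetical.

```python
import heapq

def a_star(start, goal, obstacles, bound=20):
    """A* search over a bounded 3-D integer grid with unit moves."""
    def h(p):
        return sum(abs(a - b) for a, b in zip(p, goal))  # admissible heuristic
    g, parent = {start: 0}, {}
    open_set = [(h(start), start)]
    while open_set:
        _, p = heapq.heappop(open_set)
        if p == goal:                        # rebuild the path from parents
            path = [p]
            while p != start:
                p = parent[p]
                path.append(p)
            return path[::-1]
        for axis in range(3):                # six unit moves, one per face
            for step in (-1, 1):
                q = list(p); q[axis] += step; q = tuple(q)
                if q in obstacles or any(abs(c) > bound for c in q):
                    continue
                if g[p] + 1 < g.get(q, float("inf")):
                    g[q], parent[q] = g[p] + 1, p
                    heapq.heappush(open_set, (g[q] + h(q), q))
    return None                              # goal unreachable

print(a_star((0, 0, 0), (3, 2, 1), obstacles={(1, 0, 0), (0, 1, 0)}))
```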

Journal Article
TL;DR: In this paper, a U-D Covariance Factorization Kalman Filter is used for the on-board estimator with dynamics based on the Clohessy-Wiltshire equations.
Abstract: The relative position determination of a cluster of satellites operating in a low earth orbit is investigated. A U-D Covariance Factorization Kalman Filter is used for the on-board estimator, with dynamics based on the Clohessy-Wiltshire equations. Measurements consist of range data between a single host satellite and the remaining cluster. Therefore, only relative position and velocity states with respect to the host satellite can be determined. A 15-sample Monte Carlo simulation was conducted with clusters of 2, 5, and 10 satellites, respectively. Performance results consist of average error, average true error, and filter covariance as a function of time.
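The dynamics model named in the abstract is standard and can be sketched. Below is the closed-form Clohessy-Wiltshire state-transition matrix such a filter's time-propagation step would use; the orbit rate, state convention, and initial state are my assumptions, and the thesis's U-D factorized covariance propagation and range-only measurement updates are omitted.

```python
import numpy as np

def cw_stm(n, t):
    """Closed-form Clohessy-Wiltshire state-transition matrix for the state
    [x, y, z, vx, vy, vz] (x radial, y along-track, z cross-track), where
    n is the mean motion of the host's circular orbit."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,         2*(1 - c)/n,     0],
        [6*(s - n*t),  1, 0,    2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
        [0,            0, c,    0,           0,               s/n],
        [3*n*s,        0, 0,    c,           2*s,             0],
        [6*n*(c - 1),  0, 0,   -2*s,         4*c - 3,         0],
        [0,            0, -n*s, 0,           0,               c],
    ])

n = 2 * np.pi / 5400.0                       # ~90-minute orbit, rad/s
x0 = np.array([100.0, 0.0, 0.0, 0.0, -0.2, 0.0])  # hypothetical offset (m, m/s)
print(cw_stm(n, 60.0) @ x0)                  # relative state one minute later
```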

Journal Article
TL;DR: In this paper, a homomorphic transformation is used to transform the speckled images to a uniform background to simulate image motion and/or platform jitter, and then the resulting image can be uniquely reconstructed using bispectral estimation.
Abstract: : Coherent speckle noise as a multiplicative noise process that has a negative exponential probability density function. Using a homomorphic transformation, this speckle noise is converted to a signal-independent, additive process. The speckled images are randomly jittered from frame-to-frame against a uniform background to simulate image motion and/or platform jitter. Multiple images are logarithmically transformed and ensemble averaged in the bispectral domain. The bispectrum ignores this image motion so no blurring results from the ensemble averaging. Object Fourier magnitude and phase information are also retained in the bispectrum so that the resultant image can be uniquely reconstructed. This value is then exponentiated to complete the image reconstruction process. Since speckle masks the resolution of details in the noisy image and effectively destroys the object structure within the image, it is seen that image reconstruction using bispectrum estimation results in images that regain their object structure. Both one-dimensional and two- dimensional images were tested using separate bispectral signal reconstruction algorithms for each. (RH)
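The key property relied on, that the bispectrum is unaffected by frame-to-frame translation, is easy to demonstrate in one dimension. The sketch below is my demo, with circular shifts standing in for jitter: it computes B(k1, k2) = X[k1]·X[k2]·X*[k1+k2] and shows it is identical for a signal and its shifted copy.

```python
import numpy as np

def bispectrum(x):
    """B(k1, k2) = X[k1] * X[k2] * conj(X[k1 + k2]) for a 1-D signal.
    Circular shifts of x leave B unchanged, so ensemble averaging
    bispectra of jittered frames introduces no motion blur."""
    X = np.fft.fft(x)
    k = np.arange(len(x))
    return X[:, None] * X[None, :] * np.conj(X[(k[:, None] + k[None, :]) % len(x)])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
shifted = np.roll(x, 17)                     # stand-in for frame-to-frame jitter
print(np.max(np.abs(bispectrum(x) - bispectrum(shifted))))   # ~0
```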

Journal ArticleDOI
TL;DR: For a restricted class of systems in which the communication channels between processes are of unit capacity, the nontermination problem is solvable in O(k² log n) nondeterministic space for round-robin, random, priority, and first-come-first-served scheduling, and in n^O(k²) nondeterministic time for equifair scheduling.
Abstract: In this paper we study the computational complexity of the nontermination problem for systems of communicating processes with respect to five types of scheduling schemes, namely, round-robin, random, priority, first-come-first-served, and equifair schedules. We show that the problem is undecidable (Π₁-complete) with respect to round-robin, first-come-first-served, and priority scheduling, whereas it is decidable with respect to random and equifair scheduling. (Here Π₁ denotes the set of languages whose complements are recursively enumerable.) For a restricted class of systems in which the communication channels between processes are of unit capacity, we show that the nontermination problem is solvable in O(k² log n) nondeterministic space for round-robin, random, priority, and first-come-first-served scheduling, and in n^O(k²) nondeterministic time for equifair scheduling, where k is the number of processes and n is the size of the maximal process. We are also able to establish a lower bound of Ω(((k−59)/20) log n) nondeterministic space for all five types of scheduling schemes.

Journal Article
TL;DR: In this article, the effects of curvature and the resulting centrifugal instabilities on local heat transfer distributions were studied in a curved channel at Dean numbers ranging from 50 to 200.
Abstract: The effects of curvature and the resulting centrifugal instabilities on local heat transfer distributions are studied in a curved channel at Dean numbers ranging from 50 to 200. The channel has a rectangular cross section of 1.27 cm by 50.1 cm giving an aspect ratio of 40 to 1. Flow is heated in a straight portion of the channel prior to the curved portion in order to obtain flow which is hydrodynamically and thermally fully developed. All baseline tests confirm techniques employed and qualify flow behavior. These consist of energy balance checks and comparison of results from the straight section to numerical and analytic solutions. Nusselt numbers in the curved section initially show an abrupt decrease after the imposition of the stabilizing influences of convex curvature. These are followed by a gradual increase as centrifugal instabilities and Dean vortices form and develop. Spatially resolved results also show significant surface Nusselt number variations across the span of a vortex pair, especially on the concave surface. On the convex surface, local Nusselt numbers are much more apt to be spanwise uniform.

Journal Article
TL;DR: In this article, the authors examined the short-term variability of upper atmospheric water vapor with the intent of testing a proposed extraterrestrial water vapor source: an influx of the small (12 m in diameter) comets described by Frank et al.
Abstract: This study examines the short-term variability of upper atmospheric water vapor with the intent of examining a proposed extraterrestrial water vapor source. This source would be provided by an influx of the small (12 m in diameter) comets described by Frank et al. (1986). A ground-based microwave (22.235 GHz) radiometer located at Penn State has been measuring the thermal emission of upper atmospheric water vapor since 1984. Over 22,000 20-minute brightness temperature spectra from the period of Nov 1984 through Dec 1988 were analyzed for statistically significant, transient increases in the amount of water vapor. This signature could indicate the presence of the cometary water vapor source. Individual 20-minute spectra were compared to the local 12-hour mean and variance spectra in a search for this excess signal signature, or event. The analysis yielded 111 significant events which could have been caused by the cometary water vapor. The rate of detection (2.9 days between events) compares favorably with what could be expected from the small comet theory (1.8 days between events). This result is also comparable to the 4.1 days between events obtained by Adams (1988) using a small subset of this data base. After exploring alternate explanations for the observed phenomena, it is concluded that these results support the small comet hypothesis.

Journal Article
TL;DR: In this article, the authors compare and contrast results of three different models of acoustic propagation in an ocean with a sloping bottom, predicting the pressure field in a wedge-shaped ocean, with a pressure release surface and an acoustically fast bottom.
Abstract: We compare and contrast results of three different models of acoustic propagation in an ocean with a sloping bottom, predicting the pressure field in a wedge-shaped ocean with a pressure-release surface and an acoustically fast bottom. One image-method model and two PE (parabolic equation) models were selected. The computer model for the image method was written in Basic for use on a PC and in Fortran for use on the NPS IBM 3033 mainframe computer. The geometric shape of the wedge for the image calculation consists of a plane, sloping, penetrable bottom and a plane, horizontal, pressure-release upper surface. This shape models a real continental shelf. The shape used in the parabolic equation models is a pressure-release plane surface above a conical bottom. Two PE models were studied. One is based on a wide-angle parabolic equation and provides a contour plot of transmission loss (TL) as a function of position. The other, resident on the NPS computer, is based on an implicit finite-difference algorithm. Results show that there is fair agreement among the different models. Comparison of the image method and the second PE model shows a 2 to 3 dB difference in transmission loss near the surface and good agreement deeper. The transmission loss predicted by the first PE model differs from both other models by 4 to 5 dB near the surface and at the middle depths. Near the bottom they all agree well.

Journal Article
TL;DR: Test results indicate the performance of these networks was equal to that of Hyperplane Classifier networks trained, via backpropagation, to optimize the Mean Square Error, Cross Entropy, and Classification Figure of Merit objective functions, but the RBF networks trained much faster.
Abstract: Recent work concerning artificial neural networks has focused on decreasing network training times. Kernel Classifier networks, using radial basis functions (RBFs) as the kernel function, can be trained quickly with little performance degradation. Short training times are critical for systems which must adapt to changing environments. The function of Kernel Classifier networks is based on the principle that multivariate functions can be approximated via linear combinations of RBFs. RBFs can also perform probability density estimation, making classifications that approximate a Bayes optimal discriminant. Methods used to set the RBF centers included matching the training data, Kohonen training, K-means clustering, and placement at averages of data clusters of the same class. Test results indicate the performance of these networks was equal to that of Hyperplane Classifier networks trained, via backpropagation, to optimize the Mean Square Error, Cross Entropy, and Classification Figure of Merit objective functions. However, the RBF networks trained much faster. The RBF networks also outperformed Probabilistic Neural Networks (PNN), indicating the weights in the output layer offset the choice of non-optimal spreads. This ability to train quickly while obtaining high classification accuracies makes RBF Kernel Classifier networks an attractive option for systems which must adapt quickly to changing environments.
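A minimal sketch of one of the variants the abstract lists (K-means-placed centers with output weights solved by linear least squares rather than backpropagation) shows where the short training times come from. The spread heuristic, toy data, and all names below are mine, not the thesis's.

```python
import numpy as np

def train_rbf_classifier(X, y, n_centers=10, seed=0):
    """RBF kernel classifier: K-means centers, then a single linear
    least-squares solve for the output weights (no backpropagation)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(20):                              # a few K-means iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(n_centers):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
    sigma = d[d > 0].mean() / 2                      # heuristic common spread

    def phi(A):                                      # design matrix of RBF outputs
        return np.exp(-((A[:, None] - centers[None]) ** 2).sum(-1) / (2 * sigma**2))

    targets = np.eye(y.max() + 1)[y]                 # one-hot class targets
    W, *_ = np.linalg.lstsq(phi(X), targets, rcond=None)
    return lambda A: phi(A) @ W                      # returns class scores

# Toy two-class problem: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
predict = train_rbf_classifier(X, y)
print((predict(X).argmax(axis=1) == y).mean())       # training accuracy, ~1.0
```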

Journal Article
TL;DR: In this paper, the authors compared nighttime thermospheric winds for Sondrestrom, Greenland from 11 nights between 1983 and 1988, have been compared to learn about the O(+)-O collision cross section and the high-latitude atomic oxygen density.
Abstract: : Nighttime thermospheric winds for Sondrestrom, Greenland from 11 nights between 1983 and 1988, have been compared to learn about the O(+)-O collision cross section and the high-latitude atomic oxygen density. The horizontal winds in the magnetic meridian were derived indirectly from incoherent-scatter radar (ISR) measurements on ion velocities antiparallel to the magnetic field and directly from Fabry-Perot interferometer (FPI) measurements of Doppler shifts of the (6300-A) emission of atomic oxygen. In deriving the radar winds, the O(+)-O collision cross section, was scaled by a factor of f what was varied from 0.5 to 5.1. On the basis of several arguments the altitude of the 6300-A emission was assumed to be 230 km. The best agreement between the ISR and FPI winds was obtained when f was increased substantially, to between 1.7 and 3.4. If the average peak emission altitude were higher, these factors would be larger; if it were lower, they would be somewhat smaller. However, if the average altitude were substantially lower it would have been more difficult to have obtained agreement between the two techniques. (R.H.)

Journal Article
TL;DR: In this paper, a CAD-compatible circuit model for an infinitesimally thin inductive strip centered in homogeneous finline for 0.1 less than or equal to w over b less than 1.0.
Abstract: : This thesis describes a CAD-compatible circuit model for an infinitesimally thin inductive strip centered in homogeneous finline for 0.1 less than or equal to w over b less than or equal to 1.0. The model is shown to predict scattering data which agrees with data computed using the spectral domain method. Results were generated for strips of length T greater than or equal to 10 mils in X-band. By applying the scaling principle, the model is valid for any waveguide band over the normal frequency range for the dominant TE sub 10 mode.