
Showing papers by "Naval Postgraduate School published in 2005"


Journal ArticleDOI
06 Oct 2005
TL;DR: This work advocates a complete refactoring of network control and management functionality and proposes three key principles--network-level objectives, network-wide views, and direct control--that the authors believe should underlie a new architecture, called 4D after the architecture's four planes: decision, dissemination, discovery, and data.
Abstract: Today's data networks are surprisingly fragile and difficult to manage. We argue that the root of these problems lies in the complexity of the control and management planes--the software and protocols coordinating network elements--and particularly the way the decision logic and the distributed-systems issues are inexorably intertwined. We advocate a complete refactoring of the functionality and propose three key principles--network-level objectives, network-wide views, and direct control--that we believe should underlie a new architecture. Following these principles, we identify an extreme design point that we call "4D," after the architecture's four planes: decision, dissemination, discovery, and data. The 4D architecture completely separates an AS's decision logic from protocols that govern the interaction among network elements. The AS-level objectives are specified in the decision plane, and enforced through direct configuration of the state that drives how the data plane forwards packets. In the 4D architecture, the routers and switches simply forward packets at the behest of the decision plane, and collect measurement data to aid the decision plane in controlling the network. Although 4D would involve substantial changes to today's control and management planes, the format of data packets does not need to change; this eases the deployment path for the 4D architecture, while still enabling substantial innovation in network control and management. We hope that exploring an extreme design point will help focus the attention of the research and industrial communities on this crucially important and intellectually challenging area.

805 citations
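The decision/data-plane split described above can be sketched in a few lines: a logically centralized decision plane computes forwarding state from a network-wide view and installs it directly, reducing routers to lookup engines. This is a hypothetical toy (a shortest-path objective via BFS over an adjacency map), not the paper's design:

```python
from collections import deque

def compute_forwarding(view):
    """Decision plane: from a network-wide view (adjacency map),
    compute next-hop forwarding state for every router toward every
    destination. Toy sketch; the real 4D decision plane supports far
    richer network-level objectives."""
    tables = {}
    for dst in view:
        # A BFS tree rooted at dst gives each node its next hop toward dst.
        prev = {dst: None}
        q = deque([dst])
        while q:
            u = q.popleft()
            for v in view[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        for node, nh in prev.items():
            if nh is not None:
                tables.setdefault(node, {})[dst] = nh
    return tables

# Stand-in for the dissemination plane: state is pushed directly
# to each "router" rather than negotiated by distributed protocols.
view = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
tables = compute_forwarding(view)
```

Here `tables["A"]["D"]` is the next hop A uses for traffic to D; the data plane only consults this installed state.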


Journal ArticleDOI
TL;DR: This paper discusses a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments and provides a research agenda listing problems in the design of simulation experiments that require more investigation.
Abstract: Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. We discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments--as opposed to real-world experiments--that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system, (2) finding robust decisions or policies as opposed to so-called optimal solutions, and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical for simulation, such as having many more factors than in real-world experiments, and the sequential nature of the data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs, such as the ease of design construction, the flexibility for analysis, and efficiency considerations. Moreover, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, complicated metamodels require more simulation runs. We present several procedures to validate the metamodel estimated from a specific design, and we summarize a case study illustrating several of our major themes. We conclude with a discussion of areas that merit more work to achieve the potential benefits--either via new research or incorporation into standard simulation or statistical packages.

605 citations
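One workhorse design from such a toolkit is the Latin hypercube, popular for computer experiments because it stratifies every factor with few runs. The sketch below is a generic random Latin hypercube in pure Python, not a procedure taken from the paper:

```python
import random

def latin_hypercube(n_runs, n_factors, seed=0):
    """One random Latin hypercube design scaled to [0, 1): each
    factor's column is a random permutation of n_runs equally spaced
    strata, so every stratum of every factor is sampled exactly once."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        levels = list(range(n_runs))
        rng.shuffle(levels)
        # Jitter each point uniformly within its stratum.
        columns.append([(lv + rng.random()) / n_runs for lv in levels])
    return [list(row) for row in zip(*columns)]  # n_runs x n_factors

design = latin_hypercube(10, 3)
```

Each of the 10 rows is one simulation run; each of the 3 columns covers all ten strata of its factor, which is what makes the design space-filling with so few runs.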


Journal ArticleDOI
TL;DR: In this article, the authors consider new market creation as a process involving a new network of stakeholders, initiated through an effectual commitment that sets in motion two concurrent cycles of expanding resources and converging constraints that result in the new market.
Abstract: Is new market creation a search and selection process within the theoretical space of all possible markets? Or is it the outcome of a process of transformation of extant realities into new possibilities? In this article we consider new market creation as a process involving a new network of stakeholders. The network is initiated through an effectual commitment that sets in motion two concurrent cycles of expanding resources and converging constraints that result in the new market. The dynamic model was induced from two empirical investigations: a cognitive science-based investigation of entrepreneurial expertise, and a real-time history of the RFID industry.

545 citations


Book ChapterDOI
29 Aug 2005
TL;DR: This work refines the most compact implementations of AES by examining many choices of basis for each subfield, not only polynomial bases as in previous work, but also normal bases, giving 432 cases to achieve a more compact S-box.
Abstract: A key step in the Advanced Encryption Standard (AES) algorithm is the “S-box.” Many implementations of AES have been proposed, for various goals, that effect the S-box in various ways. In particular, the most compact implementations to date of Satoh et al.[14] and Mentens et al.[6] perform the 8-bit Galois field inversion of the S-box using subfields of 4 bits and of 2 bits. Our work refines this approach to achieve a more compact S-box. We examined many choices of basis for each subfield, not only polynomial bases as in previous work, but also normal bases, giving 432 cases. The isomorphism bit matrices are fully optimized, improving on the “greedy algorithm.” Introducing some NOR gates gives further savings. The best case improves on [14] by 20%. This decreased size could help for area-limited hardware implementations, e.g., smart cards, and to allow more copies of the S-box for parallelism and/or pipelining of AES.

465 citations
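For reference, the S-box being optimized is the composition of inversion in GF(2^8) with an affine map over GF(2). The sketch below computes it the slow, direct way (brute-force inversion, no subfield decomposition); it is useful only as a software check against FIPS-197, not as the compact hardware construction the paper builds:

```python
def gmul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def ginv(a):
    """Brute-force GF(2^8) inverse; AES maps 0 to 0 by convention."""
    if a == 0:
        return 0
    for x in range(1, 256):
        if gmul(a, x) == 1:
            return x

def sbox(a):
    """AES S-box: GF(2^8) inversion followed by the affine map with c=0x63."""
    x = ginv(a)
    res = 0
    for i in range(8):
        bit = (((x >> i) & 1) ^ ((x >> ((i + 4) % 8)) & 1)
               ^ ((x >> ((i + 5) % 8)) & 1) ^ ((x >> ((i + 6) % 8)) & 1)
               ^ ((x >> ((i + 7) % 8)) & 1) ^ ((0x63 >> i) & 1))
        res |= bit << i
    return res
```

For example, `sbox(0x53)` gives `0xED`, the SubBytes example from FIPS-197, and `sbox(0x00)` gives the familiar `0x63`.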


Journal ArticleDOI
TL;DR: In this article, the authors used historical station rainfall data to classify the annual cycles of rainfall over land areas, the TRMM rainfall measurements to identify the monsoon regimes of the four seasons in all of Southeast Asia, and the QuikSCAT winds to study the causes of the variations.
Abstract: In general, the Bay of Bengal, Indochina Peninsula, and Philippines are in the Asian summer monsoon regime while the Maritime Continent experiences a wet monsoon during boreal winter and a dry season during boreal summer. However, the complex distribution of land, sea, and terrain results in significant local variations of the annual cycle. This work uses historical station rainfall data to classify the annual cycles of rainfall over land areas, the TRMM rainfall measurements to identify the monsoon regimes of the four seasons in all of Southeast Asia, and the QuikSCAT winds to study the causes of the variations. The annual cycle is dominated largely by interactions between the complex terrain and a simple annual reversal of the surface monsoonal winds throughout all monsoon regions from the Indian Ocean to the South China Sea and the equatorial western Pacific. The semiannual cycle is comparable in magnitude to the annual cycle over parts of the equatorial landmasses, but only a very small regio...

316 citations


Journal ArticleDOI
TL;DR: The Time-Diffusion Synchronization Protocol (TDP) is proposed as a network-wide time synchronization protocol that allows the sensor network to reach an equilibrium time and maintains a small time deviation tolerance from the equilibrium time.
Abstract: In the near future, small intelligent devices will be deployed in homes, plantations, oceans, rivers, streets, and highways to monitor the environment. These devices require time synchronization, so voice and video data from different sensor nodes can be fused and displayed in a meaningful way at the sink. Instead of time synchronization between just the sender and receiver or within a local group of sensor nodes, some applications require the sensor nodes to maintain a similar time within a certain tolerance throughout the lifetime of the network. The Time-Diffusion Synchronization Protocol (TDP) is proposed as a network-wide time synchronization protocol. It allows the sensor network to reach an equilibrium time and maintains a small time deviation tolerance from the equilibrium time. In addition, it is analytically shown that the TDP enables time in the network to converge. Also, simulations are performed to validate the effectiveness of TDP in synchronizing the time throughout the network and balancing the energy consumed by the sensor nodes.

306 citations
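The equilibrium idea can be illustrated with a toy diffusion iteration in which every node repeatedly nudges its clock toward the average of its neighbors' clocks. The topology, initial clocks, and blending factor below are all hypothetical, and real TDP adds leader election and message-delay estimation on top of this averaging:

```python
def diffuse(clocks, neighbors, rounds=50, alpha=0.5):
    """Toy time diffusion: each round, every node moves a fraction
    alpha of the way toward its neighbors' mean clock. On a connected
    graph this converges to a network-wide equilibrium time."""
    for _ in range(rounds):
        new = {}
        for node, t in clocks.items():
            avg = sum(clocks[m] for m in neighbors[node]) / len(neighbors[node])
            new[node] = t + alpha * (avg - t)
        clocks = new
    return clocks

# Four nodes in a line, starting up to 12 time units apart.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
clocks = diffuse({0: 0.0, 1: 4.0, 2: 8.0, 3: 12.0}, neighbors)
spread = max(clocks.values()) - min(clocks.values())
```

After 50 rounds the spread between the fastest and slowest clock is far below the initial 12 units, i.e., the network has reached a common equilibrium time within a small tolerance.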


Proceedings ArticleDOI
13 Mar 2005
TL;DR: This paper describes how to compute the reachability a network provides from a snapshot of the configuration state from each of the routers, and extends the algorithm to model the influence of packet transformations along the path.
Abstract: The primary purpose of a network is to provide reachability between applications running on end hosts. In this paper, we describe how to compute the reachability a network provides from a snapshot of the configuration state from each of the routers. Our primary contribution is the precise definition of the potential reachability of a network and a substantial simplification of the problem through a unified modeling of packet filters and routing protocols. In the end, we reduce a complex, important practical problem to computing the transitive closure of set union and intersection operations on reachability set representations. We then extend our algorithm to model the influence of packet transformations (e.g., by NATs or ToS remapping) along the path. Our technique for static analysis of network reachability is valuable for verifying the intent of the network designer, troubleshooting reachability problems, and performing "what-if" analysis of failure scenarios.

284 citations
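The set-based reduction can be illustrated by modeling each link's packet filter as a set of packet classes: a path forwards the intersection of its link sets, and potential reachability is the union over paths. The graph and packet classes below are hypothetical, and real configurations also require modeling routing protocols and transformations, as the paper does:

```python
def reachable(graph, src, dst, packets, seen=None):
    """Potential reachability: the set of packet classes some simple
    path can deliver from src to dst, where graph[u][v] is the set of
    classes link u->v permits. Union over paths of the intersection
    of link sets along each path."""
    if src == dst:
        return set(packets)
    seen = (seen or set()) | {src}
    out = set()
    for nxt, allowed in graph.get(src, {}).items():
        if nxt not in seen:
            out |= (set(packets) & allowed
                    & reachable(graph, nxt, dst, packets & allowed, seen))
    return out

packets = {"web", "dns", "ssh"}
graph = {
    "A": {"B": {"web", "dns"}, "C": {"ssh", "dns"}},
    "B": {"D": {"web"}},
    "C": {"D": {"dns", "ssh"}},
}
result = reachable(graph, "A", "D", packets)  # union over both A-to-D paths
```

The upper path delivers only "web" and the lower path "dns" and "ssh", so together A can potentially reach D with all three classes, even though no single path carries them all.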


Journal ArticleDOI
TL;DR: Locality of reference is a fundamental principle of computing with many applications. Here is its story.
Abstract: Locality of reference is a fundamental principle of computing with many applications. Here is its story.

277 citations


Proceedings ArticleDOI
18 Apr 2005
TL;DR: Experimental results validate the quaternion-based Kalman filter design and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking.
Abstract: A human body motion tracking system based on use of the MARG (Magnetic, Angular Rate, and Gravity) sensors has been under development at the Naval Postgraduate School and Miami University. The design of a quaternion-based Kalman filter for processing the MARG sensor data was described in [1]. This paper presents the real-time implementation and testing results of the quaternion-based Kalman filter. Experimental results validate the Kalman filter design, and show the feasibility of the MARG sensors for real-time human body motion tracking.

269 citations
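The prediction step that quaternion attitude filters share is attitude propagation from gyro rates, q̇ = ½ q ⊗ (0, ω). The sketch below is a minimal gyro-integration example with made-up rates; it is not the paper's MARG Kalman filter, which additionally fuses accelerometer and magnetometer measurements to correct drift:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q (x) (0, omega), renormalized
    to keep q a unit quaternion."""
    dq = qmul(q, (0.0, *omega))
    q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

q = (1.0, 0.0, 0.0, 0.0)            # identity orientation
for _ in range(1000):               # 1 s of data at 1 kHz
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.001)
# After rotating at pi/2 rad/s about z for 1 s, q should be close to
# a 90-degree rotation about z: (cos 45deg, 0, 0, sin 45deg).
```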


Journal ArticleDOI
TL;DR: In this article, the authors examined the role of the unique topography of the Maritime Continent region with respect to the variability in the synoptic-scale cold surge and Borneo vortex and found that the interaction of northeast winds with local topography and dynamic response to the change in latitude contribute to the turning of the winds and localized patterns of deep convection.
Abstract: During boreal winter, the Maritime Continent is a region of deep cumulus convection and heavy precipitation systems that play a major role in several global- and regional-scale processes. Over the western part of this region, the synoptic-scale Borneo vortex, the northeast cold surge, and the intraseasonal Madden–Julian oscillation (MJO) contribute to the variability in deep convection. This work studies the impact on deep convection due to interactions among these three different motion systems. Furthermore, the role of the unique topography of the region is examined with respect to the variability in the synoptic-scale cold surge and Borneo vortex. On the synoptic scale, the interaction of northeast winds with local topography and the dynamic response to the change in latitude contribute to the turning of the winds and localized patterns of deep convection. In days without a Borneo vortex, deep convection tends to be suppressed over the South China Sea and Borneo and enhanced downstream over th...

266 citations


Journal ArticleDOI
TL;DR: In this paper, a global eddy-resolving simulation using the Parallel Ocean Program (POP) general circulation model is presented; the simulation represents a major step forward in high-resolution ocean modeling, with applications to prediction, climate, and general ocean science.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed mean meteorological data collected at five levels on a 20m tower over the Arctic pack ice during the Surface Heat Budget of the Arctic Ocean experiment (SHEBA) to examine different regimes of the stable boundary layer (SBL).
Abstract: Turbulent and mean meteorological data collected at five levels on a 20-m tower over the Arctic pack ice during the Surface Heat Budget of the Arctic Ocean experiment (SHEBA) are analyzed to examine different regimes of the stable boundary layer (SBL). Eleven months of measurements during SHEBA cover a wide range of stability conditions, from the weakly unstable regime to very stable stratification. Scaling arguments and our analysis show that the SBL can be classified into four major regimes: (i) surface-layer scaling regime (weakly stable case), (ii) transition regime, (iii) turbulent Ekman layer, and (iv) intermittently turbulent Ekman layer (supercritical stable regime). These four regimes may be considered as the basic states of the traditional SBL. Sometimes these regimes, especially the last two, can be markedly perturbed by gravity waves, detached elevated turbulence (‘upside down SBL’), and inertial oscillations. Traditional Monin–Obukhov similarity theory works well in the weakly stable regime. In the transition regime, Businger–Dyer formulations work if scaling variables are re-defined in terms of local fluxes, although stability function estimates expressed in these terms include more scatter compared to the surface-layer scaling. As stability increases, the near-surface turbulence is affected by the turning effects of the Coriolis force (the turbulent Ekman layer). In this regime, the surface layer, where the turbulence is continuous, may be very shallow (< 5 m). Turbulent transfer near the critical Richardson number is characterized by small but still significant heat flux and negligible stress. The supercritical stable regime, where the Richardson number exceeds a critical value, is associated with collapsed turbulence and the strong influence of the earth’s rotation even near the surface. In the limit of very strong stability, the stress is no longer a primary scaling parameter.
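The Businger–Dyer formulations mentioned above are simple algebraic stability corrections to surface-layer similarity. A sketch with the commonly quoted coefficients follows; the exact coefficients vary between studies, and, as the abstract stresses, these forms hold only in the weakly stable and unstable surface layer, not in the more strongly stable regimes:

```python
def phi_m(zeta):
    """Businger-Dyer dimensionless wind-shear function phi_m(z/L).
    zeta = z/L is the Monin-Obukhov stability parameter. Coefficients
    16 (unstable) and 5 (stable) are the widely quoted values."""
    if zeta < 0:                        # unstable stratification
        return (1.0 - 16.0 * zeta) ** -0.25
    return 1.0 + 5.0 * zeta             # weakly stable stratification

neutral = phi_m(0.0)     # 1 by construction: log-law limit
stable = phi_m(0.2)      # shear enhanced under stable stratification
unstable = phi_m(-1.0)   # shear reduced under convective conditions
```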

Journal ArticleDOI
22 Apr 2005-Science
TL;DR: The recent trend of declining winter and spring snow cover over Eurasia is causing a land-ocean thermal gradient that is particularly favorable to stronger southwest (summer) monsoon winds.
Abstract: The recent trend of declining winter and spring snow cover over Eurasia is causing a land-ocean thermal gradient that is particularly favorable to stronger southwest (summer) monsoon winds. Since 1997, sea surface winds have been strengthening over the western Arabian Sea. This escalation in the intensity of summer monsoon winds, accompanied by enhanced upwelling and an increase of more than 350% in average summertime phytoplankton biomass along the coast and over 300% offshore, raises the possibility that the current warming trend of the Eurasian landmass is making the Arabian Sea more productive.

Journal ArticleDOI
TL;DR: In this article, the microstructural evolution occurring in disks of commercial purity aluminum processed by high pressure torsion (HPT) under constrained conditions was evaluated and the results showed the microhardness is lower and there is less grain refinement in the central regions of the disks in the initial stages of torsional straining but the micro structures become reasonably homogeneous across the disks at high imposed strains.
Abstract: An investigation was conducted to evaluate the microstructural evolution occurring in disks of commercial purity aluminum processed by high-pressure torsion (HPT) under constrained conditions. Microhardness measurements were taken to assess the variation in hardness across the diameters of disks subjected to different imposed strains and the microstructures were observed at the edges and in the centers of the disks using transmission electron microscopy. The results show the microhardness is lower and there is less grain refinement in the central regions of the disks in the initial stages of torsional straining but the microstructures become reasonably homogeneous across the disks at high imposed strains.
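The radial hardness gradient has a simple geometric origin: in torsion, the imposed von Mises equivalent strain grows linearly with radius, ε = 2πNr/(√3·h), so the disk center is nominally unstrained in the early stages. A sketch with hypothetical disk dimensions (not the paper's specimen geometry):

```python
import math

def hpt_equivalent_strain(n_turns, radius_mm, thickness_mm):
    """Von Mises equivalent strain in constrained HPT:
    eps = 2*pi*N*r / (sqrt(3)*h). Vanishes at the disk center (r = 0),
    which is why hardness and grain refinement lag there initially."""
    return 2.0 * math.pi * n_turns * radius_mm / (math.sqrt(3.0) * thickness_mm)

# Hypothetical disk: 10 mm diameter, 0.8 mm thick, 5 turns.
center = hpt_equivalent_strain(5, 0.0, 0.8)   # zero strain at the center
edge = hpt_equivalent_strain(5, 5.0, 0.8)     # very large strain at the edge
```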


Journal ArticleDOI
TL;DR: In this paper, the authors use empirical data from two separate studies of entrepreneurial expertise, one involving the creation of new ventures and the other the birth of a new industry, to identify three logics that constitute working elements of a technology of foolishness: (1) the logic of identity, as opposed to the logic of preferences; (2) the logic of action; and (3) the logic of commitment.

Journal ArticleDOI
TL;DR: In this article, a detailed investigation was conducted to evaluate the microstructural characteristics in samples of pure nickel processed using three different procedures of severe plastic deformation (SPD): equal-channel angular pressing (ECAP), high-pressure torsion (HPT), and a combination of ECAP and HPT.
Abstract: A detailed investigation was conducted to evaluate the microstructural characteristics in samples of pure nickel processed using three different procedures of severe plastic deformation (SPD): equal-channel angular pressing (ECAP), high-pressure torsion (HPT) and a combination of ECAP and HPT. Several different experimental techniques were employed to measure the grain size distributions, the textures, the distributions of the misorientation angles and the boundary surface energies in the as-processed materials. It is shown that a combination of ECAP and HPT leads both to a greater refinement in the microstructure and to a smaller fraction of boundaries having low angles of misorientation. The estimated boundary surface energies were higher than anticipated from data for coarse-grained materials and the difference is attributed to the non-equilibrium character of many of the interfaces after SPD processing.

Journal ArticleDOI
TL;DR: Computer science meets every criterion for being a science, but it has a self-inflicted credibility problem.
Abstract: Computer science meets every criterion for being a science, but it has a self-inflicted credibility problem.

Journal ArticleDOI
TL;DR: Rip current kinematics and beach morphodynamics were measured for 44 days at Sand City, Monterey Bay, CA, using 15 instruments composed of co-located velocity and pressure sensors, acoustic Doppler current profilers, and kinematic GPS surveys.

Journal ArticleDOI
TL;DR: In this paper, the coarsening kinetics of Ag3Sn particles in SnAg-based solder are studied, and the results are correlated with impression creep data from individual microelectronic solder balls subjected to thermal aging treatments.
Abstract: The creep response of solder joints in a microelectronic package, which are subjected to aggressive thermo-mechanical cycling (TMC) during service, often limits the reliability of the entire package. Furthermore, during TMC, the microstructures of the new lead-free solders (Sn–Ag and Sn–Ag–Cu) can undergo significant in situ strain-enhanced coarsening, resulting in in-service evolution of the creep behavior. In this paper, the coarsening kinetics of Ag3Sn particles in SnAg-based solder are studied, and the results are correlated with impression creep data from individual microelectronic solder balls subjected to thermal aging treatments. Coarsening influences creep behavior in two ways. At low stresses, the creep rate increases proportionately with precipitate size. At high stresses, precipitate coarsening influences creep response by altering the threshold stress for particle-limited creep. Based on these observations, a microstructurally adaptive creep model for solder interconnects undergoing in situ coarsening is presented.
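The coupling described above can be caricatured with two standard ingredients: LSW-type coarsening, in which the cube of the particle radius grows linearly in time, and an Orowan-like threshold stress that falls as particles coarsen. All constants below are hypothetical placeholders, not the paper's fitted kinetics:

```python
def coarsened_radius(r0_um, k_um3_per_hr, t_hr):
    """LSW-type coarsening: r^3 = r0^3 + k*t. The rate constant k is
    a hypothetical placeholder, not a fitted value."""
    return (r0_um ** 3 + k_um3_per_hr * t_hr) ** (1.0 / 3.0)

def threshold_stress(r_um, c_mpa_um=10.0):
    """Orowan-like threshold, sigma_th ~ c/r (hypothetical constant c):
    as particles coarsen, the threshold for particle-limited creep
    drops, so the high-stress creep rate rises in service."""
    return c_mpa_um / r_um

r_fresh = coarsened_radius(0.5, 0.01, 0.0)      # as-reflowed particle size
r_aged = coarsened_radius(0.5, 0.01, 1000.0)    # after thermal aging
```

The point of the sketch is only the trend: aging grows the particles, which lowers the threshold stress, which in turn accelerates creep at a fixed applied stress.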

Journal ArticleDOI
TL;DR: JOINT DEFENDER is a new two-sided optimization model for planning the pre-positioning of defensive missile interceptors to counter an attack threat that can provide unique insight into the value of secrecy and deception to either side.
Abstract: We describe JOINT DEFENDER, a new two-sided optimization model for planning the pre-positioning of defensive missile interceptors to counter an attack threat. In our basic model, a defender pre-positions ballistic missile defense platforms to minimize the worst-case damage an attacker can achieve; we assume that the attacker will be aware of defensive pre-positioning decisions, and that both sides have complete information as to target values, attacking-missile launch sites, weapon system capabilities, etc. Other model variants investigate the value of secrecy by restricting the attacker's and/or defender's access to information. For a realistic scenario, we can evaluate a completely transparent exchange in a few minutes on a laptop computer, and can plan near-optimal secret defenses in seconds. JOINT DEFENDER's mathematical foundation and its computational efficiency complement current missile-defense planning tools that use heuristics or supercomputing. The model can also provide unique insight into the value of secrecy and deception to either side. We demonstrate with two hypothetical North Korean scenarios.
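The two-sided structure can be shown with a tiny enumeration: the defender commits first, the attacker best-responds with full knowledge of the placement, and the defender minimizes that worst case. Target names and values below are hypothetical, and the real model is a large-scale optimization, not brute force:

```python
from itertools import combinations

def best_defense(targets, n_interceptors, n_missiles):
    """Toy transparent exchange: enumerate interceptor placements,
    let the attacker strike the most valuable undefended targets,
    and return the placement minimizing worst-case damage
    (min over defenses of max over attacks)."""
    best = None
    for defense in combinations(sorted(targets), n_interceptors):
        undefended = [v for t, v in targets.items() if t not in defense]
        damage = sum(sorted(undefended, reverse=True)[:n_missiles])
        if best is None or damage < best[1]:
            best = (defense, damage)
    return best

# Hypothetical target values; one interceptor versus one missile.
targets = {"city": 10, "base": 6, "port": 3}
defense, worst_case = best_defense(targets, 1, 1)
```

With full information the defender shields the most valuable target, conceding the second-best one; restricting what each side knows, as the model variants do, is what makes secrecy and deception valuable.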

01 Jan 2005
TL;DR: New bilevel programming models to help make the country's infrastructure more resilient to attacks by terrorists, help governments and businesses plan those improvements, and help influence related public policy on investment incentives, regulations, etc are described.
Abstract: We describe new bilevel programming models to (1) help make the country's critical infrastructure more resilient to attacks by terrorists, (2) help governments and businesses plan those improvements, and (3) help influence related public policy on investment incentives, regulations, etc. An intelligent attacker (terrorists) and defender (us) are key features of all these models, along with information transparency: These are Stackelberg games, as opposed to two-person, zero-sum games. We illustrate these models with applications to electric power grids, subways, airports, and other critical infrastructure. For instance, one model identifies locations for a given set of electronic sensors that minimize the worst-case time to detection of a chemical, biological, or radiological contaminant introduced into the Washington, D.C. subway system. The paper concludes by reporting insights we have gained through forming "red teams," each of which gathers open-source data on a real-world system, develops an appropriate attacker-defender or defender-attacker model, and solves the model to identify vulnerabilities in the system or to plan an optimal defense.

Proceedings ArticleDOI
04 Dec 2005
TL;DR: This tutorial focuses on experiments that can cut down the sampling requirements of some classic designs by orders of magnitude, yet make it possible and practical to develop a better understanding of a complex simulation model.
Abstract: We present the basic concepts of experimental design, the types of goals it can address, and why it is such an important and useful tool for simulation. A well-designed experiment allows the analyst to examine many more factors than would otherwise be possible, while providing insights that cannot be gleaned from trial-and-error approaches or by sampling factors one at a time. We focus on experiments that can cut down the sampling requirements of some classic designs by orders of magnitude, yet make it possible and practical to develop a better understanding of a complex simulation model. Designs we have found particularly useful for simulation experiments are illustrated using simple simulation models, and we provide links to other resources for those wishing to learn more. Ideally, this tutorial will leave you excited about experimental designs---and prepared to use them---in your upcoming simulation studies.

Proceedings ArticleDOI
24 Jul 2005
TL;DR: A water-resistant amphibious prototype, based on the biologically-inspired Whegs™ platform, has been completed, allowing the robot to navigate on rough terrain and underwater and to accomplish tasks with little or no low-level control.

Abstract: The capability of autonomous and semi-autonomous platforms to function in the shallow water surf zone is critical for a wide range of military and civilian operations. Of particular importance is the ability to transition between locomotion modes in aquatic and terrestrial settings. The study of animal locomotion mechanisms can provide specific inspiration to address these demands. In this work, we summarize on-going efforts to create an autonomous, highly mobile amphibious robot. A water-resistant amphibious prototype design, based on the biologically-inspired Whegs™ platform, has been completed. Through extensive field-testing, mechanisms have been isolated to improve the implementation of the Whegs™ concept and make it more suited for amphibious operation. Specific design improvements include wheel-leg propellers enabling swimming locomotion; an active, compliant, water-resistant, non-backdrivable body joint; and improved feet for advanced mobility. These design innovations allow Whegs™ to navigate on rough terrain and underwater, and accomplish tasks with little or no low-level control, thus greatly simplifying autonomous control system implementation. Complementary work is underway for autonomous control. We believe these results can lay the foundation for the development of a generation of amphibious robots with unprecedented versatility and mobility.

Journal ArticleDOI
TL;DR: This approach demonstrates the use of cognitive mapping to extract tacit knowledge from employees in knowledge-intensive organizations, the extensive array of performance-relevant variables that arises from such mapping, and the potential to use the resulting causal performance map as a comprehensive, articulated basis for developing a performance measurement system.

Journal ArticleDOI
TL;DR: In this paper, measurements of the seasonal variation in the export flux of particulate organic carbon (POC) are reported for the upper waters of the Chukchi Sea, where POC fluxes were quantified by determination of 234Th/238U disequilibrium and POC/234Th ratios in large (>53 μm) aggregates collected using in situ pumps.
Abstract: As part of the 2002 Shelf-Basin Interactions (SBI) process study, measurements of the seasonal variation in the export flux of particulate organic carbon (POC) are reported for the upper waters of the Chukchi Sea. POC fluxes were quantified by determination of 234Th/238U disequilibrium and POC/234Th ratios in large (>53 μm) aggregates collected using in situ pumps. Samples were collected at 35 stations on two cruises, one in predominantly ice-covered conditions during the spring (May 6–June 15) and the other in predominantly open water during the summer (July 17–August 26). Enhanced particle export was observed in the shelf and slope waters, particularly within Barrow Canyon, and there was a marked increase in particle export at all stations during the summer (July–August) relative to the spring (May–June). 234Th-derived POC fluxes exhibit significant seasonal and spatial variability, averaging 2.9 ± 5.3 mmol C m−2 d−1 (range = 0.031–22 mmol C m−2 d−1) in the spring and increasing ∼4-fold in the summer to an average value of 10.5 ± 9.3 mmol C m−2 d−1 (range = 0.79–39 mmol C m−2 d−1). The fraction of primary production exported from the upper waters increases from ∼15% in the spring to ∼32% in the summer. By comparison, DOC accumulation associated with net community production represented ∼6% of primary production (∼2 mmol C m−2 d−1). The majority of shelf and slope stations indicate a close agreement between POC export and benthic C respiration in the spring, whereas there is an imbalance between POC export and benthic respiration in the summer. The implication is that up to ∼20% of summer production (∼6 ± 7 mmol C m−2 d−1) may be seasonally exported off-shelf in this productive shelf/slope region of the Arctic Ocean.

Journal ArticleDOI
TL;DR: The recent decreases of enrollment in computer science programs signal a chasm between the historical emphasis on programming and the contemporary concerns of those choosing careers.
Abstract: The recent decreases of enrollment in computer science programs signal a chasm between our historical emphasis on programming and the contemporary concerns of those choosing careers.

Journal ArticleDOI
TL;DR: In this article, two mechanisms are shown to govern plastic deformation in AA5083 commercial aluminum materials, produced from five different alloy heats, under conditions of interest for superplastic and quick-plastic forming.
Abstract: The plastic deformation of seven AA5083 commercial aluminum materials, produced from five different alloy heats, is evaluated under conditions of interest for superplastic and quick-plastic forming. Two mechanisms are shown to govern plastic deformation in AA5083 over the strain rates, strains, and temperatures of interest for these forming technologies: grain-boundary-sliding (GBS) creep and solute-drag (SD) creep. Quantitative analysis of stress transients following rate changes clearly differentiates between GBS and SD creep and offers conclusive proof that SD creep dominates deformation at fast strain rates and low temperature. Furthermore, stress transients following strain-rate changes under SD creep are observed to decay exponentially with strain. A new graphical construction is proposed for the analysis and prediction of creep transients. This construction predicts the relative size of creep transients under SD creep from the relative size of changes in an applied strain rate or stress. This construction reveals the relative size of creep transients under SD creep to be independent of temperature; temperature dependence resides in the "steady-state" creep behavior to which transients are related.

Journal ArticleDOI
TL;DR: In this paper, the authors measured water column nitrate 15N/14N and 18O/16O as integrative tracers of microbial denitrification, together with pore water-derived benthic nitrate fluxes in the deep Bering Sea basin, in order to gain new constraints on the mechanism of fixed nitrogen loss in the BS.
Abstract: [1] On the basis of the normalization to phosphate, a significant amount of nitrate is missing from the deep Bering Sea (BS). Benthic denitrification has been suggested previously to be the dominant cause for the BS nitrate deficit. We measured water column nitrate 15N/14N and 18O/16O as integrative tracers of microbial denitrification, together with pore water-derived benthic nitrate fluxes in the deep BS basin, in order to gain new constraints on the mechanism of fixed nitrogen loss in the BS. The lack of any nitrate isotope enrichment into the deep part of the BS supports the benthic denitrification hypothesis. On the basis of the nitrate deficit in the water column with respect to the adjacent North Pacific and a radiocarbon-derived ventilation age of ∼50 years, we calculate an average deep BS (>2000 m water depth) sedimentary denitrification rate of ∼230 μmol N m−2 d−1 (or 1.27 Tg N yr−1), more than 3 times higher than high-end estimates of the average global sedimentary denitrification rate for the same depth interval. Pore water-derived estimates of benthic denitrification were variable, and uncertainties in estimates were large. A very high denitrification rate measured from the base of the steep northern slope of the basin suggests that the elevated average sedimentary denitrification rate of the deep Bering calculated from the nitrate deficit is driven by organic matter supply to the base of the continental slope, owing to a combination of high primary productivity in the surface waters along the shelf break and efficient down-slope sediment focusing along the steep continental slopes that characterize the BS.

Journal ArticleDOI
TL;DR: In this article, the authors developed the theory of radar imaging from data measured by a moving antenna emitting a single-frequency waveform and showed that the signal at a given Doppler shift is due to a superposition of returns from stationary scatterers on a cone whose axis is the flight velocity vector.
Abstract: We develop the theory of radar imaging from data measured by a moving antenna emitting a single-frequency waveform. We show that, under a linearized (Born) scattering model, the signal at a given Doppler shift is due to a superposition of returns from stationary scatterers on a cone whose axis is the flight velocity vector. This cone reduces to a hyperbola when the scatterers are known to lie on a planar surface. In this case, reconstruction of the scatterer locations can be accomplished by a tomographic inversion in which the scattering density function is reconstructed from its integrals over hyperbolas. We give an approximate reconstruction formula and analyse the resolution of the resulting image. We provide a numerical shortcut and show results of numerical tests in a simple case.
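The iso-Doppler cone is easy to verify numerically: for a single-frequency waveform under the linearized model, the monostatic shift f_d = 2·f₀·(v·x̂)/c depends only on the angle between the look direction x̂ and the platform velocity v, so any two scatterer directions on the same cone about the flight axis produce the same shift. The frequency, speed, and directions below are illustrative, not from the paper:

```python
import math

def doppler_shift(f0_hz, v_mps, direction):
    """Monostatic Doppler shift f_d = 2*f0*(v . x_hat)/c for a
    stationary scatterer in direction x_hat, seen from a platform
    moving with velocity v (linearized scattering model)."""
    c = 299_792_458.0
    n = math.sqrt(sum(d * d for d in direction))
    x_hat = tuple(d / n for d in direction)
    v_dot = sum(a * b for a, b in zip(v_mps, x_hat))
    return 2.0 * f0_hz * v_dot / c

v = (100.0, 0.0, 0.0)                       # platform flies along +x
d1 = (math.cos(0.5), math.sin(0.5), 0.0)    # 0.5 rad off the flight axis
d2 = (math.cos(0.5), 0.0, math.sin(0.5))    # same cone, different azimuth
f1 = doppler_shift(1e10, v, d1)
f2 = doppler_shift(1e10, v, d2)
```

Both directions make the same angle with v, so f1 equals f2: a single Doppler bin mixes returns from the whole cone, which is why a planar-surface assumption (reducing the cone to a hyperbola) is needed before tomographic inversion.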