Author

Rex Britter

Bio: Rex Britter is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics: Turbulence & Dispersion. The author has an h-index of 57 and has co-authored 232 publications receiving 10,526 citations. Previous affiliations of Rex Britter include North Carolina State University and the Singapore–MIT Alliance.


Papers
Book ChapterDOI
04 Nov 2011
TL;DR: This chapter addresses the nature of this supply chain; one overarching aspect is that all its elements are currently undergoing both great performance enhancement and drastic price reduction (Paulsen & Riegger, 2006).
Abstract: ‘In the next century, planet earth will don an electronic skin. It will use the Internet as a scaffold to support and transmit its sensations. This skin is already being stitched together. It consists of millions of embedded electronic measuring devices: thermostats, pressure gauges, pollution detectors, cameras, microphones, glucose sensors, EKGs, electroencephalographs. These will probe and monitor cities and endangered species, the atmosphere, our ships, highways and fleets of trucks, our conversations, our bodies – even our dreams.’ (Gross, 1999) Following this comprehensive vision by Neil Gross (1999), it can be assumed that sensor network deployments will increase dramatically within the coming years, as pervasive sensing has recently become feasible and affordable. This enriches knowledge about our environment with previously uncharted real-time information layers. However, leveraging sensor data in an ad hoc fashion is not trivial, as ubiquitous geo-sensor web applications comprise numerous technologies: sensors, communications, massive data manipulation and analysis, data fusion with mathematical modelling, the production of outputs on a variety of scales, and the provision of information as both hard data and user-sensitive visualisation, together with appropriate delivery structures. Beyond this, requirements for geo-sensor webs are highly heterogeneous, depending on the functional context. This chapter addresses the nature of this supply chain; one overarching aspect is that all its elements are currently undergoing both great performance enhancement and drastic price reduction (Paulsen & Riegger, 2006). This has led to the deployment of a number of geo-sensor networks. On the positive side, the growing establishment of such networks will further decrease prices and improve component performance.
This will particularly be so if the environmental regulatory structure moves from a mathematical modelling base to a more pervasive monitoring structure. Of specific interest in this chapter is our concern that most sensor networks are built as monolithic, application-centred measurement systems. In consequence, there is a clear gap between sensor network research and the mostly very heterogeneous end-user requirements. Sensor network research is often dedicated to a long-term vision that tells a compelling story about potential applications. In practice, however, the actual implementation is often little more than a very limited demonstration that takes no account of well-known issues such as interoperability, sustainable development, portability, or coupling with established data analysis systems.

10 citations

Journal ArticleDOI
TL;DR: In this article, a Lagrangian stochastic model was applied to describe the dispersion of contaminants during shoreline fumigation. Mixed-layer scaling was applied, and the model's dimensionless near-surface crosswind-integrated concentrations (Cy), together with the concentration contour plots, were calculated for slow and fast entrainment rates.
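The core of a Lagrangian stochastic dispersion model of this kind is a Langevin equation for the velocity of each marked fluid particle. The sketch below is a minimal, generic 1D illustration for homogeneous turbulence with reflective boundaries; all parameter values (sigma_w, T_L, mixing depth, release height) are illustrative assumptions and are not taken from the paper, which uses mixed-layer scaling and a more elaborate formulation.

```python
import numpy as np

# Minimal 1D Lagrangian stochastic (Langevin) dispersion sketch.
# All parameter values are hypothetical, for illustration only.
rng = np.random.default_rng(0)

sigma_w = 0.5     # std dev of vertical velocity fluctuations (m/s)
T_L = 100.0       # Lagrangian decorrelation time scale (s)
h = 1000.0        # mixed-layer depth (m)
dt = 1.0          # time step (s), small compared with T_L
n_steps = 600
n_particles = 5000

z = np.full(n_particles, 10.0)                 # near-surface release (m)
w = rng.normal(0.0, sigma_w, n_particles)      # initial velocities

for _ in range(n_steps):
    # Langevin equation for homogeneous turbulence:
    #   dw = -(w / T_L) dt + sqrt(2 sigma_w^2 / T_L) dW
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
    z += w * dt
    # Perfect reflection at the ground and at the mixed-layer top
    below = z < 0.0
    z[below] = -z[below]
    w[below] = -w[below]
    above = z > h
    z[above] = 2.0 * h - z[above]
    w[above] = -w[above]

# Dimensionless vertical profile of crosswind-integrated concentration:
# particle fraction per height bin, normalised by (bin width / h).
counts, edges = np.histogram(z, bins=20, range=(0.0, h))
c_y = counts / n_particles / (edges[1] - edges[0]) * h
```

Over long times the reflective boundaries drive the ensemble toward a well-mixed (uniform) profile, which is the standard consistency check for such models.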

9 citations


Cited by
Journal ArticleDOI
TL;DR: The authors observe that although the Navier-Stokes equations provide an excellent mathematical model of turbulence, and the results of well over a century of increasingly sophisticated experiments are at our disposal, turbulence remains one of the least understood subjects.
Abstract: It has often been remarked that turbulence is a subject of great scientific and technological importance, and yet one of the least understood (e.g. McComb 1990). To an outsider this may seem strange, since the basic physical laws of fluid mechanics are well established, an excellent mathematical model is available in the Navier-Stokes equations, and the results of well over a century of increasingly sophisticated experiments are at our disposal. One major difficulty, of course, is that the governing equations are nonlinear and little is known about their solutions at high Reynolds number, even in simple geometries. Even mathematical questions as basic as existence and uniqueness are unsettled in three spatial dimensions (cf Temam 1988). A second problem, more important from the physical viewpoint, is that experiments and the available mathematical evidence all indicate that turbulence involves the interaction of many degrees of freedom over broad ranges of spatial and temporal scales. One of the problems of turbulence is to derive this complex picture from the simple laws of mass and momentum balance enshrined in the Navier-Stokes equations. It was to this that Ruelle & Takens (1971) contributed with their suggestion that turbulence might be a manifestation in physical

3,721 citations

Journal ArticleDOI
TL;DR: This paper presents a simple classification of sedimentary density flows, based on physical flow properties and grain-support mechanisms, and briefly discusses the likely characteristics of the deposited sediments.
Abstract: The complexity of flow and wide variety of depositional processes operating in subaqueous density flows, combined with post-depositional consolidation and soft-sediment deformation, often make it difficult to interpret the characteristics of the original flow from the sedimentary record. This has led to considerable confusion of nomenclature in the literature. This paper attempts to clarify this situation by presenting a simple classification of sedimentary density flows, based on physical flow properties and grain-support mechanisms, and briefly discusses the likely characteristics of the deposited sediments. Cohesive flows are commonly referred to as debris flows and mud flows and defined on the basis of sediment characteristics. The boundary between cohesive and non-cohesive density flows (frictional flows) is poorly constrained, but dimensionless numbers may be of use to define flow thresholds. Frictional flows include a continuous series from sediment slides to turbidity currents. Subdivision of these flows is made on the basis of the dominant particle-support mechanisms, which include matrix strength (in cohesive flows), buoyancy, pore pressure, grain-to-grain interaction (causing dispersive pressure), Reynolds stresses (turbulence) and bed support (particles moved on the stationary bed). The dominant particle-support mechanism depends upon flow conditions, particle concentration, grain-size distribution and particle type. In hyperconcentrated density flows, very high sediment concentrations (>25 volume%) make particle interactions of major importance. The difference between hyperconcentrated density flows and cohesive flows is that the former are friction dominated. With decreasing sediment concentration, vertical particle sorting can result from differential settling, and flows in which this can occur are termed concentrated density flows. 
The boundary between hyperconcentrated and concentrated density flows is defined by a change in particle behaviour, such that denser or larger grains are no longer fully supported by grain interaction, thus allowing coarse-grain tail (or dense-grain tail) normal grading. The concentration at which this change occurs depends on particle size, sorting, composition and relative density, so that a single threshold concentration cannot be defined. Concentrated density flows may be highly erosive and subsequently deposit complete or incomplete Lowe and Bouma sequences. Conversely, hydroplaning at the base of debris flows, and possibly also in some hyperconcentrated flows, may reduce the fluid drag, thus allowing high flow velocities while preventing large-scale erosion. Flows with concentrations <9% by volume are true turbidity flows (sensu Bagnold, 1962), in which fluid turbulence is the main particle-support mechanism. Turbidity flows and concentrated density flows can be subdivided on the basis of flow duration into instantaneous surges, longer duration surge-like flows and quasi-steady currents. Flow duration is shown to control the nature of the resulting deposits. Surge-like turbidity currents tend to produce classical Bouma sequences, whose nature at any one site depends on factors such as flow size, sediment type and proximity to source. In contrast, quasi-steady turbidity currents, generated by hyperpycnal river effluent, can deposit coarsening-up units capped by fining-up units (because of waxing and waning conditions respectively) and may also include thick units of uniform character (resulting from prolonged periods of near-steady conditions). Any flow type may progressively change character along the transport path, with transformation primarily resulting from reductions in sediment concentration through progressive entrainment of surrounding fluid and/or sediment deposition.
The rate of fluid entrainment, and consequently flow transformation, is dependent on factors including slope gradient, lateral confinement, bed roughness, flow thickness and water depth. Flows with high and low sediment concentrations may co-exist in one transport event because of downflow transformations, flow stratification or shear layer development of the mixing interface with the overlying water (mixing cloud formation). Deposits of an individual flow event at one site may therefore form from a succession of different flow types, and this introduces considerable complexity into classifying the flow event or component flow types from the deposits.
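The concentration thresholds quoted in this abstract (>25 vol% for hyperconcentrated density flows, <9 vol% for true turbidity flows) can be collected into a toy decision rule. This is only an illustrative sketch, and the function name and the handling of the intermediate band are my assumptions: as the abstract stresses, real thresholds depend on particle size, sorting, composition and relative density, so no single cut-off applies to all sediments.

```python
def classify_density_flow(cohesive: bool, conc_vol_pct: float) -> str:
    """Illustrative classification of a subaqueous sediment flow.

    Uses the indicative volume-concentration thresholds quoted in the
    abstract; in reality a single threshold concentration cannot be
    defined for all sediment types.
    """
    if cohesive:
        # Matrix strength dominates: debris flows and mud flows.
        return "cohesive flow"
    if conc_vol_pct > 25.0:
        # Grain-to-grain interaction is of major importance.
        return "hyperconcentrated density flow"
    if conc_vol_pct >= 9.0:
        # Differential settling permits vertical particle sorting.
        return "concentrated density flow"
    # Fluid turbulence is the main particle-support mechanism.
    return "turbidity flow"
```

For example, a non-cohesive flow at 30 vol% would fall in the hyperconcentrated class, while one at 5 vol% would be a true turbidity flow.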

1,454 citations

Journal ArticleDOI
01 Jan 1957-Nature
TL;DR: This is a review of The Structure of Turbulent Shear Flow by Dr. A. A. Townsend, a well-known and widely used work in fluid dynamics.
Abstract: The Structure of Turbulent Shear Flow By Dr. A. A. Townsend. Pp. xii + 315. 8¾ in. × 5½ in. (Cambridge: At the University Press.) 40s.

1,050 citations

Journal ArticleDOI
TL;DR: A review is given of a set of model evaluation methodologies, including the BOOT and the ASTM evaluation software, Taylor’s nomogram, the figure of merit in space, and the CDF approach.
Abstract: This paper reviews methods to evaluate the performance of air quality models, which are tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic, public health, and environmental impacts often associated with the use of air quality model results, it is important that these models be properly evaluated. A comprehensive model evaluation methodology makes use of scientific assessments of the model technical algorithms, statistical evaluations using field or laboratory data, and operational assessments by users in real-world applications. The focus of the current paper is on the statistical evaluation component. It is important that a statistical model evaluation exercise should start with clear definitions of the evaluation objectives and specification of hypotheses to be tested. A review is given of a set of model evaluation methodologies, including the BOOT and the ASTM evaluation software, Taylor’s nomogram, the figure of merit in space, and the CDF approach. Because there is not a single best performance measure or best evaluation methodology, it is recommended that a suite of different performance measures be applied. Suggestions are given concerning the magnitudes of the performance measures expected of “good” models. For example, a good model should have a relative mean bias less than about 30% and a relative scatter less than about a factor of two. In order to demonstrate some of the air quality model evaluation methodologies, two simple baseline urban dispersion models are evaluated using the Salt Lake City Urban 2000 field data. The importance of assumptions concerning details such as minimum concentration and pairing of data are shown. Typical plots and tables are presented, including determinations of whether the difference in the relative mean bias between the two models is statistically significant at the 95% confidence level.
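Performance measures of the kind discussed here are straightforward to compute. The sketch below implements three standard air-quality evaluation statistics: the fractional bias (|FB| of about 0.3 corresponds to the "relative mean bias less than about 30%" quoted above), the normalised mean square error as a scatter measure, and the fraction of predictions within a factor of two of observations (FAC2). The function names are mine; packages such as the BOOT software compute these and several other measures.

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2 * (mean(obs) - mean(pred)) / (mean(obs) + mean(pred))."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(2.0 * (obs.mean() - pred.mean())
                 / (obs.mean() + pred.mean()))

def nmse(obs, pred):
    """Normalised mean square error, a measure of relative scatter."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean()))

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))
```

As the abstract notes, no single measure suffices: a model can have near-zero FB (over- and under-predictions cancelling in the mean) while still scoring poorly on NMSE and FAC2, so a suite of measures should be reported together.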

942 citations