Author

David H. Jones

Bio: David H. Jones is an academic researcher from the British Antarctic Survey. The author has contributed to research in topics including cellular automata and model checking. The author has an h-index of 7 and has co-authored 19 publications receiving 199 citations. Previous affiliations of David H. Jones include Imperial College London & the Natural Environment Research Council.

Papers
Proceedings ArticleDOI
31 Aug 2010
TL;DR: It is shown that the GPU is more productive than the FPGA architecture for most of the benchmarks, and it is concluded that FPGA-based HPCS is being marginalised by GPUs.
Abstract: Heterogeneous or co-processor architectures are becoming an important component of high productivity computing systems (HPCS). In this work the performance of a GPU-based HPCS is compared with the performance of a commercially available FPGA-based HPCS. Contrary to previous approaches that focussed on specific examples, a broader analysis is performed by considering processes at an architectural level. A set of benchmarks is employed that use different process architectures in order to exploit the benefits of each technology. These include the asynchronous pipelines common to "map" tasks, a partially synchronous tree common to "reduce" tasks and a fully synchronous, fully connected mesh. We show that the GPU is more productive than the FPGA architecture for most of the benchmarks and conclude that FPGA-based HPCS is being marginalised by GPUs.
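As an illustration of two of these process architectures, the sketch below (my own, not code from the paper; all function names are hypothetical) mimics an asynchronous "map" pipeline and a partially synchronous "reduce" tree in plain Python:

```python
# Minimal sketch (not from the paper) of two of the benchmark process
# architectures described above: an asynchronous "map" pipeline and a
# partially synchronous "reduce" tree.
from concurrent.futures import ThreadPoolExecutor

def map_pipeline(stage_fns, items, workers=4):
    """Asynchronous pipeline: each item flows through the stages
    independently, so no synchronisation is needed between items."""
    def run_stages(x):
        for f in stage_fns:
            x = f(x)
        return x
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_stages, items))

def tree_reduce(op, values):
    """Partially synchronous tree: values are combined pairwise, level
    by level, synchronising once per level (about log2(n) levels)."""
    while len(values) > 1:
        combined = [op(a, b) for a, b in zip(values[::2], values[1::2])]
        if len(values) % 2:          # carry the odd element up a level
            combined.append(values[-1])
        values = combined
    return values[0]

if __name__ == "__main__":
    data = list(range(16))
    print(map_pipeline([lambda x: x + 1, lambda x: x * 2], data))
    print(tree_reduce(lambda a, b: a + b, data))  # 120
```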

61 citations

Journal ArticleDOI
TL;DR: Two consecutive cruises in the Weddell Sea, Antarctica, in winter 2013 provided the first direct observations of sea salt aerosol (SSA) production from blowing snow above sea ice, thereby validating a model hypothesis to account for wintertime SSA maxima in the Antarctic.
Abstract: Two consecutive cruises in the Weddell Sea, Antarctica, in winter 2013 provided the first direct observations of sea salt aerosol (SSA) production from blowing snow above sea ice, thereby validating a model hypothesis to account for wintertime SSA maxima in the Antarctic. Blowing or drifting snow often leads to increases in SSA during and after storms. For the first time it is shown that snow on sea ice is depleted in sulfate relative to sodium with respect to seawater. Similar depletion in bulk aerosol sized ∼0.3–6 µm above sea ice provided the evidence that most sea salt originated from snow on sea ice and not the open ocean or leads, e.g. >90 % during the 8 June to 12 August 2013 period. A temporally very close association of snow and aerosol particle dynamics, together with the long distance to the nearest open ocean, further supports SSA originating from a local source. A mass budget estimate shows that snow on sea ice contains, even at low salinity ( psu), more than enough sea salt to account for observed increases in atmospheric SSA during storms if released by sublimation. Furthermore, snow on sea ice and blowing snow showed no or small depletion of bromide relative to sodium with respect to seawater, whereas aerosol was enriched at 2 m and depleted at 29 m, suggesting that significant bromine loss takes place in the aerosol phase further aloft and that SSA from blowing snow is a source of atmospheric reactive bromine, an important ozone sink, even during winter darkness. The relative increase in aerosol concentrations with wind speed was much larger above sea ice than above the open ocean, highlighting the importance of a sea ice source in winter and early spring for the aerosol burden above sea ice. Comparison of absolute increases in aerosol concentrations during storms suggests that, to a first order, corresponding aerosol fluxes above sea ice can rival those above the open ocean, depending on particle size. Evaluation of the current model for SSA production from blowing snow showed that the parameterizations used can generally be applied to snow on sea ice. Snow salinity, a sensitive model parameter, depends to a first order on snowpack depth and therefore was higher above first-year sea ice (FYI) than above multi-year sea ice (MYI). Shifts in the ratio of FYI and MYI over time are therefore expected to change the seasonal SSA source flux and contribute to the variability of SSA in ice cores, which represents both an opportunity and a challenge for the quantitative interpretation of sea salt in ice cores as a proxy for sea ice.
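The sulfate depletion described above is conventionally quantified as a depletion factor against the composition of standard seawater. The sketch below is my illustration, not code or values from the paper: it assumes the common definition DF = 1 − (SO4/Na)_sample / (SO4/Na)_seawater and an approximate standard-seawater sulfate-to-sodium mass ratio of ~0.25.

```python
# Minimal sketch of a sea-salt depletion-factor calculation.
# Assumptions (not from the paper): depletion factor defined as
#   DF = 1 - (X/Na)_sample / (X/Na)_seawater,
# with a standard-seawater sulfate-to-sodium mass ratio of ~0.25.
SEAWATER_SO4_NA_MASS_RATIO = 0.25  # approximate reference value

def sulfate_depletion_factor(so4_sample, na_sample,
                             seawater_ratio=SEAWATER_SO4_NA_MASS_RATIO):
    """DF > 0 means the sample is depleted in sulfate relative to
    seawater; DF = 0 means unfractionated sea salt."""
    sample_ratio = so4_sample / na_sample
    return 1.0 - sample_ratio / seawater_ratio

# Example: a hypothetical snow sample with half the seawater ratio
print(sulfate_depletion_factor(so4_sample=0.125, na_sample=1.0))  # 0.5
```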

53 citations

Journal ArticleDOI
01 Feb 2001
TL;DR: Various optimizations for improving the time and space efficiency of symbolic model checking for systems specified as statecharts are presented and used in analyses of the models of a collision avoidance system and a fault-tolerant electrical power distribution system.
Abstract: Symbolic model checking based on binary decision diagrams is a powerful formal verification technique for reactive systems. In this paper, we present various optimizations for improving the time and space efficiency of symbolic model checking for systems specified as statecharts. We used these techniques in our analyses of the models of a collision avoidance system and a fault-tolerant electrical power distribution (EPD) system, both used on commercial aircraft. The techniques together reduce the time and space requirements by orders of magnitude, making feasible some analysis that was previously intractable. We also elaborate on the results of verifying the EPD model. The analysis disclosed subtle modeling and logical flaws not found by simulation.
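To make the technique concrete, here is a minimal sketch of the fixed-point reachability computation at the core of symbolic model checking. It is my illustration, not the paper's implementation: real BDD-based checkers represent the sets below symbolically, while plain Python sets are used here to keep the example self-contained.

```python
# Minimal sketch of the fixed-point reachability loop at the heart of
# symbolic model checking. Real tools represent the state sets and the
# transition relation as BDDs; plain Python sets are used here only to
# keep the example self-contained. The tiny model is hypothetical.
def reachable_states(initial, successors):
    """Least fixed point: the smallest set containing `initial` and
    closed under the transition relation `successors`."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        new = {t for s in frontier for t in successors(s)} - reached
        reached |= new
        frontier = new
    return reached

def check_invariant(initial, successors, bad_states):
    """An invariant holds iff no bad state is reachable."""
    return reachable_states(initial, successors).isdisjoint(bad_states)

# Toy model: states 0..7, transition s -> (s + 2) mod 8
succ = lambda s: {(s + 2) % 8}
print(check_invariant({0}, succ, bad_states={5}))  # True: odd states unreachable
```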

35 citations

Journal ArticleDOI
TL;DR: Six methods, three each for uncertainty and interconnectedness, are reviewed, and their implementation is illustrated through case studies in order to show essential approaches to enhancing resilience.
Abstract: Uncertainty and interconnectedness in complex engineering and engineered systems such as power grids and telecommunication networks are sources of vulnerability that compromise the resilience of these systems. Conditions of uncertainty and interconnectedness change over time and depend on emerging socio-technical contexts, so conventional methods for the normative, descriptive and prescriptive assessment of the resilience of complex engineering and engineered systems are limited. This paper brings together contributions of experts in complex engineering and engineered systems who have identified six methods, three each for uncertainty and interconnectedness, which form the foundational methods for understanding the resilience of complex engineering and engineered systems. The paper reviews how these methods contribute to overcoming uncertainty or interconnectedness and how they are implemented, using case studies to illustrate essential approaches to enhancing resilience. It is hoped that this approach will allow the subject to be quantified and best practice standards to develop.

18 citations

Journal ArticleDOI
TL;DR: The British Antarctic Survey's Halley Research Station is located on the Brunt Ice Shelf, Antarctica, where it is potentially vulnerable to calving events; three different possible future scenarios for a large-scale calving event on the Brunt Ice Shelf are described.
Abstract: . The British Antarctic Survey's Halley Research Station is located on the Brunt Ice Shelf, Antarctica, where it is potentially vulnerable to calving events. Existing historical records show that the Brunt Ice Shelf is currently extended further into the Weddell Sea than it was before its last large calving event, so a new calving event may be overdue. We describe three different possible future scenarios for a large-scale calving event on Brunt Ice Shelf. We conclude that currently the most threatening scenario for the Halley Research Station is a calving event on the neighbouring Stancomb-Wills Glacier Tongue, with subsequent detrimental consequences for the stability of the Brunt Ice Shelf. Based on available data, we suggest an increasing likelihood of this scenario occurring after 2020. We furthermore describe ongoing monitoring efforts aimed at giving advanced warning of an imminent calving event.

17 citations


Cited by
25 Nov 2002
TL;DR: A formal semantics for UML activity diagrams is defined that is suitable for workflow modelling and allows verification of functional requirements using model checking; the feasibility of the approach is demonstrated by using the tool to verify some real-life workflow models.
Abstract: This thesis defines a formal semantics for UML activity diagrams that is suitable for workflow modelling. The semantics allows verification of functional requirements using model checking. Since a workflow specification prescribes how a workflow system behaves, the semantics is defined and motivated in terms of workflow systems. As workflow systems are reactive and coordinate activities, the defined semantics reflects these aspects. In fact, two formal semantics are defined, which are completely different. Both semantics are defined directly in terms of activity diagrams and not by a mapping of activity diagrams to some existing formal notation. The requirements-level semantics, based on the Statemate semantics of statecharts, assumes that workflow systems are infinitely fast w.r.t. their environment and react immediately to input events (this assumption is called the perfect synchrony hypothesis). The implementation-level semantics, based on the UML semantics of statecharts, does not make this assumption. Due to the perfect synchrony hypothesis, the requirements-level semantics is unrealistic, but easy to use for verification. On the other hand, the implementation-level semantics is realistic, but difficult to use for verification. A class of activity diagrams and a class of functional requirements is identified for which the outcome of the verification does not depend upon the particular semantics being used, i.e., both semantics give the same result. For such activity diagrams and such functional requirements, the requirements-level semantics is as realistic as the implementation-level semantics, even though the requirements-level semantics makes the perfect synchrony hypothesis. The requirements-level semantics has been implemented in a verification tool. The tool interfaces with a model checker by translating an activity diagram into an input for a model checker according to the requirements-level semantics. The model checker checks the desired functional requirement against the input model. If the model checker returns a counterexample, the tool translates this counterexample back into the activity diagram by highlighting a path corresponding to the counterexample. The tool supports verification of workflow models that have event-driven behaviour, data, real time, and loops. Only model checkers supporting strong fairness model checking turn out to be useful. The feasibility of the approach is demonstrated by using the tool to verify some real-life workflow models.
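The perfect synchrony hypothesis mentioned above can be made concrete with a small sketch. The following is my illustration, not the thesis's formal semantics; `react` and the event names are hypothetical.

```python
# Minimal sketch (my illustration, not the thesis's semantics) of the
# perfect synchrony hypothesis: under the requirements-level view the
# system finishes reacting to each input event, including all cascaded
# internal events, before the next input event arrives.
from collections import deque

def run_to_completion(state, event, react):
    """Process `event` and every internal event it triggers before
    returning, so the environment never observes an intermediate state.
    `react(state, event)` returns (new_state, [internal_events])."""
    queue = deque([event])
    while queue:   # the whole loop is one "superstep" in Statemate terms
        state, internal = react(state, queue.popleft())
        queue.extend(internal)
    return state

# Hypothetical workflow fragment: an order event cascades into two steps
def react(state, ev):
    if ev == "order":
        return state | {"received"}, ["check_stock"]
    if ev == "check_stock":
        return state | {"checked"}, ["ship"]
    if ev == "ship":
        return state | {"shipped"}, []
    return state, []

print(run_to_completion(frozenset(), "order", react))
```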

247 citations

Proceedings ArticleDOI
01 Sep 2003
TL;DR: Bogor, as presented in this paper, is a model checking framework with an extensible input language for defining domain-specific constructs and a modular interface design that eases the optimization of domain-specific state-space encodings, reductions and search algorithms.
Abstract: Model checking is emerging as a popular technology for reasoning about behavioral properties of a wide variety of software artifacts including: requirements models, architectural descriptions, designs, implementations, and process models. The complexity of model checking is well-known, yet cost-effective analyses have been achieved by exploiting, for example, naturally occurring abstractions and semantic properties of a target software artifact. Adapting a model checking tool to exploit this kind of domain knowledge often requires in-depth knowledge of the tool's implementation. We believe that with appropriate tool support, domain experts will be able to develop efficient model checking-based analyses for a variety of software-related models. To explore this hypothesis, we have developed Bogor, a model checking framework with an extensible input language for defining domain-specific constructs and a modular interface design to ease the optimization of domain-specific state-space encodings, reductions and search algorithms. We present the pattern-oriented design of Bogor and discuss our experiences adapting it to efficiently model check Java programs and event-driven component-based designs.

217 citations

Journal ArticleDOI
TL;DR: A tool is described that supports verification of workflow models specified in UML activity diagrams by translating an activity diagram into an input format for a model checker according to a mathematical semantics.
Abstract: We describe a tool that supports verification of workflow models specified in UML activity diagrams. The tool translates an activity diagram into an input format for a model checker according to a mathematical semantics. With the model checker, arbitrary propositional requirements can be checked against the input model. If a requirement fails to hold, an error trace is returned by the model checker, which our tool presents by highlighting a corresponding path in the activity diagram. We summarize our formal semantics, discuss the techniques used to reduce an infinite state space to a finite one, and motivate the need for strong fairness constraints to obtain realistic results. We define requirement-preserving rules for state space reduction. Finally, we illustrate the whole approach with a few example verifications.

146 citations

Journal Article
TL;DR: It is shown that for every erroneous finite computation there is an RCTL formula that detects it and can be verified on-the-fly; this has moved model checking in IBM into a different class of designs inaccessible by prior techniques.
Abstract: The specification language RCTL, an extension of CTL, is defined by adding the power of regular expressions to CTL. In addition to being a more expressive and natural hardware specification language than CTL, a large family of RCTL formulas can be verified on-the-fly (during symbolic reachability analysis). On-the-fly model checking, as a powerful verification paradigm, is especially efficient when the specification is false and extremely efficient when the computation needed to get to a failing state is short. It is suitable for the inherently gradual design process since it detects a multitude of bugs at the early verification stages, and paves the way towards finding the more complex errors as the design matures. It is shown that for every erroneous finite computation, there is an RCTL formula that detects it and can be verified on-the-fly. On-the-fly verification of RCTL formulas has moved model checking in IBM into a different class of designs inaccessible by prior techniques.
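The on-the-fly idea can be illustrated with a small sketch. What follows is my construction, unrelated to IBM's implementation: a regular pattern over state labels is matched in lockstep with state exploration, so a violation is reported as soon as the erroneous finite computation is produced, without building the full state space first.

```python
# Minimal sketch (my construction, not IBM's tool) of on-the-fly
# checking: a regular "bad" pattern over state labels is matched while
# states are explored, so the search stops at the first counterexample.
import re
from collections import deque

def explore_on_the_fly(initial, successors, label, bad_pattern):
    """BFS over (state, trace) pairs; stop at the first trace whose
    labels match `bad_pattern` (a regular expression over labels)."""
    bad = re.compile(bad_pattern)
    queue = deque([(initial, label(initial))])
    seen = {initial}
    while queue:
        state, trace = queue.popleft()
        if bad.fullmatch(trace):
            return trace              # counterexample found early
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + label(nxt)))
    return None

# Toy model: 'r' = request, 'g' = grant; flag 3+ requests with no grant
succ = {0: [1], 1: [2], 2: [2]}
labels = {0: "r", 1: "r", 2: "g"}
print(explore_on_the_fly(0, lambda s: succ[s], lambda s: labels[s],
                         bad_pattern="r{3,}"))  # None: every run is granted
```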

91 citations

Proceedings ArticleDOI
01 Dec 2010
TL;DR: Evaluating the High-Productivity Reconfigurable Computer (HPRC) approach to FPGA programming, where a commodity CPU instruction set architecture is augmented with instructions which execute on a specialised FPGA co-processor, shows that high-productivity reconfigurable computing systems outperform GPUs in applications with poor locality characteristics and low memory bandwidth requirements.
Abstract: This paper provides the first comparison of performance and energy efficiency of high productivity computing systems based on FPGA (Field-Programmable Gate Array) and GPU (Graphics Processing Unit) technologies. The search for higher performance compute solutions has recently led to great interest in heterogeneous systems containing FPGA and GPU accelerators. While these accelerators can provide significant performance improvements, they can also require much more design effort than a pure software solution, reducing programmer productivity. The CUDA system has provided a high productivity approach for programming GPUs. This paper evaluates the High-Productivity Reconfigurable Computer (HPRC) approach to FPGA programming, where a commodity CPU instruction set architecture is augmented with instructions which execute on a specialised FPGA co-processor, allowing the CPU and FPGA to co-operate closely while providing a programming model similar to that of traditional software. To compare the GPU and FPGA approaches, we select a set of established benchmarks with different memory access characteristics, and compare their performance and energy efficiency on an FPGA-based Hybrid-Core system with a GPU-based system. Our results show that while GPUs excel at streaming applications, high-productivity reconfigurable computing systems outperform GPUs in applications with poor locality characteristics and low memory bandwidth requirements.
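The memory-access distinction the comparison turns on can be demonstrated on any machine. The sketch below is my illustration, not the paper's benchmark suite: it times a streaming (sequential, bandwidth-bound) kernel against a poor-locality (random gather, latency-bound) kernel over the same array.

```python
# Minimal sketch (not the paper's benchmarks) contrasting a streaming
# access pattern with a poor-locality random-gather pattern.
import time
import numpy as np

def streaming_kernel(a):
    """Sequential pass: prefetch-friendly, bandwidth-bound."""
    return a.sum()

def gather_kernel(a, idx):
    """Random gather: cache-hostile, latency-bound."""
    return a[idx].sum()

n = 1 << 24
a = np.random.rand(n)
idx = np.random.randint(0, n, size=n)

for name, fn in [("streaming", lambda: streaming_kernel(a)),
                 ("gather", lambda: gather_kernel(a, idx))]:
    t0 = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - t0:.3f} s")
```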

86 citations