Author

Bruce Christianson

Bio: Bruce Christianson is an academic researcher from the University of Hertfordshire. The author has contributed to research in topics: Authentication & Automatic differentiation. The author has an h-index of 24 and has co-authored 177 publications receiving 2,371 citations. Previous affiliations of Bruce Christianson include University UCINF & The Hertz Corporation.


Papers
Journal ArticleDOI
TL;DR: In this paper, the basic notions of automatic differentiation and their extensions are introduced and described in the context of nonlinear optimization, and some illustrative examples are given.
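The idea is easiest to see in forward mode, where each value carries its derivative along with it. Below is a minimal sketch of forward-mode automatic differentiation using dual numbers; the Dual class and the example function are illustrative assumptions, not code from the paper.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and the derivative of that value with
# respect to the chosen input; arithmetic propagates both together.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: d(uv) = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def f(x):
    return x * x * x + 2 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2


y = f(Dual(3.0, 1.0))          # seed dx/dx = 1 at x = 3
print(y.val, y.dot)            # 33.0 29.0
```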

234 citations

Book ChapterDOI
10 Apr 1996
TL;DR: The notion of trust is distinguished from a number of other (transitive) notions with which it is frequently confused, and it is argued that “proofs” of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.
Abstract: One of the great strengths of public-key cryptography is its potential to allow the localization of trust. This potential is greatest when cryptography is present to guarantee data integrity rather than secrecy, and where there is no natural hierarchy of trust. Both these conditions are typically fulfilled in the commercial world, where CSCW requires sharing of data and resources across organizational boundaries. One property which trust is frequently assumed or “proved” to have is transitivity (if A trusts B and B trusts C then A trusts C) or some generalization of transitivity such as *-closure. We use the loose term unintentional transitivity of trust to refer to a situation where B can effectively put things into A's set of trust assumptions without A's explicit consent (or sometimes even awareness). Any account of trust which allows such situations to arise clearly poses major obstacles to the effective confinement (localization) of trust. In this position paper, we argue against the need to accept unintentional transitivity of trust. We distinguish the notion of trust from a number of other (transitive) notions with which it is frequently confused, and argue that “proofs” of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.
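To see why *-closure conflicts with localization, consider a toy sketch (not from the paper) in which trust is a binary relation over agents: once the relation is closed transitively, B can enlarge A's trust set merely by adding an edge of its own.

```python
# Transitive (*-)closure of a trust relation, given as a set of pairs.
# Illustrates how closure lets B inject trust assumptions into A's set.

def star_closure(trusts):
    closed = set(trusts)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

relation = {("A", "B")}          # A's only explicit trust assumption
print(star_closure(relation))    # {('A', 'B')}

relation.add(("B", "C"))         # B unilaterally chooses to trust C ...
print(star_closure(relation))    # ... and ('A', 'C') appears without A's consent
```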

199 citations

Proceedings ArticleDOI
David Gray, David Bowes, Neil Davey, Yi Sun, Bruce Christianson
11 Apr 2011
TL;DR: A meticulously documented data cleansing process involving all 13 of the original NASA data sets found that each data set lost between 6 and 90 percent of its originally recorded values.
Abstract: Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: After our novel data cleansing process, each of the data sets had between 6 and 90 percent fewer recorded values than originally. Conclusions: One: researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: defect prediction data sets could benefit from lower-level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: the bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings, mainly because repeated data points potentially cause substantial amounts of training and testing data to be identical.
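As a rough illustration of the repeated-data-point problem (a sketch only; the file name and label column below are hypothetical, and the paper's cleansing process has many more stages), de-duplicating on the feature columns is the step that prevents identical modules from landing in both training and testing data:

```python
# Remove repeated data points from a defect prediction data set so that
# identical feature vectors cannot appear in both training and test splits.

import pandas as pd

df = pd.read_csv("nasa_mdp_cm1.csv")      # hypothetical path to one MDP data set

features = df.columns.drop("defective")   # assume a 'defective' label column
n_before = len(df)
df = df.drop_duplicates(subset=features)

print(f"removed {n_before - len(df)} of {n_before} rows "
      f"({100 * (n_before - len(df)) / n_before:.1f}% repeated data points)")
```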

153 citations

Journal ArticleDOI
TL;DR: It is shown how to re-use the computational graph for the fixed point constructor Φ so as to set explicit stopping criteria for the iterations, based on the gradient accuracy required, which allows the gradient vector to be obtained to the same order of accuracy as the objective function values.
Abstract: We apply reverse accumulation to obtain automatic gradients and error estimates of functions which include in their computation a convergent iteration of the form y = Φ(y, u), where y and u are vectors. We suggest an implementation approach which allows this to be done by a fairly routine extension of existing reverse accumulation code. We show how to re-use the computational graph for the fixed point constructor Φ so as to set explicit stopping criteria for the iterations, based on the gradient accuracy required. Our construction allows the gradient vector to be obtained to the same order of accuracy as the objective function values (which is in general the best we can hope to achieve), and at the same order of computational cost (which does not explicitly depend upon the number of independent variables). The technique can be applied to functions which contain several iterative constructions, either serially or nested.
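Numerically, the construction amounts to pairing the forward fixed-point iteration with an adjoint fixed-point iteration of the same shape. The sketch below (NumPy, with a toy linear Φ chosen so the answer can be checked in closed form; all names are illustrative, not the paper's code) shows the pattern: iterate w ← w·∂Φ/∂y + ∂f/∂y until it settles, then the gradient of f(y(u)) is w·∂Φ/∂u.

```python
import numpy as np

A = np.array([[0.5, 0.2], [0.1, 0.4]])   # contraction: spectral radius 0.6 < 1
u = np.array([1.0, 2.0])

def phi(y, u):
    return A @ y + u                     # toy constructor: dPhi/dy = A, dPhi/du = I

# Forward fixed-point iteration for y = Phi(y, u).
y = np.zeros(2)
for _ in range(200):
    y = phi(y, u)

# Objective f(y) = 0.5 * ||y||^2, so df/dy = y.
df_dy = y

# Adjoint fixed-point iteration, re-using the structure of Phi's graph;
# the paper derives an explicit stopping criterion from the gradient
# accuracy required, whereas this sketch uses a fixed iteration count.
w = np.zeros(2)
for _ in range(200):
    w = w @ A + df_dy

grad = w                                 # times dPhi/du, which is I here

# Check against the closed form: y = (I - A)^{-1} u, grad = y^T (I - A)^{-1}.
y_exact = np.linalg.solve(np.eye(2) - A, u)
grad_exact = y_exact @ np.linalg.inv(np.eye(2) - A)
print(np.allclose(y, y_exact), np.allclose(grad, grad_exact))   # True True
```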

137 citations

Journal ArticleDOI
Abstract: [No abstract was captured for this record; only the publisher's rights notice: “This is a pre-copy-editing, author-produced PDF of an article accepted for publication in IMA Journal of Numerical Analysis following peer review. The definitive publisher-authenticated version (Vol. 12, No. 2, pp. 135–150) is available online at: http://imajna.oxfordjournals.org/. Copyright Institute of Mathematics and its Applications.”]

104 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Book
01 Jan 2005
TL;DR: "Parameter Estimation and Inverse Problems, 2/e" introduces readers to both Classical and Bayesian approaches to linear and nonlinear problems with particular attention paid to computational, mathematical, and statistical issues related to their application to geophysical problems.
Abstract: "Parameter Estimation and Inverse Problems, 2/e" provides geoscience students and professionals with answers to common questions like how one can derive a physical model from a finite set of observations containing errors, and how one may determine the quality of such a model. This book takes on these fundamental and challenging problems, introducing students and professionals to the broad range of approaches that lie in the realm of inverse theory. The authors present both the underlying theory and practical algorithms for solving inverse problems. The authors' treatment is appropriate for geoscience graduate students and advanced undergraduates with a basic working knowledge of calculus, linear algebra, and statistics. "Parameter Estimation and Inverse Problems, 2/e" introduces readers to both Classical and Bayesian approaches to linear and nonlinear problems with particular attention paid to computational, mathematical, and statistical issues related to their application to geophysical problems. The textbook includes Appendices covering essential linear algebra, statistics, and notation in the context of the subject. This book includes a companion website that features computational examples (including all examples contained in the textbook) and useful subroutines using MATLAB. It: includes appendices for review of needed concepts in linear, statistics, and vector calculus; features a companion website that contains comprehensive MATLAB code for all examples, which readers can reproduce, experiment with, and modify; offers an online instructor's guide that helps professors teach, customize exercises, and select homework problems; and, is accessible to students and professionals without a highly specialized mathematical background.

2,265 citations

Proceedings ArticleDOI
04 Jan 2000
TL;DR: In this article, a trust model that is grounded in real-world social trust characteristics, and based on a reputation mechanism, or word-of-mouth, is proposed for the virtual medium.
Abstract: At any given time, the stability of a community depends on the right balance of trust and distrust. Furthermore, we face information overload, increased uncertainty and risk taking as a prominent feature of modern living. As members of society, we cope with these complexities and uncertainties by relying on trust, which is the basis of all social interactions. Although a small number of trust models have been proposed for the virtual medium, we find that they are largely impractical and artificial. In this paper we provide and discuss a trust model that is grounded in real-world social trust characteristics, and based on a reputation mechanism, or word-of-mouth. Our proposed model allows agents to decide which other agents' opinions they trust more, and allows agents to progressively tune their understanding of another agent's subjective recommendations.
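The core loop of such a reputation mechanism can be sketched in a few lines (an illustration only, not the paper's formal model; the class, names, and update rule are all assumptions): each agent weights recommendations by its trust in the recommender and tunes that trust as recommendations prove accurate or misleading.

```python
# Toy word-of-mouth reputation: combine recommendations by per-recommender
# trust weight, then adjust the weight from the observed error of the advice.

class Agent:
    def __init__(self):
        self.weights = {}                     # recommender -> trust in [0, 1]

    def opinion(self, recommendations):
        """Trust-weighted average of recommendations (recommender -> score)."""
        total = sum(self.weights.get(r, 0.5) for r in recommendations)
        return sum(self.weights.get(r, 0.5) * s
                   for r, s in recommendations.items()) / total

    def update(self, recommender, error, rate=0.2):
        """Raise trust for accurate advice (low error), lower it otherwise."""
        w = self.weights.get(recommender, 0.5)
        self.weights[recommender] = min(1.0, max(0.0, w + rate * (1 - 2 * error)))

a = Agent()
print(a.opinion({"B": 0.9, "C": 0.2}))    # 0.55: equal default trust in B and C
a.update("C", error=0.9)                  # C's last advice proved misleading
print(a.opinion({"B": 0.9, "C": 0.2}))    # ~0.62: shifts toward B's view
```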

1,487 citations