
Showing papers by Richard Durbin published in 1989


Journal Article
TL;DR: A genome mapping system reads and assembles data from clones analysed by restriction enzyme fragmentation and polyacrylamide gel electrophoresis; its input data are most effectively obtained with a scanning densitometer and the image-processing package described here.
Abstract: A genome mapping system has been developed that reads and assembles data from clones analysed by restriction enzyme fragmentation and polyacrylamide gel electrophoresis. Input data for the system can be most effectively obtained by the use of a scanning densitometer and image-processing package, such as that described in this article. The image-processing procedure involves preliminary location of bands, cooperative tracking of lanes by correlation of adjacent bands, a precise densitometric pass, alignment of the marker bands with the standard, optional interactive editing, and normalization of the accepted bands.

94 citations
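The abstract outlines a multi-step band-calling pipeline (band location, lane tracking, densitometry, marker alignment, editing, normalization). As a rough illustration of two of those steps only, the sketch below finds bands as local maxima of a lane's densitometric profile and normalizes their positions against marker bands matched to a standard lane. The function names and the toy profile are hypothetical; this is not the software described in the paper.

```python
import numpy as np

def locate_bands(profile, threshold=0.2):
    """Preliminary band location: local maxima of a lane's densitometric
    profile above a fraction of the peak intensity (a simplified stand-in
    for the band caller described in the abstract)."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]
                and profile[i] > threshold * profile.max()):
            peaks.append(i)
    return np.array(peaks)

def normalize_to_markers(band_pos, marker_pos, marker_std):
    """Map band positions onto the coordinate system of a standard lane by
    piecewise-linear interpolation between matched marker bands."""
    return np.interp(band_pos, marker_pos, marker_std)

# Toy lane profile: three Gaussian bands on a flat background.
x = np.arange(200)
profile = sum(np.exp(-0.5 * ((x - c) / 3.0) ** 2) for c in (40, 95, 160))

bands = locate_bands(profile)
markers = np.array([20, 100, 180])    # marker band positions in this gel
standard = np.array([25, 100, 175])   # corresponding positions in the standard
print("bands:", bands)
print("normalized:", normalize_to_markers(bands, markers, standard))
```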


Journal Article
TL;DR: A learning algorithm for multi-layer networks with a single output unit greatly outperforms back propagation at learning random vectors and provides further empirical evidence that the lower bound of 2n on capacity can be exceeded.
Abstract: We obtain bounds for the capacity of some multi-layer networks of linear threshold units. In the case of a network having n inputs, a single layer of h hidden units and an output layer of s units, where all the weights in the network are variable and s ≤ h ≤ n, the capacity m satisfies 2n ≤ m ≤ nt log t, where t = 1 + h/s. We consider in more detail the case where there is a single output that is a fixed Boolean function of the hidden units. In this case our upper bound is of order nh log h, but the argument which provided the lower bound of 2n no longer applies. However, by explicit computation in low-dimensional cases we show that the capacity exceeds 2n but is substantially less than the upper bound. Finally, we describe a learning algorithm for multi-layer networks with a single output unit. This greatly outperforms back propagation at the task of learning random vectors and provides further empirical evidence that the lower bound of 2n can be exceeded.

80 citations
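For a concrete sense of how the quoted bounds scale, the short sketch below evaluates the lower bound 2n and the upper bound nt log t for a few network sizes. It assumes the reading t = 1 + h/s and a base-2 logarithm, neither of which is fixed by the abstract alone.

```python
import math

def capacity_bounds(n, h, s):
    """Bounds quoted in the abstract for a network with n inputs, h hidden
    units and s output units (s <= h <= n): 2n <= m <= n*t*log(t),
    where t = 1 + h/s is assumed here, with a base-2 log for illustration."""
    assert s <= h <= n, "the bound is stated for s <= h <= n"
    t = 1 + h / s
    lower = 2 * n
    upper = n * t * math.log2(t)
    return lower, upper

# Example: single fixed output (s = 1), where the upper bound grows like n*h*log(h).
for h in (4, 8, 16):
    lo_b, up_b = capacity_bounds(n=32, h=h, s=1)
    print(f"h={h:2d}: 2n={lo_b}, nt*log2(t)={up_b:.1f}")
```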