
Daniel Filan

Researcher at University of California, Berkeley

Publications: 11
Citations: 67

Daniel Filan is an academic researcher at the University of California, Berkeley. His research spans topics including artificial neural networks and modularity in networks. He has an h-index of 5 and has co-authored 10 publications receiving 62 citations. His previous affiliations include the Australian National University.

Papers
Book Chapter

Self-Modification of Policy and Utility Function in Rational Agents

TL;DR: The conclusion is that the possibility of self-modification is harmless if and only if the agent's value function anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
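
A schematic rendering of this condition (illustrative, not the paper's exact formalism): the agent scores an action $a_t$ by rolling out the possibly self-modified future while keeping its current utility function $u_t$ fixed,

$$V_t(a_t) \;=\; \mathbb{E}\!\left[\,\sum_{k \ge t} \gamma^{\,k-t}\, u_t(s_k) \;\middle|\; a_t,\ \text{later actions drawn from the (possibly modified) policies } \pi_{t+1}, \pi_{t+2}, \dots\right].$$

The future is evaluated with $u_t$, not with whatever utility $u_{t+1}$ a modification would install; an agent that instead evaluated the future with the modified utility would have an incentive to rewrite itself toward trivially satisfiable goals.
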
Posted Content

Self-Modification of Policy and Utility Function in Rational Agents

TL;DR: In this article, it is shown that the self-modification possibility is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
Posted Content

Neural Networks are Surprisingly Modular

TL;DR: A measurable notion of modularity is introduced for multi-layer perceptrons (MLPs) and it is found that MLPs that undergo training and weight pruning are often significantly more modular than random networks with the same distribution of weights.
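
A rough sketch of this kind of measurement pipeline (the helper names and the within-cluster weight score are assumptions for illustration, not the authors' exact procedure): treat neurons as graph nodes and absolute weights as edges, spectrally cluster the graph, and ask how much weight stays inside clusters.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def neuron_graph(weight_mats):
    """Symmetric adjacency matrix over all neurons of an MLP.

    weight_mats: list of (n_in, n_out) weight matrices, one per layer.
    """
    sizes = [weight_mats[0].shape[0]] + [W.shape[1] for W in weight_mats]
    n = sum(sizes)
    A = np.zeros((n, n))
    offset = 0
    for W, n_in in zip(weight_mats, sizes):
        A[offset:offset + n_in,
          offset + n_in:offset + n_in + W.shape[1]] = np.abs(W)
        offset += n_in
    return A + A.T  # make the layer-to-layer blocks undirected

def within_cluster_weight(A, n_clusters=4):
    """Fraction of total edge weight inside clusters (higher = more modular)."""
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0).fit_predict(A)
    same = labels[:, None] == labels[None, :]
    return A[same].sum() / A.sum()

# To reproduce the comparison in the TL;DR, you would score a trained
# network against a control whose weights are shuffled within each layer
# (same distribution, no learned structure). Random weights stand in here
# only to make the script self-contained.
rng = np.random.default_rng(0)
trained = [rng.normal(size=(20, 20)) for _ in range(3)]  # placeholder weights
control = [rng.permuted(W.ravel()).reshape(W.shape) for W in trained]
print(within_cluster_weight(neuron_graph(trained)),
      within_cluster_weight(neuron_graph(control)))
```
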
Posted Content

Pruned Neural Networks are Surprisingly Modular

TL;DR: A measurable notion of modularity for multi-layer perceptrons (MLPs) is introduced, and it is found that training and weight pruning produce MLPs that are more modular than randomly initialized ones, and often significantly more modular than random MLPs with the same (sparse) distribution of weights.
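
The "weight pruning" in this TL;DR is typically magnitude-based. A minimal sketch, assuming a one-shot magnitude threshold (the function name and schedule are illustrative; the paper's pruning regime may differ):

```python
import numpy as np

def prune_by_magnitude(W, sparsity=0.9):
    """Zero out the smallest-magnitude entries of W, leaving ~(1 - sparsity) nonzero."""
    k = int(sparsity * W.size)
    threshold = np.partition(np.abs(W).ravel(), k)[k]  # k-th smallest magnitude
    return np.where(np.abs(W) < threshold, 0.0, W)
```
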
Posted Content

Clusterability in Neural Networks

TL;DR: In this article, the authors look for structure in the form of clusterability: how well a network can be divided into groups of neurons with strong internal connectivity but weak external connectivity. They find that a trained neural network is typically more clusterable than a randomly initialized network, and often more clusterable than a random network with the same distribution of weights.
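
The standard graph quantity behind this kind of measurement is the normalized cut, stated here as background (the paper's precise metric may differ). For a partition of the neurons into clusters $A_1, \dots, A_k$,

$$\operatorname{ncut}(A_1, \dots, A_k) \;=\; \sum_{i=1}^{k} \frac{\operatorname{cut}(A_i, \overline{A_i})}{\operatorname{vol}(A_i)},$$

where $\operatorname{cut}(A_i, \overline{A_i})$ is the total edge weight leaving cluster $A_i$ and $\operatorname{vol}(A_i)$ is the total edge weight incident to $A_i$. A low n-cut is precisely "strong internal connectivity but weak external connectivity."
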