Author

Tim Landgraf

Other affiliations: Technical University of Berlin
Bio: Tim Landgraf is an academic researcher from Free University of Berlin. The author has contributed to research in the topics of Honey bee and Computer science, has an h-index of 19, and has co-authored 60 publications receiving 957 citations. Previous affiliations of Tim Landgraf include Technical University of Berlin.


Papers
Journal ArticleDOI
01 Jan 2018
TL;DR: In this paper, a novel framework called RenderGAN is proposed to generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework.
Abstract: Deep Convolutional Neural Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g. lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.
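The core idea, a generator that can only apply learned, label-preserving augmentations to an image rendered from a 3D model while a discriminator pushes the result toward real unlabeled data, can be sketched as below. This is a minimal illustration under toy assumptions (a dummy render_tag stand-in, tiny networks, made-up augmentation parameters), not the authors' implementation.

import torch
import torch.nn as nn

IMG = 32  # toy image size

def render_tag(labels):
    # Stand-in for the 3D tag model: renders crude vertical stripes from the bit labels.
    stripes = labels.repeat_interleave(IMG // labels.shape[1], dim=1)          # (B, IMG)
    return stripes.unsqueeze(1).unsqueeze(2).expand(-1, 1, IMG, IMG).clone()   # (B, 1, IMG, IMG)

class Augmenter(nn.Module):
    # Generator: predicts blending/brightness/noise parameters, never the label itself.
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, z, template):
        p = torch.sigmoid(self.net(z))            # (blend, brightness, noise) in [0, 1]
        blend = p[:, 0].view(-1, 1, 1, 1)
        bright = p[:, 1].view(-1, 1, 1, 1)
        noise = p[:, 2].view(-1, 1, 1, 1)
        background = torch.rand_like(template)
        out = (1 - 0.3 * blend) * bright * template + 0.3 * blend * background
        return out + 0.1 * noise * torch.randn_like(out)

gen = Augmenter()
disc = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(64, 1, IMG, IMG)               # unlabeled real tag crops (dummy data)
labels = torch.randint(0, 2, (64, 8)).float()    # known bit labels stay attached to the generated images
for step in range(100):
    fake = gen(torch.randn(64, 16), render_tag(labels))
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(disc(fake), torch.ones(64, 1))  # make augmented renders look realistic
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()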

113 citations

Posted Content
TL;DR: By adding noise to intermediate feature maps, the method restricts the flow of information and can quantify how much information image regions provide; its information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not required for the network's decision.
Abstract: Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work we adapt the information bottleneck concept for attribution. By adding noise to intermediate feature maps we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method's information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision. For reviews: this https URL For code: this https URL
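A minimal sketch of the mechanism the abstract describes: a per-location mask mixes an intermediate feature map with Gaussian noise, the mask is optimized to keep the prediction while restricting information, and each location is scored by its KL divergence (in bits) to the pure-noise prior. The toy model, the shapes, and the single-sample noise statistics below are placeholder assumptions, not the authors' code.

import torch
import torch.nn as nn

torch.manual_seed(0)
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
for p in list(features.parameters()) + list(head.parameters()):
    p.requires_grad_(False)                        # only the mask is optimized

x = torch.randn(1, 3, 16, 16)                      # dummy input image
target = torch.tensor([3])                         # class whose evidence we attribute

with torch.no_grad():
    f = features(x)                                # intermediate feature map, (1, 8, 16, 16)
    mu, sigma = f.mean(), f.std()                  # noise prior, crudely estimated from one sample

alpha = torch.zeros_like(f, requires_grad=True)    # pre-sigmoid mask parameters
opt = torch.optim.Adam([alpha], lr=0.1)
beta = 0.1                                         # accuracy vs. information trade-off

for _ in range(200):
    lam = torch.sigmoid(alpha)                     # mask in [0, 1]
    eps = mu + sigma * torch.randn_like(f)
    z = lam * f + (1 - lam) * eps                  # information-restricted feature map
    ce = nn.functional.cross_entropy(head(z), target)
    # KL[ N(lam*f + (1-lam)*mu, (1-lam)^2 sigma^2) || N(mu, sigma^2) ] per location
    var_ratio = (1 - lam) ** 2
    kl = 0.5 * (var_ratio + (lam * (f - mu) / sigma) ** 2 - 1 - torch.log(var_ratio + 1e-8))
    loss = ce + beta * kl.mean()
    opt.zero_grad(); loss.backward(); opt.step()

bits = (kl.detach() / torch.log(torch.tensor(2.0))).sum(dim=1)   # nats -> bits, summed over channels
print(bits.shape)                                  # (1, 16, 16): an attribution map in bits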

94 citations

Journal ArticleDOI
TL;DR: This contribution describes recent advances with regard to the acceptance of the biomimetic RoboFish by live Trinidadian guppies (Poecilia reticulata), provides a detailed technical description of the RoboFish system, and shows the effect of different appearances, motion patterns, and interaction modes on the acceptance of the artificial fish replica.
Abstract: In recent years, simple biomimetic robots have been increasingly used in biological studies to investigate social behavior, for example collective movement. Nevertheless, a big challenge in developing biomimetic robots is the acceptance of the robotic agents by live animals. In this contribution, we describe our recent advances with regard to the acceptance of our biomimetic RoboFish by live Trinidadian guppies (Poecilia reticulata). We provide a detailed technical description of the RoboFish system and show the effect of different appearance, motion patterns and interaction modes on the acceptance of the artificial fish replica. Our results indicate that realistic eye dummies along with natural motion patterns significantly improve the acceptance level of the RoboFish. Through the interactive behaviors, our system can be adjusted to imitate different individual characteristics of live animals, which further increases the bandwidth of possible applications of our RoboFish for the study of animal behavior.

80 citations

Proceedings Article
12 Jul 2020
TL;DR: The paper provides a framework to assess the faithfulness of new and existing modified BP methods, both theoretically and empirically, and measures how information from later layers is ignored using a new metric, cosine similarity convergence (CSC).
Abstract: Attribution methods aim to explain a neural network's prediction by highlighting the most relevant image areas. A popular approach is to backpropagate (BP) a custom relevance score using modified rules, rather than the gradient. We analyze an extensive set of modified BP methods: Deep Taylor Decomposition, Layer-wise Relevance Propagation (LRP), Excitation BP, PatternAttribution, DeepLIFT, Deconv, RectGrad, and Guided BP. We find empirically that the explanations of all mentioned methods, except for DeepLIFT, are independent of the parameters of later layers. We provide theoretical insights for this surprising behavior and also analyze why DeepLIFT does not suffer from this limitation. Empirically, we measure how information of later layers is ignored by using our new metric, cosine similarity convergence (CSC). The paper provides a framework to assess the faithfulness of new and existing modified BP methods theoretically and empirically. For code see: this https URL
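The parameter-dependence test behind this finding can be sketched as follows: compute an attribution map, re-initialize the later layers, recompute the map, and compare the two by cosine similarity. Plain input gradients stand in here for the modified BP rules analysed in the paper, and the model and data are toy placeholders.

import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 16 * 16, 10),
)
x = torch.randn(1, 3, 16, 16)

def attribution(net, img, cls=0):
    # Saliency = gradient of the class logit w.r.t. the input (stand-in attribution rule).
    img = img.clone().requires_grad_(True)
    net(img)[0, cls].backward()
    return img.grad.detach().flatten()

def similarity_after_randomization(net, img, layer_indices):
    # Cosine similarity between the original map and the map obtained after
    # re-initializing the given (later) layers, in the spirit of the CSC metric.
    base = attribution(net, img)
    randomized = copy.deepcopy(net)
    for idx in layer_indices:
        randomized[idx].reset_parameters()
    return torch.nn.functional.cosine_similarity(base, attribution(randomized, img), dim=0)

# Randomize the final classifier layer; a faithful attribution method should
# produce a clearly different map, i.e. a low similarity score.
print(similarity_after_randomization(model, x, layer_indices=[5]).item())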

80 citations

Journal ArticleDOI
TL;DR: The bees were found to sleep more during the dark phase of the day than during the light phase, and sleep deprivation significantly reduced retention of extinction learning, indicating that consolidation of extinction memory, but not of acquisition memory, was affected by sleep deprivation.
Abstract: Sleep-like behavior has been studied in honeybees before, but the relationship between sleep and memory formation has not been explored. Here we describe a new approach to address the question of whether sleep in bees, as in other animals, improves memory consolidation. Restrained bees were observed by a web camera, and their antennal activities were used as indicators of sleep. We found that the bees sleep more during the dark phase of the day compared with the light phase. Sleep phases were characterized by two distinct patterns of antennal activity: symmetrical activity, more prominent during the dark phase, and asymmetrical activity, more common during the light phase. Sleep-deprived bees showed a rebound the following day, confirming effective deprivation of sleep. After appetitive conditioning of the bees to various olfactory stimuli, we observed their sleep. Bees conditioned to an odor with a sugar reward slept less than bees that were exposed to either the reward alone or air alone. Next, we asked whether sleep deprivation affects memory consolidation. While sleep deprivation had no effect on retention scores after odor acquisition, retention for extinction learning was significantly reduced, indicating that consolidation of extinction memory, but not of acquisition memory, was affected by sleep deprivation.

66 citations


Cited by
Journal ArticleDOI
TL;DR: Several of the fundamental algorithms used in LAMMPS are described along with the design strategies which have made it flexible for both users and developers, and some capabilities recently added to the code which were enabled by this flexibility are highlighted.

1,956 citations

Proceedings ArticleDOI
Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay K. Vasudevan, Quoc V. Le
15 Jun 2019
TL;DR: This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
Abstract: Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is 0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error rate of 1.5%, which is 0.6% better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIIT Pets, FGVC Aircraft, and Stanford Cars.
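The policy structure described in the abstract (a policy is a set of sub-policies, each sub-policy two operations with a probability and a magnitude, and one sub-policy drawn at random per image) can be sketched as below. The operations, probabilities, and magnitudes are illustrative placeholders, not a policy found by the AutoAugment search.

import random
from PIL import Image, ImageEnhance, ImageOps

# Each operation maps (image, magnitude in 0..10) to an augmented image.
OPS = {
    "rotate":    lambda img, m: img.rotate(m * 3),                                  # up to 30 degrees
    "solarize":  lambda img, m: ImageOps.solarize(img, 256 - int(m * 25.6)),
    "contrast":  lambda img, m: ImageEnhance.Contrast(img).enhance(0.1 + m * 0.18),
    "posterize": lambda img, m: ImageOps.posterize(img, max(1, 8 - int(m * 0.7))),
}

# A policy: a list of sub-policies, each a pair of (operation, probability, magnitude) triples.
policy = [
    [("rotate", 0.7, 4), ("contrast", 0.6, 7)],
    [("solarize", 0.5, 8), ("posterize", 0.9, 3)],
]

def apply_subpolicy(img, subpolicy):
    for op_name, prob, magnitude in subpolicy:
        if random.random() < prob:                 # each operation fires with its own probability
            img = OPS[op_name](img, magnitude)
    return img

def augment(img, policy):
    # One randomly chosen sub-policy per image, as in the scheme described above.
    return apply_subpolicy(img, random.choice(policy))

img = Image.new("RGB", (32, 32), color=(120, 80, 200))   # dummy image
print(augment(img, policy).size)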

1,902 citations

Posted Content
TL;DR and Abstract: identical to the AutoAugment conference paper listed above (this entry is the preprint version).

1,278 citations

Journal ArticleDOI
TL;DR: A comprehensive and up-to-date review of the state of the art (SOTA) in AutoML, organized along the deep learning pipeline and covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS).
Abstract: Deep learning (DL) techniques have obtained remarkable achievements on various tasks, such as image recognition, object detection, and language modeling. However, building a high-quality DL system for a specific task highly relies on human expertise, hindering its wide application. Meanwhile, automated machine learning (AutoML) is a promising solution for building a DL system without human assistance and is being extensively studied. This paper presents a comprehensive and up-to-date review of the state-of-the-art (SOTA) in AutoML. According to the DL pipeline, we introduce AutoML methods – covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS) – with a particular focus on NAS, as it is currently a hot sub-topic of AutoML. We summarize the representative NAS algorithms’ performance on the CIFAR-10 and ImageNet datasets and further discuss the following subjects of NAS methods: one/two-stage NAS, one-shot NAS, joint hyperparameter and architecture optimization, and resource-aware NAS. Finally, we discuss some open problems related to the existing AutoML methods for future research.
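As a concrete illustration of one stage named above, hyperparameter optimization, the sketch below runs plain random search over a small search space. The objective is a dummy stand-in for training a model and returning validation accuracy; practical AutoML systems use more sample-efficient strategies such as Bayesian optimization or successive halving.

import math
import random

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),
    "batch_size":    lambda: random.choice([32, 64, 128, 256]),
    "weight_decay":  lambda: 10 ** random.uniform(-6, -3),
}

def validation_accuracy(cfg):
    # Dummy objective that peaks near lr=1e-2 and batch_size=64; a placeholder
    # for actually training a model and evaluating it on a validation set.
    return (1.0 - abs(math.log10(cfg["learning_rate"]) + 2) / 4
                - abs(math.log2(cfg["batch_size"]) - 6) / 20)

best_cfg, best_acc = None, -float("inf")
for trial in range(50):
    cfg = {name: sample() for name, sample in search_space.items()}
    acc = validation_accuracy(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc

print(best_cfg, round(best_acc, 3))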

809 citations