scispace - formally typeset
Author

H. Giebel

Bio: H. Giebel is an academic researcher. The author has contributed to research in topics: Pattern recognition (psychology) & Human visual system model. The author has an h-index of 1, co-authored 1 publication receiving 11 citations.

Papers
Book ChapterDOI
01 Jan 1971
TL;DR: It is not necessary that an artificial recognition system be constructed in the same way as neuronal systems; but if it is based on the same principles of perception — as far as they are known — it might have a better chance of performing the same recognition operation.

Abstract: Artificial systems for solving difficult problems of pattern recognition are still very inefficient compared to the human visual system — at least if we regard the error rate. On the other hand, characters are formed in such a manner that they can be easily distinguished by the human visual system, which defines their meaning. It is not necessary that an artificial recognition system be constructed in the same way as neuronal systems; but if it is based on the same principles of perception — as far as they are known — it might have a better chance of performing the same recognition operation.

11 citations


Cited by
Journal ArticleDOI
TL;DR: A neural network model for a mechanism of visual pattern recognition that is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity of their shapes without being affected by their positions.
Abstract: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their positions. This network is given the nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has the ability of unsupervised learning: no “teacher” is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has come to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all, nor is it affected by a small change in the shape or size of the stimulus pattern.

4,713 citations
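The S-cell/C-cell cascade the abstract describes can be illustrated with a deliberately simplified sketch in plain Python. This is not Fukushima's actual model or learning rule, just the core idea: an S-layer detects a feature template wherever it occurs, and a C-layer takes the maximum over a neighbourhood of S-cells, so a small shift of the input pattern leaves the response unchanged.

```python
# Simplified sketch of the neocognitron's S-cell / C-cell idea
# (illustrative only; not Fukushima's actual model or learning rule).

def s_layer(image, template, threshold):
    """Slide a binary feature template over a binary image; an S-cell
    fires (1) where at least `threshold` template pixels match."""
    h, w = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    out = [[0] * (w - tw + 1) for _ in range(h - th + 1)]
    for i in range(h - th + 1):
        for j in range(w - tw + 1):
            match = sum(image[i + a][j + b] * template[a][b]
                        for a in range(th) for b in range(tw))
            out[i][j] = 1 if match >= threshold else 0
    return out

def c_layer(s_map, pool=2):
    """A C-cell takes the max over a pool x pool neighbourhood of
    S-cells, giving tolerance to small positional shifts."""
    h, w = len(s_map), len(s_map[0])
    return [[max(s_map[i + a][j + b]
                 for a in range(pool) for b in range(pool))
             for j in range(0, w - pool + 1, pool)]
            for i in range(0, h - pool + 1, pool)]

# The same 2x1 vertical-bar feature at two nearby positions:
img1 = [[1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
img2 = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
template = [[1], [1]]

r1 = c_layer(s_layer(img1, template, threshold=2))
r2 = c_layer(s_layer(img2, template, threshold=2))
# r1 == r2: the C-layer response is unchanged by the one-pixel shift.
```

As in the full model, invariance here is only local (within one pooling neighbourhood); the real neocognitron stacks many such modules so that tolerance accumulates across layers.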

Journal ArticleDOI
01 Apr 2018
TL;DR: This paper presents one of the best CNN representatives, You Only Look Once (YOLO), which breaks with the CNN family's tradition and introduces a completely new, simple, and highly efficient way of solving object detection.
Abstract: As a key application of image processing, object detection has boomed along with the unprecedented advancement of the Convolutional Neural Network (CNN) and its variants since 2012. By the time the CNN series developed to Faster Region-based CNN (Faster R-CNN), Mean Average Precision (mAP) had reached 76.4, whereas the frame rate of Faster R-CNN remained at 5 to 18 Frames Per Second (FPS), far slower than real time. Thus, the most urgent requirement for improving object detection is to accelerate its speed. After a general introduction to the background and the core CNN solution, this paper presents one of the best CNN representatives, You Only Look Once (YOLO), which breaks with the CNN family's tradition and introduces a completely new, simple, and highly efficient way of solving object detection. At its fastest it achieves an unparalleled 155 FPS, and its mAP can reach up to 78.6, both of which greatly surpass the performance of Faster R-CNN. Additionally, compared with the latest and most advanced solutions, YOLOv2 achieves an excellent tradeoff between speed and accuracy, as well as being an object detector with a strong generalization ability to represent the whole image.

192 citations
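The mAP figures quoted in the abstract rest on matching predicted boxes to ground-truth boxes by intersection over union (IoU). A minimal sketch of that overlap measure (the corner-tuple box format here is an assumption for illustration, not YOLO's own box encoding):

```python
# Minimal sketch of intersection-over-union (IoU), the box-overlap
# measure underlying detection metrics such as mAP.
# Boxes are assumed to be (x1, y1, x2, y2) corner tuples.

def iou(box_a, box_b):
    """Area of overlap divided by area of union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero if the boxes are disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by half their width overlap in a 5x10 strip:
score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # 50 / 150
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP averages precision over recall levels and classes under that matching rule.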

Book ChapterDOI
08 Oct 2016
TL;DR: In this article, the authors show state-of-the-art performance on a challenging dataset, People-Art, which contains people from photos, cartoons and 41 different artwork movements.
Abstract: CNNs have massively improved performance in object detection in photographs. However, research into object detection in artwork remains limited. We show state-of-the-art performance on a challenging dataset, People-Art, which contains people from photos, cartoons and 41 different artwork movements. We achieve this high performance by fine-tuning a CNN for this task, thus also demonstrating that training CNNs on photos results in overfitting to photos: only the first three or four layers transfer from photos to artwork. Although the CNN's performance is the highest yet, it remains less than 60% AP, suggesting further work is needed for the cross-depiction problem.

54 citations
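The fine-tuning recipe the abstract reports (keep the early, general-purpose layers learned from photos; retrain the rest on artwork) can be sketched abstractly. The layer names and the cutoff of four layers below are illustrative assumptions, not the paper's actual network or configuration:

```python
# Hedged sketch of layer-wise transfer for fine-tuning: freeze the first
# few layers of a photo-trained CNN and mark the rest for retraining on
# the new domain (artwork). Names and cutoff are illustrative only.

def split_for_finetuning(layers, n_transfer):
    """Return (frozen, retrained): the first n_transfer layers keep their
    photo-trained weights; the remaining layers are retrained."""
    return layers[:n_transfer], layers[n_transfer:]

cnn_layers = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc6", "fc7"]
frozen, retrained = split_for_finetuning(cnn_layers, n_transfer=4)
# frozen layers transfer from photos; retrained layers adapt to artwork.
```

In a real framework this split corresponds to disabling gradient updates (e.g. marking parameters non-trainable) for the frozen layers while training the rest at a normal or reduced learning rate.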

Journal ArticleDOI
TL;DR: It is found that watershed-based segmentation provides a wide range of possible petrophysical values depending on user-selected thresholds, whereas a CNN provides smaller variance when trained on scanning electron microscope data.

50 citations
