Author

Aylin Sevik

Bio: Aylin Sevik is an academic researcher from Düzce University. The author has contributed to research in topics: Character (computing) & Font. The author has an h-index of 1 and has co-authored 1 publication receiving 10 citations.

Papers
Proceedings ArticleDOI
01 Dec 2018
TL;DR: The purpose of this article is to recognize letters, and especially the font, from images containing text; to perform the recognition process, the text in the image is first divided into letters.
Abstract: The purpose of this article is to recognize letters, and especially the font, from images containing text. To perform the recognition process, the text in the image is first divided into letters. Each letter is then sent to the recognition system, and the results are filtered according to the vowels that occur most frequently in Turkish text. As a result, the font of the text is obtained. To separate the letters from the text, we developed our own segmentation algorithm. It was designed with Turkish characters that carry dots or accents (such as i, j, ü, ö, and ğ) in mind, so that these characters are perceived by the system as a whole. To support recognition of Turkish characters, all possibilities were created for each of these characters and the algorithm was formed accordingly. After segmentation, each individual letter is sent to a pre-trained deep convolutional neural network. A data set was created for this pre-trained network, containing nearly 13 thousand letter images of size 227*227*3, produced with different point sizes, fonts, and letters. As a result, 100 percent accuracy was attained in training, and 79.08% letter-recognition and 75% font-recognition accuracy were attained in the tests.
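The pipeline the abstract describes (segment the text into letters, classify each letter crop with a pretrained CNN, filter on the common Turkish vowels, and vote on the font) can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' implementation: the connected-component merging, the model choice (AlexNet matches the stated 227x227x3 input size), the class count, and the (letter, font) label scheme are all assumed.

```python
# Sketch of the letter-segmentation + CNN font-recognition pipeline from the
# abstract. Segmentation idea, input size, and Turkish vowel filter follow the
# text; model, class count, and label layout are assumptions.
import cv2
import numpy as np
import torch
from torchvision import models, transforms

TURKISH_VOWELS = set("aeıioöuü")
NUM_CLASSES = 290  # hypothetical: number of (letter, font) combinations

def segment_letters(binary_img):
    """Split a binarized text image into per-letter crops, merging components
    that overlap horizontally so dotted/accented Turkish characters
    (i, j, ü, ö, ğ, ...) stay whole, as the paper's algorithm requires."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_img)
    boxes = sorted(tuple(stats[i][:4]) for i in range(1, n))  # (x, y, w, h)
    merged = []
    for x, y, w, h in boxes:
        if merged and x < merged[-1][0] + merged[-1][2]:  # horizontal overlap
            mx, my, mw, mh = merged[-1]
            x2, y2 = max(mx + mw, x + w), max(my + mh, y + h)
            nx, ny = min(mx, x), min(my, y)
            merged[-1] = (nx, ny, x2 - nx, y2 - ny)
        else:
            merged.append((x, y, w, h))
    return [binary_img[y:y + h, x:x + w] for x, y, w, h in merged]

# Assumed: a CNN fine-tuned on the ~13k-letter data set described above.
model = models.alexnet(num_classes=NUM_CLASSES).eval()
prep = transforms.Compose([transforms.ToTensor(),
                           transforms.Resize((227, 227))])

def predict_font(binary_img, class_names):
    """Classify each letter crop, keep predictions whose letter is a Turkish
    vowel (the most frequent letters), and vote on the font.
    class_names is an assumed list of (letter, font) label pairs."""
    votes = {}
    for crop in segment_letters(binary_img):
        rgb = cv2.cvtColor(crop, cv2.COLOR_GRAY2RGB)
        with torch.no_grad():
            pred = model(prep(rgb).unsqueeze(0)).argmax(1).item()
        letter, font = class_names[pred]
        if letter in TURKISH_VOWELS:
            votes[font] = votes.get(font, 0) + 1
    return max(votes, key=votes.get) if votes else None
```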

13 citations


Cited by
Proceedings ArticleDOI
01 Oct 2019
TL;DR: Deep learning-based YOLO (You Only Look Once) is presented for the detection of an unmanned aerial vehicle (UAV); on the specifically created data set, YOLOv3 outperforms YOLOv2 in both mAP and accuracy.
Abstract: This paper presents deep learning-based YOLO (You Only Look Once) for the detection of an unmanned aerial vehicle (UAV). In common practice, creating one's own data set is an extensive and hectic task that takes a long time, because it requires suitably resolved images from different angles; these issues make data set creation an important task. YOLOv2 and YOLOv3 are implemented on the custom-created data set for real-time UAV detection, to benchmark the performance of both models in terms of mean average precision (mAP) and accuracy. On the specifically created data set, YOLOv3 outperforms YOLOv2 in both mAP and accuracy.
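For context, YOLOv3 inference on a custom-trained UAV detector typically looks like the sketch below, here using OpenCV's DNN module. The cfg/weights file names and the single "uav" class are assumptions standing in for the authors' custom-trained models; the thresholds are common defaults.

```python
# YOLOv3 inference sketch with OpenCV's DNN module on an assumed custom
# UAV model ("yolov3-uav.cfg"/"yolov3-uav.weights" are placeholder names).
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-uav.cfg", "yolov3-uav.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_uavs(frame, conf_thresh=0.5, nms_thresh=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    boxes, scores = [], []
    for output in net.forward(out_layers):
        for det in output:            # det = [cx, cy, bw, bh, obj, cls...]
            score = float(det[4] * det[5:].max())
            if score < conf_thresh:
                continue
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(boxes[i], scores[i]) for i in np.array(keep).flatten()]
```

The mAP benchmark the paper reports is then obtained by matching such detections against ground-truth boxes at a fixed IoU threshold and averaging precision over recall levels.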

36 citations

Journal ArticleDOI
TL;DR: A deep learning approach to RFI detection using SMAP spectrogram data as input images; the well-known pretrained convolutional neural networks AlexNet, GoogLeNet, and ResNet-101 were investigated.
Abstract: Radio frequency interference (RFI) is a risk for microwave radiometers due to their requirement of very high sensitivity. The Soil Moisture Active Passive (SMAP) mission has an aggressive approach to RFI detection and filtering using dedicated spaceflight hardware and ground processing software. As more sensors push to observe at larger bandwidths in unprotected or shared spectrum, RFI detection continues to be essential. This article presents a deep learning approach to RFI detection using SMAP spectrogram data as input images. The study utilizes the benefits of transfer learning to evaluate the viability of this method for RFI detection in microwave radiometers. The well-known pretrained convolutional neural networks AlexNet, GoogLeNet, and ResNet-101 were investigated. ResNet-101 provided the highest accuracy with respect to validation data (99%), while AlexNet exhibited the highest agreement with SMAP detection (92%).
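The transfer-learning setup the abstract describes is standard: take a network pretrained on ImageNet, freeze its feature extractor, and retrain a new final layer on the spectrogram images. A minimal sketch follows, assuming a two-class layout (RFI vs. clean) and a directory of spectrogram images; both are assumptions, since the paper's exact data organization is not given here.

```python
# Transfer-learning sketch for spectrogram-based RFI detection with a
# pretrained ResNet-101 (one of the three networks the paper investigates).
# The "spectrograms/train" layout with rfi/ and clean/ subfolders is assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

prep = transforms.Compose([
    transforms.Resize((224, 224)),   # ResNet input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("spectrograms/train", prep)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                     # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: RFI vs. clean

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```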

8 citations

Proceedings ArticleDOI
27 Mar 2019
TL;DR: To automate a large library, where finding a book is a tough task, this paper presents a deep-learning solution built with the MATLAB Neural Network Toolbox, in which a mobile manipulator picks the required book.
Abstract: This paper deals with library automation and book detection using deep learning, which falls under the field of computer vision [1]. To automate a large library, where finding a book is a tough task, this paper presents a solution: the required book is detected with a deep learning model built in the MATLAB Neural Network Toolbox, and a mobile manipulator picks the particular book. The convolutional neural network used is AlexNet, which can detect around 1000 classes. Deep learning is a branch of machine learning in which a network learns to classify objects from text, images, and sound. The "deep" in deep learning refers to the number of layers in the network; a network is said to be deeper when it has many layers. Conventional neural networks contain 2 or 3 layers, while deep networks can contain hundreds. The device used is a rover capable of holding a book with its fingers; it also carries a camera that scans across the books, whose output is then analysed by the neural network.
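The authors work in MATLAB's Neural Network Toolbox; an analogous sketch of the classification step in Python, using the pretrained AlexNet that covers the ~1000 ImageNet classes mentioned, could look like this. The frame path and the idea of matching the predicted label against the requested book are assumptions for illustration.

```python
# Analogous Python sketch of the paper's classification step (the authors
# used MATLAB): run a camera frame through pretrained AlexNet and return the
# top ImageNet class, which the rover would compare to the requested book.
import torch
from torchvision import models
from torchvision.models import AlexNet_Weights
from PIL import Image

weights = AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights).eval()
prep = weights.transforms()                 # matching preprocessing

def classify_frame(path):
    """Return (label, confidence) for one camera frame."""
    img = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return weights.meta["categories"][idx.item()], conf.item()
```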

5 citations

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The purpose of this study is to help persons with disabilities move a wheelchair using brain waves: the wheelchair moves depending on the color watched by the user. The tool is called the BWD (Brain Wave for Disabilities).
Abstract: The purpose of this study is to help persons with disabilities move a wheelchair using brain waves; the wheelchair moves depending on the color watched by the user. Three colors are detected by the system: red, green, and blue. The tool is called the BWD (Brain Wave for Disabilities). Brain waves were acquired with the NeuroSky MindWave headset, which produces electroencephalograph (EEG) signals. The EEG signals go through a feature-extraction process using the FFT algorithm, in accordance with the Nyquist frequency rule. For this experiment, each color was recorded 50 times, and each recording for one color contains 500 data points. The data points are then used as input to a deep learning classifier. The classification results are used to drive the DC motor of the wheelchair, which moves according to the color seen by the subject. The success rate in this research was 66.67%.
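A rough sketch of that signal path under stated assumptions: FFT features are taken from a 500-point EEG window (keeping only the non-redundant bins up to the Nyquist frequency), then fed to a small network that picks one of the three colors. The sampling rate, the layer sizes, and the color-to-motor mapping are assumptions; the abstract only says a "deep learning algorithm" was used.

```python
# BWD-style sketch: FFT feature extraction from one EEG window, then a small
# three-class (red/green/blue) classifier. FS and network shape are assumed.
import numpy as np
import torch
import torch.nn as nn

FS = 512   # assumed MindWave sampling rate, Hz
N = 500    # data points per color recording (from the abstract)

def fft_features(window):
    """Magnitude spectrum of one EEG window; np.fft.rfft returns only the
    non-redundant bins up to the Nyquist frequency FS/2."""
    return np.abs(np.fft.rfft(window, n=N))   # N//2 + 1 = 251 features

classifier = nn.Sequential(                    # assumed architecture
    nn.Linear(N // 2 + 1, 64), nn.ReLU(),
    nn.Linear(64, 3),                          # red / green / blue
)

def predict_color(window):
    x = torch.tensor(fft_features(window), dtype=torch.float32)
    cmd = classifier(x).argmax().item()        # would drive the DC motor
    return ["red", "green", "blue"][cmd]
```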

4 citations

Journal ArticleDOI
TL;DR: It is concluded from the experimental results that the Eigenfaces method is suitable for font recognition of degraded documents.
Abstract: Introduction: In this paper, a system for recognizing fonts has been designed and implemented. The system is based on the Eigenfaces method. Because font recognition works in conjunction with other methods such as Optical Character Recognition (OCR), we used the Decapod and OCRopus software as a framework to present the method. Materials and Methods: In our experiments, text typeset in three English fonts (Comic Sans MS, DejaVu Sans Condensed, Times New Roman) was used. Results and Discussion: The system was tested thoroughly using synthetic and degraded data. The experimental results show that the Eigenfaces algorithm is very good at recognizing fonts in synthetic clean data as well as degraded data. The correct recognition rate for synthetic data is 99% based on Euclidean distance. The overall accuracy of Eigenfaces is 97% based on 6144 degraded samples, using the Euclidean distance performance criterion. Conclusions: It is concluded from the experimental results that the Eigenfaces method is suitable for font recognition in degraded documents. The three percent misclassification can be mitigated by relying on intra-word font information.
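Eigenfaces is PCA applied to flattened image vectors, with classification by Euclidean distance in the reduced eigenspace, as this abstract describes. A minimal sketch follows; the component count and the flattened-glyph input format are assumptions.

```python
# Minimal Eigenfaces-style font recognizer: PCA on flattened glyph images,
# nearest neighbor by Euclidean distance in the eigenspace (as in the paper).
import numpy as np

def fit_eigenspace(train_imgs, n_components=50):
    """train_imgs: (n_samples, h*w) float array of flattened glyph images.
    Returns the mean image, the PCA basis, and the projected training set."""
    mean = train_imgs.mean(axis=0)
    centered = train_imgs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # principal axes ("eigenfonts" here)
    return mean, basis, centered @ basis.T

def classify(img, mean, basis, train_proj, train_labels):
    """Project a query glyph and return the font label of the
    Euclidean-nearest training sample."""
    q = (img - mean) @ basis.T
    dists = np.linalg.norm(train_proj - q, axis=1)
    return train_labels[int(np.argmin(dists))]
```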

3 citations