scispace - formally typeset

Radu Tudor Ionescu

Researcher at University of Bucharest

Publications: 154
Citations: 3078

Radu Tudor Ionescu is an academic researcher at the University of Bucharest. He has contributed to research on topics including convolutional neural networks and deep learning, has an h-index of 20, and has co-authored 153 publications receiving 1836 citations.

Papers
Proceedings ArticleDOI

Unmasking the Abnormal Events in Video

TL;DR: This is the first work to apply unmasking to a computer vision task; the empirical results indicate that the abnormal event detection framework achieves state-of-the-art results while running in real time at 20 frames per second.
Journal ArticleDOI

Local Learning With Deep and Handcrafted Features for Facial Expression Recognition

TL;DR: The authors propose an approach that combines automatic features learned by convolutional neural networks (CNNs) with handcrafted features computed by the bag-of-visual-words (BOVW) model in order to achieve state-of-the-art results in facial expression recognition (FER).
Proceedings ArticleDOI

Object-Centric Auto-Encoders and Dummy Anomalies for Abnormal Event Detection in Video

TL;DR: An unsupervised feature learning framework based on object-centric convolutional auto-encoders is introduced to encode both motion and appearance information, and a supervised classification approach based on clustering the training samples into normality clusters is proposed.
Posted Content

Object-centric Auto-encoders and Dummy Anomalies for Abnormal Event Detection in Video

TL;DR: The authors propose an unsupervised feature learning framework based on object-centric convolutional auto-encoders to encode both motion and appearance information.
Book ChapterDOI

Deep Appearance Features for Abnormal Behavior Detection in Video

TL;DR: The empirical results indicate that the novel framework for abnormal event detection in video, based on deep features extracted with pre-trained convolutional neural networks, reaches state-of-the-art results while running in real time at 20 frames per second.