Dimosthenis Karatzas
Researcher at Autonomous University of Barcelona
Publications - 72
Citations - 4062
Dimosthenis Karatzas is an academic researcher at the Autonomous University of Barcelona. He has contributed to research in computer science and image retrieval, has an h-index of 20, and has co-authored 57 publications receiving 3,017 citations. Previous affiliations of Dimosthenis Karatzas include the Computer Vision Center (CVC).
Papers
Proceedings ArticleDOI
ICDAR 2015 competition on Robust Reading
Dimosthenis Karatzas, Lluis Gomez-Bigorda, Anguelos Nicolaou, Suman K. Ghosh, Andrew D. Bagdanov, Masakazu Iwamura, Jiri Matas, Lukas Neumann, Vijay Chandrasekhar, Shijian Lu, Faisal Shafait, Seiichi Uchida, Ernest Valveny +12 more
TL;DR: A new Challenge 4 on Incidental Scene Text has been added alongside the existing Challenges on Born-Digital Images, Focused Scene Images and Video Text, and tasks assessing end-to-end system performance have been introduced to all Challenges.
Proceedings ArticleDOI
ICDAR 2013 Robust Reading Competition
Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Masakazu Iwamura, Lluís Gómez i Bigorda, Sergi Robles Mestre, Joan Mas, David Fernandez Mota, Jon Almazan, Lluís-Pere de las Heras +9 more
TL;DR: The datasets and ground truth specification are described, the performance evaluation protocols used are detailed, and the final results are presented along with a brief summary of the participating methods.
Proceedings ArticleDOI
ICDAR2017 Robust Reading Challenge on Multi-Lingual Scene Text Detection and Script Identification - RRC-MLT
Nibal Nayef, Fei Yin, Imen Bizid, Hyun-Soo Choi, Yuan Feng, Dimosthenis Karatzas, Zhenbo Luo, Umapada Pal, Christophe Rigaud, Joseph Chazalon, Wafa Khlif, Muhammad Muzzamil Luqman, Jean-Christophe Burie, Cheng-Lin Liu, Jean-Marc Ogier +14 more
TL;DR: This paper presents the dataset, the tasks and the findings of the RRC-MLT challenge, which aims at assessing the ability of state-of-the-art methods to detect multi-lingual text in scene images, such as in content gathered from Internet media and in modern cities where multiple cultures live and communicate together.
Proceedings ArticleDOI
Scene Text Visual Question Answering
Ali Furkan Biten, Rubèn Tito, Andres Mafla, Lluis Gomez, Marçal Rusiñol, C. V. Jawahar, Ernest Valveny, Dimosthenis Karatzas +7 more
TL;DR: The authors propose the ST-VQA dataset, a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer.
Proceedings ArticleDOI
Multi-script Text Extraction from Natural Scenes
Lluis Gomez, Dimosthenis Karatzas +1 more
TL;DR: This paper presents a method built around a perceptual organisation framework that exploits the collaboration of proximity and similarity laws to create text-group hypotheses, and demonstrates that the algorithm is competitive with state-of-the-art approaches on a standard dataset covering text in variable orientations and two languages.