Jan Hula
Researcher at University of Ostrava
Publications - 11
Citations - 243
Jan Hula is an academic researcher from the University of Ostrava. The author has contributed to research in topics including transfer of learning and language modeling. The author has an h-index of 5 and has co-authored 11 publications receiving 153 citations. Previous affiliations of Jan Hula include Johns Hopkins University.
Papers
Posted Content
Poly-YOLO: higher speed, more precise detection and instance segmentation for YOLOv3.
TL;DR: A new version of YOLO, called Poly-YOLO, with better performance, extended with instance segmentation; it has the same precision as YOLOv3 but is three times smaller and twice as fast, making it suitable for embedded devices.
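As a rough illustration of the polygon-based instance segmentation that Poly-YOLO proposes, the Python sketch below converts polygon vertices into a polar representation (angle plus normalized distance relative to the box center), which is the kind of target the model regresses as we understand the paper's approach. The function name and shapes here are our own, not the authors' code.

```python
import math

def polygon_to_polar(vertices, box_center, box_diag):
    """Convert polygon vertices to (angle, normalized distance) pairs
    relative to the bounding-box center, illustrating a Poly-YOLO-style
    polygon representation (not the authors' implementation)."""
    cx, cy = box_center
    polar = []
    for x, y in vertices:
        dx, dy = x - cx, y - cy
        angle = math.atan2(dy, dx)             # vertex direction in radians
        dist = math.hypot(dx, dy) / box_diag   # distance normalized by box size
        polar.append((angle, dist))
    # sort by angle so a network could predict one vertex per angular sector
    return sorted(polar)

# usage: a triangle inside a box centered at (50, 50) with diagonal 100
print(polygon_to_polar([(60, 50), (40, 60), (40, 40)], (50, 50), 100.0))
```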
Posted Content
Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling
Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman
TL;DR: The authors conducted a large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling, and found that the results support language modeling as a pretraining task, especially when combined with pretraining on additional labeled-data tasks.
Posted Content
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling.
Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen
TL;DR: The primary results support the use of language modeling as a pretraining task; multitask learning with language models sets a new state of the art among comparable models, and the results suggest that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.
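To make the "pretrain and freeze" setup the paper evaluates concrete, here is a minimal PyTorch sketch. The stand-in GRU encoder is not one of the models studied in the paper; the point is only that the encoder's weights are frozen and a small classifier head is trained on top.

```python
import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    """Freeze a pretrained sentence encoder; train only a small head."""
    def __init__(self, encoder, hidden_dim, num_classes):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                 # freeze pretrained weights
        self.head = nn.Linear(hidden_dim, num_classes)  # trainable head

    def forward(self, embedded_tokens):
        with torch.no_grad():                       # no gradients through encoder
            _, h = self.encoder(embedded_tokens)
        return self.head(h[-1])                     # classify final hidden state

# usage: a batch of 4 sequences, 20 tokens each, 300-dim embeddings
encoder = nn.GRU(input_size=300, hidden_size=512, batch_first=True)
model = FrozenEncoderClassifier(encoder, hidden_dim=512, num_classes=2)
logits = model(torch.randn(4, 20, 300))
print(logits.shape)                                 # torch.Size([4, 2])
```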
Journal ArticleDOI
Data Preprocessing Technique for Neural Networks Based on Image Represented by a Fuzzy Function
TL;DR: This paper proposes a preprocessing technique that enriches the original image data with local intensity information and introduces a new image structure, an image represented by a fuzzy function, to encode this information.
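The paper's exact construction of the image-as-fuzzy-function is not reproduced here; the Python sketch below only illustrates the general idea of enriching an image with a per-pixel fuzzy membership channel derived from local intensity. The window size and the linear membership shape are our assumptions, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_intensity_channel(image, window=5):
    """Illustrative fuzzy membership map in [0, 1] built from local
    intensity context (the paper's actual construction may differ)."""
    img = image.astype(np.float32)
    local_mean = uniform_filter(img, size=window)   # local intensity context
    # membership is 1 when a pixel equals its neighborhood mean and
    # decays linearly with the absolute deviation
    membership = 1.0 - np.abs(img - local_mean) / 255.0
    return np.clip(membership, 0.0, 1.0)

# usage: stack the fuzzy channel onto a grayscale image as network input
gray = np.random.randint(0, 256, (64, 64))
enriched = np.stack([gray / 255.0, fuzzy_intensity_channel(gray)], axis=-1)
print(enriched.shape)  # (64, 64, 2)
```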