Oladimeji Farri
Researcher at Philips
Publications - 69
Citations - 1423
Oladimeji Farri is an academic researcher at Philips whose work focuses on deep learning and artificial neural networks. He has an h-index of 18 and has co-authored 69 publications receiving 1163 citations. His previous affiliations include the University of Minnesota.
Papers
Proceedings Article
Neural Paraphrase Generation with Stacked Residual LSTM Networks
Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, Oladimeji Farri +6 more
TL;DR: The authors proposed a stacked residual LSTM network for paraphrase generation, which adds residual connections between LSTM layers for efficient training, and achieved state-of-the-art performance on three different datasets: PPDB, WikiAnswers, and MSCOCO.
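The core idea above, residual connections between stacked recurrent layers, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a toy tanh recurrent cell stands in for a full LSTM, and the function names (`rnn_step`, `stacked_residual_rnn`) are my own.

```python
import numpy as np

def rnn_step(x, h, W, U, b):
    # Toy tanh recurrent cell (a stand-in for an LSTM cell).
    return np.tanh(x @ W + h @ U + b)

def stacked_residual_rnn(seq, n_layers, W, U, b):
    """Run a stack of recurrent layers over seq (shape: T x d).
    Each layer's output at time t is the cell activation plus the
    layer's input at t -- the residual connection that eases
    gradient flow when training deep recurrent stacks."""
    layer_in = seq
    d = seq.shape[1]
    for _ in range(n_layers):
        h = np.zeros(d)
        outs = []
        for x in layer_in:
            h = rnn_step(x, h, W, U, b)
            outs.append(h + x)  # residual: add the layer input back
        layer_in = np.stack(outs)
    return layer_in
```

With all weights zero the cell outputs zero, so each layer reduces to the identity map; this is exactly the property that lets gradients pass through a deep stack unimpeded.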
Journal ArticleDOI
Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification.
Imon Banerjee, Yuan Ling, Matthew C. Chen, Sadid A. Hasan, Curtis P. Langlotz, N Moradzadeh, Brian E. Chapman, Timothy J. Amrhein, David A. Mong, Daniel L. Rubin, Oladimeji Farri, Matthew P. Lungren +11 more
TL;DR: Investigates cutting-edge deep learning methods for information extraction from free-text medical imaging reports at a multi-institutional scale, comparing them against the state-of-the-art domain-specific rule-based system (PEFinder) and traditional machine learning methods (SVM and AdaBoost); the results suggest that neural network models are feasible for broader use in automated classification of multi-institutional imaging text reports.
Proceedings ArticleDOI
Adverse Drug Event Detection in Tweets with Semi-Supervised Convolutional Neural Networks
Kathy Lee, Ashequl Qadir, Sadid A. Hasan, Vivek V. Datla, Aaditya Prakash, Joey Liu, Oladimeji Farri +6 more
TL;DR: This work builds several semi-supervised convolutional neural network models for ADE classification in tweets, leveraging different types of unlabeled data to improve the models.
Posted Content
DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference
Reza Ghaeini, Sadid A. Hasan, Vivek V. Datla, Joey Liu, Kathy Lee, Ashequl Qadir, Yuan Ling, Aaditya Prakash, Xiaoli Z. Fern, Oladimeji Farri +9 more
TL;DR: A novel dependent reading bidirectional LSTM network (DR-BiLSTM) is proposed to efficiently model the relationship between a premise and a hypothesis during encoding and inference in the natural language inference (NLI) task.
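The "dependent reading" idea summarized above can be illustrated with a small sketch: each sentence is encoded starting from the final state obtained by first reading the other sentence, so the premise encoding depends on the hypothesis and vice versa. This is a simplified illustration under my own assumptions, using a toy unidirectional tanh encoder (the paper uses bidirectional LSTMs), and the names `encode` and `dependent_read` are hypothetical.

```python
import numpy as np

def encode(seq, h0, W, U, b):
    # Toy unidirectional recurrent encoder; returns the final hidden state.
    h = h0
    for x in seq:
        h = np.tanh(x @ W + h @ U + b)
    return h

def dependent_read(premise, hypothesis, W, U, b, d):
    """Dependent reading: encode each sentence with its encoder state
    initialized from a first pass over the other sentence, so the two
    encodings condition on each other."""
    h0 = np.zeros(d)
    h_after_hyp = encode(hypothesis, h0, W, U, b)    # read hypothesis first...
    p_enc = encode(premise, h_after_hyp, W, U, b)    # ...then the premise, conditioned on it
    h_after_prem = encode(premise, h0, W, U, b)      # and symmetrically for the hypothesis
    h_enc = encode(hypothesis, h_after_prem, W, U, b)
    return p_enc, h_enc
```

The design point: compared with encoding the two sentences independently, the conditioned initial state lets each encoder attend to what the other sentence said before committing to a representation.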
Posted Content
Neural Paraphrase Generation with Stacked Residual LSTM Networks
Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, Oladimeji Farri +6 more
TL;DR: This work is the first to explore deep learning models for paraphrase generation with a stacked residual LSTM network, adding residual connections between LSTM layers for efficient training of deep LSTMs.