
Irina Rish

Researcher at Université de Montréal

Publications: 221
Citations: 7792

Irina Rish is an academic researcher at Université de Montréal. She has contributed to research in the areas of computer science and approximation algorithms. She has an h-index of 34 and has co-authored 198 publications receiving 6,830 citations. Her previous affiliations include IBM and the University of California, Irvine.

Papers

An empirical study of the naive Bayes classifier

Irina Rish
TL;DR: This work analyzes the impact of distribution entropy on classification error, showing that low-entropy feature distributions yield good performance of naive Bayes, and demonstrates that naive Bayes works well for certain nearly functional feature dependencies.
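To make the low-entropy observation concrete, here is a minimal Bernoulli naive Bayes sketch on synthetic data whose features are nearly deterministic given the class (i.e., low-entropy conditional distributions). This is an illustrative toy, not the paper's experimental setup; all function names and parameters are hypothetical.

```python
import numpy as np

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Fit a Bernoulli naive Bayes model with Laplace smoothing."""
    classes = np.unique(y)
    log_prior = np.log([(y == c).mean() for c in classes])
    # Smoothed estimate of P(feature = 1 | class), one row per class.
    cond = np.array([(X[y == c].sum(axis=0) + alpha) /
                     ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, log_prior, np.log(cond), np.log(1 - cond)

def predict(model, X):
    classes, log_prior, log_p1, log_p0 = model
    # log P(c) + sum_j [ x_j log p_j + (1 - x_j) log(1 - p_j) ]
    scores = log_prior + X @ log_p1.T + (1 - X) @ log_p0.T
    return classes[np.argmax(scores, axis=1)]

# Low-entropy features: each feature matches the class label 95% of the time.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = (rng.random((500, 5)) < np.where(y[:, None] == 1, 0.95, 0.05)).astype(float)

model = fit_bernoulli_nb(X, y)
acc = (predict(model, X) == y).mean()
```

With such low-entropy conditionals, accuracy is close to 1; raising the per-feature noise toward 0.5 (maximum entropy) degrades it, matching the paper's qualitative finding.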
Proceedings Article

Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

TL;DR: In this paper, a deep recurrent-convolutional network was proposed to learn robust representations from multi-channel EEG time series, and its advantages were demonstrated on a mental-load classification task.
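The paper's pipeline first turns raw multi-channel EEG into a sequence of spectral "frames" before any network sees the data. The sketch below shows only that preprocessing step (windowing plus per-band FFT power) with NumPy; the actual method additionally projects these powers onto topographic images and feeds them to a ConvNet + recurrent stack, which is omitted here. Window length, sampling rate, and band edges are illustrative assumptions.

```python
import numpy as np

def band_power(window, fs, band):
    """Average spectral power of one window within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[..., mask].mean(axis=-1)

def eeg_to_feature_frames(signal, fs=128, win=128,
                          bands=((4, 8), (8, 13), (13, 30))):
    """Split a (channels, samples) EEG array into non-overlapping windows
    and compute theta/alpha/beta band power per channel.
    Returns (n_windows, n_bands, n_channels): one frame per time step."""
    n_win = signal.shape[1] // win
    frames = []
    for w in range(n_win):
        seg = signal[:, w * win:(w + 1) * win]
        frames.append([band_power(seg, fs, b) for b in bands])
    return np.array(frames)

rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 1024))  # 32 channels, 8 s at 128 Hz
frames = eeg_to_feature_frames(eeg)    # shape (8, 3, 32)
```

Each frame plays the role of one "image" time step; the recurrent part of the network then models how these frames evolve over the trial.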
Proceedings ArticleDOI

Critical event prediction for proactive management in large-scale computer clusters

TL;DR: This work builds a proactive prediction and control system for large clusters, using time-series methods, rule-based classification algorithms, and Bayesian network models to predict system performance parameters (SARs) with a high degree of accuracy.
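As a flavor of the time-series side of such a system, here is a deliberately tiny sketch: a one-step moving-average forecast of a utilization metric, combined with a threshold rule that raises an alarm before the metric itself crosses the critical level. This is a toy stand-in for the paper's methods; the metric name, window size, and threshold are invented for illustration.

```python
import numpy as np

def forecast_next(series, window=5):
    """One-step-ahead forecast: mean of the last `window` observations."""
    return float(np.mean(series[-window:]))

def predict_critical(series, threshold, window=5):
    """Rule-based alarm: flag a likely critical event when the forecast
    of the next sample exceeds `threshold`."""
    return forecast_next(series, window) > threshold

cpu_util = [0.42, 0.47, 0.55, 0.63, 0.72, 0.81, 0.88]  # steadily rising load
alarm = predict_critical(cpu_util, threshold=0.7)       # fires proactively
```

A real deployment would replace the moving average with the paper's richer models (rule induction, Bayesian networks) and feed alarms into a management layer, but the forecast-then-threshold structure is the same.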
Posted Content

Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference

TL;DR: In this paper, the authors propose a meta-experience replay (MER) algorithm, which combines experience replay with optimization based meta-learning to learn parameters that make interference based on future gradients less likely.
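The two ingredients named in the summary, an experience-replay buffer and an optimization-based meta-update, can be sketched compactly. Below is a reservoir-sampling buffer plus a simplified MER-style step: run inner SGD over replayed and current examples, then interpolate the slow weights toward the result (a Reptile-style outer step). This is a loose sketch of the idea, not the authors' implementation; learning rates, batching, and all names are assumptions.

```python
import random
import numpy as np

class ReservoirBuffer:
    """Reservoir sampling keeps an (approximately) uniform sample of the
    whole stream, a common replay-buffer choice in continual learning."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.items = capacity, 0, []

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

def mer_step(params, new_example, buffer, grad_fn,
             inner_lr=0.1, meta_lr=0.3):
    """One simplified MER-style update: inner SGD over replayed plus
    current examples, then a Reptile-style interpolation of the slow
    weights toward the adapted fast weights."""
    fast = params.copy()
    batch = buffer.items + [new_example]
    random.shuffle(batch)
    for x, y in batch:
        fast -= inner_lr * grad_fn(fast, x, y)   # inner SGD step
    buffer.add(new_example)
    return params + meta_lr * (fast - params)    # meta (outer) step

# Toy usage: online linear regression on a noiseless stream.
random.seed(0)
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
w = np.zeros(2)
buf = ReservoirBuffer(20)

def sq_grad(p, x, y):
    return (p @ x - y) * x   # gradient of 0.5 * (p.x - y)^2

for _ in range(100):
    x = rng.standard_normal(2)
    w = mer_step(w, (x, float(true_w @ x)), buf, sq_grad)
```

The interpolation step is what biases the updates toward parameters whose gradients transfer across examples rather than interfere, which is the paper's stated objective.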