Dmitry Rudoy
Researcher at Technion – Israel Institute of Technology
Publications - 33
Citations - 513
Dmitry Rudoy is an academic researcher from Technion – Israel Institute of Technology. The author has contributed to research in topics: Biology & Engineering. The author has an h-index of 11, and has co-authored 16 publications receiving 480 citations. Previous affiliations of Dmitry Rudoy include Intel.
Papers
Proceedings ArticleDOI
Learning Video Saliency from Human Gaze Using Candidate Selection
TL;DR: A novel method for video saliency estimation, inspired by the way people watch videos, is proposed; it explicitly models the continuity of the video by predicting the saliency map of a given frame conditioned on the map from the previous frame.
Journal ArticleDOI
On Estimating Optimal Performance of CPU Dynamic Thermal Management
TL;DR: A theoretical analysis targeted at estimating the optimal performance of dynamic thermal management (DTM) strategies that use dynamic voltage scaling (DVS) for power control; the patterns exhibited are then used to analyze some existing DTM techniques.
Patent
Combining power prediction and optimal control approaches for performance optimization in thermally limited designs
TL;DR: In this paper, the operating rate of an electronic system is maximized without exceeding a thermal constraint, such as a maximum junction temperature of an integrated circuit (IC) or other portion of the electronic system.
Journal ArticleDOI
Viewpoint Selection for Human Actions
Dmitry Rudoy, Lihi Zelnik-Manor +1 more
TL;DR: Two view selection approaches are proposed: one is generic, while the other can be trained to fit any preferred action recognition method; both are shown to improve action recognition results.
Posted Content
Learning Gaze Transitions from Depth to Improve Video Saliency Estimation
TL;DR: This paper introduces a novel Depth-Aware Video Saliency approach that predicts human focus of attention when videos containing a depth map (RGBD) are viewed on a 2D screen, and demonstrates that this approach outperforms state-of-the-art methods for video saliency.