Understanding Low- and High-Level Contributions to Fixation Prediction
Citations
166 citations
Cites background from "Understanding Low- and High-Level Contributions to Fixation Prediction"
...In [35], a readout architecture was proposed to predict human attention on static images, in which both DNN features and low-level (isotropic contrast) features are considered....
[...]
...Benefiting from the most recent success of deep learning, deep neural networks (DNNs) [14], [15], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36] have also been developed to detect 2D video saliency, rather than exploring the HVS-related features as in heuristic saliency detection approaches....
[...]
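The excerpt above describes a readout architecture in which frozen DNN feature maps (and low-level contrast features) feed a small learned predictor. A minimal sketch of that readout idea, in pure Python with illustrative names and shapes (this is not the paper's code; `readout`, its arguments, and the per-channel scalar weights are assumptions for illustration):

```python
# Hypothetical sketch of a "readout" fixation predictor: the backbone feature
# maps are treated as fixed inputs, and only a pointwise (1x1) linear
# combination plus a log-density normalization is learned from fixation data.
import math

def readout(feature_maps, weights, bias=0.0):
    """feature_maps: list of equally sized 2D lists (frozen backbone channels);
    weights: one learned scalar per channel.
    Returns a log-density map: exp(output) sums to 1 over the image."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Pointwise linear readout across channels at every pixel.
    logits = [[bias + sum(wt * fm[i][j] for wt, fm in zip(weights, feature_maps))
               for j in range(w)] for i in range(h)]
    # Log-softmax over all pixels turns the logits into a fixation density.
    log_z = math.log(sum(math.exp(v) for row in logits for v in row))
    return [[v - log_z for v in row] for row in logits]
```

Because the backbone stays frozen, only the handful of readout weights are fit to fixation data, which is what lets such models reuse features trained on other tasks.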
133 citations
Cites background or methods from "Understanding Low- and High-Level Contributions to Fixation Prediction"
...[65] conclude that no matter which score is used, there still might be images for which a weaker model performs better than the best model....
[...]
...B) Right: Performances of DeepGaze II and ICF models over the MIT1003 dataset [65]....
[...]
...More recently, they introduced the DeepGaze II model [65] built upon DeepGaze I....
[...]
101 citations
Cites background from "Understanding Low- and High-Level Contributions to Fixation Prediction"
...The DeepGaze II [38] model gets the best sAUC score and relatively lower scores on other metrics....
[...]
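The excerpt above refers to the shuffled AUC (sAUC) metric, which scores a saliency map by how well it separates the current image's fixation locations from fixations sampled on other images, thereby discounting generic center bias. A small sketch of that computation, in pure Python (function names and the dict-based saliency representation are illustrative assumptions, not the benchmark's reference code):

```python
# Illustrative sketch of shuffled AUC: positives are saliency values at this
# image's fixations; negatives are values at fixations taken from other images.

def auc(pos, neg):
    """Probability that a random positive outscores a random negative,
    counting ties as half (Mann-Whitney formulation of AUC)."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def shuffled_auc(saliency, fixations, other_fixations):
    """saliency: dict mapping (x, y) -> predicted saliency value;
    fixations: fixated locations on this image (positives);
    other_fixations: fixation locations drawn from other images (negatives)."""
    pos = [saliency[xy] for xy in fixations]
    neg = [saliency[xy] for xy in other_fixations]
    return auc(pos, neg)
```

A model that simply predicts a central blob scores near chance (0.5) under sAUC, because the "other image" fixations are also center-biased; this is why a model can lead on sAUC while trailing on metrics that reward center bias.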
References
"Understanding Low- and High-Level Contributions to Fixation Prediction" refers to methods in this paper
...The models make use of convolutional filters that have been learned on other tasks, most notably object recognition in the ImageNet dataset [10]....
[...]
...Subsequently, the DeepGaze I model showed that DNN features trained on object recognition (AlexNet [29] trained on the ImageNet dataset [10]) could significantly outperform training from scratch [32]....
[...]