scispace - formally typeset
Author

Farzad Ramezani

Bio: Farzad Ramezani is an academic researcher from the University of Tehran. The author has contributed to research in topics: Isothermal process & Creep. The author has an h-index of 3 and has co-authored 3 publications receiving 30 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors developed a novel informational connectivity method to test whether peri-frontal brain areas contribute to familiar face recognition, and found that feed-forward flow dominated for the most familiar faces, while top-down flow became dominant only when sensory evidence was insufficient to support face recognition.

28 citations

Journal ArticleDOI
TL;DR: A method to predict the air pollution level of a location from an image taken by a smartphone camera, including a Convolutional Neural Network designed to receive a sky image as input and output an air pollution level.
Abstract: Air pollution is one of the most important problems of the modern era. Detecting the level of air pollution from an image taken by a camera can be informative for people who are unaware of the exact air pollution level declared daily by organizations such as municipalities. In this paper, we propose a method to predict the air pollution level of a location by taking an image with a smartphone camera and then processing it. We collected an image dataset from the city of Tehran. We then propose two methods for estimating the level of air pollution. In the first method, the images are preprocessed and the Gabor transform is used to extract features from them; two shallow classification methods are then employed to model and predict the air pollution level. In the second method, a Convolutional Neural Network (CNN) is designed to receive a sky image as input and output an air pollution level. Experiments were conducted to evaluate the proposed methods. The results show that the proposed method has acceptable accuracy in detecting the air pollution level: our deep classifier achieved an accuracy of about 59.38%, roughly 6% higher than the traditional combination of feature extraction and classification methods.
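The paper does not specify its Gabor parameters or feature layout here; the sketch below is a minimal, illustrative version of the first method's feature-extraction step (a bank of oriented Gabor filters, with mean and standard deviation of each filter response as features), with all kernel parameters chosen for illustration only.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine
    carrier at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image at several orientations (via FFT convolution) and
    summarize each response with its mean and standard deviation."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(15, sigma=4.0, theta=theta, lam=10.0)
        # circular convolution: multiply spectra, zero-padding the kernel
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, image.shape)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

The resulting low-dimensional feature vector (here 8 values for 4 orientations) would then feed a shallow classifier, as in the paper's first pipeline.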

15 citations

Journal ArticleDOI
TL;DR: This work designed two series of rapid object categorization tasks to first investigate the performance of human peripheral vision in categorizing natural object images at different eccentricities and abstraction levels (superordinate, basic, and subordinate), and then studied how modulating the foveal representation impacts peripheral object categorization at each abstraction level.
Abstract: Behavioral studies in humans indicate that peripheral vision can perform object recognition to some extent. Moreover, recent studies have shown that some information from brain regions retinotopic to the visual periphery is fed back to regions retinotopic to the fovea, and that disrupting this feedback impairs object recognition in humans. However, it is unclear to what extent information in the visual periphery contributes to human object categorization. Here, we designed two series of rapid object categorization tasks to first investigate the performance of human peripheral vision in categorizing natural object images at different eccentricities and abstraction levels (superordinate, basic, and subordinate). Then, using a delayed foveal noise mask, we studied how modulating the foveal representation impacts peripheral object categorization at each abstraction level. We found that peripheral vision can quickly and accurately accomplish superordinate categorization, while its performance at finer categorization levels drops dramatically as the object is presented farther in the periphery. We also found that a 300-ms delayed foveal noise mask can significantly disturb categorization performance at the basic and subordinate levels, while it has no effect at the superordinate level. Our results suggest that human peripheral vision can easily process objects at high abstraction levels, and that this information is fed back to prime the foveal cortex for finer categorization when a saccade is made toward the target object.

14 citations

Journal ArticleDOI
TL;DR: In this paper, the authors performed isothermal and temperature-sweep creep experiments adapted to filaments derived from spin-coated and subsequently crumpled thin polystyrene films.
Abstract: We present results from isothermal and temperature-sweep creep experiments adapted to filaments derived from spin-coated and subsequently crumpled thin polystyrene films. Due to residual stresses induced by preparation, the filaments showed significant shrinkage, which we followed as a function of time at various temperatures. In addition, the influence of preparation conditions and subsequent annealing of supported thin polymer films on shrinkage and relaxation behavior was investigated. The temporal evolution of shrinkage revealed a sequence of relaxation regimes. We explored the temperature dependence of this relaxation and compared our observations with published results on drawn melt-spun fibers. This comparison revealed intriguing similarities between the two systems prepared along different pathways. For instance, the magnitudes of shrinkage of melt-spun fibers and of filaments from crumpled spin-coated polymer films are similar. Thus, our results suggest the existence of generic mechanisms of "forgetting", i.e., how non-equilibrated polymers lose their memory of past processing events.

Cited by
01 Jan 1998
TL;DR: Neurons in the lateral intraparietal area (LIP) have visual responses to stimuli appearing abruptly at particular retinal locations (their receptive fields); this paper shows that the visual representation in LIP is sparse, with only the most salient or behaviourally relevant objects being strongly represented.
Abstract: When natural scenes are viewed, a multitude of objects that are stable in their environments are brought in and out of view by eye movements. The posterior parietal cortex is crucial for the analysis of space, visual attention and movement [1]. Neurons in one of its subdivisions, the lateral intraparietal area (LIP), have visual responses to stimuli appearing abruptly at particular retinal locations (their receptive fields) [2]. We have tested the responses of LIP neurons to stimuli that entered their receptive field by saccades. Neurons had little or no response to stimuli brought into their receptive field by saccades, unless the stimuli were behaviourally significant. We established behavioural significance in two ways: either by making a stable stimulus task-relevant, or by taking advantage of the attentional attraction of an abruptly appearing stimulus. Our results show that under ordinary circumstances the entire visual world is only weakly represented in LIP. The visual representation in LIP is sparse, with only the most salient or behaviourally relevant objects being strongly represented.

1,007 citations

Journal ArticleDOI
TL;DR: Findings illustrate that peripheral and foveal processing are closely connected, mastering the compromise between a large peripheral visual field and high resolution at the fovea.
Abstract: Visual processing varies dramatically across the visual field. These differences start in the retina and continue all the way to the visual cortex. Despite these differences in processing, the perceptual experience of humans is remarkably stable and continuous across the visual field. Research in the last decade has shown that processing in peripheral and foveal vision is not independent, but is more directly connected than previously thought. We address three core questions on how peripheral and foveal vision interact, and review recent findings on potentially related phenomena that could provide answers to these questions. First, how is the processing of peripheral and foveal signals related during fixation? Peripheral signals seem to be processed in foveal retinotopic areas to facilitate peripheral object recognition, and foveal information seems to be extrapolated toward the periphery to generate a homogeneous representation of the environment. Second, how are peripheral and foveal signals re-calibrated? Transsaccadic changes in object features lead to a reduction in the discrepancy between peripheral and foveal appearance. Third, how is peripheral and foveal information stitched together across saccades? Peripheral and foveal signals are integrated across saccadic eye movements to average percepts and to reduce uncertainty. Together, these findings illustrate that peripheral and foveal processing are closely connected, mastering the compromise between a large peripheral visual field and high resolution at the fovea.

67 citations

Journal ArticleDOI
TL;DR: A deep learning method for feature extraction combined with a mixture of experts for classification: an ensemble classifier that combines the advantages of different ELMs using a gating network, achieving very high accuracy with near-real-time processing.
Abstract: This paper considers accident images and develops a deep learning method for feature extraction together with a mixture of experts for classification. For the first task, the outputs of the last max-pooling layer of a Convolutional Neural Network (CNN) are used to extract hidden features automatically. For the second task, a mixture of advanced variations of the Extreme Learning Machine (ELM), including the basic ELM, constrained ELM (CELM), On-Line Sequential ELM (OSELM) and Kernel ELM (KELM), is developed. This ensemble classifier combines the advantages of the different ELMs using a gating network; its accuracy is very high while the processing time is close to real-time. To show its efficiency, different combinations of traditional feature extraction and feature selection methods with various classifiers are examined on two kinds of benchmarks: an accident-image data set and several general data sets. It is shown that the proposed system detects accidents with 99.31% precision, recall and F-measure. In addition, the precisions of accident-severity classification and involved-vehicle classification are 90.27% and 92.73%, respectively. This system is suitable for on-line processing of accident images captured by Unmanned Aerial Vehicles (UAVs) or other surveillance systems.
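The abstract names the basic ELM as one expert in the mixture. For readers unfamiliar with the model, here is a minimal numpy sketch of a basic ELM, assuming the standard formulation (a random, untrained hidden layer whose output weights are solved in closed form); hidden-layer size, activation and data are illustrative, not from the paper.

```python
import numpy as np

class BasicELM:
    """Minimal Extreme Learning Machine: random fixed hidden layer,
    output weights fitted by least squares (pseudoinverse)."""

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        # hidden weights and biases are drawn once and never trained
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # hidden activations
        self.beta = np.linalg.pinv(H) @ y_onehot   # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)      # class with largest score
```

In the paper's ensemble, several such experts (CELM, OSELM, KELM variants) would be combined by a gating network; that gating stage is not sketched here.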

35 citations

Journal Article
TL;DR: In this paper, the authors examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task; the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01).
Abstract: Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVL subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search.
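The χ² comparison of saccade-direction distributions can be illustrated with a small numpy sketch of a two-sample homogeneity test on direction histograms; the 8-sector binning, the counts, and the critical value are illustrative assumptions, not the study's actual data.

```python
import numpy as np

def chi2_statistic(obs_a, obs_b):
    """Pearson chi-square statistic comparing two count histograms.
    Expected counts come from the pooled distribution, rescaled to each
    sample's total (standard two-sample test of homogeneity)."""
    obs_a = np.asarray(obs_a, dtype=float)
    obs_b = np.asarray(obs_b, dtype=float)
    pooled = (obs_a + obs_b) / (obs_a.sum() + obs_b.sum())
    stat = 0.0
    for obs in (obs_a, obs_b):
        expected = pooled * obs.sum()
        stat += np.sum((obs - expected) ** 2 / expected)
    return stat

# Saccade directions binned into 8 sectors (counts are made up for illustration):
pvfl = np.array([40, 10, 12, 8, 35, 9, 11, 7])    # biased toward two directions
fvf = np.array([16, 15, 17, 14, 16, 15, 17, 14])  # roughly uniform
stat = chi2_statistic(pvfl, fvf)
# df = (2 groups - 1) * (8 bins - 1) = 7; chi-square critical value at p = 0.01 is ~18.48
biased = stat > 18.48
```

A statistic above the critical value, as in this toy example, would indicate that the two groups' saccade-direction distributions differ significantly, mirroring the paper's finding in 8 of 10 cases.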

30 citations