
Showing papers in "Journal of Electronic Imaging in 2016"


Journal ArticleDOI
TL;DR: This work designs two temporal–spatial dense scale-invariant feature transform (SIFT) features and combines multimodal features to recognize expressions from image sequences, and proposes a fusion network to combine all the extracted features at the decision level.
Abstract: Facial expression recognition in the wild is a very challenging task. We describe our work in static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For the continuous facial expression recognition, we design two temporal–spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expression from image sequences. For the static facial expression recognition based on video frames, we extract dense SIFT and some deep convolutional neural network (CNN) features, including our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for those kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. Our final results are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.
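As a rough illustration of decision-level fusion (not the authors' actual fusion network), the sketch below averages normalized class scores from an SVM trained on one feature set and a PLS-based classifier trained on another; the feature matrices, the number of classes, and the equal weighting are assumptions.

```python
# Hypothetical sketch: decision-level fusion of two classifiers trained on
# different feature sets (e.g., dense SIFT and CNN features). Illustrative only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.cross_decomposition import PLSRegression

def fuse_decisions(X_sift, X_cnn, y, X_sift_test, X_cnn_test, n_classes=7):
    svm = LinearSVC().fit(X_sift, y)
    # PLS regression against one-hot labels acts as a simple PLS classifier.
    pls = PLSRegression(n_components=10).fit(X_cnn, np.eye(n_classes)[y])
    svm_scores = svm.decision_function(X_sift_test)   # (n_test, n_classes)
    pls_scores = pls.predict(X_cnn_test)               # (n_test, n_classes)
    # Normalize each score matrix before averaging at the decision level.
    norm = lambda s: (s - s.mean(axis=0)) / (s.std(axis=0) + 1e-8)
    fused = 0.5 * norm(svm_scores) + 0.5 * norm(pls_scores)
    return fused.argmax(axis=1)
```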

68 citations


Journal ArticleDOI
TL;DR: A typology for coding and analyzing information extracted for literature reviews based on Saldaña’s (2012) coding methods is presented, and the authors delineate how using this systematic approach promotes counselor identity and addresses the call for ethical, transparent research and evidence-based practices.
Abstract: Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature reviews based on Saldaña’s (2012) coding methods. We present stages for conducting these analyses using an actual body of published works and illustrate how to use a computer-assisted qualitative data analysis software program, namely, QDA Miner. Finally, we delineate how using this systematic approach promotes counselor identity and addresses the call for ethical, transparent research and evidence-based practices.

56 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a unique and simple practitioner-centered workflow based on the latest available solutions for managing point clouds in commercial BIM platforms.
Abstract: In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry allow the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects that could compose a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not trivial, and the level of geometric abstraction may not be suitable for the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a unique and simple practitioner-centered workflow based on the latest available solutions for managing point clouds in commercial BIM platforms. © 2016 SPIE and IS&T

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol, which is composed of 2000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations.
Abstract: Automatic license plate recognition (ALPR) has been the focus of much research in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
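The abstract does not give the exact Jaccard-centroid formula, so the sketch below shows a plain Jaccard (intersection-over-union) computation for axis-aligned boxes plus a purely hypothetical centroid-distance penalty, only to illustrate the idea of weighting overlap by bounding-box location.

```python
# Boxes as (x1, y1, x2, y2). The centroid penalty is an illustrative assumption,
# not the paper's actual Jaccard-centroid definition.
def jaccard(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def centroid_weighted_jaccard(pred_box, gt_box):
    cp = ((pred_box[0] + pred_box[2]) / 2, (pred_box[1] + pred_box[3]) / 2)
    cg = ((gt_box[0] + gt_box[2]) / 2, (gt_box[1] + gt_box[3]) / 2)
    dist = ((cp[0] - cg[0]) ** 2 + (cp[1] - cg[1]) ** 2) ** 0.5
    diag = ((gt_box[2] - gt_box[0]) ** 2 + (gt_box[3] - gt_box[1]) ** 2) ** 0.5
    # Overlap score shrinks as the predicted centroid drifts from the ground truth.
    return jaccard(pred_box, gt_box) * max(0.0, 1.0 - dist / diag)
```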

44 citations


Journal ArticleDOI
TL;DR: The discriminability for splicing detection is increased through the maximization process from the point of view of the Kullback–Leibler divergence, and a threshold expansion and Markov state decomposition algorithm are presented.
Abstract: We propose an efficient Markov feature extraction method for color image splicing detection. The maximum value among the various directional difference values in the discrete cosine transform domain of three color channels is used to choose the Markov features. We show that the discriminability for splicing detection is increased through the maximization process from the point of view of the Kullback–Leibler divergence. In addition, we present a threshold expansion and Markov state decomposition algorithm. Threshold expansion reduces the information loss caused by the coefficient thresholding that is used to restrict the number of Markov features. To compensate for the increased number of features due to the threshold expansion, we propose an even–odd Markov state decomposition algorithm. A fixed number of features, regardless of the difference directions, color channels, and test datasets, is used in the proposed algorithm. We introduce three kinds of Markov feature vectors. The number of Markov features for splicing detection used in this paper is relatively small compared to the conventional methods, and our method does not require additional feature reduction algorithms. Through experimental simulations, we demonstrate that the proposed method achieves high performance in splicing detection.
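A minimal sketch of the general Markov-feature idea (transition probabilities of a thresholded difference array), assuming a single difference direction and threshold T; the paper's maximization over directions and channels, threshold expansion, and even-odd state decomposition are not reproduced here.

```python
import numpy as np

def markov_features(dct_coeffs, T=3):
    """Transition-probability features of a horizontally differenced,
    thresholded coefficient array (generic sketch, one direction only)."""
    diff = dct_coeffs[:, :-1] - dct_coeffs[:, 1:]
    diff = np.clip(np.round(diff), -T, T).astype(int)      # states in -T..T
    src, dst = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    n_states = 2 * T + 1
    counts = np.zeros((n_states, n_states))
    np.add.at(counts, (src + T, dst + T), 1)                # co-occurrence counts
    row_sums = counts.sum(axis=1, keepdims=True)
    return (counts / np.maximum(row_sums, 1)).ravel()       # (2T+1)^2 features
```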

40 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined students’ perceptions of their life skills while attending project-based learning (PBL) schools and found that the skills most improved within an academic year were responsibility, problem-solving, self-directedness, and work ethic.
Abstract: This research aimed to examine students’ perceptions of their life skills while attending project-based learning (PBL) schools. The study focused on three questions: 1) What are students’ perceptions of their development of life skills in project-based learning schools? 2) In what ways, if any, do students perceive an increase in their life skill development over a one-year period of time? 3) What relationship, if any, is there between grade level and students’ perceptions of their life skills? The subjects were 275 students in grades 6-12 from two project-based learning charter schools in Minnesota. One school was in a rural location; the other in an urban location. The triangulating data collection methods included a Likert-scale survey, semi-structured interviews, and focus groups. Quantitative analysis using SPSS was used to analyze the survey data. Qualitative analysis methods used were coding and identification of emergent themes. Qualitative results showed the perceived most improved skills to be time management, collaboration, communication, and self-directedness. Quantitative results showed the most improved skills within an academic year to be responsibility, problem-solving, self-directedness, and work ethic. Self-directedness was the single skill that was evident in all data results. The results showed that students’ perceptions of their life skills were positive and that project-based learning helped them develop multiple life skills including, but not limited to, communication, collaboration, problem-solving, responsibility, and time management. Implications of this research suggest that project-based learning has a positive influence on students’ life skills development across grade levels 6-12 and helps prepare them to be successful in the 21st century global community and economy.

37 citations


Journal ArticleDOI
TL;DR: A hybrid distortion function for JPEG steganography exploiting block fluctuation and quantization steps is proposed; by using syndrome trellis coding to embed secret data, the proposed method introduces fewer detectable artifacts.
Abstract: A hybrid distortion function for JPEG steganography exploiting block fluctuation and quantization steps is proposed. To resist multidomain steganalysis, both the spatial domain and the discrete cosine transformation (DCT) domain are involved in the proposed distortion function. In the spatial domain, a distortion value is allotted for each 8×8 block according to block fluctuation. In the DCT domain, quantization steps are employed to allot distortion values for DCT coefficients in a block. The two elements, block distortion and quantization steps, are combined together to measure the embedding risk. By employing syndrome trellis coding to embed secret data, the embedding changes are constrained to complex regions, where modifications are hard to detect. When compared to current state-of-the-art steganographic methods for JPEG images, the proposed method introduces fewer detectable artifacts.
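The abstract only states that block fluctuation and quantization steps are combined; the sketch below is one hypothetical way to assign per-coefficient costs (low cost in high-fluctuation blocks, higher cost for coefficients with large quantization steps). It is not the paper's exact distortion function, and the syndrome-trellis-coding embedding step is omitted.

```python
import numpy as np

def hybrid_distortion(dct_blocks, quant_table, eps=1e-6):
    """dct_blocks: (n_blocks, 8, 8) dequantized DCT coefficients.
    quant_table: (8, 8) JPEG quantization steps.
    Returns a per-coefficient embedding cost (illustrative assumption only)."""
    costs = np.empty_like(dct_blocks, dtype=float)
    for i, block in enumerate(dct_blocks):
        # AC-energy proxy for block fluctuation: complex blocks -> low cost.
        fluct = np.abs(block).sum() - np.abs(block[0, 0])
        block_cost = 1.0 / (fluct + eps)
        # Larger quantization step -> larger spatial impact -> higher cost
        # (how the two factors are combined is an assumption).
        costs[i] = block_cost * quant_table
    return costs
```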

35 citations


Journal ArticleDOI
TL;DR: An LBP extension that takes the vector information of color into account through a color order is proposed, which provides good performance on several benchmark databases for two classification problems compared with larger-size LBP-based features of color textures.
Abstract: Texture description is a challenging problem with color images. Despite some attempts to include colors in local binary patterns (LBPs), no proposal has emerged as a color counterpart of grayscale LBPs. This is because colors are defined by vectors that are not naturally ordered, and several ways exist to compare them. We propose an LBP extension that takes the vector information of color into account through a color order. As several color orders are available and the selection of the most suitable one is difficult, we combine two of them in a texture descriptor called “mixed color order LBPs.” This small-size feature provides good performance on several benchmark databases for two classification problems compared with larger-size LBP-based features of color textures.
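A minimal sketch of a color-order LBP, assuming a lexicographic order on RGB vectors as the comparison rule; the specific pair of color orders mixed in the paper is not stated in the abstract, so this is illustrative only.

```python
import numpy as np

def color_order_lbp(img):
    """img: (H, W, 3) uint8. 8-neighbour LBP in which each neighbour is compared
    with the centre pixel through a lexicographic RGB order (illustrative choice).
    Returns a 256-bin texture histogram."""
    h, w, _ = img.shape
    # Scalar key implementing the lexicographic order on (R, G, B).
    lex = lambda a: (a[..., 0].astype(np.int64) << 16) \
                    + (a[..., 1].astype(np.int64) << 8) + a[..., 2]
    centre = lex(img[1:-1, 1:-1])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = lex(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx])
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    return np.bincount(codes.ravel(), minlength=256)
```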

34 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed direct method to derive RST invariants from Krawtchouk moments can significantly improve the performance in terms of recognition accuracy and noise robustness.
Abstract: The existing Krawtchouk moment invariants are derived by a linear combination of geometric moment invariants. This indirect method cannot achieve perfect performance in rotation, scale, and translation (RST) invariant image recognition since the derivation of these invariants is not built on Krawtchouk polynomials. A direct method to derive RST invariants from Krawtchouk moments, named explicit Krawtchouk moment invariants, is proposed. The proposed method derives Krawtchouk moment invariants by algebraically eliminating the distortion (i.e., rotation, scale, and translation) factor contained in the Krawtchouk moments of the distorted image. Experimental results show that, compared with the indirect methods, the proposed approach can significantly improve the performance in terms of recognition accuracy and noise robustness.

33 citations


Journal ArticleDOI
TL;DR: This work proposes to use a network of cheap low-resolution visual sensors (30×30 pixels) for long-term behavior analysis and analyzes mobility patterns and some of the key ADL parameters to detect increasing or decreasing health conditions.
Abstract: Recent advancements in visual sensor technologies have made behavior analysis practical for in-home monitoring systems. The current in-home monitoring systems face several challenges: (1) visual sensor calibration is a difficult task and not practical in real life because of the need for recalibration when the visual sensors are moved accidentally by a caregiver or the senior citizen, (2) privacy concerns, and (3) the high hardware installation cost. We propose to use a network of cheap low-resolution visual sensors (30×30 pixels) for long-term behavior analysis. The behavior analysis starts with visual feature selection based on foreground/background detection to track the motion level in each visual sensor. Then a hidden Markov model (HMM) is used to estimate the user’s locations without calibration. Finally, an activity discovery approach is proposed using spatial and temporal contexts. We performed experiments on 10 months of real-life data. We show that the HMM approach outperforms the k-nearest neighbor classifier against ground truth for 30 days. Our framework is able to discover 13 activities of daily living (ADLs). More specifically, we analyze mobility patterns and some of the key ADL parameters to detect increasing or decreasing health conditions.
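A compact sketch of HMM-based location estimation from per-sensor motion levels, using hmmlearn; the number of rooms (hidden states), the choice of a Gaussian observation model, and all hyperparameters are assumptions, and the activity-discovery stage is not shown.

```python
# Sketch: estimate a user's room-level location from motion levels measured by
# several low-resolution sensors, using a Gaussian HMM (illustrative parameters).
import numpy as np
from hmmlearn import hmm

def estimate_locations(motion_levels, n_rooms=5):
    """motion_levels: (n_timesteps, n_sensors) foreground-pixel counts per sensor."""
    model = hmm.GaussianHMM(n_components=n_rooms, covariance_type="diag", n_iter=50)
    model.fit(motion_levels)
    return model.predict(motion_levels)  # most likely hidden state (room) per timestep
```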

33 citations


Journal ArticleDOI
TL;DR: The proposed total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
Abstract: Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue stands for a special physical meaning and should be regularized differently. Moreover, the NNM-based methods only exploit the high spectral correlation, while ignoring the local structure of HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are included. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of HSI, the TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
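The core weighted-NNM step can be sketched as weighted soft-thresholding of singular values; the inverse-magnitude weighting below is an illustrative assumption, and the TV term and ADMM outer loop described in the abstract are omitted.

```python
import numpy as np

def weighted_svt(X, C=1.0, eps=1e-6):
    """One weighted singular-value thresholding step on a matrix X
    (e.g., an unfolded HSI patch). Larger singular values get smaller
    weights, so they are shrunk less (illustrative weighting rule)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    weights = C / (s + eps)                   # inverse-magnitude weights
    s_shrunk = np.maximum(s - weights, 0.0)   # weighted soft-thresholding
    return (U * s_shrunk) @ Vt
```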

Journal ArticleDOI
TL;DR: In this article, the authors examined the failure of efforts at addressing environmental issues via environmental education and suggested that environmental problems are on the increase due to a lack of deliberate responsibility and stewardship, the lack of a unique environmental education curriculum, and ineffective pedagogy.
Abstract: The period of environmentalism heightened environmental concern and subsequently led to the emergence of environmental education (EE), which is anchored on awareness. It is thought that an increase in environmental awareness will reverse the misuse of the environment and its resources. Four decades after the international call for environmental education, Earth’s degradation is far from abating, as its pristine state is consistently and irreversibly being eroded by anthropocentric activities. Humans have seen themselves as the dominant species that is apart from, and not part of, the organisms that constitute the environment. The philosophical value-free concept of nature and the theological assumption that humans are the ultimate species, together with the rise of capitalism and its surrogate consumerism, conspire to diminish environmental health. To protect the environment, therefore, we must refocus EE to change humans’ view of the environment and attitude towards the utilization of its resources. Environmental education can become more effective in creating respect for the environment. This paper examined the failure of efforts at addressing environmental issues via environmental education. The paper posits that environmental problems are on the increase due to a lack of deliberate responsibility and stewardship, the lack of a unique EE curriculum, and ineffective pedagogy. We therefore suggest that EE should target human perception and attitude and direct them towards biocentric stewardship of the environment. This can be achieved through a deliberate pedagogy of environmental values that promotes a sustainable attitude and respect for the environment. Humans must bear the burden of responsibility to ensure the wellbeing of the environment. We must replace the philosophical value-free concept that nature is a common commodity and the theological assumption that humans are the ultimate species. We must also rethink our consumerist nature and the endless faith in the efficacy of technology to solve recurrent human-induced ecological problems. These issues must be embedded in the school curriculum. The pedagogical approach to EE should essentially be the experiential model. The school curriculum must be the carrier and doer of these values that are crucial to the sustainability of the environment. Environmental ethics, an environmental code of conduct, environmental nationalism, nature as a manifestation of God, and ascetic consumerism are recommended as key components of environmental curricula and pedagogy.

Journal ArticleDOI
TL;DR: A robust encoding method is utilized in which the residuals of local descriptors, with respect to a discriminative model, are aggregated into fixed length vectors, resulting in a powerful vector representation.
Abstract: We focus on the problem of pose-based gait recognition. Our contribution is two-fold. First, we incorporate a local histogram descriptor that allows us to encode the trajectories of selected limbs via a one-dimensional version of histogram of oriented gradients features. In this way, a gait sequence is encoded into a sequence of local gradient descriptors. Second, we utilize a robust encoding method in which the residuals of local descriptors, with respect to a discriminative model, are aggregated into fixed-length vectors. This technique combines the advantages of both residual aggregation and soft-assignment techniques, resulting in a powerful vector representation. For classification purposes, we use a nonlinear kernel to map vectors into a reproducing kernel Hilbert space. Then, we classify an encoded gait sequence according to the sparse representation-based classification method. Experimental evaluation on two publicly available datasets demonstrates the effectiveness of the proposed scheme on both recognition and verification tasks.
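A minimal VLAD-style residual-aggregation sketch: local descriptors are assigned to the components of a learned codebook and their residuals are accumulated into one fixed-length vector. The gait-specific 1-D HOG descriptors are assumed to be available, and the hard-assignment codebook is a simple stand-in for the discriminative model and soft assignment described in the abstract.

```python
import numpy as np

def aggregate_residuals(descriptors, codebook):
    """descriptors: (n, d) local descriptors of one gait sequence.
    codebook: (k, d) centres learned on training data, e.g.
    sklearn.cluster.KMeans(n_clusters=64).fit(train_desc).cluster_centers_.
    Returns a fixed-length (k*d,) vector of aggregated residuals."""
    dists = ((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)                    # hard assignment (stand-in)
    out = np.zeros_like(codebook, dtype=float)
    for i, c in enumerate(assign):
        out[c] += descriptors[i] - codebook[c]       # accumulate residuals
    out /= np.linalg.norm(out) + 1e-12               # global L2 normalization
    return out.ravel()
```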

Journal ArticleDOI
TL;DR: This paper shows how multiview fusion can be applied to such a ConvNet LSTM architecture and shows that deep learning performs better than a traditional approach using spatiotemporal features even without requiring any background subtraction.
Abstract: Convolutional neural networks (ConvNets) coupled with long short-term memory (LSTM) networks have been recently shown to be effective for video classification as they combine the automatic feature extraction capabilities of a neural network with additional memory in the temporal domain. This paper shows how multiview fusion can be applied to such a ConvNet-LSTM architecture. Two different fusion techniques are presented. The system is first evaluated in the context of a driver activity recognition system using data collected in a multicamera driving simulator. These results show significant improvement in accuracy with multiview fusion and also show that deep learning performs better than a traditional approach using spatiotemporal features even without requiring any background subtraction. The system is also validated on another publicly available multiview action recognition dataset that has 12 action classes and 8 camera views.
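A rough PyTorch sketch of one possible ConvNet-LSTM with late (decision-level) fusion over camera views; the backbone, layer sizes, and this particular fusion rule are assumptions and do not correspond to the two specific fusion techniques evaluated in the paper.

```python
import torch
import torch.nn as nn

class ConvNetLSTM(nn.Module):
    """Per-view CNN + LSTM video classifier (illustrative dimensions)."""
    def __init__(self, n_classes=12, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip):                       # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])               # logits from the last timestep

def fuse_views(models, clips):
    """Late fusion: average softmax scores across camera views."""
    probs = [torch.softmax(m(c), dim=1) for m, c in zip(models, clips)]
    return torch.stack(probs).mean(0).argmax(1)
```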

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect the real surface cracks and can fill the gap in the existing concrete tunnel lining surface crack detection by removing the lining seam.
Abstract: Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected based on the PSM by the accelerated percolation algorithm so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in concrete tunnel lining can be obtained by removing the lining seam and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect the real surface cracks. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.
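A highly simplified stand-in for the grid-cell seed generation and crack connection steps (not the paper's percolation algorithm or lining-seam removal): dark pixels are selected cell by cell relative to the local mean, then connected components below a size threshold are discarded. The cell size, darkness factor, and minimum component size are illustrative.

```python
import numpy as np
from scipy import ndimage

def simple_crack_map(gray, cell=32, k=0.85, min_size=50):
    """gray: (H, W) float image. Marks pixels darker than k * the local cell mean,
    then keeps connected components larger than min_size pixels.
    Illustrative grid-cell sketch only; no percolation or seam removal."""
    h, w = gray.shape
    seeds = np.zeros((h, w), dtype=bool)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = gray[y:y + cell, x:x + cell]
            seeds[y:y + cell, x:x + cell] = block < k * block.mean()
    labels, n = ndimage.label(seeds)
    sizes = ndimage.sum(seeds, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_size) + 1
    return np.isin(labels, keep_labels)
```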

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed underwater image-enhancement method can effectively remove color cast, improve contrast and visibility, and recover natural appearance of degraded underwater images.
Abstract: Images taken under underwater conditions usually have a color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship of the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
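As a crude stand-in for the color-cast removal stage only (the paper's optimization-based formulation and transmission-map visibility restoration are not reproduced), a gray-world style per-channel gain illustrates the kind of balancing involved.

```python
import numpy as np

def gray_world_balance(img):
    """img: (H, W, 3) float image in [0, 1]. Scales each channel so that all
    channel means match the global mean -- a simplified stand-in for
    underwater color-cast removal, not the paper's algorithm."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```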

Journal ArticleDOI
TL;DR: The self-calibration variational message passing (SC-VMP) algorithm is proposed to improve the performance of RCI with phase error and results show that the proposed algorithm can estimate the phase error accurately and improve the imaging quality significantly.
Abstract: Radar coincidence imaging (RCI) is a high-resolution imaging technique without the limitation of relative motion between target and radar. In sparsity-driven RCI, prior knowledge of the imaging model needs to be known accurately. However, the phase error generally exists as a model error, which may cause inaccuracies in the model and defocus the image. The problem is formulated using Bayesian hierarchical prior modeling, and the self-calibration variational message passing (SC-VMP) algorithm is proposed to improve the performance of RCI with phase error. The algorithm determines the phase error as part of the imaging process. The scattering coefficient and phase error are iteratively estimated using VMP and Newton’s method, respectively. Simulation results show that the proposed algorithm can estimate the phase error accurately and improve the imaging quality significantly.

Journal ArticleDOI
TL;DR: The authors found that the traits most frequently associated with individuals with ASC were poor social skills, being introverted and withdrawn, poor communication, and a difficult personality or behaviour; participants also rated the valence of these traits, along with additional traits frequently used to describe disabled and non-disabled people.
Abstract: This research aimed to ascertain the contents (Study 1) and valence (Study 2) of the stereotype associated with Autism Spectrum Conditions (ASC) in university students. Study 1 used a free-response methodology where participants listed the characteristics that they thought society associates with individuals with ASC. This study revealed that the stereotypic traits most frequently reported by students without personal experience of ASC were poor social skills, being introverted and withdrawn, poor communication and difficult personality or behaviour. Study 2 had participants rate the valence of the 10 most frequently mentioned stereotypic traits identified in Study 1, along with additional traits frequently used to describe disabled and non-disabled people. This study found that eight of the ten most frequently listed stereotypic traits from Study 1 were seen as negative, and were rated significantly more negatively than traits used to describe non-disabled people. The knowledge of the contents and valence of the stereotype of ASC gained from this research can be used to tackle negative aspects of this stereotype.

Journal ArticleDOI
TL;DR: CIELAB markedly outperformed the other spaces, followed by HSV and CIELUV; CIE XYZ came out as the worst-performing space, and no significant difference emerged among the performance of the other device-dependent spaces.
Abstract: This paper presents a comparison of color spaces for material classification. The study includes three device-independent (CIELAB, CIELUV, and CIE XYZ) and seven device-dependent spaces (RGB, HSV, YIQ, YUV, YCbCr, Ohta’s I1I2I3, and RG-YeB-WhBl). The pros and cons of the different spaces and the procedures for converting color data among them are discussed in detail. An experiment based on 12 different image data sets was carried out to comparatively evaluate the performance of each space for material classification purposes. The results showed that CIELAB markedly outperformed the other spaces followed by HSV and CIELUV. Conversely, CIE XYZ came out as the worst performing space. Interestingly, no significant difference emerged among the performance of the other device-dependent spaces.
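For reference, converting device RGB data to the best-performing space in the study (CIELAB), as well as to HSV and CIE XYZ, can be done with scikit-image, assuming sRGB input and the library's default D65 white point; the random array stands in for a real texture image.

```python
import numpy as np
from skimage import color

rgb = np.random.rand(64, 64, 3)   # placeholder sRGB image in [0, 1]
lab = color.rgb2lab(rgb)          # CIELAB: L in [0, 100], a/b roughly [-128, 127]
hsv = color.rgb2hsv(rgb)          # HSV, the second-best space in the study
xyz = color.rgb2xyz(rgb)          # CIE XYZ, the worst-performing space
```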

Journal ArticleDOI
TL;DR: A supervised appearance model (sAM) is proposed that improves on AAM by replacing PCA with partial least-squares regression and is used for the problems of age and gender classification.
Abstract: Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
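The core substitution (PLS regression in place of PCA for the appearance parameters) can be sketched with scikit-learn; the AAM shape and texture extraction that produces the feature matrix X is assumed to have been done already, and the number of components is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

def appearance_params(X, y, n_components=20):
    """X: (n_samples, n_features) concatenated shape + texture vectors (assumed given).
    y: (n_samples,) response codes, e.g., age or gender labels.
    Returns unsupervised (PCA) and supervised (PLS) low-dimensional parameters."""
    pca_params = PCA(n_components=n_components).fit_transform(X)    # ignores y
    pls = PLSRegression(n_components=n_components).fit(X, y.reshape(-1, 1))
    pls_params = pls.transform(X)                                   # uses y
    return pca_params, pls_params
```

The difference illustrated here is exactly the abstract's argument: PCA keeps directions of large variance regardless of the labels, whereas the PLS latent scores are chosen to covary with the response.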

Journal ArticleDOI
TL;DR: A convolutional neural network is proposed to classify images of buildings using sparse features at the network’s input in conjunction with primary color pixel values and the results are encouraging and allow for prefiltering of the content in the search tasks.
Abstract: We propose a convolutional neural network to classify images of buildings using sparse features at the network’s input in conjunction with primary color pixel values. As a result, a trained neural network model is obtained to classify Mexican buildings into three classes according to their architectural styles: prehispanic, colonial, and modern, with an accuracy of 88.01%. The problem of scarce information in the training dataset, due to the unequal availability of cultural material, is addressed; we propose a data augmentation and oversampling method to solve this problem. The results are encouraging and allow for prefiltering of the content in search tasks.
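A small sketch of the kind of augmentation-plus-oversampling that can balance an unequal training set; the paper's exact transformations and sampling ratios are not specified in the abstract, so the horizontal flip and replacement sampling below are assumptions.

```python
import numpy as np

def augment_and_oversample(images, labels):
    """images: list of (H, W, 3) arrays; labels: array of integer class ids.
    Mirrors images and resamples every class up to the majority-class count."""
    labels = np.asarray(labels)
    target = np.bincount(labels).max()
    out_imgs, out_labels = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # Simple augmentation: horizontal flips double each class's pool.
        pool = [images[i] for i in idx] + [np.fliplr(images[i]) for i in idx]
        picks = np.random.choice(len(pool), size=target, replace=True)
        out_imgs += [pool[p] for p in picks]
        out_labels += [c] * target
    return out_imgs, np.array(out_labels)
```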

Journal ArticleDOI
TL;DR: It is shown that the use of these descriptors in a multiple-classifier framework makes it possible to achieve very high classification accuracy in classifying texture images acquired under different lighting conditions, and that with the proposed combining strategy hand-crafted and convolutional neural network features can be used together to further improve the classification accuracy.
Abstract: The analysis of color and texture has a long history in image analysis and computer vision. These two properties are often considered as independent, even though they are strongly related in images of natural objects and materials. Correlation between color and texture information is especially relevant in the case of variable illumination, a condition that has a crucial impact on the effectiveness of most visual descriptors. We propose an ensemble of hand-crafted image descriptors designed to capture different aspects of color textures. We show that the use of these descriptors in a multiple-classifier framework makes it possible to achieve very high classification accuracy in classifying texture images acquired under different lighting conditions. A powerful alternative to hand-crafted descriptors is represented by features obtained with deep learning methods. We also show how, with the proposed combining strategy, hand-crafted and convolutional neural network features can be used together to further improve the classification accuracy. Experimental results on a food database (raw food texture) demonstrate the effectiveness of the proposed strategy.
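One generic way to realize a multiple-classifier combination over hand-crafted and CNN features is to train one classifier per descriptor and average their class probabilities; the descriptors themselves, the choice of base classifier, and the averaging rule are assumptions rather than the paper's actual combining strategy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_predict(feature_sets_train, y_train, feature_sets_test):
    """feature_sets_*: lists of (n_samples, d_i) matrices, one per descriptor
    (e.g., several color-texture descriptors plus CNN features).
    Returns class predictions from average-probability fusion."""
    probas = []
    for Xtr, Xte in zip(feature_sets_train, feature_sets_test):
        clf = LogisticRegression(max_iter=1000).fit(Xtr, y_train)
        probas.append(clf.predict_proba(Xte))
    return np.mean(probas, axis=0).argmax(axis=1)
```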

Journal ArticleDOI
TL;DR: This paper examined the content validity of the school growth mindset construct using SPSS to perform correlation analysis with multiculturally relevant organizational learning variables from the literature that were shown to explain improved school outcomes.
Abstract: According to school growth mindset theory, a school’s organizational structure influences teachers’ beliefs in their collective ability to help all students grow and learn, including those from diverse cultural, religious, identity, and socioeconomic demographics. The implicit theory of growth mindset has been quantified for a school’s culture on the What’s My School Mindset (WMSM) scale. This exploratory study was an initial effort to examine the content validity of the school growth mindset construct using SPSS to perform correlation analysis with multiculturally relevant organizational learning variables from the literature that were shown to explain improved school outcomes. Regression analysis tested the hypothesis that the independent variables would explain variations in a school’s growth mindset mean. Data were collected from a stratified random sample of middle and high school teachers (n = 64) and administrators (n = 5) in a large northwestern state. Responses were collected on the 19-question Likert-style WMSM survey. The overarching research question was: Is there a relationship between principal openness to change, faculty openness to change, work locus of control, and a school growth mindset? The results revealed that organizational learning variables significantly correlated with a growth mindset culture and explained significant variations in the WMSM mean. The results have positive implications for providing school administrators with a way to measure their school’s culture and to provide feedback to teachers that can challenge their beliefs and inform improvements in culturally responsive teaching practices.

Journal ArticleDOI
TL;DR: A 3-D ultrasound palmprint recognition system that is based on the analysis of the principal curvatures of the palm surface, i.e., the mean curvature image, the Gaussian curvature image, and the surface type, is proposed and evaluated.
Abstract: Palmprint recognition systems that use three-dimensional (3-D) information of the palm surface are the most recently explored techniques to overcome some two-dimensional palmprint difficulties. These techniques are based on structured-light imaging. In this work, a 3-D ultrasound palmprint recognition system is proposed and evaluated. Volumetric images of a region of the human hand are obtained by moving an ultrasound linear array along its elevation direction and acquiring a number of B-mode images one by one, which are then grouped in a 3-D matrix. The acquisition time was about 5 s. Much information that can be exploited for 3-D palmprint recognition is extracted from the ultrasound volumetric images, including palm curvature and other under-skin information such as the depth of the various traits. The recognition procedure developed in this work is based on the analysis of the principal curvatures of the palm surface, i.e., the mean curvature image, the Gaussian curvature image, and the surface type. The proposed method is evaluated by performing verification and identification experiments. Preliminary results have shown that the proposed system exhibits an acceptable recognition rate. Further possible improvements of the proposed technique are finally highlighted and discussed.
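Mean and Gaussian curvature of a palm surface given as a height map z(x, y) can be computed from first and second derivatives with the standard Monge-patch formulas; extracting that surface from the ultrasound volume, and the subsequent surface-type labeling and matching, are assumed to happen elsewhere.

```python
import numpy as np

def mean_gaussian_curvature(z):
    """z: (H, W) height/depth map of the palm surface (assumed already extracted
    from the ultrasound volume). Returns the mean curvature H and Gaussian
    curvature K images using the Monge-patch formulas."""
    zy, zx = np.gradient(z)          # first derivatives
    zxy, zxx = np.gradient(zx)       # second derivatives
    zyy, _ = np.gradient(zy)
    g = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    return H, K
```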

Journal ArticleDOI
TL;DR: The method increases the mean and variance of the image through an optimum number of iterations on the low-frequency coefficients of the image, which improves brightness and contrast, respectively; simultaneously, edges also become sharper.
Abstract: Image enhancement techniques are intended to improve the quality of an image without any kind of distortion or degradation. The literature is rich enough in this area, but some limitations also exist. A technique is proposed for image enhancement by combining anisotropic diffusion with dynamic stochastic resonance in the discrete wavelet transform domain. The method increases the mean and variance of the image through an optimum number of iterations on the low-frequency coefficients of the image, which improves brightness and contrast, respectively; simultaneously, edges also become sharper. This is well demonstrated on various test images. Specifically, the adaptation and efficiency of the proposed technique for medical images are shown, because medical images generally appear contaminated with noise and suffer from low illumination.

Journal ArticleDOI
TL;DR: Experimental results based on four large datasets show that the vision-based system can count and classify vehicles in real time with a high level of performance under different environmental situations, thus performing better than the conventional inductive loop detectors.
Abstract: This article presents a vision-based system for road vehicle counting and classification. The system is able to achieve counting with very good accuracy even in difficult scenarios linked to occlusions and/or the presence of shadows. The principle of the system is to use cameras already installed in road networks without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of the trials carried out, including peak periods and various vehicle types, leads to an increase of occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been developed and tested. Furthermore, the method developed in this work is capable of managing shadows with high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (more than 98%) under different environmental situations, thus performing better than conventional inductive loop detectors.
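A compact sketch of a per-pixel adaptive Gaussian background model (foreground where a pixel deviates by more than k standard deviations, with a running update of mean and variance) and of a solidity measure for flagging merged, occluding blobs; the learning rate, threshold, and initial variance are illustrative, not the paper's values.

```python
import numpy as np

class AdaptiveGaussianBackground:
    """Single adaptive Gaussian per pixel; illustrative learning rate and threshold."""
    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0**2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k**2) * self.var          # deviation test
        self.mean = (1 - self.alpha) * self.mean + self.alpha * frame
        self.var = (1 - self.alpha) * self.var + self.alpha * d2
        return foreground

def solidity(blob_area, hull_area):
    """Solidity = blob area / convex-hull area; low values suggest merged
    (occluding) vehicles. Areas can come from cv2.contourArea on a contour
    and on its convex hull."""
    return blob_area / hull_area if hull_area > 0 else 0.0
```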

Journal ArticleDOI
TL;DR: A hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is devised to predict the reflectance values of a blend to address the rising need to create high-quality products for the fashion market.
Abstract: Color matching of fabric blends is a key issue for the textile industry, mainly due to the rising need to create high-quality products for the fashion market. The process of mixing together differently colored fibers to match a desired color is usually performed by using some historical recipes, skillfully managed by company colorists. More often than desired, the first attempt in creating a blend is not satisfactory, thus requiring the experts to spend efforts in changing the recipe with a trial-and-error process. To confront this issue, a number of computer-based methods have been proposed in the last decades, roughly classified into theoretical and artificial neural network (ANN)–based approaches. Inspired by the above literature, the present paper provides a method for accurate estimation of spectrophotometric response of a textile blend composed of differently colored fibers made of different materials. In particular, the performance of the Kubelka-Munk (K-M) theory is enhanced by introducing an artificial intelligence approach to determine a more consistent value of the nonlinear function relationship between the blend and its components. Therefore, a hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is devised to predict the reflectance values of a blend.
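One classical way the Kubelka-Munk theory is applied to mixtures can be written in a few lines: reflectance is mapped to K/S, the blend's K/S is taken as a concentration-weighted sum of the components' K/S values, and the result is mapped back to reflectance. This single-constant form is only the theoretical baseline; the ANN correction of the mixing relationship described in the paper is not included.

```python
import numpy as np

def km_ks(R):
    """Kubelka-Munk K/S from reflectance R (values in (0, 1])."""
    return (1.0 - R) ** 2 / (2.0 * R)

def km_reflectance(ks):
    """Inverse Kubelka-Munk: reflectance from K/S."""
    return 1.0 + ks - np.sqrt(ks**2 + 2.0 * ks)

def blend_reflectance(component_R, concentrations):
    """component_R: (n_fibers, n_wavelengths) reflectance spectra of the dyed fibers.
    concentrations: (n_fibers,) mass fractions summing to 1.
    Single-constant K-M prediction of the blend spectrum (no ANN correction)."""
    ks_blend = np.einsum("i,ij->j", np.asarray(concentrations),
                         km_ks(np.asarray(component_R)))
    return km_reflectance(ks_blend)
```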

Journal ArticleDOI
TL;DR: This work investigates the extension of the recently proposed weighted Fourier burst accumulation method into the wavelet domain and suggests replacing the rigid registration step used in the original algorithm with a nonrigid registration in order to process sequences acquired through atmospheric turbulence.
Abstract: We investigate the extension of the recently proposed weighted Fourier burst accumulation (FBA) method into the wavelet domain. The purpose of FBA is to reconstruct a clean and sharp image from a sequence of blurred frames. This concept lies in the construction of weights to amplify dominant frequencies in the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed spectra. We first suggest replacing the rigid registration step used in the original algorithm with a nonrigid registration in order to process sequences acquired through atmospheric turbulence. Second, we propose to work in a wavelet domain instead of the Fourier one. This leads us to the construction of two types of algorithms. Finally, we propose an alternative approach to replace the weighting idea by an approach promoting the sparsity in the used space. Several experiments are provided to illustrate the efficiency of the proposed methods.
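The Fourier-side weighting that FBA is built on can be sketched directly: each frame's spectrum is weighted in proportion to its magnitude raised to an exponent p (so less-blurred frames dominate at each frequency), the weighted spectra are summed, and the result is inverted. The registration step and the wavelet-domain and sparsity-promoting variants proposed in the paper are not shown, and the value of p is illustrative.

```python
import numpy as np

def fourier_burst_accumulation(frames, p=11):
    """frames: (n, H, W) pre-registered grayscale frames.
    Weighted Fourier burst accumulation: frequencies where a frame's spectrum
    is large (less attenuated by blur) receive larger weights."""
    F = np.fft.fft2(frames, axes=(-2, -1))
    mag = np.abs(F) ** p
    weights = mag / np.maximum(mag.sum(axis=0, keepdims=True), 1e-12)
    fused = (weights * F).sum(axis=0)
    return np.real(np.fft.ifft2(fused))
```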

Journal ArticleDOI
TL;DR: The proposed algorithm enormously reduces the computational complexity of a 3-D Hahn moment and its inverse moment transform and can also be implemented easily for high orders of moments.
Abstract: We propose an algorithm for fast computation of three-dimensional (3-D) Hahn moments. First, the symmetry property of Hahn polynomials is exploited to decrease the computational complexity to 12%. Second, 3-D Hahn moments are computed by using an algorithm based on matrix multiplication. The proposed algorithm enormously reduces the computational complexity of a 3-D Hahn moment and its inverse moment transform. It can also be implemented easily for high orders of moments. The performance of the proposed algorithm is proved through object reconstruction experiments. The experimental results and complexity analysis show that the proposed method outperforms the straightforward method, especially for large-size noise-free and noisy 3-D objects.
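The matrix-multiplication form of a separable 3-D moment transform (and its inverse) can be sketched generically: one matrix of weighted orthogonal polynomial values is contracted against each axis in turn. The matrix P of Hahn polynomial values is assumed to be supplied elsewhere, and the symmetry-based savings described in the abstract are not reproduced.

```python
import numpy as np

def moments_3d(f, P):
    """f: (N, N, N) object; P: (n_max, N) matrix of weighted orthonormal
    polynomial values, P[n, x] = p_n(x) (e.g., Hahn). Computes all 3-D moments
    up to order n_max - 1 with three matrix contractions instead of sextuple loops."""
    M = np.einsum("ax,xyz->ayz", P, f)     # contract over x
    M = np.einsum("by,ayz->abz", P, M)     # contract over y
    return np.einsum("cz,abz->abc", P, M)  # contract over z

def reconstruct_3d(M, P):
    """Inverse transform: f_hat(x, y, z) = sum_{a,b,c} M[a,b,c] p_a(x) p_b(y) p_c(z);
    exact only when P contains a complete orthonormal basis."""
    f = np.einsum("abc,ax->xbc", M, P)
    f = np.einsum("xbc,by->xyc", f, P)
    return np.einsum("xyc,cz->xyz", f, P)
```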

Journal ArticleDOI
TL;DR: Analysis by ethnicity, initial major, and engagement demonstrates that underrepresented minorities have different engagement patterns, but these engagement behaviors do not contribute significantly to staying in the STEM fields.
Abstract: Persistence studies in science, technology, engineering, and math (STEM) fields indicate that the pipeline to degree attainment is “leaky” and underrepresented minorities are not persisting in the STEM fields. Those students who do not persist in the STEM fields either migrate to other fields of study or drop out of higher education altogether. Studies of STEM student attrition point to a student perception of faculty disconnection from students, calling this the “chilly climate” (Seymour & Hewitt, 1997). Engagement theory states, “…it is the individual’s integration into the academic and social systems of the college that most directly related to his continuance in that college” (Tinto, 1993). A “chilly climate” in the STEM fields could then be reflected in measures of academic and social engagement. This study uses the Beginning Postsecondary Students Longitudinal Study, 2004-2009 (BPS:04/09) (Cominole, Wheeless, Dudley, Franklin, & Wine, 2007) and logistic regression analyses to examine academic and social engagements’ impact on STEM field persistence in postsecondary education, net of individual and institutional factors. Analysis by ethnicity, initial major, and engagement demonstrates that underrepresented minorities have different engagement patterns, but these engagement behaviors do not contribute significantly to staying in the STEM fields.