
Showing papers in "Journal of Electronic Imaging in 2021"


Journal ArticleDOI
TL;DR: An industrial quality prediction system combining multiple Principal Component Analysis (PCA) models with the Decision Stump (DS) algorithm is proposed for MMP quality prediction, and the results show that the model is capable of accurate classification and prediction of industrial quality.
Abstract: Manufacturing sectors' core competency is increased based on an assessment of production capabilities, and the importance of product quality in this respect cannot be overstated. Several academics have introduced Deming's 14 principles, the Shewhart cycle, total quality management, and other approaches to decrease external failure costs and enhance product yield rates. Analysis of industrial data and process monitoring is becoming increasingly important as part of the Industry 4.0 paradigm. In order to reduce internal failure costs and inspection overhead, quality control (QC) schemes are used by industries. In multistage manufacturing processes (MMP), the final product quality reflects the interactive and cumulative effect of various factors such as operators and equipment. In other cases, the final product is inspected with QC at a single workstation. Whenever a failure occurs in an MMP, cause analysis is challenging. Several industries are looking for an optimal quality prediction model in order to achieve flawless production. The majority of current approaches handle only single-stage manufacturing and are inadequate for dealing with MMP quality concerns. To overcome this issue, this paper proposes an industrial quality prediction system that combines multiple Principal Component Analysis (PCA) models with the Decision Stump (DS) algorithm for MMP quality prediction. The SECOM (SEmiCOnductor Manufacturing) dataset is used for verification and validation of the proposed model. The findings show that the model is capable of accurate classification and prediction in the field of industrial quality.
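
The paper does not give implementation details; the following is a minimal, hypothetical sketch of the general PCA-plus-decision-stump idea on SECOM-style tabular data using scikit-learn, where a decision stump is realized as a depth-1 decision tree. The data, component count, and split are placeholders, not the authors' setup.

# Hedged sketch: PCA feature reduction followed by a decision stump
# (depth-1 decision tree) for pass/fail quality classification.
# Dataset and parameter choices are illustrative, not the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# X: sensor measurements per wafer run, y: pass/fail labels (SECOM-style)
rng = np.random.default_rng(0)
X = rng.normal(size=(1567, 590))          # placeholder for SECOM features
y = rng.integers(0, 2, size=1567)         # placeholder for quality labels

model = make_pipeline(
    SimpleImputer(strategy="mean"),        # SECOM has many missing values
    PCA(n_components=20),                  # reduce correlated sensor signals
    DecisionTreeClassifier(max_depth=1),   # a decision stump
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))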

30 citations


Journal ArticleDOI
TL;DR: In this paper, a review on improving productivity for soil nutrition is presented, and the main focus is to determine strategies for the effects of a balanced nutrition system of maize-chickpea.
Abstract: Generally, a soil nutrient test is performed to determine the productivity measures of any plant. It involves many challenges related to environmental impacts and climate adaptation. To maintain crop nutrient quality without degrading soil performance, the challenges in the soil health sector must be minimized so that economic returns from crop productivity can be increased. This article presents a review on improving productivity through soil nutrition. Soil nutrition was tested and assessed using existing methods, and deficiencies in the soil were identified that could be corrected using standardized methods. The productivity function of soil supply is measured at various spatial scales as part of this research. The objective is to achieve high productivity in the context of soil and to assess the environmental impact on soil functionality, productivity functions, and resource information. Soil classification corresponds to a multitude of approaches developed globally for assessing potential soil productivity. The main focus is to determine strategies for a balanced nutrition system in a maize-chickpea cropping system. The treatments and controls can be developed and tested every year on crop yield. Finally, this review outlines a future enhancement: improved productivity tests for a balanced soil nutrition system for better crop yield, in which soils will be classified with a knowledge-based algorithm for further accuracy.

24 citations


Journal ArticleDOI
TL;DR: Smart home automation is the use of internet-enabled devices to remotely and automatically control appliances such as lighting, heating, and security systems in and around the home, so that electricity consumption is reduced in support of a green environment.
Abstract: Smart home automation is the use of internet-enabled devices to remotely and automatically control appliances such as lighting, heating systems, and security measures in and around the home. This paper discusses the relative emission effects in Home Energy Management and shows that electricity consumption can be reduced in support of a green environment. Moreover, the paper analyzes the negative environmental effects of a full home automation system; when calculating these negative effects, the Life Cycle Assessment (LCA) is considered in total. This study analyzes the electricity consumption and environmental impact of a Home Energy Management system (HEMS). The article discusses how a home automation system consumes energy across the different devices connected in a smart home. The largest energy consumers in a smart home network are smart plugs, due to their uninterrupted supply. Therefore, this article covers home automation energy management that balances energy consumption between devices at regular intervals. It also outlines future challenges concerning security issues in the smart home environment, focusing on interoperability, reliability, integration of smart homes, privacy in context, and security and privacy vulnerabilities of smart homes.

24 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an approach that jointly permutates and diffuses (JPD) the pixels in a color image for encryption, which is capable of resisting various types of attacks.
Abstract: Image encryption plays an essential role in the community of image security. Most existing image encryption approaches adopt a permutation–diffusion scheme to permute pixel positions and change pixel values separately. One limitation of this scheme is that it has a high risk of being cracked. To solve this problem, we propose an approach that jointly permutates and diffuses (JPD) the pixels in a color image for encryption. First, a 4D hyperchaotic system with two positive Lyapunov exponents is used to generate a sequence for almost all encryption procedures. To enhance security, the plain image’s information is introduced to the hyperchaotic system’s initial parameters. Then, the hyperchaotic sequence is used to permute and diffuse the pixels in images jointly. More specifically, two index matrices that determine which pixels will be permuted and diffused and one mask matrix that determines how the pixels will be diffused are generated by the hyperchaotic sequence. We test the proposed JPD with several popular images. Experimental results and security analysis demonstrate that the JPD is capable of resisting various types of attacks. Moreover, the JPD can accelerate the encryption process. All these indicate that the JPD is effective and efficient for color image encryption.
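
The exact cipher is defined by the paper's 4D hyperchaotic system; as a rough illustration of the joint permutation-diffusion idea only, the sketch below drives both the permutation index and the diffusion mask from a simple logistic map. It is not the authors' algorithm and offers no real security.

# Hedged sketch of joint permutation-diffusion driven by a chaotic sequence.
# The paper uses a 4D hyperchaotic system; a logistic map stands in here
# purely for illustration.
import numpy as np

def chaotic_sequence(n, x0=0.3735, r=3.99):
    """Logistic-map sequence in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def jpd_encrypt(img):
    flat = img.flatten().astype(np.uint8)
    seq = chaotic_sequence(2 * flat.size)
    perm = np.argsort(seq[:flat.size])               # index matrix: which pixel is taken next
    mask = (seq[flat.size:] * 256).astype(np.uint8)  # mask matrix: how pixels are diffused
    out = np.empty_like(flat)
    prev = np.uint8(0)
    for k, src in enumerate(perm):                   # permute and diffuse jointly
        out[k] = np.uint8((int(flat[src]) + int(mask[k]) + int(prev)) % 256)
        prev = out[k]
    return out.reshape(img.shape), perm, mask

img = np.arange(64, dtype=np.uint8).reshape(8, 8)    # toy "image"
cipher, perm, mask = jpd_encrypt(img)
print(cipher)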

24 citations


Journal ArticleDOI
TL;DR: The proposed hybrid method achieves a higher accuracy of 90%.
Abstract: Fault detection in transmission lines is a challenging task when examining the accuracy of the system. Faults can be caused by man-made forces or by concurrent overvoltage in the power distribution line. This research focuses on two aspects of handling and rectifying the power transmission line problem stated above. An intelligent approach is utilized for monitoring and controlling line faults in order to improve the accuracy of the equipment in transmission line fault detection. After several iterations of the procedure, the combination of line and master units improves the system's accuracy and reliability. The master unit identifies faulty poles in the network based on the variation of current and voltage at each node and calculates the distance between the station and the faulty node to reduce manual effort. In the proposed work, many sensors are placed at appropriate points to detect line faults in the network. After many iterations, the relevant information is transferred to an authorized person or unit by the knowledgeable devices. The fault status of the pole is displayed in the control unit by a display unit, together with an alarm unit, to alert the corresponding section using ZigBee techniques. A GSM unit provides the fault status to an authorized person so that problems can be rectified immediately, which further improves the reliability of the system. When compared to existing methods, the proposed hybrid method achieves a higher accuracy of 90%. This method gradually reduces labor costs, since only faulty poles rather than all poles need to be visited, thereby increasing reliability for electricity consumers.
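
As a rough illustration of the master-unit logic described above (flagging a pole whose current or voltage deviates from nominal and estimating its distance from the station), a hypothetical sketch follows; the nominal values, tolerances, and span length are assumptions, not the paper's design.

# Hedged sketch of the master-unit logic: flag a pole as faulty when its
# measured current/voltage deviates beyond a threshold from nominal, and
# estimate the distance from the station by summing known span lengths.
nominal = {"voltage": 230.0, "current": 10.0}   # assumed nominal values
span_m = 50.0                                    # assumed distance between poles

readings = [                                     # one reading per pole, station outward
    {"voltage": 229.5, "current": 10.1},
    {"voltage": 228.9, "current": 9.8},
    {"voltage": 120.0, "current": 31.0},         # abnormal reading
    {"voltage": 0.0, "current": 0.0},
]

def find_fault(readings, v_tol=0.15, i_tol=0.5):
    for idx, r in enumerate(readings):
        v_dev = abs(r["voltage"] - nominal["voltage"]) / nominal["voltage"]
        i_dev = abs(r["current"] - nominal["current"]) / nominal["current"]
        if v_dev > v_tol or i_dev > i_tol:
            return idx, (idx + 1) * span_m
    return None, None

pole, distance = find_fault(readings)
print(f"faulty pole index: {pole}, approx. distance from station: {distance} m")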

20 citations


Journal ArticleDOI
TL;DR: The suggested idea is that autonomous vehicle uncertainty is estimated by a modified version of action-based coarse trajectory planning, which permits the planner to handle complex and unusual traffic efficiently.
Abstract: The motion planning framework is one of the challenging tasks in autonomous driving. During motion planning, trajectory prediction is computed by Gaussian propagation. Recently, localization uncertainty has been estimated with a Gaussian framework; this estimation suffers from the real-time constraint distribution of Global Positioning System (GPS) error. This article compares novel motion planning methods and concludes which estimation algorithm is suitable for two different real-time traffic conditions: realistic unusual traffic and a complex target. A real-time platform is used to evaluate the several estimation methods for motion planning. The contribution of this article is to compare novel estimation methods in two different real-time environments and identify the better estimation method for each. The suggested idea is that autonomous vehicle uncertainty is estimated by a modified version of action-based coarse trajectory planning. The suggested framework permits the planner to handle complex and unusual traffic (uncertainty conditions) efficiently. The proposed case studies help in choosing an effective framework for complex surrounding environments.
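
As a small illustration of the Gaussian propagation of localization (GPS) uncertainty mentioned above, the sketch below propagates a position covariance through a constant-velocity motion model; the matrices and noise levels are assumptions, not the paper's.

# Hedged sketch: propagating a Gaussian position uncertainty (e.g. GPS error)
# along a predicted trajectory with a constant-velocity model.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])            # state: [x, y, vx, vy]
Q = np.diag([0.01, 0.01, 0.05, 0.05])    # process noise (assumed)

state = np.array([0.0, 0.0, 5.0, 0.0])   # moving along x at 5 m/s
P = np.diag([2.0, 2.0, 0.1, 0.1])        # initial GPS position covariance

for step in range(20):                    # predict 2 s ahead
    state = F @ state
    P = F @ P @ F.T + Q                   # Gaussian covariance propagation

print("predicted position:", state[:2])
print("position std-dev:", np.sqrt(np.diag(P)[:2]))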

19 citations


Journal ArticleDOI
TL;DR: A mathematical model provides an immediate response to the user and less execution time for the complex system process, and the authors discuss future improvements to the current DC motor design in the proposed system.
Abstract: Patients with low- or medium-level leg injuries can operate a wheelchair independently in the clinical region. The construction of an electric wheelchair is one solution that lets patients operate the wheelchair themselves. The motor is an essential part of an electric wheelchair for driving from one place to another, and the response of the system is very important for its optimization. Existing methods fail to provide gradual sensitivity during motion and lack adequate response time to the user. This article consists of a design for optimizing the existing DC motor transfer function for a smart wheelchair. Proper angular tuning of the derivative controller provides a better execution time for the proposed model. Smoother responses from the smart wheelchair are obtained through the dynamic response of closed-loop control. The DC motors are designed to drive the smart wheelchair as per the needs. Besides, a mathematical model is constructed for the proposed system involving the DC motor drive and the smart wheelchair arrangement. The proposed model gives independent mobility of the smart wheelchair with less response time and better sensitivity. Here, the mathematical model provides an immediate response to the user and less execution time for the complex system process. Finally, the authors discuss future improvements to the current DC motor design in the proposed system.
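
A minimal sketch of the kind of DC motor model involved, assuming the standard armature-controlled speed/voltage transfer function G(s) = K / ((J s + b)(L s + R) + K^2) with illustrative parameters, can be simulated with SciPy as follows; the paper's own MATLAB model and controller tuning are not reproduced here.

# Hedged sketch: step response of a standard armature-controlled DC motor
# speed/voltage transfer function. Parameter values are illustrative.
import numpy as np
from scipy import signal

J, b = 0.01, 0.1      # rotor inertia, viscous friction
K = 0.01              # motor torque / back-EMF constant
R, L = 1.0, 0.5       # armature resistance and inductance

num = [K]
den = np.polyadd(np.polymul([J, b], [L, R]), [K**2])
motor = signal.TransferFunction(num, den)

t, omega = signal.step(motor, T=np.linspace(0, 3, 300))
print("steady-state speed per unit voltage:", omega[-1])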

18 citations


Journal ArticleDOI
TL;DR: The proposed wavelet transform (WT) method is simulated with MATLAB/SIMULINK and helps to effectively detect the healthy and faulty conditions of the motor.
Abstract: Signal processing is considered an efficient technique for detecting faults in three-phase induction motors. Detection of different varieties of rotor faults has been widely studied at the industrial level. Extending this work, this article presents an analysis of various signal processing techniques for fault detection in three-phase induction motors due to damaged rotor bars. Usually, the Fast Fourier Transform (FFT) and STFT are used to distinguish healthy and faulty motor conditions based on signal characteristics. The study covers the advantages and limitations of the proposed wavelet transform (WT) and of each technique for detecting broken bars in induction motors. FFT techniques provide good frequency information for identifying multiple faults in a three-phase induction motor. However, detection accuracy is reduced during dynamic machine conditions because FFT cannot capture frequency information at sudden time changes. The WT signal analysis is compared with FFT to propose a fault detection method for induction motors, and the WT method proves to have better accuracy than the existing methods for signal analysis. The proposed method is simulated with MATLAB/SIMULINK and helps to effectively detect the healthy and faulty conditions of the motor.
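
To illustrate why a time-frequency transform complements the FFT for broken-rotor-bar detection, the sketch below analyzes a simulated stator current with fault sidebands around a 50 Hz supply, using NumPy's FFT and the PyWavelets CWT; the signal, frequencies, and amplitudes are illustrative assumptions, not measured data.

# Hedged sketch: FFT vs. continuous wavelet transform on a simulated
# stator-current signal with broken-rotor-bar sidebands around 50 Hz.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)            # fundamental
           + 0.05 * np.sin(2 * np.pi * 46 * t)   # lower sideband (fault)
           + 0.05 * np.sin(2 * np.pi * 54 * t))  # upper sideband (fault)

# FFT: good frequency resolution for steady-state operation
spectrum = np.abs(np.fft.rfft(current)) / len(current)
freqs = np.fft.rfftfreq(len(current), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-3:]]
print("dominant FFT components (Hz):", np.sort(peaks))

# CWT: time-frequency view, better suited to transient (dynamic) conditions
scales = np.arange(1, 64)
coeffs, cwt_freqs = pywt.cwt(current, scales, "morl", sampling_period=1 / fs)
print("CWT coefficient matrix shape:", coeffs.shape)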

18 citations


Journal ArticleDOI
TL;DR: An Artificial Intelligence and IoT-incorporated frost forecasting system is proposed in this work, and the proposed methodology is found to provide a more effective prediction of temperature.
Abstract: An Artificial Intelligence and IoT-incorporated frost forecasting system is proposed in this work. The objects inside a greenhouse are connected to each other through the Internet of Things (IoT), using devices such as actuators, sensors, and assisting aids. A smart IoT system is designed, developed, and implemented using fuzzy associative memory and Artificial Neural Networks (ANN) in order to manage any ill effects on irrigation caused by frost conditions. The temperature inside the greenhouse is monitored continuously and compared with the outside temperature, and steps are taken to stabilize the temperature so that it is suitable for plant growth. The temperature inside the greenhouse is forecast by means of the ANN; using fuzzy control, the crop temperature is predicted and the crops are watered as required using 5 levels of water pump output. The output obtained is analyzed and compared with a similar Fourier-statistical method, and the proposed methodology is found to provide a more effective prediction of temperature.
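
A minimal sketch of the overall idea (a small neural network forecasting the inside temperature from recent readings, plus a simple fuzzy-style rule mapping the forecast to one of five pump output levels) is shown below; the data, network size, and thresholds are illustrative and not the authors' design.

# Hedged sketch: a small neural network forecasts greenhouse temperature,
# and a fuzzy-style rule maps the forecast to one of 5 pump output levels.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# toy history: predict the next temperature from the previous 6 readings
series = 10 + 5 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.3, 500)
X = np.array([series[i:i + 6] for i in range(len(series) - 6)])
y = series[6:]

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X[:-50], y[:-50])
forecast = ann.predict(X[-1:])[0]

def pump_level(temp_c):
    """Map forecast temperature to pump level 1..5 (illustrative thresholds)."""
    return 1 + sum(temp_c > b for b in [2, 4, 6, 8])

print(f"forecast {forecast:.1f} C -> pump level {pump_level(forecast)}")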

17 citations


Journal ArticleDOI
TL;DR: The hybrid system configuration is used for meeting the thermal and electrical load demands of an off-grid network simultaneously with the model proposed in this paper and several environmental and cost benefits are observed.
Abstract: The hybrid system configuration is used for meeting the thermal and electrical load demands of an off-grid network simultaneously with the model proposed in this paper. Li-ion battery, Micro Gas Turbine (MGT), wind turbine, and solar photovoltaic configurations are analyzed. Hybrid Optimization of Multiple Electric Renewables (HOMER) software is used for estimating the utilization of various strategies for power management, recovered waste heat, and excess energy in the model. The heating demand is met and examined by the thermal load controller with and without the option of waste heat recovery. The hybrid system hardware components are sized, compared, and analyzed based on cyclic charging (CC) and load following (LF) dispatch strategies. Various electrical-to-thermal load ratios are considered for examining the system performance. Various uncertainties and their effects are reported when comparing grid-connected and stand-alone options. The hardware components are reduced in size, and thereby appreciable cost benefits are observed in the results. In the optimized hybrid system, the renewable energy fraction is increased, causing high renewable penetration, and the CO2 emission is reduced by a large value. For all the configurations analyzed, several environmental and cost benefits are offered by the CC strategy.

13 citations


Journal ArticleDOI
TL;DR: In this article, the characteristics of pre-service teachers' 21st-century skill concepts and their compatibility with the contemporary 21stcentury skill lists, 21st century self-skills and to compare and discuss, in terms of curricula and their fields.
Abstract: The study aimed to determine the characteristics of pre-service teachers’ 21st-century skill concepts and their compatibility with the contemporary 21st-century skill lists, 21st-century self-skills and to compare and discuss, in terms of curricula and their fields. 71 pre-service science and 59 pre-service mathematics teachers were participated this phenomenological study. The statements by the participants were transformed into codes. These codes were categorized based on the framework for the 21st century skills. 21st-century skills codes with contemporary concepts relating to subcategories like “cognitive skills”, “process skills”, “communication and collaboration skills”, “initiative and self-direction skills”, “career skills”, and “technology knowledge/usage/production skills” indicate that teacher candidates are knowledgeable about 21st-century skills. Also the study found out that the greatest effects on the 21st-century skills of pre-service science and mathematics teachers are the curricula and the education they are taught. In this context, this research was based on the belief that determining the influence of pre-service teachers’ out-of-school and in-school trainings, their curricula, branches, etc. on their 21st-century skills will be guiding in terms of organizing curricula and environments of education.

Journal ArticleDOI
TL;DR: In this paper, the effect of inquiry-based instruction in the teaching of fundamental movement skills to fifth-grade students on the children's perceived motor competence was investigated using a post-test experimental design with control group, the study was carried out with 260 fifth grade students studying in ten different classes at five different schools located in the city centre of Manisa during the 2019-2020 academic year.
Abstract: The aim of this study is to investigate the effect of inquiry-based instruction in the teaching of fundamental movement skills to fifth-grade students on the children’s perceived motor competence. Utilizing a post-test experimental design with control group, the study was carried out with 260 fifth-grade students studying in ten different classes at five different schools located in the city centre of Manisa during the 2019-2020 academic year. For collection of the data, the “Perceived Motor Competence Questionnaire in Childhood” (PMC-C), and a “Personal Information Form” (PIF) developed by the researchers, were used. To test the effectiveness of the quasi-experimental process in the post-test design with control group, t-test was used. In the inquiry-based instruction in the teaching of fundamental movement skills of the students, a statistically significant difference was found in favour of the experimental group in the subscales of perceived motor competence. Regarding the gender variable of the students, a statistically significant difference was found between female and male students in favour of boys in the subscales of fundamental motor skills. In conclusion, it can be said that the inquiry-based instructional model was more effective than the direct instructional model in developing the fundamental motor skills of “locomotor skills” and “object control skills”. Moreover, when evaluated in terms of gender, male students benefited more from the inquiry-based instructional model in terms of “object control skills”.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed to connect the two temporal shift modules in a cascaded manner, which can expand the receptive field of the model and strengthen the model's spatial feature extraction capabilities.
Abstract: Violence behavior recognition is an important research scenario in behavior recognition and has broad application prospects in the fields of network information review and intelligent security. Inspired by the long short-term memory network, we surmise that the temporal shift module (TSM) may have more room for improvement in its ability to extract long-term temporal information. In order to verify this conjecture, we explored extensions of TSM. After many attempts, we finally propose connecting two TSMs in a cascaded manner, which can expand the receptive field of the model. In addition, an efficient channel attention module is introduced at the front end of the network, which strengthens the model’s spatial feature extraction capabilities. At the same time, because behavior recognition is prone to over-fitting, we extended and processed several open-source datasets to form a larger violence dataset and alleviate the over-fitting problem. The final experimental results show that the proposed algorithm can improve the model’s feature extraction ability for violent behavior in the spatial and temporal dimensions and realize the recognition of violent behavior, which verifies the above point of view.
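
A minimal sketch of a temporal shift module and of cascading two of them, following the publicly described TSM idea of shifting a fraction of channels forward and backward in time, is given below in PyTorch; the attention module, backbone, and training details are omitted, so this is not the paper's full model.

# Hedged sketch of a temporal shift module (TSM) and two TSMs cascaded.
import torch

def temporal_shift(x, fold_div=8):
    """x: (batch, time, channels, H, W); shift 1/fold_div of channels each way."""
    b, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels unchanged
    return out

def cascaded_tsm(x, fold_div=8):
    """Two temporal shifts applied back-to-back to enlarge the temporal field."""
    return temporal_shift(temporal_shift(x, fold_div), fold_div)

feats = torch.randn(2, 8, 64, 14, 14)   # (batch, frames, channels, H, W)
print(cascaded_tsm(feats).shape)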

Journal ArticleDOI
TL;DR: In this article, a CNN-based deep learning framework was proposed to improve the recognition performance of iris, ocular, and periocular modalities for off-angle images.
Abstract: Iris is one of the most well-known biometrics; it is a nonintrusive and contactless authentication technique with high accuracy, enhanced security, and unique distinctiveness. However, its dependence on image quality and its frontal image acquisition requirement limit its recognition performance and hinder its potential use in standoff applications. Standoff biometric systems require a less controlled environment than traditional systems, so their captured images will likely be nonideal, including off-angle. We present convolutional neural network (CNN)-based deep learning frameworks to improve the recognition performance of iris, ocular, and periocular biometric modalities for off-angle images. Our contribution is fourfold: first, the performances of popular AlexNet, GoogLeNet, and ResNet50 architectures are presented for off-angle biometrics. Second, we study the effect of the gaze angle difference between training and testing images on iris, ocular, and periocular recognitions. Third, we investigate the network behavior for untrained gaze angles and the information fusion capability of CNN networks at multiple off-angle images. Finally, deep learning-based results are compared with a traditional iris recognition algorithm using the gallery approach. Our results with off-angle images ranging from −50 deg to 50 deg in gaze angle show that the proposed methods improve the recognition performance of iris, ocular, and periocular recognition.
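
A minimal transfer-learning sketch in the spirit of the study (fine-tuning a torchvision ResNet50 for subject classification on eye-region crops) is shown below, assuming a recent torchvision with the weights API; the gallery size, data, and training schedule are placeholders, not the paper's protocol.

# Hedged sketch: fine-tuning a torchvision ResNet50 for iris/ocular identity
# classification (assumes torchvision >= 0.13 for the weights enum).
import torch
import torch.nn as nn
from torchvision import models

num_subjects = 100                        # illustrative gallery size
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_subjects)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# one toy training step on random tensors standing in for off-angle crops
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_subjects, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())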

Journal ArticleDOI
TL;DR: A frequency-based deconvolution module that enables the network to learn the global context while selectively reconstructing the high-frequency components is proposed that outperforms current state-of-the-art image inpainting techniques both qualitatively and quantitatively.
Abstract: We present an image inpainting technique using frequency-domain information. Prior works on image inpainting predict the missing pixels by training neural networks using only the spatial-domain information. However, these methods still struggle to reconstruct high-frequency details for real complex scenes, leading to a discrepancy in color, boundary artifacts, distorted patterns, and blurry textures. To alleviate these problems, we investigate if it is possible to obtain better performance by training the networks using frequency-domain information (discrete Fourier transform) along with the spatial-domain information. To this end, we propose a frequency-based deconvolution module that enables the network to learn the global context while selectively reconstructing the high-frequency components. We evaluate our proposed method on the publicly available datasets: celebFaces attribute (CelebA) dataset, Paris streetview, and describable textures dataset and show that our method outperforms current state-of-the-art image inpainting techniques both qualitatively and quantitatively.
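
As a rough illustration of training with frequency-domain information, the sketch below combines a spatial L1 loss with an L1 loss on FFT magnitudes; this shows the general idea only and is not the paper's frequency-based deconvolution module.

# Hedged sketch: combining a spatial-domain loss with a frequency-domain
# (FFT magnitude) loss for image reconstruction training.
import torch
import torch.nn.functional as F

def spatial_freq_loss(pred, target, alpha=0.5):
    """alpha weights the frequency term; the value is illustrative."""
    spatial = F.l1_loss(pred, target)
    pred_f = torch.fft.rfft2(pred)
    target_f = torch.fft.rfft2(target)
    frequency = F.l1_loss(torch.abs(pred_f), torch.abs(target_f))
    return spatial + alpha * frequency

pred = torch.rand(4, 3, 64, 64, requires_grad=True)
target = torch.rand(4, 3, 64, 64)
loss = spatial_freq_loss(pred, target)
loss.backward()
print("combined loss:", loss.item())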

Journal ArticleDOI
TL;DR: In this article, a set of methods that can be used to create explanation maps for a particular image, which assign an importance score to each pixel of the image based on its contribution to the decision of the network, are presented.
Abstract: In recent years, deep learning has become prevalent to solve applications from multiple domains. Convolutional neural networks (CNNs) particularly have demonstrated state-of-the-art performance for the task of image classification. However, the decisions made by these networks are not transparent and cannot be directly interpreted by a human. Several approaches have been proposed to explain the reasoning behind a prediction made by a network. We propose a topology of grouping these methods based on their assumptions and implementations. We focus primarily on white box methods that leverage the information of the internal architecture of a network to explain its decision. Given the task of image classification and a trained CNN, our work aims to provide a comprehensive and detailed overview of a set of methods that can be used to create explanation maps for a particular image, which assign an importance score to each pixel of the image based on its contribution to the decision of the network. We also propose a further classification of the white box methods based on their implementations to enable better comparisons and help researchers find methods best suited for different scenarios.
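
One of the simplest white-box explanation maps covered by such overviews is the vanilla gradient saliency map, where each pixel's importance score is the magnitude of the class score's gradient with respect to that pixel; a minimal PyTorch sketch follows, with a random tensor standing in for an input image (torchvision >= 0.13 assumed for the weights enum).

# Hedged sketch: a vanilla gradient saliency map as a simple white-box
# explanation method; each pixel's score is the input-gradient magnitude.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                          # gradient of the top class score

saliency = image.grad.abs().max(dim=1)[0]                # max over RGB channels -> (1, H, W)
print("explanation map shape:", saliency.shape)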

Journal ArticleDOI
TL;DR: A novel weighted feature fusion HRNet is designed to achieve higher detection precision; HRNet is used as the backbone to maintain a high-resolution feature representation throughout the whole process, rather than generating it by upsampling as in HourglassNet.
Abstract: Recently, anchor-free methods have brought new ideas to the field of object detection that eliminate the need for anchor boxes in object detection and provide a simpler detection structure. CenterNet is the representative anchor-free method. However, this method still has the problem of obtaining high-resolution representation from low-resolution representation using upsampling, and the predicted heatmap is not accurate enough in space and does not make full use of the shallow low-level features of the network. We introduce CenterNet-HRA to solve this problem. An attention module is proposed to calibrate the high-level semantic features of the network output using the shallow low-level features from different receptive fields; HRNet is used as the backbone to maintain high-resolution feature representation through the whole process rather than using upsampling to generate high-resolution feature representation as HourglassNet. Considering that the feature representations with different resolutions have different contributions to the network but HRNet fuses them without distinction, a novel weighted feature fusion HRNet is designed to achieve higher detection precision. Our method achieves an average precision (AP) of 42.3% at 13.5 frames-per-second (FPS) (40.3% AP at 13.3 FPS for CenterNet-HG) on the MS-COCO benchmark.
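
The core idea of weighting multi-resolution branches before fusing them can be sketched as learnable per-branch weights applied after resizing, as below; this is an illustrative stand-in, not the exact CenterNet-HRA fusion module.

# Hedged sketch: weighted fusion of multi-resolution feature maps with
# learnable per-branch weights. Shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    def __init__(self, num_branches):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_branches))

    def forward(self, feats):
        """feats: list of (B, C, H_i, W_i); fuse at the highest resolution."""
        target = feats[0].shape[-2:]
        w = torch.softmax(self.weights, dim=0)     # normalized branch weights
        fused = sum(w[i] * F.interpolate(f, size=target, mode="bilinear",
                                         align_corners=False)
                    for i, f in enumerate(feats))
        return fused

branches = [torch.randn(1, 32, 128, 128), torch.randn(1, 32, 64, 64),
            torch.randn(1, 32, 32, 32)]
print(WeightedFusion(len(branches))(branches).shape)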

Journal ArticleDOI
TL;DR: The proposed E-ProSRGAN model generates SR samples with better high-frequency details and perception measures than that of the other existing GAN-based SISR methods with significant reduction in the number of training parameters for larger upscaling factor.
Abstract: Single-image super-resolution (SISR) refers to reconstructing a high-resolution image from a given low-resolution observation. Recently, convolutional neural network (CNN)-based SISR methods have achieved remarkable results in terms of peak signal-to-noise ratio and structural similarity measures. These models use pixel-wise loss functions to optimize their models, which results in blurry images. However, the generative adversarial network (GAN) has the ability to generate visually plausible solutions. The different GAN-based SISR methods obtain perceptually better SR results when compared to the existing CNN-based methods. However, the existing GAN-based SISR methods need a large number of training parameters in the architecture to obtain better SR performance, which makes them unsuitable for many real-world applications. We propose a computationally efficient enhanced progressive approach for the SISR task using GAN, which we refer to as E-ProSRGAN. In the proposed method, we introduce a novel design of residual block called the enhanced parallel densely connected residual network, which helps to obtain better SR performance with a smaller number of training parameters. The quantitative performance of the proposed E-ProSRNet (i.e., the generator network of E-ProSRGAN) model is better for the higher upscaling factors ×3 and ×4 for most datasets when compared to that obtained using different CNN-based methods whose trainable parameters number less than 7 M. In the case of upscaling factor ×2, E-ProSRNet obtains the second-highest structural similarity index measure values for the Set5 and BSD100 datasets. The proposed E-ProSRGAN model generates SR samples with better high-frequency details and perception measures than the other existing GAN-based SISR methods, with a significant reduction in the number of training parameters for larger upscaling factors.

Journal ArticleDOI
TL;DR: A semisupervised target track recognition algorithm based on a semisuPervised generative adversarial network (SSGAN) that learns a robust model from a few labeled target track examples with the presence of outliers is proposed.
Abstract: Gradually crowded, complex airspace makes it necessary to identify the flight track patterns of interested targets. Existing studies on radar-based target track recognition rarely consider the impact of outliers in the acquired data, which happens very often for small air vehicles such as drones. In addition, the performance achieved with a few labeled track examples has significant room for improvement. We propose a semisupervised target track recognition algorithm based on a semisupervised generative adversarial network (SSGAN) that learns a robust model from a few labeled target track examples with the presence of outliers. Our method identifies and eliminates the outliers in the data set and fills in for the removed data. The proposed method extracts a strong recognition flight feature from the basic flight features and forms the strong recognition flight feature combination (SRFFC) by integrating the advanced flight features. The SRFFC is fed into the SSGAN model to identify target track patterns. Experiments were conducted using simulated data sets. Our results demonstrate that the proposed method achieves a highly competitive target track recognition performance in terms of accuracy, precision, and recall in comparison with the state-of-the-art methods. The minimum accuracy of our proposed method is 97%, which achieves an improvement of 15.7% compared with the state-of-the-art methods. In addition, our method exhibits great robustness with respect to the number of labeled data and choice of parameters.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between innovation and entrepreneurship skills of individuals who receive sports education according to different variables and found that as the innovation skills of students studying at institutions providing sports education increased, their level of entrepreneurship also increased.
Abstract: It is a fact that the concept of innovation, which is a necessity in every field today, is now indispensable in the sports sector. In particular, determining the relationship between the innovation skills of students in sports education institutions and the entrepreneurship of those students, who are candidates to work in the sports sector, is thought to be important for developing innovation awareness in those who will work in this field in the future. The aim of this study is to examine the relationship between the innovation and entrepreneurship skills of individuals who receive sports education according to different variables. The study group was formed by the voluntary participation of 240 people (161 males, 67.1%; 79 females, 32.9%) studying at the School of Physical Education and Sports at Istanbul Gelisim University, selected by the purposeful sampling method. In addition to the personal information form, the Individual Innovativeness Scale (IIS) developed by Hurt et al. (1977) and adapted into Turkish by Sarioglu (2014), and the Entrepreneurship Scale (ES) developed by Yilmaz and Sunbul (2009) to measure the entrepreneurship levels of university students, were used as data collection tools. After the data showed normal distribution, the t-test, ANOVA, and Pearson correlation test were used in the analyses, and the Tukey test was used to determine the difference between the groups. According to the results, the innovation skills and entrepreneurship levels of the individuals varied according to different variables. As a result, it was determined that as the innovation skills of students studying at institutions providing sports education increased, their level of entrepreneurship also increased.

Journal ArticleDOI
TL;DR: The automatic color equalization (ACE) algorithm as mentioned in this paper was proposed in 2002, which mimics the color and contrast adjustment of the human visual system (HVS) and has been widely used in the field of image enhancement.
Abstract: Digital image processing is at the base of everyday applications aiding humans in several fields, such as underwater monitoring, analysis of cultural heritage drawings, and medical imaging for computer-aided diagnosis. The starting point of all such applications is the image enhancement step. A desirable image enhancement step should simultaneously standardize the illumination in the image set, possibly removing bad or non-uniform illumination effects, and reveal all hidden details. In 2002, a successful perceptual image enhancement model, the automatic color equalization (ACE) algorithm, was proposed, which mimics the color and contrast adjustment of the human visual system (HVS). Given its widespread usage, its correlation with the HVS, and its ease of implementation, we propose a scoping review to identify and classify the available evidence on ACE, starting from the papers citing the two founding papers on the algorithm. The aim of this work is to identify to what extent and in which ways ACE has influenced research in the color imaging field. Thanks to an accurate process of paper tagging, classification, and validation, we provide an overview of the main application domains in which ACE was successfully used and of the different ways in which this algorithm was implemented, modified, used, or compared.
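
For readers unfamiliar with ACE, a brute-force, simplified sketch of its spatial color/contrast adjustment is given below: each pixel accumulates distance-weighted, saturated differences to the other pixels and the result is rescaled. Real ACE implementations use faster approximations; the slope value and toy image here are illustrative only.

# Hedged sketch: simplified, brute-force ACE-like adjustment on a small
# grayscale image (O(N^2), for illustration only).
import numpy as np

def ace_simplified(img, slope=5.0):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = img.ravel().astype(float)
    out = np.zeros_like(vals)
    for p in range(vals.size):
        d = np.linalg.norm(coords - coords[p], axis=1)
        d[p] = np.inf                                      # skip the pixel itself
        r = np.clip(slope * (vals[p] - vals), -1.0, 1.0)   # saturation function
        out[p] = np.sum(r / d)                             # distance-weighted sum
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return out.reshape(h, w)

img = np.linspace(0, 1, 16 * 16).reshape(16, 16)   # toy gradient image
print(ace_simplified(img).shape)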

Journal ArticleDOI
TL;DR: In this paper, a study aimed to determine the relationship between leisure satisfaction and social media addiction of university students, and the level of significance in the study was set at 0.05.
Abstract: This study aimed to determine the relationship between the leisure satisfaction and social media addiction of university students. The study group was formed by the voluntary participation of 193 students (133 male and 60 female) studying at the School of Physical Education and Sports of Istanbul Gelisim University. In addition to the personal information form, the “Leisure Satisfaction Scale (LSS)” developed by Beard and Ragheb (1980) and adapted into Turkish by Gokce and Orhan (2011), and the “Social Media Addiction Scale (SMAS)” developed by Bakir Aygar and Uzun (2018), were used as data collection tools. After the data showed normal distribution in the Kolmogorov-Smirnov normality test, the t-test, ANOVA, and Pearson correlation test were used in the analysis. The level of significance in the study was set at 0.05. According to the research findings, the gender and age group of individuals affect their leisure satisfaction levels; it was also found that age group affects social media addiction. As a result, it was determined that leisure satisfaction levels and social media addiction varied according to various characteristics of the university students, and a negative significant relationship was found between leisure satisfaction and social media addiction.

Journal ArticleDOI
Ahmet Atlı
TL;DR: In this paper, a 6-week core training program that was applied to football players improved the performance of vertical jump, 30-m speed, agility, and flexibility of players.
Abstract: In this study, it was aimed to examine the effect of a core training program applied to football players on some performance parameters. In total, 40 football players aged between 18 and 24 years, who regularly trained in football and came from various amateur football teams, participated: 20 athletes in the training group and 20 athletes in the control group. Pre-test measurements of the athletes' vertical jump, 30-m speed, agility, and flexibility were taken; after the 6-week core training program, which was applied three days a week, post-test measurements were taken. The training group performed the core training in addition to football training for 6 weeks, whereas the participants in the control group did not follow any training program other than their ongoing football training. The SPSS 22 statistics program was used to evaluate the data and the Shapiro-Wilk test to determine the normality of its distribution. Owing to the normal distribution of the data, a paired t-test was used to compare the pre-test and post-test values within the groups, and the confidence level for statistical tests was accepted as p < 0.05. A statistically significant difference was found in the 30-m speed pre-test and post-test values of the training group (p < 0.05). A statistically significant difference was found in the agility pre-test and post-test values of the training group (p < 0.05). Considering the in-group flexibility pre-test and post-test comparisons, a statistically significant difference was found in the flexibility pre-test and post-test values of the training group (p < 0.05). Based on the results of the present research, the 6-week core training program applied to football players improved vertical jump, 30-m speed, agility, and flexibility performance.

Journal ArticleDOI
TL;DR: Radial rural distribution networks with distributed generators (DGs) make use of off-voltage tap-changing transformers, whose ideal tap positions can be determined using the novel estimation technique proposed in this paper.
Abstract: Radial rural distribution networks with distributed generators (DGs) make use of off-voltage tap-changing transformers. Ideal tap-changer positions for these transformers can be determined using the novel estimation technique proposed in this paper. In this technique, a branchy low-voltage network is reduced to its equivalent line, together with the use of spatial network decomposition. An evolutionary algorithm is used to determine the ideal voltage module values of the PV nodes in the ideal seasonal control plan. The distribution networks used for modelling the proposed system are a radial 40-node network with 3 DGs incorporated at PQ nodes and a radial 33-node network with 10 DGs incorporated at PQ nodes.

Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a robust license plate detection and recognition (LPDR) framework with automatic rectification, where a spatial transformation network with thin-plate-spline transformation was introduced to solve the problem in which the LP tilt and distortion affect recognition accuracy.
Abstract: We propose a robust license plate detection and recognition (LPDR) framework with automatic rectification. We explore the YOLOv2 object detector based on deep learning and train it to detect license plates (LPs) effectively. The LPs in natural scene images tend to be tilted and distorted because of the shooting angle or the geometric deformation of LPs. To solve the problem in which the LP tilt and distortion affect recognition accuracy, we introduce a spatial transformation network with thin-plate-spline transformation and propose a neural network called inverse compositional spatial transformer network-hierarchical spatial transformer network (ICSTN-CRNN). ICSTN-CRNN can automatically rectify and recognize LPs. Furthermore, we manually supplement the LP character annotations in PKUData. Our LPDR method achieves satisfactory results on three datasets, including Chinese City Parking Dataset, PKUData, and application-oriented license plate. Through a series of comparative experiments, we prove that our method is more accurate than other advanced methods.

Journal ArticleDOI
TL;DR: Both man-made impulse noise and thermal Gaussian noise are examined in this proposed study to determine the performance of blind eigenvalue-based spectrum sensing.
Abstract: One of the most crucial roles of cognitive radio (CR) is the detection of spectrum ‘holes’. The ‘no a-priori knowledge required’ aspect of blind detection techniques, which use simple eigenvalues, has attracted the attention of researchers and industry. Over the years, a number of studies have been carried out to determine the impact of thermal noise on detector performance; however, there has not been much work on the impact of man-made noise, which also hinders the detector. As a result, both man-made impulse noise and thermal Gaussian noise are examined in this study to determine the performance of blind eigenvalue-based spectrum sensing. Many previous studies have relied on long sample lengths obtained by oversampling or by increasing the sensing duration; in contrast, this work makes progress on shorter sample lengths using a novel algorithm. The proposed system utilizes three algorithms: contra-harmonic-mean minimum eigenvalue, contra-harmonic-mean maximum eigenvalue, and maximum-eigenvalue harmonic mean. For smaller sample lengths, there is a substantial rise in the number of cooperative secondary users, as well as a low signal-to-noise ratio, when employing the maximum-eigenvalue harmonic mean. The experimental analysis of the proposed work with respect to impulse noise and Gaussian signals over a Nakagami-m fading channel is performed and the results are tabulated.
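
A minimal sketch of the general eigenvalue-based blind sensing idea follows: form the sample covariance across cooperating secondary users, compute an eigenvalue statistic (here a contra-harmonic mean compared against the minimum eigenvalue, loosely inspired by the algorithm names above), and compare it to a threshold. The threshold, signal, and noise model are assumptions, not the paper's derivation.

# Hedged sketch of eigenvalue-based blind spectrum sensing.
import numpy as np

def sense(received, threshold=1.5):
    """received: (num_users, num_samples) complex baseband samples."""
    cov = received @ received.conj().T / received.shape[1]
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]     # descending eigenvalues
    contra_harmonic = np.sum(eig**2) / np.sum(eig)   # contra-harmonic mean
    statistic = contra_harmonic / eig[-1]            # compared with minimum eigenvalue
    return statistic > threshold, statistic

rng = np.random.default_rng(0)
users, n = 4, 500
noise = (rng.normal(size=(users, n)) + 1j * rng.normal(size=(users, n))) / np.sqrt(2)
primary = 0.8 * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))   # common PU signal
print("noise only  :", sense(noise))
print("signal+noise:", sense(noise + primary))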

Journal ArticleDOI
TL;DR: In this article, a study was designed to examine the relationship between prospective pre-school teachers' attitudes towards science education and their learning styles, which was designed as correlational survey model.
Abstract: The knowledge, skills, and attitudes of prospective pre-school teachers towards science education enable more effective classroom practices and science teaching. Teaching scientific processes at an early age affects students’ attitudes towards science in the coming years. In this context, this study was designed to examine the relationship between prospective pre-school teachers’ attitudes towards science education and their learning styles. The study was designed as a correlational survey model. The sample of this study consists of 193 (165 female, 28 male) prospective pre-school teachers studying in the first, second, third, and fourth years of the faculty of education of a state university. The data were collected using “The Science Teaching Attitude Scale” developed by Thompson and Shringley (1986) and adapted into Turkish by Ozkan, Tekkaya, and Cakiroglu (2002), and the “Kolb Learning Style Inventory” developed by Kolb (1984) and adapted into Turkish by Evin Gencel (2007), in the spring semester of the 2019-2020 academic year. Descriptive and predictive statistical analyses were used in the statistical calculations of the data obtained in the study. As a result of the study, it was found that there was no statistically significant difference in the attitudes of prospective teachers towards science education according to their learning styles. In addition, it was determined that prospective pre-school teachers developed a positive attitude towards science education and had different learning styles. Based on the results, suggestions have been made regarding the organization of learning environments according to learning styles and studies that will increase attitude levels towards science education.

Journal ArticleDOI
TL;DR: A method for synthesizing wood’s heterogeneous texture that can analyze the characteristics of different wood textures, select the most appropriate input sample block size, and then generate a new image with a sample texture is proposed.
Abstract: Currently, there is an increasing demand for wooden furniture and products, especially in the decorating business where wood textures are widely used. These textures have different artificial designs, and because different consumers have different needs for wood texture, the trend of using computer algorithms to design wood textures emerged. We propose a method for synthesizing wood’s heterogeneous texture. It can analyze the characteristics of different wood textures, select the most appropriate input sample block size, and then generate a new image with a sample texture. Compared with the deep learning method, our method reduces pressure on system resources and production costs. The proposed method also generates higher quality reconstructed images than traditional algorithms.
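
A minimal sketch of the simplest form of sample-based texture synthesis (randomly tiling square blocks taken from the input sample) is shown below; the paper's method additionally analyzes the texture to pick the block size and handles heterogeneity, which this illustration omits.

# Hedged sketch: random tiling of square blocks from an input texture sample.
import numpy as np

def tile_synthesis(sample, out_shape, block=32, seed=0):
    rng = np.random.default_rng(seed)
    h, w = sample.shape[:2]
    out = np.zeros(out_shape, dtype=sample.dtype)
    for y in range(0, out_shape[0], block):
        for x in range(0, out_shape[1], block):
            sy = rng.integers(0, h - block + 1)            # random source block
            sx = rng.integers(0, w - block + 1)
            patch = sample[sy:sy + block, sx:sx + block]
            out[y:y + block, x:x + block] = patch[:out_shape[0] - y,
                                                  :out_shape[1] - x]
    return out

sample = np.random.rand(128, 128)        # stand-in for a wood-texture sample
synth = tile_synthesis(sample, (256, 256))
print(synth.shape)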

Journal ArticleDOI
TL;DR: Computerized perception algorithms give measurable indicators that may be used to determine the severity of OA from images in an automated and systematic manner; the study of knee radiography and its quantitative analysis is reviewed in this paper.
Abstract: The most common orthopedic illness worldwide, osteoarthritis (OA), mainly affects the hand, hip, and knee joints. OA invariably leads to surgical intervention, which is a huge burden on both the individual and society. There are numerous risk factors that contribute to OA, although its pathogenesis and molecular basis are not fully understood at this time. OA is presently identified through clinical analyses and, if required, corroborated through imaging, such as a radiography study. These traditional methods, however, are not sensitive enough to detect the beginning phases of OA, making preventive intervention for the disease problematic. As a result, other approaches that might permit the timely identification of OA are needed. Computerized perception algorithms give measurable indicators that may be used to determine the severity of OA from images in an automated and systematic manner. The study of knee radiography and its quantitative analysis is reviewed in this paper.

Journal ArticleDOI
Boxuan Li, Benfei Wang, Xiaojun Tan, Jiezhang Wu, Liangliang Wei
TL;DR: In this article, a novel method to detect, recognize, and extract the location points of single ArUco marker based on convolutional neural networks (CNN) was proposed, which achieved a high mean average precision exceeding 0.9 in the coverless test set and over 0.4 under corner coverage.
Abstract: The ArUco marker is one of the most popular squared fiducial markers used for precise location acquisition during autonomous unmanned aerial vehicle (UAV) landings. This paper presents a novel method to detect, recognize, and extract the location points of a single ArUco marker based on convolutional neural networks (CNN). YOLOv3 and YOLOv4 networks are applied for end-to-end detection and recognition of ArUco markers under occlusion. A custom lightweight network is employed to increase the processing speed. The bounding box regression mechanism of the YOLO algorithm is modified to locate the four corners of each ArUco marker and classify markers irrespective of orientation. The deep-learning method achieves a high mean average precision exceeding 0.9 on the coverless test set and over 0.4 under corner coverage, whereas the traditional algorithm fails under the occlusion condition. The custom lightweight network notably speeds up the prediction process with an acceptable decline in performance. The proposed bounding box regression mechanism can locate marker corners with less than 3% average distance error per corner without coverage and less than 8% average distance error under corner occlusion.
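
The classical, non-learning baseline mentioned above can be reproduced with OpenCV's aruco module; a sketch follows, assuming opencv-contrib-python 4.7 or newer (whose ArUco API differs from earlier releases), with a synthetically generated marker standing in for a real landing-pad image.

# Hedged sketch: classical ArUco detection and corner extraction with OpenCV
# (assumes opencv-contrib-python >= 4.7).
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# build a synthetic test image containing marker id 7
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
image = np.full((400, 400), 255, dtype=np.uint8)
image[100:300, 100:300] = marker

detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, rejected = detector.detectMarkers(image)

if ids is not None:
    for marker_id, c in zip(ids.ravel(), corners):
        print(f"marker {marker_id}: corners\n{c.reshape(4, 2)}")
else:
    print("no markers detected")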