
Showing papers on "Selection (genetic algorithm) published in 2020"


Journal ArticleDOI
TL;DR: A simple new approach to variable selection in linear regression, with a particular focus on quantifying uncertainty in which variables should be selected, is introduced, based on a new 'sum of single effects' model, called 'SuSiE'.
Abstract: We introduce a simple new approach to variable selection in linear regression, with a particular focus on quantifying uncertainty in which variables should be selected. The approach is based on a new model—the ‘sum of single effects’ model, called ‘SuSiE’—which comes from writing the sparse vector of regression coefficients as a sum of ‘single‐effect’ vectors, each with one non‐zero element. We also introduce a corresponding new fitting procedure—iterative Bayesian stepwise selection (IBSS)—which is a Bayesian analogue of stepwise selection methods. IBSS shares the computational simplicity and speed of traditional stepwise methods but, instead of selecting a single variable at each step, IBSS computes a distribution on variables that captures uncertainty in which variable to select. We provide a formal justification of this intuitive algorithm by showing that it optimizes a variational approximation to the posterior distribution under SuSiE. Further, this approximate posterior distribution naturally yields convenient novel summaries of uncertainty in variable selection, providing a credible set of variables for each selection. Our methods are particularly well suited to settings where variables are highly correlated and detectable effects are sparse, both of which are characteristics of genetic fine mapping applications. We demonstrate through numerical experiments that our methods outperform existing methods for this task, and we illustrate their application to fine mapping genetic variants influencing alternative splicing in human cell lines. We also discuss the potential and challenges for applying these methods to generic variable‐selection problems.

350 citations
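
To make the "sum of single effects" idea concrete, the following is a minimal Python sketch of a simplified IBSS update, assuming centred data and fixed variance hyperparameters (all names and values are illustrative assumptions, not the authors' reference implementation, which is the susieR package): each of L single-effect vectors is refit by a Bayesian single-effect regression on the residual left by the other effects, yielding a posterior distribution (alpha) over which variable carries that effect.

```python
import numpy as np

def single_effect_regression(X, r, sigma2=1.0, sigma0_2=1.0):
    """Bayesian single-effect regression of residual r on the columns of X.

    Returns posterior inclusion probabilities (alpha) and posterior mean
    effects (mu), assuming exactly one coefficient is non-zero.
    """
    xtx = np.sum(X ** 2, axis=0)              # x_j' x_j for each variable
    bhat = X.T @ r / xtx                      # per-variable least-squares effect
    s2 = sigma2 / xtx                         # sampling variance of bhat
    # log Bayes factor: "variable j carries the single effect" vs. the null
    logbf = 0.5 * np.log(s2 / (s2 + sigma0_2)) + 0.5 * bhat ** 2 / s2 * sigma0_2 / (sigma0_2 + s2)
    alpha = np.exp(logbf - logbf.max())
    alpha /= alpha.sum()                      # posterior inclusion probabilities
    post_var = 1.0 / (1.0 / s2 + 1.0 / sigma0_2)
    mu = post_var * bhat / s2                 # posterior mean effect, given inclusion
    return alpha, mu

def ibss(X, y, L=5, n_iter=50):
    """Simplified iterative Bayesian stepwise selection (IBSS) sketch."""
    p = X.shape[1]
    alpha, mu = np.full((L, p), 1.0 / p), np.zeros((L, p))
    for _ in range(n_iter):
        for l in range(L):
            b_others = np.sum(alpha * mu, axis=0) - alpha[l] * mu[l]
            alpha[l], mu[l] = single_effect_regression(X, y - X @ b_others)
    return alpha, mu   # one distribution over variables per single effect
```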


Journal ArticleDOI
TL;DR: This paper focuses on the importance of including appropriate variables, following the proper steps, and adopting the proper methods when selecting variables for prediction models.
Abstract: Clinical prediction models are used frequently in clinical practice to identify patients who are at risk of developing an adverse outcome so that preventive measures can be initiated. A prediction model can be developed in a number of ways; however, an appropriate variable selection strategy needs to be followed in all cases. Our purpose is to introduce readers to the concept of variable selection in prediction modelling, including the importance of variable selection and variable reduction strategies. We will discuss the various variable selection techniques that can be applied during prediction model building (backward elimination, forward selection, stepwise selection and all possible subset selection), and the stopping rule/selection criteria in variable selection (p values, Akaike information criterion, Bayesian information criterion and Mallows’ Cp statistic). This paper focuses on the importance of including appropriate variables, following the proper steps, and adopting the proper methods when selecting variables for prediction models.

284 citations
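
As an illustration of one of the strategies discussed (forward selection with an information-criterion stopping rule), the Python sketch below adds, at each step, the candidate predictor that most lowers the AIC of an ordinary least-squares model and stops when no candidate improves it; the simulated data and column names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select_aic(X: pd.DataFrame, y: pd.Series):
    """Forward selection for linear regression using AIC as the stopping rule."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic    # intercept-only model
    while remaining:
        scores = []
        for col in remaining:
            design = sm.add_constant(X[selected + [col]])
            scores.append((sm.OLS(y, design).fit().aic, col))
        aic, col = min(scores)                          # candidate with the lowest AIC
        if aic >= best_aic:                             # stop: no improvement
            break
        best_aic, selected = aic, selected + [col]
        remaining.remove(col)
    return selected

# Hypothetical usage on simulated data
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 6)), columns=[f"x{i}" for i in range(6)])
y = 2 * X["x0"] - 1.5 * X["x3"] + rng.normal(size=200)
print(forward_select_aic(X, y))    # typically ['x0', 'x3']
```

Swapping `.aic` for `.bic` gives the BIC-based stopping rule also mentioned in the paper.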


Journal ArticleDOI
27 Mar 2020-Science
TL;DR: It is shown that positive selection, not drift, is the major force shaping clonal hematopoiesis; bounds on the number of hematopoietic stem cells are provided, together with the fitness advantages of key pathogenic variants, at single-nucleotide resolution, and the distribution of fitness effects within commonly mutated driver genes.
Abstract: Somatic mutations acquired in healthy tissues as we age are major determinants of cancer risk. Whether variants confer a fitness advantage or rise to detectable frequencies by chance remains largely unknown. Blood sequencing data from ~50,000 individuals reveal how mutation, genetic drift, and fitness shape the genetic diversity of healthy blood (clonal hematopoiesis). We show that positive selection, not drift, is the major force shaping clonal hematopoiesis, provide bounds on the number of hematopoietic stem cells, and quantify the fitness advantages of key pathogenic variants, at single-nucleotide resolution, as well as the distribution of fitness effects (fitness landscape) within commonly mutated driver genes. These data are consistent with clonal hematopoiesis being driven by a continuing risk of mutations and clonal expansions that become increasingly detectable with age.

264 citations


Journal ArticleDOI
TL;DR: A new method to binarize a continuous pigeon-inspired optimizer is proposed and compared to the traditional way of binarizing continuous swarm intelligence algorithms.
Abstract: Feature selection plays a vital role in building machine learning models. Irrelevant features in data affect the accuracy of the model and increase the training time needed to build the model. Feature selection is an important process in building an Intrusion Detection System (IDS). In this paper, a wrapper feature selection algorithm for IDS is proposed. This algorithm uses the pigeon-inspired optimizer to drive the selection process. A new method to binarize a continuous pigeon-inspired optimizer is proposed and compared to the traditional way of binarizing continuous swarm intelligence algorithms. The proposed algorithm was evaluated using three popular datasets: KDDCUP99, NSL-KDD and UNSW-NB15. The proposed algorithm outperformed several feature selection algorithms from state-of-the-art related works in terms of TPR, FPR, accuracy, and F-score. Also, the proposed cosine similarity method for binarizing the algorithm has faster convergence than the sigmoid method.

206 citations
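
For readers unfamiliar with binarizing a continuous swarm-intelligence algorithm, the sketch below shows the traditional sigmoid transfer approach that the paper compares against: each continuous position component is squashed to a probability and thresholded to decide whether the corresponding feature is selected. This is a generic illustration, not the authors' cosine-similarity method.

```python
import numpy as np

def sigmoid_binarize(position, rng=None):
    """Traditional transfer-function binarization of a continuous position vector.

    Each component is mapped into (0, 1) with a sigmoid and compared with a
    uniform random number; a 1 means the corresponding feature is selected.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = 1.0 / (1.0 + np.exp(-np.asarray(position, dtype=float)))
    return (rng.random(len(probs)) < probs).astype(int)

# Hypothetical usage: a continuous pigeon position over 10 candidate features
position = np.array([2.1, -0.3, 0.8, -2.5, 0.0, 1.7, -1.1, 0.4, 3.0, -0.9])
mask = sigmoid_binarize(position, np.random.default_rng(42))
print(mask)   # 0/1 flags; the 1s index the features kept for the wrapper evaluation
```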


Journal ArticleDOI
TL;DR: This paper focuses only on selection hyper-heuristics and presents a critical discussion, current research trends, and directions for future research; the existing classification of selection hyper-heuristics is extended in order to reflect the nature of the challenges faced in contemporary research.

178 citations


Journal ArticleDOI
TL;DR: In recent years, the terms machine learning, data science, and predictive modeling have become ubiquitous in nearly every discipline in which data analysis plays a central role, as discussed by the authors.
Abstract: In recent years, the terms machine learning, data science, and predictive modeling have become ubiquitous in nearly every discipline in which data analysis plays a central role. When first diving i...

155 citations


Journal ArticleDOI
TL;DR: Social media search indexes (SMSI) for dry cough, fever, chest distress, coronavirus, and pneumonia were collected from 31 December 2019 to 9 February 2020, and SMSI findings on lag day 10 were significantly correlated with new confirmed COVID-19 cases, suggesting SMSI could be a significant predictor of the number of COVID-19 infections.
Abstract: Predicting the number of new suspected or confirmed cases of novel coronavirus disease 2019 (COVID-19) is crucial in the prevention and control of the COVID-19 outbreak. Social media search indexes (SMSI) for dry cough, fever, chest distress, coronavirus, and pneumonia were collected from 31 December 2019 to 9 February 2020. The new suspected cases of COVID-19 data were collected from 20 January 2020 to 9 February 2020. We used the lagged series of SMSI to predict new suspected COVID-19 case numbers during this period. To avoid overfitting, five methods, namely subset selection, forward selection, lasso regression, ridge regression, and elastic net, were used to estimate coefficients. We selected the optimal method to predict new suspected COVID-19 case numbers from 20 January 2020 to 9 February 2020. We further validated the optimal method for new confirmed cases of COVID-19 from 31 December 2019 to 17 February 2020. The new suspected COVID-19 case numbers correlated significantly with the lagged series of SMSI. SMSI could be detected 6-9 days earlier than new suspected cases of COVID-19. The optimal method was the subset selection method, which had the lowest estimation error and a moderate number of predictors. The subset selection method also significantly correlated with the new confirmed COVID-19 cases after validation. SMSI findings on lag day 10 were significantly correlated with new confirmed COVID-19 cases. SMSI could be a significant predictor of the number of COVID-19 infections. SMSI could be an effective early predictor, which would enable governments' health departments to locate potential and high-risk outbreak areas.

149 citations
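
A minimal sketch of the kind of penalized-regression comparison described, assuming the search indexes and case counts are available as daily NumPy arrays (names and the 9-day lag are placeholders): build lagged predictors and fit ridge, lasso, and elastic-net models with cross-validated penalties using scikit-learn. The paper's preferred subset-selection variant is not shown.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV

rng = np.random.default_rng(1)
days, lag = 60, 9
smsi = rng.random((days, 5))                 # 5 daily search indexes (cough, fever, ...)
cases = np.empty(days)                       # synthetic case counts driven by index 0
cases[:lag] = 50
cases[lag:] = 50 + 30 * smsi[:-lag, 0] + rng.normal(0, 2, days - lag)

X = smsi[:-lag]                              # indexes observed `lag` days earlier
y = cases[lag:]                              # the later case counts they may predict

models = {
    "ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
    "lasso": LassoCV(cv=5, random_state=0),
    "elastic net": ElasticNetCV(cv=5, random_state=0),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "R^2 =", round(model.score(X, y), 3))
```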


Journal ArticleDOI
TL;DR: It is proven that the fourth measure, called relative neighborhood self-information, is better for feature selection than the other measures, because it not only considers both the lower and the upper approximations but also changes most in magnitude as feature subsets vary.
Abstract: The concept of dependency in a neighborhood rough set model is an important evaluation function for feature selection. This function considers only the classification information contained in the lower approximation of the decision while ignoring the upper approximation. In this paper, we construct a class of uncertainty measures, decision self-information, for feature selection. These measures take into account the uncertainty information in both the lower and the upper approximations. The relationships between these measures and their properties are discussed in detail. It is proven that the fourth measure, called relative neighborhood self-information, is better for feature selection than the other measures, because it not only considers both the lower and the upper approximations but also changes most in magnitude as feature subsets vary. This helps to facilitate the selection of optimal feature subsets. Finally, a greedy algorithm for feature selection was designed and a series of numerical experiments was carried out to verify the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm often chooses fewer features and improves the classification accuracy in most cases.

147 citations
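
The greedy search the authors describe can be illustrated generically: starting from an empty subset, repeatedly add the feature whose inclusion most improves an evaluation function, and stop when no feature helps. In the sketch below the evaluation function is cross-validated k-NN accuracy, a stand-in for the paper's relative neighborhood self-information measure; all data are simulated.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def greedy_forward_selection(X, y, evaluate, min_gain=1e-3):
    """Greedily add the feature that most improves evaluate(subset)."""
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining:
        score, j = max((evaluate(selected + [j]), j) for j in remaining)
        if score - best < min_gain:            # stop when the gain is negligible
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best

# Hypothetical evaluation: mean cross-validated accuracy of a 3-NN classifier
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))
y = (X[:, 1] + X[:, 4] > 0).astype(int)        # only features 1 and 4 matter

def evaluate(cols):
    return cross_val_score(KNeighborsClassifier(3), X[:, cols], y, cv=5).mean()

print(greedy_forward_selection(X, y, evaluate))   # typically picks features 1 and 4
```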


Journal ArticleDOI
TL;DR: This paper reviews the current state of the art in electric load forecasting technologies and presents recent works pertaining to the combination of two or more ML algorithms for the construction of hybrid models.
Abstract: Load forecasting is a pivotal task for power utility companies. To provide load-shedding-free and uninterrupted power to the consumer, decision-makers in the utility sector must forecast the future demand for electricity with a minimum error percentage. Load prediction with a low percentage of error can save millions of dollars for the utility companies. There are numerous Machine Learning (ML) techniques to forecast electricity demand, among which the hybrid models show the best results. Two or more predictive models are amalgamated to design a hybrid model, which provides improved performance by drawing on the merits of the individual algorithms. This paper reviews the current state of the art in electric load forecasting technologies and presents recent works pertaining to the combination of different ML algorithms into hybrid models. A comprehensive study of each single and hybrid load forecasting model is performed with an in-depth analysis of their advantages, disadvantages, and functions. A comparison of their performance in terms of Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE) values is developed against the pertinent literature on several models, to aid researchers in the selection of suitable models for load prediction.

118 citations
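
Since the review compares models on MAE, RMSE, and MAPE, a quick reference sketch of those three error metrics (with placeholder load values) may be useful:

```python
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # expressed as a percentage; assumes y_true has no zero entries
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical hourly load (MW) and a forecast of it
actual = np.array([512.0, 498.0, 530.0, 601.0])
forecast = np.array([505.0, 510.0, 525.0, 590.0])
print(mae(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
```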


Journal ArticleDOI
TL;DR: The results suggest that different types of travellers present differences in key hotel factors, criterion importance and selection results; however, families and friends have similar hotel selection results.

114 citations


Journal ArticleDOI
TL;DR: This work conducted a comprehensive analysis of the genomic and phenotypic changes associated with modern maize breeding through chronological sampling of 350 elite inbred lines representing multiple eras of germplasm from both China and the United States to demonstrate the use of the breeding-era approach for identifying breeding signatures.
Abstract: Since the development of single-hybrid maize breeding programs in the first half of the twentieth century1, maize yields have increased over sevenfold, and much of that increase can be attributed to tolerance of increased planting density2-4. To explore the genomic basis underlying the dramatic yield increase in maize, we conducted a comprehensive analysis of the genomic and phenotypic changes associated with modern maize breeding through chronological sampling of 350 elite inbred lines representing multiple eras of germplasm from both China and the United States. We document several convergent phenotypic changes in both countries. Using genome-wide association and selection scan methods, we identify 160 loci underlying adaptive agronomic phenotypes and more than 1,800 genomic regions representing the targets of selection during modern breeding. This work demonstrates the use of the breeding-era approach for identifying breeding signatures and lays the foundation for future genomics-enabled maize breeding.

Journal ArticleDOI
TL;DR: Experimental analysis shows that the ACO-FCP ensemble model is superior to and more robust than its counterparts, and this study strongly indicates that the proposed ACO-FCP model is highly competitive with traditional and other artificial intelligence techniques.

Journal ArticleDOI
TL;DR: In this article, the authors propose a new network resampling strategy based on splitting node pairs rather than nodes, which is applicable to cross-validation for a wide range of network model selection tasks.
Abstract: Summary While many statistical models and methods are now available for network analysis, resampling of network data remains a challenging problem. Cross-validation is a useful general tool for model selection and parameter tuning, but it is not directly applicable to networks since splitting network nodes into groups requires deleting edges and destroys some of the network structure. In this paper we propose a new network resampling strategy, based on splitting node pairs rather than nodes, that is applicable to cross-validation for a wide range of network model selection tasks. We provide theoretical justification for our method in a general setting and examples of how the method can be used in specific network model selection and parameter tuning tasks. Numerical results on simulated networks and on a statisticians’ citation network show that the proposed cross-validation approach works well for model selection.
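
A minimal sketch of the node-pair splitting idea, under simplifying assumptions (an undirected network and a low-rank spectral fit standing in for the candidate model): node pairs, rather than nodes, are partitioned into folds, the held-out entries of the adjacency matrix are masked during fitting, and prediction error on those entries is used to compare models. This illustrates the general strategy, not the authors' implementation.

```python
import numpy as np

def pair_cv_error(A, rank, n_folds=5, seed=0):
    """Cross-validation over node pairs for choosing the rank of a low-rank fit."""
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)                      # all unordered node pairs
    order = np.random.default_rng(seed).permutation(len(iu[0]))
    errors = []
    for fold in np.array_split(order, n_folds):
        rows, cols = iu[0][fold], iu[1][fold]
        A_train = A.astype(float)
        fill = A_train[iu].mean()                     # crude imputation of held-out pairs
        A_train[rows, cols] = A_train[cols, rows] = fill
        U, s, Vt = np.linalg.svd(A_train)             # rank-`rank` spectral approximation
        A_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        errors.append(np.mean((A[rows, cols] - A_hat[rows, cols]) ** 2))
    return float(np.mean(errors))

# Hypothetical use: pick the rank with the smallest held-out error on a 2-block network
rng = np.random.default_rng(1)
blocks = np.repeat([0, 1], 30)
P = np.where(blocks[:, None] == blocks[None, :], 0.4, 0.05)
A = (rng.random((60, 60)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                                           # symmetric, no self-loops
print({r: round(pair_cv_error(A, r), 4) for r in (1, 2, 3)})
```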

Journal ArticleDOI
TL;DR: Re-sequencing of elite cultivars from the historical series of wheat breeding in China demonstrates the impact of "founder genotypes" on the output of breeding efforts over multiple decades, and suggests "founder genotype" perspectives are in fact more dynamic when applied in the context of modern genomics-informed breeding.

Journal ArticleDOI
TL;DR: Analysis of full-length SARS-CoV-2 genomes showed evidence of epistatic interactions among sites in the genome that may be important in the generation of variants adapted to humans, and identified ongoing selection even in a scenario of conserved sequences collected over the first 3 months of this pandemic.
Abstract: In this study, we analyzed full-length SARS-CoV-2 genomes from multiple countries to determine early trends in the evolutionary dynamics of the novel COVID-19 pandemic. Results indicated SARS-CoV-2 evolved early into at least three phylogenetic groups, characterized by positive selection at specific residues of the accessory proteins ORF3a and ORF8. Also, we are reporting potential relevant sites under positive selection at specific sites of non-structural proteins nsp6 and helicase. Our analysis of co-evolution showed evidence of epistatic interactions among sites in the genome that may be important in the generation of variants adapted to humans. These observations might impact not only public health but also suggest that more studies are needed to understand the genetic mechanisms that may affect the development of therapeutic and preventive tools, like antivirals and vaccines. Collectively, our results highlight the identification of ongoing selection even in a scenario of conserved sequences collected over the first 3 months of this pandemic.

Book ChapterDOI
01 Jan 2020
TL;DR: This chapter presents the most fundamental concepts, operators, and mathematical models of this algorithm, which mimics the Darwinian theory of survival of the fittest in nature.
Abstract: The Genetic Algorithm (GA) is one of the most well-regarded evolutionary algorithms in history. This algorithm mimics the Darwinian theory of survival of the fittest in nature. This chapter presents the most fundamental concepts, operators, and mathematical models of this algorithm. The most popular improvements to its main components (selection, crossover, and mutation) are also given. The chapter also investigates the application of this technique in the field of image processing. In fact, the GA is employed to reconstruct a binary image from a completely random image.
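
As a companion to the chapter's description, here is a minimal GA sketch in Python built from the three operators it discusses (tournament selection, single-point crossover, bit-flip mutation), applied to the chapter's example task of reconstructing a binary target image; the target, population size, and rates are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.integers(0, 2, 64)             # hypothetical 8x8 binary image, flattened

def fitness(ind):
    return np.sum(ind == TARGET)            # number of matching pixels

def tournament_selection(pop, k=3):
    """Return the fittest of k randomly chosen individuals."""
    contenders = pop[rng.choice(len(pop), k, replace=False)]
    return max(contenders, key=fitness)

def crossover(a, b):
    cut = rng.integers(1, len(a))           # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

def mutate(ind, rate=0.01):
    flips = rng.random(len(ind)) < rate     # flip each bit with a small probability
    return np.where(flips, 1 - ind, ind)

pop = rng.integers(0, 2, (50, 64))          # completely random initial population
for _ in range(200):
    pop = np.array([mutate(crossover(tournament_selection(pop),
                                     tournament_selection(pop)))
                    for _ in range(len(pop))])
best = max(pop, key=fitness)
print("best match:", fitness(best), "of", len(TARGET), "pixels")
```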

Journal ArticleDOI
TL;DR: A new artificial bee colony (ABC) algorithm, NSABC, is proposed, in which a new selection method based on a neighborhood radius is used; unlike the probability-based selection in the original ABC, NSABC chooses the best solution within the neighborhood radius to generate offspring.

Journal ArticleDOI
TL;DR: A comprehensive survey on feature selection approaches for clustering is introduced by reflecting the advantages/disadvantages of current approaches from different perspectives and identifying promising trends for future research.
Abstract: The massive growth of data in recent years has led to challenges in data mining and machine learning tasks. One of the major challenges is the selection of relevant features from the original set of available features that maximally improves the learning performance over that of the original feature set. This issue has attracted researchers' attention, resulting in a variety of successful feature selection approaches in the literature. Although there exist several surveys on unsupervised learning (e.g., clustering), many works concerning unsupervised feature selection are missing from these surveys (e.g., evolutionary computation based feature selection for clustering), which would help identify the strengths and weaknesses of those approaches. In this paper, we introduce a comprehensive survey on feature selection approaches for clustering by reflecting the advantages/disadvantages of current approaches from different perspectives and identifying promising trends for future research.

Journal ArticleDOI
TL;DR: In this paper, an angle-based selection strategy and a shift-based density estimation strategy are employed in the environmental selection to delete poor individuals one by one, and the results suggest that AnD can achieve highly competitive performance.

Journal ArticleDOI
TL;DR: There is a need for comprehensive, growth-stage-based, high-throughput phenotyping of physio-morphological traits to improve the efficiency of breeding drought-tolerant wheat.
Abstract: In the past, there have been drought events in different parts of the world, which have negatively influenced the productivity and production of various crops including wheat (Triticum aestivum L.), one of the world's three important cereal crops. Breeding new high yielding drought-tolerant wheat varieties is a research priority specifically in regions where climate change is predicted to result in more drought conditions. Commonly in breeding for drought tolerance, grain yield is the basis for selection, but it is a complex, late-stage trait, affected by many factors aside from drought. A strategy that evaluates genotypes for physiological responses to drought at earlier growth stages may be more targeted to drought and time efficient. Such an approach may be enabled by recent advances in high-throughput phenotyping platforms (HTPPs). In addition, the success of new genomic and molecular approaches rely on the quality of phenotypic data which is utilized to dissect the genetics of complex traits such as drought tolerance. Therefore, the first objective of this review is to describe the growth-stage based physio-morphological traits that could be targeted by breeders to develop drought-tolerant wheat genotypes. The second objective is to describe recent advances in high throughput phenotyping of drought tolerance related physio-morphological traits primarily under field conditions. We discuss how these strategies can be integrated into a comprehensive breeding program to mitigate the impacts of climate change. The review concludes that there is a need for comprehensive high throughput phenotyping of physio-morphological traits that is growth stage-based to improve the efficiency of breeding drought-tolerant wheat.

Journal ArticleDOI
12 Feb 2020-Nature
TL;DR: Phenotypic selection analysis is used to estimate the type and strength of selection that acts on more than 15,000 transcripts in rice (Oryza sativa), which provides insight into the adaptive evolutionary role of selection on gene expression.
Abstract: Levels of gene expression underpin organismal phenotypes1,2, but the nature of selection that acts on gene expression and its role in adaptive evolution remain unknown1,2. Here we assayed gene expression in rice (Oryza sativa)3, and used phenotypic selection analysis to estimate the type and strength of selection on the levels of more than 15,000 transcripts4,5. Variation in most transcripts appears (nearly) neutral or under very weak stabilizing selection in wet paddy conditions (with median standardized selection differentials near zero), but selection is stronger under drought conditions. Overall, more transcripts are conditionally neutral (2.83%) than are antagonistically pleiotropic6 (0.04%), and transcripts that display lower levels of expression and stochastic noise7–9 and higher levels of plasticity9 are under stronger selection. Selection strength was further weakly negatively associated with levels of cis-regulation and network connectivity9. Our multivariate analysis suggests that selection acts on the expression of photosynthesis genes4,5, but that the efficacy of selection is genetically constrained under drought conditions10. Drought selected for earlier flowering11,12 and a higher expression of OsMADS18 (Os07g0605200), which encodes a MADS-box transcription factor and is a known regulator of early flowering13—marking this gene as a drought-escape gene11,12. The ability to estimate selection strengths provides insights into how selection can shape molecular traits at the core of gene action. Phenotypic selection analysis is used to estimate the type and strength of selection that acts on more than 15,000 transcripts in rice (Oryza sativa), which provides insight into the adaptive evolutionary role of selection on gene expression.
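
The core quantity in a phenotypic selection analysis of this kind, the standardized selection differential, can be computed in a few lines: standardize the trait (here, a transcript's expression level), compute relative fitness, and take their covariance; a linear selection gradient is the slope from regressing relative fitness on the standardized trait. The arrays below are simulated placeholders, not the rice data.

```python
import numpy as np

def selection_differential(trait, fitness):
    """Standardized selection differential S = cov(relative fitness, z-scored trait)."""
    z = (trait - trait.mean()) / trait.std()
    w = fitness / fitness.mean()                 # relative fitness
    return np.cov(w, z)[0, 1]

def selection_gradient(trait, fitness):
    """Linear selection gradient: slope of relative fitness on the z-scored trait."""
    z = (trait - trait.mean()) / trait.std()
    w = fitness / fitness.mean()
    return np.polyfit(z, w, 1)[0]

# Hypothetical data: expression of one transcript and fecundity for 200 plants
rng = np.random.default_rng(0)
expression = rng.lognormal(mean=2.0, sigma=0.5, size=200)
fecundity = 40 + 3.0 * np.log(expression) + rng.normal(0, 4, 200)
print(selection_differential(expression, fecundity),
      selection_gradient(expression, fecundity))
```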

Journal ArticleDOI
TL;DR: A hyperplane-assisted evolutionary algorithm, referred to here as hpaEA, is proposed and compared with five state-of-the-art many-objective evolutionary algorithms on 36 many-objective benchmark instances, significantly outperforming the compared algorithms on 20 of them.
Abstract: In many-objective optimization problems (MaOPs), forming sound tradeoffs between convergence and diversity for the environmental selection of evolutionary algorithms is a laborious task. In particular, strengthening the selection pressure of the population toward the Pareto-optimal front becomes more challenging, since the proportion of nondominated solutions in the population scales up sharply with the increase of the number of objectives. To address these issues, this paper first defines the nondominated solutions exhibiting evident tendencies toward the Pareto-optimal front as prominent solutions, using the hyperplane formed by their neighboring solutions, to further distinguish among nondominated solutions. Then, a novel environmental selection strategy is proposed with two criteria in mind: 1) if the number of nondominated solutions is larger than the population size, all the prominent solutions are first identified to strengthen the selection pressure, and subsequently a part of the other nondominated solutions are selected to balance convergence and diversity; 2) otherwise, all the nondominated solutions are selected, and then a part of the dominated solutions are selected according to the predefined reference vectors. Moreover, based on the definition of prominent solutions and the new selection strategy, we propose a hyperplane-assisted evolutionary algorithm, referred to here as hpaEA, for solving MaOPs. To demonstrate the performance of hpaEA, extensive experiments are conducted to compare it with five state-of-the-art many-objective evolutionary algorithms on 36 many-objective benchmark instances. The experimental results show the superiority of hpaEA, which significantly outperforms the compared algorithms on 20 out of 36 benchmark instances.
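
The environmental selection described above starts by separating nondominated solutions from the rest of the population. For reference, the sketch below implements only that first step (a plain Pareto-dominance filter for minimization problems); hpaEA's hyperplane-based identification of prominent solutions is not reproduced here.

```python
import numpy as np

def nondominated_mask(objectives):
    """Boolean mask of nondominated rows of an (n_solutions, n_objectives) array.

    All objectives are assumed to be minimized; row i is dominated if some other
    row is no worse in every objective and strictly better in at least one.
    """
    F = np.asarray(objectives, dtype=float)
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

# Hypothetical 3-objective population of 6 solutions
F = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 3.0],
              [1.5, 1.5, 3.5],
              [2.0, 2.0, 4.0],    # dominated by [1.0, 2.0, 3.0]
              [0.5, 3.0, 2.5],
              [3.0, 3.0, 5.0]])   # dominated by several solutions
print(nondominated_mask(F))       # [ True  True  True False  True False]
```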

Journal ArticleDOI
TL;DR: Large-data computations in ssGBLUP were solved by exploiting the limited dimensionality of genomic data due to limited effective population size; remaining challenges involve new validation procedures that are unaffected by selection, parameter estimation that accounts for all the genomic data used in selection, and strategies to address the reduction in genetic variances after genomic selection was implemented.
Abstract: Early application of genomic selection relied on SNP estimation with phenotypes or de-regressed proofs (DRP). Chips of 50k SNP seemed sufficient for an accurate estimation of SNP effects. Genomic estimated breeding values (GEBV) were composed of an index with parent average, direct genomic value, and deduction of a parental index to eliminate double counting. Use of SNP selection or weighting increased accuracy with small data sets but had minimal to no impact with large data sets. Efforts to include potentially causative SNP derived from sequence data or high-density chips showed limited or no gain in accuracy. After the implementation of genomic selection, EBV by BLUP became biased because of genomic preselection and DRP computed based on EBV required adjustments, and the creation of DRP for females is hard and subject to double counting. Genomic selection was greatly simplified by single-step genomic BLUP (ssGBLUP). This method based on combining genomic and pedigree relationships automatically creates an index with all sources of information, can use any combination of male and female genotypes, and accounts for preselection. To avoid biases, especially under strong selection, ssGBLUP requires that pedigree and genomic relationships are compatible. Because the inversion of the genomic relationship matrix (G) becomes costly with more than 100k genotyped animals, large data computations in ssGBLUP were solved by exploiting limited dimensionality of genomic data due to limited effective population size. With such dimensionality ranging from 4k in chickens to about 15k in cattle, the inverse of G can be created directly (e.g., by the algorithm for proven and young) at a linear cost. Due to its simplicity and accuracy, ssGBLUP is routinely used for genomic selection by the major chicken, pig, and beef industries. Single step can be used to derive SNP effects for indirect prediction and for genome-wide association studies, including computations of the P-values. Alternative single-step formulations exist that use SNP effects for genotyped or for all animals. Although genomics is the new standard in breeding and genetics, there are still some problems that need to be solved. This involves new validation procedures that are unaffected by selection, parameter estimation that accounts for all the genomic data used in selection, and strategies to address reduction in genetic variances after genomic selection was implemented.
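
For readers unfamiliar with the genomic relationship matrix G whose inverse drives the cost discussed above, the sketch below builds G from a 0/1/2 SNP genotype matrix using VanRaden's first method, G = ZZ' / (2 Σ p_j(1 − p_j)), where Z centres genotypes by twice the allele frequency; the genotype matrix here is simulated, and allele frequencies are estimated from the genotyped animals themselves.

```python
import numpy as np

def vanraden_g(M):
    """Genomic relationship matrix (VanRaden method 1) from a 0/1/2 genotype matrix.

    M has one row per animal and one column per SNP.
    """
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0                  # estimated allele frequencies
    Z = M - 2.0 * p                           # centre by the expected genotype
    denom = 2.0 * np.sum(p * (1.0 - p))       # scales G to be analogous to pedigree A
    return Z @ Z.T / denom

# Hypothetical data: 5 animals genotyped at 1,000 SNPs
rng = np.random.default_rng(0)
freqs = rng.uniform(0.1, 0.9, 1000)
M = rng.binomial(2, freqs, size=(5, 1000))
print(np.round(vanraden_g(M), 2))             # diagonal near 1 for unrelated animals
```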

Journal ArticleDOI
TL;DR: The VIKOR model has been considered a viable tool for many decision-making applications in the past few years, given the advantages of considering the compromise between maximizing group utility and minimizing individual regret; through comparative analysis, the superiority of the new method is further illustrated.
Abstract: The VIKOR model has been considered a viable tool for many decision-making applications in the past few years, given the advantages of considering the compromise between maximizing group utility and minimizing individual regret. The q-rung interval-valued orthopair fuzzy set (q-RIVOFS) is a generalization of the intuitionistic fuzzy set (IFS) and the Pythagorean fuzzy set (PFS), and has emerged to solve more complex and uncertain decision-making problems that IFS and PFS cannot handle. In this manuscript, the key innovation is to combine the traditional VIKOR model with the q-RIVOFS to develop the q-rung interval-valued orthopair fuzzy VIKOR model. In the newly developed model, to express more information, the attribute values in multiple-attribute group decision-making (MAGDM) problems are depicted by q-RIVOFNs. First of all, some basic theories and aggregation operators of q-RIVOFNs are briefly introduced. Then we extend the original VIKOR model to the q-RIVOFS environment and briefly describe the computing steps of this newly established model. Thereafter, the effectiveness of the model is verified by an example of supplier selection of medical consumer products, and through comparative analysis the superiority of the new method is further illustrated.
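
For context on the computing steps being extended, here is a sketch of the classical (crisp) VIKOR procedure: normalize each alternative's distance from the ideal value of every criterion, form the group-utility score S and the individual-regret score R, and combine them into the compromise index Q. The decision matrix, weights, and the compromise weight v = 0.5 are illustrative; the paper replaces the crisp entries with q-rung interval-valued orthopair fuzzy numbers.

```python
import numpy as np

def vikor(decision, weights, benefit, v=0.5):
    """Classical VIKOR scores; a lower Q indicates a better compromise alternative.

    decision: (alternatives x criteria) matrix; benefit[j] is True when larger
    values of criterion j are better.
    """
    D = np.asarray(decision, dtype=float)
    f_best = np.where(benefit, D.max(axis=0), D.min(axis=0))    # ideal values
    f_worst = np.where(benefit, D.min(axis=0), D.max(axis=0))   # anti-ideal values
    norm = weights * (f_best - D) / (f_best - f_worst)
    S = norm.sum(axis=1)                     # group utility
    R = norm.max(axis=1)                     # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q

# Hypothetical supplier-selection example: 4 suppliers, 3 criteria
decision = np.array([[7.0, 3.0, 0.80],
                     [8.0, 4.0, 0.70],
                     [6.0, 2.0, 0.90],
                     [9.0, 5.0, 0.60]])
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([True, False, True])      # quality up, cost down, reliability up
S, R, Q = vikor(decision, weights, benefit)
print("ranking (best first):", np.argsort(Q))
```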


Proceedings Article
03 Jun 2020
TL;DR: A unified and principled method for both the querying and training processes in deep batch active learning is proposed, providing theoretical insights from the intuition of modeling the interactive procedure in active learning as distribution matching by adopting the Wasserstein distance.
Abstract: In this paper, we are proposing a unified and principled method for both the querying and training processes in deep batch active learning. We are providing theoretical insights from the intuition of modeling the interactive procedure in active learning as distribution matching, by adopting the Wasserstein distance. As a consequence, we derived a new training loss from the theoretical analysis, which is decomposed into optimizing deep neural network parameters and batch query selection through alternative optimization. In addition, the loss for training a deep neural network is naturally formulated as a min-max optimization problem through leveraging the unlabeled data information. Moreover, the proposed principles also indicate an explicit uncertainty-diversity trade-off in the query batch selection. Finally, we evaluate our proposed method on different benchmarks, consistently showing better empirical performance and a more time-efficient query strategy compared to the baselines.
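
The uncertainty-diversity trade-off mentioned above can be illustrated with a deliberately simple greedy query rule (not the paper's Wasserstein-based formulation): score each unlabeled point by predictive entropy and build the batch by repeatedly taking the point with the best combination of entropy and distance from the points already chosen. All names, weights, and data below are assumptions for illustration.

```python
import numpy as np

def select_batch(probs, features, batch_size, trade_off=0.5):
    """Greedy batch selection balancing uncertainty (entropy) and diversity.

    probs:    (n, n_classes) predicted class probabilities for the unlabeled pool.
    features: (n, d) representations used to measure diversity.
    """
    eps = 1e-12
    uncertainty = -np.sum(probs * np.log(probs + eps), axis=1)
    uncertainty = uncertainty / uncertainty.max()          # scale to [0, 1]
    chosen = [int(np.argmax(uncertainty))]                 # most uncertain point first
    while len(chosen) < batch_size:
        # distance of every pool point to its nearest already-chosen point
        d = np.min(np.linalg.norm(features[:, None] - features[chosen][None], axis=2), axis=1)
        score = trade_off * uncertainty + (1 - trade_off) * d / (d.max() + eps)
        score[chosen] = -np.inf                            # never pick a point twice
        chosen.append(int(np.argmax(score)))
    return chosen

# Hypothetical pool of 500 points with 3-class predictions and 2-D features
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2))
logits = rng.normal(size=(500, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(select_batch(probs, features, batch_size=10))
```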

Journal ArticleDOI
TL;DR: An integrated methodology including the Intuitionistic Fuzzy Technique for Order Preference by Similarity to Ideal Solution (IF-TOPSIS) and a modified two-phase fuzzy goal programming model is proposed to better address this selection problem in a multi-item/multi-supplier/multi-period environment.

Journal ArticleDOI
TL;DR: A practical structure of LIS-based spatial modulation (LIS-SM) is proposed, in order to utilize both transmit and receive antenna indices, and a low-complexity selection algorithm is designed on the basis of minimum squared Euclidean distance and signal-to-leakage-and-noise ratio.
Abstract: Novel communication technology based on the large intelligent surface (LIS) [1] has arisen recently, with the aim of enhancing the signal quality at the receiver. In this paper, a practical structure of LIS-based spatial modulation (LIS-SM) is proposed, in order to utilize both transmit and receive antenna indices. Meanwhile, the theoretical average bit error rate (ABER) performance bound of the developed LIS-SM scheme is investigated. For the sake of achieving further spatial diversity gain, we extend its employment to the antenna selection (AS) scenario, and a low-complexity selection algorithm is designed on the basis of minimum squared Euclidean distance and signal-to-leakage-and-noise ratio, as well as the idea of a greedy elimination algorithm. Performance analysis shows that AS-aided LIS-SM is more robust in terms of ABER compared with conventional LIS-SM. Moreover, complexity analysis also shows that the proposed fast selection algorithm achieves much lower complexity yet comparable ABER performance, compared to the traditional exhaustive search.
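
As background for the selection criterion named above, the sketch below performs generic transmit-antenna selection for a spatial-modulation system by exhaustively choosing the antenna subset that maximizes the minimum squared Euclidean distance among the noiseless received vectors; it is a baseline illustration of the criterion only, not the paper's low-complexity greedy-elimination or SLNR-aided algorithm, and the channel and constellation are simulated placeholders.

```python
import numpy as np
from itertools import combinations

def min_sq_euclidean_distance(H_sub, constellation):
    """Minimum squared Euclidean distance among the noiseless received vectors of a
    spatial-modulation scheme using the columns of H_sub as active transmit antennas."""
    candidates = [H_sub[:, i] * s for i in range(H_sub.shape[1]) for s in constellation]
    dmin = np.inf
    for a, b in combinations(range(len(candidates)), 2):
        dmin = min(dmin, float(np.sum(np.abs(candidates[a] - candidates[b]) ** 2)))
    return dmin

def select_antennas(H, n_select, constellation):
    """Exhaustive-search antenna selection under the minimum-distance criterion."""
    best_subset, best_d = None, -np.inf
    for subset in combinations(range(H.shape[1]), n_select):
        d = min_sq_euclidean_distance(H[:, list(subset)], constellation)
        if d > best_d:
            best_subset, best_d = subset, d
    return best_subset, best_d

# Hypothetical Rayleigh channel: 4 receive antennas, 6 transmit antennas, QPSK symbols
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 6)) + 1j * rng.normal(size=(4, 6))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(select_antennas(H, n_select=4, constellation=qpsk))
```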

Journal ArticleDOI
TL;DR: An integrated framework is proposed for HGR using a deep neural network and a Fuzzy Entropy controlled Skewness (FEcS) approach; the obtained overall recognition results lead to the conclusion that the proposed system is very promising.
Abstract: Human gait recognition (HGR) is highly important in the area of video surveillance due to remote access and security threats. HGR is a technique commonly used for the identification of human walking style in daily life. However, many typical situations, such as changes in clothing and variations in view angle, degrade system performance. Lately, different machine learning (ML) techniques have been introduced for video surveillance which give promising results, among which deep learning (DL) shows the best performance in complex scenarios. In this article, an integrated framework is proposed for HGR using a deep neural network and a Fuzzy Entropy controlled Skewness (FEcS) approach. The proposed technique works in two phases: in the first phase, Deep Convolutional Neural Network (DCNN) features are extracted by pre-trained CNN models (VGG19 and AlexNet) and their information is combined by a parallel fusion approach. In the second phase, entropy and skewness vectors are calculated from the fused feature vector (FV) to select the best subsets of features with the suggested FEcS approach. The best subsets of selected features are finally fed to multiple classifiers and the best one is chosen on the basis of accuracy. The experiments were done on four well-known datasets, namely AVAMVG gait, CASIA A, B and C. The achieved accuracy on each dataset was 99.8%, 99.7%, 93.3% and 92.2%, respectively. The obtained overall recognition results therefore lead to the conclusion that the proposed system is very promising.
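
To make the feature-scoring step concrete, here is a hypothetical sketch of ranking fused deep features by a combination of histogram entropy and absolute skewness and keeping the top fraction; it illustrates the general idea of entropy- and skewness-guided selection only, and is not the exact FEcS formulation or its parameters.

```python
import numpy as np
from scipy.stats import skew, entropy

def score_features(F, bins=16):
    """Score each feature (column) of a fused feature matrix by the Shannon entropy
    of its histogram plus the absolute skewness of its values across samples."""
    scores = np.empty(F.shape[1])
    for j in range(F.shape[1]):
        hist, _ = np.histogram(F[:, j], bins=bins)
        scores[j] = entropy(hist / hist.sum()) + abs(skew(F[:, j]))
    return scores

def select_top_features(F, keep_ratio=0.5):
    """Return the column indices of the highest-scoring features."""
    k = max(1, int(F.shape[1] * keep_ratio))
    return np.argsort(score_features(F))[::-1][:k]

# Hypothetical fused CNN features: 300 gait samples x 512 dimensions
rng = np.random.default_rng(0)
F = rng.normal(size=(300, 512))
selected = select_top_features(F, keep_ratio=0.25)
print(selected.shape, selected[:10])
```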

Journal ArticleDOI
TL;DR: This Review discusses how genomic technologies are providing a deeper understanding of colour traits, revealing fresh insights into their genetic architecture, evolvability and origins of adaptive variation.
Abstract: Coloration is an easily quantifiable visual trait that has proven to be a highly tractable system for genetic analysis and for studying adaptive evolution. The application of genomic approaches to evolutionary studies of coloration is providing new insight into the genetic architectures underlying colour traits, including the importance of large-effect mutations and supergenes, the role of development in shaping genetic variation and the origins of adaptive variation, which often involves adaptive introgression. Improved knowledge of the genetic basis of traits can facilitate field studies of natural selection and sexual selection, making it possible for strong selection and its influence on the genome to be demonstrated in wild populations.