
Showing papers on "Selection (genetic algorithm) published in 2001"


Journal ArticleDOI
TL;DR: Comparisons of estimated linear selection gradients and differentials suggest that indirect components of phenotypic selection were usually modest relative to direct components; there was no evidence that stabilizing selection is stronger or more common than disruptive selection in nature.
Abstract: How strong is phenotypic selection on quantitative traits in the wild? We reviewed the literature from 1984 through 1997 for studies that estimated the strength of linear and quadratic selection in terms of standardized selection gradients or differentials on natural variation in quantitative traits for field populations. We tabulated 63 published studies of 62 species that reported over 2,500 estimates of linear or quadratic selection. More than 80% of the estimates were for morphological traits; there is very little data for behavioral or physiological traits. Most published selection studies were unreplicated and had sample sizes below 135 individuals, resulting in low statistical power to detect selection of the magnitude typically reported for natural populations. The absolute values of linear selection gradients |β| were exponentially distributed with an overall median of 0.16, suggesting that strong directional selection was uncommon. The values of |β| for selection on morphological and o...

1,703 citations
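The standardized gradients tabulated in this review are, in the Lande-Arnold tradition, partial regression coefficients of relative fitness on z-scored traits. A minimal sketch of that estimator (the simulated trait, effect size, and noise level below are illustrative assumptions, not values from the review):

```python
import numpy as np

def selection_gradients(traits, fitness):
    """Standardized linear selection gradients (beta): partial regression
    coefficients of relative fitness on z-scored trait values."""
    z = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)
    w = fitness / fitness.mean()                  # relative fitness
    design = np.column_stack([np.ones(len(w)), z])
    coef, *_ = np.linalg.lstsq(design, w, rcond=None)
    return coef[1:]                               # drop the intercept

# illustrative simulated population: one trait under weak directional selection
rng = np.random.default_rng(0)
trait = rng.normal(10.0, 2.0, size=(500, 1))
fitness = 1.0 + 0.2 * (trait[:, 0] - 10.0) / 2.0 + rng.normal(0.0, 0.1, 500)
beta = selection_gradients(trait, fitness)        # beta[0] should be near 0.2
```

On this scale, a |β| near 0.16, the review's overall median, corresponds to a fitness change of roughly 16% of mean fitness per standard deviation of the trait.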


Proceedings Article
07 Jul 2001
TL;DR: A new selection technique for evolutionary multiobjective optimization algorithms in which the unit of selection is a hyperbox in objective space, which is shown to be more sensitive to ensuring a good spread of development along the Pareto frontier than individual-based selection.
Abstract: We describe a new selection technique for evolutionary multiobjective optimization algorithms in which the unit of selection is a hyperbox in objective space. In this technique, instead of assigning a selective fitness to an individual, selective fitness is assigned to the hyperboxes in objective space which are currently occupied by at least one individual in the current approximation to the Pareto frontier. A hyperbox is thereby selected, and the resulting selected individual is randomly chosen from this hyperbox. This method of selection is shown to be more sensitive to ensuring a good spread of development along the Pareto frontier than individual-based selection. The method is implemented in a modern multiobjective evolutionary algorithm, and performance is tested by using Deb's test suite of 'T' functions with varying properties. The new selection technique is found to give significantly superior results to the other methods compared, namely PAES, PESA, and SPEA; each is a modern multi-objective optimization algorithm previously found to outperform earlier approaches on various problems.

982 citations
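A minimal sketch of this region-based selection, under simplifying assumptions (a uniform grid over the objective space, and selective fitness that simply favors the least-crowded occupied hyperbox; the paper's actual fitness assignment may differ):

```python
import random
from collections import defaultdict

def hyperbox_select(archive, divisions=8, rng=None):
    """Pick a hyperbox on an objective-space grid (favoring sparsely
    occupied boxes), then return a random member of that box."""
    rng = rng or random.Random(1)
    dims = range(len(archive[0]))
    lo = [min(p[d] for p in archive) for d in dims]
    hi = [max(p[d] for p in archive) for d in dims]
    boxes = defaultdict(list)
    for p in archive:
        key = tuple(min(divisions - 1,
                        int(divisions * (p[d] - lo[d]) / ((hi[d] - lo[d]) or 1.0)))
                    for d in dims)
        boxes[key].append(p)
    # the unit of selection is the box, not the individual
    chosen = min(boxes, key=lambda k: (len(boxes[k]), rng.random()))
    return rng.choice(boxes[chosen])

# crowded region near (0, 1) versus a lone point at (1, 0)
archive = [(0.0, 1.0), (0.1, 0.9), (0.12, 0.88), (1.0, 0.0)]
picked = hyperbox_select(archive)
```

Because selection pressure is applied per box, the isolated point at (1.0, 0.0) is picked even though the crowded region holds three times as many individuals; this is what drives the improved spread along the frontier.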


Journal ArticleDOI
01 Jun 2001-Heredity
TL;DR: This article provides a brief review of some of the statistical methods used to detect selection using DNA sequence data or other molecular data that appear to be useful for identifying specific regions or specific sites targeted by selection.
Abstract: Examining genomic data for traces of selection provides a powerful tool for identifying genomic regions of functional importance. Many methods for identifying such regions have focused on conserved sites. However, positive selection may also be an indication of functional importance. This article provides a brief review of some of the statistical methods used to detect selection using DNA sequence data or other molecular data. Statistical tests based on allelic distributions or levels of variability often depend on strong assumptions regarding population demographics. In contrast, tests based on comparisons of the level of variability in nonsynonymous and synonymous sites can be constructed without demographic assumptions. Such tests appear to be useful for identifying specific regions or specific sites targeted by selection.

884 citations


Journal ArticleDOI
TL;DR: In this paper, a consistent model and moment selection criteria for GMM estimation is proposed, based on the J statistic for testing over-identifying restrictions, which is similar to the likelihood-based selection criteria BIC, HQIC, and AIC.

736 citations


Journal ArticleDOI
TL;DR: Computer simulation is used to investigate the accuracy and power of the likelihood ratio test (LRT) in detecting positive selection at amino acid sites, and it is found that use of the chi-square distribution makes the test conservative, especially when the data contain very short and highly similar sequences.
Abstract: The selective pressure at the protein level is usually measured by the nonsynonymous/synonymous rate ratio (omega = dN/dS), with omega < 1, omega = 1, and omega > 1 indicating purifying (or negative) selection, neutral evolution, and diversifying (or positive) selection, respectively. The omega ratio is commonly calculated as an average over sites. As every functional protein has some amino acid sites under selective constraints, averaging rates across sites leads to low power to detect positive selection. Recently developed models of codon substitution allow the omega ratio to vary among sites and appear to be powerful in detecting positive selection in empirical data analysis. In this study, we used computer simulation to investigate the accuracy and power of the likelihood ratio test (LRT) in detecting positive selection at amino acid sites. The test compares two nested models: one that allows for sites under positive selection (with omega > 1) and another that does not, with the chi-square distribution used for significance testing. We found that use of the chi-square distribution makes the test conservative, especially when the data contain very short and highly similar sequences. Nevertheless, the LRT is powerful. Although the power can be low with only 5 or 6 sequences in the data, it was nearly 100% in data sets of 17 sequences. Sequence length, sequence divergence, and the strength of positive selection also were found to affect the power of the LRT. The exact distribution assumed for the omega ratio over sites was found not to affect the effectiveness of the LRT.

671 citations
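The test itself is mechanical once the two nested models have been fitted. A sketch under stated assumptions (two degrees of freedom, as in comparisons such as M7 versus M8; the log-likelihood values below are made up for illustration):

```python
import math

def lrt_positive_selection(lnL_null, lnL_alt):
    """Likelihood ratio test: 2*(lnL_alt - lnL_null) referred to a
    chi-square distribution with df = 2, whose survival function
    has the closed form exp(-x/2)."""
    stat = 2.0 * (lnL_alt - lnL_null)
    p_value = math.exp(-stat / 2.0)
    return stat, p_value

# hypothetical fits of the null (no omega > 1 class) and alternative models
stat, p = lrt_positive_selection(lnL_null=-2345.6, lnL_alt=-2340.1)
```

As the abstract notes, the chi-square reference makes the test conservative, so a small p-value here is if anything an understatement of the evidence for positive selection.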


Journal ArticleDOI
TL;DR: The GA/KNN method is capable of selecting a subset of predictive genes from a large noisy data set for sample classification and is a multivariate approach that can capture the correlated structure in the data.
Abstract: Motivation: We recently introduced a multivariate approach that selects a subset of predictive genes jointly for sample classification based on expression data. We tested the algorithm on colon and leukemia data sets. As an extension to our earlier work, we systematically examine the sensitivity, reproducibility and stability of gene selection/sample classification to the choice of parameters of the algorithm. Methods: Our approach combines a Genetic Algorithm (GA) and the k-Nearest Neighbor (KNN) method to identify genes that can jointly discriminate between different classes of samples (e.g. normal versus tumor). The GA/KNN method is a stochastic supervised pattern recognition method. The genes identified are subsequently used to classify independent test set samples. Results: The GA/KNN method is capable of selecting a subset of predictive genes from a large noisy data set for sample classification. It is a multivariate approach that can capture the correlated structure in the data. We find that for a given data set gene selection is highly repeatable in independent runs using the GA/KNN method. In general, however, gene selection may be less robust than classification. Availability: The method is available at http://dir.niehs.nih.gov/microarray/datamining

647 citations
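A much-simplified sketch of the GA/KNN idea (fixed-size subsets, truncation selection, point mutation, and leave-one-out 1-NN accuracy as the fitness function; the synthetic data and all parameter values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def knn_loo_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour accuracy on the selected columns."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d, np.inf)                    # a sample cannot match itself
    return float((y[d.argmin(axis=1)] == y).mean())

def ga_knn_select(X, y, subset_size=2, pop_size=20, gens=30, seed=0):
    """Toy GA/KNN: evolve fixed-size gene subsets scored by LOO 1-NN accuracy."""
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    fitness = lambda s: knn_loo_accuracy(X[:, s], y)
    population = [rng.choice(n_genes, subset_size, replace=False)
                  for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]      # truncation selection
        children = []
        for parent in parents:
            child = parent.copy()                  # mutate one gene to a new one
            child[rng.integers(subset_size)] = rng.choice(
                np.setdiff1d(np.arange(n_genes), child))
            children.append(child)
        population = parents + children
    return sorted(max(population, key=fitness).tolist())

# synthetic expression matrix: 40 samples, 15 genes, gene 3 separates the classes
rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 15))
X[:, 3] += 4.0 * y
best_genes = ga_knn_select(X, y)
```

Because fitness is computed on the joint subset, correlated gene pairs that discriminate only together can win, which is the multivariate advantage the abstract describes.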


Journal ArticleDOI
TL;DR: The purpose in this paper is to provide a general approach to model selection via penalization for Gaussian regression and to develop the point of view about this subject.
Abstract: Our purpose in this paper is to provide a general approach to model selection via penalization for Gaussian regression and to develop our point of view about this subject. The advantage and importance of model selection come from the fact that it provides a suitable approach to many different types of problems, starting from model selection per se (among a family of parametric models, which one is more suitable for the data at hand), which includes for instance variable selection in regression models, to nonparametric estimation, for which it provides a very powerful tool that allows adaptation under quite general circumstances. Our approach to model selection also provides a natural connection between the parametric and nonparametric points of view and copes naturally with the fact that a model is not necessarily true. The method is based on the penalization of a least squares criterion which can be viewed as a generalization of Mallows' Cp. A large part of our efforts will be put on choosing properly the list of models and the penalty function for various estimation problems like classical variable selection or adaptive estimation for various types of lp-bodies.

560 citations
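The criterion being generalized can be stated in two lines. A sketch of classical Mallows' Cp model choice (the candidate models' residual sums of squares and the noise variance below are hypothetical):

```python
def mallows_cp(rss, n, n_params, sigma2):
    """Mallows' Cp = RSS/sigma^2 - n + 2p; equivalently, penalized least
    squares with penalty 2 * sigma^2 * p after rescaling."""
    return rss / sigma2 - n + 2 * n_params

# hypothetical candidates: (residual sum of squares, number of parameters)
candidates = {"small": (120.0, 2), "medium": (98.0, 4), "large": (97.5, 9)}
n, sigma2 = 100, 1.0
best_model = min(candidates,
                 key=lambda m: mallows_cp(candidates[m][0], n,
                                          candidates[m][1], sigma2))
```

The medium model wins: the large model buys almost no extra fit for five more parameters. Birgé and Massart's contribution lies in choosing the model list and a more general penalty so that this logic extends to nonparametric settings.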


Journal ArticleDOI
01 Feb 2001-Thorax
TL;DR: Members of the core Writing Group are: Professor P Armstrong (Imaging), Dr J Congleton, Mr S W Fountain (Chairman), Dr T Jagoe, Dr D F McAuley, Dr J MacMahon, Dr M F Muers, and Dr P K Plant.
Abstract: Writing Group: Professor P Armstrong (Imaging), Dr J Congleton,* Mr S W Fountain (Chairman),* Dr T Jagoe, Dr D F McAuley,* Dr J MacMahon,* Dr M F Muers,* Mr R D Page,* Dr P K Plant,* Dr M Roland, Dr R M Rudd, Mr W S Walker,* Dr T J Williams*. Specialist advisors: Professor M I Saunders (Royal College of Radiologists), Dr A G Nicholson (Royal College of Pathologists). *Members of the core Writing Group.

494 citations



Journal ArticleDOI
TL;DR: The procedures presented are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system and are based on the assumption of normally distributed data, so the impact of batching is analyzed.
Abstract: We present procedures for selecting the best or near-best of a finite number of simulated systems when best is defined by maximum or minimum expected performance. The procedures are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system. The goal of such a sequential procedure is to eliminate, at an early stage of experimentation, those simulated systems that are apparently inferior, and thereby reduce the overall computational effort required to find the best. The procedures we present accommodate unequal variances across systems and the use of common random numbers. However, they are based on the assumption of normally distributed data, so we analyze the impact of batching (to achieve approximate normality or independence) on the performance of the procedures. Comparisons with some existing indifference-zone procedures are also provided.

422 citations
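A toy version of such a sequential screening procedure, under stated assumptions (known simulators to sample from, a crude mean ± z·stderr elimination rule, and none of the paper's careful handling of unequal variances or common random numbers):

```python
import random
import statistics

def sequential_select(systems, batch=10, rounds=50, z=3.0, seed=1):
    """Repeatedly draw small batches from each surviving system and drop any
    whose upper confidence bound falls below the best lower bound
    (maximization). Illustrative only; real procedures use sharper constants."""
    rng = random.Random(seed)
    data = {i: [] for i in range(len(systems))}
    alive = set(data)
    for _ in range(rounds):
        for i in sorted(alive):
            data[i].extend(systems[i](rng) for _ in range(batch))
        bounds = {}
        for i in sorted(alive):
            m = statistics.fmean(data[i])
            se = statistics.stdev(data[i]) / len(data[i]) ** 0.5
            bounds[i] = (m - z * se, m + z * se)
        best_lower = max(lo for lo, _ in bounds.values())
        alive = {i for i in alive if bounds[i][1] >= best_lower}
        if len(alive) == 1:                        # early elimination pays off
            break
    return alive

# three simulated systems; system 2 has the highest expected performance
systems = [lambda r, mu=mu: r.gauss(mu, 1.0) for mu in (0.0, 0.1, 3.0)]
survivors = sequential_select(systems)
```

Clearly inferior systems are discarded after only a few batches, so most of the sampling budget goes to the contenders, which is the point of the sequential design.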


Journal ArticleDOI
TL;DR: The methodology proposed here allows the estimation of a mean response to a dynamic treatment regime under the assumption of sequential randomization.
Abstract: A dynamic treatment regime is a list of rules for how the level of treatment will be tailored through time to an individual's changing severity. In general, individuals who receive the highest level of treatment are the individuals with the greatest severity and need for treatment. Thus, there is planned selection of the treatment dose. In addition to the planned selection mandated by the treatment rules, staff judgment results in unplanned selection of the treatment level. Given observational longitudinal data or data in which there is unplanned selection of the treatment level, the methodology proposed here allows the estimation of a mean response to a dynamic treatment regime under the assumption of sequential randomization.


Journal ArticleDOI
TL;DR: Viability selection that was measured over short periods (days) was typically stronger than selection measured over longer periods (months and years), but the strength of sexual selection did not vary with duration of selection episodes; as a result, sexual selection was stronger than viability selection over longer time scales, but not over short time scales.
Abstract: Directional selection is a major force driving adaptation and evolutionary change. However, the distribution, strength, and tempo of phenotypic selection acting on quantitative traits in natural populations remain unclear across different study systems. We reviewed the literature (1984–1997) that reported the strength of directional selection as indexed by standardized linear selection gradients (β). We asked how strong are viability and sexual selection, and whether strength of selection is correlated with the time scale over which it was measured. Estimates of the magnitude of directional selection (|β|) were exponentially distributed, with few estimates greater than 0.50 and most estimates less than 0.15. Sexual selection (measured by mating success) appeared stronger than viability selection (measured by survival). Viability selection that was measured over short periods (days) was typically stronger than selection measured over longer periods (months and years), but the strength of sexual selection did not vary with duration of selection episodes; as a result, sexual selection was stronger than viability selection over longer time scales (months and years), but not over short time scales (days).

Journal ArticleDOI
TL;DR: A general method for restricting the selection of antibiotic-resistant mutants based on the use of antibiotic concentrations that require cells to obtain 2 concurrent resistance mutations for growth is proposed.
Abstract: Studies with fluoroquinolones have led to a general method for restricting the selection of antibiotic-resistant mutants. The strategy is based on the use of antibiotic concentrations that require cells to obtain 2 concurrent resistance mutations for growth. That concentration has been called the mutant prevention concentration (MPC) because no resistant colony is recovered even when >10^10 cells are plated. Resistant mutants are selected exclusively within a concentration range (mutant selection window) that extends from the point where growth inhibition begins, approximated by the minimal inhibitory concentration, up to the MPC. The dimensions of the mutant selection window can be reduced in a variety of ways, including adjustment of antibiotic structure and dosage regimens. The window can be closed to prevent mutant selection through combination therapy with ≥2 antimicrobial agents if their normalized pharmacokinetic profiles superimpose at concentrations that inhibit growth. Application of these principles could drastically restrict the selection of drug-resistant pathogens.

Patent
09 Jul 2001
TL;DR: In this paper, a method and system for dynamically selecting a communication channel between an access point (AP) and a plurality of stations (STAs) in an IEEE 802.11 wireless local area network (WLAN) is presented.
Abstract: Disclosed is a method and system for dynamically selecting a communication channel between an access point (AP) and a plurality of stations (STAs) in an IEEE 802.11 wireless local area network (WLAN). The method includes the steps of: determining whether a new channel between the AP and STAs within a particular basic service set (BSS) is needed; requesting a channel signal quality measure to some of the plurality of stations by the AP; reporting a channel signal quality report back to the AP based on a received signal strength indication (RSSI) and a packet error rate (PER) of all channels detected by the stations within the BSS; selecting a new channel based on the channel quality report for use in communication between the AP and the plurality of stations.
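A minimal sketch of the channel-scoring step (the way RSSI and PER are combined into a single score, and the weight on PER, are illustrative assumptions; the patent does not specify this formula):

```python
def pick_channel(reports):
    """Average a per-station quality score per channel (RSSI in dBm minus a
    packet-error-rate penalty) and return the best-scoring channel."""
    scores = {}
    for channel, rssi_dbm, per in reports:        # per is a fraction in [0, 1]
        scores.setdefault(channel, []).append(rssi_dbm - 100.0 * per)
    return max(scores, key=lambda ch: sum(scores[ch]) / len(scores[ch]))

# hypothetical reports gathered by the AP from stations in the BSS
reports = [
    (1, -40.0, 0.30),    # strong signal but lossy
    (6, -55.0, 0.02),
    (11, -70.0, 0.01),
]
new_channel = pick_channel(reports)
```

Note that the strongest channel (1) loses to channel 6 once its 30% packet error rate is penalized, which mirrors the patent's use of both RSSI and PER rather than signal strength alone.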

Journal ArticleDOI
TL;DR: In this article, the authors employ judicial decisionmaking in the U.S. Courts of Appeals as a window through which to reexamine the politics of selection to the lower courts.
Abstract: The importance of lower federal courts in the policymaking process has stimulated extensive research programs focused on the process of selecting the judges of these courts and the factors influencing their decisions. The present study employs judicial decisionmaking in the U.S. Courts of Appeals as a window through which to reexamine the politics of selection to the lower courts. It differs from previous studies of selection in three ways. First, it takes advantage of recent innovations in measurement to go beyond reliance on political party as a measure of the preferences of actors in the selection process. Second, employing these new measures it examines the relative effects of the operation of policy and partisan agendas in the selection process. Third, a more complex model of selection is assessed than in most previous studies-one that expressly examines the role of senators and senatorial preferences in the selection process. The results clearly suggest that the politics of selection differ dramatic...

Book
01 Jan 2001
TL;DR: This introductory chapter is intended to provide broad guidelines to help breeding programs: assess whether physiological criteria should be included in a breeding strategy; evaluate specific physiological selection traits and determine their usefulness in breeding.
Abstract: How can disciplinary research in physiology complement wheat breeding? This introductory chapter is intended to provide broad guidelines to help breeding programs: 1) assess whether physiological criteria should be included in a breeding strategy; 2) evaluate specific physiological selection traits and determine their usefulness in breeding. The other chapters in this book provide more explicit information on how physiological approaches can be used in breeding work for a variety of environmental conditions. Physiological criteria are commonly though not explicitly used in breeding programs. A good example is selection for reduced height, which improves lodging resistance, partitioning of total biomass to grain yield, and responsiveness to management. Another is differential sensitivity to photoperiod and vernalizing cold, which permits adaptation of varieties to a wide range of latitudes, as well as to winter- and spring-sown habitats. Despite a lack of detailed understanding of how photoperiod and vernalization sensitivity interact with each other and the environment, the relatively simple inheritance of photoperiod (Ppd) and vernalization (Vrn) sensitivity genes and their obvious phenotypic expression (i.e. earliness versus lateness) has permitted them to be modified in many breeding programs. The same is true for the height reduction (Rht) gene. In the future an increased understanding of the genetic basis of these traits may enable breeding programs to exploit them further. Selection for reduced height and improved adaptation to environment has had a profound impact on modern plant breeding, and the improvement in yield potential of spring wheat since the Green Revolution has been shown to be associated with a number of other physiological factors (Reynolds et al., 1999).
Exceptions would include: 1) the stay-green character, which has been selected for in relation to improved disease resistance and is associated with high chlorophyll content and photosynthetic rate in Veery wheats, for example Seri-82 (Fischer et al., 1998), and 2) more erect leaf angle, a common trait in many high yielding bread and durum wheat plant types that was introgressed into the CIMMYT germplasm pool in the early 1970s (Fischer, 1996). A recent survey of plant breeders and physiologists addressed the question of how physiological approaches in plant breeding could have greater impact (Jackson et al., 1996). According to the survey, while the impacts of physiological research on breeding programs have been limited in the past, future impacts …

Journal ArticleDOI
TL;DR: Two new suboptimal search strategies suitable for feature selection in very high-dimensional remote sensing images (e.g., those acquired by hyperspectral sensors) are proposed; they allow interesting tradeoffs between the quality of selected feature subsets and computational cost.
Abstract: A new suboptimal search strategy suitable for feature selection in very high-dimensional remote sensing images (e.g., those acquired by hyperspectral sensors) is proposed. Each solution of the feature selection problem is represented as a binary string that indicates which features are selected and which are disregarded. In turn, each binary string corresponds to a point of a multidimensional binary space. Given a criterion function to evaluate the effectiveness of a selected solution, the proposed strategy is based on the search for constrained local extremes of such a function in the above-defined binary space. In particular, two different algorithms are presented that explore the space of solutions in different ways. These algorithms are compared with the classical sequential forward selection and sequential forward floating selection suboptimal techniques, using hyperspectral remote sensing images (acquired by the airborne visible/infrared imaging spectrometer [AVIRIS] sensor) as a data set. Experimental results point out the effectiveness of both algorithms, which can be regarded as valid alternatives to classical methods, as they allow interesting tradeoffs between the qualities of selected feature subsets and computational cost.
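The search idea reduces to hill climbing over bit strings. A minimal sketch (single-bit-flip neighbourhoods and a toy criterion that rewards two informative features while penalizing subset size; the paper's criterion function and constrained-search details are more elaborate):

```python
def steepest_flip_search(n_features, criterion, start=None):
    """Local search in the binary feature-selection space: repeatedly accept
    any single-bit flip that improves the criterion, until none does."""
    current = list(start) if start is not None else [0] * n_features
    best_val = criterion(current)
    improved = True
    while improved:
        improved = False
        for i in range(n_features):
            neighbour = current.copy()
            neighbour[i] ^= 1                      # flip one selection bit
            val = criterion(neighbour)
            if val > best_val:
                current, best_val, improved = neighbour, val, True
    return current, best_val

# toy criterion: features 0 and 2 are informative, every feature costs 1
def crit(bits):
    return 2 * bits[0] + 2 * bits[2] - sum(bits)

selected, value = steepest_flip_search(5, crit)
```

Each candidate solution is exactly the binary string the abstract describes; the search stops at a local extreme of the criterion in that space.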

Journal ArticleDOI
TL;DR: In this paper, a two-stage and sequential selection procedure is proposed to reduce either opportunity cost loss or the probability of incorrect selection and allow for different replication costs for each system.
Abstract: Standard "indifference-zone" procedures that allocate computer resources to infer the best of a finite set of simulated systems are designed with a statistically conservative, least-favorable-configuration assumption; they consider the probability of correct selection (but not the opportunity cost) and assume that the cost of simulating each system is the same. Recent Bayesian work considers opportunity cost and shows that an average-case analysis may be less conservative, but assumes a known output variance, an assumption that typically is violated in simulation. This paper presents new two-stage and sequential selection procedures that integrate attractive features of both lines of research. They are derived assuming that the simulation output is normally distributed with unknown mean and variance that may differ for each system. We permit the reduction of either opportunity cost loss or the probability of incorrect selection and allow for different replication costs for each system. The generality of our formulation comes at the expense of difficulty in obtaining exact closed-form solutions. We therefore derive a bound for the expected loss associated with potentially incorrect selections, then asymptotically minimize that bound. Theoretical and empirical results indicate that our approach compares favorably with indifference-zone procedures.

Journal ArticleDOI
TL;DR: A conceptual and mathematical framework that links attributes of the breeding system to group composition and genetic structure is presented here, and recent empirical studies are reviewed in the context of this framework.
Abstract: Molecular genetic studies of group kin composition and local genetic structure in social organisms are becoming increasingly common. A conceptual and mathematical framework that links attributes of the breeding system to group composition and genetic structure is presented here, and recent empirical studies are reviewed in the context of this framework. Breeding system properties, including the number of breeders in a social group, their genetic relatedness, and skew in their parentage, determine group composition and the distribution of genetic variation within and between social units. This group genetic structure in turn influences the opportunities for conflict and cooperation to evolve within groups and for selection to occur among groups or clusters of groups. Thus, molecular studies of social groups provide the starting point for analyses of the selective forces involved in social evolution, as well as for analyses of other fundamental evolutionary problems related to sex allocation, reproductive skew, life history evolution, and the nature of selection in hierarchically structured populations. The framework presented here provides a standard system for interpreting and integrating genetic and natural history data from social organisms for application to a broad range of evolutionary questions.

Journal ArticleDOI
TL;DR: The pattern that has emerged suggests that several types of selection are plausible for the maintenance of HLA polymorphism.
Abstract: The nature of polymorphism and molecular sequence variation in the genes of the human major histocompatibility complex (MHC) provides strong support for the idea that these genes are under selection. With the understanding that selection shapes MHC variation new questions have become the focus of study. What is the mode of selection that accounts for MHC polymorphism? Is variation maintained by pathogen pressure or by reproductive mechanisms? Discerning between these requires drawing on information from studies on association between HLA genes and infectious diseases, reproductive success and mating preferences relative to HLA genotypes, and theoretical studies that compare the outcomes of different selection regimes. The pattern that has emerged suggests that several types of selection are plausible for the maintenance of HLA polymorphism.

Journal ArticleDOI
TL;DR: In this paper, a matching estimator is used to calculate the training effect for different subgroups of the sample and demonstrate how bounding the matching estimators can be used to evaluate the intrinsic uncertainty of estimated training effects due to selection on unobserved individual characteristics.
Abstract: In this paper we evaluate a Norwegian vocational training rehabilitation program by comparing employment outcomes of trainees and nonparticipants using nonexperimental data. A matching estimator is used to calculate the training effect for different subgroups of the sample. We demonstrate how bounding the matching estimator can be used to evaluate the intrinsic uncertainty of estimated training effects due to selection on unobserved individual characteristics. After adjustment for observed selection into training programs we find that the overall training effect is around six percentage points. This is mainly due to a high and significant effect for individuals with a low probability of program participation. After calculating upper and lower bounds on the test statistics used to test the hypothesis of no training effect we find that the overall effect is sensitive to unobserved selection. However, the result that the training effect is positive for individuals who are less likely to participate in a training program is not sensitive to selection bias. These individuals also have the lowest employment probabilities, which indicates potential harmful cream skimming in the Norwegian vocational rehabilitation sector. Copyright 2001 by Blackwell Publishing Ltd.
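The core of a matching estimator like the one used here can be sketched in a few lines, under strong simplifying assumptions (matching on a single scalar score with replacement, and treated units given as (score, outcome) pairs; the paper's bounding of the estimator against unobserved selection is not shown):

```python
def att_nearest_neighbour(treated, controls):
    """Average treatment effect on the treated: match each treated unit
    (score, outcome) to the control with the closest score, with
    replacement, and average the outcome differences."""
    diffs = []
    for score, outcome in treated:
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

# hypothetical (participation score, employment outcome) pairs
treated = [(0.8, 10.0), (0.6, 8.0)]
controls = [(0.79, 7.0), (0.58, 6.0), (0.20, 1.0)]
effect = att_nearest_neighbour(treated, controls)
```

Matching only compares treated units to observably similar controls; the bounding analysis the paper performs then asks how much an unobserved difference between matched units could move this estimate.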

Journal ArticleDOI
TL;DR: A general account of selection is set out to see how well it accommodates three very different sorts of selection processes, which can generate complexity and novelty primarily because they are so wasteful and inefficient.
Abstract: Authors frequently refer to gene-based selection in biological evolution, the reaction of the immune system to antigens, and operant learning as exemplifying selection processes in the same sense of this term. However, as obvious as this claim may seem on the surface, setting out an account of "selection" that is general enough to incorporate all three of these processes without becoming so general as to be vacuous is far from easy. In this target article, we set out such a general account of selection to see how well it accommodates these very different sorts of selection. The three fundamental elements of this account are replication, variation, and environmental interaction. For selection to occur, these three processes must be related in a very specific way. In particular, replication must alternate with environmental interaction so that any changes that occur in replication are passed on differentially because of environmental interaction. One of the main differences among the three sorts of selection that we investigate concerns the role of organisms. In traditional biological evolution, organisms play a central role with respect to environmental interaction. Although environmental interaction can occur at other levels of the organizational hierarchy, organisms are the primary focus of environmental interaction. In the functioning of the immune system, organisms function as containers. The interactions that result in selection of antibodies during a lifetime are between entities (antibodies and antigens) contained within the organism. Resulting changes in the immune system of one organism are not passed on to later organisms. Nor are changes in operant behavior resulting from behavioral selection passed on to later organisms. But operant behavior is not contained in the organism because most of the interactions that lead to differential replication include parts of the world outside the organism. Changes in the organism's nervous system are the effects of those interactions. The role of genes also varies in these three systems. Biological evolution is gene-based (i.e., genes are the primary replicators). Genes play very different roles in operant behavior and the immune system. However, in all three systems, iteration is central. All three selection processes are also incredibly wasteful and inefficient. They can generate complexity and novelty primarily because they are so wasteful and inefficient.

Patent
10 Jul 2001
TL;DR: In this paper, the authors present an automated travel planning apparatus consisting of three separate databases, including a map database for storing bit-mapped images covering numerous geographic regions, a routing database, for storing node, link, and shape data for roads geographically located within the geographic regions and for storing place data indicating the geographic location of places such as towns and cities, and a place of interest database containing the geographic locations of numerous places of interest.
Abstract: An automated travel planning apparatus includes three separate databases, including a map database for storing bit-mapped images covering numerous geographic regions, a routing database for storing node, link, and shape data for roads geographically located within the geographic regions and for storing place data indicating the geographic location of places such as towns and cities, and a places of interest database containing the geographic locations of numerous places of interest. A processor within the automated travel planning apparatus may be divided into several functional components, including a map selection component, a routing component, and a place selection component. In response to user input at the user interface, the map selection component chooses a bit-mapped image from the map database for display on the display monitor. After a user selects, via the user interface, a departure point and a destination point, the routing component employs the routing database to generate and display a route between the selected departure and destination points. If the user requests a list of places near the displayed route, the place selection component employs the places of interest database to generate and display a list of places of interest which are within a predetermined distance of the generated route.
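The place-selection step reduces to a distance filter against the generated route. A minimal sketch under simplifying assumptions (planar Euclidean coordinates and distance to route vertices rather than to road segments, which a real implementation would use):

```python
def places_near_route(route, places, max_dist):
    """Return the names of places of interest lying within max_dist of any
    vertex of the route polyline."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    limit = max_dist ** 2
    return [name for name, point in places
            if any(dist2(point, vertex) <= limit for vertex in route)]

# hypothetical route vertices and places-of-interest database entries
route = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
places = [("museum", (1.0, 0.5)), ("beach", (5.0, 5.0))]
nearby = places_near_route(route, places, max_dist=1.0)
```

Comparing squared distances against a squared threshold avoids a square root per candidate, a common micro-optimization in spatial filters like this.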

Journal ArticleDOI
TL;DR: The fields of software packages considered and chosen, the weights assigned to different selection criteria, the size and structure of the team responsible for the decision, the methods employed and the effort expended are addressed.
Abstract: In this paper we detail the results from an empirical study concerning differences in characteristics of the enterprise resource planning (ERP) system selection process between small or medium and large sized organizations. In particular we address the fields of software packages considered and chosen, the weights assigned to different selection criteria, the size and structure of the team responsible for the decision, the methods employed and the effort expended.

Journal ArticleDOI
TL;DR: Bayesian model averaging is proposed as a formal way of taking account of model uncertainty in case-control studies, and this yields an easily interpreted summary, the posterior probability that a variable is a risk factor, and is indicated to be reasonably well calibrated in the situations simulated.
Abstract: Covariate and confounder selection in case-control studies is often carried out using a statistical variable selection method, such as a two-step method or a stepwise method in logistic regression. Inference is then carried out conditionally on the selected model, but this ignores the model uncertainty implicit in the variable selection process, and so may underestimate uncertainty about relative risks. We report on a simulation study designed to be similar to actual case-control studies. This shows that p-values computed after variable selection can greatly overstate the strength of conclusions. For example, for our simulated case-control studies with 1000 subjects, of variables declared to be 'significant' with p-values between 0.01 and 0.05, only 49 per cent actually were risk factors when stepwise variable selection was used. We propose Bayesian model averaging as a formal way of taking account of model uncertainty in case-control studies. This yields an easily interpreted summary, the posterior probability that a variable is a risk factor, and our simulation study indicates this to be reasonably well calibrated in the situations simulated. The methods are applied and compared in the context of a case-control study of cervical cancer.
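The posterior probability that a variable is a risk factor can be approximated by weighting each candidate model by exp(-BIC/2) and summing the weights of the models that include that variable. A minimal sketch of this idea, using ordinary least squares as a simplified stand-in for the paper's logistic regression setting (the simulated data, variable names, and BIC-based weights below are illustrative, not from the study):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, names = 200, ["x1", "x2", "x3"]
X = rng.normal(size=(n, 3))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # x3 is pure noise

def bic(cols):
    # OLS fit of y on the chosen columns (plus intercept), scored by BIC
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + A.shape[1] * np.log(n)

# Enumerate all 2^3 candidate models and weight them by exp(-BIC/2)
models = [m for r in range(4) for m in itertools.combinations(range(3), r)]
b = np.array([bic(m) for m in models])
w = np.exp(-(b - b.min()) / 2)
w /= w.sum()  # approximate posterior model probabilities

# Posterior probability that each variable belongs in the model
post = {names[j]: sum(wi for wi, m in zip(w, models) if j in m)
        for j in range(3)}
```

Here the true risk factors x1 and x2 receive posterior probabilities near 1, while the noise variable x3 is down-weighted, which is the easily interpreted summary the abstract describes.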

Journal ArticleDOI
01 Nov 2001
TL;DR: A new algorithm is developed to select optimal short or long DNA oligos from genes or open reading frames and predict their hybridization behavior and should provide a good approximation to the true optimum set.
Abstract: Motivation: High density DNA oligo microarrays are widely used in biomedical research. Selection of optimal DNA oligos that are deposited on the microarrays is critical. Based on sequence information and hybridization free energy, we developed a new algorithm to select optimal short (20–25 bases) or long (50 or 70 bases) oligos from genes or open reading frames (ORFs) and predict their hybridization behavior. Having optimized probes for each gene is valuable for two reasons. By minimizing background hybridization they provide more accurate determinations of true expression levels. Having optimum probes minimizes the number of probes needed per gene, thereby decreasing the cost of each microarray, raising the number of genes on each chip and increasing its usage. Results: In this paper we describe algorithms to optimize the selection of specific probes for each gene in an entire genome. The criteria for truly optimum probes are easily stated, but they are not currently computable at all levels. We have developed a heuristic approach that is efficiently computable at all levels and should provide a good approximation to the true optimum set. We have run the program on the complete genomes for several model organisms and deposited the results in a database that is available on-line (http://ural.wustl.edu/∼lif/probe.pl). Availability: The program is available upon request.
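The paper's criteria combine sequence specificity and hybridization free energy, which are not reproduced here. As a much-simplified illustration of window-based probe selection, the sketch below scores each candidate window of a gene only by its Wallace-rule melting temperature and keeps the window closest to a target (the `target_tm` value and the sequence used are made up for illustration, not from the paper):

```python
def wallace_tm(oligo):
    # Wallace rule: Tm ~ 2*(A+T) + 4*(G+C); a rough estimate for short oligos
    at = oligo.count("A") + oligo.count("T")
    gc = oligo.count("G") + oligo.count("C")
    return 2 * at + 4 * gc

def best_oligo(gene, length=20, target_tm=60.0):
    # Enumerate every window of the given length along the gene and
    # pick the one whose predicted Tm is closest to the target
    candidates = [gene[i:i + length] for i in range(len(gene) - length + 1)]
    return min(candidates, key=lambda o: abs(wallace_tm(o) - target_tm))
```

A real probe-selection pipeline would add uniqueness checks against the rest of the genome and a free-energy model of cross-hybridization on top of a score like this.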

Journal ArticleDOI
TL;DR: Dynamically changing the crossover and mutation probabilities, reducing the design space and applying global elitism during the evolution process are shown to yield great improvement for all GA types.
Abstract: This paper presents an exhaustive study of the Simple Genetic Algorithm (SGA), Steady State Genetic Algorithm (SSGA) and Replacement Genetic Algorithm (RGA). The performance of each method is analyzed in relation to several operator types for crossover, selection and mutation, as well as in relation to the probabilities of crossover and mutation, with and without dynamic change of their values during the optimization process. In addition, the space reduction of the design variables and global elitism are analyzed. All GAs are effective when used with their best operator types and parameter values. For each GA, the best set of operator types and parameters is found. The dynamic change of crossover and mutation probabilities, the space reduction and the global elitism during the evolution process show that great improvement can be achieved for all GA types. These GAs are applied to TEAM benchmark problem 22.
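For readers unfamiliar with the components being compared, a minimal generational GA with roulette-wheel selection, one-point crossover, bit-flip mutation and global elitism can be sketched on the OneMax toy problem (all parameter values below are illustrative, not the tuned values from the paper, which targets TEAM problem 22):

```python
import random

random.seed(1)
N_BITS, POP, GENS = 30, 40, 60
P_CROSS, P_MUT = 0.8, 1.0 / N_BITS

def fitness(ind):
    # OneMax: fitness is simply the number of 1 bits
    return sum(ind)

def roulette(pop):
    # Fitness-proportionate (roulette-wheel) selection
    total = sum(fitness(i) for i in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for ind in pop:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):
    # One-point crossover, applied with probability P_CROSS
    if random.random() < P_CROSS:
        cut = random.randrange(1, N_BITS)
        return a[:cut] + b[cut:]
    return a[:]

def mutate(ind):
    # Independent bit-flip mutation with per-bit probability P_MUT
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    elite = max(pop, key=fitness)  # global elitism: carry the best forward
    pop = [elite] + [mutate(crossover(roulette(pop), roulette(pop)))
                     for _ in range(POP - 1)]
best = max(pop, key=fitness)
```

Swapping the replacement scheme in the main loop is what distinguishes the SGA, SSGA and RGA variants the paper compares; the operators and probabilities above are the knobs its study varies.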

Journal ArticleDOI
TL;DR: Several generalized elitist procedures for the design of composite laminates are explored and it is shown that GES procedures are superior to an SI procedure for two types of problems.

Patent
24 Aug 2001
TL;DR: In this article, the authors present a gaming device that enables players to accumulate awards by activating an award distributor having a plurality of award symbols and at least one selection group activator symbol.
Abstract: The present invention provides a gaming device that enables players to accumulate awards by activating an award distributor having a plurality of award symbols and at least one selection group activator symbol. The gaming device provides the player with a plurality of activations where an award is associated with each award symbol indicated in each activation. When a selection group activator symbol is indicated, the gaming device displays at least one selection set having a plurality of selections associated with selection awards. The gaming device enables a player to select one selection and provides the associated selection award to the player. The number of available selections in the selection set decreases by one after the player picks a selection. If the selection group activator symbol is subsequently indicated, the player picks from the remaining available selections in the selection set.